Sidekick - Experiments and Experiences, Mostly Technology
Tanmay (http://www.blogger.com/profile/05342457728508357508)

2013-03-11 - Minimal WebSocket Broadcast Server in Python<div dir="ltr" style="text-align: left;" trbidi="on">
There are many websocket server implementations. Any serious attempt at building websockets into an application should look at libraries like <a href="http://libwebsockets.org/">http://libwebsockets.org/</a> or wrappers around them. But a small, lightweight implementation of your own, specific to your application, is good fun. Here's a very simplistic one I made while reading through the rather simple websocket protocol.<br />
<br />
The websocket connection starts as a regular HTTP connection from the client, with certain headers indicating that the client is requesting a websocket connection. The server responds with specific response headers if it accepts the connection. Subsequent messages flow in either direction. Messages are framed with a few bytes of header data that carry the data type, the payload length and a basic XOR mask. Longer messages are broken up into multiple frames.<br />
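The two protocol details above are easy to make concrete. Below is a minimal Python sketch (function names are my own) of what the server side needs: computing the Sec-WebSocket-Accept response header for the handshake, and undoing the XOR mask on a client frame's payload. The GUID is the fixed one defined by RFC 6455.

```python
import base64
import hashlib

# GUID fixed by RFC 6455 for computing the handshake response.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def accept_key(sec_websocket_key):
    """Value of the Sec-WebSocket-Accept response header."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

def unmask(payload, mask):
    """Undo the 4-byte XOR mask applied to every client-to-server payload."""
    return bytes(b ^ mask[i % 4] for i, b in enumerate(payload))

# The example key from RFC 6455 produces the accept value listed in the spec.
print(accept_key("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```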
<br />
Details at:
<br />
<ul>
<li><a href="http://en.wikipedia.org/wiki/WebSocket">http://en.wikipedia.org/wiki/WebSocket</a></li>
<li><a href="http://tools.ietf.org/html/rfc6455">http://tools.ietf.org/html/rfc6455</a></li>
</ul>
This sample websocket implementation just broadcasts messages from any connected client to all connected clients, like a group chat application.<br />
<div>
<br /></div>
<div>
To use this demo:</div>
<div>
<ul style="text-align: left;">
<li>Download pywebsock.py and pywebsock.html</li>
<li>Run "python pywebsock.py". This starts the server on port 4545.</li>
<li>Open pywebsock.html with your browser.</li>
<li>Open pywebsock.html again. To try the message broadcast feature, you need at least one more browser tab or window with the same html file.</li>
<li>Type away to send messages from one window and see them appear in the other windows.</li>
</ul>
<div>
<script src="https://gist.github.com/tanmaykm/5111225.js"></script>
</div>
</div>
<br /></div>
2013-03-05 - Print Julia Type Tree with julia_types.jl<div dir="ltr" style="text-align: left;" trbidi="on">
The power of Julia is its type system and multiple dispatch for methods. Understanding the Julia type hierarchy helps write efficient code with tighter control. Curious to have a look at the complete type tree of base Julia modules, I stumbled across typetree.jl included in the "examples" section of the distribution.<br />
<br />
<a href="https://github.com/tanmaykm/julia_types" target="_blank"><b>Here's another version of the code named julia_types</b></a>, made a bit simpler by using a tree data structure and recursion. Download it or fork it on Github.<br />
<br />
Given a list of modules, it:
<br />
<ul style="text-align: left;">
<li>extracts all symbols defined in the module</li>
<li>if the symbol is a type:</li>
<ul>
<li>adds the type and all of its supertypes recursively into a tree-like data structure</li>
<li>each node of the tree holds the type it represents and a dict of its subtypes</li>
</ul>
<li>pretty-prints the type tree</li>
</ul>
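The same recursive idea can be sketched in Python (a hypothetical analog for illustration, not the julia_types code itself): each class and its superclasses are filed into a nested dict, and the tree is then pretty-printed by recursion.

```python
def add_type(tree, t):
    """File class t under its ancestors in the nested dict, root first."""
    node = tree
    for cls in reversed(t.mro()):   # mro() lists t and its supertypes
        node = node.setdefault(cls.__name__, {})

def print_tree(tree, indent=0):
    """Pretty-print the nested dict as an indented type tree."""
    for name in sorted(tree):
        print("    " * indent + name)
        print_tree(tree[name], indent + 1)

tree = {}
for t in (bool, int, float, str):
    add_type(tree, t)
print_tree(tree)   # prints object at the root, with bool nested under int
```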
<br />
And here's the complete type tree as produced by julia_types.jl<br />
<script src="https://gist.github.com/tanmaykm/5088310.js"></script>
</div>
2013-02-25 - Julia Sets using Julia<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
Recently I came across a new language named '<a href="http://julialang.org/" target="_blank">Julia</a>'. It is a dynamic language meant for technical and scientific computing. It is very fast for a dynamic language, approaching the speed of lower level languages like C. The language and work environment are well documented at the site.<br />
<br />
I am just starting to explore and this is my first program using Julia: generating "<a href="http://en.wikipedia.org/wiki/Julia_set" target="_blank">Julia Set Fractals</a>" using Julia!<br />
<br />
<script src="https://gist.github.com/tanmaykm/5025017.js"></script>
The program is very simple, since Julia has complex numbers built into the language. The program prints the image out in <a href="http://netpbm.sourceforge.net/doc/pgm.html" target="_blank">PGM format</a>, a simple ASCII grayscale format. PGM files can be viewed with OpenOffice or GIMP.<br />
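For the curious, the same computation can be sketched in a few lines of Python (an illustrative rewrite, not the gist's Julia code): iterate z = z*z + c for each pixel and write the escape counts out as an ASCII (P2) PGM.

```python
def julia_row(y, width, height, c, max_iter=255):
    """Escape-iteration counts for one row of the fractal image."""
    row = []
    for x in range(width):
        # Map the pixel to the square [-1.5, 1.5] x [-1.5, 1.5] of the complex plane.
        z = complex(3.0 * x / width - 1.5, 3.0 * y / height - 1.5)
        n = 0
        while abs(z) < 2.0 and n < max_iter:
            z = z * z + c
            n += 1
        row.append(n)
    return row

def write_pgm(path, width, height, c):
    """Write the fractal as an ASCII (P2) PGM file."""
    with open(path, "w") as f:
        f.write("P2\n%d %d\n255\n" % (width, height))   # magic, size, max gray
        for y in range(height):
            f.write(" ".join(str(v) for v in julia_row(y, width, height, c)) + "\n")

write_pgm("julia.pgm", 100, 100, complex(-0.75, 0.11))
```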
<br />
Below are a few results:<br />
<table>
<tbody>
<tr><td><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhScdSlniaomKsd91eMUk53ZzxqqJ73Tbbq7sNJz0WXLXXBVowQzwjDl6UAwADs35JMzRqAb90q49nrixCbW4ZfGZselfGo3jeoFTKCOSfMtDc4Tkmpp28XCTtkDzA0PpcjESrcyoyzygly/s1600/julia2.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhScdSlniaomKsd91eMUk53ZzxqqJ73Tbbq7sNJz0WXLXXBVowQzwjDl6UAwADs35JMzRqAb90q49nrixCbW4ZfGZselfGo3jeoFTKCOSfMtDc4Tkmpp28XCTtkDzA0PpcjESrcyoyzygly/s200/julia2.png" width="200" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="background-color: white; font-family: sans-serif; font-size: 12px; line-height: 18.046875px; text-align: left;">c= -0.75+0.11*i</span></td></tr>
</tbody></table>
</td><td><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLo5TK-S13pDfVKSSN7ea3T20bfouvjSHiiqH989wK96riN6FYyrwZCulRsnGN_FQwnBhisQdr-ZmH7F45YddEnhnFh1jpm9zk340W2Qhl6BJUe4dPCaLl6JUMwoT5dI07fdBWxAylXnjq/s1600/julia3.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLo5TK-S13pDfVKSSN7ea3T20bfouvjSHiiqH989wK96riN6FYyrwZCulRsnGN_FQwnBhisQdr-ZmH7F45YddEnhnFh1jpm9zk340W2Qhl6BJUe4dPCaLl6JUMwoT5dI07fdBWxAylXnjq/s200/julia3.png" width="200" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">c=-0.835-0.2321i</td></tr>
</tbody></table>
</td></tr>
<tr><td><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZEqg-oY5v4vC2xOiFm-4ojacbcOhX-wLNQq8ma71Ij33ewcUewSHBO1TALwHsdiwb54Sth0-S3E_HMFfoWpC4jn3B9KQq7l4q5HVw54Tp3vVWbOF2SHONSUc-o4U1bR2FpF_ILoI3cMhd/s1600/julia1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZEqg-oY5v4vC2xOiFm-4ojacbcOhX-wLNQq8ma71Ij33ewcUewSHBO1TALwHsdiwb54Sth0-S3E_HMFfoWpC4jn3B9KQq7l4q5HVw54Tp3vVWbOF2SHONSUc-o4U1bR2FpF_ILoI3cMhd/s200/julia1.png" width="200" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">c=-0.74543+0.11301*i</td></tr>
</tbody></table>
</td><td><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWhFwYBYdbBfTA-pHdhTT0NSueqLnmrPb4le2vx7mG4WCTnHp8kHR7kOxaE_-h9LW8VTA0v-1mEuUC32WkwFVr2OOtPaLoIvQ362UDt3sYHksxGe7ty8-OmGbkw18vGzAl-SgIdj5nsTbN/s1600/julia4.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWhFwYBYdbBfTA-pHdhTT0NSueqLnmrPb4le2vx7mG4WCTnHp8kHR7kOxaE_-h9LW8VTA0v-1mEuUC32WkwFVr2OOtPaLoIvQ362UDt3sYHksxGe7ty8-OmGbkw18vGzAl-SgIdj5nsTbN/s200/julia4.png" width="200" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">c=0.285+0.01i</td></tr>
</tbody></table>
</td></tr>
</tbody></table>
</div>
<span id="goog_1055098682"></span><span id="goog_1055098683"></span><br /></div>
2013-02-15 - Modified Clone of Django Chronograph<div dir="ltr" style="text-align: left;" trbidi="on">
I faced issues with django-chronograph's use of date data types. It seemed to have bugs which led to cron jobs being scheduled incorrectly because of time zone issues. It was also not entirely compatible with Python 2.6.<br />
<br />
Removing the date compatibility module did not take much away. If the OS, Django and database all use time zones and are set up properly, the rest of the module would work just fine without needing any complicated date transformation layer.<br />
<br />
So I <a href="https://bitbucket.org/tanmaykm/tcron" target="_blank">created a clone</a> of the <a href="https://bitbucket.org/wnielson/django-chronograph/" target="_blank">Django Chronograph module</a> and removed the date compatibility module.<br />
<br />
It is not in the form of a Python installable module yet. But it can be plonked into the application to get it working.<br />
<div>
<br /></div>
<div>
For anyone else who faced similar problems, here's the bitbucket project of my clone: <a href="https://bitbucket.org/tanmaykm/tcron">https://bitbucket.org/tanmaykm/tcron</a></div>
<div>
<br /></div>
</div>
2012-12-28 - OpenCV - Fun with remap<div dir="ltr" style="text-align: left;" trbidi="on">
OpenCV has a remap function that lets you map pixels of an image from one position to another. It can be used to create interesting artefacts with images, correct distortions or create different lens effects.<br />
<br />
The remap function (<a href="http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/remap/remap.html" target="_blank">documented here</a>) takes the source and destination images, two arrays that specify where each pixel of the destination comes from, and a set of flags that control interpolation behavior.<br />
<br />
Interpolation, as you must have realized, is a critical factor here. Interpolation is required since the function that we use to map the source to destination is likely to create gaps and overlaps in the destination. So we must have some way to fill the gaps and derive the final value of a pixel based on multiple pixels of the source. OpenCV provides a few interpolation functions that can be chosen from using the flags.<br />
<br />
The other interesting thing in this API is that the pixel maps are not maps from source to destination. Rather, for each pixel in the destination we say which source position it comes from. The source position is a floating-point coordinate, so it need not lie directly on a pixel; it can fall on a point in between pixels. OpenCV then calculates the destination pixel value from this position and the surrounding source pixel values using the chosen interpolation function.<br />
<br />
Below is a simple program using remap that maps a rectangular image into a circle. To calculate the mapping we do the following simple geometry:<br />
<ul style="text-align: left;">
<li>shift the origin (0,0 coordinate) to the center of the image</li>
<li>convert the Cartesian x-y coordinates to polar r-theta form</li>
<li>determine the length of the line with the same theta that touches the border of the rectangle</li>
<li>retain the theta, but scale the r value based on how much the line has to shrink to fit into the circle.</li>
<li>convert the modified r-theta values back into x-y coordinates</li>
</ul>
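The steps above can be sketched as a per-pixel mapping function; in the real program, values like these would fill the two map arrays handed to cv2.remap. A minimal pure-Python sketch (assuming the circle inscribed in the rectangle):

```python
import math

def circle_map(x, y, width, height):
    """For destination pixel (x, y), return the source position (sx, sy)
    that squeezes the full rectangle into its inscribed circle."""
    cx, cy = width / 2.0, height / 2.0
    dx, dy = x - cx, y - cy                    # shift origin to the image center
    r = math.hypot(dx, dy)
    if r == 0:
        return cx, cy
    theta = math.atan2(dy, dx)                 # Cartesian -> polar
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    # Distance from the center to the rectangle border along direction theta.
    border = min(cx / abs(cos_t) if cos_t else float("inf"),
                 cy / abs(sin_t) if sin_t else float("inf"))
    radius = min(cx, cy)                       # radius of the inscribed circle
    # Keep theta, scale r so the rectangle border lands on the circle.
    sr = r * border / radius
    return cx + sr * cos_t, cy + sr * sin_t    # polar -> Cartesian
```

At the circle's boundary sr equals the border distance, so the rectangle's edge is pulled onto the circle; the corners shrink the most, which is where the pinching in the result images comes from.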
<br />
Here are the results:<br />
<br />
<table align="center">
<tbody>
<tr><td><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzOmr-_P9Uf3K0C9GE9MrVve-56j-QjSgxAEOcWBMKrJ3U-3g31vN_wJ5_MXv0cY-pEWB65sFJezUowB27Nm_a5284c_pUoCOqUPgeh-SAT_HBiFDFgNRF-gZC4ByfswhogJCm_sgQcqFW/s1600/opencv_remap_circle_source.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzOmr-_P9Uf3K0C9GE9MrVve-56j-QjSgxAEOcWBMKrJ3U-3g31vN_wJ5_MXv0cY-pEWB65sFJezUowB27Nm_a5284c_pUoCOqUPgeh-SAT_HBiFDFgNRF-gZC4ByfswhogJCm_sgQcqFW/s200/opencv_remap_circle_source.png" width="200" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Source Image</td></tr>
</tbody></table>
</td><td><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfOpLZZCKObsV6OT9Eitty6aP-mLHbsJWjUgKriERYp-Y6P5I90Ahyw34VNQnaDGTPk6SEi1cyIYKkRd0TGXuLHYcYY4-TW2vc8IBP8rabx8eWpn5UntaBoaIIFH-NzBr0keACs91DeYkj/s1600/opencv_remap_circle_result.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfOpLZZCKObsV6OT9Eitty6aP-mLHbsJWjUgKriERYp-Y6P5I90Ahyw34VNQnaDGTPk6SEi1cyIYKkRd0TGXuLHYcYY4-TW2vc8IBP8rabx8eWpn5UntaBoaIIFH-NzBr0keACs91DeYkj/s200/opencv_remap_circle_result.png" width="199" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Result Image</td></tr>
</tbody></table>
</td></tr>
</tbody></table>
<span style="font-size: x-small;">(Image credits for the source image goes to the owners mentioned on the image. I just picked it up for illustration purposes.)</span><br />
<br />
If you expected a fish-eye effect, this is not it. We just mapped the rectangle to a circle, so you see a pinching effect at the corners because the diagonals get compressed the most. For a fish-eye effect, we would map the rectangle onto a sphere and then project it to a 2D surface. I'll probably take that up next time. If you play around with different mapping functions, you can create several interesting effects of your own.<br />
<br />
Here is the source code gist for the circle map:<br />
<script src="https://gist.github.com/4395121.js"></script>
<br />
<br />
<br /></div>
2012-12-15 - OpenCV - Separating Items on Store Shelves<div dir="ltr" style="text-align: left;" trbidi="on">
In my <a href="http://sidekick.windforwings.com/2012/12/implementing-hough-transform.html" target="_blank">last post</a> I went through the Hough transform and how it can detect straight lines in an image. In this post I will try to use it to build a program that separates individual items on a store shelf. The idea is simple: store racks are usually horizontal and the item boxes placed on the shelves are usually rectangular, so if we can detect straight lines in an image, we can pick out these edges. If we have lines for all four edges of an object, we can deduce its bounding rectangle.<br />
<br />
It is of course, not as trivial as the example we tried in our previous post. There are obvious practical issues that need to be tackled while dealing with real world images:<br />
<br />
<ol style="text-align: left;">
<li>Image quality of the provided images is typically not optimal. Brightness and contrast need to be adjusted first. </li>
<li>The edges in the images are usually not complete. They are often broken by objects placed on them. </li>
<li>Some package edges are not perfectly straight. </li>
<li>Sometimes the pictures are taken at an angle. </li>
<li>Text and pictures on the item packaging add clutter and confuse our algorithms. </li>
</ol>
<br />
<br />
There is quite a bit of pre-processing that is required to be done on the images before they can be used. A few basic steps are:<br />
<br />
<ol style="text-align: left;">
<li>Smoothing to reduce some noise. It may be better to use bilateral smoothing to preserve the edges. </li>
<li>Adjusting brightness and contrast of the image so that the interesting portions of the image are highlighted and distinguishable. </li>
<li>Separating detection of horizontal and vertical edges. Since the horizontal and vertical edges have different degrees of clarity in the image, it helps to detect them separately with different degrees of thresholds (even if they are percentages). To separate the horizontal and vertical edges, and enhance the longer edges with comparison to the small ones, I have used a directional erosion kernel. It also serves to selectively remove vertical edges when we want to detect horizontal edges and vice versa. </li>
</ol>
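The directional erosion idea in the last step can be illustrated with a small pure-Python sketch (not the program's OpenCV code): eroding a binary edge map with a wide 1 x k horizontal kernel keeps only horizontal runs at least k pixels long, wiping out short fragments and vertical edges.

```python
def erode_horizontal(img, k):
    """Erode a binary image with a 1 x k horizontal kernel: a pixel survives
    only if all k pixels centered on it in its row are set, so edge runs
    shorter than k are wiped out."""
    h, w = len(img), len(img[0])
    half = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if all(0 <= x + d < w and img[y][x + d] for d in range(-half, k - half)):
                out[y][x] = 1
    return out

# A long horizontal edge survives (shrunk at its ends); a short blip does not.
img = [[0, 1, 1, 1, 1, 1, 0],
       [0, 0, 0, 1, 0, 0, 0]]
print(erode_horizontal(img, 3))
# [[0, 0, 1, 1, 1, 0, 0], [0, 0, 0, 0, 0, 0, 0]]
```

Swapping the kernel to k x 1 gives the vertical-edge version of the same filter.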
<br />
<br />
In spite of the above, it is still not easy to detect the boundaries accurately. By accuracy I actually mean robustness: having the same set of algorithms and tuning parameters work over a large variety of images. Lowering the threshold enough to detect all edges in one image sometimes makes the detector over-sensitive, picking up spurious lines in other images. Moreover, it will also detect the upper and lower edges of a shelf as two different lines. Taking the spacing between the lines as known (approximately how many shelves and items to expect), I used a simple technique of trimming outliers to filter these out.<br />
<br />
Here are the results of using my application on a few real world images. The green and blue lines are the separators. The sliders are the only parameters that I have kept adjustable. The approach is robust enough to have all the other parameters fixed.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEisB3zcCCalDfYqBQZnS7RjcjGdZOulPXx_wr1LeRlp0Jj7y4Gv7_Qilw8A7tLRoPlcU0HcTKbuvVJTnt30v2ZPaVdZiiBz8oyZd_k3RRcoDgyCAo8vaKTpPtCXafOuzv0kT05eqbxFwLpG/s1600/store_item_detect_1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEisB3zcCCalDfYqBQZnS7RjcjGdZOulPXx_wr1LeRlp0Jj7y4Gv7_Qilw8A7tLRoPlcU0HcTKbuvVJTnt30v2ZPaVdZiiBz8oyZd_k3RRcoDgyCAo8vaKTpPtCXafOuzv0kT05eqbxFwLpG/s320/store_item_detect_1.png" width="315" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-wKGZQPdEAQa3KWlzgWgwE8EID4NsxUk6qu7OnxowRRZyXJ2XpW1XdZeiYKwLXg66txz3nvqJXrhh2jK6yoq-kGdNyevFbg1SoiaHiLmQm63fILTbRrLkEugv2h3sVRcHREv648cR_zKW/s1600/store_item_detect_2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="298" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-wKGZQPdEAQa3KWlzgWgwE8EID4NsxUk6qu7OnxowRRZyXJ2XpW1XdZeiYKwLXg66txz3nvqJXrhh2jK6yoq-kGdNyevFbg1SoiaHiLmQm63fILTbRrLkEugv2h3sVRcHREv648cR_zKW/s320/store_item_detect_2.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwRxZoreYn7DfUyi8P-7pufMVzRtOlx_-IvDylayAU5JlobMIEd9qfdbStQ1dgLuwYY-y9Ew199OE07hNapzcSA50-Tb2vRzB10_QP1MpDUf1u8c6gLPLBSP72oSG9qQnKFz5ppLDlJezN/s1600/store_item_detect_3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwRxZoreYn7DfUyi8P-7pufMVzRtOlx_-IvDylayAU5JlobMIEd9qfdbStQ1dgLuwYY-y9Ew199OE07hNapzcSA50-Tb2vRzB10_QP1MpDUf1u8c6gLPLBSP72oSG9qQnKFz5ppLDlJezN/s320/store_item_detect_3.png" width="306" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhIQTURVTHWGY6qBB3LtXq3goqVfm_MQykmlraKoHRnG8hWKPnwaHgIB8RMTsU-BIcgfW-D5r08SnZBrupaER-IECbNAaJHQt5yhsu2iMXiucN5O-yl2obBU1-OHl-i-KqEZzn7PrgegXDU/s1600/store_item_detect_4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="303" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhIQTURVTHWGY6qBB3LtXq3goqVfm_MQykmlraKoHRnG8hWKPnwaHgIB8RMTsU-BIcgfW-D5r08SnZBrupaER-IECbNAaJHQt5yhsu2iMXiucN5O-yl2obBU1-OHl-i-KqEZzn7PrgegXDU/s320/store_item_detect_4.png" width="320" /></a></div>
<br />
<br />
As you would notice, it is able to separate out most of the packages, though it is not accurate 100% of the time. Some more detailed pre-processing would make it even more robust and accurate. The sources for this project are available at <a href="https://gist.github.com/4284840" target="_blank">my github gist here</a>.
<br />
<br />
<br />
<br /></div>
2012-12-13 - Implementing Hough Transform (Line Detection)<div dir="ltr" style="text-align: left;" trbidi="on">
The Hough transform is an interesting image processing algorithm used to detect simple geometric shapes, like straight lines, in images. It serves as a beautiful example of thinking mathematically in a different coordinate space. There are numerous articles on the internet describing how it works; to begin with, one can refer to the <a href="http://en.wikipedia.org/wiki/Hough_transform" target="_blank">wikipedia page here</a>. If you haven't already, it would help to go through them before continuing with this post.<br />
<br />
The Hough transform is based on the fact that a line in the x-y Cartesian coordinate system can be mapped onto a point in rho-theta space.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh71bBy2m2bZdkWWhit_zb7yk_1Jt3VVK2OrTgyHbtqXtdY27F-7T7Gv93OvBGHxrENsBaAjSAaRagbPx_bovb6-65BOKD5o_JcYthlLLkBDn-UEN7tY1WTDw61Zl_ebca4ekTSpnEJeTaW/s1600/mapping_xy_rtheta.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="147" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh71bBy2m2bZdkWWhit_zb7yk_1Jt3VVK2OrTgyHbtqXtdY27F-7T7Gv93OvBGHxrENsBaAjSAaRagbPx_bovb6-65BOKD5o_JcYthlLLkBDn-UEN7tY1WTDw61Zl_ebca4ekTSpnEJeTaW/s320/mapping_xy_rtheta.png" width="320" /></a></div>
<br />
Now when we see a point in an image, and we are not sure whether or not it belongs to a line-like structure, we plot a point for every possible line that could pass through it. That results in a sinusoidal curve in rho-theta space.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgw6tBFLbd38S3lqG1LgpHByPkPR_MsiYE5pNdM6hf0M8ZvAKEteVQbvlbh_W5dQPc2QfWk1_IiMZaX8ikC3k5vHD40XM5IUZMh-padNFkgPn5R8fJwtObJnFQqR4ai1aQJ1dUW_NX8ZdVk/s1600/hough-expl1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="160" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgw6tBFLbd38S3lqG1LgpHByPkPR_MsiYE5pNdM6hf0M8ZvAKEteVQbvlbh_W5dQPc2QfWk1_IiMZaX8ikC3k5vHD40XM5IUZMh-padNFkgPn5R8fJwtObJnFQqR4ai1aQJ1dUW_NX8ZdVk/s320/hough-expl1.png" width="320" /></a></div>
<br />
If the point actually does belong to a line, the corresponding rho-theta coordinate will be reinforced by the curves of all the points that belong to that line.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBfyzp8TWZaG3gOaNzRYoD6KlMgQr7R6_MI94OMfw0uJso71wqYA5_KdbGIzHS8g88Mfs1MT1TpwUawMvxHlftMY57N926262RA6kQy1S2e_njPS0Gf3zg7AperERSSeeBKYE6WPYIZdmj/s1600/hough-expl2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="160" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBfyzp8TWZaG3gOaNzRYoD6KlMgQr7R6_MI94OMfw0uJso71wqYA5_KdbGIzHS8g88Mfs1MT1TpwUawMvxHlftMY57N926262RA6kQy1S2e_njPS0Gf3zg7AperERSSeeBKYE6WPYIZdmj/s320/hough-expl2.png" width="320" /></a></div>
<br />
If we plot an intensity curve of the reinforcement strength (number of curves that cross a point in the rho-theta space), we can see peaks at the values that correspond to possible lines. These points can be isolated and picked up by applying a threshold value.<br />
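The voting procedure described above fits in a few lines; here is an illustrative Python sketch of the accumulator (a toy version for clarity, using a dict instead of a dense array):

```python
import math

def hough_lines(points, n_theta=180, threshold=2):
    """Vote in (rho, theta) space for each edge point and return the
    accumulator cells whose vote count reaches the threshold."""
    votes = {}
    for (x, y) in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            # The line through (x, y) with normal angle theta has this rho.
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            votes[(rho, t)] = votes.get((rho, t), 0) + 1
    return [(rho, math.pi * t / n_theta, v)
            for (rho, t), v in votes.items() if v >= threshold]

# Three collinear points on y = 2: the cell (rho=2, theta=pi/2) gets all 3 votes.
peaks = hough_lines([(0, 2), (3, 2), (7, 2)], threshold=3)
```

Nearby cells also collect votes because of rounding, which is why a real implementation looks for local maxima rather than every cell over the threshold.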
<br />
Though OpenCV already has an implementation of the Hough transform, it was interesting to build one of my own to see how it works. The source code of <a href="https://gist.github.com/4274416" target="_blank">my implementation can be obtained here</a>.<br />
<br />
<script src="https://gist.github.com/4274416.js"></script>
Taking a simple example of an image with two lines, here’s what we observe:<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguLCg-xfKVhgivD2gxQYNgBbuP2cfzoMoqqi4Arjslja1ANT1cbu63JHZPgLmuqQGeSDCTCRhDJ42ml5SiDH2zgt3RlzpMME_Br0dNegj3rBFHFzwNZP0RIPRd5TLTnj1SHGehkGwBdzDm/s1600/straight_lines.jpeg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguLCg-xfKVhgivD2gxQYNgBbuP2cfzoMoqqi4Arjslja1ANT1cbu63JHZPgLmuqQGeSDCTCRhDJ42ml5SiDH2zgt3RlzpMME_Br0dNegj3rBFHFzwNZP0RIPRd5TLTnj1SHGehkGwBdzDm/s1600/straight_lines.jpeg" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Image with lines that we want to detect</td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_vEaVAbDMVbKs7Oh4YXfr4NzvxB5hlST6oAxAg4s49CAH9g9KGQqeQjxlD1CnMWywnwqKdHwTXBBLJnwZwSkrRE6S9vYJ2W0tVvd84Nn2LuX97ZjFuOs0-gh0-_ajxjuEuDfarub3SRqX/s1600/straight_lines_edges.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_vEaVAbDMVbKs7Oh4YXfr4NzvxB5hlST6oAxAg4s49CAH9g9KGQqeQjxlD1CnMWywnwqKdHwTXBBLJnwZwSkrRE6S9vYJ2W0tVvd84Nn2LuX97ZjFuOs0-gh0-_ajxjuEuDfarub3SRqX/s1600/straight_lines_edges.png" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Edge detected image</td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhndGuSa4pwaA_8g24TkUfazCJNTQcwwXOpm1CdMxb104GCu5GcKxC8MweQfrxzraEZJz6EDb8g2N4UmteF0O0ipsl1qTZDjutlJCR9s7ray3SY9g5OuUmuw3RNkjZdvWkiDLGqveF-DkEP/s1600/straight_lines_hough_space.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhndGuSa4pwaA_8g24TkUfazCJNTQcwwXOpm1CdMxb104GCu5GcKxC8MweQfrxzraEZJz6EDb8g2N4UmteF0O0ipsl1qTZDjutlJCR9s7ray3SY9g5OuUmuw3RNkjZdvWkiDLGqveF-DkEP/s320/straight_lines_hough_space.png" width="192" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Hough space intensity plot with two bright points.</td></tr>
</tbody></table>
<br />
The lines detected:<br />
<br />
<ol style="text-align: left;">
<li>rho -1, theta -1.021018 (-58.500004 degrees) </li>
<li>rho 148, theta 1.028872 (58.949993 degrees) </li>
</ol>
<br />
Line 1 corresponds to the line that appears to be starting from the bottom left corner and Line 2 corresponds to the one that appears to be starting from the top left corner.<br />
<br />
It is interesting to try out different images with different threshold values to see how it affects the detection. You can also change the rho and theta granularity level (in the code) to see their effects on the detection accuracy.<br />
<br />
<br />
<hr />
<span style="font-size: x-small;">Credits for images 2 & 3:</span><br />
<span style="font-size: x-small;">http://www.ebsd-image.org/documentation/reference/ops/hough/op/houghtransform.html</span><br />
<br /></div>
2012-12-05 - OpenCV Motion Detection Based Action Trigger - Part 2<div dir="ltr" style="text-align: left;" trbidi="on">
This post is in continuation of my <a href="http://sidekick.windforwings.com/2012/12/opencv-motion-detection-based-action.html" target="_blank">previous post</a> where this motion detection application was introduced. Let's examine what is being done in the program in order to detect motion.<br />
<br />
Motion detection, in simple terms, involves comparing camera images with past images and detecting if anything significant changed between them. The program captures frames from the camera and proceeds to do the following steps on each frame to detect motion:<br />
<br />
<b>1. The first step is to scale down the image to a smaller size.</b> A smaller image needs fewer resources for processing, and as long as the object we want to detect still has a reasonable size in the scaled-down image, accuracy does not suffer. We scale the image down to at most 800 x 600, but in practice even a 400 x 300 frame size is good enough.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj6zMgTmsLSx_o5fCUv0e90coy5HsDtMksJ0jEVynyKdiElcH89Ob_8l3UNWQY5ixTcI-fvS2NstYuhy2zPUMHPE6uMAG1wcoEkQVnaIf6a4o_6gWNHKWEOgXAVMhyZabM2AEBTfDPc9hzW/s1600/motion_detect_stage_0.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="262" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj6zMgTmsLSx_o5fCUv0e90coy5HsDtMksJ0jEVynyKdiElcH89Ob_8l3UNWQY5ixTcI-fvS2NstYuhy2zPUMHPE6uMAG1wcoEkQVnaIf6a4o_6gWNHKWEOgXAVMhyZabM2AEBTfDPc9hzW/s320/motion_detect_stage_0.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Original frame after size reduction</td></tr>
</tbody></table>
<br />
<br />
<b>2. Smoothen the image.</b> Smoothing removes some noise and reduces spurious triggering of the motion detector. We do bilateral smoothing, which is slower than the usual Gaussian smoothing but preserves edges, retaining features whose movement we can detect.<br />
<br />
<b>3. Increase contrast and adjust brightness of the image.</b> This improves the image quality and the separation between different objects. If a dark object is being detected against a light background, increasing contrast and brightness can completely wash out the background, thus making it easier for us to detect the object! The contrast and brightness levels are adjustable using the corresponding sliders.<br />
<br />
<b>4. Enhance the edges in the image</b> by detecting edges and adding them back to the image. Making the edges prominent aids motion detection by amplifying changes around them. I chose the Laplacian operator for edge detection because it uses derivatives in both the x and y directions, making it better than a plain Sobel filter, and it is less aggressive than Canny edge detection.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwaC9U_Tm0vXyISvmRHZUY9iOCuY0oO8CyVbZPIBAEsUAmB1Gphl94QKMm5VChbVNd9GryrH3zMsvv3J7A7PIrstaKmnSCTVlOG4Lb4AGBs6jm6h0I0HTqV_uGDaPZv1RD8noLCQXrUgQ0/s1600/motion_detect_stage_1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="262" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwaC9U_Tm0vXyISvmRHZUY9iOCuY0oO8CyVbZPIBAEsUAmB1Gphl94QKMm5VChbVNd9GryrH3zMsvv3J7A7PIrstaKmnSCTVlOG4Lb4AGBs6jm6h0I0HTqV_uGDaPZv1RD8noLCQXrUgQ0/s320/motion_detect_stage_1.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Edges detected in the frame</td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjl_w3i6p_d54SjHtlKU9VGjNOIxBCBtX_e6X0ydlz0EAXyUnOpMIkVm_iU_j3U9y5suJlebO7R3xCDEOXVrxK-40azTB_spW6FoNjdokf7gOFMWvBH38XuY5QipKkKtFOfIi2GJcJWKyBZ/s1600/motion_detect_stage_2.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="262" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjl_w3i6p_d54SjHtlKU9VGjNOIxBCBtX_e6X0ydlz0EAXyUnOpMIkVm_iU_j3U9y5suJlebO7R3xCDEOXVrxK-40azTB_spW6FoNjdokf7gOFMWvBH38XuY5QipKkKtFOfIi2GJcJWKyBZ/s320/motion_detect_stage_2.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">After contrast, brightness adjustment and edge enhancement</td></tr>
</tbody></table>
<br />
<br />
<b>5. Create a running average</b> of the image processed so far. This running average acts as a memory of the past. The default running average weight is 0.02, which means a memory of approximately 1/0.02 = 50 frames, or about 2 seconds of a 30 fps video. The weight is adjustable using the corresponding slider.<br />
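The running average is just a per-pixel exponential moving average. Here is a pure-Python sketch of the idea; the program itself does this with OpenCV on whole frames, so `update_running_avg` is illustrative:

```python
def update_running_avg(avg, frame, weight=0.02):
    # per-pixel exponential moving average: new = (1 - w) * old + w * current
    return [(1.0 - weight) * a + weight * f for a, f in zip(avg, frame)]

avg = [0.0] * 4                          # start from a black frame
for _ in range(50):                      # ~1/weight frames of a bright scene
    avg = update_running_avg(avg, [255.0] * 4)
# after ~50 frames the average has moved about 63% of the way to 255
```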
<br />
<b>6. Subtract the current frame from the running average</b>. The difference roughly shows the item blobs that have moved. Convert this difference image to grayscale and subsequently threshold it to make the blobs clear.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhS32wxe_h3jtoQJZyMIrtzAj3C0rweYJDE0-a7MeId34DStKw0azbh2UJDgUI61CVw6krMka-W4Fql0-mqoLnLXur_wK-2ZkcsdJgeFZ9za4Rd2ejo4zJVm5_pb2WKiYPh8pkMqSR_RW4D/s1600/motion_detect_stage_3.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="266" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhS32wxe_h3jtoQJZyMIrtzAj3C0rweYJDE0-a7MeId34DStKw0azbh2UJDgUI61CVw6krMka-W4Fql0-mqoLnLXur_wK-2ZkcsdJgeFZ9za4Rd2ejo4zJVm5_pb2WKiYPh8pkMqSR_RW4D/s320/motion_detect_stage_3.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Difference with moving average</td></tr>
</tbody></table>
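Conceptually, the difference-and-threshold of step 6 reduces to the following pure-Python sketch over flat pixel lists (names are illustrative; the program uses OpenCV's image-wide operations):

```python
def motion_mask(frame, running_avg, thresh=60):
    # absolute difference with the running average, then binary threshold
    diff = [abs(f - a) for f, a in zip(frame, running_avg)]
    return [255 if d > thresh else 0 for d in diff]

mask = motion_mask([200, 30, 35], [30.0, 31.0, 32.0])
print(mask)   # -> [255, 0, 0]: only the first pixel changed enough
```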
<br />
<br />
<b>7. Dilate and erode the thresholded image</b> to remove noise and enhance the blobs. First erode twice to remove noise. Then dilate a few times to close gaps in the image. Finally erode again to restore some of the original proportions. The threshold level and the erosion and dilation amounts are adjustable using the corresponding sliders. Increasing dilation and erosion helps form contiguous blobs, but in a noisy image it can cause noise to be detected as blobs.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7MmcBy58VMH9U1Lzr88PUjwQgQGI6eVlAcpkx0cC1vvBWPd-ZVhgKd1QZyFcCVnJWFlVcKpTqv1FJ_Y4JUo98Et3zslUR0gTpmcnf-cCxW_9qEdtRy6DCYXKNIKRPkIySkQQmBY_GuV_M/s1600/motion_detect_stage_5.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="263" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7MmcBy58VMH9U1Lzr88PUjwQgQGI6eVlAcpkx0cC1vvBWPd-ZVhgKd1QZyFcCVnJWFlVcKpTqv1FJ_Y4JUo98Et3zslUR0gTpmcnf-cCxW_9qEdtRy6DCYXKNIKRPkIySkQQmBY_GuV_M/s320/motion_detect_stage_5.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Thresholded difference with blobs detected (overlapped with original)</td></tr>
</tbody></table>
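To see why erosion removes isolated specks while dilation closes gaps, here is a toy 1-D version of the two morphological operations (the program uses OpenCV's 2-D versions; this sketch is only to show the mechanism):

```python
def erode(bits):
    # a pixel survives only if it and both neighbours are set (border = 0)
    padded = [0] + bits + [0]
    return [1 if padded[i - 1] and padded[i] and padded[i + 1] else 0
            for i in range(1, len(padded) - 1)]

def dilate(bits):
    # a pixel is set if it or either neighbour is set
    padded = [0] + bits + [0]
    return [1 if padded[i - 1] or padded[i] or padded[i + 1] else 0
            for i in range(1, len(padded) - 1)]

noisy = [0, 1, 0, 0, 1, 1, 1, 1, 0, 0]
cleaned = dilate(erode(noisy))   # lone speck removed, solid blob kept
```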
<br />
<br />
<b>8. Find the contours of the blobs and determine the bounding rectangle for each.</b> The bounding rectangle gives an idea of the size of the blob. Blobs smaller than the expected object size are discarded in this step.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqssL9J-MbamkNppvqsmifLZ7gUe7le_QsaiJH-sno4dGBj7HuY4PAJlwEJn4__YVkrBex7p0-daKXPYy45a7PYT5e-aLGCDTM_zjbvJloYOjCtGebWtVpnXm7Tn6mKP1EmMnMQQgzZBTy/s1600/motion_detect_stage_6.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="262" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqssL9J-MbamkNppvqsmifLZ7gUe7le_QsaiJH-sno4dGBj7HuY4PAJlwEJn4__YVkrBex7p0-daKXPYy45a7PYT5e-aLGCDTM_zjbvJloYOjCtGebWtVpnXm7Tn6mKP1EmMnMQQgzZBTy/s320/motion_detect_stage_6.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Points of movement marked</td></tr>
</tbody></table>
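Once a blob's pixel coordinates are known, its bounding rectangle is just the min/max extents, and the size filter is a check on that rectangle's area. A small illustrative sketch (not the program's actual code):

```python
def bounding_rect(points):
    # points: (x, y) pixel coordinates belonging to one blob
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)

blob = [(10, 5), (11, 5), (10, 6), (12, 7)]
x, y, w, h = bounding_rect(blob)
print((x, y, w, h))   # -> (10, 5, 3, 3)
# blobs whose rectangle is smaller than the expected object are discarded
```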
<br />
<br />
<b>9. Act on the detected motion points.</b> If the program is in configuration mode, draw a marker around the point of movement detected. Otherwise execute the trigger action.<br />
<br />
This is an overview of generic motion detection. Special processing that exploits knowledge of the specific kind of images or videos can enhance accuracy further. To grab the source code of this application, visit my previous post (<a href="http://sidekick.windforwings.com/2012/12/opencv-motion-detection-based-action.html" target="_blank">OpenCV Motion Detection Based Action Trigger - Part 1</a>).<br />
<br />
<br /></div>
Tanmayhttp://www.blogger.com/profile/05342457728508357508noreply@blogger.com1tag:blogger.com,1999:blog-3240993750799145183.post-39595068194772577752012-12-02T21:26:00.000+05:302012-12-05T00:56:46.272+05:30OpenCV Motion Detection Based Action Trigger - Part 1<div dir="ltr" style="text-align: left;" trbidi="on">
I created a simple motion detector using OpenCV. My purpose was to use this program to trigger my USB-connected digital camera to capture closeup images of birds, insects and rodents, but I think it is generic enough to be used for other purposes. The program monitors the camera feed for movement; whenever it detects some, it executes a pre-configured command. In my case, the action takes a picture using <a href="http://www.gphoto.org/" target="_blank">gPhoto</a>. It could just as well execute any command: sending a mail, sounding an alarm or whatever. In addition, it demonstrates the steps involved in motion detection, which may be interesting to some. So here it is...<br />
<br />
In my next post, I'll explain the steps behind the motion detection. But for the impatient, right below are the command line arguments and the source code gist.<br />
<br />
<b>Source code:</b> <a href="https://gist.github.com/4189355">https://gist.github.com/4189355</a><br />
<br />
<br />
<b>Build instructions:</b><br />
<br />
<ul style="text-align: left;">
<li>Compile with <a href="http://opencv.org/" target="_blank">OpenCV</a> 2.4.3 or above.</li>
<li>Also needs <a href="http://www.hyperrealm.com/libconfig/" target="_blank">libconfig</a> for config file storage.</li>
</ul>
<br />
<br />
<b>Executing the application:</b><br />
<br />
<ul style="text-align: left;">
<li>configuration/demo mode</li>
<ul>
<li>to use video from an attached camera<br /><span style="font-family: Courier New, Courier, monospace;">motion_detect cam camera_number</span></li>
<li>to use a pre-recorded video file<br /><span style="font-family: Courier New, Courier, monospace;">motion_detect file file_path</span></li>
<li>to save a configuration file with the adjusted parameters, type 's' in the application window. A file named "motion_detect.cfg" will be saved.</li>
<li>configure the action to be executed by adding<br /><span style="font-family: Courier New, Courier, monospace;">command_on_motion = "the command"</span> <br />to the configuration file.</li>
</ul>
<li>action mode</li>
<ul>
<li>append 'act' to the commands above. <br />e.g. <span style="font-family: Courier New, Courier, monospace;">motion_detect cam &lt;camera_number&gt; act</span></li>
</ul>
</ul>
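As an example, a saved motion_detect.cfg with a gPhoto capture action could contain a line like the following (the gphoto2 invocation is illustrative; use whatever command suits your setup):

```text
# excerpt from motion_detect.cfg
command_on_motion = "gphoto2 --capture-image-and-download";
```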
<br />
Below is a screen shot of the application with motion being sensed from a surveillance camera. <i>(The GUI is not great as it uses only the basic highgui module provided by OpenCV.)</i><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvP1B5QR_7hQL-5aRVcS-s9HHzOqd8YckDCxnrg5VKKaxbx9Q-uREKhbao_p6tZhVaMx2MFHdoRPBc6jfpHCKXvOoY2BUSTak0se9EMANubDDv3QpID6dokoEBKmL3ULmZ8Q9hv6EeOWKJ/s1600/motion_detect_screen_shot.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvP1B5QR_7hQL-5aRVcS-s9HHzOqd8YckDCxnrg5VKKaxbx9Q-uREKhbao_p6tZhVaMx2MFHdoRPBc6jfpHCKXvOoY2BUSTak0se9EMANubDDv3QpID6dokoEBKmL3ULmZ8Q9hv6EeOWKJ/s640/motion_detect_screen_shot.png" width="441" /></a></div>
<br />
<br />
Do download the source code, try out this application, and share with us any improvements you come up with. In my next post I'll go through the image processing steps used for motion detection.<br />
<br />
Continuation: <a href="http://sidekick.windforwings.com/2012/12/opencv-motion-detection-based-action_5.html" target="_blank">OpenCV Motion Detection Based Action Trigger - Part 2</a><br />
<br /></div>
Tanmayhttp://www.blogger.com/profile/05342457728508357508noreply@blogger.com2tag:blogger.com,1999:blog-3240993750799145183.post-47933161548196088632012-11-17T02:44:00.000+05:302012-11-17T02:44:42.243+05:30A One Time Password Service<div dir="ltr" style="text-align: left;" trbidi="on">
One time passwords, unlike regular passwords, can be used only once. They are typically used for authenticating occasional transactions, or alongside some other authentication factor to implement multi-factor authentication. For example, your bank may send an OTP to your mobile phone and require you to punch it in along with your regular password, so as to validate that you know the account password and you also possess the mobile phone that the account holder is supposed to have.<br />
<br />
I have implemented and hosted a simple OTP service, using the <a href="http://www.mashape.com/" target="_blank">Mashape API provider</a>. The service is a REST service, accessible over HTTP, and can be consumed in any way you wish. It is hosted on my public server and uses Mashape for the passthrough proxy interface. Mashape provides generated code for PHP, Python, Ruby, Java (Android) and Objective-C (iOS) to make it super easy to use APIs.<br />
<br />
Through this post I'll walk you through using the service with a Python implementation.
<br />
<br />
<h3 style="text-align: left;">
<b>Step 1:</b></h3>
To use the API, you must first register with <a href="http://www.mashape.com/" target="_blank">Mashape</a> and get yourself an account. Once you register, visit the <a href="https://www.mashape.com/tanmaykm/otpgen" target="_blank">OTP generator API page</a> to have a look at the details. Download the Python sample implementation from the link on this page and you are ready to start! During the download, you will also be shown your Mashape public and private keys to be used with the API; copy them somewhere for later use. Remember to keep your keys safe.<br />
<br />
<br />
<h3 style="text-align: left;">
Step 2:</h3>
Extract the downloaded sample code into a folder. Read the README file to make sure you have the dependencies installed.
<br />
Open the file sample.py, and paste your public and private keys at the places indicated.
<br />
<br />
<br />
<h3 style="text-align: left;">
Step 3:</h3>
As a first step, you must configure your account with the otpgen API server. Configuration involves setting up a shared secret for your account so that no one else can masquerade as you. Initially, before you use the configure API, the secret is set to be the same as your public key. You must change it before you can use any other API. The <a href="https://www.mashape.com/tanmaykm/otpgen#configure" target="_blank">configure API</a> is therefore the first API that must be called. It takes in a secret to be set for your account and a signature.
<br />
<br />
The signature is part of every API call to otpgen. It is the SHA1 hash of all the parameters to the API ordered by name, followed by the shared secret, all separated by commas. So for example, if you are setting up your account with a secret value of "supersecret" and your account public key is 12345, the signature for the configure API call would be the SHA1 hash of the string "secret,supersecret,12345".
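You can verify the example for yourself with a couple of lines of Python:

```python
import hashlib

# signature for the first-time configure call in the example above:
# one parameter ("secret" = "supersecret"), shared secret = public key "12345"
sig = hashlib.sha1(b"secret,supersecret,12345").hexdigest()
print(sig)   # a 40-character hex string
```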
<br />
<br />
Since you have to do the signing for every API call, let's first define a method to do that. Below is the method that I have implemented:
<br />
<br />
<pre>def sign_request(req_params, secret):
    keys = req_params.keys()
    keys.sort()
    kvarr = []
    for key in keys:
        kvarr.append(key)
        kvarr.append(str(req_params.get(key)))
    kvarr.append(secret)
    sha = hashlib.sha1()
    sha.update(",".join(kvarr))
    return sha.hexdigest()
</pre>
<br />
<br />
Now you can define another method to do the first time secret configuration. Below is the method that I have implemented. Actually I also implemented a simple helper method called unwrap to get the response JSON out of the Mashape wrapped response object.
<br />
<br />
<pre>MY_MASHAPE_PUB_KEY = 'replace this with your public key'
MY_MASHAPE_PRIV_KEY = 'replace this with your private key'

client = Otpgen(MY_MASHAPE_PUB_KEY, MY_MASHAPE_PRIV_KEY)

def unwrap(mashape_resp):
    return (vars(mashape_resp)).get('body')

def configure_first_time(secret):
    req_params = { 'secret': secret }
    sig = sign_request(req_params, MY_MASHAPE_PUB_KEY)
    req_params['sig'] = sig
    response = client.configure(**req_params)
    # retrieve the JSON response from the wrapper
    response = unwrap(response)
    return response
</pre>
<br />
<br />
<h3 style="text-align: left;">
Step 4:</h3>
Once you have configured your account, you can go ahead and <a href="https://www.mashape.com/tanmaykm/otpgen#gen" target="_blank">generate</a> one time passwords for users and <a href="https://www.mashape.com/tanmaykm/otpgen#verify" target="_blank">verify</a> them. While issuing a token, you can set a string as data to be remembered, or tagged, with this OTP. On successful validation of the OTP, the tagged data will be returned to the validating call.<br />
<br />
By now you would have gotten the hang of how to use the API, and we have written most of the support functions needed for using it. I have implemented the methods issue_otp and validate_otp in my sample. The sample source issues one OTP and verifies it immediately after that. Go ahead and refer to the <a href="https://gist.github.com/4090713" target="_blank">complete source code here</a> to see how I have implemented it.<br />
<br />
To summarize, here are a few quick links:
<br />
<ul>
<li>The Mashape OTP generation API: <a href="https://www.mashape.com/tanmaykm/otpgen">https://www.mashape.com/tanmaykm/otpgen</a></li>
<li>Complete Python sample that we worked out in this post: <a href="https://gist.github.com/4090713">https://gist.github.com/4090713</a></li>
</ul>
<br />
This implementation should suffice for the simple use cases of most applications. If you have any requirement that is not met, or ideas that you would like to discuss, please do reach out to me.
<br />
<br /></div>
Tanmayhttp://www.blogger.com/profile/05342457728508357508noreply@blogger.com0tag:blogger.com,1999:blog-3240993750799145183.post-34176041883143122012-10-17T17:41:00.000+05:302012-10-17T17:41:15.500+05:30Python Script to Merge "My Archives" Call and SMS Logs in Google Drive<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
Sometime back I built an Android application that regularly archives call logs and SMSs. The application can be found <a href="https://play.google.com/store/apps/details?id=com.meeteoric.myarchives&hl=en" target="_blank">at Google Play here</a>.<br />
<br />
The application allows one to set up a schedule to archive and upload call logs and SMS messages to Google Drive. If one has chosen a weekly or daily archive, there will be a lot of files in Google Drive. Though this is not a problem and Google Drive handles it perfectly, sometimes one may want to merge them for aesthetics' sake.<br />
<br />
I have been experimenting with Python and the Google Drive API and have created a small Python app that does just that. It is not really packaged for end-user use, but I am putting it up here to serve as reference. If you are interested in developing it further, I would be glad to help in whichever way possible. At the very least, this may serve as a simple learning app for the Google Drive Python API.<br />
<br />
Here's what the script does:<br />
<br />
<ol style="text-align: left;">
<li>If this is the first run, it authorizes to Google Drive. Since this is a console app, it prints a URL to be opened in the browser, and the result must be given back to the app. It stores the credentials in a file so that they can be reused in the future without bothering you again.</li>
<li>It gets the list of files in the appropriate folder of Google Drive matching the year and month that it needs to work on.</li>
<li>Downloads each file in "ods" format.</li>
<li>Runs unoconv to convert them to csv.</li>
<li>Merges the csv files.</li>
<li>Uploads the merged files back into Google Drive.</li>
<li>Moves the old files in Google Drive to trash.</li>
</ol>
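The merge itself (steps 5 and 6 aside) is plain CSV handling. Here is a self-contained sketch of the idea; the function name and file layout are illustrative, not the script's actual code:

```python
import csv
import glob
import os
import tempfile

def merge_csv(paths, out_path):
    # write the header row once, then append the data rows from every file
    header, rows = None, []
    for path in paths:
        with open(path, newline='') as f:
            reader = csv.reader(f)
            file_header = next(reader)
            if header is None:
                header = file_header
            rows.extend(reader)
    with open(out_path, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)

# tiny demo: two weekly archives merged into one
d = tempfile.mkdtemp()
for name, data in (('w1.csv', [['number', 'duration'], ['111', '30']]),
                   ('w2.csv', [['number', 'duration'], ['222', '45']])):
    with open(os.path.join(d, name), 'w', newline='') as f:
        csv.writer(f).writerows(data)
merge_csv(sorted(glob.glob(os.path.join(d, 'w*.csv'))),
          os.path.join(d, 'merged.csv'))
with open(os.path.join(d, 'merged.csv'), newline='') as f:
    merged = list(csv.reader(f))
```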
<br />
<br />
If you are ready to try out the script, here's how to go about it.<br />
<br />
<ol style="text-align: left;">
<li>The program needs at least Python 2.6. For python 3, it would need slight modifications. If you do not have python, install it from <a href="http://python.org/">http://python.org/</a></li>
<li>For converting spreadsheets to csv files, I have used <a href="http://openoffice.org/" target="_blank">openoffice</a> and a script called <a href="https://github.com/dagwieers/unoconv" target="_blank">unoconv</a>. Openoffice can be invoked in a headless environment (no UI) with command line parameters to be used just as a document converter. And unoconv.py provides an easy platform independent interface to detect and invoke openoffice.</li>
<li>You will need to enable the Google Drive API for your account and enable the Google Drive python library. Follow the steps mentioned in the <a href="https://developers.google.com/drive/quickstart" target="_blank">Google Drive SDK quick start here</a>. You may follow the quickstart to test out whether your setup works.</li>
<li>Download the archives merge script <a href="https://gist.github.com/3905178" target="_blank">from github here</a> and replace the values of CLIENT_ID and CLIENT_SECRET with what you created in step 3.</li>
<li>Place unoconv.py in the same folder as the above script.</li>
<li>Now you can run the script with year (YYYY) and month (MM) numbers you wish to merge. But before you do that, you may want to go through the next section to know what the script does.</li>
</ol>
<br />
<br />
</div>
Tanmayhttp://www.blogger.com/profile/05342457728508357508noreply@blogger.com2tag:blogger.com,1999:blog-3240993750799145183.post-32571090818845730452012-05-19T18:05:00.000+05:302012-05-19T18:06:46.711+05:30Inter Thread Communication: Socketpairs vs. In-Memory Buffers<div dir="ltr" style="text-align: left;" trbidi="on">
In multi-threaded applications, efficient communication between threads is often a challenge. A good scheme for doing this should have the following characteristics:<br />
<br />
<ul style="text-align: left;">
<li>should be lock free (threads should not necessarily block while communicating)</li>
<li>respect message boundaries when multiple threads communicate to one thread</li>
</ul>
<br />
<b>Socketpairs</b> are often used for this. A socketpair is a bi-directional socket with the file descriptors of both ends provided; data written to one end can be read from the other. With one end held by one thread and the other end by another, a socketpair appears to provide a good inter-thread communication pipe without locking. Or does it?<br />
<br />
Socketpairs are a facility provided by the kernel. Reads and writes into it involve an expensive context switch (system calls). When the same process writes and reads from socketpairs, it switches to kernel mode, copies part of its memory into a kernel buffer, switches back to user mode, switches to kernel mode to do a 'select' or equivalent and back, and then switches to kernel mode to read from kernel buffer to user memory, and switches back to user mode. Phew! That piece of memory was right there in the same user space, damn it! Doing inter-thread communication with socketpairs is like snaking your arm around your head to touch your nose.<br />
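To make the round trip concrete, here it is in a few lines of Python (the benchmark code in this post is C, but the system-call pattern is identical): every send and recv crosses the user/kernel boundary just to move bytes that never leave the process.

```python
import socket

# both ends of the pair live in this one process, yet every transfer
# below is a system call that copies through a kernel buffer
a, b = socket.socketpair()
a.sendall(b"wake up")      # user memory -> kernel buffer
msg = b.recv(1024)         # kernel buffer -> user memory
a.close()
b.close()
print(msg)                 # -> b'wake up'
```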
<br />
Is there a better way?<br />
<br />
People usually avoid using in-memory message buffers because they require <b>locking</b>. The writer needs to lock it while writing so that the reader does not read till write is complete and no other writer writes to it.<br />
<br />
In earlier days, when <b>semaphores</b> were the only locking primitive available, this was an expensive mechanism. Semaphores are meant to do much more than a mutex. For example, they can be used across processes, they can be locked and unlocked by different threads/processes, and they can maintain a count. They interact with the scheduler much more deeply and hence are considered 'heavy'.<br />
<br />
With <b>mutex</b> based locks available in most systems, locking can be much lighter. A mutex is lightweight because it is simpler: it is limited to one process, it can be unlocked only by the same thread that locked it, and it is binary (count 0 or 1). Socketpairs did have an advantage a few years back when advanced locking primitives were not available, but not any more.<br />
<br />
Below I've pasted a piece of code to compare the two mechanisms we discussed above. I've implemented a simple <b>queue with two locks</b> - one head lock for the reader and a tail lock for the writer. When the queue has data, it can be written to and read from without any lock contention. Only when the reader does not find any data, does it lock the tail lock to flush any cached data and check whether it really does not have any data. There is a chance of contention at that point, but it will be very infrequent.<br />
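For illustration, here is the two-lock idea transcribed into Python. This is only a sketch: the timing numbers in this post come from the C implementation in the gist, and Python's GIL would mask any real performance difference.

```python
import threading
from collections import deque

class TwoLockQueue:
    # Python sketch of the two-lock scheme described above: writers
    # serialize on the tail lock, the reader on the head lock, so a
    # non-empty queue sees no reader/writer contention.
    def __init__(self):
        self._head_lock = threading.Lock()   # reader side
        self._tail_lock = threading.Lock()   # writer side
        self._items = deque()

    def put(self, item):
        with self._tail_lock:
            self._items.append(item)

    def get(self):
        # non-blocking read; returns None when the queue is empty
        with self._head_lock:
            try:
                return self._items.popleft()
            except IndexError:
                return None

q = TwoLockQueue()
q.put("msg from thread A")
print(q.get())   # -> msg from thread A
```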
<br />
Compile and run the code, and it will print out the command line arguments required for the two modes. Here's what I got on my laptop:<br />
<br />
For the in-memory queue:<br />
real<span class="Apple-tab-span" style="white-space: pre;"> </span>0m4.005s<br />
user<span class="Apple-tab-span" style="white-space: pre;"> </span>0m4.113s<br />
sys<span class="Apple-tab-span" style="white-space: pre;"> </span>0m0.197s<br />
<div>
<br /></div>
<br />
For the socketpair:<br />
real<span class="Apple-tab-span" style="white-space: pre;"> </span>0m22.875s<br />
user<span class="Apple-tab-span" style="white-space: pre;"> </span>0m5.770s<br />
sys<span class="Apple-tab-span" style="white-space: pre;"> </span>0m39.505s<br />
<div>
<br /></div>
<div>
In my run socketpairs took 10 times more CPU (with heavy sys time) and were 5 times slower than the in-memory queue.</div>
<script src="https://gist.github.com/2730638.js?file=tmq.c">
</script>
<br />
<div>
<br /></div>
</div>Tanmayhttp://www.blogger.com/profile/05342457728508357508noreply@blogger.com1tag:blogger.com,1999:blog-3240993750799145183.post-5754849184081197662012-04-24T23:19:00.000+05:302012-04-24T23:19:45.950+05:30Memcache vs Database for simple selects<div dir="ltr" style="text-align: left;" trbidi="on">
Will using memcache for caching database query results of simple single record selects improve performance?<br />
<br />
Let's lay down the facts before we make any judgement:<br />
<ul style="text-align: left;">
<li>Memcache is a fast, out of process, in-memory cache that can be used to store and retrieve values based on keys.</li>
<li>Database also has its own implementation of query and result caches.</li>
<li>Both memcache and the database calls involve IPC and data serialization.</li>
<li>Database additionally has to provide data consistency when updates happen to the cached records.</li>
</ul>
<div>
Both look very similar, except that the database may have an overhead of checks for data consistency. Using memcache will be beneficial only if that overhead is substantial. Note that we are considering only "simple selects"; joins, updates and index searches cannot be applied to this comparison.</div>
<div>
<br /></div>
<div>
<a href="https://gist.github.com/2481633" target="_blank">Here is a set of test programs</a> to measure the performance of memcache vs. the database. The sources should be self explanatory. Essentially, we create 150000 records in the database, each with 11 CHAR columns of 32 bytes each. Similarly, we populate a memcache instance with 150000 records of the same structure. For memcache, our data is a direct memcpy of the C structure holding it. Then we fetch the records multiple times and measure the time taken in both cases (database and memcache).<br />
<br />
With a properly sized database (MySQL with sufficient query cache size to hold all records) and memcache (enough memory to hold all records), here are the results:<br />
<br />
<br />
<b>MEMCACHE:</b><br />
real <span class="Apple-tab-span" style="white-space: pre;"> </span>2m14.924s<br />
user <span class="Apple-tab-span" style="white-space: pre;"> </span>0m17.373s<br />
sys <span class="Apple-tab-span" style="white-space: pre;"> </span>0m43.588s<br />
<br />
<b>DB:</b><br />
real <span class="Apple-tab-span" style="white-space: pre;"> </span>2m7.675s<br />
user <span class="Apple-tab-span" style="white-space: pre;"> </span>0m23.906s<br />
sys <span class="Apple-tab-span" style="white-space: pre;"> </span>0m26.638s<br />
<br />
This is a very crude measurement. There are quite a few factors that may differ in a real life scenario and affect performance. E.g. when the database is located on the same machine, the DB client library may be using a faster IPC mechanism than memcache, which here was accessed over TCP/IP. However, all such influences may not be substantial. A well tuned database can in fact perform better than memcache for simple record fetches. Joins, updates and index searches can however require substantial database processing, and may be beneficial to cache.<br />
<br />
Memcache is meant to be used for storing processed data, or data that is difficult to fetch (e.g. remote APIs, file reads etc.). Using it for caching data that is easy to fetch, or is already cached elsewhere is just unnecessary and adds to overheads.<br />
<br />
</div>
</div>Tanmayhttp://www.blogger.com/profile/05342457728508357508noreply@blogger.com0tag:blogger.com,1999:blog-3240993750799145183.post-16437453877382221562012-01-14T18:21:00.001+05:302012-01-14T18:21:26.638+05:30Android App - Versatile Lists<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuA5uFhumsG7sNTIDr79aa9KhAB_wdyD9GR2ZAJmwWzEGvt_zgIDGJMs7f3ECi1BrqevYefhyphenhyphenxRaRJg0YwYgK_cnCwjsoKxKVk45inp9utMnWPAuwP2hCFIj9cDj9DxuyH5GQiNd50l4IY/s1600/MyLists512x512.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuA5uFhumsG7sNTIDr79aa9KhAB_wdyD9GR2ZAJmwWzEGvt_zgIDGJMs7f3ECi1BrqevYefhyphenhyphenxRaRJg0YwYgK_cnCwjsoKxKVk45inp9utMnWPAuwP2hCFIj9cDj9DxuyH5GQiNd50l4IY/s200/MyLists512x512.png" width="200" /></a></div>
Smart phones today are ubiquitous. From calculators and address books to music and gaming, many gadgets have converged into today's smart phone. Android has become particularly widespread because of the wide range of devices available.<br />
<br />
The versatile lists application is an Android app called "My Lists" that we developed after trying out many existing list apps in the market. What we found missing in all of them was the ability to define our own list types.<br />
<br />
<br />
<br />
<b><span style="font-size: large;">Features:</span></b><br />
<ul>
<li>Maintain customizable lists of items. You decide attributes for each list - completely customizable.</li>
<li>Lists can be password protected - keep things away from prying eyes.</li>
<li>Search your list directly from Android search box.</li>
<li>Mark entries with different colors with a long press on the entries.</li>
<li>Easy to use interface with large buttons.</li>
<li>Use it for shopping lists, to-do lists, trip expenses, short notes, interesting books/movies/music albums that you wish to check out.</li>
<li>Unlimited lists, and unlimited items in each list.</li>
<li>No ads in the app and absolutely free!</li>
<li>Works on all versions of Android starting from Android 2.1 (Eclair).</li>
</ul>
<div>
<br /></div>
<b><span style="font-size: large;">Screenshots:</span></b><br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjAR4DlhHgPCfeWkX9k-CQZ8g6QMK5_TTp_CVkgPdWS211P9brb0OeuDU1KrJwNRGd1y43roHdVmN5JJ3o0s2wjUQeAugQb0o3JDZdtPAwD7gQlqD4ZVPV6q2vzw0q2V05a0ePqGPt2Cv5l/s1600/MyLists_ListItems.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjAR4DlhHgPCfeWkX9k-CQZ8g6QMK5_TTp_CVkgPdWS211P9brb0OeuDU1KrJwNRGd1y43roHdVmN5JJ3o0s2wjUQeAugQb0o3JDZdtPAwD7gQlqD4ZVPV6q2vzw0q2V05a0ePqGPt2Cv5l/s320/MyLists_ListItems.png" width="177" /></a></td>
<td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwNPowINqzMD85mEiE3YLfX3jx8bSY227LaYZnNRBgEKdtZ418jSwYutP_HYlQWgwec9GYbgO8aSVMkOtKO8sd2IQLVQociLJX16nPVcnyM58gJc8OscId8vLYiHxAV_ZBNM016Wz2tQVF/s1600/MyLists_AddItem.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwNPowINqzMD85mEiE3YLfX3jx8bSY227LaYZnNRBgEKdtZ418jSwYutP_HYlQWgwec9GYbgO8aSVMkOtKO8sd2IQLVQociLJX16nPVcnyM58gJc8OscId8vLYiHxAV_ZBNM016Wz2tQVF/s320/MyLists_AddItem.png" width="177" /></a></td>
</tr>
<tr><td class="tr-caption" style="text-align: center;">A list of books</td><td class="tr-caption" style="text-align: center;">Adding an entry</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKy0lAfPf9NvuQb4q1YdT78B09Bdhn2BSmR4ipfJJLM1rVlxNLW_eB95tNlZ6dArnPi-RrAuh8ayQpjtRjWYbQBCF8dMk5WXKiZEYbnhCMB8gMKoX4Xf1LIiREj1G2GYKqksFq0CM8L_vb/s1600/MyLists_Lists.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKy0lAfPf9NvuQb4q1YdT78B09Bdhn2BSmR4ipfJJLM1rVlxNLW_eB95tNlZ6dArnPi-RrAuh8ayQpjtRjWYbQBCF8dMk5WXKiZEYbnhCMB8gMKoX4Xf1LIiREj1G2GYKqksFq0CM8L_vb/s320/MyLists_Lists.png" width="177" /></a></td><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8-oB3fV1ehs6i3RMxd04uP8ziU1WFlXBjNZo70UUk5sDfivVJLNOEXAmAsPxazmcx17YMoqGJqD1t13XUWgaHOGj9WLAz23b_QaezSH1IQgmd7rbk89EGHhPY_W0Cbhxo_i_YOYLUGSP1/s1600/MyLists_Settings.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8-oB3fV1ehs6i3RMxd04uP8ziU1WFlXBjNZo70UUk5sDfivVJLNOEXAmAsPxazmcx17YMoqGJqD1t13XUWgaHOGj9WLAz23b_QaezSH1IQgmd7rbk89EGHhPY_W0Cbhxo_i_YOYLUGSP1/s320/MyLists_Settings.png" width="212" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">All lists shown with menu open</td><td class="tr-caption" style="text-align: center;">Settings</td></tr>
</tbody></table>
<br />
<br />
<b><span style="font-size: large;">Download:</span></b><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDKTPj11J79MavsS74R_PM65aAODTZ0UelAZckWVNO_00zKLcUDHvHhmGZmzB0XjnCFlThA5VtAmzwBmccP245EdxukYVWftgCplqabz6_6U9hyf1_bGEBd9I5OeVt8EaIy5EH7gYQ0vnH/s1600/MyLists_QR.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDKTPj11J79MavsS74R_PM65aAODTZ0UelAZckWVNO_00zKLcUDHvHhmGZmzB0XjnCFlThA5VtAmzwBmccP245EdxukYVWftgCplqabz6_6U9hyf1_bGEBd9I5OeVt8EaIy5EH7gYQ0vnH/s200/MyLists_QR.png" width="200" /></a></div>
<br />
"My Lists" is available free on the Android market. <b>Search "My Lists" in Android market.</b><br />
<br />
If you have a QR code reader on your phone, <b>scan the QR bar code on the left</b>.<br />
<br />
If you are on a PC, <b>click <a href="https://market.android.com/details?id=com.meeteoric.mylists" target="_blank">here</a> to visit the "My Lists" page</b> on Android market.<br />
<br />
Feedback and suggestions are welcome!
</div>Tanmayhttp://www.blogger.com/profile/05342457728508357508noreply@blogger.com5tag:blogger.com,1999:blog-3240993750799145183.post-83976022739369124502011-10-30T14:44:00.000+05:302011-10-30T14:44:03.242+05:30Arduino Garden Sprinkler - Making the Sensor<div dir="ltr" style="text-align: left;" trbidi="on">
This is a follow-up to my previous posts on making an automated garden sprinkler using the Arduino platform. I'll detail the making of the humidity sensor here. The soil property most affected by humidity is its resistance / dielectric behavior. Commercial humidity sensors are based on capacitance or impedance measurements that change with humidity. While that is probably a more appropriate method, simply measuring the soil resistance provides good enough accuracy and sensitivity for automating our sprinkler.
<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZiQbLism_BFmXv8chq7iCYgTUxWxY0IallFbWlCcj6vI7ocvDkG1E3_lwkRPDAEGQOby0L6Ywcfmdy19nnNxffCLaR6ekJKNBbPLP6wtqGY6vUjX12kePAKRC6o7F_-krNidENIra4Hqe/s1600/SoilHumiditySensorVoltageDivider.png" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="102" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZiQbLism_BFmXv8chq7iCYgTUxWxY0IallFbWlCcj6vI7ocvDkG1E3_lwkRPDAEGQOby0L6Ywcfmdy19nnNxffCLaR6ekJKNBbPLP6wtqGY6vUjX12kePAKRC6o7F_-krNidENIra4Hqe/s200/SoilHumiditySensorVoltageDivider.png" width="200" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Humidity Sensor Circuit</td></tr>
</tbody></table>
The soil humidity sensor I'm using is based on a simple voltage divider circuit. We calculate the soil resistance by measuring the ratio of the voltage drop across a known resistance R1 to that across the soil probes R2. Resistance R2 varies with the amount of humidity at the sensor: the higher the humidity, the lower the resistance. The choice of R1 is based on the average resistance of the sensor (R2) in the humidity range where we want the most accuracy. I am using a 330 ohm resistor, which I determined by measuring the sensor's resistance in a controlled environment (a pot full of soil). While choosing R1, make sure it is large enough that the sensor circuit draws current within limits. It is better to leave room for different sensor types by using a potentiometer (1K should be fine).<br />
<br />
The resistance can be calculated as:<br />
R2 = R1 * Vs / (5-Vs)<br />
And if the voltage is applied in the reverse, R2 = R1 * (5 - Vs) / Vs.<br />
<br />
Since Arduino ADC readings range from 0 to 1023 (treated here as a fraction of 1024 steps), the corresponding equations become: <br />
R2 = R1 * ADC_Reading / (1024 - ADC_Reading) <br />
R2 = R1 * (1024 - ADC_Reading) / ADC_Reading
<br />
<br />
In my sprinkler circuit I use the second scheme mentioned above, where +5V is applied to the sensor terminal and R1 is grounded. The sensor reading displayed by the sprinkler unit is the raw ADC value; it also displays the calculated resistance for reference.<br />
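The two equations can be turned into small helpers; here is a sketch in plain C++ (function names are mine; it assumes the 1024-step convention for the 10-bit ADC, matching the equations above):

```cpp
#include <cassert>
#include <cmath>

// Voltage-divider helpers for the soil sensor. R1 is the known series
// resistor, adc is the raw 10-bit reading (0..1023), treated as a
// fraction of 1024 steps of the 5V reference.

// Scheme 1: R1 tied to +5V, sensor (R2) grounded; ADC measures across R2.
double soilResistanceR1High(double r1, int adc) {
    return r1 * adc / (1024.0 - adc);
}

// Scheme 2 (used in the sprinkler): +5V on the sensor, R1 grounded;
// ADC measures across R1.
double soilResistanceR1Low(double r1, int adc) {
    return r1 * (1024.0 - adc) / adc;
}
```

With R1 = 330 ohms and a mid-scale reading of 512, both orientations give R2 = 330 ohms, as expected when the drops across R1 and R2 are equal.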
<br />
<b>Materials (exact dimensions are not important):</b><br />
<ol>
<li>Plaster of Paris</li>
<li>1 x Cylindrical plastic piece, 3 inch height, 3 inch dia (1.5 inch after the cut; see step 1 below). A piece from a plastic bottle would do.</li>
<li>2 x Galvanized screws. 2 inch long, 4mm dia.</li>
<li>4 x nuts to fit the above screws</li>
<li>Masking tape</li>
<li>0.5 meter multi strand copper wire. Of the same gauge used for home electrical wiring.</li>
<li>A 1 inch square piece of stiff plastic. A piece from old plastic credit card would do.</li>
</ol>
<br/>
The slideshow below shows the materials and a few steps while making a sensor (detailed immediately after). In the text of the steps, the numbers in brackets refer to materials above.<br /><br />
<div align="center">
<embed flashvars="host=picasaweb.google.com&captions=1&hl=en_US&feat=flashalbum&RGB=0x000000&feed=https%3A%2F%2Fpicasaweb.google.com%2Fdata%2Ffeed%2Fapi%2Fuser%2F106731709412682628273%2Falbumid%2F5669205698774332065%3Falt%3Drss%26kind%3Dphoto%26authkey%3DGv1sRgCJPiw_Xj2I3Cdg%26hl%3Den_US" height="267" pluginspage="http://www.macromedia.com/go/getflashplayer" src="https://picasaweb.google.com/s/c/bin/slideshow.swf" type="application/x-shockwave-flash" width="400"></embed>
</div>
<br />
<b>Steps:</b><br />
<ol>
<li>Cut the cylindrical plastic piece (2) along its length so that it can be opened up by pulling it apart. Note that once cut, the plastic will usually wrap on itself and the resulting diameter will become smaller than earlier. Choose an initial size larger than what you want finally.</li>
<li>Use masking tape (5) to seal the cylindrical piece (2) from all sides except one end.</li>
<li>Punch two holes of around 5mm dia on the stiff plastic square piece (7) approximately 15mm apart. A regular paper punch would do the job. Place the 2 inch long screws (3) into the holes. Use one nut (4) on each screw to tighten. Make sure they are roughly parallel.</li>
<li>Strip the ends of the wire (6). Tighten them, one each on two screws (3) using two nuts (4) on each screw as a pinch. Seal the edges where the wires join using some tape to prevent loose wire strands from shorting the terminals. Cut the plastic piece (7) so that it fits into the cylinder (2).</li>
<li>Mix PoP (1) and pour into the cylinder (2) to a height of two inches. Maintain a gel-like consistency; if the mixture is too watery you may get cracks when it hardens.</li>
<li>Push the screw contacts into the PoP mixture till they stick out slightly from the PoP gel but stay completely inside the plastic cylinder.</li>
<li>Hold the wires from the contacts straight up and fill up the remaining part of the cylinder with more PoP gel till the top, sealing the contacts completely inside.</li>
<li>Leave the setup for 30 minutes for the PoP to harden enough. Then tear out the masking tape and open the outer plastic mould. You have your sensor! Leave it overnight for the PoP to cure and harden completely.</li>
</ol>
</div>Tanmayhttp://www.blogger.com/profile/05342457728508357508noreply@blogger.com2tag:blogger.com,1999:blog-3240993750799145183.post-76895557601552403762011-09-12T23:43:00.001+05:302011-09-12T23:43:33.510+05:30Remote Camera Trigger for DSLRHere's a quick one. The remote triggers of many cameras (including mine - Canon Rebel XTi) work on a very simple mechanism. The trigger socket accepts a normal <a href="http://en.wikipedia.org/wiki/TRS_connector">2.5mm TRS (audio) pin</a>.<br/><br/>
The pin connections are as follows:
<ul>
<li>Tip is for expose function.</li>
<li>Ring is for focus function.</li>
<li>Sleeve is connected to ground. It is common for both expose pin and focus pin.</li>
</ul>
To activate a function, connect the corresponding contact to ground (the sleeve); that's it!<br/><br/>
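The contact-to-function mapping can be captured in a toy sketch (purely illustrative; the names are mine, not Canon's):

```cpp
#include <cassert>
#include <string>

// Toy model of the 2.5mm TRS trigger pinout described above: a camera
// function fires when its contact is shorted to the sleeve (ground).
enum class Contact { Tip, Ring, Sleeve };

// Returns the function activated by shorting contact `c` to the sleeve.
std::string onShortToSleeve(Contact c) {
    switch (c) {
        case Contact::Tip:  return "expose";  // full shutter release
        case Contact::Ring: return "focus";   // like a half-press
        default:            return "none";    // the sleeve is ground itself
    }
}
```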
Here's how I built a simple remote with focus, expose and long expose function for bulb mode.<br/><br/>
<b>Materials:</b>
<ul>
<li>PCB mount push button switches with long stems - 2 numbers.</li>
<li>PCB mount toggle switch - 1 number.</li>
<li>Perforated board.</li>
<li>Old headphone with a good cable.</li>
<li>Feviquick - quick acting adhesive that bonds plastic/rubber.</li>
<li>M-Seal - epoxy resin putty for covering the stuff.</li>
</ul>
<br/>
<b>Steps:</b>
<ul>
<li>Cut off the ear pieces from the headphone wire and strip the edges. You should find two strands of conductors in each of the wires that went into the ear pieces. Identify the ground wire (usually gray), the tip wire (usually red), and the ring wire (usually green). Use a multimeter to verify the connections.</li>
<li>Cut off a thin strip of the perforated board that can fit your switches. Keep a bit of spare length. You'll need it to fix the wires later.</li>
<li>Mount the components and solder them. Solder the headphone wires at the end, and be easy with it as the wires are very delicate. Pictures below show how I laid them out.</li>
<li>Fix a small portion of the headphone wire on to the PCB with Feviquick so that it does not flex near the solder point.</li>
<li>Test it once with the camera and the multimeter to ensure it's working and there are no loose connections.</li>
<li>Mix the M-Seal putty and cover your creation fully sealing it. Make your favorite shape as you like it. Mine looks like dog poop. :)</li>
<li>Wait until it hardens. Spray on a rubber coating if you have it. Enjoy clicking!</li>
</ul>
<br/><br/>
<b>Pictures:</b>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEintm2YpYJ5TIwIQDC954Cs09HILFs1lkNL97uy77lF8TCsXsazY8MVVDSsNk8aN0Dnx9lQ3q_IirBDAOCpcuebMdhYEXGrcie4gu5WqQxDTzuh6WSX4PrLF4oqsb-MN4l7L-WrYsSo2boV/s1600/IMG_7515.JPG" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="134" width="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEintm2YpYJ5TIwIQDC954Cs09HILFs1lkNL97uy77lF8TCsXsazY8MVVDSsNk8aN0Dnx9lQ3q_IirBDAOCpcuebMdhYEXGrcie4gu5WqQxDTzuh6WSX4PrLF4oqsb-MN4l7L-WrYsSo2boV/s200/IMG_7515.JPG" /></a> <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8idURhvEx7od3UUVHxbj5YaGDqhcDDz1hGI61qOUBbk7r6cKNMKDiErtbFPq2DmThemyq2aOIdmUe9HjalkuQzpbhourpfujoS9Li-C4zy4C3sSqTeTQn5vd-RDso6IUDrJs96d_UJ42L/s1600/IMG_7516.JPG" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="134" width="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8idURhvEx7od3UUVHxbj5YaGDqhcDDz1hGI61qOUBbk7r6cKNMKDiErtbFPq2DmThemyq2aOIdmUe9HjalkuQzpbhourpfujoS9Li-C4zy4C3sSqTeTQn5vd-RDso6IUDrJs96d_UJ42L/s200/IMG_7516.JPG" /></a> <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQldM5WVqHefDHPbIx8kuYNqZoJF_rrKY-OnY-Uyj5JdNlU_1vtHZOYpV6h6wlZLNzvU8Hqp_XOB4-gjm4cvtGaAgZhL1YQ4eRHrVeQVrDjixBVhWvJip_CYGNczWXcFyptzmNiC51IAoP/s1600/IMG_7522.JPG" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="134" width="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQldM5WVqHefDHPbIx8kuYNqZoJF_rrKY-OnY-Uyj5JdNlU_1vtHZOYpV6h6wlZLNzvU8Hqp_XOB4-gjm4cvtGaAgZhL1YQ4eRHrVeQVrDjixBVhWvJip_CYGNczWXcFyptzmNiC51IAoP/s200/IMG_7522.JPG" /></a></div>
<b>Compatibility:</b><br/>
This should be equivalent in function to <a href="http://shop.usa.canon.com/webapp/wcs/stores/servlet/product_10051_10051_171307_-1">Canon Remote Switch RS-60E3</a>. Hence it should be compatible with all the models listed with that product. I'm just replicating the same list here for convenience:<br/>
<ul>
<li>Digital Rebel XSi / XTi</li>
<li>EOS 60D</li>
<li>EOS Digital Rebel</li>
<li>EOS Digital Rebel XT</li>
<li>EOS ELAN 7 / 7E</li>
<li>EOS ELAN 7N</li>
<li>EOS ELAN II</li>
<li>EOS ELAN IIE</li>
<li>EOS IX LITE</li>
<li>EOS Rebel 2000</li>
<li>EOS Rebel G</li>
<li>EOS Rebel G II</li>
<li>EOS Rebel T1i (Body)</li>
<li>EOS Rebel T2</li>
<li>EOS Rebel T2i</li>
<li>EOS Rebel T3</li>
<li>EOS Rebel T3i</li>
<li>EOS Rebel TI</li>
<li>EOS Rebel X</li>
<li>EOS Rebel XS</li>
<li>EOS 350D</li>
</ul>
Many other Canon and Nikon cameras have a similar mechanism, but different connectors. The same circuit will work as long as you can get the right connector cable. If you are not sure, ask a camera mechanic or search the internet for pin connections before you tinker with your camera! I'll post updates for other cameras if and when I get the opportunity to try one out.<br/>
<br/>
<b>Afraid to get your hands dirty?</b><br/>
I'll be glad to help you get started if you are a photographer who would like to tinker a bit. I can also make one for you if you'd rather not get your hands dirty. Just leave a comment with your request and I'll get back to you.<br/>
Tanmayhttp://www.blogger.com/profile/05342457728508357508noreply@blogger.com2tag:blogger.com,1999:blog-3240993750799145183.post-55051560999454387912011-08-29T11:20:00.001+05:302011-10-03T19:04:14.180+05:30Arduino Garden Sprinkler - Sketch & CircuitsI had mentioned about my Arduino project for making an automated sprinkler system for my garden in an <a href="http://sidekick.windforwings.com/2011/07/my-first-useful-arduino-project-garden.html">earlier post (here)</a>.<br />
<br />
I think I need some more time to test out the stuff, since it has been raining here mostly these days. I got some suggestions for using a latching solenoid and solar panels, and will try them out in my next iteration. But I decided to put it out as it is anyway, hoping that maybe one of my readers can help by trying this out at a drier place. Do feel free to ping me for any information.<br />
<br />
I used Fritzing to do some rough layouts and get an idea of the PCB design. I haven't gotten around to etching a PCB yet; my focus was to make modular designs and use perforated boards until I have a stable design.<br />
<br />
<a href="http://fritzing.org/projects/automatic-garden-sprinkler/"><b>Here are the Fritzing files.</b></a><br />
<br />
<b>And below is the Arduino Sketch</b> for the garden sprinkler.<br /><br/>
<b>Updates:</b>
<ul>
<li>12th Sep 2011: Fixed a bug in flushing code. Changed time storage to long. I'm facing a strange problem of the manual start/stop switch being unresponsive. I think it's the hardware, loose connections on the hobby board.</li>
<li>03rd Oct 2011: Added a LCD screen. On screen readings and status messages. No more blinking LEDs.</li>
</ul>
<br />
<script src="https://gist.github.com/1177762.js?file=sketch_sprinkler.pde"></script><br />
<br />
Tanmayhttp://www.blogger.com/profile/05342457728508357508noreply@blogger.com0tag:blogger.com,1999:blog-3240993750799145183.post-10097883467710612282011-07-28T00:14:00.002+05:302011-10-30T14:48:24.273+05:30My first 'useful' Arduino project - Garden SprinklerMy goal was to automate watering of my garden plants. It's not that I do not 'enjoy' watering them, but it is more for the sake of the poor plants who sometimes suffer when I 'forget' to water them or have to go out of station for more than few days. There are commercial solutions available, but mine is cheaper and proudly 'home made'. It also gives me the liberty of changing what I want with it.<br />
<br />
Currently this is supposed to:<br />
- detect soil humidity<br />
- automatically turn on sprinkler when soil is dry<br />
- keep sprinkler system clean from algae by periodic flushing<br />
- learn and auto modes: learn mode calibrates the system for auto modes.<br />
<br />
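The on/off behavior in the list above boils down to a threshold comparison with hysteresis, so the valve does not chatter around a single value. A minimal sketch in plain C++ (threshold numbers are illustrative assumptions, not calibrated values; the reading is assumed to decrease as the soil dries):

```cpp
#include <cassert>

// Sketch of the sprinkler's core decision logic. Two thresholds give
// hysteresis: open the valve when the soil reads dry, close it only once
// the soil reads wet again. Thresholds are illustrative assumptions; the
// real unit would calibrate them in learn mode. Readings are assumed to
// DECREASE as the soil dries.
struct SprinklerLogic {
    int dryBelow = 300;    // reading below this => soil dry, open the valve
    int wetAbove = 600;    // reading above this => soil wet, close the valve
    bool valveOpen = false;

    // Feed one sensor sample; returns whether the valve should be open now.
    bool update(int reading) {
        if (!valveOpen && reading < dryBelow)     valveOpen = true;
        else if (valveOpen && reading > wetAbove) valveOpen = false;
        return valveOpen;
    }
};
```

Readings between the two thresholds keep whatever state the valve is already in, which is what prevents rapid on/off cycling as the soil dries or wets gradually.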
Below are some snaps of the stuff I used, to give you an idea. These are for my needs and may not suit all scenarios. I have a small kitchen garden of around 300 sq ft, mostly multiple beds of shrubs. <br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzNn011e5fP1MJfMToyzJtMEOvza-9g_kWr3pzLDmiTVgRRMF9M5cWclf2WGaF2orDDHIOr0EUN0iyK_dVh-OZVPefeliCtOE5FEUgMmlF_VLi-wq2CwH3BS6CfBitFOGGdErriFTvw-Vs/s1600/sprinkler_hardware3.jpg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="150" width="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzNn011e5fP1MJfMToyzJtMEOvza-9g_kWr3pzLDmiTVgRRMF9M5cWclf2WGaF2orDDHIOr0EUN0iyK_dVh-OZVPefeliCtOE5FEUgMmlF_VLi-wq2CwH3BS6CfBitFOGGdErriFTvw-Vs/s200/sprinkler_hardware3.jpg" /></a> <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfTfKSXmxxeqtKEDcdMKoo5ys9tAqhKCYdugcMZb8dbKgnNCQY5q3s46zBhJIFoevqRCHPHLja-fK7a2_UFV3E47bhBtU92q52jSgAcsJqfD2lHEH4ny6bw6SH5LwXJ34sX7zN4sB-8xZx/s1600/sprinkler_hardware2.jpg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="150" width="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfTfKSXmxxeqtKEDcdMKoo5ys9tAqhKCYdugcMZb8dbKgnNCQY5q3s46zBhJIFoevqRCHPHLja-fK7a2_UFV3E47bhBtU92q52jSgAcsJqfD2lHEH4ny6bw6SH5LwXJ34sX7zN4sB-8xZx/s200/sprinkler_hardware2.jpg" /></a></div><br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzpRPKcLSsGuX7hLcGgZEAyVa692h2imWJyNbbOD7Y5Vk3H5P4Qe8JD9qf7k73Ftlt6vywwq9oStiwBDCAio891roBdy-KmGP25O1tmoqRKI15mwcq8nyTMx_tJo55LEqOzxDC_Hu2YkTf/s1600/sprinker_hardware1.jpg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="200" width="149" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzpRPKcLSsGuX7hLcGgZEAyVa692h2imWJyNbbOD7Y5Vk3H5P4Qe8JD9qf7k73Ftlt6vywwq9oStiwBDCAio891roBdy-KmGP25O1tmoqRKI15mwcq8nyTMx_tJo55LEqOzxDC_Hu2YkTf/s200/sprinker_hardware1.jpg" /></a> <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjI3sDVg53k755eOmZLucBejmZOfPcRZO__HjGzTDFxCTMzPUaEozAxWktg4-p0iGo1pjNKsjhqdy6hbwezswJ8Ue7tAQe7kNNZToLTlwGLwCv8sfILogRchHN8weNvQ-bLweznmh8Op3YJ/s1600/sprinkler_circuit.jpg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="200" width="150" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjI3sDVg53k755eOmZLucBejmZOfPcRZO__HjGzTDFxCTMzPUaEozAxWktg4-p0iGo1pjNKsjhqdy6hbwezswJ8Ue7tAQe7kNNZToLTlwGLwCv8sfILogRchHN8weNvQ-bLweznmh8Op3YJ/s200/sprinkler_circuit.jpg" /></a></div><br />
<b>Humidity sensor:</b> made using Plaster of Paris (PoP) and a couple of long nails. Humidity is measured as the resistance through the PoP, which gives a more uniform medium than plain soil. PoP absorbs and loses water quickly. Linearity of resistance, accuracy of values and degradation of the sensor do not affect the system much, because it works only on threshold values and can be calibrated easily.<br />
<br />
<b>Valve:</b> a 24V DC solenoid-operated one, as I did not want any mains voltage where there's water, and wanted a simpler control circuit. For mains-operated valves, one would probably use a relay to switch them and keep the valve in a dry place.<br />
<br />
<b>Electronics:</b> Atmega328 with external clock, TIP 120 to switch the valve, green and red LED to convey status, and 5 - 22V dual power supply using 78xx for regulation.<br />
<br />
Apart from this I needed small bits and pieces of stuff like wires, screws, PVC container, gum and sealants.<br />
<br />
I've used materials available locally, and since it was an experiment, things are loosely coupled, over-designed and too custom for my purpose. Though the logic was tested on a breadboard, I'm yet to see it work in practice, because it has been raining ever since I set it up fully. I'll post more once things settle or if I make any changes. If the past is any indication, I'll surely have some more problems to address before everything works. I'll post the hardware list, code and more details after a few days. Practical hardware is the most difficult thing to get right. <br />
<br />
The circuit as it is today can be made compact to fit a 6x6 box along with the power supply and can use Atmega8. Or it can also be enhanced to support multiple zones for a larger setup using Atmega328.<br />
<br />
Would also be happy to learn from you if you have tried something similar. Would be glad to provide any details if you are interested in doing it for yourself. If you are at the same place as me (either Bangalore or Bhubaneswar), and would rather prefer one unit built and deployed for you, I'd be happy to do it.<br />
<br />
<b>Updates:</b>
<ul>
<li>29 Aug 2011: <a href="http://sidekick.windforwings.com/2011/08/arduino-garden-sprinkler-sketch.html">Here is the circuit and the Arduino sketch.</a></li>
<li>12 Sep 2011: Sketch updated. A leakage sprung again at one of the joints in the PVC pipe; arrested through m-seal. I think I should get something to program and debug the Arduino on my minimal board.</li>
<li>03rd Oct 2011: New LCD screen, more information on display and no more blinking LEDs. Added another socket to the Atmega to prevent pin damage from repeated removal. Will continue programming using the Uno board.</li>
<li>30 Oct 2011: New post detailing <a href="http://sidekick.windforwings.com/2011/10/arduino-garden-sprinkler-making-sensor.html">how to make a sensor</a>.</li>
</ul>Tanmayhttp://www.blogger.com/profile/05342457728508357508noreply@blogger.com4tag:blogger.com,1999:blog-3240993750799145183.post-90364952833338068422011-04-08T01:35:00.003+05:302011-04-09T03:29:23.475+05:30Hello Titanium (Appcelerator)<div dir="ltr" style="text-align: left;" trbidi="on">Here's my first Android application written using Appcelerator's Titanium. A newbie to Android itself, I was lured to Titanium because of its promise of the language (Javascript - which I thought will help speed up development) and also cross-compilation to native code for both Android and iOS.<br />
<br />
Well, my experiences were not smooth. There's a lot of promise, but it is still raw. Here are the pain points:<br />
<ul><li>Very difficult to debug. All I had was debug prints and exception popups.</li>
<li>Still buggy. Many unexplainable behaviors.</li>
<li>Apart from simple features, most of the stuff is not really portable across platforms.</li>
</ul><br />
Titanium today is, however, good for simple applications. Titanium applications come out much more responsive than those created with other cross-platform tools, which typically use the webview (browser) for HTML-based UI. The community around Titanium is also very active. They got some funding recently and are bringing out an IDE that will ease development and debugging a bit (the beta is already out). Titanium is open source, but I don't think the IDE is. It feels a lot like Flex development - being tied to Adobe's tools was not very comforting.<br />
<br />
Anyway, <a href='http://pastie.org/1769147' target='_blank'>here's the code</a> for my first app. <br />
<br />
It is an Android app to maintain structured lists of things. It lets the user define and create a list and add/delete items to/from it. Laziness has prevented me from publishing the app to Android app-store. I'll do it one of these days. <br />
<br />
The attached code is the app.js file. To use it, create a project in Titanium first and then overwrite the generated app.js with the one attached here. The resources (icons/images) are not attached either; please supply your own to build the app.<br />
<br />
I'm gonna check out a few other similar platforms.<br />
<br />
</div>Tanmayhttp://www.blogger.com/profile/05342457728508357508noreply@blogger.com0tag:blogger.com,1999:blog-3240993750799145183.post-17512654600837381672010-10-30T18:07:00.000+05:302010-10-30T18:07:21.355+05:30Why Was Twitter Successful?Why was Twitter successful? Why didn't other similar services match up? Why couldn't anyone else take the idea and do it better? Always wondered, but a discussion at work yesterday prompted some reading and this post.<br />
<br />
A video by Jack Dorsey with a few takeaways from his experience creating Twitter.<br />
<br />
<iframe frameborder="0" height="375" src="http://player.vimeo.com/video/11712774" width="500"></iframe><br />
<br />
<b>But wait... the best one is here</b> - <a href="http://news.ycombinator.com/item?id=1584260">a nice long discussion on this topic</a>. Myriad facts and opinions from many different people. Found this informative and thought provoking.<br />
<br />
From whatever I gathered, the most important factors I thought were:<br />
<br />
<ul><li>Came at the right time, when there were not many alternatives.</li>
<li>Was of the right simplicity/complexity.</li>
<li>Appealed to contributors: posting was easy. Brevity lets even trivial and incomplete thoughts be shared without guilt or responsibility. Contributors pulled in many followers.</li>
<li>Was a cheap advertising medium, which many corporations embraced. They publicized and pulled in followers.</li>
<li>Creators had the right influence to give it the initial publicity push.</li>
<li>Focussed attempt to solve a definite narrow problem, and only that problem, well.</li>
</ul>Tanmayhttp://www.blogger.com/profile/05342457728508357508noreply@blogger.com0tag:blogger.com,1999:blog-3240993750799145183.post-33150145575660880382010-07-22T14:46:00.004+05:302010-07-22T14:55:31.478+05:30Translation in Google Chrome<div><br />What a pleasant surprise Google Chrome gave! I was struggling to make sense of this page:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidYtYpJab-eU5HREZJODQtrDrDkSWNhV5SkQRfPxQbox1dEhaH18YkUxGySy-uVq_yE8zYkpGM0jTt8rhHlzak-tflbLneowu6gm1kMhLJ2vyRlE63djXqm2Bn6qvy_Tja19pEPwzGTeXU/s1600/Screen+shot+2010-07-22+at+2.37.52+PM.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 329px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidYtYpJab-eU5HREZJODQtrDrDkSWNhV5SkQRfPxQbox1dEhaH18YkUxGySy-uVq_yE8zYkpGM0jTt8rhHlzak-tflbLneowu6gm1kMhLJ2vyRlE63djXqm2Bn6qvy_Tja19pEPwzGTeXU/s400/Screen+shot+2010-07-22+at+2.37.52+PM.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5496657698392925554" /></a><br /><br />Notice the toolbar Chrome popped up at the top? Turkish, of all the languages! 
And it neatly translated the page to:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqD9Yk4qbLG62bFXj5TtCE8pjMsHliN0sEPrIWU0wL6e8UUQ2IteASpusZZm9wujMcFfRhtawapm6eGL7UGaxfLu7j3aMV0Rt5cu56uqqWCsLGuDKBzTf59gFAM-MgCyUUTgymDb3SeuxE/s1600/Screen+shot+2010-07-22+at+2.39.13+PM.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 325px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqD9Yk4qbLG62bFXj5TtCE8pjMsHliN0sEPrIWU0wL6e8UUQ2IteASpusZZm9wujMcFfRhtawapm6eGL7UGaxfLu7j3aMV0Rt5cu56uqqWCsLGuDKBzTf59gFAM-MgCyUUTgymDb3SeuxE/s400/Screen+shot+2010-07-22+at+2.39.13+PM.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5496658548146001490" /></a><br /><br />This is what I call seamless integration. I don't remember having installed any extensions. I guess Google Translation is now built into Chrome.<br /></div><div><br /></div>Tanmayhttp://www.blogger.com/profile/05342457728508357508noreply@blogger.com0tag:blogger.com,1999:blog-3240993750799145183.post-81028709498424978082010-01-10T03:07:00.001+05:302010-01-10T03:14:44.151+05:30Facebook Javascript SamplesI was hooking together a Facebook application and had created this sample application to try out different Facebook APIs. I think my initial struggles with FB API were because of FB documentation not being very structured and FB having too many (sometimes overlapping and confusing) integration points. The latter is however being sorted out now by cleaning up and merging many of the integration points.<br /><br />I also had struggled for quite some time with the cross domain scripting elements of FB, rather the location of xd_receiver.htm to be specific. Technically, it should work as long as I place xd_receiver.htm anywhere in my domain. It need not be exactly at the same location as my canvas or connect page. 
However, FB APIs do not seem to be uniform in their treatment of the location of xd_receiver.htm. In particular, the FB Connect APIs require that xd_receiver.htm must be at the same location as the connect URL.<br /><br />With the thought that it might be useful for others, I've shared it as an FB app called "<a href="http://www.facebook.com/apps/application.php?id=246237267560">Javascript API Samples</a>" in the "Utilities" section of listed applications. It is an IFrame Canvas application with all static pages. The static pages are hosted on Google App Engine.Tanmayhttp://www.blogger.com/profile/05342457728508357508noreply@blogger.com2tag:blogger.com,1999:blog-3240993750799145183.post-20338420162245705172010-01-09T12:19:00.000+05:302010-01-09T12:25:55.113+05:30Getting Erlang ODBC to work on Mac Snow LeopardI started using <a href="http://en.wikipedia.org/wiki/Erlang_%28programming_language%29">Erlang</a> a month back and had a difficult time getting it (the ODBC part, really) to work on Mac. You can read about the <a href="http://demo.erlang.org/documentation/doc-5.3/pdf/odbc-1.0.8.pdf">Erlang ODBC implementation architecture here</a>. I'm still a novice in the Erlang world, but I'll try to put forth what I have done.<br /><br />The Erlang release I will be referring to in this post is R13B02. I can summarize the problems I faced with Erlang ODBC into the following few:<br /><ul><li>The Mac ODBC implementation with iodbc seems to have linking problems with 32 bit compilation and dynamic linkage.</li><li>Erlang by default compiles in 32 bit mode on Mac.</li><li>Erlang odbcserver by default compiles with dynamic linkage to the odbc library.</li><li>The Erlang odbcserver implementation seems to have issues handling cases where a query finds no records.</li></ul>And below are the summarized steps that solve the above problems and get you a working build:<br /><ol><li>Download otp_src_R13B02-1.tar.gz from erlang.org. 
</li><li>Set LDFLAGS to include Mac CoreFoundation. I'm not sure if this is absolutely essential, but there is no harm in including it.<br />export LDFLAGS="-framework CoreFoundation"</li><li>Configure the build for 64 bit. This enables iodbc but seems to disable wxWidgets. For now, iodbc is more important.<br />configure --enable-darwin-64bit</li><li>Configure will create the build files, but don't start building just yet.</li><li>Edit the makefile of odbcserver (lib/odbc/c_src/i386-apple-darwin10.2.0/Makefile) manually to get it statically linked to the iodbc libraries. The default dynamic linkage does not work on Snow Leopard.<ul><li>Use iodbc-config to get the flags to add:</li><li>iodbc-config --static-libs</li><li>Replace LDFLAGS in lib/odbc/c_src/i386-apple-darwin10.2.0/Makefile with what you got from above.</li></ul></li><li>There is probably a bug in odbcserver.c that throws an error when a SQL statement results in no records being found - for example, an update that modified 0 records, or a select that fetched 0 records. Download the attached odbcserver.patch (below) and apply it to odbcserver.c.</li><li>Run make, then make install. By default it installs into /usr/local.</li><li>Download otp_doc_man_R13B02-1.tar.gz from erlang.org. Go to /usr/local/lib/erlang. Extract the tar file here.</li><li>Run erl -man erl to test the man pages.</li><li>Run erl to get the console. 
Test odbc by doing at least odbc:start() and odbc:connect() and executing a select.</li></ol><br /><br /><b>odbcserver.patch</b><br /><pre><br />--- odbcserver.c 2009-11-20 19:06:30.000000000 +0530<br />+++ odbcserver.patched.c 2010-01-07 16:20:38.000000000 +0530<br />@@ -151,7 +151,7 @@<br />static db_result_msg encode_empty_message(void);<br />static db_result_msg encode_error_message(char *reason);<br />static db_result_msg encode_atom_message(char *atom);<br />-static db_result_msg encode_result(db_state *state);<br />+static db_result_msg encode_result(db_state *state, SQLRETURN sql_result);<br />static db_result_msg encode_result_set(SQLSMALLINT num_of_columns,<br /> db_state *state);<br />static db_result_msg encode_out_params(db_state *state,<br />@@ -585,12 +585,12 @@<br /><br /> /* OTP-5759, fails when 0 rows deleted */<br /> if (result == SQL_NO_DATA_FOUND) {<br />- msg = encode_result(state);<br />+ msg = encode_result(state, result);<br /> } else {<br /> /* Handle multiple result sets */<br /> do {<br /> ei_x_encode_list_header(&dynamic_buffer(state), 1);<br />- msg = encode_result(state);<br />+ msg = encode_result(state, result);<br /> /* We don't want to continue if an error occured */<br /> if (msg.length != 0) {<br /> break;<br />@@ -749,11 +749,12 @@<br /> byte *sql;<br /> db_result_msg msg;<br /> int i, num_param_values, ver = 0,<br />- erl_type = 0, index = 0, size = 0, cols = 0;<br />+ erl_type = 0, index = 0, size = 0, cols = 0;<br /> long long_num_param_values;<br /> param_status param_status;<br /> diagnos diagnos;<br />- param_array *params;<br />+ param_array *params;<br />+ SQLRETURN result;<br /><br /> if (associated_result_set(state)) {<br /> clean_state(state);<br />@@ -784,10 +785,16 @@<br /> num_param_values, state);<br /><br /> if(params != NULL) {<br />- if(!sql_success(SQLExecDirect(statement_handle(state),<br />- sql, SQL_NTS))) {<br />- diagnos = get_diagnos(SQL_HANDLE_STMT, statement_handle(state));<br />- msg = 
encode_error_message(diagnos.error_msg);<br />+<br />+ result = SQLExecDirect(statement_handle(state), sql, SQL_NTS);<br />+ if (!sql_success(result) || result == SQL_NO_DATA) {<br />+ diagnos = get_diagnos(SQL_HANDLE_STMT, statement_handle(state));<br />+ }<br />+ /* SQL_NO_DATA and SQLSTATE 00000 indicate success for<br />+ updates/deletes that affect no rows */<br />+ if(!sql_success(result) &&<br />+ !(result == SQL_NO_DATA && !strcmp((char *)diagnos.sqlState, INFO))) {<br />+ msg = encode_error_message(diagnos.error_msg);<br /> } else {<br />...<br /></pre>Tanmayhttp://www.blogger.com/profile/05342457728508357508noreply@blogger.com0tag:blogger.com,1999:blog-3240993750799145183.post-50424837379363201522009-06-09T12:29:00.003+05:302009-06-09T12:56:28.101+05:30AspectJ Through Bytecode - Examining The Woven ClassIn the <a href="http://windforwings.blogspot.com/2009/06/aspectj-through-bytecode-anatomy-of.html">previous post</a> we had a look at the Aspect class. Now let's go through the woven class itself. So we do a javap of the woven Test class and have a look at the output.<br /><br /><span style="font-weight: bold;">The static initializer</span><br />In our aspects we refer to the join point object in two places - in ExceptionTracer.aj and in FactoryIntercept.aj. 
Corresponding to these references, there are two private static fields of type org.aspectj.lang.JoinPoint$StaticPart injected into the class.<br /><pre>private static final org.aspectj.lang.JoinPoint$StaticPart ajc$tjp_0;<br />private static final org.aspectj.lang.JoinPoint$StaticPart ajc$tjp_1;<br /></pre>These fields are initialized in the static initializer of the class, which calls helper methods in AspectJ to construct the JoinPoint$StaticPart objects.<br /><br /><span style="font-weight: bold;">Code injection</span><br /><ul><li>Public methods and variable names are retained after being injected, so they can be accessed later using reflection.</li><li>The injected methods are just wrappers that call the actual body - a static method in the aspect class.<br /><pre>public int getCalls();<br />Code:<br />0: aload_0<br />1: invokestatic #110; //Method ajtest/aspects/AroundAndInject.ajc$interMethod$ajtest_aspects_AroundAndInject$ajtest_java_Test$getCalls:(Lajtest/java/Test;)I<br />4: ireturn<br /><br />public void incCalls();<br />Code:<br />0: aload_0<br />1: invokestatic #102; //Method ajtest/aspects/AroundAndInject.ajc$interMethod$ajtest_aspects_AroundAndInject$ajtest_java_Test$incCalls:(Lajtest/java/Test;)V<br />4: return</pre></li><li>Private injected variables are declared as public, but with an obfuscated name, so they cannot be accessed with their original names through reflection. Why public? Because we have an aspect on the field access join point and the aspect needs to access this field from within the aspect code!<br /><pre>public int ajc$interField$ajtest_aspects_AroundAndInject$nCalls;</pre></li></ul><span style="font-weight: bold;">Advices 'around' a field access</span><br />We test the FieldAccess aspect around the join points involving get of fld1 in our testFieldAccessAspect method. 
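For reference, the method under test is roughly of this shape - my reconstruction from the description that follows, not the author's actual source (the initial value of fld1 is made up for illustration):

```java
// My reconstruction of the test method, not the author's actual source;
// the initial value of fld1 is made up for illustration.
class Test {
    static int fld1 = 10;

    static void testFieldAccessAspect() {
        System.out.println(fld1); // field get join point #1
        fld1 = fld1 + 1;          // field get join point #2 (and a set join point)
        System.out.println(fld1); // field get join point #3
    }
}

class Main {
    public static void main(String[] args) {
        Test.testFieldAccessAspect();
    }
}
```

Each of the three reads of fld1 is a separate get join point, which is why the weaver touches the method in three places.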
Look at the source code and you can see that we read fld1 thrice in the testFieldAccessAspect method - once to print it, then to increment it by 1 and then again to print it. Now take a look at the modified bytecode of the woven testFieldAccessAspect method in the javap output.<br /><ul><li>At the first instance where we read the field, instead of directly fetching the field, now there is a call to the around advice, a method <span style="font-family:courier new;">fld1_aroundBody1$advice</span>, to get the value.<pre>invokestatic #144; //Method fld1_aroundBody1$advice:(Lajtest/aspects/FieldAccess;Lorg/aspectj/runtime/internal/AroundClosure;)I</pre></li><li>The advice method, in turn, invokes another method "<span style="font-family:courier new;">private static final int fld1_aroundBody0()</span>" when it needs to access the field value. This method accesses the field directly through a getstatic instruction.</li><br /><li>At the second and third instances where we read the field again, the same happens, but to a different set of methods.<br /><pre>invokestatic #150; //Method fld1_aroundBody3$advice:(Lajtest/aspects/FieldAccess;Lorg/aspectj/runtime/internal/AroundClosure;)I<br />invokestatic #152; //Method fld1_aroundBody2:()I<br /><br />and<br /><br />invokestatic #156; //Method fld1_aroundBody5$advice:(Lajtest/aspects/FieldAccess;Lorg/aspectj/runtime/internal/AroundClosure;)I<br />invokestatic #158; //Method fld1_aroundBody4:()I<br /></pre></li><li>The three sets of methods are identical. And they are copies from the FieldAccess aspect class bytecode method <span style="font-family:courier new;">ajc$around$ajtest_aspects_FieldAccess$1$32f71218</span>. 
So the weaver picks up the bytecode from the aspect class method and injects new advice methods into the woven class.</li><br /><li>The "<span style="font-family:courier new;">ajc$around$ajtest_aspects_FieldAccess$1$32f71218proceed</span>" method in the FieldAccess aspect class is however ignored in this case. It would have been used to chain aspects if I had multiple aspects on the same join point.</li><br /><li>The reason behind multiple identical methods generated for the advice however beats me. If you have any explanations/suggestions, I'll be glad to hear them.</li></ul><span style="font-style: italic;">Therefore, having an aspect around a field access may look innocent, but may be an excessive overhead in terms of code generation and execution. If you can, consider a different design like having an accessor method and having an advice around the execution of the accessor method.</span><br /><br /><span style="font-weight: bold;">Advices 'around' a method call</span><br />We test an aspect around a method call with the call to the "doSyso" method in main. The story here is very similar to the behavior above.<br /><ul><li>There are two methods injected into the Test class for each instance of the call to doSyso. 
The methods injected for the first instance of the call are:<br /><pre>private static final void doSyso_aroundBody7$advice(ajtest.java.Test, java.lang.String, ajtest.aspects.AroundAndInject, ajtest.java.Test, java.lang.String, org.aspectj.runtime.internal.AroundClosure);<br /><br />private static final void doSyso_aroundBody6(ajtest.java.Test, java.lang.String);<br /></pre></li><li>The method call at the join point is replaced with a call to the advice <span style="font-family:courier new;">doSyso_aroundBody7$advice</span>.</li><br /><li>The weaver copies code from<br /><pre>public void ajc$around$ajtest_aspects_AroundAndInject$1$38b5b4f8(ajtest.java.Test, java.lang.String, org.aspectj.runtime.internal.AroundClosure);<br /></pre>in the AroundAndInject aspect class into<br /><pre>private static final void doSyso_aroundBody7$advice(ajtest.java.Test, java.lang.String, ajtest.aspects.AroundAndInject, ajtest.java.Test, java.lang.String, org.aspectj.runtime.internal.AroundClosure);<br /></pre>in the Test class.<br /></li><br /><li>The advice method in turn calls the second generated method <span style="font-family:courier new;">doSyso_aroundBody6</span> to actually call the method.<br /><pre>invokespecial #21; //Method doSyso:(Ljava/lang/String;)V<br /></pre></li><li>Again, the reason behind multiple identical methods generated for the advice beats me.<br /></li></ul><span style="font-weight: bold;"><br />Advices 'around' a method execution</span><br />We test an aspect around a method execution with the "doSysoExec" method called from main. 
This is also similar to the behavior above, except:<br /><ul><li>The call to the doSysoExec method is retained as it is.</li><li>The body of the doSysoExec method is replaced with a call to an injected advice method:<br /><pre>invokestatic #306; //Method doSysoExec_aroundBody13$advice:(Lajtest/java/Test;Ljava/lang/String;Lajtest/aspects/AroundAndInject;Lajtest/java/Test;Ljava/lang/String;Lorg/aspectj/runtime/internal/AroundClosure;)V<br /></pre></li><li>The injected advice method in turn calls another advice method<br /><pre>invokestatic #308; //Method doSysoExec_aroundBody12:(Lajtest/java/Test;Ljava/lang/String;)V<br /></pre>which actually contains what the original doSysoExec method had:<br /><pre>private static final void doSysoExec_aroundBody12(ajtest.java.Test, java.lang.String);<br />Code:<br />0: getstatic #3; //Field java/lang/System.out:Ljava/io/PrintStream;<br />3: aload_1<br />4: invokevirtual #10; //Method java/io/PrintStream.println:(Ljava/lang/String;)V<br />7: return<br /></pre></li><li>Multiple advice methods are not injected for multiple calls to the method - obviously, since we are not interested in the 'calls' (which are at multiple places), but in the execution (which is in one method).</li></ul><span style="font-style: italic;">Therefore, if your advice can do what it needs to do equally well around both the method call and method execution, prefer the execution join point as it is much more efficient in terms of code generation and execution.</span><br /><br /><span style="font-weight: bold;">Advices 'before' and 'after'</span><br />The testBeforeAfterAspect method in the Test class tests the before and after aspects.<br /><ul><li>The advice code is a method in the aspect class.</li><li>Calls are made to the advice methods before and after the join point.<br /><pre>invokevirtual #176; //Method ajtest/aspects/BeforeAfterIntercept.ajc$before$ajtest_aspects_BeforeAfterIntercept$1$218e91dd:()V<br /><br />and<br /><br />invokevirtual #179; //Method ajtest/aspects/BeforeAfterIntercept.ajc$after$ajtest_aspects_BeforeAfterIntercept$2$218e91dd:()V<br /></pre></li><li>Since our aspect was for an "after" join point, it implicitly meant both "after returning" and "after throwing". And the weaver injected an exception handler to do the job.<br /></li></ul><span style="font-style: italic;">Therefore, if your advice can do equally well before and after the join point, consider 'before' to avoid any unnecessary exception handling.</span><br /><br />We also used a before advice for the exception handler advice and the code generation is similar.<br /><br />We got some insights into what really happens when AspectJ weaves our aspects into the code. Hopefully, it will help us design our aspects better. In the next post we'll see what happens under the hood when we use different aspect instantiation models.Tanmayhttp://www.blogger.com/profile/05342457728508357508noreply@blogger.com0tag:blogger.com,1999:blog-3240993750799145183.post-20335428310592529442009-06-07T15:12:00.006+05:302009-06-09T12:57:53.061+05:30AspectJ Through Bytecode - Anatomy of an Aspect ClassWe have been using <a href="http://www.eclipse.org/aspectj/index.php">AspectJ</a> in our product for some time now. I thought it would be interesting to examine what the AspectJ compiler and weaver actually do at the bytecode level. I made a few simple test classes and a few aspects to test out different types of pointcuts and join points, particularly:<br /><ul><li>Field access</li><li>Exceptions</li><li>Code injection</li><li>Before, After and Around constructs</li><li>Intercepting and completely replacing method calls</li></ul>You can find the source code of the classes and aspects <a href="http://wind4wings.googlepages.com/aspectj_tests1.tar">here</a>.<br /><br />I compiled the test classes and the aspects into separate jar files and used the compile-time weaver to create a woven jar file separately. 
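For anyone who wants to follow along, the build might look roughly like this with the ajc command-line compiler - the directory layout and jar names here are my own guesses, not from the post:

```sh
# compile the plain test classes alone (no aspects involved yet)
ajc -d classes src/ajtest/java/*.java
jar cf test.jar -C classes .

# compile the aspects against the test classes into their own jar
ajc -classpath test.jar -outjar aspects.jar src/ajtest/aspects/*.aj

# compile-time weaving: weave the aspects into the already compiled jar
ajc -inpath test.jar -aspectpath aspects.jar -outjar woven.jar

# disassemble before and after for comparison
javap -c -classpath test.jar ajtest.java.Test
javap -c -classpath woven.jar ajtest.java.Test
```

Keeping the unwoven and woven jars side by side is what makes the javap diffs below possible.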
My intention was to examine the Java bytecode before and after being woven to get a better understanding of AspectJ code generation. Knowing what happens under the hood helps in creating better designs. Let me take you through what I went through. I have included the javap outputs and compiled classes along with the source code, but you may want to <a href="http://wind4wings.googlepages.com/aspectj_tests1.tar">download the source code</a> and compile them once yourself before we start.<br /><br /><span style="font-weight: bold;">Examining the Aspects Themselves</span><br />First, let's examine the aspect bytecode. We pick up one of the simplest aspects - the FieldAccess aspect - and disassemble its bytecode with javap. Here's what we see:<br /><br /><ul><li>It is a public class <span style="font-family:courier new;"><br />(public class ajtest.aspects.FieldAccess extends java.lang.Object)</span></li><li>There is a singleton instance of the aspect stored as <span style="font-family:courier new;">ajc$perSingletonInstance</span> and initialized in a static block. So only one instance of the aspect is created when the aspect class loads.<br /><br /><span style="font-style: italic;">This is an important point which novices tend to overlook. It implies that the aspects must be coded to be thread-safe. Otherwise, remember to modify the aspect declaration with a per... (perthis, pertarget, ...) 
modifier.<br /><br /></span></li><li>In case there is an exception during initialization of the aspect, there is a private static Throwable named <span style="font-family:courier new;">ajc$initFailureCause</span> declared in the class which is initialized in the static block of the class with the exception.</li><li>Since the aspect was used <span style="font-style: italic;">'around'</span> the pointcut, there is a method for around and a corresponding method for proceed which is called from within the around method.<br /><pre><br />public int ajc$around$ajtest_aspects_FieldAccess$1$32f71218(org.aspectj.runtime.internal.AroundClosure);<br />static int ajc$around$ajtest_aspects_FieldAccess$1$32f71218proceed(org.aspectj.runtime.internal.AroundClosure) throws java.lang.Throwable;<br /></pre><br /></li><li>The proceed method is static and does not simply access the field. Instead, it calls the run method of the AroundClosure object. That is to further chain any other aspects that may need to be run.<br /><pre><br />invokevirtual #67; //Method org/aspectj/runtime/internal/AroundClosure.run:([Ljava/lang/Object;)Ljava/lang/Object;<br /></pre><br /></li><li>Note the strange naming convention of the methods, ending with $1$32f71218. We will take it up later and cover another interesting fact of the AspectJ weaver.<br /></li><li>Then there are other generated methods like aspectOf and hasAspect.</li></ul>The other interesting aspect would be the one that does the code injection. So we disassemble the AroundAndInject aspect class using javap. 
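Judging by the generated method names in its bytecode, the aspect's inter-type declarations are roughly of this shape - my reconstruction, not the author's actual source:

```aspectj
// Reconstructed sketch from the generated method names, not the actual source:
public aspect AroundAndInject {
    private int Test.nCalls;                       // inter-type (injected) field

    public void Test.incCalls() { this.nCalls++; } // inter-type (injected) methods
    public int Test.getCalls() { return this.nCalls; }

    // plus an around advice, judging by the ajc$around$... method also present
}
```

Keep this sketch in mind while reading the generated artifacts below.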
Apart from the regular artifacts that we saw earlier, here are a few new ones in this class:<br /><br /><ul><li>For each injected field, the aspect has initializer, getter and setter methods:<br /><pre><br />public static void ajc$interFieldInit$ajtest_aspects_AroundAndInject$ajtest_java_Test$nCalls(ajtest.java.Test);<br />public static int ajc$interFieldGetDispatch$ajtest_aspects_AroundAndInject$ajtest_java_Test$nCalls(ajtest.java.Test);<br />public static void ajc$interFieldSetDispatch$ajtest_aspects_AroundAndInject$ajtest_java_Test$nCalls(ajtest.java.Test, int);<br /></pre><br /></li><li>For each injected method, the aspect has the code that goes into the method body. What is injected into the class are methods that in turn call these methods in the aspect.<br /><pre><br />public static void ajc$interMethod$ajtest_aspects_AroundAndInject$ajtest_java_Test$incCalls(ajtest.java.Test);<br />public static int ajc$interMethod$ajtest_aspects_AroundAndInject$ajtest_java_Test$getCalls(ajtest.java.Test);<br /></pre><br /></li><li>For each injected method, there are local dispatcher methods in the aspect that in turn call the method of the instrumented class.<br /><pre><br />public static int ajc$interMethodDispatch1$ajtest_aspects_AroundAndInject$ajtest_java_Test$getCalls(ajtest.java.Test);<br />public static void ajc$interMethodDispatch1$ajtest_aspects_AroundAndInject$ajtest_java_Test$incCalls(ajtest.java.Test);<br /></pre><br /></li><li>The aspect itself uses the local dispatch methods to access the injected methods or variables. So calling an injected method from within the aspect goes through the following path:<br />dispatcher method in aspect --> injected method in class --> method body in aspect.<br /></li><br /></ul>All this may look like a lot of overhead, but it is required to handle complex situations like multiple aspects overlapping at a join point and the same code being woven multiple times with different aspects. 
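The dispatch path above can be mimicked in plain Java - all names here are illustrative, not the actual ajc-generated ones:

```java
// Plain-Java mimic of the dispatch path for an injected method:
// dispatcher in aspect -> injected wrapper in class -> method body in aspect.
class Test {
    public int nCalls;                                            // stands in for the injected field
    public void incCalls() { Aspect.interMethod_incCalls(this); } // injected wrapper
}

class Aspect {
    // the real method body lives in the aspect class
    static void interMethod_incCalls(Test t) { t.nCalls++; }

    // the aspect's own code reaches the injected method via a local dispatcher
    static void dispatch_incCalls(Test t) { t.incCalls(); }
}

class Main {
    public static void main(String[] args) {
        Test t = new Test();
        Aspect.dispatch_incCalls(t); // dispatcher -> wrapper -> body
        System.out.println("nCalls=" + t.nCalls);
    }
}
```

The real generated code adds closure objects and join-point metadata on top of this skeleton, but the shape of the indirection is the same.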
So, if you are thinking of using aspects to just increment an integer in a class, think twice; there might be better ways of doing it. Use aspects for incorporating complex concerns; that is what they are meant for.<br /><br />In the <a href="http://windforwings.blogspot.com/2009/06/aspectj-through-bytecode-examining.html">next post</a> we'll go through a few woven classes and see what interesting things we can find there.Tanmayhttp://www.blogger.com/profile/05342457728508357508noreply@blogger.com0