When installing open source software, dependencies are usually downloaded over the internet by build automation scripts. With this in mind, installing software in a location with restricted internet access becomes challenging, since downloading dependencies is simply not possible.
I saw this first-hand while trying to install Bespin locally: under restricted access it was not possible, while with open internet access it took 3 minutes.
HTML5 is the new standard for web pages. One of the most exciting features it introduces is the canvas tag, a component that allows rendering 2D graphics directly in the browser, something that was not possible with older HTML versions.
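To give a feel for how simple this is, here is a minimal example, assuming the page contains a canvas element with id "c":

    // draw directly on the page, no plugin required
    var ctx = document.getElementById("c").getContext("2d");
    ctx.fillStyle = "steelblue";
    ctx.fillRect(10, 10, 100, 50); // a filled rectangle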
With that advancement in HTML, new JavaScript libraries have emerged for creating visualizations on the client side. One of these libraries is Processing.js, which is a port of the Java Processing library.
At first look, the Processing.js examples are very exciting. But the weak point of the framework shows when you try to develop an application that requires a lot of interaction. To reach such an application, the Processing.js code you write has to be highly optimized: no room for unnecessary code execution, avoid floats and doubles, cache as much as you can, and so on.
My other issue with Processing.js is that everything was ported from the Java library, even the syntax. This turns out to be very frustrating for web developers. The main reason is that most web developers are used to the syntax of JavaScript, so things like int x=0; and int [] arr = new int[4]; come as a big surprise. This could be good for the Java developer, but not for the web developer.
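To illustrate, here is roughly what a trivial sketch looks like in the Java-like syntax that Processing.js parses (a made-up example, not taken from any real application):

    int x = 0;          // typed variables, straight from Java

    void setup() {
      size(200, 200);   // set the canvas dimensions
    }

    void draw() {
      background(255);           // clear to white every frame
      ellipse(x, 100, 20, 20);   // a circle moving across the canvas
      x = (x + 1) % 200;
    }

Not a single line of that reads like the JavaScript a web developer writes every day.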
My other bone to pick with Processing.js is how primitive it is relative to the current state of the web. There are no components, and you cannot define events on specific components; everything has to be done from scratch, which further strengthens the point about needing highly optimized code.
Coming back to the title of my post, Processing.js tries to provide an alternative to Flash. On the web, the major disadvantage of Flash is its isolation from the rest of the webpage, making it an independent entity. Processing.js does not overcome this problem, as the visualization code for each canvas is a world of its own. It can only access normal JavaScript code in the same page, which is already possible in Flash through ActionScript 3's ExternalInterface. The problem even escalates with the lack of direct communication between different canvases using Processing.js code. Furthermore, Flash is currently cross-browser compatible, unlike the canvas tag, which is a recent addition to the HTML standard.
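For completeness, page JavaScript can reach into a sketch through Processing.js's getInstanceById hook; in the sketch below, the canvas id and the updateData function are hypothetical, the latter assumed to be defined inside the sketch itself:

    // bridge from page JavaScript into a sketch
    var sketch = Processing.getInstanceById("myCanvas");
    if (sketch) {
      sketch.updateData(42); // functions defined in the sketch become methods on the instance
    }

But this is still page-to-sketch only; two sketches cannot talk to each other directly in Processing.js code.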
Putting all the above factors in mind, I feel that using the much more mature Flash libraries, which are closer to web developers, is the best option currently on the market, until the canvas tag matures and gains richer libraries.
While starting up the German University in Cairo Open Source Community (g-osc) alongside the open source course, we were assigned to write Ubiquity commands for our community to use, making it easier for users to access different functionalities. I chose to build a command that would submit issues to the g-osc issue tracker.
As I had some experience building very simple commands for my own day-to-day internet surfing, I decided to focus on this command a bit more to see the full potential of Ubiquity, and to provide a means of submitting an issue in the most natural way. As I went through the details of the Ubiquity tutorial, I figured that a natural approach for defining how the command will be used is as follows:
gosc post issue {issue title} to {user issue will be assigned to} as {type of issue}
Such a formulation is similar to how it would be written in normal English.
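In Ubiquity's command API this maps to a direct object plus modifiers. Here is a minimal sketch of the skeleton; the command name and the two custom noun types are illustrative, not the exact g-osc code:

    CmdUtils.CreateCommand({
      name: "gosc-post-issue",
      takes: { "issue title": noun_arb_text },
      modifiers: { to: noun_type_gosc_user, as: noun_type_issue_type },
      preview: function(pblock, title, mods) {
        pblock.innerHTML = "Post \"" + title.text + "\" to " +
          mods.to.text + " as " + mods.as.text;
      },
      execute: function(title, mods) {
        // the actual submission goes here (see below)
      }
    });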
The first design question was how the issue submitter would be identified. This is automatically figured out by MediaWiki through the browser session: if a user is already logged in, MediaWiki knows this information on its own. With that issue out of the way, there was no need to write specific login functions.
The next step was to reverse engineer the issue posting form, so that AJAX could send the information as if a user were filling in the form manually. After obtaining the form information, the next obstacle was actually sending the data. This turned out to be a straightforward task, thanks to the inclusion of the jQuery JavaScript library within Ubiquity, which is a very strong feature given the strength of jQuery.
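The submission then boils down to a single POST; the URL and field names below are placeholders standing in for whatever the reverse-engineered form actually uses:

    // mimic the manual form submission; URL and field names are placeholders
    jQuery.post("http://example.org/wiki/index.php/Special:PostIssue", {
      issueTitle: title.text,
      assignee: mods.to.text,
      issueType: mods.as.text
    }, function() {
      displayMessage("Issue submitted."); // Ubiquity's built-in notification helper
    });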
With that out of the way, the command was able to submit issues to the tracker. The next step was to format the command naturally, and to provide auto-completion for the available users and types of issues. This was done using Ubiquity nouns, which tell Ubiquity that a certain argument has a specific format. The idea is the same as auto-completion, where the specific format is a specific set of words. For both the users and the issue types, this data has to be loaded dynamically. Again, jQuery was used to load the form page and parse it to obtain the entries inside the combo boxes.
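A noun type boils down to an object with a suggest function. Here is a rough sketch for the users noun; the name and fields are made up for illustration:

    var noun_type_gosc_user = {
      _name: "g-osc user",
      users: [],  // filled in dynamically from the form page
      suggest: function(text, html) {
        var suggestions = [];
        for (var i = 0; i < this.users.length; i++) {
          if (this.users[i].indexOf(text) == 0)  // simple prefix match
            suggestions.push(CmdUtils.makeSugg(this.users[i]));
        }
        return suggestions;
      }
    };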
A problem that comes up with auto-completion and dynamically loaded data is the need to make the AJAX call with every keystroke, which is very expensive. The first solution that came to my mind was to cache the data and refresh it every time the Ubiquity command line is loaded. Sadly, as I mentioned in a previous post, the Ubiquity documentation is not very accurate: this caching technique was not successful, as the function that was supposed to handle the loading was never called. The alternative solution was to cache the content only once.
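In practice that means fetching and parsing the form a single time when the command feed loads; again, the URL and the select element's name are placeholders:

    // fetch the form page once and cache the assignee list in the noun
    jQuery.get("http://example.org/wiki/index.php/Special:PostIssue", function(page) {
      jQuery(page).find("select[name='assignee'] option").each(function() {
        noun_type_gosc_user.users.push(jQuery(this).text());
      });
    });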
With the auto-completion working well and a preview of the possible users in place, the command was complete. The complete code for the issue submission command can be found here.