Monday, December 13, 2010

SoCPaR2010

SoCPaR (International Conference on Soft Computing and Pattern Recognition) is an annual conference that focuses on bringing Soft Computing and Pattern Recognition together ~ "Innovating and Inspiring Soft Computing and Intelligent Pattern Recognition". SoCPaR has now been conducted successfully for two consecutive years. SoCPaR 2009 was held in Malacca, Malaysia on December 4th - 7th, 2009, followed by SoCPaR2010 at Université de Cergy-Pontoise in Cergy-Pontoise / Paris, France on December 7th - 10th, 2010. Following these two successful years, SoCPaR2011 is scheduled to be held in Dalian, China on October 14th - 16th, 2011.


Presenting the paper on Association Rule Mining
It was a really pleasant experience for me to present our research paper, "Horizontal Format Data Mining with Extended Bitmaps" [1], at SoCPaR2010. Our paper is listed as Paper 113 in the proceedings. I presented it on Dec 8th, 2.30 - 3.00 p.m. in the E1 auditorium of the university, where the conference ran 3 parallel sessions at E1, E2, and Colloque. It should be noted that the paper came from the same team of undergraduates from the University of Moratuwa who published a paper on their product, "Mooshabaya - A Mashup Generator for XBaya" [2]. Our paper received positive and constructive feedback, which essentially gives us more ideas for taking the algorithm forward. We have benchmarked our algorithm implementation with the FIMI datasets, and, as the session chair suggested, the door is also open to enter it in the Frequent Itemset Mining Implementations (FIMI) competition.

Paris (6th - 11th, December 2010)
Apart from the paper presentations and tech talks, the committee also organized social activities such as 'Paris by Night', 'Wine Tasting', a visit to the Château de Chantilly, and a banquet at the Abbey of Royaumont [3][4]. It was a nice learning experience, with the days filled with fun. It should also be noted that we had the opportunity to face the strongest snowfall Paris had experienced since 1986. After the conference, we were also able to enjoy two more days in Paris, and were lucky enough to visit the Louvre (the museum where the Mona Lisa and many other masterpieces live), Notre Dame Cathedral, Montmartre Hill with the big white Basilica of Sacré-Cœur on its crest, the Eiffel Tower, and a few other places of interest.

[1] Buddhika De Alwis, Supun Malinga, Kathiravelu Pradeeban, Denis Weerasiri, Shehan Perera. "Horizontal Format Data Mining with Extended Bitmaps," in Proceedings of the 2010 International Conference on Soft Computing and Pattern Recognition (SoCPaR2010), Cergy-Pontoise, Paris, France, pp. 220-223, Dec. 2010.

[2] Buddhika De Alwis, Supun Malinga, Kathiravelu Pradeeban, Denis Weerasiri, Srinath Perera, Vishaka Nanayakkara. "Mooshabaya: mashup generator for XBaya," in Proceedings of the 8th International Workshop on Middleware for Grids, Clouds and e-Science (MGC '10), Bangalore, India. ISBN: 978-1-4503-0453-5, doi: 10.1145/1890799.1890807.

[3] The abbey
[4] Photos of the Abbey

Wednesday, December 8, 2010

Horizontal Format Data Mining with Extended Bitmaps

Abstract
Analysing data warehouses to foresee transaction patterns often needs high computational power and memory space, due to the huge history of past data transactions. The Apriori algorithm is one of the most widely studied and implemented algorithms for mining data warehouses to find associations. Frequent itemset mining with the vertical data format has been proposed as an improvement over the basic Apriori algorithm.
In this paper we propose an algorithm as an alternative to the Apriori algorithm, which uses bitmap indices in conjunction with a horizontal format data set converted to a vertical format data structure, mining frequent itemsets by leveraging the efficiencies of bitmap-based operations and the vertical data orientation.

Categories and Subject Descriptors
[Data Mining] Association Rule, Apriori, Bitmap Indices.
[Data Analysis] Data warehousing, Data Analysis.
General Terms - Data Analysis and Mining
Keywords - Data mining, Association Rule, Apriori, Vertical format mining, Bitmap Indices



Here we are proposing an algorithm, "Horizontal Format Data Mining with Extended Bitmaps," for association rule mining. First we will have a look at association rule mining and the roots of our algorithm. What is association rule mining? It is the task of finding interesting relationships between variables. Association rule mining is often explained through market basket analysis, where customers' purchase details are analyzed to find interesting relationships between the items. Here we find the sets of items that appear together; these are called frequent itemsets, and mining them is an active area of research because it is computationally expensive.

The Apriori algorithm is a fundamental algorithm for association rule mining. It mines frequent patterns presented in the horizontal format, where the items are listed against their respective transactions. Apriori abides by the Apriori property: any subset of a frequent itemset is itself frequent. However, each pass of the Apriori algorithm has to scan the whole data set, so it is far from optimal, and many improvements to it have been suggested.
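To make the pass structure concrete, here is a minimal, illustrative Apriori sketch (not the paper's implementation, nor an optimized one); the transaction data and the min_support threshold in any usage are hypothetical:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Minimal Apriori: each pass k re-scans the whole data set to count
    candidate k-itemsets, then prunes via the Apriori property."""
    transactions = [frozenset(t) for t in transactions]
    current = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    k = 1
    while current:
        # full scan of the data set on every pass -- the costly step
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(survivors)
        # join surviving k-itemsets into (k+1)-candidates; keep a candidate
        # only if every k-subset is frequent (the Apriori property)
        current = set()
        for a, b in combinations(survivors, 2):
            cand = a | b
            if len(cand) == k + 1 and all(
                    frozenset(s) in survivors for s in combinations(cand, k)):
                current.add(cand)
        k += 1
    return frequent
```

On the classic nine-transaction textbook example with a support threshold of 2, this yields the familiar 13 frequent itemsets, including {I1, I2, I5} with support 2.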

Transaction data mostly occurs in the horizontal format; the vertical format is an alternative way of looking at it, where the transactions are listed against the respective items. Since data may not arrive in this format, we may need to reorganize it into the vertical format before mining it for associations. Many effective algorithms are built on top of vertical format data mining.
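As a small sketch of the reorganization (the TIDs and item names below are only illustrative): one pass converts the horizontal data into per-item TID sets, after which the support of any itemset is just the size of an intersection:

```python
def to_vertical(transactions):
    """One pass over horizontal (TID -> items) data produces the
    vertical view: item -> set of TIDs it occurs in."""
    vertical = {}
    for tid, items in transactions:
        for item in items:
            vertical.setdefault(item, set()).add(tid)
    return vertical

def support(vertical, itemset):
    """Support of an itemset = size of the intersection of its TID sets."""
    return len(set.intersection(*(vertical[i] for i in itemset)))
```

For example, for [("T100", {I1, I2, I5}), ("T200", {I2, I4})], the vertical view lists I2 against {T100, T200}, and the support of {I1, I2} is the size of {T100} ∩ {T100, T200}... computed without re-scanning the transactions.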

Now let's look at the next interesting piece of terminology in our algorithm - bitmaps. Bitmaps are used to store individual bits compactly: a sequence of 0s and 1s, where a 1 depicts existence. The major advantage of using bitmaps is the possibility of effectively exploiting bit-level parallelism. We have now seen vertical data formats and bitmaps, which raises a question: is it possible to grab the benefits of both the vertical format representation and bitmap operations to find frequent itemsets in a distributed environment?
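As a rough sketch of why bitmaps help (using arbitrary-width Python integers as bitmaps; an optimized implementation would use fixed-width machine words): a single bitwise AND compares many transactions at once, and counting the set bits yields the support:

```python
from functools import reduce

def build_bitmaps(transactions):
    """One bitmap per item: bit t is set iff the item occurs in transaction t."""
    bitmaps = {}
    for t, items in enumerate(transactions):
        for item in items:
            bitmaps[item] = bitmaps.get(item, 0) | (1 << t)
    return bitmaps

def support(bitmaps, itemset):
    """Bitwise AND processes a whole word of transactions at a time
    (bit-level parallelism); counting the set bits gives the support."""
    combined = reduce(lambda a, b: a & b, (bitmaps[i] for i in itemset))
    return bin(combined).count("1")
```

The AND of the per-item bitmaps is exactly the bitmap of the itemset, which is what makes this representation attractive for mining.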

Here we propose the algorithm 'Horizontal Format Data Mining with Extended Bitmaps'. The algorithm takes a data set organized in the horizontal format and, with one pass over it, constructs a bitmap-based data structure in the vertical format. This structure facilitates efficient mining.

First we take transaction T100: (T100, {I1, I2, I5}). We mark the items that appear in T100 in the ordered array, and at the same time link the associated items to it. Hence I2 is linked to I1 in the master array, while I5 is linked to I2; I5 is also linked to I2 in the ordered array. Linking I1 to I2, or I2 to I5, in the ordered array is avoided to prevent redundancy. I1, I2, and I5 are marked 1 to represent their existence, thus constructing their bitmaps.
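The one-pass construction can be loosely sketched as follows. Note that this is a simplified illustration, not the paper's exact master/ordered-array structure: here each item simply records a bitmap of its transactions plus forward links to the items that follow it, in item order, within the same transaction.

```python
def build_structure(transactions):
    """One pass over the horizontal data set. For each item we keep
    (a) a bitmap marking the transactions it appears in, and
    (b) links to the items that follow it, in item order, within the
    same transaction. A simplified stand-in for the paper's structure."""
    bitmaps = {}
    links = {}
    for t, items in enumerate(transactions):
        ordered = sorted(items)
        for pos, item in enumerate(ordered):
            bitmaps[item] = bitmaps.get(item, 0) | (1 << t)  # mark existence
            for later in ordered[pos + 1:]:
                links.setdefault(item, set()).add(later)     # link associated items
    return bitmaps, links
```

For T100 = {I1, I2, I5} alone, this marks bitmaps of 1 for I1, I2, and I5, with I1 linked forward to I2 and I5, and I2 linked forward to I5.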

Refer to the Slides for a simple explanation on the algorithm "Horizontal Format Data Mining with Extended Bitmaps" itself.

T - Average size of a transaction.
I - Average size of the maximal potentially large itemsets.
D - Number of transactions in the data set.


Image: http://en.wikipedia.org/wiki/File:Storeisle.png

Tuesday, December 7, 2010

Mooshabaya and the Detailed Story..

Abstract
Visual composition of workflows enables end users to depict a workflow visually as a graph of activities in a process. Tools that support visual composition translate those visual models into traditional workflow languages such as BPEL and execute them, freeing the end user from the need to know workflow languages. Mashups, on the other hand, provide a lightweight mechanism for ordinary-user-centric service composition and creation, and are hence considered to have an active role in the Web 2.0 paradigm. In this paper, we extend a visual workflow composition tool to support mashups, providing a comprehensive tooling platform for mashup development backed by workflow-style modeling capabilities, while expanding the reach of the workflow domain into Web 2.0 resources with the potential of mashups. Furthermore, our work opens up a new possibility of converging the mashup and workflow domains, capturing beneficial aspects of each.

Full Text



Before going into the Mooshabaya research, it is worth having a look at the roots of Mooshabaya. JavaScript-based mashups - a content aggregation technology used to compose services by remixing two or more sources [sample mashup site]. And workflows - we prefer to represent real-world processes as sequences of operations, and that is where workflows come in. In practice, both mashups and workflows have their own application domains. Workflows have gained their major use cases from the research communities, as scientific workflows, and the business communities, as business workflows. Mashups mostly target end users and empower the user-oriented web.


What if a merge of the two domains is possible? Can the domains use each other? This became the core research question of Mooshabaya.

Mooshabaya composes workflows graphically and exports them as mashups, which can then be executed and monitored at run time.
What motivated us towards Mooshabaya? We have two major views. 1) As a mashup composer - mashups can be created graphically by dragging and dropping components. 2) Extending the reach of the workflow domain - this includes web-based APIs and web content such as feeds. The mashup also becomes an alternative, lightweight medium for workflow execution.

Before going further into Mooshabaya, let's look at related work. Yahoo Pipes, JackBe Presto, and Serena are some of the mashup composers that allow visual mashup composition through their graphical user interfaces. XBaya, Taverna, Triana, CAT (Composition Analysis Tool), Kepler, and Pegasus are some of the graphical workflow composers. Each of these tools has its own set of features and target use cases.

So what is special about Mooshabaya? Visual mashup composers mostly restrict themselves to Web 2.0 APIs, and so far mashups have been seen mostly as a content aggregation technology. Existing mashup composers do not support monitoring the execution of the mashups at run time. On the workflow composer side, the traditional workflow languages have a high learning curve, and content aggregation is only minimally supported in the existing tools. Mooshabaya is expected to fill these gaps in both domains.

Let's have a look at the implementation of Mooshabaya. Registry integration, mashup generation, deployment, monitoring, security, and the user interface are the major components.
Here is the major use case of Mooshabaya.

First, the user wants to compose workflows. He integrates a web service registry into the system, then discovers and fetches the service metadata from the registry. By dragging and dropping the metadata files and other service components onto the Mooshabaya canvas, he composes the workflows.

Then he exports the workflow as a mashup and deploys it into a mashup server; the relevant workflow files and service metadata are stored in a registry. After deployment, he can execute the mashups deployed in the mashup server and monitor them at execution time. Secured services found in an identity server can also be used in the workflow composition.

As we discussed, Mooshabaya provides a solution for the complete mashup life cycle, which existing products do not currently provide - for example, the monitoring phase of the mashups.

Our research focused mostly on two areas: the integration of a WS-Eventing based notification system for remote monitoring of the mashups, and mashup security, based on username/password authentication.

Mooshabaya uses XBaya as its core workflow composer; in fact, we can consider it a mashup generator for XBaya. WSO2 Mashup Server is used as the mashup server for Mooshabaya, and WSO2 Governance Registry as the web services registry. For the performance analysis, we benchmarked Mooshabaya against XBaya's existing options, with performance graphs comparing the generated file sizes and the generation times. The mashup file size was considerably smaller than the respective BPEL file, thus reducing the time to deploy the file to a remote server, while the mashup generation time and the BPEL generation time were close to each other.

Scientific research systems such as LEAD, business processes, and educational research are the major target users of Mooshabaya. As future work, Mooshabaya can still be extended with further research, specifically a web-based interface and support for delegated authentication scenarios.

Monday, November 29, 2010

On Llovizna itself..

Since I started my blog on the 11th of November, 2008, it has seen many posts - mostly my random thoughts or product updates in the form of blog posts. With recent enhancements to Blogspot, we are able to view the most-viewed posts by time (now, day, week, month, all time), which I guess is worth mentioning.. ;)

I should mention one point: the most-viewed pages weren't the ones that took me much time to write, nor the ones that I wrote with much care. Viewers decide what to view, themselves.. :D


Update as on the 6th of May, 2012.

Auto Scaling With Amazon EC2 [Feb 1, 2011]
How to apply/create an svn patch (for beginners [Apr 1, 2009]
how to ignore someone you love [Apr 24, 2011]
SVN Commit in windows [Could not use external edi... [Mar 28, 2009]
Google Summer of Code 2012 [Dec 18, 2011]

Sunday, November 28, 2010

as an undergraduate at the University of Moratuwa.. (3)

Level 4
Level 4 became the most interesting and successful academic year (with GPAs of 3.74 and 4.04 for semesters 1 and 2) among the four. We were busy with the final year project and a huge array of assignments, yet still managed everything efficiently. Our database paper and final year project (Mooshabaya) paper got selected for conferences held in Paris and Bangalore respectively. Somehow the final year turned me into an 'owl', making me work more at midnight. Our final year was mostly group-oriented too; we spent much of the time as the final year project group - fortnightly reports, monthly presentations, and meetings - mostly in the research lab.

During level 4, I was able to work as an instructor for the level 2 Operating Systems module, which often reminded me of 2007. I tried to get some students into the AbiWord projects for their Programming Project module. It should be noted that in our batch, we mostly produced dummy projects for the level 3 programming project module, which was turned into an open-source module from 07CSE onwards - a really interesting change. I am pretty sure that would have motivated my juniors more towards open source and open-source programs like Google Summer of Code. We also saw the milestone 25th anniversary of CSE during our final year; as a batch, we organized IT seminars all over the island to mark the anniversary. It is an interesting fact that CSE started on the 29th of January, 1985 (exactly two years before my birthday.. :D)

The final days of the final semester were full of interviews. We all secured our jobs even before we completed the final semester. Interestingly, we joined WSO2 on the 13th of September, a Programmer's Day. Programmer's Day is the 256th day of the year: September 13th, or the 12th in leap years. Our work started with a remarkable event marking the 5th anniversary of WSO2, on September 17th, 2010.

as an undergraduate at the University of Moratuwa.. (2)

Level 2
I entered the Department of Computer Science and Engineering, which we prefer to call CSE, with a GPA of 3.78. And this is how the family of 06CSE was formed in 2007. For most of level 2, the Electronics (EN) and Electrical (EE) departments shared the same modules with us (CS); that means we had to learn EE and EN modules in addition to CS modules. In L2S1 too, a single 5-credit module from the Electrical department spoilt my grade, dragging it from 4.01 down to 3.66. This time the lessons were: 5-credit modules are damn powerful, and respect the other departments' modules too. :D The second semester went smoothly, and ended with a GPA of 3.81.

Mentoring
Mentoring was an interesting concept introduced to us in level 2. We were divided into groups, and each group was sent to a company for its mentoring sessions, with a mentor assigned from the company. I got my mentoring from Virtusa along with 9 others from 06CSE. We also had a drama festival and 'CSE Outbounds' - a social outdoor event to welcome the juniors to the department. CSE Outbounds evolved with time, and the junior batches had the same event under different names, 'CSE Indifference' and 'CSE BeyondWavez', in the following years. I should also mention the June terms, which carry a much lighter workload than a regular semester and come with 2 or 3 modules. In the meantime, we were able to contribute to the 'Lakapps' localization efforts, made with the support of Lakapps and the department. It was an interesting learning experience for us.


Level 3
Level 3 has only one semester, and it became the semester with the highest workload of all 7 semesters of our undergraduate life. We no longer had any compulsory modules from the other departments. In levels 3 and 4, we were given options - that means not all the modules are compulsory. Level 3 had 10 modules; I always compare level 3 to the O/Ls and level 4 to the A/Ls because of this subject count and the workload. I always feel I could have omitted the Embedded Systems module; the C+ I received for it left my semester GPA at 3.62.

Internship
Towards the end of the semester, we were given the option to apply to our preferred companies for the internship. I applied to WSO2, Virtusa, and Duo Software, and got selected by all three. As WSO2 matched my personal interests best, I finally decided to join WSO2 as an intern. I consider choosing WSO2 for my internship the cleverest decision I took as an undergraduate. Life at WSO2 as an intern made a strong impact on me, encouraging me further into open source technologies, which led me to participate in and get selected for Google Summer of Code in 2009 with AbiWord and in 2010 with OGSA-DAI of OMII-UK. At the end of our internship, 4 WSO2 interns formed a group and picked the project "Mashup Generator for XBaya" under the title "Mooshabaya", while another group of four WSO2 interns formed "Bissa". After completing the final year, team Mooshabaya joined WSO2 as software engineers.

as an undergraduate at the University of Moratuwa..

With the release of the finalized results, I feel the urge to post a short summary of my life as an undergraduate over 4+ years (2006 - 2010) at the Faculty of Engineering of the University of Moratuwa. In short, I consider my undergraduate studies a success, which essentially makes 2010 the most remarkable year of my life so far. The highlights of the year: an A+ for the final year project (which happens to be a 10-credit module), a final-semester GPA of 4.04, and a final overall GPA of 3.80, which puts me into the first class.

Level 1
The first semester of the first year didn't go well, if I consider the grades. The first difference I saw from the advanced level was that at the A/Ls I knew the questions as well as the answers when I sat for the exams; in undergraduate studies, for some modules, I felt like I didn't even know the questions ;) I did score well, and was at a GPA of 3.96 (out of 4.20) when we got the results for 5 modules out of 6, with all A+'s, A's, and A-'s. But a single module was powerful enough to change the result: a C- for Thermodynamics. This immediately dragged my GPA from 3.96 to 3.58 - and it was just a 2-credit module (out of 13 total credits for the semester). Amazingly, almost all my friends got A+'s and A's for that very same module! I later learned that most of the questions were from past papers, where unfortunately I didn't have even a single paper for that module.

Lessons learned
--------------------------
  • Unlike in school, a lower grade for a single module can be poisonous - in school, I happened to get lower marks in not-so-interesting subjects like Social Studies, which never made any impact on my rank, as Mathematics and the other *good* subjects always saved me with 90+'s. :)
  • The GPA system is weird. It starts with big intervals, and then all of a sudden, for a difference of just 5 marks, the grade falls like hell. C has a GPA of 2.0 while C- has 1.5, with only a 5-mark interval between C and C-; meanwhile A+ has 4.20 for the interval [85,100] and A has 4.0 for [75,85).
  • Past papers are life-savers. I didn't know the fact that past papers are stored in the library for the students to get photocopies. I got to know this only at my level 2. I could have gone to the library and got some past papers for that module.
  • Getting two A's is better than getting an A+ and an A-, as far as GPA's are considered.
  • We can score a maximum of a C in a second attempt. I always thought of upgrading my Thermodynamics C- to a C, but never did, as I hate repetition!
  • Everything in this world works according to the theory of relativity. If we do just average in an easy exam, that essentially drags our score down to the bare minimum. Here an *easy exam* means an exam where many people score high.
  • You can't just omit/ignore a single module. No this-is-my-module sentiments.
  • The first year was an important year of our undergraduate life. After the first two semesters, we were allowed to choose our preferred major. But it doesn't quite work like that: everyone prefers certain departments, so selection goes by GPA. That made the first year the most competitive year of all! We are batch 06'; from batch 09' onwards this was changed to the first semester, meaning students are divided into the departments after just one semester.
As I scored lower in the first semester, I had to work smart (please note the usage of the phrase *work smart* instead of *work hard* - I worked hard in the first semester, no doubt) in the second semester to avoid ending up in a department I did not like much. Getting a lower grade means you lose the freedom to choose. That's it!

To be on the safe side, I tried to attend the 'kuppi' classes (whatever the medium), though personally I hate those types of *classes*. 'Kuppi' classes are sessions where senior students explain a particular module in the Sinhala/Tamil medium; mostly they explain the concepts or work through some past paper questions. I don't like huge crowds, but it was a good memory to recall, though I DID feel some of those classes were just wasting my time. ;) This time, I paid attention to the past papers. With all A+'s, A's, and A-'s, I was able to get a 4.00 for the second semester, which I consider a reasonable success.

Stratos is not a stranger!

WSO2 Stratos is not a stranger to the enterprise SOA middleware world. Rather, it deploys the WSO2 Carbon platform as-a-Service on cloud infrastructure. Hence Stratos by default provides all the products built on top of WSO2 Carbon as services, while managing your underlying cloud infrastructure and seamlessly handling the scalability demands of your applications. You just sign up once, and use the entire WSO2 middleware platform without worrying about installing or configuring anything. Single sign-on makes it easy: you have a single account with a manager to manage all the services.

* You can either deploy Stratos as a private cloud for your organization, so that client organizations, project teams, or departments can run in independent environments, sharing the same middleware platform on your own resources.
* Or simply register your domain in WSO2's public Stratos cloud deployment (https://cloud.wso2.com), and start experimenting with the services now!
Stay tuned by subscribing to the WSO2 newsletters.

Red Signal is Taboooooo!


Whenever I see the 'amber/yellow' status in Gmail chat, I hope it will turn 'green' when someone comes alive in the chat. They should introduce a new color if it is going to turn 'red'.. Seriously! 'Amber' should always be followed by 'green', at least for me.. ;) A 'red signal' is taboo in my books!!

Monday, November 15, 2010

Stratos - 1.0.0 ~ Sky is not a limit for the cloud!

Over the last couple of days, the WSO2 team was energized in shaping up the most powerful PaaS (Platform as a Service) it has to offer. Stratos is a complete middleware platform as a service. Standing as the backbone of WSO2 Stratos, WSO2 Carbon is a complete SOA middleware platform. Stratos 1.0.0 was released today, and it has become yet another impressive and creative piece of work from WSO2.
For the last couple of weeks, both the Stratos team led by Azeez and Shankar and the QA team led by Charitha were in action shaping up the young Stratos. Not just the Stratos team - the entire WSO2 team was there for Stratos. I'd say nightly coding is stylish; you would love it if you gave it a try in a perfect coding environment. After the hackathon and war modes, the Stratos team got to celebrate. It was a nice experience being at WSO2 when Stratos 1.0.0 was deployed for the public, replacing its predecessor, stratos-alpha. It was like the night of Dec 31st, and it was worth the celebration!

Haven't you signed up for WSO2 Stratos services yet? Sign up for a free account for your organization in the public cloud.

Download Stratos from here.

Stratos Community and Product home.

Samisa's post on Stratos explains why Stratos is so special.

Saturday, October 16, 2010

oh! crap!!

Have you ever noticed some innocent guy or girl sharing a crappy video or link on their Facebook profile? This often sparks curiosity and makes you visit the page too. Now you are gone! Your friends will be notified on their home pages that you are watching *that* video. Once you proceed into the page, without your knowledge, you are tricked into sharing the link to the page on your own profile.


Let's look at some of the tricks played to make you share the link to the page/video, or like it, without you even knowing it.

In this page, once you click 'play', a link to the page is shared on your profile. It is actually 'share' and 'like' options engineered to look like a video!
Here, both 'Confirm' and 'Cancel' work as 'share'.

If you really want to see what is inside the video, just log out and try clicking the link. Sometimes it will ask you to log in, and sometimes it will redirect you to some crappy survey that leads you into a nowhere-zone.
Facebook has a strict censorship policy; it doesn't allow any 18+ material. Don't be fooled by such claims, and avoid the further embarrassment of sharing all that crap on your profile. These tricks are, of course, used to spread rumors and hoaxes too.

A better option: go somewhere anonymous and try it - not on a social networking site where you have proven your identity. Google is always your friend, and public image matters!

Friday, September 24, 2010

Over the road of flowers..

The last two weeks were really interesting and most remarkable. Our Advanced Database (CS4420) module research paper, "Horizontal Format Data Mining with Extended Bitmaps," got accepted to the International Conference on Soft Computing and Pattern Recognition - SoCPaR 2010 (Paris, France, December 7th - 10th, 2010) under the category "Pattern Recognition", as a short paper. We got this exciting news on the 11th of September. A few days later, on the 16th, we learned that our Mooshabaya paper had also been accepted, as a full paper, to the 8th International Workshop on Middleware for Grids, Clouds and e-Science - MGC 2010 (Bangalore, India, November 29th - December 3rd, 2010), doubling our joy. It should also be noted that both projects were ours - the same 4-member team Mooshabaya, who are currently software engineers at WSO2.

Our job at WSO2 started with a remarkable week. The new building at #50 was opened the very same day in September that we joined. The next two days we had WSO2Conf, marking 5 years of excellence at WSO2, and the first week ended with the 5-year party at Waters Edge. Walks between the forts of 50 and 59 over the road of flowers⚘. Loving these days... ♥ ♥ Cloud/Could duality ;) And I am into WSO2 Stratos! I got into the Stratos (WSO2 Carbon middleware Platform as a Service) team, with the Stratos Manager component and Stratos security as my first tasks. Finally, completing one of the most remarkable fortnights with this post.

Friday, September 10, 2010

With Llovizna, 2010..

It is really a nice time to have a blog post on the recent past, looking back at 2010 and the yesteryears. 2010 has been one of the most remarkable years I have had so far, with several events since the beginning of the year worth mentioning. The first interesting event was the rebirth of my blog as 'Llovizna'; I feel it was a major face-lift for my blog. That was followed by the completion of the L4S1 exam, along with quite good results. The CSE IT seminar was one of the events that added some spice to 2010. It was really nice to see the effort of Sri Lankan undergraduates for their younger brothers island-wide. I joined the team that went to Jaffna, and it was an excellent learning experience - teaching the school kids - and the outcome was really successful.

The first highlight of 2010 was obviously my GSoC 2010 with OMII-UK, on the OGSA-DAI project. An interesting point to note: even last year I was interested in applying for an OMII-UK project, though I ultimately applied to AbiWord, as I loved to join the AbiWord community due to my personal interest in the word processor as a user as well as a developer. At that moment, I decided that if GSoC 2010 were possible, it should be with OMII-UK for me - and that goal was successfully met. The OGSA-DAI project is one of the best FOSS communities for an enthusiastic developer.

The successful completion of the final year project, 'Mooshabaya', must always be mentioned when recalling 2010. Special thanks go to my team mates, who are simply the best. The final semester exam followed, and it went pretty well too. After the exam came the CS&ES Conference, marking 25 years of excellence of CSE, along with the ExMo exhibition of the Faculty of Engineering, University of Moratuwa. Sep 6th hence marked the completion of our undergraduate life. Apart from that, we are blessed to join the dream jobs that we are passionate about. My sincere thanks go to the WSO2 team at this moment, recalling 2010 so far and awaiting the remarkable 13th of September, 2010.

Monday, August 16, 2010

Deploying Resources - OGSA-DAI/CXF/Linux/Ant/Tomcat/mysql

A random post on deploying the resources. :)

Set an OGSADAI_HOME environment variable
export OGSADAI_HOME=/home/pradeeban/ogsa-dai/ogsa-dai/trunk/release-scripts/ogsa-dai/cxf/build/ogsadai-4.1-cxf-2.2.6-src/build/ogsadai-4.1-cxf-2.2.6-bin

Set the CLASSPATH
cd $OGSADAI_HOME
source setenv.sh

Checking for the too-long classpath issue
After setting the environment using setenv.sh, make sure that your classpath hasn't exceeded the maximum allowed length. If it has, as a quick fix you can consider moving your OGSADAI_HOME somewhere closer to the root directory, to shorten the paths.


Creating a sample MySQL database 'ogsadai' using OGSA-DAI's CreateTestMySQLDB class, with the table 'littleblackbook' and 10000 entries.

java uk.org.ogsadai.dbcreate.CreateTestMySQLDB -host coal.epcc.ed.ac.uk -port 3306 -database ogsadai -username mysqlUser -password 123456 -rootusername ogsadairootuser -rootpassword ogsadairootpassword

java -cp $CLASSPATH:/home/pradeeban/ogsa-dai/third-party/dependencies/mysql/mysql-connector/5.0.4/mysql-connector-java-5.0.4-bin.jar:/home/pradeeban/gsoc2010/data/build/lib/ogsadai-4.1-sampledata-1.0.jar uk.org.ogsadai.dbcreate.CreateTestMySQLDB -host localhost -port 3306 -database ogsadai -username root -password root -rootusername root -rootpassword root
MySQL Settings:
    MySQLDriverClass:        org.gjt.mm.mysql.Driver
    MySQLHostName:           localhost
    MySQLPortNumber:         3306
    MySQLDatabaseName:       ogsadai
    MySQLUserName:           root
    MySQLPassword:           root
    NameOfTableToCreate:     littleblackbook
    NumberOfRowsToCreate:    10000
    MySQLRootUserName:       root
    MySQLRootPassword:       root
Opening connection to MySQL system database
Creating 'ogsadai' database in MySQL if it does not already exist
Creating user 'root' with password 'root' in MySQL if it does not already exist
User 'root' already exists within MySQL
Dropping table if it already exists
Creating littleblackbook table in database
Preparing insert statement
Adding 10000 entries to 'littleblackbook' .............
Test database created successfully!


Deploying the database to ogsa-dai
DeployResource deployMySQL MySQLResource jdbc:mysql://localhost:3306/ogsadai Login permit MySQLResource ANY
DeployResource deployMySQL MySQLResource jdbc:mysql://localhost:3306/ogsadai Login permit MySQLResource ANY ogsadai root



getVersion method
java -cp $CLASSPATH:/home/pradeeban/gsoc2010/core/client/build/lib/ogsadai-4.1-client-1.0.jar:/home/pradeeban/gsoc2010/core/common/build/lib/ogsadai-4.1-common-1.0.jar:/home/pradeeban/gsoc2010/core/clientserver/build/lib/ogsadai-4.1-clientserver-1.0.jar uk.org.ogsadai.client.toolkit.example.ServerClient -u http://localhost:8080/dai/services/ -c getVersion



java uk.org.ogsadai.client.toolkit.example.ServerClient -u http://localhost:8080/dai/services/ -c getVersion


How to list the deployed resources
java uk.org.ogsadai.client.toolkit.example.ServerClient -u http://localhost:8080/dai/services/ -c listResources

java -cp $CLASSPATH:/home/pradeeban/gsoc2010/core/client/build/lib/ogsadai-4.1-client-1.0.jar:/home/pradeeban/gsoc2010/core/common/build/lib/ogsadai-4.1-common-1.0.jar:/home/pradeeban/gsoc2010/core/clientserver/build/lib/ogsadai-4.1-clientserver-1.0.jar uk.org.ogsadai.client.toolkit.example.ServerClient -u http://localhost:8080/dai/services/ -c listResources

java -cp client/build/lib/ogsadai-4.1-client-1.0.jar:common/build/lib/ogsadai-4.1-common-1.0.jar:clientserver/build/lib/ogsadai-4.1-clientserver-1.0.jar uk.org.ogsadai.client.toolkit.example.ServerClient -u http://localhost:8080/dai/services/ -c listResources


java -cp client/build/lib/ogsadai-4.1-client-1.0.jar:common/build/lib/ogsadai-4.1-common-1.0.jar:clientserver/build/lib/ogsadai-4.1-clientserver-1.0.jar uk.org.ogsadai.client.toolkit.example.ServerClient -u http://localhost:8080/dai/services/ -r MySQLDataResource -t uk.org.ogsadai.DATA_RESOURCE -c getLifetime


DB Query
pradeeban@pradeeban-laptop:~/ogsa-dai/ogsa-dai/trunk/extensions/basic/client$ ant jar
pradeeban@pradeeban-laptop:~/ogsa-dai/ogsa-dai/trunk/extensions/relational/client$ ant jar

mySQLResourceConfig.txt
DeployResource deployMySQL MySQLResource jdbc:mysql://localhost:3306/ogsadai Login permit MySQLResource ANY
DeployResource deployMySQL MySQLResource jdbc:mysql://localhost:3306/ogsadai Login permit MySQLResource ANY ogsadai root


Deploy the above DeployResource script with:
ant -Dtomcat.dir=$CATALINA_HOME -Dconfig.file=mySQLResourceConfig.txt configure

java -cp /home/pradeeban/ogsa-dai/ogsa-dai/trunk/extensions/relational/client/build/lib/ogsadai-4.1-relational-client-1.0.jar:/home/pradeeban/gsoc2010/core/client/build/lib/ogsadai-4.1-client-1.0.jar:/home/pradeeban/gsoc2010/core/common/build/lib/ogsadai-4.1-common-1.0.jar:/home/pradeeban/gsoc2010/core/clientserver/build/lib/ogsadai-4.1-clientserver-1.0.jar uk.org.ogsadai.client.toolkit.example.SQLClient -u http://localhost:8080/dai/services -d MySQLResource -q "SELECT * FROM littleblackbook WHERE id <  10"
*DRER ID: DataRequestExecutionResource
Data Resource ID: MySQLResource
Base Services URL: http://localhost:8080/dai/services
SQL-Query: SELECT * FROM littleblackbook WHERE id <  10
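For comparison, the same littleblackbook query can be issued directly over JDBC, bypassing OGSA-DAI (a sketch using only the standard java.sql API and the connection settings from the deployment above; the MySQL connector jar must be on the classpath, and the column handling is kept generic since the table layout is not listed here):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;

public class DirectQuery {

    // Builds the JDBC URL used throughout this post.
    static String jdbcUrl(String host, int port, String database) {
        return "jdbc:mysql://" + host + ":" + port + "/" + database;
    }

    public static void main(String[] args) throws SQLException {
        try (Connection c = DriverManager.getConnection(
                     jdbcUrl("localhost", 3306, "ogsadai"), "root", "root");
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery(
                     "SELECT * FROM littleblackbook WHERE id < 10")) {
            ResultSetMetaData md = rs.getMetaData();
            while (rs.next()) {
                // Print every column of the row, whatever the table layout is.
                StringBuilder row = new StringBuilder();
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    row.append(rs.getString(i)).append('\t');
                }
                System.out.println(row.toString().trim());
            }
        }
    }
}
```

Running the SQLClient above through OGSA-DAI should return the same rows; the point of the OGSA-DAI route is that the client never needs the driver or the credentials, only the resource id.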

Wrapping up GSoC 2010

CXF Release Build
After a fix for a missing module in revision 1314, the CXF release build was successful on the test server.

Test Server
Revisions 1317 and 1318 focused on fixing some bugs in the test code. In 1319, Base64 tests were done, using org.apache.cxf.common.util.Base64Utility as the implementation of Base64 encoding for the CXF-based layer of OGSA-DAI [1]. The default Base64 implementation of OGSA-DAI is org.apache.axis.encoding.Base64, as defined in the Base64 class of OGSA-DAI.

Base64
Conversion of char[] to String, and the mapping of the encode and decode methods of Base64Utility, are handled by presentation/cxf/client/src/main/java/uk/org/ogsadai/client/toolkit/presentation/cxf/CXFBase64Mapper.java as of revision 1322, which fixes the CXF Base64 tests. Further, a Base64Mapper interface will live in the core/common module, and ogsadai.common.Base64 will have the method public static synchronized void registerBase64Class(Base64Mapper mapper). The decode and encode operations will use the registered instance of the interface, instead of a hard-coded implementation. After 1322, the cxf/server tests depend on the cxf/client module too; hence it was added as a dependency in ant jarUnitTests for cxf/server in 1323. These changes made all 13 tests of the cxf/server module run successfully [2], almost completing the development of the CXF SOAP-based layer for OGSA-DAI.
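The registration scheme described above can be sketched as follows (a minimal stand-alone sketch: the Base64Mapper interface and the registerBase64Class name are taken from the plan above, while everything else, including the use of java.util.Base64 in place of the Axis and CXF utilities, is purely illustrative):

```java
import java.util.Base64;

// Interface mirroring the planned Base64Mapper (sketch only).
interface Base64Mapper {
    String encode(byte[] data);
    byte[] decode(String text);
}

// Stand-in for ogsadai.common.Base64: static encode/decode that delegate
// to whichever mapper a presentation layer has registered.
public class Base64Registry {

    // Default mapper backed by java.util.Base64 (illustrative; the post
    // names org.apache.axis.encoding.Base64 as OGSA-DAI's actual default).
    private static Base64Mapper mapper = new Base64Mapper() {
        public String encode(byte[] data) {
            return Base64.getEncoder().encodeToString(data);
        }
        public byte[] decode(String text) {
            return Base64.getDecoder().decode(text);
        }
    };

    // A CXF-based layer would call this once at start-up to swap in
    // a Base64Utility-backed mapper.
    public static synchronized void registerBase64Class(Base64Mapper m) {
        mapper = m;
    }

    public static String encode(byte[] data) { return mapper.encode(data); }
    public static byte[] decode(String text) { return mapper.decode(text); }
}
```

The point of the indirection is that the core module never links against Axis or CXF directly; each presentation layer brings its own mapper.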

CXF/JAX-WS Layer
With the client toolkit for CXF, the pencils-down date was reached. Though we will continue with our projects, the changes we make to the code base after August 16th, 19:00 UTC will not be counted as work under Google Summer of Code 2010.

ReSTful Layer CXF/JAX-RS
Revision 1300 starts presentation/rest for the ReSTful layer based on CXF/JAX-RS. XXXResource classes define the CRUD operations, where the CRUD methods call the corresponding XXX classes, which in turn call the lower-level implementation. 1398 becomes the initial implementation of DataRequestExecutionResource for the CXF/JAX-RS presentation layer.
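The layering described above can be sketched like this (a plain-Java sketch; the class names are hypothetical placeholders in the spirit of the XXXResource/XXX naming, and the JAX-RS annotations CXF would use are shown only as comments so the sketch stays dependency-free):

```java
// Hypothetical lower-level worker: stands in for the class that calls
// the OGSA-DAI core implementation.
class RequestWorker {
    String get(String id) {
        return "status of " + id;   // placeholder for the real core call
    }
    boolean delete(String id) {
        return true;                // placeholder for the real core call
    }
}

// Hypothetical XXXResource class: defines the CRUD entry points and only
// delegates to the worker. With CXF/JAX-RS it would carry @Path("/requests")
// on the class and @GET / @DELETE (plus @PathParam) on the methods.
public class RequestResource {
    private final RequestWorker worker = new RequestWorker();

    public String getRequest(String id)     { return worker.get(id); }
    public boolean deleteRequest(String id) { return worker.delete(id); }
}
```

Keeping the resource classes this thin means the HTTP concerns live entirely in the annotations, and the same worker classes stay reusable from the SOAP-based layer.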

1399 provides a simple ReST server implementation. 1402 lists resources for the ReST resource types, while 1403 gets and deletes resources with the given resource ids. 1404 sets the lifetime from the request service. ReST-layer-specific exceptions still need to be written, and the current error handling for the ReSTful layer is very primitive.

CXF/JAX-RS Release Script
Commits 1406 and 1407 start the release script of the ReSTful layer at trunk/release-scripts/ogsa-dai/rest, based on CXF/JAX-RS. It needs more changes, and the script is still evolving. This becomes my final commit under Google Summer of Code 2010 (tonight, 19:00 UTC, was the firm pencils-down date).

Patches
Patches that cover my contribution over the timeline can be retrieved by executing the lines below from the trunk.
svn diff -r 994:1407 presentation/cxf > gsoc2010.diff
svn diff -r 1274:1407 presentation/rest > gsoc2010Rest.diff
svn diff -r 1261:1407 release-scripts/ogsa-dai/cxf > gsoc2010Release.diff
svn diff -r 1406:1407 release-scripts/ogsa-dai/rest > gsoc2010ReSTRelease.diff
As the code is committed into the trunk itself, the patches are practically unnecessary, apart from Google's requirements and record keeping.

Modules
Considering the modules, presentation/cxf, presentation/rest, release-scripts/ogsa-dai/cxf, and release-scripts/ogsa-dai/rest contain the code written during this Summer of Code. 1407 becomes the last commit that was done during the Summer, hence the later commits won't be counted for Google Summer of Code. 

The generated tarball of code for the code submission to Google can be downloaded from here.

Sunday, August 15, 2010

Summer of Love 2010 with OGSA-DAI

OGSA-DAI, a name that became the highlight of the year 2010. As we reach the climax of Google Summer of Code, I have to mention that it has become one of the sweetest memories of my 2010. Working with the OGSA-DAI team is an amazing experience. We discuss almost everything over IRC - it varies from Web Services to Neural Networks, Sri Lankan tea to Colombian coffee, and databases to Grid Computing. Apart from the scope of the Summer of Code project, I am pretty sure that Michal (my GSoC 2010 mate in OGSA-DAI) and I were able to gain a considerable amount of knowledge of OGSA-DAI as well as the technologies it uses. The super-friendly environment of the OGSA-DAI team is one of the best learning platforms that I have experienced. During the last few months, while working with the presentation layers, I was also able to play with the code base and experiment with it.

Developing a presentation layer for OGSA-DAI is a challenging but interesting task. Soon after I completed developing the SOAP-based layer, I started to write an article on how to write a presentation layer for OGSA-DAI. When I started working with the ReSTful layer, however, I found the article too SOAP- and CXF-biased, which made me postpone releasing it, as it obviously needs further modification. Officially, Google Summer of Code completes tomorrow - and of course, that just completes the direct involvement of Google. After taking a short break for my final exams, I will be back with OGSA-DAI on the 6th of September. A compatible community is always a great motivation for a FOSS developer. With a strong community, OGSA-DAI becomes an ideal project for someone who is passionate about software development.

At this moment, I should thank my mentors Ally Hume and Bartosz Dobrzelecki for the great support and motivation they offered throughout the project. My special thanks go to Mario Antonioletti for his constructive thoughts and help when I first applied to GSoC, as well as his encouragement throughout the project timeline. My sincere thanks to Mike Jackson for helping set up the tests and release builds on the server, and for effectively managing the project. I should also thank Neil Chue Hong for administering Google Summer of Code this year, providing updates and assistance to students and mentors from OMII-UK. The thoughts of the core team members Charaka Palansuriya, Carlos Buil Aranda, Tilaye Alemu, and Amy Krause were always fascinating and highly motivating, and the OGSA-DAI family is really a nice environment to work in. OGSA-DAI is a team with strong bonds, and we all love the cute hexapus!

Friday, July 23, 2010

OGSA-DAI ~Presenting in CXF~ (4)

Build Scripts - Release Builds
Build files and the CXF web.xml for the release build of OGSA-DAI/CXF/JAX-WS were committed in revision 1273. 1281 adds Spring AOP (Aspect Oriented Programming) as a dependency for the cxf/server module.



OGSA-DAI/CXF deployed in Tomcat

More modifications to the release script, towards deploying in Tomcat (currently tested with Apache Tomcat 6.0.24), were done in revisions 1282 and 1283. All 6 OGSA-DAI (SOAP) services are now listed at http://localhost:8080/dai/services. The functionality provided by the package uk.org.ogsadai.rest.files will be covered further for the CXF layers here.
The OGSA-DAI SOAP services, along with their operations, are tabulated here (in DOC format and PDF format). These tables are drawn from the service listing of OGSA-DAI deployed in Tomcat (refer to the images).
ogsa-dai/trunk/presentation/rest
1274 - committed the folder, starting ReSTful presentation layer code development based on JAX-RS/CXF.

Testing
1277 is a minor fix that adds log4j as a dependency in the cxf/server module build, which fixed the odd Java fork failure in the test framework caused by a Logger class-not-found exception [1],[2].
Fixes to the Base64 tests will follow.
Todo
Details of CXF based layer to be added here.

[1] Test Results (Currently) [2] Test Framework

Saturday, July 17, 2010

Your Web Identity - Clean or Not?

Have you ever searched for your full name using Google (alone or with other keywords) and had fun looking at the results? I did recently, and was finally annoyed by a very ugly hit!

Oh, it was just a reply I gave while banning a spammer, who had posted that *crap* into one of our local FOSS groups. That has finally created an ugly Google reputation, as well as the display in the discussion board, as given above. These pervs join decent groups just to post their $hit. Anyway, think twice before replying to these discussions in Google or Yahoo groups or other online forums. Your name may be associated with these search terms by the Google search engine's association rule algorithms.

Sunday, July 11, 2010

Mid evaluations with the cute hexapus!

The mid evaluation is under way this evening; we have done the initial design of the ReSTful presentation layer, and are moving towards the completion of the SOAP-based layer. Web Application Description Language (WADL), which is said to be the ReST equivalent of SOAP's WSDL, paired with JAX-RS as supported by CXF, will serve us in this. The WADL2Java tool for stub generation is worth mentioning here. Service Oriented Architecture (SOA) and Resource Oriented Architecture (ROA) all sound good. Again feeling the need for a wsdl2wadl tool [1] (and a wadl2wsdl ;)).

WADL is becoming the standard (if not already) for REST as WSDL is for SOAP, though REST with WSDL 2.0 is also possible. "CXF JAX-RS now supports the auto-generation of WADL for JAX-RS endpoints. [2]" I was initially planning to write the WADL on my own, by referring to the respective WSDL (and then use wadl2java for stub generation). Finally, I felt auto-generation from JAX-RS is a smarter option: we start from Java to WADL, and after fine-tuning the WADL that is auto-generated from the JAX-RS endpoints, go back to the Java code using wadl2java (as initially thought).

[1] "Modelling Web-Oriented Architectures", by G.Thies and G.Vossen
[4] SOAPUI - Working with REST Services. 
[5] #rest on irc.freenode.net
[6] wadl2java  
[7] MYEclipse - Developing REST Web Services Tutorial
[8] Chapter 13. How requests (workflows) are executed
[9] OGSA-DAI 4.0 Axis Documentation
[10] The Client Toolkit Tutorial
[11] JAX-RS (JSR-311)

 * Hexapus refers to the logo of OGSA-DAI.

Saturday, July 3, 2010

OGSA-DAI ~Presenting in CXF~ (3)

Resource Oriented Architecture
While focusing on the SOAP-based layer implementation, we also had the first formal discussion on the design of the ReSTful presentation layer this week. I was able to join the discussion via Skype and took down notes, which were summarized to the committers list by my mentor Ally Hume.

More development
Considering the implementation and the commits: commit 1232 is CXFServer, the access point of the OGSA-DAI service/resource proxies for OGSA-DAI CXF services. client/toolkit/presentation/cxf/CXFResource.java, a place for the client-side resources, was committed in 1244 and is still being modified. The CXF implementations of DataRequestExecutionResource, CXFDataSinkResource, and CXFDataSourceResource were committed in revisions 1245, 1247, and 1248 respectively.

Builds
1250 is literally an insignificant commit. But I couldn't resist committing it, as I was 'worried' to see the cxf/client module broken on the test server. So let it build happily till the complete fix is committed, which makes all sub-modules of the presentation module build without any issue. During this period, I also refactored the code to a column width of 80, to fit the OGSA-DAI coding standard; my usual column width is 120. 1261 becomes the first commit towards the CXF binary release, by including the ogsa-dai/cxf release script. Building the source.zip distribution is fine. Deploying to Tomcat from the binary.zip that is built obviously still needs to be fixed. Commit 1263 uncomments the cxf/server module in the trunk's build, as it is not failing anymore.

CS4010 Professional Practice and Fun

We enjoy our lectures a lot, and this is just a sample.. :D

Friday, July 2, 2010

Facebook Permanent Deletion - An option that is hidden..

"Do you know that you can permanently delete your account, as well as deactivate the account?"
"What??"
"Confusing, isn't it? 
Mostly, the deactivation option has been misinterpreted as the delete option, while it is just a mere hide. You can log in to your account at any time and start 'facebooking' as usual, though restoring the groups and notes may take a few hours. But there *is* also a permanent deletion option, which is hardly known or used.

You can give it a *try*. 

The funny part is, the permanent deletion dialogs are mechanical and not user-friendly, while the deactivation option has a nice interface with super-friendly messages: "oh.. why are you leaving.. your pal Llovizna will miss you.. "

Once you request the permanent deletion of your account, you will be given a fortnight. You can reactivate your account within that period, if you feel so. After the fortnight, your account will be gone forever! :)

Delete my account
If you do not think you will use Facebook again and would like your account deleted, we can take care of this for you. Keep in mind that you will not be able to reactivate your account or retrieve any of the content or information you have added. If you would like your account deleted, then click "Submit".


(No one is going to miss you in this case! Poor you!!)

Permanently delete account
Your account has been deactivated from the site and will be permanently deleted within 14 days. If you log into your account within the next 14 days, you will have the option to cancel your request.

Pradeeban feels it could have been done better.

Thursday, July 1, 2010

OGSA-DAI and ROA

OGSA-DAI, by nature, can be exposed as resources, and it fits the ReSTful design pretty well; the initiatives towards an alternative ReSTful layer were taken a few months ago. Since the inclusion of the ReSTful layer as a goal of an OGSA-DAI GSoC project, the design discussions have started accelerating.

The first formal discussion on the design was held on the 1st of July at Edinburgh Parallel Computing Centre (EPCC), by the senior architects Ally and Bartek. The discussion targeted a high-level description of the ReSTful layer design.

Monday, June 21, 2010

Mooshabaya - The Story..

Dr. Malinga is a scientist interested in e-Science research. He predicts the weather by analyzing data collected over the grid. Dr. Malinga uses the workflow domain to analyze the scenario, and comes up with workflows that describe the rapidly changing atmospheric conditions. He executes the workflows in his workflow system and monitors them at run time. In this process, he also wants to secure some of the services from unauthorized access.

He is looking for a much more lightweight model to prototype the scenario rapidly, so that he can avoid learning XML technologies or workflow languages such as BPEL, which takes a lot of time. He is interested in using the Yahoo geocode API and similar web-based APIs. He also prefers to integrate real-time data, collected via satellite feeds and feeds from other sources, into his system of workflows.

A mashup creates a new service by aggregating two or more sources. We can develop mashups rapidly using such APIs, without investing much in learning them. They are lightweight and can be extended easily.

As a solution for the scientist, Mooshabaya comes as a system that can utilize the best of both domains through a potential merge. By exporting workflows as mashups, Mooshabaya invests in the synergy of the two domains.

In the process of developing the workflow system, we have used known existing tools as the base. The XBaya graphical workflow composer from Indiana University is used as the core of the system. We have extended XBaya to export workflows as mashups, and the mashups are deployed into WSO2 Mashup Server. The service metadata is fetched from WSO2 Governance Registry. The composed mashups are executed in the Mashup Server and monitored at run time.

Here we come back to Dr. Malinga with Mooshabaya. Mooshabaya discovers the service metadata in the Governance Registry and fetches it. Created workflows can be saved and fetched later. Workflows are exported as mashups; the respective service metadata is added to the Registry, while the mashups are deployed into the Mashup Server. Mooshabaya also supports composing with secured services found in WSO2 Identity Server. Mooshabaya executes the workflows that are deployed into the Mashup Server as mashups. WS-Messenger is used to publish notifications from the executed workflows, and Mooshabaya monitors the execution by subscribing to these notifications. Hence Mooshabaya provides a better workflow solution for the scientist.

Sunday, June 20, 2010

OGSA-DAI ~Presenting in CXF~ (2)

During this week I have been working on the server module, and committed it with the unit tests, though some more commits remain.

A CXF-compliant implementation of the OGSA-DAI WS-EPR resolver portType operations was done in commit 1189. Unit tests for the server module: 1190 becomes the first commit on the tests. NStoPkgProperties was removed from the build file in commit 1205, as it is irrelevant for CXF stub generation. As of commit 1206, which commits the files of the server module, the server module is fixed, with some todos left.

After completing the client module and its testing, I will have to start implementing the Resource Oriented Architecture - exposing the services as resources in a ReSTful presentation layer utilizing the JAX-RS implementation of CXF. We decided to start the discussions and design of the ReSTful layer by the middle of this week, parallel to the SOAP-based layer, as that will make the timeline more efficient.

[1] Test Reports 20/06/2010 22:06:14 : daitest
[2] Unit Test Results
[3] The Test Framework

I got the welcome package on June 19th - 2 stickers, a pen, and a notebook, along with the card. Special thanks - OGSA-DAI, OMII-UK, and Google.

Thursday, June 17, 2010

Google Docs.. The Good News and The Bad News..

We were working on a Google presentation, and all of a sudden Google Docs crashed, hiding the document with the message,


The bad news is that Google Docs has just encountered an error.
The good news is that you've helped us find a bug, which we are now looking into.
We apologize for any inconvenience this has caused you.
In the meantime, if you'd like updates on this and other issues, try visiting our Google Docs Help Group: http://www.google.com/support/forum/p/Google+Docs

Sorry, and thanks for your help!
- The Google Docs Team

Can't prepare without a quorum of living replicas Trace: [ 0.000] LogSession {[logKey="dchc522j_15cw6c8ng5", logSpace=logspace:global/writely.production-data.data.logspace]} +------------------------------------------------------- | Started 0.000 after root (absolute time 1276770846.526 seconds) | [ 0.000] CoordinatedLog.init key="dchc522j_15cw6c8ng5" replica=E params=op=WRITE, piggybackRead=true, dl=null, localReadPatienceMs=750, allowFailover=true | [ 0.000] Transitioning from=Init to=AskCoordinator | [ 0.000] SEND [#0] E.isUpToDate() | [ 0.000] Transitioning from=AskCoordinator to=FindPos | [ 0.000] SEND [#1] E.queryLog(queryMode=DEFAULT, piggybackReadRequest=[#8:=]com.google.rewritely.storage.ReplicatedReader$PiggybackReadRequest@f8125b, logPos=null, dl=11.000) | [ 0.000] Predicted local read (5.196 mdev=1.122) as fast as quorum (90.646 mdev=32.633); waiting 750ms before issuing majority read. | [ 0.001] CB [#0] E.isUpToDate => (up-to-date: true, pos: 216) | [ 0.004] CB [#1] E.queryLog => (appliedPos: 249, appliedTime: 1276769410763000, nextLeader: E, lastMustRoll: 249, LWM: {D=(low=228,high=228), E=(low=249,high=249), A=(low=248,high=249), B=(low=228,high=228), C=(low=248,high=249)}, logEntry: null, piggyRead: com.google.rewritely.storage.ReplicatedReader$PiggybackReadResponse@7d3de4) | [ 0.004] Transitioning from=FindPos to=FindFirstReplica | [ 0.004] Transitioning from=FindFirstReplica to=Querying +------------------------------------------------------- | *** trace being toString'd *** | Started 0.004 after root (absolute time 1276770846.530 seconds) | [ 0.000] CoordinatedLog.write key="dchc522j_15cw6c8ng5" replica=E logspace=logspace:global/writely.production-data.data.logspace | [ 0.000] Constructed proposer Proposer {[localReplica=E, id=8130, logPos=250]} timestamp=1276770846530000 syncApply=[E] syncLearn=[] | [ 0.000] Transitioning from=Init to=AcceptLdr | [ 0.000] SEND [#0] E.accept(logPos=250, proposal=[#18:=](0, 0): (nextLdr E, hash f811b883b75d1f21, 
@1276770846530000, logPos 250, 126 bytes), mustRollForward=false, logState=[#19:=](PS: Unready, Acpt=[], Inv=[], Mrf=[], View=replicas={A=dv, B=dv, C=dv, D=dv, E=dv}, gen=0, time=0, LWM: {D=(low=228,high=228), E=(low=249,high=249), A=(low=248,high=249), B=(low=228,high=228), C=(low=248,high=249)}), dl=1.001) | [ 0.008] CB [#0] E.accept => EXCEPTION / com.google.bigtable.BigtableOverQuotaException: OVER_QUOTA: owner class (@production) exceeds its quota, while the owner (chubby!mdb/writely) is using 36631910110495.500 of its own 108851651149824.000 disk:.|OVER_QUOTA|/bigtable/srv-gd/writely.production-data | [ 0.008] Transitioning from=AcceptLdr to=Prepare | [ 0.009] SEND [#1] A.prepare(logPos=250, proposalNum=(298744293, 7902741955929074510), mustRollForward=false, logState=[#19], dl=6.991) | [ 0.009] SEND [#2] B.prepare(logPos=250, proposalNum=(298744293, 7902741955929074510), mustRollForward=false, logState=[#19], dl=6.991) | [ 0.009] SEND [#3] C.prepare(logPos=250, proposalNum=(298744293, 7902741955929074510), mustRollForward=false, logState=[#19], dl=6.991) | [ 0.009] SEND [#4] D.prepare(logPos=250, proposalNum=(298744293, 7902741955929074510), mustRollForward=false, logState=[#19], dl=6.991) | [ 0.009] SEND [#5] E.prepare(logPos=250, proposalNum=(298744293, 7902741955929074510), mustRollForward=false, logState=[#19], dl=6.991) | [ 0.039] CB [#4] D.prepare => EXCEPTION / com.google.storage.megastore.replication.StorageException: Remote RPC failed {[status=EXCEPTION, replicationStatus=MISC, stubbyErr=/AcceptorService.Prepare to 10.230.37.70:25685 [APPLICATION_ERROR(4)] << EXCEPTION / com.google.bigtable.BigtableOverQuotaException: OVER_QUOTA: owner class (@production) exceeds its quota, while the owner (chubby!mdb/writely) is using 54750724655241.000 of its own 134580222820352.000 disk:.|OVER_QUOTA|/bigtable/srv-vb/writely.production-data | at com.google.storage.megastore.replication.StorageException.wrap(StorageException.java:65) | at 
com.google.storage.megastore.replication.StorageException.wrap(StorageException.java:46) | at com.google.storage.megastore.replication.monitor.RootCallback.failure(RootCallback.java:89) | at com.google.storage.megastore.replication.health.HealthTracker$HealthTrackingCallback.failure(HealthTracker.java:288) | at com.google.storage.megastore.replication.monitor.ApiTrace$ApiTraceCbk.failure(ApiTrace.java:254) | at com.google.storage.megastore.replication.acceptor.AcceptorImpl$15.failure(AcceptorImpl.java:1909) | at com.google.storage.megastore.replication.net.BtCallbackWrapper$2.run(BtCallbackWrapper.java:113) | at com.google.storage.megastore.replication.net.RequestContainer$ExceptionForwardingRunnable.run(RequestContainer.java:357) | at com.google.storage.megast ... [exception text truncated by msrepl; total 6685 characters] | [ 0.058] CB [#3] C.prepare => EXCEPTION / com.google.storage.megastore.replication.StorageException: Remote RPC failed {[status=EXCEPTION, replicationStatus=MISC, stubbyErr=/AcceptorService.Prepare to 10.13.114.19:25782 [APPLICATION_ERROR(4)] << EXCEPTION / com.google.bigtable.BigtableOverQuotaException: OVER_QUOTA: owner class (@production) exceeds its quota, while the owner (chubby!mdb/writely) is using 55970616351910.500 of its own 115976485994496.000 disk:.|OVER_QUOTA|/bigtable/srv-ia/writely.production-data | at com.google.storage.megastore.replication.StorageException.wrap(StorageException.java:65) | at com.google.storage.megastore.replication.StorageException.wrap(StorageException.java:46) | at com.google.storage.megastore.replication.monitor.RootCallback.failure(RootCallback.java:89) | at com.google.storage.megastore.replication.health.HealthTracker$HealthTrackingCallback.failure(HealthTracker.java:288) | at com.google.storage.megastore.replication.monitor.ApiTrace$ApiTraceCbk.failure(ApiTrace.java:254) | at com.google.storage.megastore.replication.acceptor.AcceptorImpl$15.failure(AcceptorImpl.java:1909) | at 
com.google.storage.megastore.replication.net.BtCallbackWrapper$2.run(BtCallbackWrapper.java:113) | at com.google.storage.megastore.replication.net.RequestContainer$ExceptionForwardingRunnable.run(RequestContainer.java:357) | at com.google.storage.megast ... [exception text truncated by msrepl; total 6685 characters] | [ 0.059] CB [#2] B.prepare => EXCEPTION / com.google.storage.megastore.replication.StorageException: Remote RPC failed {[status=EXCEPTION, replicationStatus=MISC, stubbyErr=/AcceptorService.Prepare to 10.224.115.11:25699 [APPLICATION_ERROR(4)] << EXCEPTION / com.google.bigtable.BigtableOverQuotaException: OVER_QUOTA: owner class (@production) exceeds its quota, while the owner (chubby!mdb/writely) is using 54920498505744.000 of its own 124683614683136.000 disk:.|OVER_QUOTA|/bigtable/srv-qa/writely.production-data | at com.google.storage.megastore.replication.StorageException.wrap(StorageException.java:65) | at com.google.storage.megastore.replication.StorageException.wrap(StorageException.java:46) | at com.google.storage.megastore.replication.monitor.RootCallback.failure(RootCallback.java:89) | at com.google.storage.megastore.replication.health.HealthTracker$HealthTrackingCallback.failure(HealthTracker.java:288) | at com.google.storage.megastore.replication.monitor.ApiTrace$ApiTraceCbk.failure(ApiTrace.java:254) | at com.google.storage.megastore.replication.acceptor.AcceptorImpl$15.failure(AcceptorImpl.java:1909) | at com.google.storage.megastore.replication.net.BtCallbackWrapper$2.run(BtCallbackWrapper.java:113) | at com.google.storage.megastore.replication.net.RequestContainer$ExceptionForwardingRunnable.run(RequestContainer.java:357) | at com.google.storage.megas ... [exception text truncated by msrepl; total 6686 characters] | [ 0.059] Proposer failing: Can't prepare without a quorum of living replicas | [ 0.059] Transitioning from=Prepare to=Failed | [ 0.061] CB [#1] A.prepare => CANCELLED | 1 unfired callbacks: {5}  

Thank God.. We had a local backup, quite outdated though.. 
And once more thank God.. The document in Google Docs came alive in an hour!!!