Monday, June 21, 2010

Mooshabaya - The Story..

Dr. Malinga is a scientist interested in e-Science research. He predicts the weather by analyzing data collected over the grid. Dr. Malinga uses the workflow domain to model the scenario, and comes up with workflows that describe the rapidly changing atmospheric conditions. He executes the workflows in his workflow system and monitors them at runtime. In this process, he also wants to secure some of the services from unauthorized access.

He is looking for a lightweight model to prototype the scenario rapidly, so that he can avoid learning XML technologies or workflow languages such as BPEL, which take a lot of time to master. He is interested in using the Yahoo geocode API and similar web-based APIs. He also prefers to integrate real-time data collected via satellite feeds, and feeds from other sources, into his system of workflows.

A mashup creates a new service by aggregating two or more sources. Mashups can be developed rapidly using such APIs, without investing much in learning them. They are lightweight and can be extended easily.

As a solution for the scientist, Mooshabaya is a system that harnesses the best of both domains by merging them. By exporting workflows as mashups, Mooshabaya builds on the synergy between the two domains.

In developing the workflow system, we have built on well-known existing tools. The XBaya graphical workflow composer from Indiana University is used as the core of the system. We have extended XBaya to export workflows as mashups, and the mashups are deployed into the WSO2 Mashup Server. Service metadata is fetched from the WSO2 Governance Registry. The composed mashups are executed in the Mashup Server and monitored at runtime.

Here we come back to Dr. Malinga with Mooshabaya. Mooshabaya discovers service metadata in the Governance Registry and fetches it. Created workflows can be saved and fetched later. Workflows are exported as mashups; the respective service metadata is added to the registry, while the mashups are deployed into the Mashup Server. Mooshabaya also supports composing with secured services found in the WSO2 Identity Server. Mooshabaya executes the workflows that are deployed into the Mashup Server as mashups. WS-Messenger is used to publish notifications from the executed workflows, and Mooshabaya monitors the execution by subscribing to these notifications. Hence Mooshabaya provides a better workflow solution for the scientist.

Sunday, June 20, 2010

OGSA-DAI ~Presenting in CXF~ (2)

During this week I have been working on the server module and committed it along with the unit tests, though some more commits remain.

The CXF-compliant implementation of the OGSA-DAI WS-EPR resolver portType operations was done in commit 1189. Commit 1190, the unit tests for the server module, becomes the first commit on the tests. Commit 1205 removed NStoPkgProperties from the build file, as it is irrelevant for CXF stub generation. As of commit 1206, which commits the files of the server module, the server module is fixed, with some TODOs left.

After completing the client module and testing it, I will have to start implementing the Resource Oriented Architecture: exposing the services as resources, with a ReSTful presentation layer built on the JAX-RS implementation of CXF. We decided to start discussions and design of the ReSTful layer by the middle of this week, in parallel with the SOAP-based layer, as that will make the timeline more efficient.
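As a rough sketch of what such a ReSTful layer might look like with CXF's JAX-RS support (the resource path, class, and method names below are illustrative assumptions, not the actual OGSA-DAI design):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// Hypothetical JAX-RS resource exposing a data resource over ReST.
@Path("/dataResources/{resourceId}")
public class DataResourceResource
{
    // GET /dataResources/{resourceId} returns a simple XML representation;
    // a real implementation would delegate to the server-side toolkit.
    @GET
    @Produces("application/xml")
    public String getResource(@PathParam("resourceId") String resourceId)
    {
        return "<dataResource id=\"" + resourceId + "\"/>";
    }
}

Such a class would then be published through CXF's JAXRSServerFactoryBean or the usual servlet configuration.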


I got the welcome package on June 19th: two stickers, a pen, and a notebook, along with the card. Special thanks to OGSA-DAI, OMII-UK, and Google.

Thursday, June 17, 2010

Google Docs.. The Good News and The Bad News..

We were working on a Google presentation when, all of a sudden, Google Docs crashed, hiding the document with the message:


The bad news is that Google Docs has just encountered an error.
The good news is that you've helped us find a bug, which we are now looking into.
We apologize for any inconvenience this has caused you.
In the meantime, if you'd like updates on this and other issues, try visiting our Google Docs Help Group: http://www.google.com/support/forum/p/Google+Docs

Sorry, and thanks for your help!
- The Google Docs Team

Can't prepare without a quorum of living replicas
Trace: [ 0.000] LogSession {[logKey="dchc522j_15cw6c8ng5", logSpace=logspace:global/writely.production-data.data.logspace]}
+-------------------------------------------------------
| Started 0.000 after root (absolute time 1276770846.526 seconds)
| [ 0.000] CoordinatedLog.init key="dchc522j_15cw6c8ng5" replica=E params=op=WRITE, piggybackRead=true, dl=null, localReadPatienceMs=750, allowFailover=true
| [ 0.000] Transitioning from=Init to=AskCoordinator
| [ 0.000] SEND [#0] E.isUpToDate()
| [ 0.000] Transitioning from=AskCoordinator to=FindPos
| [ 0.000] SEND [#1] E.queryLog(queryMode=DEFAULT, piggybackReadRequest=[#8:=]com.google.rewritely.storage.ReplicatedReader$PiggybackReadRequest@f8125b, logPos=null, dl=11.000)
| [ 0.000] Predicted local read (5.196 mdev=1.122) as fast as quorum (90.646 mdev=32.633); waiting 750ms before issuing majority read.
| [ 0.001] CB [#0] E.isUpToDate => (up-to-date: true, pos: 216)
| [ 0.004] CB [#1] E.queryLog => (appliedPos: 249, appliedTime: 1276769410763000, nextLeader: E, lastMustRoll: 249, LWM: {D=(low=228,high=228), E=(low=249,high=249), A=(low=248,high=249), B=(low=228,high=228), C=(low=248,high=249)}, logEntry: null, piggyRead: com.google.rewritely.storage.ReplicatedReader$PiggybackReadResponse@7d3de4)
| [ 0.004] Transitioning from=FindPos to=FindFirstReplica
| [ 0.004] Transitioning from=FindFirstReplica to=Querying
+-------------------------------------------------------
| *** trace being toString'd ***
| Started 0.004 after root (absolute time 1276770846.530 seconds)
| [ 0.000] CoordinatedLog.write key="dchc522j_15cw6c8ng5" replica=E logspace=logspace:global/writely.production-data.data.logspace
| [ 0.000] Constructed proposer Proposer {[localReplica=E, id=8130, logPos=250]} timestamp=1276770846530000 syncApply=[E] syncLearn=[]
| [ 0.000] Transitioning from=Init to=AcceptLdr
| [ 0.000] SEND [#0] E.accept(logPos=250, proposal=[#18:=](0, 0): (nextLdr E, hash f811b883b75d1f21, @1276770846530000, logPos 250, 126 bytes), mustRollForward=false, logState=[#19:=](PS: Unready, Acpt=[], Inv=[], Mrf=[], View=replicas={A=dv, B=dv, C=dv, D=dv, E=dv}, gen=0, time=0, LWM: {D=(low=228,high=228), E=(low=249,high=249), A=(low=248,high=249), B=(low=228,high=228), C=(low=248,high=249)}), dl=1.001)
| [ 0.008] CB [#0] E.accept => EXCEPTION / com.google.bigtable.BigtableOverQuotaException: OVER_QUOTA: owner class (@production) exceeds its quota, while the owner (chubby!mdb/writely) is using 36631910110495.500 of its own 108851651149824.000 disk:.|OVER_QUOTA|/bigtable/srv-gd/writely.production-data
| [ 0.008] Transitioning from=AcceptLdr to=Prepare
| [ 0.009] SEND [#1] A.prepare(logPos=250, proposalNum=(298744293, 7902741955929074510), mustRollForward=false, logState=[#19], dl=6.991)
| [ 0.009] SEND [#2] B.prepare(logPos=250, proposalNum=(298744293, 7902741955929074510), mustRollForward=false, logState=[#19], dl=6.991)
| [ 0.009] SEND [#3] C.prepare(logPos=250, proposalNum=(298744293, 7902741955929074510), mustRollForward=false, logState=[#19], dl=6.991)
| [ 0.009] SEND [#4] D.prepare(logPos=250, proposalNum=(298744293, 7902741955929074510), mustRollForward=false, logState=[#19], dl=6.991)
| [ 0.009] SEND [#5] E.prepare(logPos=250, proposalNum=(298744293, 7902741955929074510), mustRollForward=false, logState=[#19], dl=6.991)
| [ 0.039] CB [#4] D.prepare => EXCEPTION / com.google.storage.megastore.replication.StorageException: Remote RPC failed {[status=EXCEPTION, replicationStatus=MISC, stubbyErr=/AcceptorService.Prepare to 10.230.37.70:25685 [APPLICATION_ERROR(4)] << EXCEPTION / com.google.bigtable.BigtableOverQuotaException: OVER_QUOTA: owner class (@production) exceeds its quota, while the owner (chubby!mdb/writely) is using 54750724655241.000 of its own 134580222820352.000 disk:.|OVER_QUOTA|/bigtable/srv-vb/writely.production-data
| at com.google.storage.megastore.replication.StorageException.wrap(StorageException.java:65)
| at com.google.storage.megastore.replication.StorageException.wrap(StorageException.java:46)
| at com.google.storage.megastore.replication.monitor.RootCallback.failure(RootCallback.java:89)
| at com.google.storage.megastore.replication.health.HealthTracker$HealthTrackingCallback.failure(HealthTracker.java:288)
| at com.google.storage.megastore.replication.monitor.ApiTrace$ApiTraceCbk.failure(ApiTrace.java:254)
| at com.google.storage.megastore.replication.acceptor.AcceptorImpl$15.failure(AcceptorImpl.java:1909)
| at com.google.storage.megastore.replication.net.BtCallbackWrapper$2.run(BtCallbackWrapper.java:113)
| at com.google.storage.megastore.replication.net.RequestContainer$ExceptionForwardingRunnable.run(RequestContainer.java:357)
| at com.google.storage.megast ... [exception text truncated by msrepl; total 6685 characters]
| [ 0.058] CB [#3] C.prepare => EXCEPTION / com.google.storage.megastore.replication.StorageException: Remote RPC failed {[status=EXCEPTION, replicationStatus=MISC, stubbyErr=/AcceptorService.Prepare to 10.13.114.19:25782 [APPLICATION_ERROR(4)] << EXCEPTION / com.google.bigtable.BigtableOverQuotaException: OVER_QUOTA: owner class (@production) exceeds its quota, while the owner (chubby!mdb/writely) is using 55970616351910.500 of its own 115976485994496.000 disk:.|OVER_QUOTA|/bigtable/srv-ia/writely.production-data
| at com.google.storage.megastore.replication.StorageException.wrap(StorageException.java:65)
| at com.google.storage.megastore.replication.StorageException.wrap(StorageException.java:46)
| at com.google.storage.megastore.replication.monitor.RootCallback.failure(RootCallback.java:89)
| at com.google.storage.megastore.replication.health.HealthTracker$HealthTrackingCallback.failure(HealthTracker.java:288)
| at com.google.storage.megastore.replication.monitor.ApiTrace$ApiTraceCbk.failure(ApiTrace.java:254)
| at com.google.storage.megastore.replication.acceptor.AcceptorImpl$15.failure(AcceptorImpl.java:1909)
| at com.google.storage.megastore.replication.net.BtCallbackWrapper$2.run(BtCallbackWrapper.java:113)
| at com.google.storage.megastore.replication.net.RequestContainer$ExceptionForwardingRunnable.run(RequestContainer.java:357)
| at com.google.storage.megast ... [exception text truncated by msrepl; total 6685 characters]
| [ 0.059] CB [#2] B.prepare => EXCEPTION / com.google.storage.megastore.replication.StorageException: Remote RPC failed {[status=EXCEPTION, replicationStatus=MISC, stubbyErr=/AcceptorService.Prepare to 10.224.115.11:25699 [APPLICATION_ERROR(4)] << EXCEPTION / com.google.bigtable.BigtableOverQuotaException: OVER_QUOTA: owner class (@production) exceeds its quota, while the owner (chubby!mdb/writely) is using 54920498505744.000 of its own 124683614683136.000 disk:.|OVER_QUOTA|/bigtable/srv-qa/writely.production-data
| at com.google.storage.megastore.replication.StorageException.wrap(StorageException.java:65)
| at com.google.storage.megastore.replication.StorageException.wrap(StorageException.java:46)
| at com.google.storage.megastore.replication.monitor.RootCallback.failure(RootCallback.java:89)
| at com.google.storage.megastore.replication.health.HealthTracker$HealthTrackingCallback.failure(HealthTracker.java:288)
| at com.google.storage.megastore.replication.monitor.ApiTrace$ApiTraceCbk.failure(ApiTrace.java:254)
| at com.google.storage.megastore.replication.acceptor.AcceptorImpl$15.failure(AcceptorImpl.java:1909)
| at com.google.storage.megastore.replication.net.BtCallbackWrapper$2.run(BtCallbackWrapper.java:113)
| at com.google.storage.megastore.replication.net.RequestContainer$ExceptionForwardingRunnable.run(RequestContainer.java:357)
| at com.google.storage.megas ... [exception text truncated by msrepl; total 6686 characters]
| [ 0.059] Proposer failing: Can't prepare without a quorum of living replicas
| [ 0.059] Transitioning from=Prepare to=Failed
| [ 0.061] CB [#1] A.prepare => CANCELLED
| 1 unfired callbacks: {5}

Thank God.. we had a local backup, quite outdated though..
And once more, thank God.. the document in Google Docs came back to life in an hour!!!

Monday, June 7, 2010

OGSA-DAI ~Presenting in CXF~

Coding has started to accelerate. Revision 1065 customizes the built-in JAXB mapping through an external binding file. This changes the default mapping of xs:dateTime from XMLGregorianCalendar to java.util.Calendar. For the ease of further commits, I added the respective client and server implementation folder structure in commit 1066, along with the utility class CXFUtils.
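For context, the default mapping hands back XMLGregorianCalendar for xs:dateTime values, which usually has to be converted to java.util.Calendar by hand; the binding customization removes that step. A minimal, self-contained illustration of the manual conversion (not taken from the OGSA-DAI code):

import java.util.Calendar;
import java.util.GregorianCalendar;
import javax.xml.datatype.DatatypeConfigurationException;
import javax.xml.datatype.DatatypeFactory;
import javax.xml.datatype.XMLGregorianCalendar;

public class DateTimeMappingDemo
{
    public static void main(String[] args) throws DatatypeConfigurationException
    {
        // What the default JAXB mapping returns for an xs:dateTime value.
        XMLGregorianCalendar xmlCal =
                DatatypeFactory.newInstance().newXMLGregorianCalendar(new GregorianCalendar());

        // The manual conversion to java.util.Calendar; with the external
        // binding file the generated getters return a Calendar directly.
        Calendar cal = xmlCal.toGregorianCalendar();
        System.out.println(cal.getTime());
    }
}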

1069 extends the Request Status Builder for the CXF client. 1070 includes the utility methods for handling exceptions in the client toolkit. 1076 was an interesting [ ;) ] fix, correcting some years that I had overlooked.

As is common with CXF stub generation, Lists are generated instead of arrays. For example:
-  final DataType[] inputData = inputBean.getInputLiteral();
+  final DataType[] inputData = inputBean.getInputLiteral().toArray(new DataType[inputBean.getInputLiteral().size()]);

-  inputType.setInputLiteral(mCurrentData.getDataTypeArray());
+  inputType.getInputLiteral().clear();
+  inputType.getInputLiteral().addAll(Arrays.asList(mCurrentData.getDataTypeArray()));

1093 fixes some issues in an earlier commit related to this.

1094 and 1095 include the CXF-compliant Request Management and Session Management Services respectively. 1096 is the CXF implementation of the WS-ResourceLifetime portType operations.
1118 includes classes in execution/workflows. 

Considerable changes in the stub generation include:
(1) Inner class InputsType.Input replaces the class InputsTypeInput.
(2) The getters of a list return a reference to the live list, not a snapshot. Hence, to add an item, one calls getOutputStream().add(newItem) on the list returned by the method public List getOutputStream() in the OutputsType class, as sketched below.

(3) Regarding the generated fault types: with Axis, the stub generated an XXXType which had to be used in the code; with CXF stubs, we have XXXType classes associated with an XXX class, and it is XXX from the stub that is used in the code. [XXX denotes a fault.]

These changes in the stub code are reflected in the respective server side code.
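To make change (2) concrete, a JAXB-generated type with a repeated element typically exposes it through a live-list accessor of the following shape (the class and element names here are only illustrative, not copied from the generated stubs):

import java.util.ArrayList;
import java.util.List;

// Illustrative shape of a generated type with a repeated element.
public class OutputsTypeSketch
{
    protected List<String> outputStream;

    // No setter is generated for the collection; the getter returns the
    // live backing list, so callers mutate it in place, for example:
    //     outputs.getOutputStream().add(newItem);
    public List<String> getOutputStream()
    {
        if (outputStream == null)
        {
            outputStream = new ArrayList<String>();
        }
        return this.outputStream;
    }
}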


1119 is the CXF implementation of the Request Builder. 1122 handles AddressingUtils, which parses the SOAP message header:

@WebServiceProvider
public class AddressingUtils
{
    ....

    // Injected by the JAX-WS/CXF runtime; provides access to the current message context.
    @Resource
    private static WebServiceContext context;

    ....

    // Inside the header-parsing code, the SOAP envelope of the current
    // message is obtained through the injected context:
    SOAPMessageContext soapMsgContext = (SOAPMessageContext) context.getMessageContext();
    SOAPEnvelope envelope = soapMsgContext.getMessage().getSOAPPart().getEnvelope();
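Once the envelope is available, the headers can be walked with the standard SAAJ API. The helper below is only a sketch of how such parsing could look, assuming WS-Addressing headers are the target; it is not the committed AddressingUtils code:

import java.util.Iterator;
import javax.xml.soap.SOAPEnvelope;
import javax.xml.soap.SOAPException;
import javax.xml.soap.SOAPHeader;
import javax.xml.soap.SOAPHeaderElement;

public final class AddressingHeaderSketch
{
    // WS-Addressing 1.0 namespace; older stubs may use the 2004/08 submission namespace.
    private static final String WSA_NS = "http://www.w3.org/2005/08/addressing";

    // Returns the text of a WS-Addressing header such as "To" or "MessageID",
    // or null if it is not present in the envelope.
    public static String getAddressingHeader(SOAPEnvelope envelope, String localName)
            throws SOAPException
    {
        SOAPHeader header = envelope.getHeader();
        if (header == null)
        {
            return null;
        }
        Iterator<?> it = header.examineAllHeaderElements();
        while (it.hasNext())
        {
            SOAPHeaderElement element = (SOAPHeaderElement) it.next();
            if (WSA_NS.equals(element.getElementName().getURI())
                    && localName.equals(element.getElementName().getLocalName()))
            {
                return element.getValue();
            }
        }
        return null;
    }
}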


Commit 1128 defines CXF_MESSAGE_CONTEXT_ERROR. Commits 1144, 1154, and 1173 are for the CXF-compliant data sink, execution, and data resource information services, while 1161 and 1162 implement the data source and intrinsics portType operations. 1159 is the initialization class for the CXF-compliant presentation layers, which is being modified at the moment. 1172 makes some minor fixes to the exceptions thrown.

Friday, June 4, 2010

Mooshabaya - 1.0.0

As we reach the completion of the project, we have released the Mooshabaya binary to SourceForge, which can be downloaded from here. At the same time, Mooshabaya can also be built from source using Ant, as an open-source project.

To build the distribution
trunk$ ant dist

To run the built binary
trunk/dist$ java -jar mooshabaya-1.0.0.jar

The trunk can currently be built using Ant, and it is not broken at the moment. We have also just created the tag 1.0-RC4, as we keep tagging the releases regularly.