Thursday, June 30, 2011

Concerns of the public cloud and how PaaS helps mitigate them..

Using the cloud model to deploy applications has become a major trend in recent years. Developers deploy their applications on top of Infrastructure as a Service (IaaS) providers, considering the advantages the model offers.

Information outside

In many cases, Software as a Service (SaaS) providers rely on Infrastructure as a Service providers for their hardware requirements. Having a private cloud set up on their own servers is another choice, where they deploy their SaaS solutions in-house. A hybrid cloud setup combines the best of both worlds - private and public clouds.

Cloud outages and Server unavailability

When hosting your applications on a plain Infrastructure as a Service, you should treat the IaaS provider as you would a single local hard drive: redundancy is needed to mitigate the availability risk, just as it is with local hardware. The recent outage of the Amazon infrastructure has prompted many cloud developers to design with failure in mind. What is your backup plan to ensure availability during such outages, and how would you handle an infrastructure or platform provider leaving the business altogether? You need a proper migration plan and backups for that. Using multiple availability zones, and reducing your dependency on any single infrastructure or platform, should both be considered.

Security Issues

In the cloud you are largely on your own to secure your applications against attacks, even though infrastructure vendors have their own measures to protect the data, platforms, and applications deployed on top of them. Security vulnerabilities may be higher in IaaS than on a local hard drive, and when using IaaS in its purest form the customer is responsible for implementing his own security solutions for his control objectives. Here Platform as a Service comes in handy: it takes care of the security solutions common to all the software applications deployed on it. Hence the Software as a Service (SaaS) developer does not need to solve the same security and availability issues again for each application deployed on top of the infrastructure. Rather, the platform handles those issues and lets the SaaS developer focus solely on the application itself, instead of mitigating the inherent concerns of the cloud.

Privacy and legal issues

Your business data are legally bound, and care should be taken that deploying them on the cloud does not compromise any legal requirements. In most cases the infrastructure provider has control over your information, and legal and state entities of the host country may access it even without your knowledge - which is highly unlikely when you keep your data on your own servers, in a private cloud, or on a local network of computers.

Vendor lock-in

Coding for a platform reduces vendor lock-in compared to coding for a particular infrastructure. However, we should also note the risk of lock-in by the PaaS provider itself, where proprietary service interfaces or development languages are required. Platforms should adhere to standards, so that migrating between platforms is not much of a task for SaaS developers, unlike migrating between infrastructures.


Rapidly evolving applications often require flexibility, and a Platform as a Service should provide more of it than an Infrastructure as a Service can.

Let's look at an example Platform as a Service, and see how it provides the required flexibility. WSO2 Carbon incorporates accepted standards as a lean middleware platform, and since WSO2 Stratos is the WSO2 Carbon platform as a service, it inherits the advantages of the award-winning WSO2 Carbon. Above all, both WSO2 Carbon and WSO2 Stratos are open source and free to extend, with no hidden licensing fees; this eliminates the fear of being locked in before trying, while providing the flexibility that sophisticated enterprise SaaS developers need.

Loss of Control / Freedom

As we move towards the cloud, in many cases the cloud providers themselves decide what services to offer and which services will be compatible with their offering. Unlike with in-house servers, as you move your data centers and servers to the cloud you largely cede control over them to the cloud providers, not to mention the privacy and security risks that may follow. A third party ends up controlling your applications.

The vendor of the IaaS or the platform controls which applications are provided or supported, and even whether external developers may write new applications for their infrastructure or platform. A platform that facilitates extensibility of the cloud comes to the rescue here. If the platform supports installing or deploying multiple software applications, and configuring and updating the existing ones, that mitigates the above shortcoming to a considerable extent.

Infrastructure expertise

When software is developed with a particular infrastructure in mind, it often requires expertise specific to that infrastructure provider, be it Amazon EC2 or Rackspace. Migrating to another infrastructure provider will need code-level changes in many cases. So while enjoying the fruits of the scalability and the pay-as-you-go model of the cloud, the developer carries an additional overhead. Platform as a Service providers invest in this area by taking the need to code for the underlying infrastructure upon themselves, away from the SaaS developers.

More Portability

While some PaaS offerings such as Windows Azure rely completely on their own infrastructure, platforms including WSO2 Stratos are designed to run on any infrastructure. In this case, since the infrastructure-level portability issues are handled by the PaaS layer, SaaS developers are freed from the burden of coding for multiple infrastructures. This is analogous to the well-known programming challenge of coding for multiple operating system APIs.

An extra bit of work

Existing applications that work amazingly well on standard computers or servers now have to work on the new cloud model. Are they cloud-ready? In other words, can they work over the cloud as they worked in the non-cloud environment? Typically, an application that was written without the cloud in mind needs some extra work before it can utilize the cloud. Coding a new application with the cloud in mind is a different consideration altogether, which is where the SaaS model comes in. Porting existing applications to a cloud infrastructure, however, needs know-how on elasticity, load balancing, and auto-scaling - the fruits of the cloud. Platform as a Service vendors promise a smooth deployment of your existing applications onto their platform, with little or no effort.

Wednesday, June 29, 2011

[31/May/2006 - 06/Sep/2010]

[28th June, 2011 - Tuesday; BMICH, Colombo, Sri Lanka] 
So we had our academic convocation. It was really pleasant to recall university life, and to be awarded the Bachelor of the Science of Engineering degree, majoring in Computer Science & Engineering, with a first class and a GPA of 3.80 out of 4.20. Nothing much to complain about.

I am really thankful to the teachers and lecturers who were/are always with me. Time - Today's investment; tomorrow's insurance. My special thanks should go to my Advanced level teachers, the ones who taught me Math, Physics, and Chemistry.
General Convocation - It reminded me of my school prize givings.. :) Now I miss the University of Moratuwa, as I missed Royal College.. :( Thank you my country for giving me the opportunity to study at THE BEST places, always.. ♥ I am proud of you.. :) //End-of-yet-another-chapter

Saturday, June 25, 2011

Being a mail star.. :D

So there was a huge collection of unread mail. Today I took some time to read through thousands of them. It feels so robotic, having an inbox with a few hundred thousand mails - 80% from IT groups, svn, etc., 10% fun mail bulk-sent by friends, 5% spam/scam/hoax, and 5% other group mail. Almost no personal mail at all. Despite having proper mail filters, it seems I have also long missed some interesting mails with valuable information. Nothing personal, though.

The Arpanet dialogues, dated back to 1975 - this archive of Arpanet dialogues provides the initial conversations between Marcel Broodthaers, Jane Fonda, Ronald Reagan, and Edward Said, and between Samir Amin, Steve Biko, Francis Fukuyama, and Minoru Yamasaki. It is good to notice that Sri Lanka was mentioned in their first conversation. :)

At an interview, 
The interviewer says, "Our work environment expects you to work 24/7. We do not provide you with any casual leave, unless given medical evidence or a police report. You should also be ready to work on holidays. You should be on time to office, and should always be strictly in formal kit." 
The interviewee smiles, "Sir, I suit your job well."
"Are you confident? Why do you say so?" asks the interviewer.
"Yes Sir, I am a masochist."
"There! You are selected"..

My mail box is funny.. :D

Sunday, June 19, 2011

Benchmarking Appserver remotely - ab, nohup, ec2, and more ..

This time we wanted to benchmark WSO2 Stratos Application Server instances running on two Amazon EC2 instances, both proxied by WSO2 Load Balancer. Here we have used two setups - one with only load balancing, and one with load balancing and auto-scaling.

We picked Java Bench (a clone of Apache Bench - ab) to benchmark the application server running over the two Amazon EC2 instances. It was a long-running test, driven from yet another EC2 instance, and it lasted for more than 30 hours. We were able to benchmark the setup and also notice a few interesting things along the way.
One requirement here is that the test should keep running even after we close the connection to the remote instance it runs on.

Now we run the test by generating a load of web service calls against a service deployed on the Application Server. The service is a simple adder service.

The request payload is stored as request.xml.

From the two directories cloud1 and cloud2 on the load-generating instance, run:
nohup ab -p request.xml -n 10000 -c 200 -k -H "SOAPAction: multiply" -T "application/soap+xml; charset=UTF-8" &

nohup ab -p request.xml -n 10000 -c 200 -k -H "SOAPAction: multiply" -T "application/soap+xml; charset=UTF-8" &
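As a quick reference, here is what each ab option in the commands above does (a comment-only summary):

```shell
# ab option summary for the commands above:
#   -p request.xml   POST, using the file request.xml as the request body
#   -n 10000         total number of requests to perform
#   -c 200           number of concurrent requests
#   -k               use HTTP KeepAlive (reuse connections)
#   -H "SOAPAction: multiply"
#                    add a custom header to every request
#   -T "application/soap+xml; charset=UTF-8"
#                    Content-Type header for the POSTed data
```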

Here nohup comes in handy: even after you terminate the connection to the remote host, the test will continue, unless you find the process and kill it (with kill -9 if necessary).
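To later find and stop the detached test from a fresh login, a minimal sketch (assuming the same ab command line as above; the pgrep pattern is illustrative):

```shell
# Find the pid of the running test by matching its command line;
# prints nothing if no such process exists:
pgrep -f "ab -p request.xml"

# Stop the test only if it is actually running:
pid=$(pgrep -f "ab -p request.xml" | head -n 1)
if [ -n "$pid" ]; then
  kill "$pid"      # fall back to kill -9 only if this is ignored
fi
```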

You can view the nohup output in each of the directories cloud1 and cloud2 above using tail -f nohup.out 

But since ab itself can't handle such a bulk of requests at once, it may fail as below, timing out for the specific requests.
Benchmarking (be patient)
apr_poll: The timeout specified has expired (70007)

So let's create a shell script instead, to avoid handing all the requests - say, a million of them - to ab at once.

while true; do
  ab -p request4.xml -n 4000 -c 200 -k -H "SOAPAction: multiply" -T "application/soap+xml; charset=UTF-8"
  sleep 3
  ab -p request4.xml -n 4000 -c 200 -k -H "SOAPAction: multiply" -T "application/soap+xml; charset=UTF-8"
  sleep 3
done

Save the above script. Now, by running 
nohup sh &
you can start the load test for both the instances.

ubuntu@ip-10-120-61-234:~$ nohup sh &
[1] 6512
ubuntu@ip-10-120-61-234:~$ nohup: ignoring input and appending output to `nohup.out'

Here, 6512 indicates the pid of the script, which is the process driving the ab benchmarking runs.

You can watch the nohup output for the above script, with the results from the two instances alternating with a gap of 3 seconds. The time interval between two chunks of requests against the same instance is therefore 6 seconds.
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd,
Licensed to The Apache Software Foundation,

Benchmarking (be patient)
Completed 400 requests
Completed 800 requests
Completed 1200 requests
Completed 1600 requests
Completed 2000 requests
Completed 2400 requests
Completed 2800 requests
Completed 3200 requests
Completed 3600 requests
Completed 4000 requests
Finished 4000 requests

Server Software:        WSO2
Server Hostname:
Server Port:            80

Document Path:          /services/HelloService
Document Length:        260 bytes

Concurrency Level:      200
Time taken for tests:   25.772 seconds
Complete requests:      4000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    0
Total transferred:      1652000 bytes
Total POSTed:           2188000
HTML transferred:       1040000 bytes
Requests per second:    155.20 [#/sec] (mean)
Time per request:       1288.623 [ms] (mean)
Time per request:       6.443 [ms] (mean, across all concurrent requests)
Transfer rate:          62.60 [Kbytes/sec] received
                        82.91 kb/s sent
                        145.50 kb/s total

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1    2   5.3      1      81
Processing:     7 1256 711.9   1290    4351
Waiting:        6 1256 711.9   1290    4351
Total:          7 1258 711.7   1293    4363

Percentage of the requests served within a certain time (ms)
  50%   1293
  66%   1696
  75%   1883
  80%   1957
  90%   2165
  95%   2292
  98%   2416
  99%   2447
 100%   4363 (longest request)
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd,
Licensed to The Apache Software Foundation,

Benchmarking (be patient)
Completed 400 requests
Completed 800 requests
Completed 1200 requests
Completed 1600 requests
Completed 2000 requests
Completed 2400 requests
Completed 2800 requests
Completed 3200 requests
Completed 3600 requests
Completed 4000 requests
Finished 4000 requests

Server Software:        WSO2
Server Hostname:
Server Port:            80

Document Path:          /services/HelloService
Document Length:        260 bytes

Concurrency Level:      200
Time taken for tests:   20.210 seconds
Complete requests:      4000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    0
Total transferred:      1652000 bytes
Total POSTed:           2188000
HTML transferred:       1040000 bytes
Requests per second:    197.92 [#/sec] (mean)
Time per request:       1010.486 [ms] (mean)
Time per request:       5.052 [ms] (mean, across all concurrent requests)
Transfer rate:          79.83 [Kbytes/sec] received
                        105.73 kb/s sent
                        185.55 kb/s total

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1    1   1.1      1      14
Processing:     6  987 577.5    990    3700
Waiting:        6  987 577.5    990    3700
Total:          6  988 577.4    990    3700

Percentage of the requests served within a certain time (ms)
  50%    990
  66%   1292
  75%   1468
  80%   1570
  90%   1782
  95%   1907
  98%   1986
  99%   2013
 100%   3700 (longest request)
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd,
Licensed to The Apache Software Foundation,
This nohup.out contains the useful load testing results.
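Since nohup.out accumulates the output of many runs, a small pipeline can summarize it. This is a sketch that averages the mean-throughput lines shown above (assuming at least one run has completed):

```shell
# Average the "Requests per second" figures across all completed ab runs
# recorded in nohup.out; prints nothing if no run has finished yet:
grep "Requests per second" nohup.out 2>/dev/null |
  awk '{sum += $4; n++} END {if (n) printf "%.2f req/s over %d runs\n", sum / n, n}'
```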

WSO2 Load Balancer

WSO2 Load Balancer (LB) is yet another upcoming product from WSO2. It is currently a stripped-down version of WSO2 Enterprise Service Bus (ESB), containing the load balancing and auto-scaling features of the ESB. Eventually, tenant-aware load balancing components and other load-balancer-specific features will be added, making Load Balancer an attractive new product. WSO2 Load Balancer is to be used as the load balancer for the WSO2 Stratos 1.5.0 public cloud deployment - StratosLive.

WSO2 Load Balancer profile can be found here (trunk) and here (branch-1.0.0) in WSO2 SVN repository. One of the recent release candidates of WSO2 Load Balancer can be found here. Wait for yet another fascinating product from WSO2.

Fixing 'Too many open files' issue in Load Balancer

When load balancing a production system like Stratos under a heavy load of requests, or while load testing it, you may face a 'Too many open files' issue as below.

[2011-06-09 20:48:31,852]  WARN - HttpCoreNIOListener System may be unstable: IOReactor encountered a checked exception : Too many open files Too many open files
    at Method)
    at org.apache.http.impl.nio.reactor.DefaultListeningIOReactor.processEvent(
    at org.apache.http.impl.nio.reactor.DefaultListeningIOReactor.processEvents(
    at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(
    at org.apache.synapse.transport.nhttp.HttpCoreNIOListener$

This is a common exception in the load balancer when the maximum number of open files allowed (the ulimit) is set too low.
You can check the current value with ulimit -Sn. By default it can be as low as 1024 on your desktop, server, or EC2 instance. You can fix the above exception by raising the ulimit to a higher value (655350 here).
ulimit -n 655350
This will fix the above issue.
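Note that ulimit -n only raises the limit for the current shell session. To keep it across logins on a typical Linux system using PAM, the limit can be recorded in /etc/security/limits.conf (the user name and values below are illustrative):

```shell
# Entries of this form in /etc/security/limits.conf persist the limit
# for the given user across login sessions:
#   ubuntu  soft  nofile  655350
#   ubuntu  hard  nofile  655350
# After starting a fresh login session, verify the limits in effect:
ulimit -Sn   # soft limit
ulimit -Hn   # hard limit
```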

Setting up Stratos locally on your home computer is just a few steps away ..

Now you can check out the trunk code of Carbon from the repository, build it, and configure WSO2 Stratos in a few steps.
Check out
Run mvn clean install from the carbon directory.

Set up the following in ~/.bashrc (change the values accordingly for the email addresses, the carbon directory, the transport sender, and the other information given below):
# Stratos setup constants
export stratos_local_ip=""

export CARBON_DIR="/home/pradeeban/trunk/stratos/carbon"
export STRATOS_DIR=$CARBON_DIR/stratos_home
export SSO_ENABLED=true
export CREATE_DB=true
export STRATOS_MAIL_TRANSPORT=' ********.***.***25 false false '

If you are going to use the Stratos binaries instead, you can use the downloaded packs, usually available from the builders online (find it here, for the Stratos-1.5.1 binaries) for the trunk or development branch version, or from the download area for a released version.

Place the downloaded zips into a location, and also define that location in the .bashrc profile.
export PACKS_DIR="/home/pradeeban/Downloads"

Now you may run the set up script as above.

Running Stratos Set up script - carbon/build/

Add the following lines to your /etc/hosts file

Start all the Stratos services
from $stratos_home
sh start all

Stop all the Stratos services
from $stratos_home
sh stop all

You can also start and stop the services individually, or as a chosen subset of all the available services.

By pointing your browser at the Stratos Manager url, you can access the Stratos Manager. Please note that in this configuration we have enabled SSO. You can also set up the Stratos services with SSO disabled, by setting export SSO_ENABLED=false in the .bashrc profile above. 

Since we have enabled SSO as above, once you have logged in to one of the services you can simply click through to the other services without logging in to each of them separately. The urls set up above point to the respective services, such as WSO2 Identity Server as a Service and WSO2 Governance Registry as a Service.

Await the official Stratos-1.5 release soon!

Sunday, June 12, 2011

WSO2 Carbon 3.2.0 ~ New releases from WSO2

Check out the latest WSO2 product releases - with lots of new features and bug fixes. Two new products - WSO2 Complex Event Processing Server (CEP) and WSO2 Message Broker (MB) - make their debut at the stable version 1.0.0, while the other products have also seen major improvements with this new release of Carbon 3.2.0, the SOA middleware platform underlying the WSO2 products.

Await the new Stratos release - Stratos 1.5.0 - coming soon, with improvements to the WSO2 cloud platform, along with the WSO2 Carbon 3.2.x based products offered as services over the cloud. WSO2 Stratos is the WSO2 Carbon platform as a service, catering to your organization's needs. You can use the publicly hosted WSO2 Stratos cloud platform, host it on your own servers as a private cloud, or go for hybrid options.

Wednesday, June 8, 2011

How to close an open port in unix/linux environments

It is not uncommon to see an exception like " Address already in use" when running programs. Sometimes this exception occurs even after the program that was using the particular address/port has been terminated. That is mostly due to an improperly terminated application: the port is still assigned to the process of that program. How can we release the port and let a new program use it, without address-already-in-use exceptions being thrown?

First let's find the process attached to the program running on the particular port.
pradeeban@pradeeban:~/LB/wso2lb-1.0.0/bin$ lsof | grep 8243
notificat  1802  pradeeban  txt       REG                8,4     53296   13638243 /usr/lib/gnome-panel/notification-area-applet
firefox-b  1821  pradeeban   52u     IPv4             896036       0t0        TCP pradeeban.local:43306->pradeeban.local:8243 (ESTABLISHED)
java      10500  pradeeban  113u     IPv6             867578       0t0        TCP pradeeban.local:8243->pradeeban.local:45898 (CLOSE_WAIT)
java      10500  pradeeban  300u     IPv6             867966       0t0        TCP pradeeban.local:8243->pradeeban.local:46006 (CLOSE_WAIT)
java      10500  pradeeban  471u     IPv6             858667       0t0        TCP *:8243 (LISTEN)
Here, lsof lists the open files. From the above command we notice that the process with pid 10500 is listening on port 8243. (Note that plain grep also matches unrelated lines, such as the inode number 13638243 in the first line.)

If you know the name of the running application process, or part of it, you can instead use
ps -xa | grep <part-of-the-name>
to get the pid.

Now we can easily release port 8243 by killing the relevant process (the process with pid 10500).
pradeeban@pradeeban:~/LB/wso2lb-1.0.0/bin$ kill 10500
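To confirm that the port was actually released after the kill, a quick check (no listing from lsof means nothing is bound to the port any more):

```shell
# lsof -i :<port> filters by the port directly, avoiding the accidental
# matches that plain grep over the full lsof listing can produce.
# The echo runs when the port is already free, so this succeeds either way:
lsof -i :8243 || echo "port 8243 is free"
```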