Saturday, June 30, 2012

Self help is..

So we have completed the first half of 2012 and have just stepped into the second half. Mid-year resolutions, anyone? I am not much into new year resolutions. Except for the resolutions I made in 2004 and 2010, no new year resolution went as planned. I rather prefer resolutions made on a random day!
Spending time effectively with the computer has always been at the top of my resolutions.

1) Contributing more to open source, during free time.
This will mostly be open source development, but it will also include blogging, community interaction, and evangelization. I will surely be more aggressive with this.

If you are into software development, open source projects give you a second life. I would encourage you to contribute. Blogging, or creating a positive online presence, would suit everyone. One can be completely or partially anonymous online, while contributing positively to the world's knowledge.

2) Spending time effectively online.
I am fine with reading new stuff that is totally unrelated, or listening to music by an artist I had not heard of before. However, my focus is on limiting the time I waste idling.

3) Learning something new.
I recently started learning Chinese from the Internet. I guess that was a failed attempt. Why? I didn't have a proper reason to back up my motivation to learn Chinese. I will focus on learning Portuguese instead. Still, even from my failed attempt, I have learned a few interesting facts about Chinese.


4) No L4 activities, for any reason.
As discussed by Stephen Covey, L4 activities are the tasks that are neither important nor urgent. Spending time on them is a waste. Let's use that time for some L2 activities instead (tasks that are not urgent, but important)!

Facing a problem and trying to solve it, instead of finding distractions to run away from it, might be the ideal approach. If that is difficult to achieve, and there is a real need to stay distracted from the issue, I recommend getting involved in other, healthy activities instead. When we get even a mild fever, we seek the assistance of medicine and doctors, yet we always tend to underestimate the health of the mind. We take leave for physical illness, whilst silently ignoring the wounds of the heart. If you ask me, taking a leave for being depressed is perfectly fine. As I might have mentioned somewhere before, depression is an illness of the mind, and mental health is equally important as physical health, or even more so.

5) Mixing the stuff up!
I have often felt sorry for people whose jobs consist of routine tasks that involve no creative thinking or change from a predefined agenda. This mostly relates to clerical jobs, call centers, or jobs where the bulk of the employees are treated as cheap labour carrying out the orders of a big guy. In recent years, with the rise of IT, many concerns have been raised that programmers are treated the same way. Human nature is to avoid repeating uninteresting tasks, with the exception of something addictive. An addict may find interesting something that others do not. One often gets addicted to something when one wants to be distracted from one's mainstream life. The addiction can be the excessive use of alcohol, drugs, porn, or whatever. Addiction provides short-term relief from the pain and pressure of real life (let's call it the first life), but it leads to feelings of guilt, inefficiency, and low self-esteem, and the vicious cycle continues. However, mild addictions such as an addiction to music or movies may not be harmful at all.

A random improvement, or attending a random L2 event, is always fun.

6) Listening to myself
I am going to be a good listener to myself. :D

Friday, June 22, 2012

No more hoaxes online, please!

ATTENTION!!!!!!!!! do not join the group currently on facebook with the title "becoming a father or a mother was the greatest gift of my life." it is a group of paedophiles trying to access your photos, this was on fox news at 5 last night. please copy and post!!! lets keep children safe (take a minute to copy and paste)
Do your Facebook friends spread hoaxes? Knowingly or unknowingly, people spread hoaxes online. Why do they spread them? Except for the one who initially created or engineered the hoax, the folks who spread it often have good motives, or are at least supporting what they believe to be morally and ethically correct. Chain letters are one common form, which has now mostly transformed into Facebook status updates, messages, and wall posts.

I used to reply by just posting the relevant Snopes or Hoax-Slayer link whenever I saw an annoying hoax online. But whenever I did that, I found that the friend who shared it felt offended and started to defend the hoax. In many cases, the folks who post this stuff may be more educated or experienced than you; sometimes they may even be your superiors at the office. They are knowledgeable; they just don't have the time, need, or courage to confirm the validity of the message before posting it. Their motivation: better to be careful, in case the hoax is true.

Eventually, I have found an effective way of replying.
Thanks for your message. But seems that message is just a hoax, according to snopes. http://www.snopes.com/computer/internet/greatestgift.asp But pls keep us updated.

I nowadays use the above message format (with the link replaced by the relevant link confirming that the status or message is just a hoax). To my surprise, this has proven to be extremely effective.

Here the posters are actually kind-hearted (refer to the first paragraph, which contains the hoax warning about paedophiles), hence I thank them for their care. Then I suggest that the message might be a hoax. Finally, I finish the message with "Pls keep us updated!" This avoids the feeling of being taught or advised something, which everyone hates. I have seen many of the posters indeed copy my reply above to the source where they initially found the hoax. This way, the counter-message spreads at the same speed as the hoax, and the responses and facts are propagated both upstream and downstream. Everyone is safe!

Let's have a nice stay online! No worries! 

Saturday, June 16, 2012

Moments with Twitter - II

This post continues from my previous post, Moments with Twitter.. I just noticed the "Embed This Tweet" option provided by Twitter, which copies a tweet with its formatting. However, I have simply copy-pasted the tweets here.

#Google #hoaxes - Really funny. http://en.wikipedia.org/wiki/Google%27s_hoaxes Jun 20, 2010  
Google never fails to amuse its users with its easter eggs and April Fools' hoaxes.

Seems a nice collection: http://www.alldissertations.com/univ.php Jun 21, 2010  
This is indeed a useful collection, with theses from multiple academic institutes.

sleepless night at Paris.. ;) Dec 08, 2010
This tweet is indeed remarkable. This was my first tweet from abroad, during my visit to Paris for SoCPar2010.

wanna discuss more about Google Summer of Code with other enthusiasts and students from Sri Lanka? Join #gsoc-lk at irc://irc.freenode.net Feb 10, 2011  
We created an IRC channel (along with the mailing list, which is quite popular among the students) for Sri Lankan students to discuss GSoC among themselves. During the Summer of Code student application period, the channel gets a fair bit of traffic, while remaining quiet during the rest of the year.

If #Microsoft had Invented ... http://t.co/3GiwBgM ;) Jun 25, 2011  
As Linux fans, we always enjoy a good set of jokes about Microsoft. Not that we hate Microsoft; it is just that we too love fun and a good laugh.

You and Your Research - http://t.co/I71uUkKA Oct 16, 2011
This is again a good read for future researchers.

An interesting journey around the world.. http://t.co/qgVm0dt1 Jan 14, 2012  
This includes a nice set of photos taken around the world by an artist.

Google Summer of Code Workshop in Poland - http://t.co/BL2UHNZ7 #GSoC Mar 29, 2012
This workshop in Poland reminded me of the Google Summer of Code awareness sessions we held all over the country: five for GSoC 2012, and two for GSoC 2011.

Anti patterns - http://t.co/DswnzcUD May 29, 2012  
Learning anti-patterns is more fun than learning patterns; it is more like learning from your own mistakes. It is also suggested that one should learn anti-patterns before patterns, for the efficient use of the patterns. In any case, anti-patterns are fun.

Friday, June 15, 2012

Issues that you may encounter during the migration to Cassandra using DataStax/Sqoop and the fixes.


My previous blog post, Moving data from mysql to cassandra, discusses how I migrated my database from MySQL to Cassandra. In this post, I will discuss some issues that I encountered as I started to use DataStax, and how easily they can be fixed.

If you fail to indicate the primary key to sqoop, the below exception will be thrown:
ERROR tool.ImportTool: Error during import: No primary key could be found for table Category. Please specify one with --split-by or perform a sequential import with '-m 1'.
Solution: As the error message itself indicates, specify a split column with --split-by, or perform a sequential import with '-m 1'.
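For instance, either of the below fixes works; a sketch, reusing the MySQL connection settings of my example migration (see my previous post for the full context):
$ bin/dse sqoop import --connect jdbc:mysql://127.0.0.1:3306/shopping_cart_db --username root --password root --table Category --split-by categoryName
or, with a single sequential mapper,
$ bin/dse sqoop import --connect jdbc:mysql://127.0.0.1:3306/shopping_cart_db --username root --password root --table Category -m 1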

Exceptions similar to the below will be thrown if you try to use sqoop as above without properly starting Cassandra.
Exception in thread "main" java.io.IOException: Failed to retrieve RMIServer stub: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: pradeeban; nested exception is:
    java.net.ConnectException: Connection refused]
    at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:338)
    at javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:248)
    at org.apache.cassandra.tools.NodeProbe.connect(NodeProbe.java:141)
    at org.apache.cassandra.tools.NodeProbe.(NodeProbe.java:111)
    at com.datastax.bdp.tools.DseTool.(DseTool.java:136)
    at com.datastax.bdp.tools.DseTool.main(DseTool.java:562)
Caused by: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: pradeeban; nested exception is:
    java.net.ConnectException: Connection refused]
    at com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:101)
    at com.sun.jndi.toolkit.url.GenericURLContext.lookup(GenericURLContext.java:185)
    at javax.naming.InitialContext.lookup(InitialContext.java:392)
    at javax.management.remote.rmi.RMIConnector.findRMIServerJNDI(RMIConnector.java:1886)
    at javax.management.remote.rmi.RMIConnector.findRMIServer(RMIConnector.java:1856)
    at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:255)
    ... 5 more
Caused by: java.rmi.ConnectException: Connection refused to host: pradeeban; nested exception is:
    java.net.ConnectException: Connection refused
    at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:601)
    at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:198)
    at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184)
    at sun.rmi.server.UnicastRef.newCall(UnicastRef.java:322)
    at sun.rmi.registry.RegistryImpl_Stub.lookup(Unknown Source)
    at com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:97)
    ... 10 more
Caused by: java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
    at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
    at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
    at java.net.Socket.connect(Socket.java:529)
    at java.net.Socket.connect(Socket.java:478)
    at java.net.Socket.(Socket.java:375)
    at java.net.Socket.(Socket.java:189)
    at sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(RMIDirectSocketFactory.java:22)
    at sun.rmi.transport.proxy.RMIMasterSocketFactory.createSocket(RMIMasterSocketFactory.java:128)
    at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:595)
    ... 15 more
Unable to run : jobtracker not found
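Solution: Start DataStax Enterprise as an Analytics node before running sqoop; as discussed in my previous post, the -t flag starts the Hadoop JobTracker and TaskTracker processes that sqoop needs here.
$ sudo bin/dse cassandra -t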




If you try to run the above migration example once more, it will complain as below, and the migration will halt.
12/06/15 15:39:56 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
12/06/15 15:39:56 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
12/06/15 15:39:56 INFO tool.CodeGenTool: Beginning code generation
12/06/15 15:39:56 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `Category` AS t LIMIT 1
12/06/15 15:39:56 INFO orm.CompilationManager: HADOOP_HOME is /home/pradeeban/programs/dse-2.1/resources/hadoop/bin/..
Note: /tmp/sqoop-pradeeban/compile/5ddc038aef3f4db8ed8f643cdba0786d/Category.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
12/06/15 15:39:57 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-pradeeban/compile/5ddc038aef3f4db8ed8f643cdba0786d/Category.jar
12/06/15 15:39:59 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import
12/06/15 15:39:59 INFO mapreduce.ImportJobBase: Beginning import of Category
12/06/15 15:40:00 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
12/06/15 15:40:01 INFO mapred.JobClient: Cleaning up the staging area cfs:/tmp/hadoop-root/mapred/staging/pradeeban/.staging/job_201206151241_0006
12/06/15 15:40:01 ERROR security.UserGroupInformation: PriviledgedActionException as:pradeeban cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory Category already exists
12/06/15 15:40:01 ERROR tool.ImportAllTablesTool: Encountered IOException running import job: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory Category already exists

Solution: Make sure to delete the output directory "Category", along with the source files generated in the working directory, before running the import once more. This is because Hadoop refuses to overwrite an existing output directory.

The output directory can be deleted as below.
$ bin/dse hadoop dfs -rmr Category
12/06/15 15:41:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Deleted cfs:/user/pradeeban/Category
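The Java source file that sqoop generated can be removed in the usual way; a sketch, assuming the import was run for the table Category from the current working directory:
$ rm Category.java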

Further assistance on Hadoop troubleshooting can be found here.

Thursday, June 14, 2012

Moving data from mysql to cassandra

I had a relational database that I wanted to migrate to Cassandra. Cassandra's sstableloader provides an option to load existing data from flat files into a Cassandra ring. Hence it can be used as a way to migrate data from relational databases into Cassandra, as most relational databases let us export their data into flat files.

sqoop gives us the option to do this effectively. Interestingly, DataStax Enterprise provides everything we want in the big data space as a single package, including Cassandra, Hadoop, Hive, Pig, Sqoop, and Mahout, which comes in handy in this case.

Under the resources directory, you may find the cassandra, dse, hadoop, hive, log4j-appender, mahout, pig, solr, sqoop, and tomcat specific configurations.
For example, from resources/hadoop/bin, you may format the Hadoop name node as usual, using
 ./hadoop namenode -format

* Download and extract DataStax Enterprise binary archive (dse-2.1-bin.tar.gz).
* Follow the documentation, which is also available as a PDF.
* Migrating a relational database to Cassandra is documented, and has also been blogged about.
* Before starting DataStax, make sure that JAVA_HOME is set. It can also be set directly in conf/hadoop-env.sh.
* Include the connector to the relational database in a location reachable by sqoop.
I put mysql-connector-java-5.1.12-bin.jar under resources/sqoop.
* Set the environment (source the script, so the variables persist in your shell).
$ source bin/dse-env.sh
* Start DataStax Enterprise, as an Analytics node.
$ sudo bin/dse cassandra -t
where cassandra starts the Cassandra process plus CassandraFS, and the -t option starts the Hadoop JobTracker and TaskTracker processes.
If you start without the -t flag, the below exception will be thrown during the further operations discussed below.

No jobtracker found
Unable to run : jobtracker not found
 

Hence, do not miss the -t flag.
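You can also confirm that the JobTracker is running before importing; a sketch, assuming the dsetool utility bundled with this DSE version supports the jobtracker command:
$ bin/dsetool jobtracker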

* Start cassandra-cli to view the Cassandra keyspaces; you will be able to see the migrated data there once you run sqoop as given below.
$ bin/cassandra-cli -host localhost -port 9160
Confirm from the CLI that it is connected to the test cluster created on port 9160, using the below command.
[default@unknown] describe cluster;
Cluster Information:
   Snitch: com.datastax.bdp.snitch.DseDelegateSnitch
   Partitioner: org.apache.cassandra.dht.RandomPartitioner
   Schema versions: 
f5a19a50-b616-11e1-0000-45b29245ddff: [127.0.1.1]

If you have missed mentioning the host/port (starting the CLI by just bin/cassandra-cli), or have given them wrong, you will get the response "Not connected to a cassandra instance."
$ bin/dse sqoop import --connect jdbc:mysql://127.0.0.1:3306/shopping_cart_db --username root --password root --table Category --split-by categoryName --cassandra-keyspace shopping_cart_db --cassandra-column-family Category_cf --cassandra-row-key categoryName --cassandra-thrift-host localhost --cassandra-create-schema
The above command migrates the table "Category" of shopping_cart_db, with the primary key categoryName, into a Cassandra keyspace named shopping_cart_db, with categoryName as the Cassandra row key. You may use the MySQL-specific --direct option, which is faster. In my command above, everything runs on localhost. For reference, the Category table has the following structure in MySQL:

+--------------+-------------+------+-----+---------+-------+
| Field        | Type        | Null | Key | Default | Extra |
+--------------+-------------+------+-----+---------+-------+
| categoryName | varchar(50) | NO   | PRI | NULL    |       |
| description  | text        | YES  |     | NULL    |       |
| image        | blob        | YES  |     | NULL    |       |
+--------------+-------------+------+-----+---------+-------+
This also creates the respective Java class (Category.java) inside the working directory.
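Once the import completes, the migrated rows can be verified from cassandra-cli; a minimal sketch, using the keyspace and column family names from the command above:
[default@unknown] use shopping_cart_db;
[default@shopping_cart_db] list Category_cf;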


To import all the tables in the database instead of a single table:
$ bin/dse sqoop import-all-tables -m 1 --connect jdbc:mysql://127.0.0.1:3306/shopping_cart_db --username root --password root --cassandra-thrift-host localhost --cassandra-create-schema --direct

Here "-m 1" tag ensures a sequential import. If not specified, the below exception will be thrown.
ERROR tool.ImportAllTablesTool: Error during import: No primary key could be found for table Category. Please specify one with --split-by or perform a sequential import with '-m 1'.

To check whether the keyspace has been created:

[default@unknown] show keyspaces;
................
Keyspace: shopping_cart_db:


  Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
  Durable Writes: true
    Options: [replication_factor:1]
  Column Families:
    ColumnFamily: Category_cf
      Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
      Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
      Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type
      Row cache size / save period in seconds / keys to save : 0.0/0/all
      Row Cache Provider: org.apache.cassandra.cache.SerializingCacheProvider
      Key cache size / save period in seconds: 200000.0/14400
      GC grace seconds: 864000
      Compaction min/max thresholds: 4/32
      Read repair chance: 1.0
      Replicate on write: true
      Bloom Filter FP chance: default
      Built indexes: []
      Compaction Strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
.............

[default@unknown] describe shopping_cart_db;
Keyspace: shopping_cart_db:
  Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
  Durable Writes: true
    Options: [replication_factor:1]
  Column Families:
    ColumnFamily: Category_cf
      Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
      Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
      Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type
      Row cache size / save period in seconds / keys to save : 0.0/0/all
      Row Cache Provider: org.apache.cassandra.cache.SerializingCacheProvider
      Key cache size / save period in seconds: 200000.0/14400
      GC grace seconds: 864000
      Compaction min/max thresholds: 4/32
      Read repair chance: 1.0
      Replicate on write: true
      Bloom Filter FP chance: default
      Built indexes: []
      Compaction Strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy



You may also use Hive to view the databases created in Cassandra, in an SQL-like manner.
* Start Hive

$ bin/dse hive

hive> show databases; 
OK
default
shopping_cart_db
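The keyspace can be browsed further from the same Hive session; a sketch, assuming this DSE version exposes the Cassandra column families as Hive tables:
hive> use shopping_cart_db;
hive> show tables;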



When the entire database is imported as above, separate Java classes are created for each of the tables.
$ bin/dse sqoop import-all-tables -m 1 --connect jdbc:mysql://127.0.0.1:3306/shopping_cart_db --username root --password root --cassandra-thrift-host localhost --cassandra-create-schema --direct
12/06/15 15:42:11 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
12/06/15 15:42:11 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
12/06/15 15:42:11 INFO tool.CodeGenTool: Beginning code generation
12/06/15 15:42:11 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `Category` AS t LIMIT 1
12/06/15 15:42:11 INFO orm.CompilationManager: HADOOP_HOME is /home/pradeeban/programs/dse-2.1/resources/hadoop/bin/..
Note: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Category.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
12/06/15 15:42:13 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Category.jar
12/06/15 15:42:13 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import
12/06/15 15:42:13 INFO mapreduce.ImportJobBase: Beginning import of Category
12/06/15 15:42:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
12/06/15 15:42:15 INFO mapred.JobClient: Running job: job_201206151241_0007
12/06/15 15:42:16 INFO mapred.JobClient:  map 0% reduce 0%
12/06/15 15:42:25 INFO mapred.JobClient:  map 100% reduce 0%
12/06/15 15:42:25 INFO mapred.JobClient: Job complete: job_201206151241_0007
12/06/15 15:42:25 INFO mapred.JobClient: Counters: 18
12/06/15 15:42:25 INFO mapred.JobClient:   Job Counters 
12/06/15 15:42:25 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=6480
12/06/15 15:42:25 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
12/06/15 15:42:25 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
12/06/15 15:42:25 INFO mapred.JobClient:     Launched map tasks=1
12/06/15 15:42:25 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
12/06/15 15:42:25 INFO mapred.JobClient:   File Output Format Counters 
12/06/15 15:42:25 INFO mapred.JobClient:     Bytes Written=2848
12/06/15 15:42:25 INFO mapred.JobClient:   FileSystemCounters
12/06/15 15:42:25 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=21419
12/06/15 15:42:25 INFO mapred.JobClient:     CFS_BYTES_WRITTEN=2848
12/06/15 15:42:25 INFO mapred.JobClient:     CFS_BYTES_READ=87
12/06/15 15:42:25 INFO mapred.JobClient:   File Input Format Counters 
12/06/15 15:42:25 INFO mapred.JobClient:     Bytes Read=0
12/06/15 15:42:25 INFO mapred.JobClient:   Map-Reduce Framework
12/06/15 15:42:25 INFO mapred.JobClient:     Map input records=1
12/06/15 15:42:25 INFO mapred.JobClient:     Physical memory (bytes) snapshot=119435264
12/06/15 15:42:25 INFO mapred.JobClient:     Spilled Records=0
12/06/15 15:42:25 INFO mapred.JobClient:     CPU time spent (ms)=630
12/06/15 15:42:25 INFO mapred.JobClient:     Total committed heap usage (bytes)=121241600
12/06/15 15:42:25 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2085318656
12/06/15 15:42:25 INFO mapred.JobClient:     Map output records=36
12/06/15 15:42:25 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87
12/06/15 15:42:25 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 11.4492 seconds (0 bytes/sec)
12/06/15 15:42:25 INFO mapreduce.ImportJobBase: Retrieved 36 records.
12/06/15 15:42:25 INFO tool.CodeGenTool: Beginning code generation
12/06/15 15:42:25 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `Customer` AS t LIMIT 1
12/06/15 15:42:25 INFO orm.CompilationManager: HADOOP_HOME is /home/pradeeban/programs/dse-2.1/resources/hadoop/bin/..
Note: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Customer.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
12/06/15 15:42:25 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Customer.jar
12/06/15 15:42:26 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import
12/06/15 15:42:26 INFO mapreduce.ImportJobBase: Beginning import of Customer
12/06/15 15:42:26 INFO mapred.JobClient: Running job: job_201206151241_0008
12/06/15 15:42:27 INFO mapred.JobClient:  map 0% reduce 0%
12/06/15 15:42:35 INFO mapred.JobClient:  map 100% reduce 0%
12/06/15 15:42:35 INFO mapred.JobClient: Job complete: job_201206151241_0008
12/06/15 15:42:35 INFO mapred.JobClient: Counters: 17
12/06/15 15:42:35 INFO mapred.JobClient:   Job Counters 
12/06/15 15:42:35 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=6009
12/06/15 15:42:35 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
12/06/15 15:42:35 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
12/06/15 15:42:35 INFO mapred.JobClient:     Launched map tasks=1
12/06/15 15:42:35 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
12/06/15 15:42:35 INFO mapred.JobClient:   File Output Format Counters 
12/06/15 15:42:35 INFO mapred.JobClient:     Bytes Written=0
12/06/15 15:42:35 INFO mapred.JobClient:   FileSystemCounters
12/06/15 15:42:35 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=21489
12/06/15 15:42:35 INFO mapred.JobClient:     CFS_BYTES_READ=87
12/06/15 15:42:35 INFO mapred.JobClient:   File Input Format Counters 
12/06/15 15:42:35 INFO mapred.JobClient:     Bytes Read=0
12/06/15 15:42:35 INFO mapred.JobClient:   Map-Reduce Framework
12/06/15 15:42:35 INFO mapred.JobClient:     Map input records=1
12/06/15 15:42:35 INFO mapred.JobClient:     Physical memory (bytes) snapshot=164855808
12/06/15 15:42:35 INFO mapred.JobClient:     Spilled Records=0
12/06/15 15:42:35 INFO mapred.JobClient:     CPU time spent (ms)=510
12/06/15 15:42:35 INFO mapred.JobClient:     Total committed heap usage (bytes)=121241600
12/06/15 15:42:35 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2082869248
12/06/15 15:42:35 INFO mapred.JobClient:     Map output records=0
12/06/15 15:42:35 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87
12/06/15 15:42:35 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 9.3143 seconds (0 bytes/sec)
12/06/15 15:42:35 INFO mapreduce.ImportJobBase: Retrieved 0 records.
12/06/15 15:42:35 INFO tool.CodeGenTool: Beginning code generation
12/06/15 15:42:35 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `OrderEntry` AS t LIMIT 1
12/06/15 15:42:35 INFO orm.CompilationManager: HADOOP_HOME is /home/pradeeban/programs/dse-2.1/resources/hadoop/bin/..
Note: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/OrderEntry.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
12/06/15 15:42:35 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/OrderEntry.jar
12/06/15 15:42:36 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import
12/06/15 15:42:36 INFO mapreduce.ImportJobBase: Beginning import of OrderEntry
12/06/15 15:42:36 INFO mapred.JobClient: Running job: job_201206151241_0009
12/06/15 15:42:37 INFO mapred.JobClient:  map 0% reduce 0%
12/06/15 15:42:45 INFO mapred.JobClient:  map 100% reduce 0%
12/06/15 15:42:45 INFO mapred.JobClient: Job complete: job_201206151241_0009
12/06/15 15:42:45 INFO mapred.JobClient: Counters: 17
12/06/15 15:42:45 INFO mapred.JobClient:   Job Counters 
12/06/15 15:42:45 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=6381
12/06/15 15:42:45 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
12/06/15 15:42:45 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
12/06/15 15:42:45 INFO mapred.JobClient:     Launched map tasks=1
12/06/15 15:42:45 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
12/06/15 15:42:45 INFO mapred.JobClient:   File Output Format Counters 
12/06/15 15:42:45 INFO mapred.JobClient:     Bytes Written=0
12/06/15 15:42:45 INFO mapred.JobClient:   FileSystemCounters
12/06/15 15:42:45 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=21569
12/06/15 15:42:45 INFO mapred.JobClient:     CFS_BYTES_READ=87
12/06/15 15:42:45 INFO mapred.JobClient:   File Input Format Counters 
12/06/15 15:42:45 INFO mapred.JobClient:     Bytes Read=0
12/06/15 15:42:45 INFO mapred.JobClient:   Map-Reduce Framework
12/06/15 15:42:45 INFO mapred.JobClient:     Map input records=1
12/06/15 15:42:45 INFO mapred.JobClient:     Physical memory (bytes) snapshot=137252864
12/06/15 15:42:45 INFO mapred.JobClient:     Spilled Records=0
12/06/15 15:42:45 INFO mapred.JobClient:     CPU time spent (ms)=520
12/06/15 15:42:45 INFO mapred.JobClient:     Total committed heap usage (bytes)=121241600
12/06/15 15:42:45 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2014703616
12/06/15 15:42:45 INFO mapred.JobClient:     Map output records=0
12/06/15 15:42:45 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87
12/06/15 15:42:45 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 9.2859 seconds (0 bytes/sec)
12/06/15 15:42:45 INFO mapreduce.ImportJobBase: Retrieved 0 records.
12/06/15 15:42:45 INFO tool.CodeGenTool: Beginning code generation
12/06/15 15:42:45 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `OrderItem` AS t LIMIT 1
12/06/15 15:42:45 INFO orm.CompilationManager: HADOOP_HOME is /home/pradeeban/programs/dse-2.1/resources/hadoop/bin/..
Note: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/OrderItem.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
12/06/15 15:42:45 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/OrderItem.jar
12/06/15 15:42:46 WARN manager.CatalogQueryManager: The table OrderItem contains a multi-column primary key. Sqoop will default to the column orderNumber only for this job.
12/06/15 15:42:46 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import
12/06/15 15:42:46 INFO mapreduce.ImportJobBase: Beginning import of OrderItem
12/06/15 15:42:46 INFO mapred.JobClient: Running job: job_201206151241_0010
12/06/15 15:42:47 INFO mapred.JobClient:  map 0% reduce 0%
12/06/15 15:42:55 INFO mapred.JobClient:  map 100% reduce 0%
12/06/15 15:42:55 INFO mapred.JobClient: Job complete: job_201206151241_0010
12/06/15 15:42:55 INFO mapred.JobClient: Counters: 17
12/06/15 15:42:55 INFO mapred.JobClient:   Job Counters 
12/06/15 15:42:55 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=5949
12/06/15 15:42:55 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
12/06/15 15:42:55 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
12/06/15 15:42:55 INFO mapred.JobClient:     Launched map tasks=1
12/06/15 15:42:55 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
12/06/15 15:42:55 INFO mapred.JobClient:   File Output Format Counters 
12/06/15 15:42:55 INFO mapred.JobClient:     Bytes Written=0
12/06/15 15:42:55 INFO mapred.JobClient:   FileSystemCounters
12/06/15 15:42:55 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=21524
12/06/15 15:42:55 INFO mapred.JobClient:     CFS_BYTES_READ=87
12/06/15 15:42:55 INFO mapred.JobClient:   File Input Format Counters 
12/06/15 15:42:55 INFO mapred.JobClient:     Bytes Read=0
12/06/15 15:42:55 INFO mapred.JobClient:   Map-Reduce Framework
12/06/15 15:42:55 INFO mapred.JobClient:     Map input records=1
12/06/15 15:42:55 INFO mapred.JobClient:     Physical memory (bytes) snapshot=116674560
12/06/15 15:42:55 INFO mapred.JobClient:     Spilled Records=0
12/06/15 15:42:55 INFO mapred.JobClient:     CPU time spent (ms)=590
12/06/15 15:42:55 INFO mapred.JobClient:     Total committed heap usage (bytes)=121241600
12/06/15 15:42:55 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2014703616
12/06/15 15:42:55 INFO mapred.JobClient:     Map output records=0
12/06/15 15:42:55 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87
12/06/15 15:42:55 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 9.2539 seconds (0 bytes/sec)
12/06/15 15:42:55 INFO mapreduce.ImportJobBase: Retrieved 0 records.
12/06/15 15:42:55 INFO tool.CodeGenTool: Beginning code generation
12/06/15 15:42:55 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `Payment` AS t LIMIT 1
12/06/15 15:42:55 INFO orm.CompilationManager: HADOOP_HOME is /home/pradeeban/programs/dse-2.1/resources/hadoop/bin/..
Note: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Payment.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
12/06/15 15:42:55 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Payment.jar
12/06/15 15:42:56 WARN manager.CatalogQueryManager: The table Payment contains a multi-column primary key. Sqoop will default to the column orderNumber only for this job.
12/06/15 15:42:56 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import
12/06/15 15:42:56 INFO mapreduce.ImportJobBase: Beginning import of Payment
12/06/15 15:42:56 INFO mapred.JobClient: Running job: job_201206151241_0011
12/06/15 15:42:57 INFO mapred.JobClient:  map 0% reduce 0%
12/06/15 15:43:05 INFO mapred.JobClient:  map 100% reduce 0%
12/06/15 15:43:05 INFO mapred.JobClient: Job complete: job_201206151241_0011
12/06/15 15:43:05 INFO mapred.JobClient: Counters: 17
12/06/15 15:43:05 INFO mapred.JobClient:   Job Counters 
12/06/15 15:43:05 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=5914
12/06/15 15:43:05 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
12/06/15 15:43:05 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
12/06/15 15:43:05 INFO mapred.JobClient:     Launched map tasks=1
12/06/15 15:43:05 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
12/06/15 15:43:05 INFO mapred.JobClient:   File Output Format Counters 
12/06/15 15:43:05 INFO mapred.JobClient:     Bytes Written=0
12/06/15 15:43:05 INFO mapred.JobClient:   FileSystemCounters
12/06/15 15:43:05 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=21518
12/06/15 15:43:05 INFO mapred.JobClient:     CFS_BYTES_READ=87
12/06/15 15:43:05 INFO mapred.JobClient:   File Input Format Counters 
12/06/15 15:43:05 INFO mapred.JobClient:     Bytes Read=0
12/06/15 15:43:05 INFO mapred.JobClient:   Map-Reduce Framework
12/06/15 15:43:05 INFO mapred.JobClient:     Map input records=1
12/06/15 15:43:05 INFO mapred.JobClient:     Physical memory (bytes) snapshot=137998336
12/06/15 15:43:05 INFO mapred.JobClient:     Spilled Records=0
12/06/15 15:43:05 INFO mapred.JobClient:     CPU time spent (ms)=520
12/06/15 15:43:05 INFO mapred.JobClient:     Total committed heap usage (bytes)=121241600
12/06/15 15:43:05 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2082865152
12/06/15 15:43:05 INFO mapred.JobClient:     Map output records=0
12/06/15 15:43:05 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87
12/06/15 15:43:05 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 9.2642 seconds (0 bytes/sec)
12/06/15 15:43:05 INFO mapreduce.ImportJobBase: Retrieved 0 records.
12/06/15 15:43:05 INFO tool.CodeGenTool: Beginning code generation
12/06/15 15:43:05 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `Product` AS t LIMIT 1
12/06/15 15:43:06 INFO orm.CompilationManager: HADOOP_HOME is /home/pradeeban/programs/dse-2.1/resources/hadoop/bin/..
Note: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Product.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
12/06/15 15:43:06 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Product.jar
12/06/15 15:43:06 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import
12/06/15 15:43:06 INFO mapreduce.ImportJobBase: Beginning import of Product
12/06/15 15:43:07 INFO mapred.JobClient: Running job: job_201206151241_0012
12/06/15 15:43:08 INFO mapred.JobClient:  map 0% reduce 0%
12/06/15 15:43:16 INFO mapred.JobClient:  map 100% reduce 0%
12/06/15 15:43:16 INFO mapred.JobClient: Job complete: job_201206151241_0012
12/06/15 15:43:16 INFO mapred.JobClient: Counters: 18
12/06/15 15:43:16 INFO mapred.JobClient:   Job Counters 
12/06/15 15:43:16 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=5961
12/06/15 15:43:16 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
12/06/15 15:43:16 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
12/06/15 15:43:16 INFO mapred.JobClient:     Launched map tasks=1
12/06/15 15:43:16 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
12/06/15 15:43:16 INFO mapred.JobClient:   File Output Format Counters 
12/06/15 15:43:16 INFO mapred.JobClient:     Bytes Written=248262
12/06/15 15:43:16 INFO mapred.JobClient:   FileSystemCounters
12/06/15 15:43:16 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=21527
12/06/15 15:43:16 INFO mapred.JobClient:     CFS_BYTES_WRITTEN=248262
12/06/15 15:43:16 INFO mapred.JobClient:     CFS_BYTES_READ=87
12/06/15 15:43:16 INFO mapred.JobClient:   File Input Format Counters 
12/06/15 15:43:16 INFO mapred.JobClient:     Bytes Read=0
12/06/15 15:43:16 INFO mapred.JobClient:   Map-Reduce Framework
12/06/15 15:43:16 INFO mapred.JobClient:     Map input records=1
12/06/15 15:43:16 INFO mapred.JobClient:     Physical memory (bytes) snapshot=144871424
12/06/15 15:43:16 INFO mapred.JobClient:     Spilled Records=0
12/06/15 15:43:16 INFO mapred.JobClient:     CPU time spent (ms)=1030
12/06/15 15:43:16 INFO mapred.JobClient:     Total committed heap usage (bytes)=121241600
12/06/15 15:43:16 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2085318656
12/06/15 15:43:16 INFO mapred.JobClient:     Map output records=300
12/06/15 15:43:16 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87
12/06/15 15:43:16 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 9.2613 seconds (0 bytes/sec)
12/06/15 15:43:16 INFO mapreduce.ImportJobBase: Retrieved 300 records.


I found DataStax an interesting project to explore further. I have blogged about the issues that I faced with it as a learner, and how easily they can be fixed - Issues that you may encounter during the migration to Cassandra using DataStax/Sqoop and the fixes.

Sunday, June 10, 2012

Moments with Twitter..

I was recently playing with a few tools that let you look back at your past tweets. All My Tweets is an interesting service that lets you view all your tweets on a single page, from the beginning. Another service, named Twitario, gives a view of all your tweets as a diary. Though Twitario provides a nice interface to find tweets easily based on the calendar, it doesn't support Unicode characters, which is surely a minus for those who tweet in Unicode languages. All My Tweets includes the link to each original tweet, whereas Twitario provides the option to delete tweets. Having tried these tools, I should mention that they were really useful, and brought back a few interesting memories.

Now it is time to look into the trail of my tweets. Here are some of them, since the 26th of March, 2009.

is finishing the documentation. Mar 26, 2009
It is notable that my first tweet was on documentation. It was probably about the documentation of the project I did during my internship. I have always been a supporter of good documentation - it helps the blood flow of open source.

Abiword Cross-compiling using wine successful on Ubuntu. Apr 16, 2009
At that time, uwog was still completing the MSVC build for AbiWord. I found cross-building useful, since it was complete and gave me a usable AbiWord build for Windows with no issues. This was indeed a remarkable point, and it gave me further confidence to work on the AbiWord Windows API, using Ubuntu as my platform.

Summer Love with Abiword... Apr 20, 2009
This was a happy announcement of my getting into Google Summer of Code 2009. It was my first Google Summer of Code, and I was pretty excited. The AbiWord community was super-friendly, and I am proud to have been a member ever since.

with Anjuta.. an IDE similar to Visual Studio... for Linux. May 08, 2009
Anjuta DevStudio is a GNOME integrated development environment. I have mostly used Anjuta as a syntax highlighter for my C/C++ projects, including AbiWord development. For compiling and building AbiWord, I just use make directly.

needs a mute option and filtering for facebook messages. Any suggestions... Jun 01, 2009  
At that time, there was no way to opt out of the Facebook notifications for photos that we had commented on, or to remove ourselves from Facebook threads. I was annoyed when someone sent group messages directly to my inbox. It is great to see that these options are now available on Facebook: we can now remove ourselves from the messages. However, filtering is still not possible, and neither is muting (receiving the messages, but not getting that red notification for a new message in an uninteresting thread).

#AbiWord Turns 11! Happy Birthday to dear Abiword! Happy Birthday to you... Jul 16, 2009
It was remarkable to mark the 11th year of AbiWord, which started as an open source project in 1998.

My computer never complained abt me repeating the same build million times, and I've never complained abt its time delays. We <3 each other. Aug 17, 2009  
Some romance with my computer.. ;)

10 reasons to avoid talking on the phone http://theoatmeal.com/comics/phone from @oatmeal Feb 23, 2010  
Oatmeal never fails to amuse me. Many of its posts deserve a tweet.

my #javascript has gone wild and bigggggg and GO #bananascript GOO.. http://www.bananascript.com/ Compress it.. :) #fb Mar 03, 2010  
Bananascript is a nice online tool to compress javascript files.

A periodic table of visualization methods http://www.visual-literacy.org/periodic_table/periodic_table.html Apr 04, 2010  
Each of these visualization methods deserves a blog post of its own. Visual-Literacy.org provides interesting learning resources, such as an introduction to argumentum. I have also enrolled in their online courses, which are full of study materials.

I should create some of my own thought experiments as well.. :D http://plato.stanford.edu/entries/thought-experiment/ Jun 06, 2010  
Thought experiments are fun, and they enhance your ability to think weird. ;) Follow the above link to realize that. Again, each of these thought experiments deserves a post of its own.

GSoC welcome package once more. Special thanks and love OGSA-DAI, OMII-UK and Google. Reminds me the lovable days of GSoC2009 - Abiword too. Jun 19, 2010  
A happy announcement of my second welcome package from Google. Yes, this was for my Google Summer of Code with OMII-UK.

Also make sure to read Moments with Twitter - II, the successor of this post.

Friday, June 8, 2012

Building WSO2 Carbon from source

If you are looking to build WSO2 Carbon based products such as WSO2 ESB, Stratos, or the entire WSO2 Carbon platform, you have found the correct article. Just follow the steps below to build the trunk.
Check out orbit, kernel, and platform, for example as sketched below.
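(The repository URLs below are my assumption of the WSO2 public SVN layout; check the project site for the current locations.)
$ svn checkout https://svn.wso2.org/repos/wso2/trunk/carbon/orbit
$ svn checkout https://svn.wso2.org/repos/wso2/trunk/carbon/kernel
$ svn checkout https://svn.wso2.org/repos/wso2/trunk/carbon/platform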

Build them in the order (orbit -> kernel -> platform), using Maven 3:
mvn clean install
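Putting it together, a minimal sketch of the full sequence, assuming the three checkouts sit side by side and Maven 3 is on the path:
$ cd orbit && mvn clean install
$ cd ../kernel && mvn clean install
$ cd ../platform && mvn clean install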