Monday, December 12, 2011

Semantic Cloud Governance

Recently, I received an invitation to speak at the SemTech Berlin 2012 conference. The topic of my talk will be "Semantic Cloud Governance".

The complete schedule of the conference is available at:

Here is the abstract link for "Semantic Cloud Governance":

In subsequent posts, I'll write more about my findings on this topic!

See you!

Monday, November 14, 2011

Oracle Edition Based Redefinition question ...

I was playing with Oracle's EBR (Edition-Based Redefinition) feature using Oracle 11.2.x. I added a new dummy column to a table using the "ALTER TABLE table_name ADD (column_name dataType)" SQL statement. I also made the required change in the editioning view, namely adding the above-mentioned column to it.

While testing, what I discovered was that the newly added column always showed up in the first position (i.e., column number one) in the new editioning view. I am not sure if this is the expected behaviour in EBR. If you notice, the "ALTER TABLE table_name ADD (column_name dataType)" SQL statement always adds the new column as the last column of the table. I expected the same in the new editioning view.
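One thing worth noting: the column order of an editioning view is simply the order of the columns in the view's SELECT list, so recreating the view with the new column listed last should restore the expected ordering. A minimal sketch (the table and column names here are hypothetical):

```sql
-- Add the new column; it becomes the last column of the base table.
ALTER TABLE employees_t ADD (bonus NUMBER);

-- Recreate the editioning view, listing the new column last.
-- The view's column order is whatever order the SELECT list uses.
CREATE OR REPLACE EDITIONING VIEW employees AS
  SELECT emp_id, emp_name, salary, bonus
  FROM employees_t;
```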

Once I know the answer, I will update this post!

Thursday, October 13, 2011

SemTech UK 2011 conference in London, UK

Here is the slide deck from our talk titled Semantic SOA Governance, BPM and Complex Event Processing:

I attended many sessions as well. I'll write about them in a subsequent post.

Wednesday, August 31, 2011

Semantic SOA Governance, BPM and Complex Event Processing

I would like to let you know about an upcoming conference, SemTech UK 2011. The conference will be held in London, UK between Sept 26-27, 2011. My ex-colleague Keshava Rangarajan and I received an invitation to speak at the conference. The topic of our talk is "Semantic SOA Governance, BPM and Complex Event Processing". Our topic falls under the "Innovative Products & Solutions" track.

For the complete events itinerary, please refer to:

Here is the link to our abstract:

Earlier, Keshava Rangarajan and I spoke at "HP Software Universe 2008" in Las Vegas, representing Oracle. The topic was "Semantic SOA Governance". This time we are extending that topic to cover BPM and Complex Event Processing as well.

Briefly, here are some of the challenges that we are intending to cover:

• How do we intelligently create, publish, search, discover, consume, manage, meter, monitor, govern and report on SOA services such that they deliver superior value to organizations?
• How can we deploy semantic technologies to solve this problem effectively?

I intend to write more on this. I'll also write about the entire conference, other talks, interesting ideas, etc. in subsequent posts!

Sunday, July 31, 2011

Semantic SOA Governance, Cloud Governance for enterprise vs public domain problems

I have been doing some research on using Semantics in SOA Governance and Cloud Governance for quite some time.

So far, I had only been considering enterprise-domain problems when thinking of governance solutions. I came across the following two links:

1) Cloud Governance:

2) HP Service Catalog:

These are interesting areas.

Now, if we widen our scope and consider public-domain problems alongside the traditional enterprise-domain problems, the significance would be great! One thing to be aware of, though: opening up the services infrastructure to the public would definitely change the scale of the problem. That brings its own set of challenges, but they can be solved.

There is also another angle: the mobile domain. What about Twitter and Facebook?

Basically, we are very curious to know what others think about our company, our product, or a blog post or article that we put out. Users write reviews, and it would add great value to give those reviews an ontological structure and persist them in repositories. This way, we would cover everything from taxonomy to folksonomy once we include users' feedback reviews!

The potential is great; it demands attention and cannot be overlooked!

Thursday, June 30, 2011

Fiddling with Spring Roo ...

I have heard a lot about Spring Roo, so I really wanted to get my hands dirty with it.

I am thinking of developing a Semantic Web Java application. The front end would use JavaScript and jQuery. The DB would be MySQL, with Hibernate for the O/R mapping. In the business layer, the Spring framework would take care of transaction management, and RESTful services would be exposed there as well.

Additionally, the open source Jena libraries would be used to build the Semantic Web piece of the application. Using Protege-OWL, some OWL and RDF content would be created and stored.

To make it available in the cloud, I am considering Cloud Foundry PaaS.

I'll keep you posted and let you know my progress.

Tuesday, May 31, 2011

Database vs. in-memory data grid

Recently, I attended a session on Oracle Coherence. Oracle Coherence is an in-memory data grid solution that enables organizations to predictably scale mission-critical applications by providing fast access to frequently used data.

I was trying to compare an in-memory data grid solution with a database solution; say, Oracle Coherence and the Oracle database.

Oracle Coherence:
In simple terms, Oracle Coherence takes a hash-map approach. The primitive operations supported by a hash map are get, put and remove.

Oracle database:
The primitive operations supported are CRUD (Create, Read, Update, Delete) operations.

In essence, Oracle Coherence has the necessary primitive operations to handle all the CRUD operations that a database can perform. But there is one significant area of difference: when it comes to XA transaction management, Oracle Coherence has some limitations. The Oracle database, in contrast, is really great with respect to 2-phase commits in a distributed environment.
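To make the get/put/remove-to-CRUD mapping concrete, here is a minimal sketch using a plain java.util.Map as a stand-in for a data grid cache (this is only an illustration of the operation mapping, not the actual Coherence API):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: a plain Map standing in for an in-memory data grid cache.
public class CrudOnMap {

    // Runs the four CRUD-style operations and returns the final cache state.
    static Map<Integer, String> run() {
        Map<Integer, String> cache = new HashMap<>();
        cache.put(1, "Alice");              // Create
        String current = cache.get(1);      // Read
        cache.put(1, "Alice Smith");        // Update (put overwrites)
        cache.remove(1);                    // Delete
        return cache;
    }

    public static void main(String[] args) {
        System.out.println(run().isEmpty()); // prints "true": the entry was removed
    }
}
```

The point of the analogy: every row-level CRUD operation has a direct hash-map counterpart, which is why a data grid can front a database for frequently used data.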

One of the common use cases we deal with is the versioning of objects, both in the Data Tier and in the Mid Tier. That is, let's say we have DB schema 1.0, with Mid Tier objects at version 1.0, already deployed and up and running for some time. Now we have the version 2.0 DB schema and its equivalent version 2.0 Mid Tier objects, and we need to upgrade the already running server to version 2.0. Edition-Based Redefinition, which Oracle supports, is a great way to achieve this.

We'll delve deeper into this in a later post.

Saturday, April 30, 2011

Schema and Data comparison between two databases

I have been looking into an interesting new requirement.

1) Consider two host machines, Host_A and Host_B. An older version of our product is installed on Host_A, and a newer version on Host_B. From a Data tier perspective, there are lots of schema differences between the old and new versions of our product. There could also be lots of instance data persisted in the databases of each product installation. I am looking at writing a tool which would migrate instance data from the older instance of the product into the newer instance.
2) In the first test case, the databases in the old and new product instances are the same (a homogeneous database comparison for schema and data). Later, this would be extended to heterogeneous databases - i.e., the database types could differ between the old and new product instances.

Eg: Old product instance uses Oracle database. New product instance uses Microsoft SQL Server.

As a first step, we would require a tool which would do the following:
1) compare schema between older product instance and the newer product instance
2) compare data between older product instance and the newer product instance
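As a rough illustration of step 1, a schema comparison ultimately boils down to diffing two per-table column maps. A minimal, hypothetical sketch (real tools read the database data dictionaries, of course; the table definitions here are made up):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: diff two table definitions (column name -> data type).
public class SchemaDiff {

    // Returns a human-readable list of changes needed to go from old to new.
    static List<String> diff(Map<String, String> oldCols, Map<String, String> newCols) {
        List<String> changes = new ArrayList<>();
        for (Map.Entry<String, String> e : newCols.entrySet()) {
            String oldType = oldCols.get(e.getKey());
            if (oldType == null) {
                changes.add("ADD COLUMN " + e.getKey() + " " + e.getValue());
            } else if (!oldType.equals(e.getValue())) {
                changes.add("MODIFY COLUMN " + e.getKey() + " " + oldType + " -> " + e.getValue());
            }
        }
        for (String col : oldCols.keySet()) {
            if (!newCols.containsKey(col)) {
                changes.add("DROP COLUMN " + col);
            }
        }
        return changes;
    }

    public static void main(String[] args) {
        Map<String, String> oldCols = new LinkedHashMap<>();
        oldCols.put("ID", "NUMBER");
        oldCols.put("NAME", "VARCHAR2(30)");

        Map<String, String> newCols = new LinkedHashMap<>();
        newCols.put("ID", "NUMBER");
        newCols.put("NAME", "VARCHAR2(100)");
        newCols.put("EMAIL", "VARCHAR2(100)");

        diff(oldCols, newCols).forEach(System.out::println);
    }
}
```

The commercial tools do essentially this, plus constraints, indexes, data types across vendors, and the generation of the actual migration SQL.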

I evaluated quite a few of the products currently available: DBDiff, Red Gate and TOAD. I have yet to evaluate Oracle GoldenGate.

The tools that I have evaluated were great so far. At the end of the comparison, each tool produces a SQL script which could be used to migrate from the old to the new product instance.

So far, all these tools run on Windows platforms only. I am looking for Linux tools; if I find anything, I'll update this post.

The Future of Java programming language ?!

Recently I happened to see a tech talk by one of the core scientists on the original Java language development team at Sun (now Oracle). The burning question now is the future of the Java programming language.

What would happen to Java as a language? Would the language's development eventually stop after a few releases? Or would it continue forever? Would the language disintegrate?

Looking at the road map of the Java programming language, it appears there will be at least a couple more releases. This might play out over 5 years. But after that, who knows about the full-fledged continuation of the Java programming language?

The good news is, Oracle has anticipated this too. They know that the Java development effort may wind down in the future, and that in its place a plethora of other programming languages would co-exist.

James Gosling has mentioned somewhere that everything that is needed is already in the JVM! Meaning, multiple languages could co-exist in future systems, all of them capable of running inside the JVM. Building a system that way would be the most performant and efficient approach, because a lot of optimization effort (which has stood the test of time!) has already been engineered into the JVM.

To put it differently: for a multi-language system, running all the languages inside the JVM would be the most performant and efficient solution. Approaches that don't make use of the JVM would not be as performant.

The future of Java seems to lie in its JVM! At present there seems to be a plan for the future of Java programming language! We shall see what we shall see!

Semantic SOA Governance and BPM

I would like to write about some of the research that we have been doing in the area of Semantic SOA Governance and BPM. I would like to write this as a series of posts. Please check back later!

Thursday, March 31, 2011

Real Life Social network

Recently, I came across an article on Slideshare about the Real Life Social Network. It is a research article written by a researcher at Google, and an amazing one which I strongly recommend.

In real life, every one of us has a physical Social Network around us. This could involve our friend circle from school, college, locality, community, work, relatives, etc.

In the electronic world, we have our digital Social Network like Facebook.

There is a fundamental difference between the real-world Social Network and the digital Social Network, in the way these two networks are constructed.

In the real-world Social Network, there are various nuances and layers of commonality that we share with our friends.

In the digital Social Network, all our buddies are "friends", and they are all added to one humongous network. This doesn't really capture all the various nuances that we understand between our buddies in the real world.

Once we understand/appreciate this subtle difference, we can address the two networks in a better way.

Though this research article is very lengthy, it is an easy bed-time read! I'll post more on this later!

Monday, February 28, 2011

Sensing meets Mobile Social Networks !

On Feb 08, 2011, the IEEE Computer Society of Silicon Valley organized a conference session at Cadence Design Systems in San Jose. I attended this interesting session. The topic was "Participatory Urbanism: Smart Computing, or Big Brother is Watching?". My friend Keshava Rangarajan, from Oracle Corporation, was one of the speakers. The other speaker was Vipul Gupta from the Sun Labs division of Oracle.

Please refer to the following link for more details:

The slides are available here:

It was an interesting session which brings the two worlds - (1) Sensing and (2) Mobile Social Networks - together.

The potential seems limitless. Sensor applications can be of great help in a wide range of fields: defence, education, ecology, health care, and more. My feeling is that it is inevitable - mobile applications are going to be the future.

Some of the things that I learnt from this session are:

1) Participatory Urbanism
2) Passive Altruism
3) Squawk virtual machine
4) SunSPOT Hardware Developers Kits
5) Spaughts

Just Google "Spaughts" and you would find a whole bunch of youtube videos. You would get a good idea about SensorApps.

Squawk is an open source research virtual machine for the Java language that examines better ways of building virtual machines. The idea is that virtual machines can be simplified by writing them in higher-level languages - and simplified further by implementing the VM in the very language that the VM implements. Squawk is a Java Micro Edition virtual machine for embedded systems and small devices; what makes it different is that its core is mostly written in Java.

To learn more, here are some links:


The other topics of interest to me were Participatory Urbanism and Passive Altruism. The speaker talked about a thorough case study of carbon monoxide emissions in Accra, Ghana. The study was done by hooking sensors onto people and taxis in and around the capital city! The readings from the sensors were stored in backend servers/databases and processed, so that usage patterns could be mapped. This is a great way of putting mobile technologies and sensing to work!
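As a toy illustration of the processing side, turning raw sensor readings into per-location averages is a simple aggregation job. The reading format below is entirely hypothetical:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Toy sketch: average carbon monoxide readings (ppm) per location.
public class CoAverages {

    // A single sensor reading: where it was taken and the CO level measured.
    record Reading(String location, double coPpm) {}

    // Groups readings by location and averages the CO values in each group.
    static Map<String, Double> averageByLocation(List<Reading> readings) {
        return readings.stream()
                .collect(Collectors.groupingBy(Reading::location,
                        Collectors.averagingDouble(Reading::coPpm)));
    }

    public static void main(String[] args) {
        List<Reading> readings = List.of(
                new Reading("market", 9.0),
                new Reading("market", 11.0),
                new Reading("suburb", 2.0));
        // Prints the per-location averages (map iteration order may vary).
        System.out.println(averageByLocation(readings));
    }
}
```

The real study would of course involve calibration, timestamps, GPS traces and far larger volumes, but the shape of the computation is the same.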

No new field is without hurdles! Sensing and mobile applications also have a huge number of challenges ahead of them. There is a broad spectrum of issues that needs to be resolved, from the hardware and sensors all the way to the software. Until they are, one cannot fully conclude that sensing and mobile apps are ready!

Also, the users may not be ready either! Privacy becomes a major issue. How much of our personal info can the machines easily gain access to? How do we regulate this limit?

There is also another problem. Let's say we come up with a standard for a health care application, and that we have an intelligent sensor which keeps track of all our medical results and monitors a whole bunch of stuff from our body. The intention of this application could be genuine - it could serve your health as a medical coach. Let's say you use this application for about 5-10 years. By then there is a lot of your personal data in the application, and the hardware or software company behind it may have some form of access to that data. What if, after 10 years, they sell this data to some other company interested in knowing their target audience?

This is a tough question to answer as of today! It is tricky because there are no standards yet.

But this is interesting though! Hopefully issues would get ironed out in the future, and we might have a standard to work with.

Tuesday, February 1, 2011

Fiddling with JProbe 8.3

I was encountering an "OutOfMemory" issue in one of my Java projects. Out of curiosity, I wanted to delve deep and get to the bottom of it.

I downloaded the latest JProbe 8.3 and was going through the tutorial videos, samples, documentation material, and what not.

Our project is a SOA project involving Web Tier, Business Tier and Data tier. I wanted to test out all the various tiers end-to-end and come up with a report of all the memory-leaking spots.

It is always ideal to integrate JProbe with the IDE. This would immensely help the probing exercise. After you have deployed all your applications, you can run an end-to-end test case and monitor through JProbe.

The first step would be to configure JProbe. Follow the documented steps and configure JProbe for your particular App Server. Our app server is Weblogic Server.

Second, integrate JProbe with your App Server. JProbe first takes a backup of the existing server start script, then creates another copy in which it modifies some parameters based on the JProbe configuration performed in step 1.

Then you can launch JProbe via that script. You would see that JProbe monitors everything pertaining to that server.

I was sailing smoothly until this point. Life was all great!!

The next step for me was to integrate JProbe 8.3 with our IDE, JDev 11g. I had a shocker here: it seems that beginning with JProbe 8.x, support for JDev IDE integration has been dropped! This was a bummer!

JDev was one of the supported IDEs for JProbe integration in earlier releases like JProbe 7.x, etc.

In JProbe 8.x, they support Eclipse integration. But they have dropped JDev. Unbelievable!

This is a bit of a blow. Looks like JDev is being marooned! Is it because as a development platform, JDev is losing market share?