Introduction to CloudBees: A Developer’s Perspective

Thanks to my leftover PTO (my company’s vacation days), I got time to revive my personal projects. So far, my code has lived on my personal laptop, and I had been spending time researching the right SVN host and Amazon EC2 instances; CI tools and Maven repos were out of scope for now. Then I decided to try the CloudBees tools.

So Why CloudBees?

Let’s begin with the situation of a typical solo developer or small start-up team. Generally, you start with your favorite IDE, and as soon as something is working, you start thinking about managing the whole project life cycle. You wish for these things:

For your day-to-day development

  • At the least, you (or your team) want to commit changes to external source control (SVN, Git, etc.).
  • You should be able to preserve and release artifacts. (Our favourite tool is Maven.)
  • It would be nice if, as soon as you commit (or at regular intervals), some external CI tool took the update from your SCM repo and did a build (and published to a private Maven repository too).
  • And don’t we feel productive when we have a task management system to align with personal or team goals? (If agile tools are available, even better.)
For your testing and production servers (application servers):
  • You may want one 24x7 development integration server that is always running the HEAD (aka snapshot) of your code.
  • Once you are ready to release your application to the world, you should be able to deploy your code to load-balanced servers.
That’s where CloudBees comes into the picture. CloudBees tackles these two categories of problems by offering two different suites of tools.
  1. CloudBees DEV@cloud (or BUILD) provides tools for CI, SCM, and Maven repositories.
  2. CloudBees RUN@cloud (or RUN) provides pre-installed Java application servers, ready to run your JEE-compliant code.
How do I use it?
I am sure the first thing you will do is go to their site. Don’t get tempted by their sophisticated UI landing pages; it looks like lots of work and some complex process. (I wish I could change their landing pages to be more KISS-compliant, but anyway.) The main thing is to first open a free account on their FREE tier (yes, a FREE option to try out every tool). As soon as you do that, you will have access to their tools, which follow simple URLs. For example, if your account id is johndoe, then the typical URLs look like:
  •  – for SVN repos and Maven repos. You get 2 GB free, combined for both.
    • (and for releases too)
  •  – the Jenkins landing page. Create jobs and configure them; it is very intuitive.
  • (hopefully in the future)

For your runtime needs

  • Yes, your DB too. 5 MB of MySQL free, to get you started. If you already have a big data set, I recommend keeping Amazon RDS (or Xeround) in mind.
  • You can provision Tomcat or JBoss (and some other kinds too; here is the list).
Apart from these in-house tools, CloudBees provides 3rd-party integrations. It’s good to have integrations with other hosted solution providers who are already market leaders. Here is the complete list.

End Note:

We are living in the age of cloud computing, and lots of services are popping up. It has become important that we realize the real value of cloud-based services, utilize them, and stay focused on actual productivity. In my case, I would have spent weeks perfecting Jenkins (or Hudson), installing a Subversion server, and installing Maven repositories on Amazon EC2 infrastructure; instead, I have spent 2 days navigating CloudBees, and now I am thinking about functional goals, not infrastructure tasks.

I have argued with myself that with self-hosted tools I get much cheaper and unlimited CPU time. But I would be wasting 80+ hours on perfecting them, and the reliability of a self-hosted service is definitely questionable.

Links to bookmark

Why Certified, and Why Not Non-Certified?

This topic has come up many times, and even #MartinFowler wrote a blog about it. Recently, while doing a code review, I found a code snippet. Let me show you a simplified version.
public boolean isEligible(List<Integer> list) {
    boolean isEligible = false;
    for (int i = 0; i < list.size(); i++) {
        if (list.get(i) == 1) {
            isEligible = true;
            // no break here
        }
    }
    return isEligible;
}


Now, when I asked the developer to put in a break statement, he replied that it’s a small array. Though he was right that there is not much of a performance penalty, that code was wrong from a “certified” developer’s point of view. And this pretty much sums up the difference between certified and non-certified developers.
I am a certified Java programmer (from eleven years back), and whenever I do code reviews, looking at this kind of code really makes me want to fire that developer. It’s not that if you are not certified you are not good or not capable, but it makes a difference how you started programming.
From my point of view, non-certified programmers mostly come in two categories:
  1. Very experienced programmers, who started programming in the 80s or so.
  2. A comparatively newer breed of programmers, who came after the IT boom of the late 90s.

Most programmers in the first category never bothered to get certified, as they have been programming long enough to optimize code at the assembly level. I always end up learning something new from them. (In fact, they would have optimized the above-mentioned code to use an iterator and stored list.size() in a local variable 🙂 )
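For illustration, here is a sketch of that cleaned-up version, returning as soon as a match is found instead of scanning the whole list (the list is assumed to hold Integers, as in the review snippet):

```java
import java.util.Arrays;
import java.util.List;

public class EligibilityCheck {
    // Returns true as soon as the list contains a 1 -- no full scan,
    // and list.size() is never re-evaluated in a loop condition.
    public static boolean isEligible(List<Integer> list) {
        for (Integer value : list) {
            if (value == 1) {
                return true; // the early exit the missing break would have given
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isEligible(Arrays.asList(0, 1, 2))); // true
        System.out.println(isEligible(Arrays.asList(0, 2)));    // false
    }
}
```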
But after the IT boom of the late 90s, lots of college graduates jumped into programming (that is where the $$ was). When they got jobs and did programming in languages like Java, nobody really code-reviewed their code, and nobody did performance analysis at the loop level. Since they were never penalized for it, they continued programming like this. If I interview them, they might know the difference between stateless and stateful EJBs but have no idea about the IoC pattern. Are they programmers in the true sense? Have they given serious enough attention to their skill set to get certified in it?
You can say that certified programmers got certified only so they could get better jobs, and you would be right. But getting certified also makes you a more complete programmer and makes you think before you commit your code. Hence, given the choice of hiring someone for my team, I will go for the certified programmer (unless he/she flunks my Java/JEE questions).
To end my blog entry, here is another code example, from a Java developer who has 8 years of experience (and guess what? :) ). The same developer also wrote the MyService class. (See the technical requirement in the javadoc.)

/**
 * Finds the service and invokes it.
 * If the service is not enabled, then don't invoke it.
 */
public void findAndInvoke() throws Exception {
    try {
        MyService service = serviceFinder.findService("MyService");
        service.invoke();
    } catch (NullPointerException ex) {
        logger.log("Service is not enabled, hence ignoring");
    }
}
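For contrast, here is a sketch of how that requirement could be met without using an exception for control flow. ServiceFinder and MyService are hypothetical stand-ins for the classes in the snippet, and findService is assumed to return null when the service is not enabled:

```java
public class FindAndInvokeDemo {
    /** Hypothetical service interface from the snippet. */
    interface MyService { void invoke(); }

    /** Hypothetical finder; returns null when a service is not enabled. */
    static class ServiceFinder {
        MyService findService(String name) {
            return "MyService".equals(name)
                    ? () -> System.out.println("invoked")
                    : null;
        }
    }

    private final ServiceFinder serviceFinder = new ServiceFinder();

    /** Finds the service and invokes it; skips quietly if it is not enabled. */
    public boolean findAndInvoke(String name) {
        MyService service = serviceFinder.findService(name);
        if (service == null) {
            System.out.println("Service is not enabled, hence ignoring");
            return false;
        }
        service.invoke();
        return true;
    }

    public static void main(String[] args) {
        FindAndInvokeDemo demo = new FindAndInvokeDemo();
        demo.findAndInvoke("MyService");    // invokes the service
        demo.findAndInvoke("OtherService"); // ignores quietly
    }
}
```

An explicit null check says what the code means; a NullPointerException catch hides it (and can swallow unrelated NPEs from inside the service).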

Public Key Cryptography (Yet another guide)

Traditional (symmetric) encryption is where you encrypt using one key and decrypt using the same key. Of course, it suffers from a basic problem: sharing the key. And that is difficult or impossible when we want to exchange information in the public domain (secure websites, etc.).
Public key encryption is asymmetric, meaning you encrypt using one key and the other key decrypts it. The starting point of this encryption is to generate two keys at the same time using some tool and share one key with the other party, which can be the general public or the internet. The key that gets published is called the public key, and the key you keep to yourself is called the private key. Hence this type of cryptography is called public key cryptography.
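Here is a minimal sketch of asymmetric encryption using the standard Java JCA APIs (the 2048-bit key size and default RSA padding are just example choices):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

public class RsaRoundTrip {
    // Encrypt with the public key, decrypt with the private key.
    public static String roundTrip(String message) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048); // key size in bits
        KeyPair pair = gen.generateKeyPair();

        Cipher cipher = Cipher.getInstance("RSA");
        cipher.init(Cipher.ENCRYPT_MODE, pair.getPublic());
        byte[] encrypted = cipher.doFinal(message.getBytes(StandardCharsets.UTF_8));

        cipher.init(Cipher.DECRYPT_MODE, pair.getPrivate());
        return new String(cipher.doFinal(encrypted), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello")); // prints: hello
    }
}
```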

The public-key concept is the basis for digital signatures and digital certificates.

PGP (Pretty Good Privacy) encryption
PGP uses both symmetric and asymmetric encryption. PGP uses a one-time session key for encrypting the whole document and uses the public-private key method to share that session key.

PGP is a hybrid cryptosystem.

Hence it is faster to encrypt and decrypt compared to using the public key to encrypt the whole document. Conventional encryption is about 1,000 times faster than public key encryption; public key encryption in turn provides a solution to the key distribution and data transmission issues.
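The hybrid scheme can be sketched with standard Java APIs: a one-time AES session key encrypts the document, and RSA only encrypts that small session key (algorithm modes here are the JDK defaults, chosen for brevity, not for production use):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

public class HybridDemo {
    // PGP-style hybrid: AES session key encrypts the document,
    // RSA encrypts (wraps) only the session key.
    public static String roundTrip(String document) throws Exception {
        KeyPairGenerator rsaGen = KeyPairGenerator.getInstance("RSA");
        rsaGen.initialize(2048);
        KeyPair rsa = rsaGen.generateKeyPair();

        KeyGenerator aesGen = KeyGenerator.getInstance("AES");
        aesGen.init(128);
        SecretKey session = aesGen.generateKey(); // one-time session key

        // Encrypt the document with the fast symmetric session key.
        Cipher aes = Cipher.getInstance("AES");
        aes.init(Cipher.ENCRYPT_MODE, session);
        byte[] cipherText = aes.doFinal(document.getBytes(StandardCharsets.UTF_8));

        // Share the session key by encrypting it with the recipient's public key.
        Cipher rsaCipher = Cipher.getInstance("RSA");
        rsaCipher.init(Cipher.ENCRYPT_MODE, rsa.getPublic());
        byte[] wrappedKey = rsaCipher.doFinal(session.getEncoded());

        // Recipient: recover the session key with the private key, then decrypt.
        rsaCipher.init(Cipher.DECRYPT_MODE, rsa.getPrivate());
        SecretKey recovered = new SecretKeySpec(rsaCipher.doFinal(wrappedKey), "AES");
        aes.init(Cipher.DECRYPT_MODE, recovered);
        return new String(aes.doFinal(cipherText), StandardCharsets.UTF_8);
    }
}
```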
These days, the PGP method is the most widely used.
Digital Signatures
One important use of public key cryptography is authentication of the sender. Hence this type of cryptography is perfect for digitally signing documents (or emails). This is even more secure than signing by hand, since nobody can forge it. (Imagine digitally signing your kids’ report cards; they are out of luck 🙂 )
Instead of encrypting information using someone else’s public key, you encrypt it with your private key. If the information can be decrypted with your public key, then it must have originated with you.
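That sign-with-private, verify-with-public flow looks like this with Java’s Signature API (SHA256withRSA is just one common algorithm choice):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureDemo {
    // Sign with the private key; anyone holding the public key can verify.
    public static boolean signAndVerify(String message) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();

        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(message.getBytes(StandardCharsets.UTF_8));
        byte[] sig = signer.sign();

        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(message.getBytes(StandardCharsets.UTF_8));
        return verifier.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(signAndVerify("report card")); // prints: true
    }
}
```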
General rules of using PGP (or public key) cryptography:
If you want to exchange information whose origin from you has to be verified, you use your private key to encrypt. Example: digital signatures in email.
If you want to authenticate the origin of a document, you use that origin’s public key to decrypt. Example: the secure website of a bank.
Hash Functions (Message Digests)
A PGP tool can take any file and generate a fixed-length hash value of that file, generally somewhere from a couple of bytes to 10-20 bytes (let’s say 160 bits). Now if a recipient gets that file or downloads it (say, using torrents), he can generate the same hash code for the received file. If that hash code matches the hash code published (on the website), it means he has the right file with no modifications. This concept is generally used when users are getting software archives from different sources (or server mirrors) and want to make sure they got the original file.
Even a single bit changed in the file will cause a different hash code. The generated hash code is called a message digest; MD5 and SHA-1 are common digest algorithms.
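Computing a digest is short with Java’s MessageDigest (MD5 is shown here because the post mentions it; stronger algorithms like SHA-256 work the same way):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class DigestDemo {
    // Fixed-length digest of arbitrary input; any single-bit change
    // in the input produces a completely different value.
    public static String md5Hex(String input) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(md5Hex("hello")); // 5d41402abc4b2a76b9719d911017c592
    }
}
```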
To make cryptography even faster, instead of signing the whole document, PGP generates a message digest and digitally signs that using the private key (basically an encrypted message digest). The recipient then uses the public key to decrypt the MD and generates a new one from the received file. If both match, then voila.
Digital Certificates
With all the above encryption methods, one issue remains: the public key has to be published, and we need to make sure the public key originated from the right party. To solve this problem of authenticating public keys, digital certificates come into the picture.
A digital certificate consists of three things:
  1. The public key of the entity this certificate belongs to.
  2. Certificate information (“identity” information about the user, such as name, user ID, and so on).
  3. One or more digital signatures of third parties vouching for the authenticity of the public key. The digital signature is over the public key of the entity in question, signed by a ‘trusted’ 3rd party. Examples: VeriSign, GeoTrust, etc.

One way for a recipient to check whether a certificate is valid is by verifying its digital signature, using its issuer’s (signer’s) public key. That key can itself be stored within another certificate whose signature can also be verified by using the public key of that next certificate’s issuer, and that key may also be stored in yet another certificate, and so on. You can stop checking when you reach a public key that you already trust and use it to verify the signature on the corresponding certificate.
Hence there is a hierarchy of CAs (Certificate Authorities). The topmost-level CA is called the root CA.
A CA creates certificates and digitally signs them using the CA’s private key.
Public Key Infrastructures (PKI)
A PKI contains the certificate storage facilities of a certificate server, but also provides certificate management facilities.
Our browsers come equipped with the public keys of some top-level certificate-issuing authorities.
Digital Certificates are of two types
  • PGP certs (lesser used)
    • No 3rd-party digital signature
    • Self-signed digital signature
    • Multiple people can sign it

  • X.509 certificates (most commonly used, e.g. by web browsers)

Apart from the three components listed above, an X.509 certificate also has a DN (distinguished name).
Example: CN=Bob Allen, OU=Total Network Security Division, O=Network Associates, Inc., C=US
How to get X.509 Certificates?
To obtain an X.509 certificate, you must ask a CA to issue you a certificate. You provide your public key, proof that you possess the corresponding private key, and some specific information about yourself. You then digitally sign the information and send the whole package — the certificate request — to the CA. The CA then performs some due diligence in verifying that the information you provided is correct, and if so, generates the certificate and returns it.
In other words, you send a self-signed certificate signing request (CSR) to the CA. The CA verifies the signature on the CSR and your identity, perhaps by checking your driver’s license or other information. The CA then vouches for your being the owner of the public key by issuing a certificate and signing it with its own (the CA’s) private key. Anybody who trusts the issuing CA’s public key can now verify the signature on the certificate. In many cases the issuing CA itself may have a certificate from a CA higher up in the CA hierarchy, leading to certificate chains.
Other Misc Topics

Further, the private key can be stored encrypted using some password. Generally it is a phrase, hence it’s called a passphrase. Think of the situation where someone has access to your machine and steals your private keys: unless they decrypt the private key using the same passphrase, they can’t use that key to sign or decrypt anything.

Strength of encryption
Keys (private and public) are measured in bits. The larger the key, the stronger the encryption, but the worse the performance, so choosing a key strength is about the right balance between strength and performance. Note that symmetric and asymmetric key sizes are on different scales: 128-bit to 256-bit symmetric keys are enough for day-to-day operations like secure websites, while public/private (RSA) key pairs typically range from 1024 to 2048 bits.


Comparing Amazon Kindle Fire and Apple iPad 2

Comparing Technical Specs of Apple iPad 2 and Amazon Kindle Fire

Amazon announced the Kindle Fire on September 28th, with availability on November 15th. As soon as the Kindle Fire was announced, tablet fans everywhere were enthusiastic about a new tablet that poses some serious competition to the Apple iPad.

The main attraction of the Kindle Fire is its price point of $199, compared to the iPad’s $499. Let’s compare apples to apples.

I took the minimum iPad 2 configuration available.
The Apple iPad has a WiFi+3G model; the 3G model has A-GPS available.

Do you want to add rows to comparison? Please comment.

Subversion END OF LINE (EOL) problem

Generally, if you have been working with Subversion source control long enough, you may have seen this error. Most developers end up doing some workaround and forget about it. But what is this problem, really?

svn: File “xxx.vsd” has inconsistent newlines
svn: Inconsistent line ending style

Whenever adding or committing a file, Subversion checks the EOL (end of line) style. It can be either UNIX (LF) or Windows (CRLF). But if it is not consistent within a file, Subversion doesn’t know which style to use while storing it, so it stops the whole commit process and throws this error.

Subversion provides various options for this. The first option is the svn property ‘svn:eol-style’, which can be set on a file as LF or CRLF; Subversion will then store the file using that line-ending format. But different developers can be editing the same file on different operating systems, so Subversion also provides the value ‘native’, meaning each developer checks the file in and out using their own OS’s line endings, and a developer on a different OS sees the file in that OS’s native format. Here Subversion plays proxy for the EOL format.

Coming back to the error above: this problem won’t be fixed even if svn:eol-style is set, because Subversion is confused by seeing different types of EOL in one single file. To fix it, open the file and change every EOL to one format.
Another solution mentioned in forums is to use the dos2unix utility. I am not sure whether it is even available on the Windows platform, and I certainly may not want to commit another version of a file just because of EOL.

What about, if you have hundreds of files?

Subversion has a ‘config’ file where you can set values at global scope, so you don’t need to set svn:eol-style on individual files all the time. This global file resides at ~/.subversion/config (or %APPDATA%\Subversion\config on Windows).

By default, “enable-auto-props = yes” is set and uncommented. You can also see, at the bottom of the file, that certain file extensions have the eol-style property being set to ‘native’. To get past a mixed bag of EOLs in different files, just comment out “enable-auto-props”; then Subversion will NOT try to enforce any EOL and will let you add/commit the file. Once done, you can un-comment it.
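For reference, the relevant portion of ~/.subversion/config looks roughly like this (the file extensions listed are just examples):

```ini
[miscellany]
### Comment this line out temporarily to get past mixed-EOL files.
enable-auto-props = yes

[auto-props]
*.java = svn:eol-style=native
*.txt = svn:eol-style=native
*.jsp = svn:eol-style=native
```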



USPS: Perspective from a Customer

The United States Postal Service (USPS) is facing a fiscal crisis, and there has been quite an uproar in Washington and the online world about what to do next. USPS is among the largest employers in the USA and an important part of American history and pride. Postmaster General and CEO Patrick Donahoe has presented his testimony in front of Congress (link). There, he emphasized cutting costs by reducing the number of postal employees, consolidating and closing postal processing units, closing post offices, etc., along with a very abstract proposal of ‘streamlining processes and product offerings’. So far, other than cutting costs, he has not suggested any concrete method to increase revenue. And here comes my purpose in writing this blog.

I come from a software development career spanning 15 years, and I have been a keen observer of how the commerce industry innovates and finds new methods to market and sell its services and products. Along the same lines, here are my 10 suggestions for USPS.

  1. Use Your Real Estate
USPS has 32,000 retail locations. In spite of the real estate crash, 32,000 locations is a big asset and a lot of floor space. They need to start recognizing the value of their floor space in the middle of towns and put it to use. A recent example: 7-Eleven stores will allow local pickups. USPS can offer some kind of locker service, or start a program where online or new retailers can deliver and pick up through its retail locations. Here is another example: allow Redbox movie kiosks.

  2. Make Priority Shipping More Simple and Intuitive
Add a tracking number by default. In the 21st century, getting status and confirmation of a shipment has become standard; that is one of the biggest reasons I use UPS. Make pricing simple: say, two fixed prices for small and big Priority boxes. (Yes, USPS already does some of this; I am pushing toward standardizing its Priority offering further.)

  3. Use Global Shipping As a Strength
“U.S. Postal Service revenue from international mailing and shipping products has seen a 12.3 percent year-on-year increase so far in the first three quarters of the 2011 fiscal year.” I think USPS’s international operations and contracts with international postal services are a big asset, and I am surprised they are not marketing and streamlining them. USPS provides international Priority and standard shipping; of course, whenever you use it, you end up filling in multiple required documents. They need to make it simpler to ship and price it attractively. Coupled with marketing, USPS can capture and encourage the immigrant segment of its customers. Think of shipments going from the USA (a USA brand) to countries like India, Mexico, and Spain. European tourists come to the USA to shop around the Thanksgiving holidays. There is a big market outside the USA for USA retailers, and USPS can be the right shipping company for those businesses and individual consumers.

  4. Do Not Take Away Saturday Delivery, and Introduce Sunday Express
This is one of the new measures Mr. Donahoe proposed; I am pretty sure he is trying to pick the low-hanging fruit for now. My perspective is: use Saturday delivery as a strength. In fact, why not give laid-off employees part-time work on Sundays for express mail deliveries? Think of the marketing USPS could do: you ship on Friday, and residential customers get it on Sunday.

  5. Add an Automated Postal Center (APC) in Every Location. In Fact, Add Two.
Factoid: an APC can provide the services needed for 85 percent of post office visits. I have one in my zip code and love it. The only problem I have seen is when someone hogs the machine for a full half hour; I really wish they had two APC machines. Assuming 85% of people can use the APC for USPS deliveries, think of how few people then need to stand in line for window service. Plus, the incentive of a lower rate compared to window service is good enough for most shippers.

  6. Add More Informational Posters, and Provide Pamphlets to Educate Shippers About Different Options
I am not sure if USPS has ever gathered analytical data such as the percentage of different services rendered at the retail window. What I personally feel is that if USPS put up some more posters about its different services, and even a matrix table of weight/rate columns, then shippers wouldn’t have to ask for help. Let them use the APC afterwards.

  7. Reduce the Number of Service Employees Per Location to ONE
In combination with the proposals above, USPS can reduce the number of employees needed to manage a retail location. This is contrary to the proposal to close down retail locations: I think instead of closing loss-making locations, USPS should turn them into profit-making units, or at least let them sustain themselves.

  8. Get Contractors Instead of Employees
It was established in Mr. Donahoe’s testimony that paying employee salaries, health benefits, and mandatory retirement benefits is a major expense compared to other private shipping companies. I understand the USPS employee union may not appreciate this idea, but in this economy and age, paying by the hour makes more sense. (Getting health insurance as a private contractor is another painful topic in America; I won’t go there.) By implementing policies to hire seasonal private contractors, USPS can smartly manage its workforce expenses without worrying about ever-increasing health insurance and retirement fund contributions. Expanding further on this idea: for full-time employees, USPS can start an incentive-based salary structure, like base salary + performance-based bonus, where performance is measured by the number of hours worked, packages handled, or customers served.

  9. Empower “Approved Postal Providers” to Do More Than Just Sell Stamps
We have all seen checkout counters at grocery stores or bank windows giving you the option to buy USPS stamps. Why doesn’t USPS run some pilot programs with interested partners and let them provide more USPS services? Simple services like accepting mail, accepting Priority boxes, and selling USPS-approved packing boxes and various documents. They could even allow partners to install APCs on their premises (on a commission/affiliate basis). It’s all about getting more packages into USPS delivery: more volume means more money and more utilization of USPS’s mammoth infrastructure.

  10. Provide Other Services Through the Service Window
Currently, whenever I see a big line for window service, I always wonder how many customers are thinking of other things they need to do, like buying phone cards, coffee (I know, I know 🙂 ), or other small items. Well, why not use the postal worker’s time to sell more and generate more money for their employer (which is USPS)? UPS Stores sell things other than shipping supplies. A few things I can think of: phone cards, greeting cards, copies using copier machines, and fax capability.

Other Small Ideas

  • Open early on weekdays. Work with the working, paying class of customers: how many working people can come in between 10am and 5pm? That’s why it’s crazy during lunch hours.
  • Hire students part time to pre-handle customers entering the post office. Let them assist and teach customers how to use the APC machines or fill out forms. The target is that each customer is handled as quickly as possible and spends less time with a USPS employee on trivial things.
  • In this Internet age, make the USPS online shipping site super easy and intuitive. Hire 3rd-party consultants to do a usability study and design.

Facts & Numbers

  • The Postal Service delivers 212 billion pieces of mail to over 144 million homes, businesses and Post Office boxes every year.
  • The U.S. Postal Service ended its third quarter of fiscal year (FY) 2011 (April 1-June 30) with a net loss of $3.1 billion, compared to a net loss of $3.5 billion for the same period in FY 2010. Total mail volume declined to 39.8 billion pieces for the quarter, compared to 40.9 billion pieces in the third quarter of FY 2010.
  • U.S. Postal Service revenue from international mailing and shipping products has seen a 12.3 percent year-on-year increase so far in the first three quarters of the 2011 fiscal year.
  • The Postmaster General is Patrick Donahoe.
  • The Postal Service ended Quarter III of fiscal year 2011 (April 1 – June 30) with a net loss of $3.1 billion.  Net losses for the nine months which ended June 30 amount to $5.7 billion and we are currently projecting a net loss of up to $10 billion by the end of this fiscal year, depending on interest rates.

# # #

A self-supporting government enterprise, the U.S. Postal Service is the only delivery service that reaches every address in the nation, 150 million residences, businesses and Post Office Boxes. The Postal Service receives no tax dollars for operating expenses, and relies on the sale of postage, products and services to fund its operations. With 32,000 retail locations and the most frequently visited website in the federal government, the Postal Service has annual revenue of more than $67 billion and delivers nearly 40 percent of the world’s mail. If it were a private sector company, the U.S. Postal Service would rank 29th in the 2010 Fortune 500. Black Enterprise and Hispanic Business magazines ranked the Postal Service as a leader in workforce diversity. The Postal Service has been named the Most Trusted Government Agency six consecutive years and the sixth Most Trusted Business in the nation by the Ponemon Institute.

#LeanDevelopment for #LeanStartups

To align with the #LeanStartups way of doing things, we also need lean development, a lean technical stack, and a lean project cycle. Startups are big news these days, and when we talk about startups, there is a big push for quick turnaround: get to market before time and money run out. Hence we talk about #LeanStartups. (If you are wondering why I am using the ‘#’ sign: that’s how I get my topic news on Twitter, so I am keeping up with the lingo.)

Coming to the topic of this blog: whenever I think of implementing some idea or start chalking up plans to develop something, the first things that come to my mind are: which web framework? Which technology stack of APIs? How can I get other developers involved without spending time discussing these things? When we talk about a lean startup, it means you should be able to develop features quickly and deliver them, and, as the user experience ‘demands’, you are agile enough to change the backlog continuously without changing technologies or the whole project direction.

The keyword is ‘continuous’. (This explains it better.)

  • You change the backlog continuously.
  • You build continuously (every commit).
  • You deploy continuously (even 5-6 times a day).
Now the problem is, when we start spending time developing code using traditional frameworks (or technical stacks), we introduce rigidity into the whole continuous process.
Example: let’s say you are using Apache Tiles + JSP + Spring Framework + Hibernate for your development efforts. If you need to change the DB schema, you need to change the JSPs and their layouts, and it will take a lot of effort to change everything. That’s one reason, I think, we have a lot of PHP-based frameworks for startups: they are missing the layers upon layers of configuration. (See my older post touching on this subject.)
So What Do I Propose?
Lean Development (& technologies)
(Note: This is written with Java/JEE APIs in mind)

  1. If you are in the same boat as me, choose the Spring Framework. Spring is not just a framework; it has become the underlying nervous system of almost every project. It’s all IoC.
  2. I recently dumped technologies like ORM (Hibernate/JPA) in favor of old-school JDBC. (See my other post on why.)
  3. I am sticking to JSPs and old-school JSP includes. Why? Because my team should be able to introduce any JSP with minimal time and minimal impact. Any developer with a little Java experience can work with JSPs.
  4. Last but not least, standardize HTML technologies, including CSS and JavaScript libraries.
Sample Recipe (if I chose my stack now)
  1. MySQL DB (or you can go with PostgreSQL)
  2. Spring Framework (JDBC, MVC, and its rich set of annotations for transactions, caching, and web services)
  3. JSPs for displaying content. No templating framework; plain JSP includes.
  4. HTML 4.01 Transitional, standardized UI
  5. jQuery 1.4.2+ (including jQuery UI and plugins as needed; Google it, and you will find thousands)
  6. Yahoo YUI CSS grids, to standardize your grids once and for all (including reset CSS)
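To illustrate item 3 of the recipe, the ‘plain JSP includes’ layout might look like this (the file names are hypothetical):

```jsp
<%-- layout via plain includes: no templating framework needed --%>
<%@ include file="/WEB-INF/jsp/common/header.jsp" %>

<div id="content">
    <h1>${pageTitle}</h1>
    <%-- page body goes here --%>
</div>

<%@ include file="/WEB-INF/jsp/common/footer.jsp" %>
```

Any new page is just another file following the same pattern, so a junior developer can add one in minutes.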

Why not PHP?

Now, speaking of PHP: yes, you can argue, why not use PHP (and hence products like Drupal)? Here is my main concern. Acting on an idea is not everything; you also need the passion to develop it with technologies you love and are confident in. I won’t discuss the pros and cons of PHP vs. Java right now. I have friends who are in the same boat, and many of them belong to the group of “Why are you thinking about technology? Just think of the idea and pursue it. Just pick Drupal or any existing CMS application.” Yes, this is what I have been told many times. But I am a technologist, and I want to know about the technology that will be used. I am a Java web developer with a decade of experience, so with so many tested web frameworks available, sticking to Java/J2EE technologies makes sense. I don’t advise against PHP, but it is something that doesn’t suit me or my team. Why learn new skills and figure things out from scratch whenever you need to do something different?
Lean Project Cycle:
When we talk about this, we talk about all the other tasks that bring written code to life. This involves everything: recording requirements, maintaining the backlog, continuous builds and deploys.
Generally, if we are a bunch of developers, we think of getting one build server and installing some open source tools like Hudson, and of maintaining source control repositories ourselves using Subversion. That means we would:

  • Provision a development integration server.
  • Provision DB servers for the different stages of a project.
  • Provision a Unix box for source control.
  • And then find people to install and manage all of those.

Welcome to the world of PaaS and SaaS

(Platform as a Service and Software as a Service)

If you have enough experience, you know that by maintaining tools ourselves, we will be putting in hours during which we could have been writing code. Hence I am in favor of using online services. For example, the Atlassian group provides an excellent suite of all these tools in one single subscription package (link). Personally, I am a big fan of JIRA and have used Confluence, so I don’t want to waste my time learning a new tool or figuring out how to install and maintain it. There are lots of other paid subscription services available as well. The point is, I would rather buy a subscription for these services and use them. (If I need my car serviced, I will drive to the auto shop rather than set up a new one in my garage 🙂 )

Excellent examples of PaaS & SaaS

  • Xeround, They are new kid on block, providing DB hosting as a service (SaaS model). I have experimented with their services. At least for startup or low volume work, they can be right fit. You can always do cost-benefit-DBA_salary-Ping_time analysis.
  • Atlassian Studio: hosted services for Subversion, JIRA, Bamboo, the Confluence wiki, and even agile tools like GreenHopper. Frankly speaking, I am OK with spending $125/mo with them if I am seriously spending my time on a startup. Think of the productivity.
  • Amazon EC2: if you need to provision a server for any reason (or for live production hosting), this is one very good option. I am using their micro instance to run my dedicated MySQL server. I even hosted my WordPress blog there (a week back, I moved it). To save some more bucks, I am using ‘Spot Instances’.
Basically, you need a laptop and your favorite IDE to start a #LeanStartup. By the way, if you are not using Maven for builds, you need to start there.
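For reference, the barrier to adopting Maven is low: a minimal pom.xml like the sketch below is all Maven needs to recognize a project (the groupId and artifactId here are placeholders for your own project).

```xml
<!-- Minimal pom.xml sketch; com.example/my-startup-webapp are placeholders. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>my-startup-webapp</artifactId>
  <version>0.1-SNAPSHOT</version>
  <!-- "war" packaging so the build produces a deployable webapp -->
  <packaging>war</packaging>
</project>
```

From there, `mvn package` builds the .war, and dependencies are one `<dependency>` block each.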

Why or Why Not AppFuse?
For those who don’t know, AppFuse is an excellent open source project that lets you download and use pre-built projects. It provides different projects, each with a different technology stack and a web interface. Yes, it is a good starting point when you want to see a certain API in action and modify and experiment with it. But from my point of view, it’s more of a proof of concept. It can be a good starting point if you are starting from scratch, as long as it matches your pre-determined technical stack. For my own project, it doesn’t; I was spending more time working around it, so I decided to write a ‘generic’ web framework from scratch, adding goodies like separate admin and user interfaces, cache support using annotations, etc. Hopefully, I will release its first alpha as open source soon.

End Note:
In this post, I tried to look at the technology side of #LeanStartups. This is something that can become a black hole for many startups if not controlled from the beginning. A language like PHP can be an easy step for day one, but when things get serious, we need a serious, enterprise-friendly language and the tools supporting it. And the lean development model described above, using Java, can serve both purposes: starting easily and sustaining for the long term.

Spring 3.1 with Cache and ORM technologies

Recently the Spring Framework team released Milestone 1 of version 3.1 (new features). One new key feature that caught my eye is support for caching using annotations. Now you can annotate your DAOs (or repositories) with the provided cache-specific annotations and let Spring take care of the behind-the-scenes bridge to the cache provider.

To be fair, caching via annotations is not new; it has been available in open source projects for a while. But now that cache annotations are part of the core Spring Framework, they are going to be used more seriously and developed proactively.

Another major impact: I am revisiting the actual benefits of using an ORM API for data access. As we all know, we have pretty much two ways to access a database: plain JDBC, or an ORM API like Hibernate. I am a big fan of Hibernate, for obvious reasons, and one of the main ones is the ability to add caching by enabling the second-level cache. But in a design sense, caching is more of a cross-cutting requirement for a running system. Like other cross-cutting concerns, such as transactions, it makes more sense to handle it the ‘annotations’ way.

Why go back to JDBC now?
That’s one decision I recently made: I switched back to JDBC-based DAO implementations instead of Hibernate-backed DAOs. Here is why:

  1. We all have experience writing SQL, and with Spring’s JdbcTemplate it’s very easy to implement DAO methods.
  2. When we actually implement DAOs, most operations are not simple CRUD; they are mostly joins across multiple tables. HQL/JPQL can achieve anything, but it’s another learning curve.
  3. Personally, I don’t want to add another API layer to the technical stack. Hibernate (or any other ORM) is great and has obvious benefits, but it always has a learning curve, and yes, we spend time debugging HQL or JPQL.
  4. A major factor for me was Hibernate’s cache feature. Now that the Spring Framework provides cache annotations, I get more flexibility in what to cache and what not to (even eviction policies are supported).
  5. Plus, Spring’s cache annotations can be applied anywhere: not just DAOs, but remote web services or any method that is resource intensive.
You can be set up and running with simple steps like upgrading your Spring dependency to 3.1.0.M1 and reading the useful article on the Spring blog.
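To see what the annotation buys you, here is a hand-rolled sketch of the cache-aside pattern that @Cacheable automates behind the scenes. UserDao, findUsername, and the simulated database call are made-up names for illustration, not Spring API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache-aside by hand: check the cache first, fall through to the
// "database" on a miss, and remember the result for next time.
public class UserDao {
    private final Map<Integer, String> cache = new ConcurrentHashMap<>();
    private int dbHits = 0;

    public String findUsername(int id) {
        // With Spring 3.1 you would annotate this method @Cacheable
        // and delete the caching boilerplate entirely.
        return cache.computeIfAbsent(id, this::loadFromDb);
    }

    private String loadFromDb(int id) {
        dbHits++; // stands in for a real JDBC query
        return "user-" + id;
    }

    public int getDbHits() {
        return dbHits;
    }

    public static void main(String[] args) {
        UserDao dao = new UserDao();
        dao.findUsername(7); // miss: hits the "database"
        dao.findUsername(7); // hit: served from the cache
        System.out.println("DB hits: " + dao.getDbHits()); // DB hits: 1
    }
}
```

The annotation does exactly this interception for you, which is why it can be bolted onto a plain JDBC DAO just as easily as onto a Hibernate one.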
Things that might need polishing
  • The key-generation code seems incomplete. (Is it really using just the method name to generate the key?)
  • The Spring team is working on adding more providers (apart from Ehcache). Support for providers like JBoss TreeCache would be nice, especially in multi-cluster environments.

Fast Forward To MVC ‘ONE’

Recently, I worked on adding a feature to a client’s legacy webapp built on the MVC Model 1 pattern. Its simplicity, and how quickly it got the work done, made me think again about Model 1 webapps. I am always on the hunt for a simple web framework I can embark on for proofs of concept of my startup ideas.

It is the year 2010, and we as Java developers have spent countless hours arguing and reading (fun though it is) about the various Java web frameworks in play. Many bloggers publish their findings and opinions, and countless others copy and republish their content. One common subject has been: which web framework? The cult grows every day for Spring MVC, Wicket, and now JBoss Seam. I had personally spent time trying to choose one framework for my own projects, but I could never settle on one. Spring MVC became the strong contender and safe choice, because I would be using Spring for the services and data access layers anyway. But have we ever taken a step back and realized that we have surrounded ourselves with a plethora of frameworks for unlimited reasons, without ever thinking about ROI and time to market? Java has always been treated as the enterprise solution for big companies with big projects and big budgets. What about simple B2C startup projects with a limited user base and no funds? In today’s real web applications, PHP is thriving as before. People are still coding and adding more to PHP-based projects, and those projects have been more successful and widely accepted; examples would be WordPress, Drupal, and osCommerce. So what is the main charisma, or characteristic, of PHP? We have all argued and blogged about how PHP is inferior compared to Java, but PHP is productive.

Having said that, let’s come to the drawing board as a new developer, or even the CIO of a company. What do we need in a web application (let’s assume a startup webapp, up to 100 concurrent users in one cluster)?

  1. A pretty interface that users love to use.
  2. An interactive and intuitive interface.
  3. A functional interface: it does what it promises, and always does it.
  4. A low or almost nil learning curve for any new developer.
  5. The ability for third-party developers to extend the interface and functionality. (That is what drives any web product.)
  6. Quick turnaround of code to market.
  7. The ability to remove or disable code at the choice of the end user. (The user is the deployer.)

Having listed all the requirements, how many of them really demand an MVC Model 2 framework?

The first two are mostly DHTML.

Number 3 is about code quality and how well tested the code is.

Number 4 does not favor MVC Model 2 frameworks like Spring MVC, Wicket, or even Struts. They have a learning curve, and you need resources who are comfortable with them.

On number 5, people can argue that here MVC Model 2 excels. But at what cost? Let’s analyze that more. (I will use Spring MVC as the example because of its popularity; nothing against it.)

The first drawback is that to write any Java code, you need to be a Java developer. That eliminates the pool of casual web programmers who just like to write scripting code (like PHP) and DHTML. The second is that if you write anything in Java, the webapp has to be repackaged (and of course released again), so there is no on-the-fly programming or releasing code on a running webapp. (Let’s forget about OSGi for now; it’s a different ball game.)

What about writing code using JSPs only?

That’s what this post is all about. I know many of you are thinking: c’mon man, the nineties called, they want their MVC Model 1 back. But let’s entertain the idea for a while. Say I want to write a simple ‘java press’ blogging engine, be able to deploy it within a couple of weeks, and my available heap size is 64MB (pretty common for VPS plans). If I go the Spring MVC route, it will take me a couple of weeks just to set up my environment, DBs, etc. I will have to re-hash my knowledge of Spring, Spring MVC controllers, and so on. But if I just pick the JSP model, I am ready to convert static HTMLs within a day.

Using JSPs and JSTL tags, I get lots of Java functionality baked in. I can divide my JSPs using includes (keeping the option of Apache Tiles open). I can keep two sets of JSPs: presentation JSPs and worker JSPs. I can make extensive use of extra request parameters to tell a backend JSP where to forward or redirect the request after finishing its work.

If we stay on this concept, a few custom tags can be written and delivered as part of an SDK. Anyone who wants to customize or extend the app then has JSPs, JSTL, and some custom utility tags at his disposal.
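As a rough sketch of the presentation side of this split (the file names, the "posts" attribute, and the bean properties below are all hypothetical, assumed to be set up by a worker JSP), a JSTL-driven page might look like this:

```jsp
<%-- list.jsp: presentation JSP. Assumes a worker JSP (e.g. loadPosts.jsp)
     has already placed a "posts" collection in request scope. --%>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<c:if test="${empty posts}">
  <p>No posts yet.</p>
</c:if>
<ul>
  <c:forEach var="post" items="${posts}">
    <li>
      <%-- c:out escapes HTML, so user-supplied titles are safe --%>
      <a href="view.jsp?id=${post.id}"><c:out value="${post.title}"/></a>
    </li>
  </c:forEach>
</ul>
```

No controller class, no redeploy: edit the file, refresh the browser.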

Advantages, or trying to look at the positive aspects:

  1. For small companies, concept deliveries, or Y Combinator kinds of projects, time to deliver is small.
  2. You don’t have to muck around choosing a framework. You think of doing something, and you do it.
  3. Think of the pool of third-party developers you get access to. If you document well, even a PHP developer can pick up JSP skills in a few days and write JSPs for your webapp. (Imagine hiring a PHP developer for Spring MVC work; I don’t have the time and budget for training.)
  4. With new JEE specs coming out, you are always safe to upgrade your app and take advantage of new features without rewriting anything. (That’s what Spring says too: we give you another layer of abstraction. But you have to upgrade your code for Spring 3.x now, don’t you?)
  5. For scalability, cloud-based clustering solutions are cheaper than before. If one deployment supports 50 concurrent sessions, I will just throw in a couple of extra virtual Tomcat instances within the same cloud instance, or go a couple of other ways.
  6. I am less worried about performance, because recent benchmarks for Tomcat 6 on JDK 6 are impressive.
  7. With data source pooling, getting connections from a JSP isn’t resource intensive. (As part of the SDK, some classes with static methods can be provided to encapsulate commonly used API calls.)
  8. web.xml supports a basic security model. And again, as part of the SDK, some common servlet filters can be provided for cross-cutting concerns like security, logging, and transactions.
  9. DB caching can be ignored for now, assuming the DB server and the Tomcat instance are co-located or installed on the same instance. Caching adds lots of code maintenance, research, and testing, and its benefits diminish with a small DB and few users.
  10. And the well-known advantage of JSP: make a code change and see it immediately in the web browser. The time to see results during development is almost nil. We have all taken coffee breaks in past projects while waiting for the server to bounce.

This post is not a comparison between the MVC Model 1 and Model 2 patterns. It’s more of another look at Model 1, and my thoughts on why it could make sense to use it.

Running MySQL on Amazon EC2 with Elastic Block Store (another take)


I have been following Eric Hammond’s article on how to set up MySQL so that it survives EC2 instance termination or restarts. It is one of the best articles on the subject, but it does too much for my requirements, so I came up with some simpler steps to host the MySQL files on an EBS volume. Many parts of Eric’s article still apply, like how to create an EBS volume and how to create a snapshot of it.

How this differs from Eric’s version:

  1. Eric’s article solves the problem by mounting the original locations as mirrors of the EBS locations. In case of a restart, you would have to fix the fstab file; in my case, you fix the my.cnf file. I’d rather not touch fstab.
  2. My purpose is just to do enough to save my MySQL data files in case my instance crashes or restarts. Hence I don’t bother configuring log file locations.
  3. I am using the ElasticFox plugin for creating the EBS volume, etc., and was able to avoid the EC2 command-line tools. (If you are planning to do lots of this work, better to spend some time setting up the tools and getting familiar with them.)

Prerequisites for readers

  • You are familiar with Amazon EC2 images and how to play with them.
  • You have already installed MySQL and it is running successfully.

How to configure MySQL to use data files on EBS.

I have created a 10G EBS volume and attached it to the instance (as /dev/sdf); it will be mounted at /mnt/jframeworks. I used the ElasticFox Firefox plugin; using the Amazon EC2 tools was getting too time consuming.

Let’s also create a new user and put his home directory on the EBS volume to start with. Adding a new user is not necessary for this purpose, but I think it is a good idea to create one and use it for all new changes starting from a fresh EC2 image.

First, format and mount the volume (the user’s home directory will live on it):

sudo mkfs -t ext3 /dev/sdf

(Formatting as ext3. Being a Linux noob, I am not sure why Eric’s article uses the xfs filesystem.)

sudo mkdir /mnt/jframeworks
sudo mount /dev/sdf /mnt/jframeworks

Now create the user:

sudo useradd -d /mnt/jframeworks/home/<newuser> -m <newuser>
sudo passwd <newuser>
sudo usermod -a -G admin <newuser>
sudo usermod -s /bin/bash <newuser>

Next, create the required data directory on the mounted EBS volume:

mkdir -p /mnt/jframeworks/var/lib

Copy the MySQL data files to the EBS directory:

cp -a /var/lib/mysql /mnt/jframeworks/var/lib/

Note: remember to use the “-a” flag while copying, so ownership and permissions are preserved.

MySQL settings are stored in the my.cnf file, available at /etc/mysql.

Change /etc/mysql/my.cnf to point the data directory to the new location. In this example, it looks like this:

datadir = /mnt/jframeworks/var/lib/mysql

AppArmor is a security tool enabled by default in Ubuntu, and its MySQL profile restricts which paths mysqld may access. Hence let’s fix that too.

Edit the profile with sudo vi /etc/apparmor.d/usr.sbin.mysqld and duplicate the lines referencing /var/lib/mysql, pointing the copies at the new data directory location.
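For this example’s data directory, the duplicated profile lines would look roughly like this (assuming the stock Ubuntu mysqld profile, which grants `r` on the directory and `rwk` on its contents):

```
/mnt/jframeworks/var/lib/mysql/ r,
/mnt/jframeworks/var/lib/mysql/** rwk,
```

Keep the original /var/lib/mysql lines in place; AppArmor rules are additive, so the extra lines simply allow the new location too.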

Now, finally, let’s restart AppArmor and MySQL for the changes to take effect.

sudo /etc/init.d/apparmor restart
sudo /etc/init.d/mysql restart