

Sunday, August 28, 2011

Java on Heroku

Heroku has just announced support for Java applications on its platform.

This is absolutely revolutionary for the Java world. Why?

The big value-add for Java is that Heroku takes care of the configuration and deployment of the application platform. A developer can write a Java application on a local workstation and then deploy it to Heroku with a couple of commands. Deployment becomes much faster and easier.

Java developers today have to know a lot about the configuration of JavaEE or J2EE application containers such as Tomcat, Glassfish, JBoss, etc. The learning curve for these software technologies keeps a lot of developers out of the Java world. It is hard to get good at this kind of expertise outside the shelter of a paid job at a Java development shop.

Java on Heroku means that new Java developers can learn to build Java applications much faster, because they don't have to learn about JavaEE application containers and deployment. The rapid feedback enabled by fast deployment on Heroku is what makes learning faster and more effective. This will open the doors to a lot more programmers learning Java.

Now new software developers have a real choice. If a developer wishes to work for a big company with a Java development shop, they can deliberately practice building Java applications on a Heroku platform and get good at it. Knowing how to build good Java applications greatly increases the chances of employment for a software developer. If a new software developer wishes to work for a small company which needs to develop ideas quickly, there is still Ruby/Rack on Heroku and Python on the Google App Engine. Java won't help as much here.

In the long run, this means that Java developers can develop some cloud computing expertise using the Heroku platform, and many will even elect to host their Java applications on Heroku's cost-effective Java platform instead of spending time and money configuring and deploying on JavaEE application containers.

This is a great boon for Java developers and new programmers looking for work.

Friday, May 27, 2011

Microsoft: Data Security & Privacy

Dan Reed from Microsoft Research asks and answers some good questions about data security and privacy in a cloud computing environment.

Tuesday, May 3, 2011

Post-mortem of AWS outage

A couple of weeks ago Amazon Web Services (AWS) had a major service outage. It knocked out a lot of high-profile web sites such as Reddit, Quora, Flightcaster and others, and partially degraded a few other services which used AWS. It took two days to fully recover from the outage.

This article was one of the best early commentaries on the outage. The folks at Heroku also published a full commentary on the outage from their point of view. Eventually the folks at AWS wrote up a pretty good post-mortem on what happened, and the folks at Y Combinator News have more comments.

A few companies did not suffer an outage in spite of hosting their applications on AWS. Twilio did not go down, nor did Netflix, and both published their analyses.

The core AWS component which failed was the "Elastic Block Store" or EBS service. EBS provides virtual hard disks which you can attach (mount) to your EC2 virtual machines to add local disk capacity. EBS volumes are mirrored over the network for redundancy.

Amazon's infrastructure is separated into regions and availability zones. Regions are theoretically independent from each other as are availability zones within regions. This is supposed to firewall failures from propagating to other regions.

So...what happened? Here is my interpretation of the various blog posts:
  1. After fixing a bug, AWS operations people rerouted network traffic away from one part of their system, but mistakenly to a part of the system with limited networking capacity. 
  2. The part of the network with the limited networking capacity had EBS virtual disks in it. The network congestion caused disk mirroring failures. 
  3. The automatic control system for each mirrored EBS kicked in and searched the availability zone for another EBS to mirror with. 
  4. With a large number of EBS re-mirrorings in progress, network traffic went up, resulting in more mirroring failures and more attempts to re-mirror. 
  5. Eventually EBS virtual disks just became stuck. Any EC2 computer which depended on them also became stuck, and then the outage was very apparent. Fortunately, the isolation of the regions meant that only EBS instances in the US-East region were affected.

Why did it fail?
  1. There was a single point of failure which was the "control plane" or network on which the EBS instances tried to re-mirror. 
  2. The automatic control system apparently lacked a randomized, escalating back-off on collisions. For example, in an office network, if your computer sends a message on a hub at the same moment as another computer, the packets from both are corrupted (a packet collision), and both computers back off for a random amount of time and retry. More often than not, the retries no longer collide, because the random delays for the two computers differ. If there is still a collision, both computers back off for a longer random amount of time. This lets the network settle down instead of staying congested with computers all trying to use it at the same time. 
  3. It's not clear if the disk mirroring network traffic was cleanly separated from other network traffic. If it was, I think that the outage would not have been so bad. 
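The back-off technique described in point 2 can be sketched in a few lines. This is only an illustration of the general idea (randomized, exponentially growing retry delays), not AWS's actual re-mirroring logic, and the parameters are arbitrary:

```python
import random

def backoff_delays(max_retries=5, base=0.5, cap=30.0):
    """Yield randomized, exponentially growing retry delays.

    After each failed attempt the retry window doubles (up to `cap`),
    and the actual wait is a uniform random point inside the window,
    so many clients retrying at once spread themselves out instead of
    colliding again in lock-step.
    """
    for attempt in range(max_retries):
        window = min(cap, base * (2 ** attempt))
        yield random.uniform(0.0, window)
```

Because each client draws its own random delays, a burst of simultaneous failures decays into staggered retries instead of the sustained congestion described above.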

Why did Netflix and Twilio not fail?
  1. Both Netflix and Twilio had AWS EC2/EBS servers in other regions. When one AWS region failed, warm and cold stand-by servers were quickly and automatically started in other regions to take the load. 
  2. They did not depend as much on EBS. Heroku went down because their database servers were heavily dependent on EBS.

This failure will bring out the nay-sayers who think that cloud computing is a passing fad and will never meet the robustness "requirements" of enterprise-class systems. The nay-sayers have a point, but they will be proven wrong. The companies who failed spectacularly to provide service on their AWS-based systems will put in the kinds of measures used by Twilio and Netflix to ensure that the worst they suffer is a degradation of service.

And when you analyze the data, failures like these are like airplane crashes: spectacular, with widespread and well-quantified damage. But we know that if you look at the data, flying is still safer than driving. The data will eventually reveal that cloud-based systems suffer less down-time than in-house systems, and at a fraction of the cost.

Saturday, April 23, 2011

The Economics of the Cloud

Microsoft, of all companies, recently published a great article on the economics of cloud computing. If you ever wanted a concisely presented, solid understanding of why cloud computing is good for you, this article answers the question.

Here are some key take-aways:

  1. When you deploy applications on the public cloud, you can experience a 40x reduction in the total cost of ownership if  you are a small enterprise (small handful of servers), and a 10x reduction in cost if you are a medium-sized enterprise.
  2. You can deploy servers much more quickly on a public cloud in response to demand, and you can de-commission these servers just as quickly for almost nothing. 
  3. If you want to realize the full benefit of deploying to the cloud, you are better off using a platform-as-a-service (PaaS) or software-as-a-service (SaaS) offering rather than infrastructure-as-a-service (IaaS). (You still do well with IaaS, but PaaS/SaaS is a few times better still.) 

With numbers like these, you have to try it. Imagine you are a small office with 10 servers, paying $4k/year/server to own and operate them (a total of $40k/year). Now suppose you have to expand your capacity by 10%. So you try one cloud server, which costs you about $1000/year. It works, so you move to two cloud servers and retire one of your in-house servers. Now you are paying $36k/year for your in-house servers and $2k/year for your cloud servers, a 5% cost saving in spite of increasing capacity by 10%. With numbers like these it is hard not to do more.
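The arithmetic in that scenario is simple enough to check in a few lines of Python. The rates are the post's illustrative figures, not real quotes:

```python
def yearly_cost(in_house, cloud, in_house_rate=4000, cloud_rate=1000):
    """Total yearly server cost. Rates are the post's illustrative
    figures: $4k/year per in-house server, $1k/year per cloud server."""
    return in_house * in_house_rate + cloud * cloud_rate

before = yearly_cost(10, 0)   # all-in-house baseline: $40k/year
after = yearly_cost(9, 2)     # one server retired, two in the cloud: $38k/year
saving = 1 - after / before   # 5% cheaper, with roughly 10% more capacity
```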

The long and the short of this article is that cloud computing has advanced far enough for small and medium-sized enterprises to try cloud computing to see if they can realize the cost advantages of going to the cloud. For many enterprises it will be  a worthwhile and profitable experiment.

Saturday, March 19, 2011

Vendor Failure & Vendor Lock-in Risks: Heroku: Mitigated

People who talk about cloud computing platforms worry about vendor lock-in. And they should. In the old days, people often wrote applications on operating systems and databases which were appealing when they started, but became cumbersome as time moved on. When they wanted to move their applications to a more appropriate platform, they were stuck with paying for an expensive application retro-fitting.

This kind of problem drove the industry to push for open standards such as POSIX, the C standard library, SQL, HTTP, HTML, CSS, XML, OAuth, et cetera.

As cloud computing emerges, many folks rightly worry about vendor risk and vendor lock-in. Well, it turns out that Heroku has provably less vendor lock-in than any other cloud platform. Why? Because Dr. Nic Williams of Engine Yard, one of their competitors, wrote a tool to automatically migrate applications from Heroku to Engine Yard. You can read about it at ReadWriteWeb.

For me, this mitigates the risks of vendor lock-in and vendor failure because a competitor with no more system access than any other user was able to write such a tool. This tells me that Heroku's application environment is open, its data is open, its data interchange and storage standards are standard and open, and that its programming environment is open and transparent. It is one thing to tell the market that you are open and portable, but it is quite another to enable a competitor to prove it.

What this means is that producers of cloud applications should not be afraid to initially host applications at Heroku. There are usage scenarios where a more generic platform such as Engine Yard's may be more useful, but they don't often arise when an application is first launched. When the operators of these Heroku cloud applications start spending more money on hosting, they can compare the economics of switching to Engine Yard or some other IaaS or PaaS provider. In my opinion, that is also the point where they will realize that sticking with Heroku may actually be a better idea, but the freedom and ability to move your application elsewhere is reassuring.

Thursday, March 17, 2011

Cloud Computing Saves Cash Because You Can Turn It Off

Andrew Hickey wrote a great article at CRN about why cloud computing can save you money.

Of course, we know that the promise of cloud computing is that you can scale out cheaply, avoid capital expenditure, et cetera. But the big saving is that when traffic turns down, you can turn off the computers and stop paying.

Why is that important? Because it means that you can try different ideas without making a long term commitment to paying for computing. If it takes a while to ramp up, you don't pay until it does. If traffic declines for a while, you can shut off your computing and stop paying.

If your traffic spikes because you got "slashdotted", you start more cloud servers. But once the traffic passes, you can turn them all off. 

This means that you can more easily try different ideas, and only pay for the ones that need computing resources. Because you can turn off computing when not in use, the average cost drops, and that itself means that you can try more ideas for a fixed amount of money.

I once worked at a small start-up where we had a test server and a production server which sat idle most of the time. These servers cost us over $1000 per month in fees at a collocation centre, and these servers were incredibly underutilized for a year. If we were to do it again today, we could run the same software on an Amazon EC2 instance at less than a tenth of the cost (because we could turn them off at night). And we would also save on system administration, installation, patches, etc. Back then we had to buy the servers ourselves because EC2 was not available, and commissioning a server took a long time.

Suppose that you  live in a big city and have to choose between buying a car or taking the taxi. Public transportation is good in many big cities, so you don't need a car for most transportation. If you bought a car, you would pay a bundle of cash for insurance, maintenance, operation, ownership, licensing, and parking. Or you could just pay a taxi driver to take you for some trips on demand. Taking a taxi daily for a long ride is expensive, but if you only need a couple of short trips per week, it is still cheaper than what you would pay for parking alone. And you don't pay for a taxi when you  aren't using it.

Cloud computing is analogous to using a taxi. And the traffic patterns of many web applications are bursty enough that you might be better off using cloud computing on demand instead of fixed computing in a collocation centre.

That is how cloud computing will save you money. (I sound like the ING Direct guy).

Tuesday, March 1, 2011

Cloud Computing Concern: Vendor Risk

Cloud computing service vendors are popping up like mushrooms on a dung-hill. And rightly so. There are some tremendous opportunities to reduce the cost of computing. As a producer of cloud computing applications, you must be aware of vendor risk. You need to be able to know what to do if your cloud computing platform vendor goes out of business or stops providing the service.

Wednesday, February 23, 2011

How Big Is The Cloud Computing Market?

Sourya Biswas posted an interesting article which asked the question, "How Big Is The Cloud Computing Market?"

That is a really good question. His basic message is that it is growing quickly, and should take more than the expected 10% of the IT spending market. The analysis is based on the revenues of some of the biggest cloud computing providers.

So how would one characterize the cloud computing market? From what I have read about the cloud computing market, one could say that the cloud computing market,
  • is comprised of people who provide valuable IT services to their organizations, including application management, data management, and business continuity, to facilitate efficient business operations;
  • who face expensive challenges such as constrained capital budgets, high operations costs, the need to upgrade their infrastructure to meet new demands;
  • and who consult each other through personal networks, conferences, and trade publications to help them make the best decisions in procuring IT infrastructure and services. 
The 10% estimate is a good one based on this market characterization because the IT folks don't perceive that the cloud can assure the security of their data. One could argue that about 10% of data is not critical to the core business and could be hosted in the cloud without incurring business risks. 

This situation reminds me of what happened in the early 1990s. Computer engineers  used Unix computers, which were expensive, quite robust and reliable. Also, many people used computers to perform "mission critical" functions in their organizations. Such organizations took data management very seriously, and bought expensive computer systems in house to process data efficiently, securely, and reliably. IT people regularly spoke to each other about improving best practices and often talked about the best computing hardware, operating systems, and applications which would help them efficiently run their operations.

When DOS/Windows came into the enterprises around that time, the existing IT and computing professionals (including me...I was a Unix snob) sneered at DOS/Windows, and asked why anybody would bother spending money on a crappy underpowered machine that crashed 5 times per day. They sneered at the fact that data was unsecured and unmanaged. But something was afoot. Cool new applications and computing tools were enabling people to enjoy tremendous productivity gains without spending an arm and a leg on expensive Unix computers and expensive Unix software. Applications such as WordPerfect, MS-Word, Excel, Powerpoint, BASIC, D-Base, and others could be harnessed on a shoestring budget to give people the benefits of computerizing their business processes without incurring the high up-front costs of a Unix workstation and expensive Unix applications. Sound familiar?

Over time, we know what happened. MS-Windows virtually replaced Unix on the desktop, and Windows  on inexpensive PCs enabled the size of the computing market to grow by at least a factor of 10. And with a market that large, the best applications were developed for Windows. (Over time, Windows did become more and more like Unix in reliability and security).

What is the lesson here? The market for desktop computing was not the IT professionals and computer engineers who needed mission-critical data management and application reliability. Rightly or wrongly, unreliable data storage and data loss were not a huge concern. The market for desktop computing was the average worker who wanted to use computers to improve their productivity. What the "Unix snobs" didn't understand back then was that even if the Windows computer crashed 5 times per day, it was still a huge productivity booster for most people, and using mission-critical Unix work stations wouldn't have improved their productivity much more.

To carry this analogy to 2011, the cloud computing market is not the people who provide valuable IT services to their organizations as described above. The cloud computing market is, in fact, comprised of
  • people and organizations who use valuable IT applications to facilitate efficient business operations, either as end-users or as IT service providers;
  • who are either sensitive to the total cost of applications, operations, and infrastructure upgrades, or who are unable to use such applications because of high up-front costs;
  • and who consult each other through personal networks, social networks, conferences, trade publications and online resources to help them make the best decisions in procuring IT applications. 
It is interesting to note here that many of these folks are not overly sensitive to data management, data loss, or business continuity,  nor do they always feel the need to own the infrastructure along with the application.

When the cloud computing market is characterized this way, I believe it is much larger than the traditional IT computing market because in addition to the people from the previous market, it includes
  • small businesses who could not afford to procure, install, and manage business software, and
  • people who could not afford to procure, install, and manage applications. 
Providers in the cloud computing market have brought application costs down so low that even cash-strapped organizations and people can benefit from using them.

Based on this alternative characterization of the cloud computing market, I believe that the cloud computing market will eventually become larger than the traditional IT computing market, both in dollars spent and in people served.

Tuesday, February 22, 2011

PHP Web Framework in the Cloud: Symfony 2

The folks at CloudControl have just released a new video on installing Symfony 2 on their platform.

Symfony is a Model-View-Controller PHP web framework. The combination of easy deployment on CloudControl and fast development with Symfony 2 will help PHP developers quickly get their cloud applications up and running.

Sunday, February 20, 2011

Spot market for cloud computing capacity...

The Economist published an interesting article on cloud computing. It seems that SpotCloud is going to create a spot market for the exchange of cloud computing capacity.

How does it work?
  • Install Enomaly ECP on your server to make it ready to plug into the SpotCloud network. They have requirements that  your server must meet.
  • Register your server with SpotCloud, and it goes into the spot market. 
  • When a buyer needs your server, he rents it through SpotCloud and pays them. SpotCloud takes their cut and then pays you. 
This will become a great way for data centre owners to wring extra revenue out of their idling servers. Apparently when SpotCloud started, they were surprised by who offered servers, and by how many people did.

It will also help data centre owners who cannot use old servers for their own needs, but find it hard to sell or dispose of them because of the cost of capital write-offs. It allows them to continue using those servers for a while longer before disposing of them. Or not...

This service looks like a generalized version of SETI@home. I think that giving server owners an opportunity to gain more utility out of their existing but idling infrastructure is a good thing. It will work out well because it gives server owners a chance to get something for nothing. (It doesn't violate the laws of physics. The something is just excess capacity which perishes with time).

Another way to look at this is that SpotCloud is bringing together the idle capacity of multiple customers to create a kind of service similar to a virtual Amazon EC2.

Saturday, February 19, 2011

Cloud Services: Amazon S3: Static Web Sites

Amazon has just announced that they are going to allow people to host static web sites on Amazon S3.

Hosting static web sites at Amazon S3 will revolutionize static web sites, because it will cut the cost of hosting static web sites to almost nothing. The biggest beneficiaries will be web hosting companies and companies which build and operate web sites for their clients.

Let me show you. I used YSlow to analyze the static small business web sites of my friends Jeff, Samir, and Tyler. Here is what I got when I hit each home page once:

Now suppose that each site gets about 100 hits per day or 3000 hits per month, and a full site update is done once per month with all files changed. The traffic numbers become:
  • Bates Home Improvements - 54000 GET requests and 5.1 GBytes of outbound data transfer, and 1.7 MBytes of storage and the same for inbound data transfer.
  • Marmah Magnetics - 9000 GET requests and 17.1 MBytes of outbound data transfer, and 5.7 kBytes of storage and the same for inbound data transfer.
  • Ardoch Electric - 51000 GET requests and 804 MBytes of outbound data transfer, and 268 kBytes of storage and the same for inbound data transfer.
Now just plug those numbers into Amazon's S3 calculator and you get:
  • Bates Home Improvements - $0.69 per month.
  • Marmah Magnetics - $0.02 per month.
  • Ardoch Electric - $0.07 per month.
Bates' site is the most expensive to host. Now suppose that Jeff does a home improvement show on local TV every week, and instead of getting 3000 hits per month, he gets 30000. Storage and inbound data transfer stay the same, but GET requests and outbound data transfer go up by a factor of 10 (i.e. 540000 GET requests and 51 GBytes of outgoing data transfer). According to the Amazon pricing calculator, his static web hosting cost goes to a whopping $8.11 per month.
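The scaling in the Bates scenario can be sketched in Python. The per-hit figures (18 GET requests and roughly 1.7 MBytes transferred per hit) are back-calculated from the monthly numbers above:

```python
def monthly_traffic(hits, requests_per_hit, bytes_per_hit, site_bytes):
    """Scale per-hit measurements to monthly S3 usage. Storage and
    inbound transfer stay fixed at one full copy of the site, since
    the post assumes one complete site update per month."""
    return {
        "get_requests": hits * requests_per_hit,
        "transfer_out_bytes": hits * bytes_per_hit,
        "storage_bytes": site_bytes,
        "transfer_in_bytes": site_bytes,
    }

# Bates figures: 18 requests and ~1.7 MBytes per hit, 1.7 MBytes stored.
quiet = monthly_traffic(3000, 18, 1_700_000, 1_700_000)   # 54000 GETs, 5.1 GB out
busy = monthly_traffic(30000, 18, 1_700_000, 1_700_000)   # 540000 GETs, 51 GB out
```

Only the request count and outbound transfer scale with traffic, which is why even a 10x jump in hits moves the bill so little.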

Why are these low prices even possible? Because data storage has become super-cheap, servers have become super cheap, Amazon enjoys volume discounts, and because Amazon has standardized and massively automated their provisioning. All of this enables them to pass their savings to us.

Web sites hosted at Amazon S3 will revolutionize web site hosting. Why?
  • Web site operators for small static web sites can cut their hosting costs to the bone. 
  • If traffic grows for a static site hosted at S3, the web site operator doesn't have to do anything to scale the system, such as adding more servers, caching, network links, etc. A "massive" bump in traffic for a small web site is a mere blip on the traffic pattern of Amazon S3. They can handle it. Not having to worry about or do anything about scaling will reduce operations costs. 
  • Amazon S3 data is mirrored. That means even if disks or servers fail, you are very unlikely to lose your data. (You are more likely to lose data on your own server with a non-RAID disk). For a web site operator that means they spend less money replacing servers/disks and redeploying their static web sites. All costs of infrastructure maintenance and recovery are bundled in the price of the S3 service. 
  • Small traditional web hosts can gradually migrate their static hosting from their own servers or servers in a collocation centre to Amazon S3. This becomes useful especially when they have to dispose of old broken servers. As they do this, they will save a lot of costs for the acquisition, installation, configuration and commissioning of new servers. S3 has already done it. 
  • Web hosts of dynamic web sites could engineer the web sites so that S3 hosts and serves the static files of the web site while the dynamic content could be served through their servers. Overall that will mean that they will get more "bang" for their existing servers, because those servers won't have to serve up most static assets.
  • A lot of "long tail" web sites actually get very little traffic. There is no reason to deploy a ton of hardware and software just to serve that traffic, and Amazon S3 provides this opportunity. Furthermore, web hosting companies don't have to engineer their servers to figure out the right mix of customers to put on each one; Amazon S3 does it for them.
Hosting web sites at Amazon S3 (which is in the "cloud") will make it more cost-effective for businesses and people to host their web sites.

Here are the technical notes on how to set up your site at Amazon S3 after you get a (free) Amazon Web Services account and go into the management console:
  1. Create an S3 bucket with the same name as the subdomain of your static web site. So, if your site is www.example.com, create an S3 bucket named www.example.com.
  2. Write two static HTML files called index.html and error.html.
  3. (Follow the instructions at the S3 Web Hosting Guide). Upload an index.html and an error.html file to the S3 bucket. 
  4. Right-click on the S3 bucket (e.g. www.example.com), open its Properties, and on the "Web Site" tab check the box to enable the web site. 
  5. Designate your index.html as the main file and error.html as the error file. Set permissions on these files so that everybody can read. 
  6. The Properties tab will give you a URL (endpoint) for the site which looks something like www.example.com.s3-website-us-east-1.amazonaws.com. If you click on the link, you will see your index.html file rendered in a new web browser. 
  7. Go to your domain provider and create a new CNAME record for your subdomain (e.g. www.example.com) mapping it to the S3 web site endpoint from the previous step. 
After a few minutes or hours, your web site is live under a subdomain of your domain, and hosted at Amazon S3. 
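For the curious, the web site settings in steps 4 and 5 correspond to a small XML configuration document in the S3 API, which the console check-box sets on your behalf. A sketch of its shape (the index/error document names follow the steps above; check Amazon's S3 documentation for the exact, current schema):

```xml
<WebsiteConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <!-- Served for requests to the root of the site -->
  <IndexDocument>
    <Suffix>index.html</Suffix>
  </IndexDocument>
  <!-- Served when a requested key does not exist -->
  <ErrorDocument>
    <Key>error.html</Key>
  </ErrorDocument>
</WebsiteConfiguration>
```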

Friday, February 18, 2011

How To Reduce IT Service Costs In The Cloud

Alok Misra at Information Week posted this article on How To Reduce IT Service Costs In The Cloud.

The crux of the article is this. Because many cloud applications can be acquired on a subscription model, and because costs are often a fraction of what an in-house application would cost, an enterprise does not have to hire consultants for $12k (2 weeks) to figure out what to buy and how to install it. Instead, the enterprise can just buy a quick subscription for half of that money and try it out before they buy it.

Costs are cut because:
  • No on-site software installation means no installation costs. 
  • Low up-front costs (in the form of a subscription) instead of high up-front costs (in the form of licenses) mean that you don't need to pay for high-priced consulting to ensure that you buy the correct high-priced software. 
  • Software infrastructure maintenance (server fixes, OS software upgrades, security patching, etc) is done by the cloud application provider. i.e. There is no cost to you because it is bundled in the subscription price. 
I think that this is also why Google Docs is giving MS-Office a run for its money. With Google Docs:
  • No software to distribute, therefore no software distribution costs. 
  • Minimal advertising and marketing costs because it piggy-backs on Google Search & GMail. 
  • Minimal license/patent fees. 
All that is left is data hosting, software evolution, etc., and that is probably covered by advertising revenues. (With Google Docs, support is a paid option, so I would expect it to be self-sustaining.) When costs are so low, they don't even bother charging for anything more than optional support, which is $50/year/user if you choose to get it. In other words, Google Docs is a much cheaper way to get the functionality of MS-Office, and that is good for businesses that need to cut IT costs.

If you remember, a long time ago people used to buy boxed software to install on their computers. But with vastly reduced costs of distributing and selling software, it isn't profitable for a retailer to carry much software any more. PC software companies still make money, but they do it by offering support and upgrades; they distribute the software itself virtually for free using downloads and license keys acquired online. They can no longer earn good margins by just selling software. Low software prices mean more for the bottom line of businesses that use the software.

Just as the Internet completely gutted the cost model of software installed on your PC, cloud computing is gutting the cost model of installed server software. This is good for small and medium sized enterprises who would love to use more software to improve productivity, but cannot because of high up-front costs.

Now I am not advocating a world where software producers (e.g. developers, integrators, product owners) are not paid. It just means that the Internet and the cloud have drastically cut the costs of marketing, distribution, sales, and deployment.  It still costs just as much to produce the software.

Overall, this is why cloud applications will help small and medium-sized businesses by cutting their IT costs.

Thursday, February 17, 2011

Cloud Computing Concern: Security: User Names & Passwords

Does your cloud application require users to sign in? If so, it may mean that you are keeping user identification and password data on your system. That could get you in trouble if somebody gets unauthorized access to your data.

Many stories, such as the one reported at Wired, tell of a database full of userids and passwords being hacked and stolen. This is really bad news for your users if it happens because somebody hacked your site. Why?

Because most users don't have the mental capacity to remember a different userid and password for every single web application they use.

So what do users do? They use a small number of userid and password combinations for all web applications that they use. If somebody gets the userid/password by hacking your system and getting your database, they will probably be able to get to the accounts of those users on other systems. Very bad.

There is, of course, a simpler solution. Outsource your user authentication. Many well-known online services will authenticate users for your web application using protocols such as OpenID or OAuth. Examples include Google, Yahoo, Facebook, LinkedIn, and others. However, these authentication protocols are evolving and different providers use different methods.

The easiest one to use is JanRain Engage. You integrate your software with their system once, and they handle the ever-evolving authentication protocols and integration methods with 25-ish other providers including Google and Facebook. After that, you only have to configure your JanRain integration properties to work with the other providers.

Here is roughly how JanRain works:
  1. You place a login button on your web page which directs the user to a JanRain frame offering them a choice of providers.
  2. The user selects a provider and clicks the button. That directs the user to the login page at the provider (e.g. Google).
  3. That page asks for the user's userid and password for that provider, and asks the user's permission to share some information about the user's account. The user types in the proper userid/password and grants permission.
  4. When the authentication clears, the provider (e.g. Google) handshakes with JanRain, and JanRain then redirects the user back to your web application at a URL with a unique single-use token.
  5. Your web application parses out the token and makes a secure web services request call to JanRain to request the user's authentication information. Please note that the user's password is not sent by the provider to your web application. The email address is often sent, as is the OpenID token and various other information. And JanRain does not get the password either.
  6. Your web application then checks that information with what is in the web application's user database, and if there is a match, the user is considered authenticated, and is logged in.
  7. From there, your web application grants the user a login session and gives him access to all pages that require authentication that the user is authorized to see. 
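The core of steps 4 to 6 can be sketched in a few lines of Python. This is a hypothetical sketch: the endpoint URL, parameter names, and response fields below follow JanRain's documented auth_info call, but treat them as assumptions and verify against the current JanRain documentation.

```python
# Hypothetical sketch of steps 4-6: trade the single-use token for the user's
# profile, then pull out the fields to match against your own user database.
# AUTH_INFO_URL and the parameter/response names are assumptions drawn from
# JanRain's documented auth_info call.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

AUTH_INFO_URL = "https://rpxnow.com/api/v2/auth_info"  # assumed endpoint

def fetch_auth_info(api_key, token):
    # Step 5: a secure server-to-server call; the user's password never
    # reaches us, and JanRain never sees it either.
    body = urlencode({"apiKey": api_key, "token": token, "format": "json"})
    return json.loads(urlopen(AUTH_INFO_URL, body.encode("ascii")).read())

def extract_identity(auth_info):
    # Step 6: the fields we compare against the web application's user table.
    if auth_info.get("stat") != "ok":
        return None
    profile = auth_info["profile"]
    return {"identifier": profile["identifier"],   # the OpenID-style identifier
            "email": profile.get("email")}         # often, but not always, present
```

If `extract_identity` returns a record that matches a row in your user table, you grant the login session described in step 7.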
Outsourcing user authentication is a great way for a cloud application producer to gain the benefits of user authentication without getting into the messy business of securely storing passwords. A nice side benefit is that the application does not need "Forgot Password" functionality, because that is handled at the authentication provider. A nice side benefit for the user is that she does not have to remember the userid and password for yet another web application; she simply signs in with an identity she already uses every day, such as Facebook or Google.

Some people argue that outsourcing your authentication to JanRain, Google, or Facebook makes your site look "unprofessional". However, getting your users to authenticate with a well-known authentication provider is more professional, as is providing an authenticated service without needing to store their passwords. Also, users are more likely to register with your site if all they need to do is log in using the credentials of a service that they already use, such as Facebook or Google, because they don't have to memorize another userid/password combination.

Some people argue that trusting authentication to other providers won't work if their sites go down. That is true, but your site is almost certainly going to go down more often than Google or Facebook.

Wednesday, February 16, 2011

Want to Build A Killer Cloud App?

Are you a cloud application producer and do you want to build a killer cloud application? James Urquhart offers some interesting advice.

  1. Build an application that analyzes gobs and gobs of data. The cloud is a great place to do it, because you can provision a ton of computing on demand when the application is crunching data, and then release it (and therefore stop paying for it) when the data crunching is done.
  2. Explore applications that create commerce and communities. Exploring ideas is cheap on a cloud platform, and doesn't require high capital costs. Once you figure out what works, you can scale it in a cloud, a hybrid cloud, or in-house. The point is that you don't have to decide until you are successful. One great example is hosted on the Google App Engine.
  3. Context applications (à la Geoffrey Moore). These applications help businesses improve their operational efficiency, but do not contribute directly to their core products. These are safe to put in the cloud because most of them don't give their users any major competitive advantages. For example, web applications that manage chat, conferencing, wikis, and human resources processes are all applications that could be built in the cloud. Halogen Software for managing human resources is a good example.

Interesting advice. It seems rooted in the fact that CIOs must balance the need to lower IT costs with the need to maintain data security. These classes of applications offer both because they reduce IT costs and use data that isn't particularly critical to the core of the business. Read the article for more details.

Tuesday, February 15, 2011

How To Host Wordpress for free! *

The folks behind one PHP platform-as-a-service have put together a nice video on how to host a Wordpress blog on their platform. They can do it because Wordpress is based on the PHP/MySQL stack, as is their platform.

This video shows that it is very straightforward to get your blog going here.

(*) While your traffic is low, hosting is free. Also, unlike with a typical hosted blogging service, you can put AdSense ads on your blog and get yourself a free custom subdomain under a domain that you own.

Cloud Computing Concern: Security: Credit Card Data

Does your cloud application take credit card payments? If so, it may mean that you are keeping sensitive credit card data on your system and don't know about it. That could get you in trouble if somebody gets unauthorized access to your data.

There are regulations on how to treat credit card data, which are in the "Payment Card Industry Data Security Standard" or "PCI DSS". Some of the things you need to do as a cloud application producer that collects credit card data for payments are:
  1. Build and maintain a secure network. 
  2. Protect cardholder data. 
  3. Maintain a vulnerability management program.
  4. Implement strong access control measures.
  5. Regularly monitor and test networks.
  6. Maintain an information security policy. 
There are specific directives on how to do all of these things. These measures are expensive for non-cloud applications, but are even more expensive for producers of cloud applications with small budgets. However, you must implement these measures to avoid security breaches and related lawsuits.

There is, of course, a simpler way. Outsource your credit card processing. PayPal, with its Website Payments Standard, is one popular credit card processor; other processors offer similar server-integration methods. Here is roughly how they work.
  1. You place a payment button on your web page which directs the user to the credit card processor. Included is some hidden data from your web application which links to the details of the payment transaction.
  2. The user enters the credit card data at the credit card processor page which clearly says that the payment is for your web application. All sensitive data is stored at the credit card processor and their web sites are compliant with PCI DSS. 
  3. When the payment clears, the processor returns the user to your web application, which then triggers the software that starts the process of fulfilling the transaction.
  4. Alternatively, the credit card processor finishes clearing the payment, and then sends a message to a URL on your web application to indicate that  the transaction is complete. (There is usually an additional secure handshake to ensure that the request was authentic). That message can then trigger the software that starts the process of fulfilling the user's transaction.
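Step 4's server-to-server notification can be sketched as follows. This is a rough sketch modeled on PayPal's Instant Payment Notification handshake; the URL, the `_notify-validate` command, and the `payment_status` field are PayPal specifics quoted from memory, so verify them against the current PayPal documentation (other processors have their own equivalents).

```python
# Rough sketch of step 4: a server-to-server payment notification handler,
# modeled on PayPal's Instant Payment Notification (IPN). The URL and field
# names are assumptions to check against PayPal's docs.
from urllib.parse import urlencode
from urllib.request import urlopen

PAYPAL_URL = "https://www.paypal.com/cgi-bin/webscr"

def build_verification_body(notification_params):
    # Echo the notification back to the processor with the validate command
    # prepended, as the authenticity handshake requires.
    params = [("cmd", "_notify-validate")] + list(notification_params.items())
    return urlencode(params)

def handle_notification(notification_params, post=None):
    # Verify the notification, then decide whether to fulfil the order.
    post = post or (lambda body: urlopen(PAYPAL_URL, body.encode()).read().decode())
    if post(build_verification_body(notification_params)) != "VERIFIED":
        return "ignore"           # forged or replayed message
    if notification_params.get("payment_status") == "Completed":
        return "fulfil"           # trigger order fulfilment here
    return "wait"                 # e.g. a payment still pending
```

The `post` parameter exists so the handshake can be stubbed out in tests; in production the default `urlopen` call does the real round trip.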
Outsourcing your credit card processing is a great way for a cloud application producer to gain the benefits of accepting online payments without incurring the cost of PCI DSS compliance. 

Many people argue that outsourcing your credit card processing to PayPal or a similar processor makes your site look "unprofessional". (Credit card processors also give you some tools to skin your payment page on their site to look like your site.) But remember that looking "unprofessional" is much better than actually being unprofessional by not implementing PCI DSS; keeping your customers' data secure is what is truly "professional". For myself, I almost never make online payments unless it is either to a large public company that can afford to implement PCI DSS in-house (such as Google, eBay, Amazon), or a company that outsources credit card processing to a well-known processor such as PayPal (NASDAQ: EBAY). I simply do not trust small non-public companies to implement PCI DSS, whether or not they say they do.

Monday, February 14, 2011

Cloud Concern: Security: Heroku: What Went Wrong?

David Chen, founder of Duostack, posted the details of a security leak on Heroku. Heroku has since fixed the bug. Heroku is a multi-tenant PaaS provider. That means that different applications owned by different people may run on the same Linux virtual machine, but in different processes. This is not as secure as having each application run on its own Linux virtual machine, but the processes can be isolated from each other. (Unix/Linux has done this for ages.)

David Chen's security exploit was to access the slug file of somebody else's application server and use it to get access to source code, security credentials, etc for that application owner. Ouch!

So what happened?
  1. Over time, Heroku has provided more and more functionality via the "heroku console", a command line program used to manage a Heroku instance. The heroku console allows the user to execute Linux shell commands. That allowed David Chen to run shell commands to see the files used by others.
  2. Also, at some point, the folks at Heroku unintentionally relaxed user privilege restrictions such that if somebody knew about files written by other people, they could get code, credentials, etc.
David Chen was able to combine these two factors to uncover the user credentials, source code, etc from some other applications. He did the honourable thing and notified Heroku.

So what went wrong? In my opinion, Heroku did the right thing by creating a multi-tenant platform tailored for Ruby on Rails web applications. They abstracted away the operating system, and even developed new terms such as "slugs" and "dynos" for common concepts such as "application image" and "application server". Where they went wrong  was to allow the abstraction to leak by letting users run Linux shell commands via the heroku command line program. That leaky abstraction provided the ability for a smart hacker to see what was under the covers, and how to break it. When that combined with the relaxation of user privileges, it became possible to deduce the security credentials and source code of other applications.

Has Heroku fixed the problem? They fixed the specific problem, and they are well on their way to preventing future security problems and/or detecting them quickly. They fixed the problem, introduced regression tests to ensure that the problem stays fixed, introduced security checks into their development process, and are conducting more independent security audits. See here for details.

What's the lesson here? Ask your cloud provider what they are doing for security. Ask yourself if that is enough to meet your security needs.

Heroku definitely let their guard down. But their response to this incident was excellent: they fixed the bug, put in preventative measures, and put in a process to audit for and catch security defects. That is as good as it gets, and it is what will help them become a trustworthy cloud computing platform.

Friday, February 11, 2011

How To Hire a Rails/Heroku Cloud Developer

Many cloud application producers need to hire developers for their specific platform. But it is hard to differentiate between developers because most resumes will have the right words inserted.

The folks at Siyelo have an interesting approach. They posted a job ad in which they asked applicants to write a Ruby/Rails application which, when deployed on Siyelo's servers, would serve up their resume and enable others to upload resumes.

They gave extra points for TDD/BDD, a good git commit log, and easy deployment to Heroku.

That is an absolutely brilliant way to determine who will be able to meet their needs.

Cloud platforms such as Heroku make this possible, but I'm sure it can be done on other platforms.

The nice thing about this idea is that the software application they asked the applicant to write is so simple that it couldn't be sold for much money, and a capable developer would be able to build it in the time it takes to write a customized resume and cover letter.

Thursday, February 10, 2011

Getting started with cloudControl under Windows - Feb 2011

The folks at cloudControl just released a new version of their command line tool, and here is an updated video of how to get going on cloudControl under Windows. (They also now have documentation for the version control systems git and bazaar.)

Remember that cloudControl is a cloud-based platform on which to host PHP applications. Hosting a low-traffic site is free or really cheap, and it automatically scales for higher traffic.

Usability Testing in the Cloud

Testing your web applications for usability is a very useful thing to do. But even if you have a vast array of user tests and automated regression tests for your cloud application, it is hard to know whether your application is usable. Why?
  • Most small cloud application development shops do not have usability designers available to ensure that the application is usable. And engineers are famously unpredictable in designing usable software.
  • Over time, most people intimately involved with developing the application become blind to its usability defects because they figure out ways to work around them. Your paying customers won't be so forgiving.
  • User-testing can be time-consuming and expensive. You have to get users who don't know the application, give them a set of tasks, and then capture their thoughts as they go through the tasks.
There is a company that offers usability testing "in the cloud". Here is what you do:
  1. Publish your application or a prototype on the web. 
  2. Write down about 10 minutes' worth of tasks on an instruction sheet for your user to perform. The tasks should say what to do, and not so much how to do it. In a usable web application, the user should be able to figure out the "how" from the "what".
  3. Go to the testing company's site. Sign up. Pay. Register your job, including any URLs, login instructions, and possibly other particulars. It costs $39 per user per job, and good user-testing practice suggests that you should get at least 3 users per job for effective testing.
  4. The company has freelance testers who will test your application. Each testing session (about 15 minutes) is captured in a video screencast. The testers also write comments about your application and upload them along with the video.
  5. You then download the videos and listen to the testers think out loud as they use your software.
  6. Once you have recovered from the shock and humiliation, take the feedback from these users and fix your usability problems.
  7. Retest.

This kind of service is useful to cloud application producers because they don't have to keep usability testers on staff; they only buy testing services on demand. A moderately well-tested application (for usability) is possible at a small fraction of what it would cost to keep usability testers on staff or to bring in testers for usability testing sessions.

Wednesday, February 9, 2011

Use Test Driven Development To Lower Development Cost of Cloud Applications

In the last few years a new style of development has become influential because it delivers more predictability and productivity. It is called Test-Driven Development (TDD) or Behaviour-Driven Development (BDD). What is it? In the simplest terms you:

  1. Write an executable test case, using one of many testing frameworks (e.g. RSpec for Ruby, JUnit for Java, NUnit for .Net, and Expect for interfaces that use telnet and other network protocols), which describes what you expect your software to do. (There are books on RSpec, Expect, JUnit, and NUnit.) This test should not require any manual assistance because it will eventually end up in the automated regression test suite.
  2. Run your test, and ensure that it fails. 
  3. Write your application code to make the test pass.
  4. Adjust your test and code if they were not specific enough, and ensure that the test passes.
  5. Automatically run all previously written tests (the regression test suite) against your software, and make sure that the tests that were passing continue to pass.
  6. Fix code for tests that previously passed, but now fail. 
  7. Is the software finished? If not, go back to step 1 and add a new test and repeat. 
This software development technique lets programmers declare their code "done" because it is done: the developer has proven with tests that the code works and did not break any other code. It therefore gives the software producer more confidence that the reported percentage of project completion is correct.
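As a concrete illustration of the loop, here is a tiny red-green example using Python's unittest; the `discount` function is a made-up example of mine, not something from the original description.

```python
# A minimal illustration of the TDD loop with Python's unittest.
# Step 1: the test class below is written first and fails, because
# discount() does not exist yet.  Step 3: the implementation makes it pass.
import unittest

def discount(price, percent):
    """Apply a percentage discount -- written only after the test demanded it."""
    return round(price * (100 - percent) / 100.0, 2)

class DiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(discount(200.0, 10), 180.0)

    def test_zero_discount_changes_nothing(self):
        self.assertEqual(discount(59.99, 0), 59.99)

# Step 5: run the whole suite on every change, e.g. with
#   python -m unittest this_module
```

Every new feature adds another test class like `DiscountTest`, and the growing suite becomes the regression net described in steps 5 and 6.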

This is a useful technique for software development in general, but why is it even more useful for cloud application producers?
  1. Because some cloud platforms facilitate almost instant deployment, cloud application producers can take advantage of this feature to improve productivity. Easier deployment means easier integration testing, and that means you can find and fix integration bugs faster. This leads to faster development.
  2. Because it is easier to set up geographically distributed development teams when deploying to a cloud platform, it will happen. It is easier to manage a geographically distributed team if you mandate and enforce that every code submission comes with a test suite that passes, and that the new code also passes the regression tests. In other words, geographically distributed teams don't have to suffer the lower productivity caused by a lack of communication, because the passing tests serve as useful communication between team members.
  3. It is easier to measure developer performance based on written code and passed tests, and that is important if you don't see the developer physically every day.
  4. It is easier for a cloud producer to see the complexity of a feature request when it implies a large number of reasonable test cases. Knowing which kinds of features require a lot of tests (and therefore labour cost), the cloud application producer can make better prioritization decisions when using code and test data from previous software developed with TDD. 
Test-Driven-Development is therefore more useful to cloud application producers than to traditional application producers. TDD is also a very useful technique to improve development productivity.

Tuesday, February 8, 2011

Amazon Simple Email Service (SES)

Amazon recently announced its new Simple Email Service, or Amazon SES. It lets your applications send email through an Amazon SES API.

Why is this service amazing for cloud apps producers?
  1. You don't have to install, configure, and tune your email servers.  That will save you a lot of money in system administrator time, software licenses, servers, etc. 
  2. The service is dirt-cheap. (Actually, dirt is probably more expensive). Currently it is priced at $0.10 per thousand emails, (where an email is one sender and one receiver), and data transfer is about $0.15 per Gigabyte up to 10 TBytes/month, and it goes down from there.
  3. Spam and malware filtering is applied to all outgoing mail. That way, SES becomes a trusted provider and your email doesn't get labeled as spam by ISPs because of some jackass trying to use Amazon SES to send malware and spam. 
  4. They collect data on things like bounces and complaints, and make it available to you.
Overall, Amazon SES helps you cut your development costs and operations costs of your cloud applications.

And if you are building your cloud apps using Ruby on Rails, there is a nice published guide to using Amazon SES with Ruby & Rails.
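As a rough sketch (in Python rather than Ruby), here is how the pricing quoted above works out, and what a send might look like with the boto library. The boto calls follow its documented SES API, but treat the call names and the early-2011 pricing figures as assumptions to verify.

```python
# estimated_monthly_cost() works through the pricing quoted above:
# $0.10 per thousand messages plus roughly $0.15 per GB transferred.
# send_welcome() is a sketch of the API call using the boto library;
# the region, addresses, and credentials setup are placeholder assumptions.

def estimated_monthly_cost(emails_sent, gigabytes_transferred):
    """Back-of-the-envelope SES bill for one month, in dollars."""
    return emails_sent / 1000.0 * 0.10 + gigabytes_transferred * 0.15

def send_welcome(to_address):
    import boto.ses   # third-party; pip install boto
    conn = boto.ses.connect_to_region("us-east-1")  # reads AWS keys from the environment
    conn.send_email(source="welcome@example.com",
                    subject="Welcome!",
                    body="Thanks for signing up.",
                    to_addresses=[to_address])
```

For example, 100,000 emails and 2 GB of transfer in a month comes to about $10.30 — which is why I say dirt is probably more expensive.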

Thursday, February 3, 2011

What is Cloud Computing

An author asks an interesting question: "What is Cloud Computing?" This is a good question, and I'll take a stab at answering it from two points of view: that of the user of cloud services, and that of the producer of cloud services.

I'll do the easy one first. Users of any internet service may know about servers, routers, load balancers, multi-tiered distributed architectures, et cetera, but they don't care. They care about where the service is and what it does. For example, Google email is a well-known internet service. You can find it at the URL mail.google.com. You know that you can go to that service and use web mail. Whether Google has implemented the service on one server, many servers, on-demand servers, or whatever, the user has no idea. He only knows that mail.google.com is the URL that gives him Gmail service. i.e. For users, cloud computing is just the computing services they get from the internet.

The next one is harder. What about producers of cloud computing services? Historically, a producer had to develop his application and then deploy it on a server to be visible on the internet via URLs. The producer acquired an IP address for the server, and then mapped his domain name to that IP address via his domain host. For example, a friend of mine runs a site whose domain is registered at a domain name service and points to an IP address at his hosting provider.

However, when sites got more traffic, it was hard to serve the same domain name with just one server. That is when load balancers were brought in. A load balancer is just a device that appears like a server, but it takes requests to the server and dispatches them to one of many actual servers which then fulfil the request. Adding servers allowed web sites to do more and more work while acting as one URL. However, the producer of the web site had to set up load balancers, servers, and networking between them to make sure that they got the job done.

Setting up a little network of load balancers and servers was hard, and took time. If traffic went up it took a long time to acquire, provision, and deploy more servers. And the producer had to pay for that infrastructure even if traffic went down.

Eventually large companies such as Amazon and Google had to deploy large numbers of servers, which meant that they had to detect, find, and fix or replace failed servers more often. (This is expensive to do manually.) Eventually they figured out how to install enough banks of idling extra servers to be able to take over from failed servers automatically and almost immediately. Then they made it so that provisioning servers from the inventory of installed but idling servers could happen via a configuration action or even an automatic action. These last two developments were crucial to the development of cloud computing because they meant that an application producer within Amazon or Google did not have to worry about servers, load-balancing, fault-tolerance, or recovery, because it was all handled consistently and automatically. With the worry of provisioning servers and network infrastructure taken away, cloud computing was born within Amazon and Google. Eventually somebody realized that they could provision a few more banks of servers, put a billing engine in front, and rent the servers out on demand; this became the various Amazon Web Services, and at Google it underpinned services such as Gmail and Google Apps. i.e. For application producers, cloud computing is the ability to deploy applications without having to worry about physically provisioning servers, networking, storage, and the cumbersome details of application deployment.

There are a few different flavours of cloud computing.
  • Infrastructure-as-a-Service (IaaS) - This is where servers, storage, and networking can be provisioned by the cloud application producer using provisioning interfaces (e.g. Web, command line, API), but the cloud producer must configure these devices and their software stacks, then deploy his application, and implement monitoring, automatic scaling, et cetera. Examples include Amazon EC2 and Zerigo.
  • Platform-as-a-Service (PaaS) - This is where servers, storage, networking, some programming stacks, and automatic application deployment tools come bundled as provisionable entities. The cloud application producer must use the automatic deployment tools to deploy their applications, but everything else is handled by the PaaS provider. Examples include Heroku, cloudControl, and Google App Engine. PaaS platforms often come with severely limited choice in the programming stack, but that is the price that cloud application producers pay for not having to worry about configuring the infrastructure.
  • Web Services - This is where a distinct service has been made available on the web for others to consume via web service APIs, where each API is accessible at a set of URLs. Examples include Google search, Google Maps, PayPal, JanRain Engage authentication, Facebook Login, Twilio, Zerigo DNS, etc. Users may or may not also see these services as distinct applications (SaaS), but cloud application producers may use these APIs to build their cloud applications. Web services are monitored and measured, and often billed as well.
  • Software-as-a-Service (SaaS) - This is where a cloud application producer has deployed the application to a cloud computing platform/infrastructure, and only makes the service available via a URL. SaaS is usually not relevant to cloud application producers unless the SaaS also has a web services API, which allows the service to be consumed by another cloud computing application.
I hope that this blog posting answers your question and questions your answer about "What is Cloud Computing".

Tuesday, February 1, 2011

The Services Used By Y Combinator Startups

In an article, the authors analyze the various services used by software start-ups funded by Y Combinator. (Y Combinator is a kind of super-angel investor fund which funds small software start-up teams with small amounts of money to get them going before they get money from other sources.) The full table on which the article is based shows that many of the services used by startups are cloud-based. For example, Google is the overwhelming choice for email host.

One interesting statistic is that although many of the startups appear at first glance to use a generic web host, the [SSL] certificate type of "*.heroku.com" shows that these startups are actually deploying their systems on the platform-as-a-service (PaaS) vendor Heroku.

Thursday, January 27, 2011

Why Instant Deployment Matters

The folks at Heroku wrote a great post on why instant deployment matters.

In a nutshell, programming had gotten about 130x more effective between 1996 and 2008, but provisioning had only gotten 10x better. From their numbers, it meant that in 1996 provisioning took about 2% of the effort of a project. Today, especially with agile development, the number is closer to 25%.

The instant deployment scenarios offered by providers such as Heroku, cloudControl, and Google App Engine mean that the 25% can be knocked back down to something closer to 5%. I have used Heroku, and deployment is just that fast. Why?
  • There are no servers to provision. You just need your user account and credit card (which you need with traditional cloud computing hosts anyway).
  • There are no application servers to start. Heroku does that automatically in a standard way. 
  • There is one data transfer to change your code, which is from your git repository to the server's git repository. With other hosts, there may be ssh sessions, transfer of scripts, and a variety of other things. 
  • Even if you script the whole experience (e.g. with capistrano), it still takes longer than a Heroku git push, and you still have to write and debug the script.
When you reduce application provisioning (or deployment) costs from 25% to 5% of your development costs, you get some big productivity gains.
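Heroku's numbers are easy to check. The helper below works through the arithmetic under one simplifying assumption of mine: project effort splits cleanly into programming and provisioning, each shrunk by its own speedup factor.

```python
# Checking the arithmetic behind the post: if provisioning was ~2% of project
# effort in 1996, programming got ~130x more productive, and provisioning
# only ~10x, what share of the (now smaller) total does provisioning take?

def provisioning_share(base_share, programming_speedup, provisioning_speedup):
    programming = (1.0 - base_share) / programming_speedup   # remaining programming effort
    provisioning = base_share / provisioning_speedup          # remaining provisioning effort
    return provisioning / (programming + provisioning)

share = provisioning_share(0.02, 130.0, 10.0)
# share comes out around 0.21, i.e. roughly the "closer to 25%" the post cites
```

The same helper shows why instant deployment matters: push the provisioning speedup toward the programming speedup and the share falls back toward the original 2%.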

Wednesday, January 26, 2011

Amazon Elastic Beanstalk

Amazon AWS has been a leader in the cloud computing space for a while. They are more of an "Infrastructure as a Service" (IaaS) vendor. They pioneered renting out virtual servers in the "cloud" by the hour, which can be fired up or shut down at will.

A number of companies used Amazon's IaaS as their starting point to build their own value-added layers for language-specific hosting. Even Amazon did that with Amazon RDS. It is pretty obvious that "Platform as a Service" (PaaS) companies are starting to become attractive. Heroku (a Ruby/Rack PaaS) is based on Amazon AWS and was just acquired for $212 million. cloudControl (a PHP PaaS) is also based on Amazon AWS. I'm sure there are others. So why should Amazon provide the infrastructure and let others gain the benefits of building a PaaS platform on top of Amazon IaaS? They shouldn't. And to this end, Amazon just announced Amazon Elastic Beanstalk. Here is some of what they offer:
  • Java/Tomcat application stack.
  • Any of the databases in Amazon RDS.
  • Auto-scaling so that your application will automatically start up new application servers in response to heavy traffic, and shut them down when the traffic lightens up.
  • Pre-configured load balancer and JVM/Tomcat setup.
  • An application subdomain (e.g. myapp.elasticbeanstalk.com), which should eventually be mappable to a subdomain of your own domain via a CNAME record at your domain host.
  • Access to log files. 
  • Single-command application server restart. 

They don't seem to offer as much as Heroku, but they offer a lot of control at the lower layers of the application stack if you need it.

Sunday, January 23, 2011

Deploying Toto on Heroku

This video shows how to deploy Toto, a lightweight Ruby blogging engine, on Heroku. Once again, the instructions are so simple that many folks could do it without much effort.

Deploying Toto on Heroku from jdesrosiers on Vimeo.

Friday, January 21, 2011

Amazon S3: Awesome cloud storage

Amazon S3 (Simple Storage Service) is one of the best cloud computing services available for storage. When and how can it help the owner of a web site or application?

Amazon S3 is best used when you have a web site or web application that must serve up lots of large static files such as images, audio files, video files, and software downloads. S3 is useful because:
  • Your large files are not stored on your web server but on Amazon's servers. 
  • Amazon's storage service is far more reliable than the hard disk on your web application server. 
  • When you serve up web pages from your web site or application, the web site files will come from your web server, but you can have pictures, videos etc. come directly from Amazon S3. The side benefit is that if your traffic goes up, your server won't get bogged down serving up large static files because Amazon S3 (with their big fat connections to the internet) will do that. 
  • Even with high traffic, your site will seem responsive because your site's web server is only serving up relatively small bits of HTML, CSS, and JS while Amazon S3 serves up the big static files. 
  • With Amazon S3, you only pay for what you use. They charge for every request, and for data transfer, but it takes a lot of traffic to rack up $1 in bills. For example 1GB of storage (e.g. 1000 files) and 7 GB of monthly data transfer should cost less than $1/month with Amazon S3 (based on what I read from their pricing page). 
  • Amazon S3 data is stored over multiple servers in multiple locations. Even if two of those locations go down, you still won't lose your data. That is more reliable than a server with hard disks in a RAID 1 configuration. (As a side note, many people use Amazon S3 to store program and data backups). 
  • Some cloud platforms, such as Heroku, do not let web applications write to the local file system. If you need to upload and store files with such an application, you must store those files elsewhere. Amazon S3 works very well in this case.
Here is roughly how it works:
  • You get some API credentials for your S3 storage account. You will be billed monthly.
  • You create a "bucket", a named container for your files, and set up access control lists so that anybody with the keys can create "objects" in the bucket (i.e. upload files). You can upload assets using the S3 management console, the S3Fox plugin for Firefox, or the Amazon S3 API, which is callable from most popular programming languages. For example, if somebody uploads files to your web application, the file upload handler in your web application would add the file (via an HTTPS POST) to your bucket in Amazon S3, and you would then associate the URL of that object with the file in your application database.
  • Wherever you have links to static files in your web site or application, point them to the URL for the file in Amazon S3.
  • That's it!
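The steps above can be sketched in a few lines of Python. The bucket name "my-site-assets" and the key are hypothetical, and the actual upload call (shown in a comment, using the modern boto3 AWS SDK) is omitted so the sketch stays self-contained; the runnable part just builds the public URL you would store in your database.

```python
# Sketch of the S3 flow described above. Bucket name and key are
# illustrative examples, not real resources.
def s3_object_url(bucket, key):
    """Public URL for an S3 object, in the virtual-hosted style."""
    return "https://{0}.s3.amazonaws.com/{1}".format(bucket, key)

# In a real upload handler you would first push the bytes to S3, e.g.
# with the boto3 SDK:
#   import boto3
#   boto3.client("s3").upload_file("photo.jpg", "my-site-assets",
#                                  "uploads/photo.jpg")
# ...then associate the object's URL with the file in your database:
url = s3_object_url("my-site-assets", "uploads/photo.jpg")
print(url)  # https://my-site-assets.s3.amazonaws.com/uploads/photo.jpg
```

Your page templates then emit these S3 URLs for images, videos, and downloads, while your own server only serves the HTML.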
Amazon S3 is a great cloud storage service to improve the performance and response of your web site or application, and also reduce the costs of storage and bandwidth for sites and applications with large storage needs. 

    Thursday, January 20, 2011

    Screencast: How To Install Joomla on cloudcontrol

    A lot of PHP web site or web application owners start their sites by deploying the content management system called Joomla, and then customizing it for their needs. Joomla is popular and robust, so it saves the owner of the web application a lot of software development costs. 

    The folks at cloudcontrol have put together a great video on how to install Joomla, a popular content management system (CMS), on their platform. This gives the benefit of free PHP hosting for low traffic, and the ability to automatically scale the site if traffic grows. No new code is needed for the automatic scaling. And Joomla gives a great starting point. 

    Before you start, there are some dependencies. In general, you should ensure that you have installed:
    • Bazaar - a distributed version control system for Windows or Linux
    • a tool to generate a public/private key pair
    • (possibly) the Python programming language
    • cctrl - a tool from cloudcontrol which lets you manage your application from the command line. 
    Go to the dependencies page to see exactly how to do this for your operating system.

    Then watch the video and type along with it.

    Once you are up and running, you can make changes to customize Joomla to your needs, and deploy again.

    This is pretty cool and really powerful.

    Tuesday, January 18, 2011

    Language-Specific Cloud Hosting

    Traditional or first generation web hosts came in two basic flavours.

    1. Shared hosting - In this scenario, you get an account on a server, a database, some disk space, perhaps access to an email server, a web server, and one of a small number of web programming languages (usually PHP and one or two others). You also get technical support, and the quality of the support varies with the cost of the service.
    2. Private hosting - In this scenario, you get full operational ownership of a server or a virtual server. With the server you get a root account which lets you create as many accounts as you want, and you install and configure the software to your liking. You also get technical support, also varying in quality with the cost of the service. 

    Shared hosting is usually free or very cheap; private hosting is more expensive. With both shared and private hosting, deployment and administration are about the same. You are responsible for the file structure, permissions on directories, and getting the software to the server using FTP. You may then also be responsible for restarting the application server and/or web server. 

    A new class of hosting has emerged which I'll call language-specific hosting. Here are the characteristics.
    1. The web host supports one language. e.g. Heroku supports Ruby running on the Rack framework, cloudcontrol supports PHP, and Google App Engine supports Python. 
    2. Deployment is not done by copying files over FTP. It is done through a source-code control system interface, by pushing a version-controlled set of files. Heroku uses a "git push" to deploy the code; cloudcontrol uses a "bazaar push". 
    3. Application and web server restarts are standardized. Heroku uses a hook on the git push to pack up the code and restart the application server. cloudcontrol does the same on its bazaar push. 
    Why are language-specific hosts a good thing for a cloud application developer or a cloud application owner?
    1. Standard parts mean better support. Heroku only has to worry about the Ruby programming language and a single database (PostgreSQL). cloudcontrol only supports PHP and MySQL. Because the support people deal with a small, well-known set of components, they can provide really good support for them.
    2. Well-integrated optional parts mean fewer failure points. Heroku and cloudcontrol both support various add-ons for functionality such as databases, application monitoring, email, and messaging. However, each of these components is very well tested before being added to the standard list of add-ons. 
    With limited choices and standard parts, language-specific cloud hosts do not let application owners make as many configuration, architecture, integration, and performance mistakes. They are therefore naturally able to provide better support and more reliability than a traditional cloud application host. 

    Language-specific cloud hosting platforms are going to give cloud application developers and owners new productivity benefits in development and operations.

      Monday, January 17, 2011

      Cloud Computing Concern: Availability

      One of the big hurdles facing people wanting to deploy to a cloud computing environment is service availability. Will the service be available at least as much as our in-house servers?

      According to this article (and others), Google's Gmail service has 32 times less downtime than typical in-house email services. Their downtime is so low, in fact, that they have removed scheduled downtime from their terms of service. i.e. All downtime is their fault.

      The fear question has now been turned around: why go with an in-house email solution that will probably be down more than 3 hours per month, when Gmail is down only 7 minutes per month?
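To put those downtime figures in the more familiar "nines" terms, here is the conversion from downtime-per-month to an availability percentage, using the numbers quoted above (a 30-day month is assumed for simplicity).

```python
# Convert monthly downtime into an availability percentage.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def availability(downtime_minutes):
    """Percentage of the month the service was up."""
    return 100.0 * (1 - downtime_minutes / float(MINUTES_PER_MONTH))

print(round(availability(180), 2))  # in-house: 3 hours down -> 99.58
print(round(availability(7), 3))    # Gmail: 7 minutes down  -> 99.984
```

Three hours of monthly downtime is only about "two and a half nines", while seven minutes is close to four nines.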