@Jonin60Seconds says “Your World Has Changed”

My good pal Jon M Bishop presented at Digital Surrey last Thursday on how mobile is changing our world. He delivered an engaging talk and described a number of excellent examples of how mobile phones are enabling transactions in Africa. Here are some key statistics from Jon’s talk:

  • There are around 6 million Internet users in South Africa; only 750k are on fixed-line broadband.
  • South Africa’s MXit has also created the most engaged social network in the world, and it is all on mobile, despite only being launched in 2003!
  • The M-Pesa service allows Kenyans to make mobile payments for items such as music tickets, food and taxi fares.
  • Safaricom is now the biggest bank in East Africa, despite being a communications company.
  • Google estimates that $3.3 billion will be spent on mobile ads this year; $1 billion of that will be with Google alone.

Jon then went on to discuss a real-life case study of his friend Shane, who has shunned the Apple iPhone platform in favour of BlackBerry devices. In particular, Shane enjoys using the BlackBerry Messenger (BBM) service to stay in contact with his friends and keep informed. Jon also noted that BBM was a key technology that helped ad hoc gangs of youths stay connected during the London riots.

Learn more about Jon’s talk by reading his slide deck above.

Software Plus Services Explained

 

Hat Tip to Steve.

This video is by far the best explanation of Microsoft’s Software Plus Services strategy I have seen. A great job done by the guys at Microsoft and Common Craft!

A YouTube version is also available here.

Ray Ozzie’s PDC2008 Keynote Word Cloud


Ray Ozzie’s Word Cloud from Microsoft’s PDC 2008. I’ve also added his keynote transcript below. A very interesting read, and it gives us a glimpse of where today’s computing is headed.

Today, we’re in the early days of a transformation towards services across the industry, a change that’s being catalyzed by a confluence of factors, by cheap computing and cheap storage, by the increasing ubiquity of high bandwidth connectivity to the Internet, by an explosion in PC innovation from the high-end desktop to the low-end netbook, by an explosion in device innovation, Media Players, Smart Phones, net-connected devices of all shapes and sizes.

At PDC this week you’ll hear our take, Microsoft’s take on this revolution that’s happening in our industry’s software foundation, and how there’s new value to be had for users, for developers, for businesses, by deeply and genuinely combining the best of software with the best aspects of services.

Today and tomorrow morning, you’re going to hear us map out our all up software plus services strategy end-to-end. You’re going to see how this strategy is coming to life in our platforms, in our apps and in our tools. You’re going to see some great demos. You’ll get software to take home with you, and you’ll get activation codes for our new services.

So, I’ll be with you here for the next couple of days. Tomorrow, I’ll be up here and we’ll talk about the front-end of our computing experiences. We’ll focus on the innovations in our client OS and on tools and runtimes and services that enable a new generation of apps that span the PC, the phone, and the Web.

But today we’ll be focusing on our back-end innovation, platforms, infrastructure, and solutions that span from on-premises servers to services in the cloud to datacenters in the cloud.

Back-End Innovation: Platforms, Infrastructure and Solutions

You know, over the past couple of weeks I’ve read some pretty provocative pieces online taking the position that this cloud thing might be, in fact, vastly overblown. Some ask: what’s the big deal, and what’s the difference between the cloud and how we’re now treating computing as a virtualized utility in most major enterprises?

And in a sense these concepts have been around for what seems like forever. The notion of utility computing was pioneered in the ’60s by service bureaus like Tymshare and Comshare.

Virtualization was also pioneered in that same era by IBM, whose VM/370 took virtualization very, very broadly into the enterprise datacenter.

Today, that same virtualization technology is making a very, very strong comeback, driven by our trend toward consolidation of our PC-based servers. With racks of machines now hosting any number of Virtual Servers, computing is looking more and more like an economical shared utility, serving our enterprise users, apps and solutions.

But today, even in the best of our virtualized enterprise datacenters, most of our enterprise computing architectures have been designed for the purpose of serving and delivering inwardly facing solutions. That is, most of our systems and networks have been specifically built to target solutions for our employees, in some cases for our partners, hundreds or thousands or perhaps tens of thousands of concurrent users; desktops, datacenters, and the networks between them all scoped out, audited, controllable and controlled by an IT organization skilled in managing the enterprise as the scope of deployment.

But more and more the reach and scope that’s required of our systems has been greatly expanding. Almost every business, every organization, every school, every government is experiencing the externalization of IT, the way IT needs to engage with individuals and customers coming in from all across the Web.

These days, there’s a minimum expectation that customers have of all of our Web sites delivering product information, customer support, direct fulfillment from the Web.

But the bar is being raised as far richer forms of customer interaction are evolving very, very rapidly. Once on our Web sites, customers increasingly expect to interact with one another through community rating and ranking, through forums with reputation, through wikis and through blogs.

Companies are coming to realize that regardless of their industry, the Web has become a key demand generation mechanism, the first place customers look, every organization’s front door.

Now more than ever, the richness, reach and effectiveness of all aspects of a company’s Web presence has become critical to the overall health of the business.

And companies’ IT systems now have to deal with far more outside users, their customers, than the users that they serve within their own four walls.

As a result, one of the things that’s begun to happen over the course of the past few years is that the previously separate roles of software development and operations have become incredibly enmeshed and intertwined. IT pros and developers are now finding themselves with the need to work closely together and to jointly team and jointly learn how to design and build and operate systems that support millions or tens of millions of customers or potential customers spread across the globe, clicking through ads, doing transactions, talking with the company, and talking with each other.

For some companies’ Web-facing systems, the demand that they see on their Web sites might come in peaks and valleys. It might shoot up during the holidays or new product introductions or during product recalls or when good things or bad things are going on in the blogosphere.

And so today, at great expense many companies tend to add ample spare capacity for each of the apps for which traffic must scale, more floor space, more hardware, more power, more cooling, more experts on networks, more operations personnel.

And a company’s Web-facing challenges can go much further than that if the systems are housed in a single location and you have a variety of failures such as cable cuts, earthquakes, power shortages; you know, any of these things could cause critical continuity issues that could end up being huge for the business.

The answer, of course, is to have more than one datacenter, which helps with load balancing and redundancy. But doing this is extremely tough. It requires a good deal of human expertise in loosely coupled systems design, in data replication architectures, in networking architectures and more.

And having just two datacenters, while challenging, may not be enough. Far away customers experience network latency issues that can impact the experience or the effectiveness or the user satisfaction with the Web site.

So, to serve these global customers you may need to locate datacenters around the world, and this may mean dealing with a whole host of issues related to your data, or to the communications between the users of your Web sites, going on outside your borders: political issues, tax issues, a variety of issues related to sovereignty and so on.

And so reflecting back on the question I asked earlier for developers or IT professionals, is this cloud thing really any different than the things that we’ve known in the past, the answer is absolutely and resoundingly yes. Things are materially different when building systems designed to serve the world of the Web as compared with the systems designed to serve those living within a company’s own four walls.

And there’s a very significant reason why it might be beneficial to have access to a shared infrastructure designed explicitly to serve the world of the Web, one having plenty of excess capacity, providing kind of an overdraft protection for your Web site, one built and operated by someone having the IT expertise, the networking and security expertise, all kinds of expertise necessary for a service that spans the globe.

High-Scale Internet Services Infrastructure: A New Tier in Computing Architecture

A few years ago, as it happens, we at Microsoft embarked upon a detailed examination of our own Web-facing systems, systems serving hundreds of millions of customers worldwide using MSN, systems delivering updates to hundreds of millions of Windows users worldwide, systems that are visited by Office users every time they press the help key, systems such as MSDN serving millions of developers, you, worldwide, and many, many more systems.

Each one of these systems had grown organically on its own, but across all of them together we built up a tremendous amount of common expertise: expertise in understanding how and to what degree we should be investing in datacenters and networks in different places around the world, given geopolitical issues and environmental issues and a variety of other issues; expertise in anticipating how many physical machines our various services would actually need and where and when to deploy those machines, and how to cope with service interdependencies across datacenters and so on; expertise in understanding how to efficiently deploy software to these machines and how to measure, tune, and manage a broad and diverse portfolio of services; expertise in keeping the OS and apps up to date across these thousands of machines; expertise in understanding how to prepare for and cope with holiday peaks of demand, especially with products like Xbox Live and Zune.

All in all, over the years we’ve accumulated lots and lots of high-scale services expertise, but all that knowledge, technology and skill, tremendous and expensive as that asset is, wasn’t packaged in a form that could be leveraged by outside developers or in a form that could benefit our enterprise customers. It certainly wasn’t packaged in a form that might be helpful to you.

Also, at an industry level, we’d come to believe that with the externalization of IT extending all our enterprise systems to a world of users across the Web, this high-scale Internet services infrastructure is nothing less than a new tier in our industry’s computing architecture.

The first tier, of course, is our experience tier, the PC on your desk or the phone in your pocket. The scale of this first tier of computing is one, and it’s all about you.

The second tier is the enterprise tier, the back-end systems hosting our business infrastructure and our business solutions, and the scale of this tier is roughly the size of the enterprise, and to serve this tier is really the design center of today’s server architectures and systems management architectures and most major enterprise datacenters.

The third tier is this Web tier, externally facing systems serving your customers, your prospects, potentially everyone in the world. The scale of this third tier is the size of the Web, and this tier requires computation, storage, networking, and a broad set of high level services designed explicitly for scale with what appears to be infinite capacity, available on-demand, anywhere across the globe.

And so a few years ago, some of our best and brightest, Dave Cutler, Amitabh Srivastava, and an amazing founding team, embarked upon a mission to utilize our systems expertise to create an offering in this new Web tier, a platform for cloud computing to be used by Microsoft’s own developers, by Web developers, and enterprise developers alike.

Some months after we began to plan this new effort, Amazon launched a service called EC2, and I’d like to tip my hat to Jeff Bezos and Amazon for their innovation and for the fact that across the industry all of us are going to be standing on their shoulders as they’ve established some base level design patterns, architectural models and business models that we’ll all learn from and grow.

In the context of Microsoft with somewhat different and definitely broader objectives, Amitabh, Dave and their team have been working for a few years now on our own platform for computing in the cloud. It’s designed to be the foundation, the bedrock underneath all of Microsoft’s service offerings for consumers and business alike, and it’s designed to be ultimately the foundation for yours as well.

Announcing Windows Azure

And so I’d like to announce a new service in the cloud, Windows Azure. (Cheers, applause.) Windows Azure is a new Windows offering at the Web tier of computing. This represents a significant extension to our family of Windows computing platforms from Windows Vista and Windows Mobile at the experience tier, Windows Server at the enterprise tier, and now Windows Azure being our Web tier offering, what you might think of as Windows in the cloud.

Windows Azure is our lowest-level foundation for building and deploying a high-scale service, providing core capabilities such as virtualized computation, scalable storage in the form of blobs, tables and queues, and perhaps most importantly an automated service management system, a fabric controller that handles provisioning, geo-distribution, and the entire lifecycle of a cloud-based service.

You can think of Windows Azure as a new service-based operating environment specifically targeted for this new cloud design point, striking the best possible balance between two seemingly opposing goals.

First, we felt it was critical for Windows developers to be able to utilize existing skills and existing code, for the most part writing code and developing software that leverages things that you might already know. Most of you, of course, would expect to be able to use your existing tools and runtimes like Visual Studio and .NET Framework, and, of course, you can.

But in developing for something that we would brand Windows, you’d also expect a fundamentally open environment for your innovation. You’d expect a world of tools, languages, frameworks, and runtimes, some from us, some from you, some from commercial developers, and some from a vibrant community on the Web. And so being Windows, that’s the type of familiar and developer friendly environment that we intend to foster and grow.

But at the same time, even with that familiarity, even in trying to create a familiar environment for developers, we need to help developers recognize that this cloud design point is something fundamentally new, and that there are ways that Windows Azure needs to be different than the kind of server environment that you might be used to.

Whether Windows, UNIX, Linux or the Mac, most of today’s systems and most of today’s apps are deeply, deeply rooted in a scale-up past, but the systems that we’re building right now for cloud-based computing are setting the stage for the next 50 years of systems, both outside and inside the enterprise.

And so we really need to begin laying the groundwork with new patterns and practices, new types of storage, model-based deployment, new ways of binding an app to the system, app model and app patterns designed fundamentally from the outset for a world of parallel computing and for a world of horizontal scale.

Today, here at PDC, for those of you in this audience, Windows Azure comes to life. As I said before, and as you’ll hear about more in a few minutes, Windows Azure is not software that you run on your own servers, but rather it’s a service that’s running on a vast number of machines housed in Microsoft’s own datacenters, first in the U.S. and soon worldwide. It’s being released today as a Community Technology Preview, with the initial features being only a fraction of where, as you’ll see from our roadmap, it will be going.

Like any of our other high scale Internet services, Windows Azure’s development and operational processes were designed from the outset for iteration and rapid improvement, incorporating your feedback and getting better and better in a very, very dynamic way.

As you’ll see today, we’re betting on Azure ourselves, and as the system scales out, we’ll be bringing more and more of our own key apps and key services onto Windows Azure because it will be our highest scale, highest availability, most economical, and most environmentally sensitive way of hosting services in the cloud.

The Azure Services Platform

A few of those key services, when taken together with Windows Azure itself, constitute a much larger Azure Services Platform. These higher-level developer services, which you can mix and match à la carte, provide functions that, as Windows developers, you’ll find quite valuable and familiar and useful.

Some of you may recall hearing about SQL Server Data Services, SSDS, an effort that we introduced earlier this year at our MIX conference. We’re planning to bring even more of the power of SQL Server to the cloud, including SQL Reporting Services and SQL Data Analysis Services; and as such, this offering is now called simply SQL Services, our database services in the cloud.

Our .NET services subsystem is a concrete implementation of some of the things that many of you are probably already familiar with that exist within the .NET Framework, such as workflow services, authorization services and identity federation services.

The Live services subsystem, which you’ll hear about tomorrow, provides an incredibly powerful bridge that extends Azure services outward to any given user’s PCs, phones, or devices through synchronized storage and synchronized apps.

SQL Services, .NET services, and Live services, just like Windows Azure, are all being included as a part of the Azure services platform CTP being made available to you right here at PDC.

As you are well aware, Dynamics CRM and SharePoint are two of our most capable and most extensible platforms for business content, collaboration, and rapid solutions. And later this morning, you’ll hear about how these two platforms also fill a very important role in the overall Azure Services Platform.


Cloud Computing – Greater Than The Sum Of Its Parts…

“When you combine the ever-growing power of devices and the increasing ubiquity of the Web, you come up with a sum that is greater than its parts. Software + Services is that greater sum. It all adds up to a commitment from Microsoft to deliver ever more compelling opportunities and solutions to consumer and business customers—and to our partners.”

Yesterday, Microsoft announced its “Cloud Computing” offering, Windows Azure. Azure is essentially a framework that will allow developers to build a variety of applications hosted live on the Internet. This brings a fundamental shift in today’s computing. Traditionally, software applications were stored on private ‘local’ servers. However, managing servers is a costly business: even though hardware costs may have come down in recent years, physical space, storage, licensing, administration and backup costs take up the lion’s share of supporting a modern-day computing environment.

Microsoft and other vendors, such as Amazon, Google and Salesforce.com, believe consumers and businesses will want to store far more of their data on the servers in their “clouds” of giant data centres around the world, so that it can be accessed any time, any place and from any device.

Microsoft’s offering is somewhat different to its competitors’, in that Microsoft believes that accessing your data in the cloud requires more than just a web browser: a hybrid model of “Software + Services”. Essentially, this means that you still use some kind of desktop client to manipulate the data stored up in the cloud.
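To make the hybrid idea concrete, here is a minimal sketch of what a “Software + Services” client could look like. Everything in it is hypothetical (the endpoint, the class, the sync protocol); it only illustrates the pattern of rich local editing backed by a cloud sync.

```python
import json
from urllib import request

API_BASE = "https://example.invalid/api"  # hypothetical cloud endpoint


class HybridNotes:
    """Toy 'Software + Services' client: rich local edits, synced to the cloud."""

    def __init__(self):
        self.local = {}     # desktop-side state, fully usable offline
        self.dirty = set()  # keys changed since the last sync

    def edit(self, key, value):
        self.local[key] = value
        self.dirty.add(key)

    def sync(self, push=None):
        """Push dirty items to the service; 'push' is injectable for testing."""
        push = push or self._http_push
        for key in sorted(self.dirty):
            push(key, self.local[key])
        self.dirty.clear()

    def _http_push(self, key, value):
        # A real client would queue and retry when offline.
        body = json.dumps({"key": key, "value": value}).encode()
        req = request.Request(API_BASE + "/notes", data=body,
                              headers={"Content-Type": "application/json"})
        request.urlopen(req)
```

The point of the pattern is that the desktop side keeps full, offline-capable state, and the cloud only ever sees a stream of changes.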

This proposition of cloud computing sounds attractive to businesses for a number of reasons:

  1. The cost of Internet network bandwidth has significantly reduced, whilst at the same time penetration of broadband has significantly increased worldwide. This means you can access the Internet almost anywhere on earth.
  2. Outsourcing your hardware infrastructure saves businesses serious fixed costs, both in physical space and in hardware. Essentially, you can expense the running costs of your infrastructure. Previously, infrastructure costs were typically attributed to capital expenditure. Cloud Computing will make Finance Directors the world over very happy. Depreciation? What stinking depreciation?
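Point 2 can be illustrated with a toy cost model (all numbers hypothetical): an owned datacentre is a fixed cost sized for peak demand, while a cloud host bills only for capacity actually used.

```python
def on_prem_cost_per_year(peak_units, unit_capex, years):
    """Fixed cost: buy enough capacity for peak load, depreciated straight-line."""
    return peak_units * unit_capex / years


def cloud_cost_per_year(monthly_usage, unit_rate):
    """Variable cost: pay only for units actually consumed each month."""
    return sum(units * unit_rate for units in monthly_usage)


# Hypothetical workload: 10 units most of the year, one seasonal spike to 100.
usage = [10] * 11 + [100]
fixed = on_prem_cost_per_year(peak_units=100, unit_capex=1200, years=3)  # 40000.0
variable = cloud_cost_per_year(usage, unit_rate=50)                      # 10500
```

With a spiky workload like this, paying per use is far cheaper than provisioning for peak; with flat, near-peak utilisation the gap narrows, which is why the answer is workload-dependent.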

However, there are some big issues to consider too:

  1. Single point of failure. If the cloud hardware goes down, you lose your apps and data.
  2. How secure is the hosting?  Are your apps and data files safe from sabotage and espionage?
  3. Cultural concerns. For some businesses, “letting go” is going to be very hard. Businesses have looked after and managed their own data for years. Are CEOs willing to let their precious data be managed outside of their own data centres, despite the significant cost savings?

In response to point 3, I think the concern is easing. Many businesses already outsource many of their services; outsourcing the hardware is a natural progression of that process.

But what about the rest of us? Well, for consumers, there is the prospect of a future where much, if not all, of our data and many of our applications could be stored online “in the cloud”. Think about this for a moment. Imagine a world where our data follows us everywhere. Smaller computers, lighter applications, data synced across all of our Internet-aware devices?

“Over the past decade, the world we live in has been transformed by the Web. It connects us to nearly everything we do—be it social or economic. It holds the potential to make the real world smaller, more relevant, more digestible and more personal. At the same time, the PC has grown phenomenally in power, with rich applications unimaginable just a few years ago. What were documents and spreadsheets then are now digital photos, videos, music and movies. And as we edit, organise and store media, the PC has quietly moved from our desks to our laps to our mobile phones and entertainment centres—taking the Web with it each step of the way.”

Microsoft’s Software + Services model is perhaps the logical next step in the evolution of computing. It represents an industry shift toward a design approach that is neither exclusively software-centric nor browser-centric. By combining the best aspects of software with the best aspects of cloud-based services, Microsoft hopes to deliver more compelling solutions for consumers, developers and businesses. Microsoft envisions a world where rich, highly functional and elegant experiences extend from the PC, to the Web, to the devices we use every day.

“When you combine the ever-growing power of devices and the increasing ubiquity of the Web, you come up with a sum that is greater than its parts.”

Personally, I’m very excited about this computing shift. I’m *almost* ready to put my data in the cloud.

More information can be found at Microsoft’s Azure site and in this technical white paper. Azure’s terms of service can be found here.

Keeping Friday Night Clean with Gmail Goggles

 Gmail Soap

I’m not sure whether to continue laughing, or to be truly grateful to Google for an innovative new Gmail Labs feature which has just been launched, entitled Mail Goggles.

Google engineer Jon Perlow posts on the Gmail blog:

“Sometimes I send messages I shouldn’t send. Like the time I told that girl I had a crush on her over text message. Or the time I sent that late night email to my ex-girlfriend that we should get back together. Gmail can’t always prevent you from sending messages you might later regret, but today we’re launching a new Labs feature I wrote called Mail Goggles which may help.

When you enable Mail Goggles, it will check that you’re really sure you want to send that late night Friday email. And what better way to check than by making you solve a few simple math problems after you click send to verify you’re in the right state of mind?

By default, Mail Goggles is only active late at night on the weekend, as that is the time you’re most likely to need it. Once enabled, you can adjust when it’s active in the General settings. Hopefully Mail Goggles will prevent many of you out there from sending messages you wish you hadn’t. Like that late night memo — I mean mission statement — to the entire firm.”

I guess we have all sent emails over the years when we shouldn’t have. Some fuelled by alcohol, some fuelled by anger. I do think that for many people this app will be truly useful, though I’m still undecided whether I like my email client controlling yet another part of the way I use my mail. I already have rules, spam and content filtering. Can I no longer be trusted to send emails after a few beers, late at night? Probably not.
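As a thought experiment, the gating logic Perlow describes fits in a few lines. This is my own toy sketch, not Google’s code; the active window and the problem format are assumptions:

```python
import random
from datetime import datetime


def goggles_active(now, start_hour=22, end_hour=4):
    """True during the late-night weekend window (wraps past midnight)."""
    late = now.hour >= start_hour or now.hour < end_hour
    # Friday (4) and Saturday (5) nights, plus the small hours of Sunday
    weekend = now.weekday() in (4, 5) or (now.weekday() == 6 and now.hour < end_hour)
    return late and weekend


def make_problems(n=3, rng=None):
    """n simple multiplication questions as (question, answer) pairs."""
    rng = rng or random.Random()
    return [(f"{a} x {b}", a * b)
            for a, b in ((rng.randint(2, 12), rng.randint(2, 12)) for _ in range(n))]


def may_send(now, answer_fn, rng=None):
    """Allow the send unless the window is active and any answer is wrong."""
    if not goggles_active(now):
        return True
    return all(answer_fn(q) == a for q, a in make_problems(rng=rng))
```

Outside the window every send goes straight through; inside it, one wrong answer blocks the message, which is the whole trick.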

Mail Goggles can be enabled in the Settings section of your Gmail.



Steve Ballmer’s Keynote in London

Microsoft CEO Steve Ballmer delivered the keynote speech at his company’s London conference “Technologies to Change Your Business: How Customers Are Implementing Tomorrow’s Strategies Today”.
CIO editor, Martin Veitch interviewed Ballmer directly after his keynote.

Veitch: Given the massive investment that corporates are considering, what are the key factors you believe are now going to drive us into Cloud Computing? Why should we entrust our cloud provision to Microsoft, over other competitors such as Google?

Ballmer: Well, let me take it in a variety of ways. First of all, anytime there is a major disruption you want to make sure you take advantage of it. The book ‘The Innovator’s Dilemma’ says “You can’t as a company that’s established miss the next major revolution”.

So we are embracing Software + Services, Cloud Computing as hard as anybody. By the time we finish our Professional Developers Conference this month, I think you’ll have to say that there is nobody out there with as wide a range of Cloud Computing services as Microsoft, including, dare I say it, Google – which has a great search product but, at the end of the day, doesn’t really have much for Enterprise email, productivity, collaboration. They are trying. They are coming to the game. But they are not really there yet.
Even though we are driving disruption, our job has got to be to also give you a clean and straightforward path forward. So you are going to want the PCs that you own, you are going to want to be able to apply the licences that you already own.

I think we have, and our prices reflect an ability to let you get to the disruptive point easily, from the place you are now financially.

Veitch: Steve, I guess the $64,000 question from a lot of people’s point of view is, is there going to be an Office for the Web, something that really competes head on with Google Docs, Google Apps?
Ballmer: Well, those are not very popular products! I hope that we are not competing head on with those! I hope we actually compete head on with Microsoft Office. If you take a look at it, Google Docs and Spreadsheets have relatively low usage and have not grown over the last six months or so.
There’s a reason. I think what people want is something as rich as Microsoft Office, something that you can ‘click and run’ if you are not at your own desk. Something that is compatible, document-wise, with Microsoft Office and something that offers the kind of joint editing capabilities that are nice in Google Docs and Spreadsheets. Will Microsoft Office offer that? Yes! Stand by for details in the next month.

Veitch: So, in the backend of Microsoft R&D, are there people beavering away at versions of Word, PowerPoint, Excel, etc, that are purely web based? Or, is it always going to be this hybrid?
Ballmer: What does it mean to be purely Web based? Do we want them to be only as powerful as ‘runs in a browser’? No. We want software that is more powerful than what runs in a browser. Does that mean we will not have some neat stuff that does run in the browser? No.

We think you’ll actually want the full power of Word, Excel and PowerPoint – and you’ll want to be able to get that simply. But, if you just happen to be in an Internet cafe kiosk and you want to do some light editing, perhaps we need to have a way to support you in that as well, inside the browser. And for today, that’s going to have to be all the detail I share. Otherwise, we have no drum roll announcement coming up here in a month!

Veitch: There are a lot of different views on what the ‘cloud’ is going to look like. Will it be a data centre that you have and own yourself? Will it belong to Amazon or some other organisation? Maybe you could even franchise it, and work with rivals or peers and operate a data centre in that way. What do you think it will look like? Which slice of the pie will be the biggest?

Ballmer: I think before we are done, the answer is ‘yes’: all of those models will need to flourish. I think it would be nuts for me to say that we are going to run all of the world’s data centres. I don’t think that’s practical.
But what we need to do is build a service that we start running, and have a model for how it can also be implemented and hosted by corporations for themselves, or by other partners.
The service must be a service. If it’s not in our data centre, if it’s in somebody else’s, you’ll still want it updated in real time, dynamically. You don’t want it to be like today’s outsourced model – where the outsourcer winds up locked in, and has to embrace the past more than the future.

So, we need to design ‘a service for services’, if you will. That’s kind of the way we are attacking the challenge.
Now, Version 1 that we will announce this month, you’ll think about it as running in a Microsoft data centre, sort of like the Amazon model. And yet we know, and we’ve talked already with corporations and partners about, going beyond it.

That’s why the symmetry between the server and the cloud is important. Because if we bring the cloud features back into the server platform, it’s also possible for any corporation to then run an instance of its own similar services.

Veitch: Now, is this going to be the Microsoft data centre that we’ll be talking to?

Ballmer: On V1 that will be the only alternative, that’s right.

Veitch: Are you going to build here [UK] as well?

Ballmer: In V1, our data centre will be the only alternative, as we build our data centres out. By V2 or V3, whether it’s our data centre or somebody else’s, we know we have to have data centres in many, many countries around the globe. Certainly, in this big country we know we need a data centre – whether we run it, or a partner runs it.

Veitch: Why has Microsoft developed Zune?

Ballmer: At the end of the day, one of the big trends is that all content is going digital. And if we don’t have the software and services that are useful, helpful and valuable for the consumption of music and video, we are sort of not really a player.

Now, we built the Zune hardware with the Zune software – and what you’ll see more and more over time is that the Zune software will also be ported to, and be more important on, not just the hardware but the PC, Windows Mobile devices, etc.

Veitch: It seems to me to be a tricky one because Apple is out there, and they have a pretty good product – but also they have this kind of cult following of people who are just going to buy, because it’s Apple. That must be a frustrating thing to compete against.

Ballmer: They may have a cult following in the music business, and we’ve got about 97 percent of PC users using our stuff. 97 percent may not constitute a cult! But I wouldn’t trade that for a cult!

[Update] This interview has been picked up by CIO Magazine.

Frank Gillett talks Cloud Envy

Frank Gillett of Forrester speaks about the cloud envy of various companies who jump on the cloud computing bandwagon by rebranding existing services. Whoa! New buzzwords here: “Cloud Envy”, “Cloud Spray” and “Cloud Washing”.

Is it me, or is Cloud Computing just getting too confusing? Especially, as Steve notes, with all of the ‘xxxxx as a service’ platforms taking shape. Software and hardware vendors are going to have to do a good job of making this ‘white fluffy’ stuff easier to understand. Though perhaps Cloud Computing is nothing but smoke and mirrors? Old services being rebranded with new “fashion” labels, as Larry Ellison points out.

Does anyone know of a simple guide to Cloud Computing? Perhaps the chaps at Common Craft could create a great video?