Ray Ozzie’s PDC2008 Keynote Word Cloud


Ray Ozzie’s Word Cloud from Microsoft’s PDC 2008. I’ve also added his keynote transcript below. It’s a very interesting read and gives us a glimpse of where today’s computing is headed.

Today, we’re in the early days of a transformation towards services across the industry, a change that’s being catalyzed by a confluence of factors, by cheap computing and cheap storage, by the increasing ubiquity of high bandwidth connectivity to the Internet, by an explosion in PC innovation from the high-end desktop to the low-end netbook, by an explosion in device innovation, Media Players, Smart Phones, net-connected devices of all shapes and sizes.

At PDC this week you’ll hear our take, Microsoft’s take on this revolution that’s happening in our industry’s software foundation, and how there’s new value to be had for users, for developers, for businesses, by deeply and genuinely combining the best of software with the best aspects of services.

Today and tomorrow morning, you’re going to hear us map out our all up software plus services strategy end-to-end. You’re going to see how this strategy is coming to life in our platforms, in our apps and in our tools. You’re going to see some great demos. You’ll get software to take home with you, and you’ll get activation codes for our new services.

So, I’ll be with you here for the next couple of days. Tomorrow, I’ll be up here and we’ll talk about the front-end of our computing experiences. We’ll focus on the innovations in our client OS and on tools and runtimes and services that enable a new generation of apps that span the PC, the phone, and the Web.

But today we’ll be focusing on our back-end innovation, platforms, infrastructure, and solutions that span from on-premises servers to services in the cloud to datacenters in the cloud.

Back-End Innovation: Platforms, Infrastructure and Solutions

You know, over the past couple of weeks I’ve read some pretty provocative pieces online taking the position that this cloud thing might be, in fact, vastly overblown. Some say: what’s the big deal, and what’s the difference between the cloud and how we’re now treating computing as a virtualized utility in most major enterprises?

And in a sense these concepts have been around for what seems like forever. The notion of utility computing was pioneered in the ’60s by service bureaus like Tymshare and GEISCO.

Virtualization was also pioneered in that same era by IBM, and its VM/370 took virtualization very, very broadly into the enterprise datacenter.

Today, that same virtualization technology is making a very, very strong comeback, driven by our trend toward consolidation of our PC-based servers. With racks of machines now hosting any number of Virtual Servers, computing is looking more and more like an economical shared utility, serving our enterprise users, apps and solutions.

But today, even in the best of our virtualized enterprise datacenters, most of our enterprise computing architectures have been designed for the purpose of serving and delivering inwardly facing solutions. That is, most of our systems and networks have been specifically built to target solutions for our employees, in some cases for our partners, hundreds or thousands or perhaps tens of thousands of concurrent users; desktops, datacenters, and the networks between them all scoped out, audited, controllable and controlled by an IT organization skilled in managing the enterprise as the scope of deployment.

But more and more the reach and scope that’s required of our systems has been greatly expanding. Almost every business, every organization, every school, every government is experiencing the externalization of IT, the way IT needs to engage with individuals and customers coming in from all across the Web.

These days, there’s a minimum expectation that customers have of all of our Web sites delivering product information, customer support, direct fulfillment from the Web.

But the bar is being raised as far richer forms of customer interaction are evolving very, very rapidly. Once on our Web sites, customers increasingly expect to interact with one another through community rating and ranking, through forums with reputation, through wikis and through blogs.

Companies are coming to realize that regardless of their industry, the Web has become a key demand generation mechanism, the first place customers look, every organization’s front door.

Now more than ever, the richness, reach and effectiveness of all aspects of a company’s Web presence has become critical to the overall health of the business.

And companies’ IT systems now have to deal with far more outside users, their customers, than the users that they serve within their own four walls.

As a result, one of the things that’s begun to happen over the course of the past few years is that the previously separate roles of software development and operations have become incredibly enmeshed and intertwined. IT pros and developers are now finding themselves with the need to work closely together and to jointly team and jointly learn how to design and build and operate systems that support millions or tens of millions of customers or potential customers spread across the globe, clicking through ads, doing transactions, talking with the company, and talking with each other.

For some customers’ Web-facing systems, the demand they see on their Web sites might come in peaks and valleys. It might shoot up during the holidays or new product introductions, or during product recalls, or when good things or bad things are going on in the blogosphere.

And so today, at great expense many companies tend to add ample spare capacity for each of the apps for which traffic must scale, more floor space, more hardware, more power, more cooling, more experts on networks, more operations personnel.
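[A rough, back-of-the-envelope illustration of that over-provisioning cost, with entirely made-up numbers, just to show the shape of the trade-off between owning peak capacity year-round and paying only for capacity when it’s actually needed.]

```python
# Hypothetical illustration of peak provisioning vs. on-demand capacity.
# All figures are invented for the example; only the shape of the
# comparison matters.

baseline_servers = 20          # servers needed for everyday traffic
peak_servers = 200             # servers needed during a holiday spike
peak_days_per_year = 15        # days per year the spike actually occurs
cost_per_server_day = 3.0      # fully loaded daily cost (power, cooling, ops)

# Owning enough capacity for the peak, all year round:
owned = peak_servers * 365 * cost_per_server_day

# Paying only for what is used, if capacity could be added on demand:
on_demand = (baseline_servers * (365 - peak_days_per_year)
             + peak_servers * peak_days_per_year) * cost_per_server_day

print(f"own the peak:    ${owned:,.0f} per year")
print(f"scale on demand: ${on_demand:,.0f} per year")
```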

And a company’s Web-facing challenges can go much further than that if the systems are housed in a single location and you have a variety of failures such as cable cuts, earthquakes, power shortages; you know, any of these things could cause critical continuity issues that could end up being huge for the business.

The answer, of course, is to have more than one datacenter, which helps with load balancing and redundancy. But doing this is extremely tough. It requires a good deal of human expertise in loosely coupled systems design, in data replication architectures, in networking architectures and more.

And having just two datacenters, while challenging, may not be enough. Far away customers experience network latency issues that can impact the experience or the effectiveness or the user satisfaction with the Web site.

So, to serve these global customers you may need to locate at least a few datacenters around the world, and this may mean dealing with a whole host of issues related to your data or the communications among the users on your Web sites that’s going on outside your borders: political issues, tax issues, a variety of issues related to sovereignty and so on.
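[A small, hypothetical sketch of the latency point: probing a few placeholder regional endpoints and sending the user to whichever responds fastest. The hostnames are deliberately invalid placeholders, and a real deployment would rely on DNS geo-routing or anycast rather than client-side probing.]

```python
# Hypothetical latency-based routing across regions. The endpoints are
# placeholders; with unreachable hosts every probe simply returns infinity.

import socket
import time

REGIONS = {
    "us-west": "example-us.invalid",
    "europe":  "example-eu.invalid",
    "asia":    "example-asia.invalid",
}

def probe(host: str, port: int = 443, timeout: float = 1.0) -> float:
    """Return the TCP connect time to a host, or infinity if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")

latencies = {region: probe(host) for region, host in REGIONS.items()}
nearest = min(latencies, key=latencies.get)
print(f"routing user to {nearest}: {latencies}")
```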

And so, reflecting back on the question I asked earlier, for developers or IT professionals, is this cloud thing really any different from the things that we’ve known in the past? The answer is absolutely and resoundingly yes. Things are materially different when building systems designed to serve the world of the Web as compared with the systems designed to serve those living within a company’s own four walls.

And there’s a very significant reason why it might be beneficial to have access to a shared infrastructure designed explicitly to serve the world of the Web, one having plenty of excess capacity, providing kind of an overdraft protection for your Web site, one built and operated by someone having the IT expertise, the networking and security expertise, all kinds of expertise necessary for a service that spans the globe.

High-Scale Internet Services Infrastructure: A New Tier in Computing Architecture

A few years ago, as it happens, we at Microsoft embarked upon a detailed examination of our own Web-facing systems, systems serving hundreds of millions of customers worldwide using MSN, systems delivering updates to hundreds of millions of Windows users worldwide, systems that are visited by Office users every time they press the help key, systems such as MSDN serving millions of developers, you, worldwide, and many, many more systems.

Each one of these systems had grown organically on its own, but across all of them together we built up a tremendous amount of common expertise: expertise in understanding how and to what degree we should be investing in datacenters and networks in different places around the world, given geopolitical issues and environmental issues and a variety of other issues; expertise in anticipating how many physical machines our various services would actually need and where and when to deploy those machines, and how to cope with service interdependencies across datacenters and so on; expertise in understanding how to efficiently deploy software to these machines and how to measure, tune, and manage a broad and diverse portfolio of services; expertise in keeping the OS and apps up to date across these thousands of machines; expertise in understanding how to prepare for and cope with holiday peaks of demand, especially with products like Xbox Live and Zune.

All in all, over the years we’ve accumulated lots and lots of high-scale services expertise, but all that knowledge, technology and skill, tremendous and expensive as that asset is, wasn’t packaged in a form that could be leveraged by outside developers or in a form that could benefit our enterprise customers. It certainly wasn’t packaged in a form that might be helpful to you.

Also, at an industry level, we’d come to believe that with the externalization of IT, in extending all our enterprise systems to a world of users across the Web, this high-scale Internet services infrastructure is nothing less than a new tier in our industry’s computing architecture.

The first tier, of course, is our experience tier, the PC on your desk or the phone in your pocket. The scale of this first tier of computing is one, and it’s all about you.

The second tier is the enterprise tier, the back-end systems hosting our business infrastructure and our business solutions. The scale of this tier is roughly the size of the enterprise, and serving this tier is really the design center of today’s server architectures, systems management architectures, and most major enterprise datacenters.

The third tier is this Web tier, externally facing systems serving your customers, your prospects, potentially everyone in the world. The scale of this third tier is the size of the Web, and this tier requires computation, storage, networking, and a broad set of high level services designed explicitly for scale with what appears to be infinite capacity, available on-demand, anywhere across the globe.

And so a few years ago, some of our best and brightest, Dave Cutler, Amitabh Srivastava, and an amazing founding team, embarked upon a mission to utilize our systems expertise to create an offering in this new Web tier, a platform for cloud computing to be used by Microsoft’s own developers, by Web developers, and enterprise developers alike.

Some months after we began to plan this new effort, Amazon launched a service called EC2, and I’d like to tip my hat to Jeff Bezos and Amazon for their innovation and for the fact that across the industry all of us are going to be standing on their shoulders as they’ve established some base level design patterns, architectural models and business models that we’ll all learn from and grow.

In the context of Microsoft with somewhat different and definitely broader objectives, Amitabh, Dave and their team have been working for a few years now on our own platform for computing in the cloud. It’s designed to be the foundation, the bedrock underneath all of Microsoft’s service offerings for consumers and business alike, and it’s designed to be ultimately the foundation for yours as well.

Announcing Windows Azure

And so I’d like to announce a new service in the cloud, Windows Azure. (Cheers, applause.) Windows Azure is a new Windows offering at the Web tier of computing. This represents a significant extension to our family of Windows computing platforms from Windows Vista and Windows Mobile at the experience tier, Windows Server at the enterprise tier, and now Windows Azure being our Web tier offering, what you might think of as Windows in the cloud.

Windows Azure is our lowest level foundation for building and deploying a high scale service, providing core capabilities such as virtualized computation, scalable storage in the form of blobs, tables and streams, and perhaps most importantly an automated service management system, a fabric controller that handles provisioning, geo-distribution, and the entire lifecycle of a cloud-based service.
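[The storage primitives are easier to picture with a toy model. The sketch below is not the Windows Azure API; it is a minimal in-memory illustration of the difference between a blob store, which maps a name to opaque bytes, and a table store, which holds schemaless entities addressed by a partition key and a row key.]

```python
# Toy, in-memory sketch of two cloud storage shapes -- not the Azure API.
# Blob store: a flat namespace mapping (container, name) to opaque bytes.
# Table store: schemaless entities addressed by (partition key, row key),
# where the partition key is the natural unit of scale-out.

class BlobStore:
    def __init__(self):
        self._blobs = {}                        # (container, name) -> bytes

    def put(self, container: str, name: str, data: bytes) -> None:
        self._blobs[(container, name)] = data

    def get(self, container: str, name: str) -> bytes:
        return self._blobs[(container, name)]


class TableStore:
    def __init__(self):
        self._partitions = {}                   # (table, partition) -> {row key -> entity}

    def insert(self, table: str, partition_key: str, row_key: str, entity: dict) -> None:
        self._partitions.setdefault((table, partition_key), {})[row_key] = entity

    def query(self, table: str, partition_key: str) -> dict:
        # Queries within one partition are cheap; cross-partition scans are not.
        return self._partitions.get((table, partition_key), {})


blobs = BlobStore()
blobs.put("photos", "holiday.jpg", b"\xff\xd8...")

tables = TableStore()
tables.insert("orders", "customer-42", "2008-10-27-001", {"sku": "zune-120", "qty": 1})
print(tables.query("orders", "customer-42"))
```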

You can think of Windows Azure as a new service-based operating environment specifically targeted for this new cloud design point, striking the best possible balance between two seemingly opposing goals.

First, we felt it was critical for Windows developers to be able to utilize existing skills and existing code, for the most part writing code and developing software that leverages things that you might already know. Most of you, of course, would expect to be able to use your existing tools and runtimes like Visual Studio and .NET Framework, and, of course, you can.

But in developing for something that we would brand Windows, you’d also expect a fundamentally open environment for your innovation. You’d expect a world of tools, languages, frameworks, and runtimes, some from us, some from you, some from commercial developers, and some from a vibrant community on the Web. And so being Windows, that’s the type of familiar and developer friendly environment that we intend to foster and grow.

But at the same time, even with that familiarity, even in trying to create a familiar environment for developers, we need to help developers recognize that this cloud design point is something fundamentally new, and that there are ways that Windows Azure needs to be different than the kind of server environment that you might be used to.

Whether Windows, UNIX, Linux or the Mac, most of today’s systems and most of today’s apps are deeply, deeply rooted in a scale-up past, but the systems that we’re building right now for cloud-based computing are setting the stage for the next 50 years of systems, both outside and inside the enterprise.

And so we really need to begin laying the groundwork with new patterns and practices, new types of storage, model-based deployment, new ways of binding an app to the system, app model and app patterns designed fundamentally from the outset for a world of parallel computing and for a world of horizontal scale.
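[One of those patterns for horizontal scale can be shown in a few lines: stateless workers draining a shared queue, so that adding capacity simply means starting more identical workers. This is a generic sketch, not Azure’s worker model.]

```python
# Minimal sketch of a horizontally scalable pattern: stateless workers
# pulling from a shared queue. Because workers keep no state between items,
# any number of them can run in parallel; the queue is the only coordination point.

import queue
import threading

work_queue: "queue.Queue[str]" = queue.Queue()

def worker(worker_id: int) -> None:
    while True:
        try:
            item = work_queue.get(timeout=1)   # pull the next unit of work
        except queue.Empty:
            return                             # nothing left; this instance can go away
        print(f"worker {worker_id} processed {item}")
        work_queue.task_done()

for i in range(100):
    work_queue.put(f"message-{i}")

# "Scaling out" here is just starting more identical workers.
threads = [threading.Thread(target=worker, args=(n,)) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```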

Today, here at PDC, for those of you in this audience, Windows Azure comes to life. As I said before, and as you’ll hear about more in a few minutes, Windows Azure is not software that you run on your own servers but rather it’s a service that’s running on a vast number of machines housed in Microsoft’s own datacenters, first in the U.S. and soon worldwide. It’s being released today as a Community Technology Preview, with the initial features being only a fraction of where you’ll see from our roadmap that it’s headed.

Like any of our other high scale Internet services, Windows Azure’s development and operational processes were designed from the outset for iteration and rapid improvement, incorporating your feedback and getting better and better in a very, very dynamic way.

As you’ll see today, we’re betting on Azure ourselves, and as the system scales out, we’ll be bringing more and more of our own key apps and key services onto Windows Azure because it will be our highest scale, highest availability, most economical, and most environmentally sensitive way of hosting services in the cloud.

The Azure Services Platform

A few of those key services, when taken together with Windows Azure itself, constitute a much larger Azure Services Platform. These higher level developer services, which you can mix and match à la carte, provide functions that as Windows developers you’ll find quite valuable and familiar and useful.

Some of you may recall hearing about SQL Server Data Services, SSDS, an effort that we introduced earlier this year at our MIX conference. We’re planning to bring even more of the power of SQL Server to the cloud, including SQL Reporting Services and SQL Data Analysis Services; and as such, this offering is now called simply SQL Services, our database services in the cloud.

Our .NET services subsystem is a concrete implementation of some of the things that many of you are probably already familiar with that exist within the .NET Framework, such as workflow services, authorization services and identity federation services.

The Live services subsystem, which you’ll hear about tomorrow, provides an incredibly powerful bridge that extends Azure services outward to any given user’s PCs, phones, or devices through synchronized storage and synchronized apps.

SQL Services, .NET services, and Live services, just like Windows Azure, are all being included as a part of the Azure services platform CTP being made available to you right here at PDC.

As you are well aware, Dynamics CRM and SharePoint are two of our most capable and most extensible platforms for business content, collaboration, and rapid solutions. And later this morning, you’ll hear about how these two platforms also fill a very important role in the overall Azure Services Platform.

