Ray Ozzie’s PDC2008 Keynote Word Cloud


Ray Ozzie’s word cloud from Microsoft’s PDC 2008. I’ve also added his keynote transcript below – a very interesting read that gives us a glimpse of where today’s computing is headed.

Today, we’re in the early days of a transformation towards services across the industry, a change that’s being catalyzed by a confluence of factors, by cheap computing and cheap storage, by the increasing ubiquity of high bandwidth connectivity to the Internet, by an explosion in PC innovation from the high-end desktop to the low-end netbook, by an explosion in device innovation, Media Players, Smart Phones, net-connected devices of all shapes and sizes.

At PDC this week you’ll hear our take, Microsoft’s take on this revolution that’s happening in our industry’s software foundation, and how there’s new value to be had for users, for developers, for businesses, by deeply and genuinely combining the best of software with the best aspects of services.

Today and tomorrow morning, you’re going to hear us map out our all up software plus services strategy end-to-end. You’re going to see how this strategy is coming to life in our platforms, in our apps and in our tools. You’re going to see some great demos. You’ll get software to take home with you, and you’ll get activation codes for our new services.

So, I’ll be with you here for the next couple of days. Tomorrow, I’ll be up here and we’ll talk about the front-end of our computing experiences. We’ll focus on the innovations in our client OS and on tools and runtimes and services that enable a new generation of apps that span the PC, the phone, and the Web.

But today we’ll be focusing on our back-end innovation, platforms, infrastructure, and solutions that span from on-premises servers to services in the cloud to datacenters in the cloud.

Back-End Innovation: Platforms, Infrastructure and Solutions

You know, over the past couple of weeks I’ve read some pretty provocative pieces online taking the position that this cloud thing might be, in fact, vastly overblown. Some say: what’s the big deal, and what’s the difference between the cloud and how we’re now treating computing as a virtualized utility in most major enterprises?

And in a sense these concepts have been around for what seems like forever. The notion of utility computing was pioneered in the ’60s by service bureaus like Tymshare and GEISCO.

Virtualization was also pioneered in that same era by IBM, whose VM/370 took virtualization very, very broadly into the enterprise datacenter.

Today, that same virtualization technology is making a very, very strong comeback, driven by our trend toward consolidation of our PC-based servers. With racks of machines now hosting any number of Virtual Servers, computing is looking more and more like an economical shared utility, serving our enterprise users, apps and solutions.

But today, even in the best of our virtualized enterprise datacenters, most of our enterprise computing architectures have been designed for the purpose of serving and delivering inwardly facing solutions. That is, most of our systems and networks have been specifically built to target solutions for our employees, in some cases for our partners, hundreds or thousands or perhaps tens of thousands of concurrent users; desktops, datacenters, and the networks between them all scoped out, audited, controllable and controlled by an IT organization skilled in managing the enterprise as the scope of deployment.

But more and more the reach and scope that’s required of our systems has been greatly expanding. Almost every business, every organization, every school, every government is experiencing the externalization of IT, the way IT needs to engage with individuals and customers coming in from all across the Web.

These days, there’s a minimum expectation that customers have of all of our Web sites delivering product information, customer support, direct fulfillment from the Web.

But the bar is being raised as far richer forms of customer interaction are evolving very, very rapidly. Once on our Web sites, customers increasingly expect to interact with one another through community rating and ranking, through forums with reputation, through wikis and through blogs.

Companies are coming to realize that regardless of their industry, the Web has become a key demand generation mechanism, the first place customers look, every organization’s front door.

Now more than ever, the richness, reach and effectiveness of all aspects of a company’s Web presence has become critical to the overall health of the business.

And companies’ IT systems now have to deal with far more outside users, their customers, than the users that they serve within their own four walls.

As a result, one of the things that’s begun to happen over the course of the past few years is that the previously separate roles of software development and operations have become incredibly enmeshed and intertwined. IT pros and developers are now finding themselves with the need to work closely together and to jointly team and jointly learn how to design and build and operate systems that support millions or tens of millions of customers or potential customers spread across the globe, clicking through ads, doing transactions, talking with the company, and talking with each other.

For some customers’ Web-facing systems, the demand they see might come in peaks and valleys. It might shoot up during the holidays or new product introductions or during product recalls, or when good things or bad things are going on in the blogosphere.

And so today, at great expense many companies tend to add ample spare capacity for each of the apps for which traffic must scale, more floor space, more hardware, more power, more cooling, more experts on networks, more operations personnel.

And a company’s Web-facing challenges can go much further than that if the systems are housed in a single location and you have a variety of failures such as cable cuts, earthquakes, power shortages; you know, any of these things could cause critical continuity issues that could end up being huge for the business.

The answer, of course, is to have more than one datacenter, which helps with load balancing and redundancy. But doing this is extremely tough. It requires a good deal of human expertise in loosely coupled systems design, in data replication architectures, in networking architectures and more.

And having just two datacenters, while challenging, may not be enough. Far away customers experience network latency issues that can impact the experience or the effectiveness or the user satisfaction with the Web site.

So, to serve these global customers you may need to locate at least a few datacenters around the world, and this may mean dealing with a whole host of issues related to your data or the communications among the users of your Web sites going on outside your borders: political issues, tax issues, a variety of issues related to sovereignty and so on.

And so reflecting back on the question I asked earlier for developers or IT professionals, is this cloud thing really any different than the things that we’ve known in the past, the answer is absolutely and resoundingly yes. Things are materially different when building systems designed to serve the world of the Web as compared with the systems designed to serve those living within a company’s own four walls.

And there’s a very significant reason why it might be beneficial to have access to a shared infrastructure designed explicitly to serve the world of the Web, one having plenty of excess capacity, providing kind of an overdraft protection for your Web site, one built and operated by someone having the IT expertise, the networking and security expertise, all kinds of expertise necessary for a service that spans the globe.

High-Scale Internet Services Infrastructure: A New Tier in Computing Architecture

A few years ago, as it happens, we at Microsoft embarked upon a detailed examination of our own Web-facing systems, systems serving hundreds of millions of customers worldwide using MSN, systems delivering updates to hundreds of millions of Windows users worldwide, systems that are visited by Office users every time they press the help key, systems such as MSDN serving millions of developers, you, worldwide, and many, many more systems.

Each one of these systems had grown organically on its own, but across all of them together we built up a tremendous amount of common expertise: expertise in understanding how and to what degree we should be investing in datacenters and networks in different places around the world, given geopolitical issues and environmental issues and a variety of other issues; expertise in anticipating how many physical machines our various services would actually need and where and when to deploy those machines, and how to cope with service interdependencies across datacenters and so on; expertise in understanding how to efficiently deploy software to these machines and how to measure, tune, and manage a broad and diverse portfolio of services; expertise in keeping the OS and apps up to date across these thousands of machines; expertise in understanding how to prepare for and cope with holiday peaks of demand, especially with products like Xbox Live and Zune.

All in all, over the years we’ve accumulated lots and lots of high-scale services expertise, but all that knowledge, technology and skill, tremendous and expensive as that asset is, wasn’t packaged in a form that could be leveraged by outside developers or in a form that could benefit our enterprise customers. It certainly wasn’t packaged in a form that might be helpful to you.

Also, at an industry level, we’d come to believe that with the externalization of IT extending all our enterprise systems to a world of users across the Web, this high-scale Internet services infrastructure is nothing less than a new tier in our industry’s computing architecture.

The first tier, of course, is our experience tier, the PC on your desk or the phone in your pocket. The scale of this first tier of computing is one, and it’s all about you.

The second tier is the enterprise tier, the back-end systems hosting our business infrastructure and our business solutions, and the scale of this tier is roughly the size of the enterprise, and to serve this tier is really the design center of today’s server architectures and systems management architectures and most major enterprise datacenters.

The third tier is this Web tier, externally facing systems serving your customers, your prospects, potentially everyone in the world. The scale of this third tier is the size of the Web, and this tier requires computation, storage, networking, and a broad set of high level services designed explicitly for scale with what appears to be infinite capacity, available on-demand, anywhere across the globe.

And so a few years ago, some of our best and brightest, Dave Cutler, Amitabh Srivastava, and an amazing founding team, embarked upon a mission to utilize our systems expertise to create an offering in this new Web tier, a platform for cloud computing to be used by Microsoft’s own developers, by Web developers, and enterprise developers alike.

Some months after we began to plan this new effort, Amazon launched a service called EC2, and I’d like to tip my hat to Jeff Bezos and Amazon for their innovation and for the fact that across the industry all of us are going to be standing on their shoulders as they’ve established some base level design patterns, architectural models and business models that we’ll all learn from and grow.

In the context of Microsoft with somewhat different and definitely broader objectives, Amitabh, Dave and their team have been working for a few years now on our own platform for computing in the cloud. It’s designed to be the foundation, the bedrock underneath all of Microsoft’s service offerings for consumers and business alike, and it’s designed to be ultimately the foundation for yours as well.

Announcing Windows Azure

And so I’d like to announce a new service in the cloud, Windows Azure. (Cheers, applause.) Windows Azure is a new Windows offering at the Web tier of computing. This represents a significant extension to our family of Windows computing platforms from Windows Vista and Windows Mobile at the experience tier, Windows Server at the enterprise tier, and now Windows Azure being our Web tier offering, what you might think of as Windows in the cloud.

Windows Azure is our lowest level foundation for building and deploying a high-scale service, providing core capabilities such as virtualized computation, scalable storage in the form of blobs, tables and queues, and perhaps most importantly an automated service management system, a fabric controller that handles provisioning, geo-distribution, and the entire lifecycle of a cloud-based service.

You can think of Windows Azure as a new service-based operating environment specifically targeted for this new cloud design point, striking the best possible balance between two seemingly opposing goals.

First, we felt it was critical for Windows developers to be able to utilize existing skills and existing code, for the most part writing code and developing software that leverages things that you might already know. Most of you, of course, would expect to be able to use your existing tools and runtimes like Visual Studio and .NET Framework, and, of course, you can.

But in developing for something that we would brand Windows, you’d also expect a fundamentally open environment for your innovation. You’d expect a world of tools, languages, frameworks, and runtimes, some from us, some from you, some from commercial developers, and some from a vibrant community on the Web. And so being Windows, that’s the type of familiar and developer friendly environment that we intend to foster and grow.

But at the same time, even with that familiarity, even in trying to create a familiar environment for developers, we need to help developers recognize that this cloud design point is something fundamentally new, and that there are ways that Windows Azure needs to be different than the kind of server environment that you might be used to.

Whether Windows, UNIX, Linux or the Mac, most of today’s systems and most of today’s apps are deeply, deeply rooted in a scale-up past, but the systems that we’re building right now for cloud-based computing are setting the stage for the next 50 years of systems, both outside and inside the enterprise.

And so we really need to begin laying the groundwork with new patterns and practices, new types of storage, model-based deployment, new ways of binding an app to the system, app model and app patterns designed fundamentally from the outset for a world of parallel computing and for a world of horizontal scale.

Today, here at PDC, for those of you in this audience, Windows Azure comes to life. As I said before, and as you’ll hear about more in a few minutes, Windows Azure is not software that you run on your own servers but rather it’s a service that’s running on a vast number of machines housed in Microsoft’s own datacenters first in the U.S. and soon worldwide. It’s being released today as a Community Technology Preview with the initial features being only a fraction of where you’ll see from our roadmap that it will be going.

Like any of our other high scale Internet services, Windows Azure’s development and operational processes were designed from the outset for iteration and rapid improvement, incorporating your feedback and getting better and better in a very, very dynamic way.

As you’ll see today, we’re betting on Azure ourselves, and as the system scales out, we’ll be bringing more and more of our own key apps and key services onto Windows Azure because it will be our highest scale, highest availability, most economical, and most environmentally sensitive way of hosting services in the cloud.

The Azure Services Platform

A few of those key services, when taken together with Windows Azure itself, constitute a much larger Azure Services Platform. These higher level developer services, which you can mix and match à la carte, provide functions that, as Windows developers, you’ll find quite valuable, familiar and useful.

Some of you may recall hearing about SQL Server Data Services, SSDS, an effort that we introduced earlier this year at our MIX conference. We’re planning to bring even more of the power of SQL Server to the cloud, including SQL Reporting Services and SQL Data Analysis Services; and as such, this offering is now called simply SQL Services, our database services in the cloud.

Our .NET services subsystem is a concrete implementation of some of the things that many of you are probably already familiar with that exist within the .NET Framework, such as workflow services, authorization services and identity federation services.

The Live services subsystem, which you’ll hear about tomorrow, provides an incredibly powerful bridge that extends Azure services outward to any given user’s PCs, phones, or devices through synchronized storage and synchronized apps.

SQL Services, .NET services, and Live services, just like Windows Azure, are all being included as a part of the Azure services platform CTP being made available to you right here at PDC.

As you are well aware, Dynamics CRM and SharePoint are two of our most capable and most extensible platforms for business content, collaboration, and rapid solutions. And later this morning, you’ll hear about how these two platforms also fill a very important role in the overall Azure Services Platform.


Cloud Computing – Greater Than The Sum Of Its Parts…

“When you combine the ever-growing power of devices and the increasing ubiquity of the Web, you come up with a sum that is greater than its parts. Software + Services is that greater sum. It all adds up to a commitment from Microsoft to deliver ever more compelling opportunities and solutions to consumer and business customers—and to our partners.”

Yesterday, Microsoft announced its “Cloud Computing” offering – Windows Azure.  Azure is essentially a framework that will allow developers to build a variety of applications hosted live on the Internet. This represents a fundamental shift in today’s computing. Traditionally, software applications were stored on private ‘local’ servers. However, managing servers is a costly business. Even though hardware costs may have come down in recent years, physical space, storage, licensing, administration and backup costs still take up the lion’s share of supporting a modern day computing environment.

Microsoft and other vendors, such as Amazon, Google and SalesForce.com, believe consumers and businesses will want to store far more of their data on servers in their “clouds” of giant data centres around the world, so that it can be accessed anytime, any place and from any device.

Microsoft’s offering is somewhat different to its competitors’, in that Microsoft believes that accessing your data in the cloud requires more than just a web browser – a hybrid model of “Software + Services”.  Essentially, this means that you still use some kind of desktop client to manipulate the data stored up in the cloud.
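To make that hybrid model a little more concrete, here is a minimal sketch (in Python) of what a “Software + Services” style client could look like: a desktop app that keeps a local cache of your data and synchronises it with a service in the cloud when a connection is available. The service URL, the notes payload and the helper functions are all hypothetical illustrations of the pattern, not any particular Microsoft API.

```python
# A minimal sketch of the "Software + Services" pattern: a desktop client that
# works against a local cache and synchronises with a cloud-hosted service.
# The service URL and payload shape below are hypothetical illustrations.
import json
import pathlib

import requests

SERVICE_URL = "https://example-cloud-service.invalid/api/notes"  # hypothetical endpoint
CACHE_FILE = pathlib.Path("notes_cache.json")


def load_local_cache() -> list:
    """Return locally cached notes so the app still works offline."""
    if CACHE_FILE.exists():
        return json.loads(CACHE_FILE.read_text())
    return []


def save_local_cache(notes: list) -> None:
    CACHE_FILE.write_text(json.dumps(notes, indent=2))


def sync_with_cloud(notes: list) -> list:
    """Push local notes to the cloud and pull back the merged copy."""
    try:
        response = requests.post(SERVICE_URL, json={"notes": notes}, timeout=5)
        response.raise_for_status()
        merged = response.json()["notes"]
        save_local_cache(merged)  # the cloud copy becomes the new local truth
        return merged
    except requests.RequestException:
        return notes              # offline: keep working from the local cache


if __name__ == "__main__":
    notes = load_local_cache()
    notes.append({"text": "Draft blog post about Azure"})
    save_local_cache(notes)
    notes = sync_with_cloud(notes)
    print(f"{len(notes)} notes available locally and, when online, in the cloud")
```

The point of the sketch is simply that the rich client keeps working offline, while the cloud service provides the anywhere, any-device copy of the data.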

This proposition of cloud computing sounds attractive to businesses for a number of reasons:

  1. The cost of Internet bandwidth has fallen significantly, whilst at the same time broadband penetration has increased significantly worldwide. This means you can access the Internet almost anywhere on earth.
  2. Outsourcing your hardware infrastructure saves businesses serious fixed costs, both in physical space and in hardware. Essentially, you can expense the running costs of your infrastructure, whereas previously infrastructure costs were typically attributed to capital expenditure (a rough comparison follows below). Cloud Computing will make Finance Directors the world over very happy. Depreciation? What stinking depreciation?
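To see why the finance people get excited, here is a rough back-of-the-envelope comparison, sketched in Python. Every figure is invented purely for illustration; real numbers will vary wildly from business to business.

```python
# Back-of-the-envelope comparison of owning servers (capital expenditure,
# depreciated over several years) versus renting cloud capacity on demand
# (operating expenditure). All figures are invented for illustration only.
SERVER_PURCHASE_COST = 5_000          # per server (hypothetical)
SERVERS_NEEDED_FOR_PEAK = 20          # sized for the busiest week of the year
DEPRECIATION_YEARS = 4
HOSTING_OVERHEAD_PER_SERVER = 1_200   # power, cooling, floor space per year

CLOUD_COST_PER_SERVER_HOUR = 0.40     # hypothetical on-demand hourly rate
AVERAGE_SERVERS_ACTUALLY_USED = 6     # average load sits far below the peak
HOURS_PER_YEAR = 24 * 365

capex_per_year = (
    SERVER_PURCHASE_COST * SERVERS_NEEDED_FOR_PEAK / DEPRECIATION_YEARS
    + HOSTING_OVERHEAD_PER_SERVER * SERVERS_NEEDED_FOR_PEAK
)
opex_per_year = CLOUD_COST_PER_SERVER_HOUR * AVERAGE_SERVERS_ACTUALLY_USED * HOURS_PER_YEAR

print(f"Own the hardware (sized for peak): ~${capex_per_year:,.0f} per year")
print(f"Rent cloud capacity (pay for use): ~${opex_per_year:,.0f} per year")
```

With these made-up numbers the cloud option wins simply because you stop paying for peak capacity that sits idle most of the year – which is exactly the point about moving from capital to operating expenditure.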

However, there are some big issues to consider too:

  1. Single point of failure. If the cloud hardware goes down, you lose access to your apps and data.
  2. How secure is the hosting?  Are your apps and data files safe from sabotage and espionage?
  3. Cultural concerns. For some businesses, it is going to be very hard to “let go”. Businesses have looked after and managed their own data for years. Are CEOs willing to let their precious data be managed outside of their own data centres, despite the significant cost savings?

In response to point 3, I think the concern is easing. Many businesses already outsource many of their services. Outsourcing the hardware is a natural progression of that process.

But what about the rest of us? Well, for consumers, there is the prospect of a future where much, if not all, of our data and many of our applications could be stored online “in the cloud”. Think about this for a moment. Imagine a world where our data follows us everywhere: smaller computers, lighter applications, data synced across all of our Internet-aware devices.

“Over the past decade, the world we live in has been transformed by the Web.  It connects us to nearly everything we do—be it social or economic.  It holds the potential to make the real world smaller, more relevant, more digestible and more personal.  At the same time, the PC has grown phenomenally in power with rich applications unimaginable just a few years ago.  What were documents and spreadsheets then are now digital photos, videos, music and movies.  And as we edit, organise and store media, the PC has quietly moved from our desks to our laps to our mobile phones and entertainment centres—taking the Web with it each step of the way.”

Microsoft’s Software + Services model is perhaps the logical step in the evolution of computing.  It represents an industry shift toward a design approach that is neither exclusively software-centric nor browser-centric.  By combining the best aspects of software with the best aspects of cloud-based services, Microsoft hopes to deliver more compelling solutions for consumers, developers and businesses.  Microsoft envisions a world where rich, highly functional and elegant experiences extend from the PC, to the Web, to the devices we use every day.

“When you combine the ever-growing power of devices and the increasing ubiquity of the Web, you come up with a sum that is greater than its parts.”

Personally, I’m very excited about this computing shift. I’m *almost* ready to put my data in the cloud.

More information can be found at Microsoft’s Azure site and in this technical white paper. Azure’s terms of service can be found here.

Web 2.0 Expo Berlin: Better Media Plumbing for the Social Web


Stowe Boyd presented an interesting talk on how the web needs to find a better model to encapsulate discussion within social media.

Whilst there has been a lot of discussion around Web 2.0 – e.g. the rise of social networks – the foundations of social media seem relatively unchanged. Blogs are still pretty much stuck in a Web 1.0 timeframe. They are limited to a model of chronological posts with embedded comments and a variety of widgets in the margins that engage with other web communities, such as Digg and Del.icio.us.

Bloggers today ultimately still retain full control over content posted on their blogs. Readers can leave comments, but usually can’t edit or remove them. More often than not, the “blogger” gains an increased reputation (within the blogosphere) from the comments posted. However, what of the reputation of the person who left the comment in the first place?

Boyd also noted how the rise of RSS and RSS readers has meant that fewer and fewer people actually visit blog sites, which has the side effect of divorcing your readers from the comments. Boyd asked the audience how many people had their comments accessible within their RSS feeds; the response was minimal.

Boyd argued that the ability to recommend and share content through RSS actually created a further community of readers who were even more fragmented from the conversation. In other words, conversations regarding blog posts are occurring more and more in locations far removed from the blog post itself. Though the blogger may start the discussion with a blog post, the ‘social buzz’ of discussion may occur in various other communities, e.g. Digg, FriendFeed, Techmeme etc.

Boyd went on to state that “Flow” applications such as Twitter or the Facebook mini feed offer a possible replacement. He suggested that once you get used to these flow apps, it gets harder and harder to go back to blogs.

“If the community all move to a flow service, you don’t lose your friendships.”

If the current pace continues, will blogs be reduced to being just a publishing platform? While new commenting systems like Disqus and Intense Debate attempt to bridge this commenting gap, we are also seeing the arrival of video-based systems, such as Seesmic, that seem to offer a higher level of immediacy and simplicity.


Picture Credit: http://berlinblase.de

Boyd showed us his “extended desktop”. He uses a number of Flow applications to engage in social communities on the web: Snackr (an RSS feed aggregator), Twhirl (a Twitter desktop tool), FriendFeed and Flickr. These applications run every day and automatically update in the background.

Stowe’s “Web of Flow” is a social web where we continuously watch multiple streams of social interaction, live as they happen. Our eyes gaze over communities of conversation. We then exercise our choice to ‘dip in or out’ of the conversation as we see fit. Often, we may never even venture near a blog post.

[Bonus Videos] 

 

Hat Tip http://blog.whoiswho.de

Boyd continues his discussion regarding the emergence of Flow apps and their effects on social media.

BerlinBlase interviewed Stowe after his talk and asked him why email is broken.


Email is dead – Stowe Boyd from dotdean on Vimeo.

Web 2.0 Expo Berlin – Tim O’Reilly’s Keynote


Tim O’Reilly’s keynote at this week’s Web 2.0 Expo in Berlin brought a firm focus back to reality.  At recent conferences, the mood has been dampened by current economic conditions, and funding for web start-ups is vanishing rapidly for businesses without sustainable business plans. However, responding to critics suggesting that the Web 2.0 bubble was bursting, O’Reilly asked the rhetorical question, "Do you really think that we’re done yet with exploiting this huge change [of disruptive technologies]?"

O’Reilly suggested that the ‘web winners’ in the years ahead are those businesses that are involved in:

  • Cloud computing
  • Software as a Service applications like Google Apps
  • Open Source software
  • Companies delivering value added services to consumers or businesses
  • Breakthroughs in collective intelligence (harnessing the crowd)
  • Entrepreneurs who innovate and concentrate on delivering value

Using the example of the PC industry in the early 1990s, O’Reilly cited the early years when there were hundreds of PC manufacturers. However, through a ‘natural consolidation’ process, the number today is greatly reduced.  He concluded that the current economic problems will accelerate the consolidation process – businesses with robust business models will survive; those without will die.

O’Reilly advocated building businesses that deliver value, concluding with the strategy:

"Work on stuff that matters"

Citing the example of the Berlin Airlift and the innovative efforts required to achieve it, he stressed his point that:

"Great challenge = Great opportunity"

O’Reilly finished the keynote by urging the masses to use Web 2.0 innovations to address today’s important problems. The world doesn’t need another “me too” application; web applications that deliver “real value” are the ones that will lead the industry out of the doldrums.

Keynote Video and presentation slides follow below

Hat Tip http://media.vascellari.com

O’Reilly Radar



Interview with Tim O’Reilly at Web 2.0 Expo 2008 Berlin from DigiRedo on Vimeo.

Seth Godin’s Tribes – A Book Review


Tribes is the newest addition to Seth Godin’s ongoing work of easily readable ideas on "Changing The World". Godin uses a number of real world examples and short stories to underpin the ideas within Tribes.  There is nothing in the book that many of us don’t already know, at least on a subconscious level.  In certain situations, we must find the initiative to lead. Many of us lead tribes, even if we don’t always see it that way.

The book prods the reader into thinking about how we can all challenge the status quo, both in our everyday personal and professional lives, by taking the lead. Godin is a master at dusting off conventional concepts and presenting them in an enlightening and refreshing new way. He demonstrates the importance of not only leading a tribe of followers, but also nurturing the relationships within it. One of the most powerful aspects of the book is how it is written to speak directly to the reader.

Tribes is not written as a conventional book with chapters. It is written more in the style of a conversation, resulting from ideas and discussions on Godin’s blog. The book is essentially a collection of those thoughts, presented beautifully in print. It is a quick read and does inspire the reader to look at ways in which he or she can make a real difference and empower groups of people.

Controversially, Godin describes most people within organisations as "sheepwalkers": those who "have been raised to be obedient" and who are comfortable "with brain-dead jobs and enough fear to keep in line." For at least a few, leadership brings empowerment and opportunities to challenge traditional ways of doing things for the better.

There is a feeling that Godin is in fear – fear of a world without "everyday" leaders who continue to change things for the better. These everyday leaders are not big CEOs, but rather people like you and me. The book is indeed a call to action.
Godin cites five different reasons as to why people should look for everyday opportunities to lead:

1. "Everyone in an organisation, not just the boss is expected to lead".
2. [Today] "it’s easier than ever before to change things [and] individuals have more leverage than ever before", especially with tools such as Facebook and Twitter.
3. Individuals, and their organisations that "change things and create remarkable products and services" are rewarded in the marketplace.
4. Change is a catalyst and can empower each of us to do something truly remarkable. It is "engaging, thrilling, profitable and fun".
5. Finally, there is a "tribe" of other people waiting for a leader, "to connect them to one another and lead them where they want to go."

Godin states that great leaders "create movements by empowering the tribe to communicate". They establish the foundation for people to make connections, as opposed to commanding people to follow.  Powerful leaders connect members of a tribe through a common interest (e.g. by sharing a passionate goal) and a determination to create things that did not exist before.

Don’t be fooled into thinking Tribes is a technical manual or a practical step-by-step guide, because while it encourages you to "lead", it doesn’t go into specifics (which is a good thing). Godin challenges the reader to accept full responsibility for becoming a tribal leader.

"No one gives you permission or approval or a permit to lead, You can just do it. The only one who can say no is you."

Critics may argue that the book lacks "concrete data". However, long-time Seth Godin readers will understand that his books are a presentation of ideas – ideas that spread and win. Tribes is no different. The book could also be criticised for being too short; however, it is extremely well written and, in my opinion, the right size for the material it covers.

After reading the book, I was left thinking that Tribes was Volume 1 and that another book could well follow. For example, what effects are realised when tribes collaborate with other tribes to form a "Super Tribe"?  Or, what does the leader do when his role is challenged within the tribe?

It’s hard to escape the religious metaphors in the book. References to "heretics" and "fundamentalists" echo throughout. In centuries past, heretics were burned for their religious views. However, in Godin’s 21st-century world, heretics may just be the ones that save us from an unremarkable world.

Overall, Tribes is an inspiring read and well worth adding to your Seth Godin collection! Get your copy of Tribes from Amazon.

[UPDATE]

Seth was kind enough to answer my three quick questions below:

Q.  What inspired you to write Tribes?

A. I see a world where just about everyone is pushed to conform, to fit in, to do what we’re told. A workforce filled with sheepwalkers… at the same time, I see people desperately in search of leadership, eager to be connected and to matter. I was hoping to point those two things out and encourage people to take a breath and lead.

Q.  How does Seth Godin spend his day?

A. I write, answer email, bother people, notice things, and run my company, Squidoo.com and my closed online site, triiiibes.

Q.  For readers who haven’t read Tribes, can you explain the general themes of the book and why you think everyone should buy it?

A. The best thing to do is visit www.squidoo.com/tribesbook and see what other people had to say!


Gartner’s Top 10 Strategic Technologies for 2009

Hat Tip to Broadstuff

Jason Hiner from ZDNet blogs that, at Gartner’s 2008 Symposium, Gartner analysts Carl Claunch and Dave Cearley presented a list of the top 10 technologies that will provide important strategic advantages to IT over the next three years. They encourage IT leaders to keep these technologies in mind as they formulate budgets and long-term plans.

Claunch and Cearley delivered their list in the presentation “Top 10 Strategic Technology Areas for 2009” at the Orlando event. Here’s how they defined the “strategic technologies” that made the list:

“A strategic technology is one with the potential for significant impact on the enterprise in the next three years. Factors that denote significant impact include a high potential for disruption to IT or the business, the need for a major dollar investment, or the risk of being late to adopt. Companies should factor these technologies into their strategic planning process by asking key questions and making deliberate decisions about them during the next two years. Sometimes the decision will be to do nothing with a particular technology. In other cases it will be to continue investing in the technology at the current rate. In still other cases the decision may be to test/pilot or more aggressively adopt/deploy the technology.”

Gartner’s list follows below along with Hiner’s comments:

1. Virtualization

Gartner says: Server virtualization is already in process. Today, the two biggest opportunities in virtualization are in storage and desktops. Storage virtualization offers simplified access by pooling systems and can save big money with storage deduplication. Desktop virtualization allows users to have a portable personality across multiple systems, delivering a thick client experience with a thin client delivery model.

Hiner says: The biggest factor that could drive desktop virtualization will be the advent of cheap $100-$200 thin clients (nettops) based on Intel Atom processors. In terms of storage virtualization, deduplication — if effective — could be a huge money saver because every enterprise has tons of duplicate versions of files clogging up their file servers.
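For anyone curious what deduplication actually involves, here is a toy sketch of the principle: hash the content, store each unique copy once, and keep a list of references for every path that uses it. Real products work at the block level and with far more sophistication; the file names and contents below are invented.

```python
# Toy illustration of file-level deduplication: files with identical content
# hash to the same digest, so only one physical copy needs to be stored.
import hashlib
from collections import defaultdict

# Hypothetical file contents standing in for a real file-server scan.
files = {
    "reports/q3_final.doc": b"quarterly numbers...",
    "archive/q3_final_copy.doc": b"quarterly numbers...",  # exact duplicate
    "hr/handbook.pdf": b"employee handbook...",
}

store = {}                      # digest -> single stored copy of the content
references = defaultdict(list)  # digest -> every path that points at it

for path, content in files.items():
    digest = hashlib.sha256(content).hexdigest()
    store.setdefault(digest, content)   # store the bytes only once
    references[digest].append(path)

logical_bytes = sum(len(c) for c in files.values())
physical_bytes = sum(len(c) for c in store.values())
print(f"Logical data: {logical_bytes} bytes, physically stored: {physical_bytes} bytes")
for digest, paths in references.items():
    print(digest[:12], "->", paths)
```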

2. Cloud Computing

Gartner says: You need to be very careful about all of the hype, but you need to take it very seriously as well. They think 80% of Fortune 1000 companies will be using some form of cloud computing services by 2012. They encouraged IT leaders to consider the back-end infrastructure and policies of cloud providers and to carefully evaluate the development models.

Hiner says: Claunch and Cearley briefly mentioned the one reason why a lot of IT leaders will eventually adopt cloud computing: it can allow IT to move a significant chunk of money from capital expenditures to operating expenditures. That’s the story.

3. Servers: Beyond Blades

Gartner says: Blade servers introduced a shared computing fabric that allowed some recombination of components and some efficiencies. The fabric-based server of the future will treat memory, processors and I/O cards as components in a pool, combining and recombining them into particular arrangements to suit the needs of the server load.

Hiner says: This sounds terrific in principle because it’s about greater utilization of resources. But, how will this relate to virtualization, where the software layer is being abstracted in much the same way? Can the two work together to provide even more dynamic server resources? I also wonder about licensing, especially since this involves CPUs, which a lot of licensing is being tied to.

4. Web-Oriented Architectures

Gartner says: Expect Internet, Web and cloud-based concepts (such as SOA) to increasingly drive mainstream architectures and development models.

Hiner says: We’ve been hearing this for almost a decade now. I hope that the model is finally changing — it’s overdue — but as my ZDNet colleague Larry Dignan likes to say, “Hope is not a strategy.”

5. Enterprise Mashups

Gartner says: Mashups mix content from multiple sources by using feeds from public application programming interfaces (APIs). Enterprises are now investigating taking mashups from cool Web hobby to enterprise-class systems to augment their models for delivering and managing applications.

Hiner says: The best part about mashups is that they eliminate duplication of effort by allowing developers to componentize their code and then re-use it themselves and offer others the ability to use it as well. There need to be better tools for doing this, and then developers need to get in the habit of thinking about what they can turn into mashable components during the development process.
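As a tiny illustration of the mashup idea, the sketch below pulls entries from two feeds and merges them into one combined view. The feed URLs are placeholders and it assumes the third-party feedparser package is installed; it is a conceptual sketch, a long way from an enterprise-class mashup platform.

```python
# Minimal mashup sketch: pull items from two public RSS/Atom feeds and merge
# them into one combined view. The feed URLs are placeholders - swap in any
# feeds you actually use. Requires the third-party 'feedparser' package.
import feedparser

FEEDS = [
    "https://example.com/news.rss",       # placeholder feed URL
    "https://example.org/blog/atom.xml",  # placeholder feed URL
]


def combined_headlines(feed_urls):
    """Yield (source, title, link) tuples across all of the feeds."""
    for url in feed_urls:
        parsed = feedparser.parse(url)
        source = parsed.feed.get("title", url)
        for entry in parsed.entries:
            yield source, entry.get("title", "(untitled)"), entry.get("link", "")


if __name__ == "__main__":
    for source, title, link in combined_headlines(FEEDS):
        print(f"[{source}] {title} - {link}")
```

The same pattern – fetch from several public APIs, normalise, combine – is what enterprise mashup tools wrap up with governance, caching and access control.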

6. Specialized Systems

Gartner says: Specialized server appliances can save IT time because they are largely preconfigured, but they also are not as flexible and can’t be reused as easily. A new category called heterogeneous systems is emerging that offers mix-and-match hardware. Heterogeneous systems are prebuilt and supported by vendors, rather than custom-built by IT departments.

Hiner says: IT should allow experts to preconfigure systems as much as possible and whenever it makes sense. If heterogeneous systems can further commoditize servers then it’s a good thing because it will drive down costs and increase selection. Even better are virtualized appliances, which provide nearly all the benefits of appliances without the hardware drawbacks.

7. Social Software and Social Networking

Gartner says: Your organization is an entity in the broad Social Web. Get to know Facebook, Twitter, FriendFeed, LinkedIn and other social sites and applications. Listen to the language of social media, before starting to speak.

Hiner says: Beyond just looking to send out marketing messages via social networks, companies need to look at the ways social networking can allow them to better listen to customers and to empower employees to become better connected in their industry and specialty. But beware, social networking can become a time-sink and a productivity killer when not used in a disciplined way.

8. Unified Communications

Gartner says: Enterprises are realizing that they have multiple products and vendors performing the same communications functions, and that this redundancy creates additional expense, makes it more difficult for users to learn, and increases the complexity of integration. In the next three years the number of communications

Hiner says: What is the future of the good old business desk phone? Some companies such as Cisco see the desk phone becoming a video and data device. Others see the desk phone going away and mobile phones (with both a business number and a personal number) becoming the sole voice device for most business users.

9. Business Intelligence

Gartner says: Business intelligence (BI) is one of the most powerful things you can deliver to business decision makers. Even though  we’ve all been doing it for years, we’re not doing it very well because too much of the data is stuck in silos. Companies need to get serious and systematic about implementing BI and performance management solutions because they fuel smarter decisions and better results.

Hiner says: Companies now have lots of ways to collect data. The problem is that there aren’t as many good ways to dig into that data and quickly and easily turn it into actionable reports, graphs, and dashboards. That’s what business intelligence should be about — making the data easily accessible to the employees who need that data to make better decisions.
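As a trivial illustration of what "actionable" can mean, the sketch below rolls some invented transaction records up into a revenue-by-region summary. Real BI work is mostly about getting the data out of those silos in the first place; this only shows the final reporting step, with made-up data.

```python
# Toy illustration of turning raw records into a summary a decision maker can
# act on. The transaction data is invented purely for illustration.
from collections import defaultdict

transactions = [
    {"region": "EMEA", "product": "widgets", "revenue": 1200},
    {"region": "EMEA", "product": "gadgets", "revenue": 800},
    {"region": "Americas", "product": "widgets", "revenue": 2500},
    {"region": "APAC", "product": "widgets", "revenue": 400},
]

revenue_by_region = defaultdict(int)
for row in transactions:
    revenue_by_region[row["region"]] += row["revenue"]

print("Revenue by region:")
for region, revenue in sorted(revenue_by_region.items(), key=lambda kv: -kv[1]):
    print(f"  {region:<10} {revenue:>6}")
```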

10. Green IT

Gartner says: Consider potential regulations and have alternative plans for data center and capacity growth. Many are looking at energy efficiency or ‘green’ products simply for the practical advantages in energy savings. Some companies are emphasizing green activities as part of their social responsibility. A socially conscious CEO may have funds to support some IT changes that result in a greener company.

Hiner says: Green IT is here to stay, even in a difficult economic environment. Energy will be one of the pre-eminent public concerns of the next decade and energy conservation will be an important part of the discussion. IT departments need to act now to start measuring the energy consumption of IT infrastructure and looking for strategic opportunities to reduce it, before they are forced to act due to government intervention.

Run, Grow, Transform

Cearley encouraged the attendees to ask, “How will these technologies affect the way that you run the business, grow the business, and transform the business?” With that in mind, the two analysts closed with a sample action plan based on those three principles (see below).

[Image: sample action plan based on run, grow, transform]

[Bonus] David Cearley Discussing Top 10 Strategic Technologies at Gartner Symposium/ITxpo Orlando 2008

Gartner’s Press Release can be found here

Blogging Success in 90 Days

Picture Credit: Wisebread.com

Rohit has just posted a great article on how to reach blogging success in 90 days. Successful blogging takes a lot of hard work and time. However, Rohit provides some great tips for new bloggers on what you may want to focus on in your first 90 days of blogging:

DAYS 1 TO 15:

1. Find a good niche. Think hard about what you want to write about. It has to be something you are passionate about and interested in, otherwise it won’t work. The more specific you can get, the better. You can also broaden it later, but in the beginning you need to find a subject that you can own.

2. Choose a name and URL. This is a tough thing, but just as many companies these days do, you should let available URLs drive how you name your blog. If you can’t get the URL, don’t use the name. And make sure you plan to put your blog on a specific URL, whether you are using Typepad or Blogspot or any other service. Trust me on this, you’ll eventually wish you built your blog on your own URL, whether you think so today or not.

3. Grab a template and launch quickly.
The biggest paralysis new bloggers have is wanting to get their new blog just right. In the first few weeks of your blog, the most important thing is to find your voice – so forget about design and just launch it with a ready-made template. Chances are remote that search engines will list it that quickly, and you’ll have a few weeks to get it right.

4. Add Google Analytics. Google has a free tool called Google Analytics which gives you some great metrics on your blog all for free. It requires you to do a bit of tricky cutting and pasting to add certain code to your blog, but it is totally worth it to do it early so you’ll have metrics from the first days of your blog to compare to and see how far you have come.

5. Create an editorial calendar.
Some football coaches head into games scripting out their first 10 plays as part of their gameplan. You should do the same. Figure out the topics for your first ten posts, and then write them steadily. Not only does this get you thinking ahead, it also gives you a sense of how many posts per week you can realistically write.

DAYS 15 TO 30:

6. Reevaluate your blog title. At this point, you will likely have several blog posts to look at and a better sense of what you enjoy writing about. It’s the perfect time to check the title of your blog and theme that you set earlier and make sure it still accurately describes what you want to write about. If it doesn’t, now is the perfect time to fix it.

7. Design your blog brand. Now that you have your theme and several posts, you can design your blog. At this stage, you may just want to add a logo to an existing template or do something more custom. Either way, by having your blog brand set and several posts in your archive, you can really see what your design will look like.

8. Get your blog listed.
It’s also time in these two weeks to get your blog listed on all the search engines by submitting it. You should also claim it on Technorati, and submit it to any other sites in your particular industry or area of focus. Remember, you don’t need to focus on promotion right now, this is just about getting your site listed.

9. Set up your feeds. Many bloggers today (including me) are using Feedburner to syndicate their RSS feeds and offer email subscriptions to their blog. Whether you choose to use Feedburner or not, setting up your feeds and making them available to readers will be important as you start to grow your blog.

10. Learn the art of headlining. In blog posts, titles make a big difference. Particularly because many readers will be accessing your content through RSS and the title may be the only thing you see. To deal with this truth, you need to think like a copywriter and treat your blog post titles like headlines. Learning to write good blog post titles will be a major skill you will use all the time.

DAYS 30 TO 60:

11. Set your targets. You’ll probably be getting close to finishing your first ten posts by now, or at least worrying about what you’ll write about next. Based on what you’ve been able to do in the first month, set a target for yourself of how many posts you will try to write per week. My target is three and I usually stick to it.

12. Learn the 25 styles.
More than a year ago, I wrote a presentation designed to answer the common question from bloggers of what to write about. To help you fight "bloggers block" – view the presentation and learn the techniques. They will help you figure out what to write about, as they have helped me.

13. Contact your influencers. Now that you have a month of blogging experience, it’s also time to start asking for advice and introducing yourself to those who inspire you. Create a list of bloggers that you look up to and then religiously email one person from that list after you do a post. Ideally it will be someone who would be interested in your post and likely to respond to it.

14. Actively share your posts. In addition to emailing them to your influencers, you should start finding appropriate social networks and sites on which to share your blog posts. This could mean submitting them to Digg, or posting them onto del.icio.us with keyword tags. Essentially, you want to try a few tools to get your blog posts out there tagged and saved.

15. Integrate your blog into your profiles. At the point when you start your blog, you are probably already using other social networks such as Facebook or LinkedIn. After the first month when you have some good activity on your blog, you can add the URL to your profiles and make sure that your network knows you have a blog.

DAYS 60 TO 90:

In these days, your main focus should be on content and connections. Try to create the best blog posts you can – ones that have insights, a strong point of view, and are highly shareable. If you can really succeed at having this great content, people will pass it along and your blog will have the greatest chance of getting passed along too. At this stage you should also make sure that you are using all of your social networks to spread the word about your blog and your posts. The reason I don’t have specific lessons at this stage is that you’re starting to get to the point where you will probably be finding your own way and techniques that work for you. The best advice I can offer at this stage and moving ahead beyond 90 days is to try to stay as consistent as possible, continue to create the best content you can, and share it with people in your network most likely to help you spread the word.

Keeping Friday Night Clean with Gmail Goggles

 Gmail Soap

I’m not sure whether to continue laughing or to be truly grateful to Google for an innovative new Gmail Labs feature which has just been launched, entitled Mail Goggles.

Google engineer Jon Perlow posts on the Gmail blog:

“Sometimes I send messages I shouldn’t send. Like the time I told that girl I had a crush on her over text message. Or the time I sent that late night email to my ex-girlfriend that we should get back together. Gmail can’t always prevent you from sending messages you might later regret, but today we’re launching a new Labs feature I wrote called Mail Goggles which may help.

When you enable Mail Goggles, it will check that you’re really sure you want to send that late night Friday email. And what better way to check than by making you solve a few simple math problems after you click send to verify you’re in the right state of mind?

By default, Mail Goggles is only active late night on the weekend, as that is the time you’re most likely to need it. Once enabled, you can adjust when it’s active in the General settings. Hopefully Mail Goggles will prevent many of you out there from sending messages you wish you hadn’t. Like that late night memo — I mean mission statement — to the entire firm.”

I guess we have all sent emails over the years when we shouldn’t have. Some fuelled by alcohol, some fuelled by anger. I do think that for many people, this app will be truly useful. Though I’m still undecided if I like my email client controlling yet another part of the way I use my mail. I already have rules, spam and content filtering.  Can I no longer be trusted to send emails after a few beers, late at night?  Probably not.
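For fun, here is a toy sketch of the Mail Goggles idea in a few lines of Python. It is purely an illustration of the concept described above – a couple of arithmetic checks before a message goes out – and has nothing to do with Google’s actual implementation.

```python
# A toy re-implementation of the Mail Goggles idea: before "sending", make the
# user answer a couple of simple arithmetic questions. Illustration only.
import random


def sober_enough(questions: int = 2) -> bool:
    """Return True only if every arithmetic check is answered correctly."""
    for _ in range(questions):
        a, b = random.randint(10, 99), random.randint(10, 99)
        answer = input(f"What is {a} + {b}? ")
        if not answer.strip().isdigit() or int(answer) != a + b:
            return False
    return True


def send_email(message: str) -> None:
    if sober_enough():
        print("Message sent:", message)
    else:
        print("Maybe sleep on it and send this one in the morning.")


if __name__ == "__main__":
    send_email("Hey, we should totally get back together...")
```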

Mail Goggles can be enabled in the Settings section of your Gmail.



Godin’s 9 Steps to Presentation Nirvana

 

In my opinion, PowerPoint has an unfair reputation as a bad presentation tool. We have all heard comments over the years such as “death by PowerPoint”. However, it is not the tool that is the problem; it’s often the presenter. My two favourite books on presenting help to improve the style of your slides, and they also offer sound advice on limiting the amount of text on your screen.  Often, people respond more favourably to stories. Seth Godin makes this point in his post below.

  1. Don’t use PowerPoint at all. Most of the time, it’s not necessary. It’s underkill. Powerpoint distracts you from what you really need to do… look people in the eye, tell a story, tell the truth. Do it in your own words, without artifice and with clarity. There are times Powerpoint is helpful, but choose them carefully.
  2. Use your own font. Go visit Smashing Magazine and buy a font from one of their sponsors or get one of the free ones they offer. Have your tech guy teach you how to install it and then use it instead of the basic fonts built in to your computer. This is like dressing better or having a nicer business card. It’s subtle, but it works.
  3. Tell the truth. By this I don’t mean, "don’t lie," (that’s a given), I mean "don’t hide." Be extremely direct in why you are here, what you’re going to sell me (you’re here to sell me something, right? If not, please don’t waste your time or mine). It might be an idea, or a budget, but it’s still selling. If, at the end, I don’t know what you’re selling, you’ve failed.
  4. Pay by the word. Here’s the deal: You should have to put $5 into the coffee fund for every single word on the wordiest slide in your deck. 400 words costs $2000. If that were true, would you use fewer words? A lot fewer? I’ve said this before, but I need to try again: words belong in memos. Powerpoint is for ideas. If you have bullets, please, please, please only use one word in each bullet. Two if you have to. Three never.
  5. Get a remote. I always use one. Mine went missing a couple of weeks ago, so I had to present without it. I saw myself on video and hated the fact that I lost all that eye contact. It’s money well spent.
  6. Use a microphone. If you are presenting to more than twenty people, a clip on microphone changes your posture and your impact. And if you’re presenting to more than 300 people, use iMag. This puts your face on the screen. You should have a second screen for your slides–the switching back and forth is an incompetent producer’s hack that saves a few bucks but is completely and totally not worth it. If 400 people are willing to spend an hour listening to you, someone ought to be willing to spend a few dollars to make the presentation work properly.
  7. Check to make sure you brought your big idea with you. It’s not worth doing a presentation for a small idea, or for a budget, or to give a quarterly update. That’s what memos are for. Presentations involve putting on a show, standing up and performing. So, what’s your big idea? Is it big enough? Really?
  8. Too breathtaking to take notes. If people are liveblogging, twittering or writing down what you’re saying, I wonder if your presentation is everything it could be. After all, you could have saved everyone the trouble and just blogged it/note-taken it for them, right? We’ve been trained since youth to replace paying attention with taking notes. That’s a shame. Your actions should demand attention (hint: bullets demand note-taking. The minute you put bullets on the screen, you are announcing, "write this down, but don’t really pay attention now.") People don’t take notes when they go to the opera.
  9. Short! Do you really need an hour for the presentation? Twenty minutes? Most of the time, the right answer is, "ten." Ten minutes of breathtaking big ideas with big pictures and big type and few words and scary thoughts and startling insights. And then, and then, spend the rest of your time just talking to me. Interacting. Answering questions. Leading a discussion.

Most presentations (and I’ve seen a lot) are absolutely horrible. They’re not horrible because they weren’t designed by a professional, they’re horrible because they are delivered by someone who is hiding what they came to say. The new trend of tweaking your slides with expensive graphic design doesn’t solve this problem, it makes it worse. Give me an earnest amateur any day, please.

I would add a further point.

10. Watch other presenters.  YouTube and TED carry great videos of expert presenters. My advice is to study, watch and learn from them. Watching other presenters is a great way of improving your own technique.