Netbooks, Tablets, and Good Enough Computing

You may have caught my title and asked, “Aren’t Netbooks dead? Why are you bringing them up?” That is an excellent question, and while Netbooks are mostly dead (they are finding a role in emerging markets), they taught us something very interesting that sheds light on the tablet phenomenon.

Since our firm tracks the computing industry extremely closely, we did quite a bit of analysis on the market for Netbooks. Although the category was short-lived, as we predicted, it taught us something fundamentally important to understand: there is a massive market for computing devices that are good enough.

An Important Evolution

When the Netbook began its rise as a category, we started looking at the factors driving its market success. From our consumer interviews we learned a number of interesting things.

First, most people buying a Netbook were not looking to replace a notebook; rather, they were looking for a less expensive second, third, or fourth computer so that other members of the home or family could get online, do simple tasks, and so on. What became clear was that in the average home not every member of the household had a personal computer, and many consumers appreciated the low cost and small size of Netbooks as a way to fill this void. These products represented a low-cost way to get multiple new PCs into the home for simple tasks and, more importantly, alternate screens for web browsing. Large numbers of consumers told us that the few PCs they had in the home were constantly being fought over, mainly for web browsing. Netbooks looked like an easy way to get everyone in the home a PC screen of some type. Many knocked the category at the time and believed Netbooks were just truncated PCs; however, they were good enough for the mass market.

But our interest in Netbooks went further, into the experience consumers had with them. More often than not, consumers mentioned that the capabilities of the Netbook were sufficient for most, and in many cases all, of their everyday needs for a personal computer. This led to the good enough computing reality that has opened the eyes of many in the industry. As innovations saturate and mature, distinct elements of those innovations eventually reach a point of good enough, or diminishing returns. At that point, further innovation in the same areas becomes less apparent and obvious. This is particularly true of things like semiconductors, displays, and broadband. In all these instances there comes a point in time where the advancements become harder to distinguish.

An analogy I’ve used frequently when discussing good enough computing relates to Intel. Back when Intel was pushing Moore’s Law heavily and MHz, then GHz, was a big deal, we could objectively see the speed and performance advancements simply by opening a program like Word or Excel. I recall Intel, at many IDF conferences, opening an MS Office program and showing how much faster it opened on the latest generation than on the previous one. Today, no such example exists for the casual observer to notice the performance benefits of new-generation silicon. CPUs have reached a point of good enough for the mass market, and Netbooks brought this realization to light.

Good Enough and Smart Enough

It was this realization, and what we learned from Netbooks, that led us to believe tablets would be as disruptive as they have been. Tablets, like Netbooks, have taken advantage of the good enough computing paradigm, but they have done so by adding something Netbooks did not: touch. I’ve written extensively on the concept of touch computing and why I believe it is foundational to the future of computing, so I won’t go into too much detail here. Touch and the tablet form factor made the good enough experience even more compelling for consumers, forcing them to evaluate whether they need anything else as far as computers go. Some consumers may need more than a tablet, and some may not; the point is that they will decide what works best for them.

The full realization in all of this is simply that a massive section of the market does not have extreme demands of technology. When we were doing market analysis around Netbooks, we asked consumers which tasks they did with PCs on a regular basis. From that research we learned that the vast majority of those we spoke with, who fell into the early and late majority, used fewer than five applications daily, and none of them were CPU intensive (arguably playing Flash video is CPU intensive, but that is a debate for another time).

The key takeaway to understand with good enough computing is that many of the key features and innovations that originally drove demand (e.g., CPU speed, memory, resolution, number of apps) diminish in importance. This means that future product generations need to appeal to customers in new ways that go beyond the elements which are already good enough. I believe that too often companies put too much emphasis on the elements of their products which are already good enough for the mass market, so those features get glossed over, when in reality they should shift their emphasis to what is new or unique. Quality product marketing, messaging, and positioning will be at a premium going forward.

What appealed to the mass market of computing over the past few decades will not be what appeals to it now, in a mature and post-mature personal computing landscape. Understanding good enough computing does not mean that you stop innovating. What it does mean, however, is that it will be absolutely critical not to bring key innovations to market prematurely and risk having the mass market fail to understand their value. The key, rather, is to carefully and strategically bring key innovations to market at precisely the moment the mass market will value them.

Why The iPad Will Change How We Work

What is becoming clearer every day is the way in which tablets are changing paradigms of computing that have existed for decades. The entire way we think about computers, and computing in general, is undergoing significant change. In the days of the desktop and notebook, computing hardware and software were functionally the same and remained relatively unchanged; specifically, the mouse and keyboard were the main way we interacted, worked, played, produced, and created.

The iPad launched a new day in computing, one where the paradigm of mouse-and-keyboard computing gave way to touch-based computing. In the early days it was programs like VisiCalc that paved the way for computers to move from hobby to office tool. Today, new apps are being created for the iPad every day that prove it is more than a consumption and entertainment device; it is a powerful tool with which genuine creation and productive work can be accomplished.

I have thought about this for a while, and we have written extensively about many of the ways touch computing opens the door to new opportunities. However, it wasn’t until recently, with the launch of iPhoto on the iPad, that I came to a deeper realization of how profound this change may be. That is why I chose to title this column the way I did. I truly believe the iPad, and more specifically touch-based computing, will entirely change the way we work, create, produce, and more.

Tough Tasks Become Easier

While going through and analyzing the slew of information in the help tips for iPhoto for iPad, I came to a profound realization. When it comes to content creation, touch and software optimized for touch allow us to do with ease tasks that were either very difficult or extremely time consuming with mouse-and-keyboard computing. This may not apply to all tasks or all software, but there are certainly tasks that shine on touch platforms. iPhoto for iPad is one of the clearest cases of this.

I have been into photography since high school, where I took photography for three years, and I have stayed active since, always trying to make the perfect photograph. I would also call myself an advanced user of Photoshop. As I have used iPhoto for iPad more and more, it has become clear how powerful a photo editing tool it is. What’s more, iPhoto, as touch-optimized software on a touch device, actually makes extremely complex tasks much easier and more enjoyable than they are with a mouse and keyboard.

A key example of this is adjusting colors in a photo. If I took a photo and wanted to adjust the color in just the sky, for example, I would traditionally need to isolate the sky and then tweak the color elements independently. With iPhoto for iPad, you simply touch the sky and slide to the left, and the software adjusts just the blue of the sky to your liking. With a single touch, iPhoto on the iPad accomplishes a task that would take a minimum of five clicks with a keyboard and mouse and probably five minutes or so of precision mouse work. This is just one of many examples of how touch computing will change how we work, today and of course in the future.

Mainstream Consumers Can Now Participate

Using the Photoshop example again, another realization struck me. If I sat my kids or my wife down in front of the desktop or notebook, opened Photoshop and an image, and had them try to edit it, there would be mass confusion. I would have to spend quite a bit of time teaching them several basic things just to get them started.

Mastering a program like Photoshop is no easy task for the non-techie; think of all the seminars that exist for software and computer literacy. All of this changes with the iPad and touch-based computing. I gave my kids the iPad, opened iPhoto and an image, and let them go. Within five minutes they had figured out how to adjust colors, lighten areas of an image, and add effects (they are 6 and 9).

They nearly mastered a program in under 10 minutes and began doing professional-level tasks in that short time frame. This would be nearly impossible without extensive time and training using a mouse, keyboard, menus, icon palettes, etc.

Touch-based computing opens the door to bringing true computing to the masses. Think about how many consumers out there have notebooks or desktops running software capable of creating amazing things, yet never use it, or, when they do, never take advantage of its full potential. Touch computing changes all of this and is the foundation that will bring more consumers to create and produce things they never would have with a mouse and keyboard.

A quote I am fond of is “simple solutions require sophisticated technology.” The iPad and its touch computing software ecosystem are among the most sophisticated technologies on the market today. It is no wonder the iPad is enabling simple solutions and inviting more and more consumers to participate in computing in ways they never have before.

Although I focused this column on how the iPad and touch-based computing will change how we work, produce, and create, we ultimately believe this platform will also change the way we play, learn, are entertained, and much more.

It all boils down to the fact that the iPad is changing everything.

The Era of Personal Computing

I have adopted a philosophy in my analysis over the past few years in which I distinguish between personal computing and personalized computing.

In a post a few months ago, I wrote about these differences and pointed out that because of the differences in personal and personalized computing the Post PC Era will happen in two different stages.

The first stage is personalized computing. In this era, the one we are currently in, all of our personal computing devices are personalized by us. What I mean by this is that we take the time to personalize our devices with our personal content, apps, preferences, interests, and so on. In reality, however, how personal are these devices? They don’t actually know anything about us; we simply use them to get jobs done. We customize them and they contain our personal content, but they really aren’t that personal.

However, in this next phase, the era of personal computing, things may actually get very interesting. In this era our devices will actually start to learn things about us and, in the process, become truly personal. Our most personal devices will learn our interests, schedule, preferences, habits, personality, and more. I know it sounds a bit scary, but that is where we will inevitably end up.

I believe Apple’s latest feature, Siri, demonstrates this future reality of personal computing. As Tim pointed out in his article yesterday, Siri and the underlying artificial intelligence engine will learn key things about our unique tastes, interests, and more, and over time become even more useful as a personal assistant.

What is absolutely central for this personal computing era to become reality is that we have to allow our devices to get to know us. Perhaps more specifically, we have to trust our devices, or the underlying company providing us the personal computing experience.

John Gruber makes this very point in a post featuring comments from Ed Wrenbeck, former lead developer of Siri.

In an interview with VectorForm Labs, Ed Wrenbeck states:

“For Siri to be really effective, it has to learn a great deal about the user. If it knows where you work and where you live and what kind of places you like to go, it can really start to tailor itself as it becomes an expert on you. This requires a great deal of trust in the institution collecting this data. Siri didn’t have this, but Apple has earned a very high level of trust from its customers.”

In the era of personal computing we will move beyond personalizing our devices and enter the era where they truly become personal to us, because of their ability to know, learn, and be trained about who we are and our unique interests and needs.

There are many great examples of this in sci-fi movies and novels, but perhaps my favorite, because it is fresh, is how Tony Stark interacted with Jarvis in the Iron Man movies. Jarvis is what Tony Stark named his personal computer, and as you can tell from their interactions in the movies, Jarvis knew quite a few of the intimate details of Tony Stark’s life.

Jarvis was a personal computer, one that took on an entirely new kind of usefulness because of the artificial intelligence built on top of incredible computing power.

Of course, this all sounds extremely futuristic, but it will be the basis of what takes us from having to manually personalize our devices to a future where our devices truly become personal and indispensable parts of our lives.