Digital Reincarnation at its Best: Live From Daryl's House

Today’s world is fickle, and one of my favorite examples of digital avenues bringing new audiences to the best performers of our time is definitely Live From Daryl’s House, started by Daryl Hall in late 2007.  This is music discovery at its best, with Hall taking a simple idea and turning it into web gold. Hall recently said of the show, “I had this idea of playing with my friends and putting it up on the Internet.”  What could be easier, right? Certainly the response has been huge.

The show has become almost iconic, drawing applause from a long list of leading names in the industry, including Rolling Stone, SPIN, Daily Variety, CNN, BBC, Yahoo! Music and influential (and hyper-critical) blogger Bob Lefsetz. This is exactly what veteran artists need to be creating in order to reinvent themselves in the digital age and gain new audiences (and influence) through vibrant collaborations with both established leaders in music and new performers.

Daryl Hall has had a rich and varied career, working with virtually all of the great musicians of modern popular music, as well as entering into new relationships with the best of the latest generation of artists. So far, episodes have featured superstars like Smokey Robinson, Rob Thomas (of Matchbox 20), Robby Krieger and Ray Manzarek (The Doors), Train, Nick Lowe, K.T. Tunstall, Gym Class Heroes' Travis McCoy, Fall Out Boy's Patrick Stump, and soul legends The Blind Boys of Alabama – as well as newcomers such as Nikki Jean, Grace Potter & the Nocturnals, Canadian techno-rockers Chromeo, Bay Area singer/songwriter Matt Nathanson, and highly touted tunesmith Diane Birch.

He’s also featured my own close personal friend, Todd Rundgren, several times, most recently at Rundgren’s home in Kauai, Hawaii, where they performed a rousing 7-song set, including an amazing cover of the Delfonics’ 1970 hit, “Didn’t I Blow Your Mind This Time.”

Daryl Hall and Todd Rundgren have known one another since their early days in Philadelphia, and the gig in Hawaii included an old-fashioned traditional Luau Show, burying a pig in the dirt, serving up some poi, hula dancers and a special performance with local musicians of "Bang on the Drum." Said Rundgren, "It's always great when friends come all the way out here to visit, but it's even better when they come to play."

Hall’s latest collaboration has been with up-and-coming artist Allen Stone, a virtual look-alike for Daryl himself (in both image and musical philosophy).  In fact, that collaboration went so well that Stone is now touring and working with Hall and Oates.

Hall has had an illustrious career, with six #1 singles with collaborator John Oates, including "Rich Girl" (also #1 R&B), "Kiss on My List," "Private Eyes," "I Can't Go For That (No Can Do)" (also #1 R&B), "Maneater" and "Out of Touch" from their six consecutive multi-platinum albums—'76's Bigger Than Both of Us, '80's Voices, '81's Private Eyes, '82's H2O, '83's Rock N Soul, Part I and '84's Big Bam Boom. The era would also produce an additional 5 Top 10 singles: "Sara Smile," "One on One," "You Make My Dreams," "Say It Isn't So" and "Method of Modern Love."

Live from Daryl’s House is being shown weekly in over 80% of U.S. homes in the nation’s top 200 media markets, as well as all of the top 10, including New York, L.A., Chicago, Dallas and Houston. The show also recently won the 2010 WEBBY Award in the Variety Category.

You can sign up for his Newsletter at http://www.livefromdarylshouse.com/emupdates.html

 

Fun & Games: Cross Digital Distribution Takes Movies into a Whole New Realm

A few months ago I had the privilege of interviewing Thomas Dolby and getting an insider’s view into the extremely creative way he was introducing the first new album he’d created in over two decades.  Fans would go through the portal of a game, The Floating City, and only when they had reached certain benchmarks in the game did they get to download some of the best tracks.

As the digital revolution reaches what seems to be an absolute frenzy of technological advances and debuts, we're seeing more and more cross-platform promotion and collaboration around product distribution. One of the latest (and biggest) entries into this virtual world is based on Suzanne Collins' popular novel, "The Hunger Games."

According to Digital Media Wire, social game developer Funtactix has partnered with Lionsgate around the movie's debut this spring. The Hunger Games Adventures is set to launch on March 23, the day the first movie opens in theaters.

Lionsgate started the cross-digital campaign months ago with "The Capitol," a web presence for the government of Panem, where "The Hunger Games" stories take place. They also created a strong social media launch, featuring a YouTube channel in addition to other social spaces.

The Hunger Games Adventures game is set to be a strong player in the multi-digital branding of the three Lionsgate movies. Players move through The Capitol and the key Districts in the story, becoming more engrossed at each level of the game.

“As a company that wants to align itself with the most beloved entertainment properties and passionate fans in the world, joining forces with Lionsgate on The Hunger Games was a very easy decision,” said Sam Glassenberg, CEO of Funtactix.

This isn’t the only foray Funtactix has made into the world of branded film releases.  They recently created a game built around the Mission Impossible franchise.

Since its founding in 2006, Funtactix has earned an industry reputation for rapid innovation in web-based gaming. They were the first to deliver connected 3D multiplayer action gaming through deeply integrated, avatar-based games.

They're also poised to release The Hunger Games Adventures on Facebook, and with a following of millions and the realm of Harry Potter coming to a close, it will be interesting to see how far Funtactix – and other branded gaming companies – will reach as the digital revolution continues to evolve.

Kelli Richards
President and CEO
The All Access Group, LLC

Hulu’s Latest Hot Ticket

When it comes to digital distribution, one of the big online commercial sites for video has certainly been Hulu. In only four short years of life, Hulu has carved out a tremendous niche with a huge tribe of trusting, loyal fans and users.  While Hulu is “independent” to some degree, NBCUniversal, Newscorp and even Disney are part of the ownership team.

For anyone not familiar with Hulu yet, at its core it is simply an online video service providing formally produced, commercial content – movies, television shows, clips and more – coming from a very wide variety of sources, such as FOX, NBCUniversal, ABC, Criterion, A&E Networks, TED and a long list of other content providers.

So why, after four years of great digital distribution, am I writing about Hulu? Because they are about to take a huge leap of faith and add another original production to their arsenal – and original content is a journey that even Oprah Winfrey herself can tell you is fraught with danger. So in addition to movies and primetime TV hits such as Modern Family, Glee, The Office, etc., etc., etc., viewers can also watch Hulu's own creations (A Day in the Life and The Morning After), as well as their newest addition, "Paul, the Male Matchmaker" (launching on Monday, February 13th exclusively on Hulu). The launch date – the day before Valentine's Day – is no accident: the 10-episode comedy is a mockumentary about a socially inept man who inherits a matchmaking service and then doles out brutally honest dating advice in the sincere belief that he is helping women find love.

Actor/writer Paul Bartholomew (Mad Men; Yes, Dear), who stars in the series, said, “This show is for anyone who has ever been set up on a horribly misguided date by their sister, friend, co-worker — and then been blamed for it not working out. Which is basically everyone.”

Finally, original, full-length, commercially produced web series are starting to find a foothold – and a distribution portal like Hulu is exactly the venue to bring enough attention and a strong enough fan base to get audiences to show up week after week. Rock on, Hulu – we're looking forward to where you go next!

Kelli Richards
CEO
The All Access Group, LLC

 

 

Marketing to Moms – a Language Barrier

Before attending the recent CES show in Las Vegas – my 12th consecutive visit to the annual tech pilgrimage – I ventured a prediction to my husband: "As much as I hope this is not the case, based on the PR barrage I have received prior to the event this year, I think many of the products aimed at women at the show will STILL be mostly pink."

Although I can admit this was not entirely the case – the MommyTech section of the show encompassed almost exclusively fitness gadgets, nail polish machines, and rhinestone accessories for my smart phone and tablet – my disappointment came from a different source.

Tech marketers are still only speaking in speeds and feeds.

With a few notable exceptions, the new technologies demonstrated by consumer electronics brands large and small highlighted exclusively the power of the processor, the water-resistant casing, the speed of the memory and more – but I seldom heard "and this is what it means to consumers."

Why is it so difficult to make that translation? Tech retailers do the same thing. On a recent trip to Best Buy to buy an SLR camera, the salesman focused on the quality of the lens, talking about pixels and the number of crystals, and even explaining how the light is processed inside – but he never said: "And all this means that you will confidently take the best, most pristine pictures to capture your most special memories."

It is 2012, people. Women – and moms in particular – account for two-thirds of consumer purchases, and they are speaking up, engaging brands, sharing their experiences and recommending products they love on Twitter and Facebook, and pinning pictures on Pinterest.

Speak to us in plain language; highlight the benefits of speed, durability, and reliability in terms that support our daily lives – such as "it will be fast and ready when you need to make that call," or "it will endure the wear and tear of 11-year-old triplets at home."

Even geeks like me need to hear how your technology will enhance our lives.

Oh and please, drop the “best of breed” – I can’t even translate what that means to me!

Amazon & the End of the Book

With the end of the Nook for Barnes & Noble and doom and gloom on expected losses and lowered guidance for fiscal 2012, the company’s stock fell 18 percent. The Nook was the poster child of Barnes & Noble’s in-store growth strategy.


Its nemesis, Amazon, is doling out cash to authors who make their e-books available exclusively on Kindle for 90 days. Kindle Direct Publishing (or KDP, for those in the know) has set aside at least $6 million for 2012. Books can be "borrowed" for free and authors receive royalty payments based on the popularity of their titles. This may be one more step toward the end of the bookshelf as we know it.

While Amazon erodes the viability of the physical store, the Amazon storefront is fast becoming confusing to navigate, and it is a slippery slope for authors. If we let the age-old publishing process – which allows a book to percolate (sometimes arduously) from manuscript to agent to editor to published work – fade away, who will curate our content? Can the publisher and bookstore forge a new role in the value chain?

No more rejection letters
There is the age-old tale of the rejected writer: years of shipping manila envelopes to agents, years of returned manuscripts and polite decline letters from editors. J. K. Rowling’s agent submitted her wizard’s tale to twelve publishing houses and was rejected twelve times before she finally found a home. She is in good company as Stephen King and George Orwell were also rejected. One of Orwell’s critics wrote on the back of the Animal Farm manuscript, “It is impossible to sell animal stories in the USA.”
As the blog “Literary Rejections on Display” writes: “Remember this: Someone out there will always say no.” This is no longer the case.

Now there are aspirational tales such as Karen McQuestion who, after giving up on publishing her book, A Scattered Life, managed to self-publish and sell over 35 thousand copies. There are stories of writers like Jim Kukral, who went to the web to raise funds for his next book (a remarkable $16,000 in a week). Kukral, author of This Book Will Make You Money, says, "The walls are crumbling down, and aggressive and smart entrepreneurs are running through the gates to grab their share of self-publishing gold."

But is this new business model sustainable? Is this the inevitable revolution of the masses against the traditional publishers? (Publishers who, many feel, are removed from the new realities of digital publishing.)

The answer is no.
According to R.R. Bowker, a publishing industry analyst, self-published titles in the U.S. nearly tripled to 133,036 in 2010 and will continue to grow. Like the flood of self-published Apps in the iTunes Store, there is a point where the author can no longer be found amidst the huge numbers of books being published. Finding a publisher becomes the easy part. Selling and driving profit becomes impossible.

Self-publishing your first novel and hoping that it reaches a mass audience is effectively the same as the delusional garage app developer who, inspired by the success of Rovio's Angry Birds, decides to develop a game and post it to iTunes. While Peter Vesterbacka, Rovio's Chief Eagle, is touting his line of Red Bird sweatshirts, the developer's app will be buried deep beneath the other one million assorted apps waiting for success.

The book is lost and the digital bookstore is becoming increasingly crowded with vanity press. With triple-digit growth in self-publishing, it is difficult to know where to go to find an audience, and writers are flummoxed. With the surge of books self-published on Amazon's storefront, readers are equally flummoxed about where to go for quality content.

So how does the writer reach an audience? Amazon offers new reach and readers. But who is curating the explosive proliferation of content? What we collectively do not seem to understand is how the industry’s shifting roles are undermining the value chain for both the writer and the reader.

After years of battling the demons of book store conglomerates and then cloud commerce and eBook business models, the industry is teetering on reinvention.

We all know that what Amazon calls "pro-consumer" has been a major business disruptor for bookstores and now for shoe, apparel and electronics stores. Could Amazon simply be using the book to build its m-commerce empire? Is the book industry a necessary sacrifice: mobile commerce road kill?

Book Countdown
Here are the modified Cliff’s Notes on how the book industry turned on its ear:

1. Bad-Boy Barnes & Noble: In the 90's Barnes & Noble opened superstore after superstore across America. They became the Wal-Mart of books, with the same vendor-facing attitude. Publishers were forced to grin and bear the harsh Barnes & Noble business terms: challenging discounts, slingshot merchandise return policies, and more.

2. Amazon Cloud: Ten years later, Amazon reinvented book browsing and shopping, and Barnes & Noble opened coffee shops and began selling household furniture. Smaller publishers and independent bookstores began to vanish.

3. The eBook: In 2007, we saw the first Kindle, the harbinger of a new power game and, more importantly, a new relationship with the mobile consumer. The Kindle became the new storefront, further threatening the first market disruptor, Barnes & Noble. In order to promote its Kindle device, Amazon sold electronic books below wholesale prices – a tactical loss. Owning the commerce platform was the ultimate reward for Amazon.

4. Macmillan’s Counter Attack: This revenue model is understandably sub-optimal for the publisher. Led by New York-based Macmillan, the industry challenged Amazon’s hostile business model. Amazon pulled Macmillan content from its site. Macmillan held ground. Amazon caved. Round one.

5. Vanity Press: In the traditional publishing relationship, the writer should expect approximately a 7.5% royalty on paperback books and, for digital, 25% of net receipts (net receipts being the roughly 70% of the retail price that the publisher receives from the retailer). Amazon offers "publish direct" capability for writers on a 70/30 royalty share across the Kindle, Amazon Cloud, and the free Kindle apps. Direct is an attractive option (a rough comparison of the two models appears after this list).

The creative → agent → publisher → distributor relationship becomes disintermediated. Touted thriller-writer John Locke joined the Kindle Million Club (authors who have sold over a million books). And then there is the tenacious Amanda Hocking, who became a successful self-published author after receiving multiple rejections from traditional houses. (However, the Million Club is an elite club, and I would hazard a guess that there are many other would-be writers who will never go beyond vanity press.)

6. Slippery Slope: Book stores (Barnes & Noble) and publishers (the Perseus Books Group) launch self-publishing eBook services (PubIt! and Argo Navis respectively) with similar flattering revenue shares. With all stakeholders playing all the roles, the value chain is breaking up.

7. The Kindle Fire: Combining commerce with the immersive Kindle experience is the final frontier. Layering cha-ching into the armchair reading experience is a natural and powerful evolution of the bookstore. Amazon is so confident that it is selling the unit at a loss ($199 for a unit that costs about $210 to build).

8. Kindle Owners’ Lending Library: Amazon Prime members who own a Kindle can “borrow” one title per month, from this expanding library for free. Presently, there are a limited number of books available; Amazon has not received publisher consent to include titles from many publishers. In some cases, Amazon is simply paying the wholesale price for the book each time somebody borrows it.

This is Napster, the original peer-to-peer music file sharing service, but legal and underwritten by Amazon. The Authors Guild naturally has harshly criticized this business model.
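
As promised in item 5 above, here is a rough back-of-envelope comparison of the two royalty models, using the percentages cited there. The $9.99 list price is an illustrative assumption, not a figure from the article.

```python
# Back-of-envelope royalty comparison for a single e-book sale, using the
# percentages cited in item 5. The $9.99 list price is an illustrative
# assumption, not a figure from the article.
LIST_PRICE = 9.99

# Traditional digital deal: the author gets 25% of net receipts, where net
# receipts are the ~70% of the retail price the publisher gets from the retailer.
traditional_royalty = 0.25 * (0.70 * LIST_PRICE)

# Amazon "publish direct" (KDP): roughly a 70/30 split in the author's favor.
kdp_royalty = 0.70 * LIST_PRICE

print(f"Traditional e-book royalty:       ${traditional_royalty:.2f}")  # ~$1.75
print(f"Kindle Direct Publishing royalty: ${kdp_royalty:.2f}")          # ~$6.99
```

Even on rough numbers, the direct route pays about four times as much per copy – one reason the value chain is breaking up.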

Is this an eight-bullet epitaph for the book publisher? John Biggs blogs nostalgically, “While I will miss the creak of the Village Bookshop’s old church floor, the calm of Crescent City books, and the crankiness of the Provincetown Bookshop, the time has come to move on.”

Move On? The question is where to.

What is the beleaguered publisher's new role? (Guaranteed, the solution is not for the publisher to go digital by offering multimedia extras such as video and audio commentary with their eBooks.) The publisher can:

1. Taste Management: The publishing industry can retain its credibility as the purveyor of content. The publisher provides rich content and is in the best position to build a long-term relationship with the customer, selling targeted stuff to that reader, not once but many times.

2. Drive Subscription: Learn from mobile commerce. The mobile content aggregator never sold ringtones one at a time (too much work for the publisher and for the buyer). The mobile content aggregator sold a subscription. The mobile consumer paid for music curation. (And a pretty penny at that.)

Perhaps we need to reconsider the idea of buying a book. Perhaps we should be buying a content subscription to chapters instead of books. Or see the book as a modern Dickensian novel serialized in mobile monthly installments.

3. Sell non-traditional: Fight Amazon in the cloud, not the store. Publishers need to find ways to sell digital content into competitive storefronts. The publisher needs to work closely with the remaining terrestrial booksellers to help them sell into their digital storefronts.

Publishers need to be aware that the 2010s are eerily reminiscent of the music industry in the 2000s. Books have changed. Reading and commerce behavior has changed. Publishers need to reaffirm their value proposition and find a way to reintroduce their mission-critical role into the digital mall.

Interactive TV Trends – How the TV Experience is Changing, Part III

This is the third article in a three-part series discussing key trends in TV. The first article looked at how new interface technologies are enabling new ways to control our TVs. The second article focused on the multi-screen TV experience. This article focuses on how interactive TV trends are driving the need for improvements in TV image quality.

Full HD is not enough for Future TV
Some might believe our latest flat panel televisions represent the zenith of picture quality. This is not surprising, given we often hear that 1080P resolution, or "Full HD," is a "future proof" technology. The oft-cited reasoning is that for a given screen size, viewed from a normal watching distance, the acuity of the eye cannot discern resolutions beyond Full HD. Another reason Full HD is considered future proof is that only a very small percentage of video content is even broadcast at this resolution. Most digital pay TV broadcasting systems transmit in lower resolution formats – the industry is still catching up.

Certainly, no one looking to buy their next TV set should be concerned that 1080P is not good enough. Considering that people typically keep a TV set for about eight years, a consumer cannot go wrong with "Full HD." But for people interested in where the industry is going in the long term – looking out over the next ten years – image quality is going to see massive improvements that will make today's TV technology look primitive.

Part of the reason we can expect big improvements in TV video quality has to do with our superior eyesight. Our capacity to see is orders of magnitude greater than what our TVs can display. For example, a Full HD TV displays about 2 million pixels of video information. In real life, one of our eyes processes about 250 million pixels – but since we have two eyes channeling vision to our brains, our effective vision makes use of more than 500 million pixels of video information. And while it is true that we can only discern a limited resolution from a given distance, our eyesight is also sensitive to contrast, color saturation, color accuracy and especially movement. All of these are areas where TV systems can improve.
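
As a quick sanity check, here is the arithmetic behind those figures; the human-vision numbers are the article's own rough estimates, not measured values.

```python
# The arithmetic behind the figures above. The human-vision numbers are the
# article's own rough estimates, not measured values.
full_hd_pixels = 1920 * 1080              # ~2.07 million pixels on a Full HD panel
one_eye_pixels = 250_000_000              # rough estimate for a single eye
both_eyes_pixels = 2 * one_eye_pixels     # > 500 million pixels of effective vision

print(f"Full HD panel:    {full_hd_pixels / 1e6:.1f} million pixels")
print(f"Effective vision: {both_eyes_pixels / 1e6:.0f} million pixels")
print(f"Ratio:            roughly {both_eyes_pixels / full_hd_pixels:.0f}x")  # ~240x
```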

Detractors may argue TVs do not have to be perfect – just a reasonable representation. Others may argue that consumers care only about TV size and price, and that image quality is not a selling point. But I argue TV image quality does matter – quality has always had to keep pace with the growing size of TV screens. TVs will continue to get larger, requiring improvements in resolution as our room sizes start to limit viewing distance. Also, the nature of interactive TV and future 3D systems will make us want to sit closer to the TV set – again mandating video quality improvements.

Interactive TVs Make You Sit Closer
Interactive TVs will bring games, virtual worlds and new video applications, drawing us physically closer to the TV screen. Gaming is a huge industry – with almost $50B spent on gaming consoles, software and accessories. Virtual world games are increasingly popular. "World of Warcraft" is a massively multiplayer online role-playing game with over 10 million subscribers. All kinds of social virtual worlds such as the Sims, Second Life, IMVU and Club Penguin are attracting millions of players. IMVU, with over 50 million registered users, is a social game where people can develop personal avatars and spend time in virtual worlds chatting and interacting. While many of these games are still played on PCs, migration to the living room TV is inevitable. Console games have already shown the way – the size and immersive nature of the large screen TV will draw others into the living room as well.

3D display will also drive the need for improving display resolution and image quality. Sure, everyone hates 3D glasses – but technology will continue to evolve, and glasses-free 3D displays will continue to improve and come down in price. There will be applications that consumers will demand in 3D, such as sports – people will see the advantages of watching close-up sports action on the large screen display in vivid, artifact-free video.

OEMs and broadcast equipment companies are investing heavily in supplying the infrastructure to make this happen. 3D advertising will take on more importance – imagine having the option to tour a car or a house in extremely vivid 3D. On the entertainment side, movie and video directors will become much better at using 3D perspectives in a way that takes advantage of image quality improvements. Today 3D effects are more of a gimmick – watch the arrow fly into the room, for example. But going forward, directors will make more subtle use of 3D, adeptly drawing viewers into the film or the show. On a beautiful large screen display with ultra high resolution and image quality, viewers will practically feel like they are part of a movie or scene.

3D, when matched with the power of the internet, also opens up a world that we could only dream about. For example, the evolution of virtual worlds and their capabilities becomes much more compelling with large screen displays. A simple example is virtual tourism and world exploration. Just as Google has photographed the street views of the world, there is no reason we cannot build a 3D model of the whole terrestrial experience on earth within a few years. Imagine then the capability to walk around the world as a virtual tourist and view the world from the comfort of your 3D television.

As virtual worlds improve, new immersive ways to interact with large screen TVs will continue to evolve. Many social activities come to mind, as well as the concept of participating in or viewing e-sports. E-sports are virtual sports games that can also be viewed by others. The prospects for e-sports are boundless and limited only by imagination. Virtual bullfights, gladiator battles and racing events will be watched online the same way we watch football games today.

The display-use model will also change over time. Today our concept of a display is a TV set that sits in the living room – a piece of functional furniture. With the advent of new display materials like OLED, the display will transform from furniture to architectural material. There is no reason the wall in your den cannot become a display. And why stop with the wall? Imagine the immersive feeling of the ceiling, floor, and walls all around built of display – it's the video equivalent of surround sound. The architectural use of display could also add interesting use cases beyond entertainment.

For example, inlaid architectural materials can appear in almost any room around the house. Touch screens in the kitchen can provide not only control but also interactive recipe applications and videos with cooking instructions. Bathroom walls can provide wallpaper backgrounds or any kind of networked information that we already see on our PCs. Inlaid display technologies will appear on appliances as well as anywhere people need information or help with controls. The point of all this is, again, that there will be many reasons in the future for us to get close to the screen – and all this proximity will demand increases in display quality.

TV Development Underway
Already major TV OEMs are working on the next step up in resolution over Full HD. There are multiple proposals in development for higher resolution TV systems. TV OEMs are already demonstrating "4K×2K" systems that provide 4096 × 2160 pixel arrays. Even beyond "4K×2K" is Ultra High Definition (UHD), which provides 7,680 × 4,320 pixel resolution – about 33 million pixels, or roughly 16 times the number of pixels used by Full HD systems. UHD was first introduced by Japan's national TV broadcaster NHK in 2003. NHK, marketing the resolution as "Super Hi-Vision," had to build the cameras and display technology from scratch to be able to create a UHD demonstration system. Since then NHK has displayed the system at numerous broadcasting shows. Toshiba, LG and Panasonic showed UHD systems at CES 2011 – likely more UHD sets will be shown in 2012. The UK's BBC is also interested in this format, and has announced plans to provide UHD coverage of the 2012 London Games.
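
For reference, here is the simple arithmetic behind those pixel counts and the "16x Full HD" figure quoted above.

```python
# Pixel counts for the display formats mentioned above, relative to Full HD.
FULL_HD = 1920 * 1080

formats = {
    "Full HD (1080p)":       (1920, 1080),
    "4Kx2K":                 (4096, 2160),
    "UHD / Super Hi-Vision": (7680, 4320),
}

for name, (width, height) in formats.items():
    pixels = width * height
    print(f"{name:24s} {pixels / 1e6:5.1f} M pixels  ({pixels / FULL_HD:4.1f}x Full HD)")
# UHD works out to ~33.2 million pixels, about 16x the pixel count of Full HD.
```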

In addition to higher resolution, OEMs continue to invest in superior display technologies like organic light emitting diode (OLED) displays. OLEDs have several advantages over LCD and plasma display technologies. For example, OLEDs do not make use of a backlight but emit light directly. Direct emission results in much more vivid color, higher contrast and wider viewing angles than LCDs. Since there is no backlight in OLED TVs, OLEDs are also much more power efficient and lower in weight. OLED displays can even be flexible – opening up new opportunities to use displays in various new applications in architectural display and even clothing.

OLEDs also have much faster response times than LCDs. In fact, the relatively slow response time of LCDs required the industry to compensate by introducing all kinds of frame rate conversion techniques. OLED response times are roughly a factor of 1,000 faster than LCD, allowing for much better motion performance.

Improvements will also need to continue on the broadcast side. Higher resolution TVs consume bits at an alarming rate. For example, uncompressed UHD would demand about 24 Gbps, a major jump over the ~1.5 Gbps required for uncompressed Full HD. Any increase in resolution will demand major improvements in data compression as well as networking, storage and broadcasting capacity.
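
To see where those bandwidth figures come from, here is the uncompressed-bitrate arithmetic; 24 bits per pixel and 30 frames per second are common assumptions used for illustration and may differ from the exact parameters behind the article's numbers.

```python
# Uncompressed video bandwidth = width x height x bits-per-pixel x frames-per-second.
# 24 bits per pixel and 30 fps are common assumptions, used here for illustration.
def uncompressed_gbps(width, height, bits_per_pixel=24, fps=30):
    return width * height * bits_per_pixel * fps / 1e9

print(f"Full HD: {uncompressed_gbps(1920, 1080):.1f} Gbps")  # ~1.5 Gbps
print(f"UHD:     {uncompressed_gbps(7680, 4320):.1f} Gbps")  # ~24 Gbps
```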

But the march of improvements will continue. As TV screens get larger and the way we use these screens draws us in closer, the need for improved image quality will only grow.

Our TV experience will change dramatically over the next ten years. As this series of articles has discussed, the whole TV experience will continue to morph the way we spend our time watching large screen displays. 2012 will bring some interesting signs about how all this will play out. In 2012 we will see OEMs developing much better ways to interact with TVs – our ability to control the TV through new remote technologies, and improvements in finding and sharing content, will make major advances. We can expect more of our handheld tablets and smart phones joining us in front of our TV sets. Interactive TV will bring not only more sources of content, but also new tools to help recommend and share the content and media that we really want to see. Finally, the way we use TV will be much more immersive, demanding major improvements in video quality over what we have today.

The Tablet Market in 2012: What it Means for Publishers

Those who planned their tablet strategies based on the predictions of key analysts and the excitement at the Consumer Electronics Show (CES) at the beginning of 2011 may want to be very wary of the wave of predictions for the 2012 tablet market, given that 2011 remained iPad dominant. Apple is now expected to sell around 39 million units worldwide, and even the top competitor in the tablet market, Samsung with its Galaxy Tab, has achieved very modest sales by comparison.

Motorola's Xoom showed initial promise but faltered, and Motorola Mobility itself was purchased by Google. RIM's PlayBook stalled, with updates not expected until late 2012. HP launched and then abandoned the TouchPad, and Dell shuttered the Streak.

At the low end of the tablet market, competing against Barnes & Noble's Nook, the Kindle Fire from Amazon is the only Android-based tablet seeing solid momentum, with the company claiming around 1 million sales a week thanks to a $199 price point. However, despite the claims, inventory is still plentiful and the Fire's software platform has come in for significant criticism (an update is promised soon), with a lot of customers giving it one to three stars.

A recent report from IDC, which now counts both the Kindle Fire and Barnes & Noble Nook as Android tablets, forecasts that Apple's worldwide market share will dip to 60% in 2012, although it should be pointed out that the analyst firms use shipment numbers for Apple's competitors that don't necessarily equate to sales numbers. The current key players are Samsung, Amazon, B&N, Asus plus "others."

CES 2012 from January 10-13 will no doubt showcase a vast range of new tablets, but if the history of the last couple of years repeats then virtually all will fail to gain any traction. Hardware alone without integrated software and an effective distribution system is doomed. Very few companies can provide the whole digital ecosystem, and Apple has a near unassailable advantage in this regard.

Merrill Lynch predicts continued growth of the tablet market (chart omitted).

New iPads on the horizon?

Looking at the year ahead, Apple is expected to roll out the iPad 3 in the first quarter of 2012. The key features are an iPhone 4-style "retina" display and faster processors to cope with the demands of that higher resolution screen. Many expect Apple to launch a smaller iPad with a screen size of nearly 8 inches later in 2012, and this could help counter any competitive threat from Amazon. A possible pricing structure of around $250-$299 for an 8-inch tablet, $399 for the older iPad 2 and the next iPad 3 for $499 (or maybe a little higher) would allow Apple to cover the market from value to premium products. According to a recent report from J.P. Morgan Chase, total worldwide tablet shipments will reach 99 million in 2012 and will rise to 132.6 million in 2013. Although it will see some market-share erosion, Apple is likely to remain the dominant tablet supplier for the next few years, so it needs to be core to any publisher's digital mobile strategy.
Publisher reset

For many publishers, 2011 has been a year of expensive experimentation. Now, many are reassessing their return on investment in tablet apps. Over-designed multimedia versions of consumer magazines appear to distract readers from the core content. The majority of tablet readers seem happier with simple enhanced PDF versions of their favorite brands, or layouts that emphasize the readability of the articles on their devices. Photos and videos that enhance the text are welcome; unnecessary interruptions and distractions are not. In a sea of often mediocre publisher apps, there are some real standouts that deliver an excellent balance and improve on the print version – some of my favorites include the National Geographic iPad version, the travel magazine TRVL and The Economist.

Now, in addition to Apple's iTunes, Amazon (and to a lesser extent Barnes & Noble) offers publishers additional digital distribution channels for paid subscriptions and digital newsstand copies. Outside these channels, the general Android marketplace is still searching for an efficient app marketplace for paid content. The industry consortium Next Issue Media (NIM), which represents Condé Nast, Hearst, Meredith, News Corp and Time Inc., has so far failed to achieve momentum, but maybe NIM will redeem itself in 2012 with an HTML5 strategy.

Publishers will continue to face a very challenging and fragmented market. They have to deal with Apple's hardware dominance. Most mainstream publishers have somewhat reluctantly come to the conclusion they cannot ignore the iPad platform and have come to terms with Apple – although there are some notable exceptions, including Time magazine and the Financial Times. In rejecting Apple's App Store, the FT has developed a sophisticated HTML5 approach allowing its content to be viewed in a mobile Web browser.

Although there are many compromises, an HTML5 approach allows publishers to take advantage of the growth of the tablet market without restricting themselves to any one operating system. As HTML5 continues to grow in sophistication, I expect the major publishers will experiment with both an app and an HTML5 strategy.

On the iOS platform, publishers can distribute directly via Apple's Newsstand and also via Zinio's reader app. Neither approach gives the publisher the customer information they really want. On the Android platform, the main app option for publishers is to go through Zinio (for tablets other than the Kindle Fire and Nook). For the Amazon and Nook platforms, publishers can deal direct or go through Zinio. The publisher's life is further complicated by a slew of app content aggregators, with Flipboard leading a very crowded field that recently saw the entry of Google's Currents. Managing advertising and content metrics across these multiple platforms is extremely challenging.

Further clarification – the Zinio app on the Kindle Fire is not showing up for many users but instructions for downloading the app are available via Zinio’s site.

Where’s the breakthrough?

I believe it depends on the segments being served – the B2B market has always been more of a data, information and service business and should see strong opportunities to drive revenues via quality information services targeted to their audiences. The key media brands can continue to play a trusted role aggregating services – content (original, aggregated and peer-to-peer/social), directory and supplier services, and links to physical conferences and events. Lead generation, premium paid services and sponsorship will be more important revenue streams than CPM-based advertising. In the long term, B2B publishers should really benefit from wider distribution as they can drive revenues outside of advertising.

For consumer publishers, it's a greater challenge, but the B2B market can provide some pointers. Consumer publishers (and some are moving there) need to be more vested in the (customer) data business. They have to better understand the needs of their customers and engage with them through valuable content and services, not just rely on impression-based advertising.
The rapid move to mobility

The transition to consumption of content via mobile devices has been evident for several years – the technology and related services are now catching up with the vision. A mobile strategy needs to be at the center of all publishers' long-range planning. The major challenge is that past revenue models are not transitioning easily to the new medium. Inevitably, the industry will go through a few more years of pain while new revenue models for the mobile world become obvious.

But the good news is that despite all the challenges, premium brands actually become more important as quality content is consumed by a wider audience and audiences look to trusted brands to guide them through an increasingly “noisy” content world.

The road ahead will be a bit rough, but all publishers should be excited that in 2012 their content will be distributed much more widely and they will have a chance to engage with new audiences.

That’s not a bad outlook.

This article originally appeared at MinOnline

Cool or Not Cool? Bandzoogle and its Cool Connection to Direct-to-Fan

I recently interviewed Dave Cool, the "voice" of Bandzoogle. Dave Cool writes the Bandzoogle blog, well known for inspiring and supporting Bandzoogle's #1 mission: to make Direct-to-Fan a very real accomplishment for artists and bands everywhere.

Dave Cool is perhaps best known for having directed and produced the documentary film "What is INDIE? A look into the World of Independent Musicians," which documented the experience of being an independent artist in the music industry. That movie actually became its own testament to the power of Direct-to-Fan, creating a huge movement around indie music and the process independent musicians go through amid today's new tech and its seemingly endless opportunities.

The film featured several leading experts in the music industry, including Derek Sivers (CD Baby) and Panos Panay (Sonicbids), as well as 20 independent artists. Without any background in film, and funded entirely on his own, Dave Cool took the film from a small do-it-yourself project and turned it into an indie success story in its own right, with the film screening all over the world and being mentioned on CNN.com and in Newsweek magazine.

A big inspiration in the world of musicians and bands, Dave encourages artists to keep control of their content on as many levels as possible and to maximize their fan outreach and merchandising. If you are a musician breaking out in today's world, this is a must. In the end, however, one thing Cool makes infinitely clear is that it's still about doing great work.

Bandzoogle is a web-based platform for artists and bands, allowing them to create a dot-com website – something bands MUST have, especially in the world of ever-changing social media. If a band puts its energies into MySpace, for instance, well… we all know what happened there. So a Facebook page is great, but a dot-com is still a necessity, and Bandzoogle is the leader in making this accessible and easy – "Easy enough for even the drummer to do," jokes Dave Cool during our interview.

He also spoke quite a bit about social media NOT being one-size-fits-all – and perhaps not being a good fit for some artists at all. (Sacrilege, right?)

You can hear this interview in its entirety at http://bit.ly/DaveCool and you can get a complimentary digital copy of “What is INDIE?” when you sign up for Dave’s mailing list at: http://bit.ly/hs4uk6

 

Gaming AMD’s 2012 Strategy

AMD intends to pursue “growth opportunities” in low-powered devices, emerging markets and Internet-based businesses.

There's an awful lot of misguided analysis wafting about regarding AMD's new strategic direction, which the company says it will make public in February. This piece is to help you (and me) sort through the facts and the opportunities. I last took a look at AMD's strategies earlier this year, available here.

Starting With the Facts

  • AMD has been a fabless semiconductor company since 2009. The company depends on GlobalFoundries and soon Taiwan Semiconductor to actually fabricate its chips;
  • In its latest quarter, AMD had net income of about $100 million on $1.7 billion in revenue. Subsequently, the company announced a restructuring that seeks to cut costs by $118 million in 2012, largely through a reduction in force of about ten percent;
  • AMD has about a 20% market share in the PC market, which Intel says is growing north of 20% this year, largely in emerging markets;
  • AMD's products compete most successfully against rival Intel in the low- to mid-range PC categories, but its 2011 PC processors have underwhelmed reviewers, especially in performance compared to comparable Intel products;
  • AMD has less than a 10% market share in the server market of about 250,000 units, which grew 7.6% last quarter according to Gartner Group;
  • AMD’s graphics division competes with nVidia in the discrete graphics chip business, which is growing in profitable commercial applications like high-performance supercomputing and declining in the core PC business as Intel’s integrated graphics is now “good enough” for mainstream buyers;
  • AMD has no significant expertise in phone and tablet chip design, especially the multi-function “systems on a chip (SOCs)” that make up all of today’s hot sellers.

What Will AMD CEO Rory Read’s Strategy Be?

I have no insider information and no crystal ball. But my eyebrows were raised in perplexity this morning at several headlines such as "AMD to give up competing with Intel on X86", which led to "AMD struggling to reinvent itself" in the hometown Mercury News. I will stipulate that AMD is indeed struggling to reinvent itself, as the public process has taken most of 2011. The board of directors itself seems unclear on direction. That said, here is my score card on reinvention opportunities in descending order of attractiveness:

  1. Servers —  For not much more work than a desktop high-end Bulldozer microprocessor, AMD makes Opteron 6100 server processors. Hundreds or thousands more revenue dollars per chip at correspondingly higher margins. AMD has a tiny market share, but keeps a foot in the door at the major server OEMs. The company has been late and underdelivered to its OEMs recently. But the problem is execution, not computer science.
  2. Desktop and Notebook PCs — AMD is in this market and the volumes are huge. AMD needs volume to amortize its R&D and fab preparation costs for each generation of products. Twenty percent of a 400 million chip 2011 market is 80 million units! While faster, more competitive chips would help gain market share from Intel, AMD has to execute profitably in the PC space to survive. I see no role for AMD that does not include PCs — unless we are talking about a much smaller, specialized AMD.
  3. Graphics Processors (GPUs) — ATI products are neck-and-neck with nVidia in the discrete graphics card space. But nVidia has done a great job of late creating a high-performance computing market that consumes tens of thousands of commercial-grade (e.g., high price) graphics cards. Intel is about to jump into the HPC space with Knights Corner, a many-X86-core chip. Meanwhile, AMD needs the graphics talent onboard to drive innovation in its Fusion processors that marry a processor and graphics on one chip. So, I don't see an AMD without a graphics component, but I don't see huge profit pools either.
  4. Getting Out of the X86 Business — If you’re reading along and thinking you might short AMD stock, this is the reason not to: the only legally sanctioned software-compatible competition to X86 inventor Intel. If AMD decides to get out of making X86 chips, it better have a sound strategy in mind and the ability to execute. But be assured that the investment bankers and hedge funds would be flailing elbows to buy the piece of AMD that allows them to mint, er, process X86 chips. So, I describe this option as “sell off the family jewels”, and am not enthralled with the prospects for success in using those funds to generate $6.8 billion in profitable revenue or better to replace today’s X86 business.
  5. Entering the ARM Smartphone and Tablet Market — A sure path to Chapter 11. Remember, AMD no longer makes the chips it designs, so it lacks any fab margin to use elsewhere in the business. It starts against well-experienced ARM processor designers including Apple, Qualcomm, Samsung, and TI … and even nVidia. Most ARM licensees take an off-the-shelf design from ARM that is tweaked and married to input-output to create an SOC design that then competes for space at one of the handful of global fab companies. AMD has absolutely no special sauce to win in the ARM SOC kitchen. To win, AMD would have to execute flawlessly in its maiden start (see execution problems above), gain credibility, nail down 100+ design wins for its second generation, and outrace the largest and most experienced companies in the digital consumer products arena. Oh, and don't forget volume, profitability, and especially cash flow. It can't be done. Or if it can be done, the risks are at heart-attack levels.

AMD says it "intends to pursue growth opportunities in low-powered devices, emerging markets and Internet-based businesses." One way to read that ambiguous sentence is a strategy that includes:

  • Tablets and netbooks running X86 Windows 8;
  • Emerging geographic markets, chasing Intel for the next billion Internet users in places like Brazil, China, and even Africa. Here, AMD’s traditional value play resonates;
  • Internet-based businesses such as lots of profitable servers in the cloud. Tier 4 datacenters for Amazon, Apple, Facebook, Google, and Microsoft are a small but off-the-charts growing market.

So, let’s get together in February and see how the strategy chips fall. Or post a comment on your game plan for AMD.

Interactive TV Trends – How the TV Experience is Changing – Part II

This is the second article in a three-part series discussing key trends in TV. The first article looked at how new interface technologies are enabling new ways to control our TVs. This article focuses on how the TV experience is changing as we begin to use multiple screens of our PC, phone and tablets together with our TV sets. The third and last article will discuss new trends in image processing and why major improvements in picture quality are still necessary.

Entertainment Multitasking – How our TV experience is changing
For years now we have had multiple screens in the home – at least two anyway, the PC and the TV – though they never had much to do with each other. This is now changing. TVs are starting to connect – not just to a PC but, more importantly, to your smart phone and tablet. In fact, our hand-held devices used in conjunction with the interactive TV represent a major change in how we will digest entertainment going into the future.

Earlier this year Nielsen conducted a study of the use of tablets, smart phones and e-readers in the home. Nielsen's survey found that tablets and smart phones are more likely to be used while watching TV. (E-readers, on the other hand, were more likely to be used in bed – no surprise there.) In fact, 70 percent of tablet owners and 68 percent of smart phone owners said that they use their devices while watching TV. Moreover, using the tablet while watching TV constitutes the largest portion of time spent on the device – about 30% of total time. After TV, tablet owners spend about 21 percent of their time using the tablet in bed. The Nielsen report also surveyed what people are actually doing on their tablets and smart phones while watching TV. The most popular activities are checking email and searching for either related or unrelated content. Again, not surprisingly, women are more likely to be connecting to social applications while men are more likely to be looking up sports scores.

The Nielsen report by itself shows clearly that we like to multi-task our tablet and smart phone use while watching TV. But is there something more compelling about the multi-screen phenomenon? In fact is it really a phenomenon at all? Whether or not there is truly a greater trend at work – TV networks, advertisers and technology companies are trying hard to put the multi-screen to better use.

Advertisers in particular are interested in managing ad campaigns that coordinate across the multiple screens in the home. ComScore recently published a report that measured the effectiveness of ads coupled with synergistic multi-platform campaigns or "touch points". The research shows that using synergistic touch points can actually reduce the advertising cost of reaching TV audiences. A TV ad campaign that harnesses digital touch points can increase effective reach by 16% at the same overall budget. This is a compelling number – especially since we are still at the very early stages of this type of multi-front digital advertising activity.

Networks are increasingly looking for ways to increase engagement with their programs. And it is clear that the multi-screen environment accommodates social TV activities, if for no other reason than that it is easier to communicate over a smart phone or tablet than with a TV remote. Additionally, who wants to overlay a bunch of Twitter chatter onto the beautiful HD display? Major networks are already making use of Twitter and Facebook to raise the level of dialog, recommendation and engagement around their programs. In fact, most networks these days publish Twitter hashtags associated with their programming – some shows even display hashtags on-screen during a broadcast. This is especially true for reality entertainment and talent shows like The X Factor, where audience voting for favorite performers is part of the program. All of this social activity works well when TVs are used in conjunction with multi-screen handhelds.

The trend is not lost on the TV OEMs. One practical use of a smart phone or a tablet is that they make for great TV remotes. Companies developing remote control software for tablets and smart phones enable users to customize the user interface and functionality. For example, your child's remote UI can be designed to have selections only for children's programming, with animated buttons. Adults in the family can design more complex remotes that provide access to a wide range of programming and applications.

TV companies in China have already started shipping tablets with their TV sets. Both Haier and Hisense ship Android-based tablets together with their higher-end digital TV sets. In the USA, Samsung and Best Buy ran a promotion for a week back in August, giving customers an Android-based Galaxy Tab 10.1 with every 46-inch 3D Samsung HDTV they bought. Sony also offers a bundling discount on its 16GB Tab with every Smart TV sold. It will be interesting to see how tablet-and-TV bundles will be promoted by the large OEMs at CES 2012.

Multiple technology start-ups are also proposing new ways to make use of a tablet or smart phone in conjunction with the TV. A good example of a multi-screen use case is the application Into_Now, from a company now owned by Yahoo. In the spirit of Shazam, the much-loved audio application that can record any song and identify the title and artist, Into_Now can record a TV program or movie from your tablet or smart phone and, in addition to identifying the show (down to the specific season and episode), display relevant information and metadata associated with the program. Into_Now also includes a social tagging and chatting capability which allows you to discuss the shows that you are "Into…Now" with your friends. In fact, Into_Now's preference engine algorithms use the tag information from you and your friends to develop recommendations for other shows you may like.
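
Into_Now has not published its algorithms, but the tag-driven idea described above can be sketched in a few lines: score candidate shows by how much their tags overlap with the tags collected from shows you and your friends have already tagged. This is a hypothetical illustration only, not Into_Now's actual preference engine.

```python
# A toy tag-overlap recommender illustrating the idea described above.
# This is a hypothetical sketch, not Into_Now's actual preference engine;
# all show names and tags are made up.
from collections import Counter

def recommend(history_tags, catalog, top_n=3):
    """Rank candidate shows by how strongly their tags overlap with the tags
    collected from shows you and your friends have already tagged."""
    profile = Counter(history_tags)              # tag -> frequency in your history
    scores = {
        show: sum(profile[tag] for tag in tags)  # simple overlap score
        for show, tags in catalog.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Tags gathered from shows you (and your friends) tagged while watching...
my_tags = ["talent-show", "live", "music", "reality", "music"]
# ...and a small catalog of candidate shows with their tags.
catalog = {
    "Show A": ["talent-show", "music", "live"],
    "Show B": ["drama", "crime"],
    "Show C": ["reality", "cooking"],
}
print(recommend(my_tags, catalog))  # "Show A" scores highest
```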

Into_Now has also been engaged in some interesting advertising campaigns. Before Yahoo purchased Into_Now (twelve weeks after the company was started), Into_Now partnered with Pepsi Max, rewarding Into_Now users with free Pepsi in exchange for tagging the "Clubhouse in the Corn" Pepsi Max commercial. An interesting combination of TV, hand-held device and social networking that resulted in stronger advertising engagement.

Samsung also recently started promoting its new Galaxy Tab 7.0 Plus, which comes with an application called Peel. Peel turns your smart phone or tablet into a preference-based TV remote. The Peel application recommends programming options that may be interesting to you, and also allows you to share over Facebook and Twitter. Samsung announced that its partnership with Peel is an example of its strategy to create enhanced user experiences.

Peel was started by some folks who worked at Apple and were involved in developing iTunes. They have implemented a compelling system that combines a preference-based recommendation engine application on your smart phone or tablet with a device called the Peel "Fruit," which sits near your TV and set-top boxes and works in conjunction with your smart phone or tablet to control all the input devices in your home. The Peel application together with the Fruit becomes a recommendation engine and a universal remote tied to your social sphere – with a very compelling user interface to boot.

A smart phone or tablet nearby will accelerate the social interactivity associated with TV, and the level of social chatter about shows is being watched much more closely. Taking advantage of this trend, a technology company in the UK called TV Genius set up a website called Social TV Statistics. The site is updated daily and provides a list of the 20 most tweeted TV shows aired in the UK. The statistics include the maximum daily tweets in a week as well as the maximum tweets in an hour. Recently the UK version of The X Factor was the most tweeted show, with about 80,000 max daily tweets and 20,000 max hourly tweets. It was followed by "The Only Way is Essex," which registered about 1,400 max daily and 1,200 max hourly tweets. (The large discrepancy is most likely partly due to the nature of The X Factor, which makes use of multiple engagement strategies such as viewer participatory voting as well as aggressive Twitter engagement through hashtag promotion.)

Social chatter is golden information to networks and advertisers and shows a level of engagement in a show or movie – which is more valuable than the viewership statistics of old. The general chatter in the tweets can also be aggregated to develop meaningful insights about how people feel about the show, as well as potentially how they feel about the advertising associated with it. TV networks themselves can use tweet levels to engender further engagement in their programming by publishing tweet counts or other such information.

The use of a handheld device in conjunction with the TV also introduces interesting new use cases. As we discussed in the first article, TVs are going to be able to recognize users and recommend content tailored to viewer preferences. The handheld communication device is a great way for a TV to recognize a user, and it can also download information to the TV to further aid in preference generation. For example, say you were watching a movie on your tablet on an airplane. Halfway through the movie, you need to shut down as the airplane starts to land. After you get home, the tablet can update the TV to let it know that you did not finish your movie, and the TV can then offer to show you the rest of it on the big screen.
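
As a thought experiment, the handoff could be as simple as the tablet publishing a small “resume point” message to the TV over the home network. The sketch below is purely illustrative; the message format, port and address are assumptions, not any published protocol.

import json, socket

def publish_resume_point(tv_address, title, position_seconds):
    # Send a small JSON "resume point" to a TV listening on the home network
    message = json.dumps({
        "type": "resume_point",
        "title": title,
        "position": position_seconds,   # where the viewer stopped, in seconds
    }).encode("utf-8")
    with socket.create_connection((tv_address, 8080), timeout=2) as conn:
        conn.sendall(message)

# e.g. an hour into the in-flight movie when the plane started to descend
publish_resume_point("192.168.1.20", "In-Flight Movie", 3600)
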
In fact, the portable nature of smart phones and tablets could give rise to improved applications that further bind them to your TV watching. If OEMs could find a simple way to move your pictures and video from your tablets and smart phones to the TV, it would make a big impact. Imagine that you come home from a day at the beach where you took amazing videos of your family on your smart phone. Wouldn’t it be nice if, as you come into the house, your phone asks whether you would like to sync your new content to your home network – and then does it automatically? A few short seconds later you can call up the movies and pictures on your TV.
This knitting together of multiple screens enables media to move around the house, and the base protocols and standards to make it happen are already in place. Many TV companies have been trying to improve this dynamic, and Apple is perhaps the most advanced. Apple’s AirPlay already lets a tablet or smart phone mirror its screen on a TV, and it also makes it easy to take content off the PC and display it on the TV. This is a huge advantage. Most of us are not making use of the HD content we produce with our cameras and camcorders – how many of us resort to looking at our photographs on a 9-inch computer display when we could be using the high-definition screen in the living room?

Tablets and smart phones will also help us navigate the growing set of cloud-based applications for TV. The use of the cloud will have huge benefits, especially for the future of interactive TVs: by moving content and interactivity to the cloud, content can be viewed anywhere and on any device.

The cloud is also important because it considerably extends the capabilities and future functionality of your TV. The TV is not a platform that lends itself well to constantly updated software and increasing CPU requirements. After all, it is easy to replace a PC, a phone, or a tablet – but we hate to throw away the beautiful 60” OLED if it is already hanging on the wall and looks great.
The cloud in effect serves to future-proof your TV. Applications can evolve, user interfaces can improve, and services of all kinds can multiply. The heavy lifting will be taken care of in the cloud while the TV screen does what it does best – provide a great picture.

With the cloud we will see expanded use of the TV for online sales of movies and TV shows – not to mention other retail sales. To date, other than iTunes, there have not been very successful systems that allow digital sales of video to flourish. But that may be changing. Recently a large studio consortium announced a digital rights locker system called UltraViolet. The system establishes a streamlined way to buy digital video programming and store it in the cloud. Searching, purchasing and navigation will be much more efficient with a tablet or smart phone in hand: the hand-held can be used to enter or swipe credit card data while you and your family review movie trailers or whatever it is that you are considering buying.
Again, CES 2012 will be a key show for seeing how TV industry stakeholders expand the use of multi-screen multi-tasking. The multi-screen experience is becoming the new TV experience. Looking into 2012 we are sure to see many new applications from networks, advertisers and technology companies that take advantage of this new dynamic.

Note on Part III: Part III will look more closely at TV image quality and expected improvements we will see in TV display technology in the coming years and why the connected TV is driving new requirements in image processing.

A Digital Insider Scoffs at Townshend

As an industry insider – on way more than one level – it’s hard to take Pete Townshend’s comments as anything more than another great artist railing at the system.  Look, in the end, we all have to admit that the system is broken.  That’s one thing that Townshend got right in that interview.  After that?  Well, it’s all up for debate.  But the fact that the debate was called to the floor again, that’s a good thing.

Let’s look at what he probably got wrong.  Apple is not the villain here.  In fact, probably the opposite.  Apple is responsible for 75% of all LEGAL music downloads.  And there’s no way that this makes them a vampire.  It makes them a hero, of sorts.  By creating a closed system, where one download went to ONE machine, Apple stopped the bleeding of way more than royalties. It addressed a cultural shift that said it was OKAY to steal music.  “Sharing.”  So there’s something else that Townshend got right in that interview.  Stealing and sharing are not the same thing – and the mere idea that music should be free is an utter insult to the millions of people who give their lives to create it.

I should disclose here that I was part of Apple way back when and helped launch digital music before it broke wide open, but my 13+ years in digital consultancy have certainly shown me every side of this equation (and argument).

Whether or not music should be free has gone where it belongs. It’s gone to artist-controlled DIY.  DIY creation and DIY distribution. The indie artists have unlocked the code.  Give away great material to build a tribe, and get that tribe to adore you.  They’ll show up with the money, for sure, but only after the love affair has begun.

Here’s the other problem with Pete’s point of view – it assumes that Apple controls the digital distribution industry, and quite simply, it does not.  In the world of Spotify and MusicShark and locker systems, Apple is only one giant float in the parade.  Let’s clarify, they may even be leading the parade, but after a brief initial claim to the universe, way back when, they’re far from alone.  Having said that, it’s obvious that the consumer, overall, loves Apple.  Quite simply, in the words of futurist Gerd Leonhard, it’s easy.  It’s a plug and go solution.  It meets busy consumers where they want to be met, and serving the consumer IS the end game on the business side of music (and anything digital).

The artistic side?  Producing great content and hiring mentors to aid and abet that?  I wish I could ask Townshend why that is at all iTunes’ responsibility.  That is a model that we see fading at every label, sadly (& that’s me wearing my hat as a former A&R exec at one of the majors).  From this insider’s viewpoint, however, it will fade, but not die.  There is a space for grooming artists, from a label’s point of view – otherwise we end up with the music industry’s version of Yentl for every project.  (The same Editor, Producer, Writer and Actress, if you needed me to spell out that comparison.)  Without label support, bands have limited objectivity about their work, at best.  But we KNOW what percentage of artists get signed.  So this new world of digital DIY is an amazing opportunity for artist AND consumer. Which brings us to Townshend’s issue with gatekeepers – one that social media and DIY will summarily trump, given enough time. Spaces like iLike and Facebook will level the playing field.

Finally, it’s NOT Apple’s job to bridge the gap between labels and DIY. They are, like it or not, a retailer.  Why should they be expected to fix what’s broken in music?  The business model for direct sales/acquisition of recorded music in the traditional sense is collapsing.

But with all of the GREAT minds in the digital and music space, of course we’ll find a new model.  Music does far more than soothe the savage breast; it is the most vital language of unification.  Ask the millions of Chinese listening to Gaga or Bieber – or just look at the worldwide recognition of Mozart.  Or the global domination of Idol.

Yes, there are definitely parts of the foundation with cracks, or worse, but I have full confidence, from my years of consulting with industry leaders and artists, that we’ll find a new and more powerful model to propel us forward. Until then, in the immortal words of Sonny and Cher, the beat goes on.

Kelli Richards
CEO
The All Access Group, LLC

 

Interactive TV Trends – How the TV Experience is Changing

The article below is the first in a three-part series describing key interactive TV trends. This first article looks at new technologies to control the TV – and how the TV’s future ability to recognize users will allow it to tailor content choices and preferences. The second article in the series will examine how the multiple screens of the PC, tablet, smart phone and TV will alter the TV experience. Finally, the third article discusses new trends in image processing and why major improvements in picture quality are still necessary.

Credit: Soft Kinetic
Part 1: Where’s the Remote? Controlling the TV with your Gestures and Voice
The convergence of the internet and broadcast TV is changing the way we interact with the TV set. Convergence is enabling the use of the TV for gaming, social interaction and new ways to watch content. As the internet and broadcast TV continue to intertwine, the way we interact with the TV will continue to evolve. This evolution will focus on finding new ways to make interactive functions work well in the “lean back” TV experience. Nobody wants to “lean in” on the TV in the living room; this is why early attempts to simply graft a PC to the back of a TV were never going to create a usable interactive TV. The industry is finding new ways to bridge the internet more naturally into the TV experience. This is the first in a series of three articles explaining key trends in interactive TV and the technologies that are being developed to support them.

Improvements in the way we interact with the TV start with how we control the TV. Gesture recognition technologies are a very promising development – especially command gestures that do not require a remote. The Xbox Kinect is probably the most compelling example of the importance of this trend. The Kinect works by combining a camera, a light emitter and receiver for depth sensing, and voice control. The combination of these capabilities enables the Kinect to recognize you, watch and understand your physical movements and gestures, and understand voice commands. The result is an interactive experience with remote-free gesture control: you can control the TV and games using hand gestures. For games this is great, as it allows a more immersive experience.

For example, by detecting your body’s movements and articulation, the system translates your movements to your avatar representation on the screen. For action games, you simply mimic the movements as though you were skiing, dancing or playing tennis. This technology can also be used to control the viewing experience on TV. A typical example is viewing menu pages or video thumbnails – you can move options or pages around with a wave of your hand. Future advances could allow for more intuitive controls as well as systems that integrate coordinated gesture controls from your tablet to your PC.
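
As a toy illustration of turning tracked motion into a command, the sketch below maps a short history of hand positions (the kind of data a depth-camera skeleton tracker produces) to a “page left” or “page right” action. The thresholds and normalized coordinates are assumptions for illustration, not Kinect SDK code.

def detect_swipe(hand_x_history, min_travel=0.30):
    # hand_x_history: recent horizontal hand positions, normalized 0.0 (left) to 1.0 (right)
    if len(hand_x_history) < 2:
        return None
    travel = hand_x_history[-1] - hand_x_history[0]
    if travel > min_travel:
        return "page_right"
    if travel < -min_travel:
        return "page_left"
    return None

# Hand sweeps from the left edge toward the right across a few frames
print(detect_swipe([0.15, 0.25, 0.40, 0.60]))  # page_right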

Apple’s new Siri voice control is also a promising technology that could have a place in the interactive TV world. Siri enables people to speak to machines in a more natural way. The Siri technology includes a semantic sensitivity that finds meaning in your statements to help it understand you better. This has huge implications for interactive TV, where we need this type of personalized control, especially when we convey our intentions, preferences and feedback to search and discovery.

There are consistent rumors that Apple is working on an Apple-branded TV, or at least an improved version of their Apple TV media box. They could easily apply the Siri voice control capability to the control of the TV. (As we will discuss in the next article in this series, Apple’s combination of the multiple screens in the home and their elegant interface to the cloud creates a TV ecosystem that could pose a threat to existing TV OEMs.)

TV OEMs like Samsung have been experimenting with remote-free gesture control for a while. Samsung, Toshiba and others have shown these technologies at CES over the years, but there is no large-scale market availability. That said, in China, mega TV maker Hisense announced that it will be shipping a remote-free gesture control TV starting this month. On the voice side, besides Xbox, several electronics companies have been working on standalone voice-activated TV remotes. Voice activation on a remote or a tablet may have a lot of advantages; for one, it is easier for the TV to isolate who it should listen to when there are several people in the room. It will be interesting to see what comes out at CES 2012 on these technologies.

Remote-free gesture and voice control are excellent solutions for overcoming the lean-back environment of the living room TV. And these capabilities will only get better as the underlying software, user interface, electronic program guide and menu systems improve. The methods of controlling the TV will also become more efficient as TVs gain the ability to personalize their menus for an individual or a group in the household. In short, TVs will have the capability to recognize us and present a tailored list of menus and services when we come into their vicinity.

Personalization in general is a key trend on the internet. Many interactive programs attempt to improve their services by personalizing their user experiences; an example is the “thumbs up” or “thumbs down” voting function on Pandora, which allows it to tailor an individual’s song selection. The interactive TV will also take on capabilities to personalize menu options as well as content and service preferences. This represents a new level of convenience in terms of controlling the TV, as the TV will only present options that you really care about.

To enable this, smart phones or tablets interacting with the TV through WiFi, Bluetooth or other connectivity technologies can identify users to the TV. Alternatively, or in addition, TVs could use camera technology just as cameras are used in our hand-held devices and laptop computers. Cameras together with facial recognition algorithms can do a good job of seeing who walks into the room. Imagine entering your living room or den and the TV automatically brings up options dialed into your specific preferences and interests. The TV can also set up all kinds of services and capabilities that are tied to your needs. The system can stand ready to serve up your favorite TV shows, download music for your run or commute, enable or disable your home security system, regulate the sprinkler system if it is raining, and update you on the whereabouts of family and friends.
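
A rough sketch of the face-matching step is below, using the open-source face_recognition Python library as a stand-in for whatever a TV vendor would actually embed. The image file names and profile names are illustrative assumptions.

import face_recognition

# One enrolled reference photo per household profile (hypothetical file names)
profiles = {
    "dad":  face_recognition.face_encodings(
                face_recognition.load_image_file("dad.jpg"))[0],
    "kids": face_recognition.face_encodings(
                face_recognition.load_image_file("kids.jpg"))[0],
}

def who_is_watching(snapshot_path):
    # Match every face in a camera snapshot against the enrolled profiles
    snapshot = face_recognition.load_image_file(snapshot_path)
    present = set()
    for encoding in face_recognition.face_encodings(snapshot):
        for name, known in profiles.items():
            if face_recognition.compare_faces([known], encoding)[0]:
                present.add(name)
    return present

# The TV could then load the menu preset for whoever is in the room
print(who_is_watching("living_room_camera.jpg"))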

Preferences do not have to be limited to individuals. TVs will also be able to recognize groups of people, such as the whole family sitting together, the kids only, or even the family dog, and will serve tailored content and service options appropriate to each group.

The TV can be programmed to personalize its menus when the entire family, or various subsets of the family, are sitting in front of the TV. The TV greets the family and immediately serves up appropriate video, audio or service options. The father can ask the TV through a voice command to display the photographs from the recent family trip to Hawaii and provide some Hawaiian background music. During the slide show, the family can also ask the TV to dial in a distant grandparent to join the review of slides. If a child asks a question such as when Hawaii became a state, the TV can search video, webpage or blog content on Hawaii’s history. Likewise, if the family suddenly has an interest in buying a surfboard, the TV can put together a list of interactive ads from local surfboard shops.

Of course, it can be unnerving for some to contemplate this type of interaction with a TV, or any machine for that matter. Thoughts of 2001: A Space Odyssey may come to mind. There is no doubt that the preferences and choices made through an interactive TV represent valuable information to advertisers and retailers, so the technology should also provide consumers strong privacy controls. But the advantages of personalization will outweigh the concerns of letting “HAL” loose in the home. In terms of control, it is much easier to get what you want if the TV is familiar with your preferences.

The technologies driving gesture control, voice commands and camera-based facial recognition are available today, and we are likely to see these concepts incorporated in TVs next year. As the internet makes further inroads into our living room TV, we can expect these tools to improve our ability to interact while maintaining our feet-up, laid-back position on the couch.

The Pandora Box of Mobile – The Sky’s the Limit

If you were at the Web 2.0 Summit in San Francisco last week, you probably heard Pandora Founder Tim Westergren share that SEVENTY PERCENT of their usage is through mobile venues.  Yes.  70%.  And having created a super-successful digital space for themselves, Pandora doesn’t see Spotify, iTunes, or any other competition eating their lunch any time soon.

Tim Westergren shared the following about the portability of the iPhone and its impact on Pandora, “Overnight it transformed our business. We almost doubled our growth rate. It changed Pandora from being desktop computer radio to being like real radio.”

One can’t completely appreciate the enormous (and growing) impact of the mobile industry without really understanding its past. I recently interviewed my longtime colleague, Anthony Stonefield, a leader in the mobile and digital industries, who literally pioneered downloadable song distribution in the ’90s and popularized ringtones worldwide in 2000 (creating today’s $8 billion ringtone market). Anthony also executive produced the worldwide mobile program for the Live 8 event, and the mobile charity part of Melissa Etheridge’s “I Run for Life” breast cancer campaign, among others. I asked Anthony Stonefield where he thought super distribution will take us in the next few years, and to talk about smartphones and their broad effect on users.

“Smartphones put everything that you had on your PC into your hand… I think what’s happening now is that we’re unlocking the true internet. Until today, we have always thought that we are driving the web, but now, SmartPhones are reaching down into the emerging markets, to the next several billion individuals, and these people are creating revolutions, changing the face of the planet, because they’re getting their first real-time connection to the rest of the world, through SmartPhones.  As these phones infiltrate emerging markets, we have a whole new world to embrace… this is changing the nature of the human being and the way we interact.”

“My experience is that entertainment media is always consumed on impulse.  So the technical solutions are also part of this equation.  4G will eventually enable a distribution model that can scale, but until then, we face serious limitations of scale… 4G has a way to go before it can provide viable, reliable user experiences, but it does enable a way to discover and present media very rapidly.”

You can hear the entire interview here.

Getting back to the future, so to speak, Pandora’s founder explained at the Web 2.0 Summit that Pandora transformed from a simple desktop radio to a “real” radio when users started taking their iPhones and plugging them into their cars and living rooms.  It’s important to realize that, conceptually, Tim Westergren does not consider Pandora competition to Apple, Spotify or other subscription music services.  He considers it a streaming radio service, and does not charge for participation.

With revenue skyrocketing due to ad sales, similar to traditional radio, Pandora has forayed further into radio, actually developing programming and content – and perhaps even newscasts and “sports radio” broadcasts in the future, further solidifying them as the leader in this industry – at least for now.  Like any great industry, competitors WILL show up.  AOL, which could arguably be called the founder of online radio, relaunched its own product within hours of Westergren’s speech, with half the audio commercials.  (And AOL Radio already carries ESPN Radio and ABC News stations.)

It’s hard to know if AOL will be the biggest contender in the mobile war, but with Smartphones becoming the “transistor radios” of the future, Pandora’s box is definitely filled with opportunity.

******** 

Give Your Smartphone Room to Stretch

As convenient and versatile as mobile devices have become, there are still times when that little 4-inch, 7-inch, or even 10-inch screen just isn’t enough. Maybe you want to enjoy your mobile apps and content with friends or colleagues – without knocking your heads together. Maybe you read the latest study that found that reading text, watching movies or playing games on small handheld screens could lead to problems with eyesight down the road. Or how about this one: you’re halfway through showing your awesome vacation video, and your phone’s battery suddenly goes dead. Sure, there are ways to output your handheld’s display to an HDTV, but they all seem to fall short in one way or another. Either the video quality is sketchy, the link is unstable, or both. Unless the connection supports HDCP, forget about watching copy-protected content like first-run movies. Wouldn’t it be great if you could just plug the phone into the TV and avoid all these problems? Display photos, watch movies, play games, and enjoy your mobile apps on the big screen, controlling it all with your TV remote and charging your phone’s battery while you’re at it? It’s not a fantasy, it’s MHL technology, and it’s here now.

MHL stands for Mobile High-Definition Link, a new digital interface that was purpose-built to connect smartphones and tablets to DTVs and other HD displays. It’s a high-bandwidth connection, capable of transmitting Blu-ray™ quality video and audiophile-quality 7.1-channel surround sound as a digital stream, with no signal compression.

MHL technology has a number of unique performance features that make it ideally suited for connecting mobile devices to DTVs. First, MHL technology is connector-agnostic, so manufacturers can link an MHL-enabled mobile device through its existing connector to just about any brand of DTV or monitor, as long as it’s MHL-enabled or there’s an HDMI port. Rather than trying to force mobile manufacturers to add an additional hardware connector, it allows them to repurpose what they already have on the device. Second, its streamlined architecture requires only five wires, allowing for extremely lightweight, flexible cables that are simple to carry around. Third, it does more than just transmit audio and video. It’s a smart connection that allows the user to interact with a smartphone or tablet using the TV’s remote control, and allows the TV to recharge the phone or tablet’s battery while it’s connected. Finally, being able to charge your mobile device while it’s connected to the TV may sound like a small thing, but it’s a huge convenience factor – especially compared to other connectivity options, such as Wi-Fi, that will drain your battery faster than you can say “drain your battery.” Of course Wi-Fi has other performance issues, like the fact that it can be prone to radio interference from other devices on its increasingly congested frequency band.

Signal compression is another shortcoming of many legacy interconnects, both wireless and wired. They just don’t have the bandwidth to handle HD content without running it through compression and decompression algorithms, an inherently “lossy” process. MHL technology, by contrast, provides plenty of uncompressed bandwidth for even the richest content, so what comes out of the TV is exactly what you loaded into the phone, with no loss of clarity or fidelity, even if it’s 1080p video with high-fidelity surround sound. And since MHL technology offers native support for HDCP copy protection, you can watch protected content without running afoul of anti-piracy measures.
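
For a sense of scale, here is the back-of-the-envelope math behind that bandwidth claim, using standard video arithmetic rather than figures from the MHL specification: 1080p at 60 frames per second and 24 bits per pixel works out to roughly 3 Gbit/s before blanking intervals or audio are even counted.

# Raw (uncompressed) video bandwidth for 1080p60 at 24 bits per pixel
width, height = 1920, 1080
frames_per_second = 60
bits_per_pixel = 24

bits_per_second = width * height * frames_per_second * bits_per_pixel
print(f"{bits_per_second / 1e9:.2f} Gbit/s")  # ~2.99 Gbit/s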

MHL technology also offers the unique benefit of being able to control your mobile device with your MHL-enabled TV’s remote. Forget about scrolling through tiny windows and pressing tiny buttons – now you can do it all on the big screen thanks to something called the Remote Control Protocol (RCP), a technology that’s built into MHL-enabled TVs, phones, and other devices. Better still, it’s brand-agnostic, so you can connect any phone with MHL connectivity to any MHL-enabled TV, regardless of manufacturer, and control your mobile apps through the TV.

MHL technology is backed by an industry Consortium co-founded by Nokia, Samsung, Sony, Silicon Image, and Toshiba. As of October 2011, more than sixty additional companies have licensed the technology as Adopters. Adopters agree to submit their products to a compliance testing program, helping to ensure reliable performance and cross-vendor interoperability. Many MHL-enabled products are already on the shelves, including smartphones, tablets, adapter cables, and the first wave of DTVs featuring a new, dual-purpose, MHL/HDMI port. Legacy TVs, projectors, and other display devices can be made MHL-ready with the use of adapters, also available now.

While MHL is hardly a household word at this point, this could change rapidly as the pace of adoption increases, more products hit the stores, and more consumers experience the unique benefits of the technology. What’s not to like about watching a movie, TV show, or YouTube video from your phone on a big-screen DTV with surround audio and walking away with your phone fully charged and ready to go? Or playing your favorite games on a 46-inch screen instead of a 4-inch for that truly immersive experience? Sharing photos and videos likewise gets a lot easier and more convenient with MHL technology. As smartphone cameras offer increasingly higher image quality and video recording capabilities, MHL connectivity provides the reliable, high-bandwidth connection people need to share these images with family and friends, or to pull them down from an online gallery and show them on the living room TV.

Business travelers can also benefit from MHL connectivity. With an MHL-enabled video projector, the savvy road warrior can now carry all her slides and demos on a smartphone, update them on the plane or even in a taxi, and plug in to the projector at her destination for a high-resolution, high-impact presentation. The technology also has potential applications in automotive, aircraft, and hotel environments – wherever people travel with their handheld devices.

As consumers increasingly view a smartphone or tablet as their primary computing device and content repository, the rise of MHL technology marks an important step forward in that trend. By giving users the option of switching over to a big screen at any time, it makes it easier to view the handheld as a legitimate replacement for the PC, or at least to create a more seamless interface between the two devices, by offering greater flexibility in how people interact with their content and apps.

In brief, MHL technology has the potential to revolutionize the way we interact with our smartphones and tablets, delivering a premium big-screen experience when the small screen just isn’t enough. Optimized for mobile platforms, designed for an immersive audiovisual experience, and built with on-the-go consumers in mind, it has established its roots in the industry with products already in retail and a promising future.

Has Nokia Raised the Bar Enough?

Yesterday in London at its big conference, Nokia’s CEO Stephen Elop announced its next-generation phones, Lumia (Windows Phone 7.5 based) and Asha (S40 based), which we were told means “Hope” in Hindi. Although Asha is an interesting device line for many emerging markets, it’s the Lumia that is most important to Nokia’s future and the announcement the market was anxiously awaiting. So while Nokia did introduce two new Windows Phone smartphones of nice design (the Lumia 800 for the premium market and a slightly less costly version, the 710), and three Symbian devices meant for the market between feature phones and smart phones, overall the announcements at Nokia World disappointed on a number of counts.

First, Nokia did not confirm when and what would go to North America – only that there would be a portfolio of devices released early next year (once LTE stabilizes they said). What does that say about the commitment from the carriers to Lumia? If you have a halo device (where Lumia is being positioned) and it’s not being sold in the largest market, what does that say about your market position?

Second, there was no mention of how Nokia would differentiate from other Windows Phone vendors, other than with a better camera, navigation application and music services. Not enough. Samsung makes a nice Windows Phone, as does HTC. Why would a consumer choose a Nokia device?

Third, pricing was set at a premium level (420 euros, or about $599 US before subsidies). Nokia is competing against the market leaders at about the same price point, taking no advantage of the chance to get back into the marketplace with a premium product at a reasonable price. It’s roughly the same price as the iPhone 4S after subsidies, and that could be a tough sell.

Fourth, what about the enterprise? There was no mention of how they would help with management and security for corporate customers, other than pointing to Microsoft tools and capabilities. IT doesn’t need yet another device to work with when there is already so much diversity from BYOD; IT wants help and expects some advantage from key suppliers. Microsoft management tools for mobile are inferior, especially in a diverse environment. Where were the partnership announcements with MDM vendors that would have indicated how seriously Nokia takes business customers?

Fifth, what about Windows 8? That is the future (Windows Phone 7.5 is a placeholder until the next-gen devices come out in 12-18 months and bridge the PC, tablet and phone markets). This would have been a great opportunity to make a strategy statement at a high level, even if not a detailed one. And it would have indicated an acknowledgement by Nokia of the importance it places on the partnership with Microsoft.

Finally, where was Microsoft’s endorsement? No one from Microsoft spoke during the keynote. No doubt Microsoft wants to keep some distance to not offend its other OEMs, but if this is such a close partnership, where is the “love”?

So I’m left with many questions after the announcements. How do the new devices fit into a diverse environment in an enterprise setting? Where are the enterprise tools to deploy, activate, secure and manage them? What is the Nokia value-add on top of plain Windows Phone? What did they do to enhance the Windows Phone platform beyond what Microsoft offers? Nokia seemed to show once again that they understand how to make appealing hardware, but fell short on the service offerings that could differentiate them in the market, especially with the important business user.

Bottom Line: Nokia World was really Nokia’s coming out party. It was meant to show a revitalized company. They did offer a couple of new phone families (one Windows Phone, one Symbian). But they missed the opportunity to show what Nokia represents longer term, how it adds value to the standard Microsoft OS features, and what it will do to differentiate in the market from both other Windows Phone makers and the Android and iPhone camps. For Nokia, a missed opportunity.

An Intimate Chat with Tech Pioneer, Thomas Dolby

 

I was lucky enough, recently, to share a Fireside Chat with the iconic ’80s electronic music and MTV pioneer, Thomas Dolby. I’m fortunate to have known Thomas personally and professionally for almost 20 years now. Most recognized for his ’80s hit, “She Blinded Me with Science,” Thomas Dolby is much more than a recording artist; he’s also a producer and has been the Music Director for the prestigious TED Conference since 2001, for which he has received much visibility and credibility.

As most music lovers will know, Thomas retired from music to hit Silicon Valley as one of the inventors of musical ringtone technology for cell phones with his company Beatnik Inc. A few years ago he returned to his native England to follow his passion for renewable energy and built a solar-powered studio aboard a 1930s lifeboat in the garden of his beach house on England’s North Sea coast.

Thomas Dolby is STILL leading the tech revolution, most recently with his interactive game, Floating City, which reacts to player contributions, eventually granting access to Thomas’ newest music.  Floating City is unlike so many high tech games because it consciously fosters community and involvement, not solitude.

A Map of the Floating City is a travelogue across three imaginary continents: in Amerikana he reflects with affection on the years spent living in the U.S.A. and his fascination with its roots music. (See to what degree at http://www.youtube.com/watch?v=5SuZMwe-XRc&feature=youtu.be.) Urbanoia is darker, with Dolby himself calling it “a little unsettling.” In the third continent, Oceanea, there’s a return to Dolby’s natural home on the windswept coastline.

Floating City is digital tech at its best, using it as a platform and portal through which to offer new music, music that has touchstones to Dolby’s past but definitely brings some exciting new and unexpected musical turns — and not all of them involving his signature electronica.

Tune in for my one-on-one chat with Dolby and hear what inspired him to enter gaming tech and how it creates this connection to music with his fans, and how they have turned into an army of brand advocates, completely leveraging web 3.0 on Dolby’s behalf.

This is digital technology at its best!

iOS Morphing Into a Desktop OS?

During the Apple WWDC, I was really struck by just how many features were added to iOS 5 and just how few new features had been added to Lion. Don’t get me wrong here, I like Lion a lot, but after using many of the 250 new features, few of them altered how or what someone can do with a computer that they can’t already do with a tablet. The one exception was AirDrop, which makes peer-to-peer sharing easier. Also, many of the iOS features seemed like desktop features, and the new Lion features made it look more like iOS. Let’s take a look.

New Desktop-Like Features in iOS 5

  • Tabbed Browsing: I remember some apologists explaining away the lack of tabbed browsing with the iPad 1. Now Safari has tabs…. on its 9.7″ display.
  • Basic Photo Editing: No longer an add-on app like my favorite, Photogene, photo enhancements are available right inside the Photos app. Users can use auto-enhance, remove red eye and even crop photos.
  • Reading List: Previously available on the Mac, the iOS Safari browser has the Reading list, a place to save articles you wish to read later.
  • Mail Features: Now users can edit email text, add or delete email folders, and even search all the email text, not just the subject line for topics. All of this in the new Mail.
  • Calendar Features: Like on Lion, users can drag time bars to set meeting time, can view attachments inside the calendar app and even share calendars.
  • Mirroring: Via a cable or wirelessly through an Apple TV 2, see on a monitor or TV exactly what is on the iPad 2 or iPhone 4s.
  • Improved Task Switching: With new “multitasking” gestures, users no longer need to click the home button to return to the home screen or switch between apps. They use a four-finger left-to-right gesture to switch tasks and what I call the “claw” to go to the home screen.

New iOS-Like Features in Lion

  • New Gestures: Every iOS user is familiar with finger scrolling, tap to zoom, pinch to zoom and swipe to navigate. Now this is available on a Lion Mac.
  • Full Screen Apps: By design, every iOS is full screen. Now Lion has this capability.
  • App Store: Required since the first iPhone, now ships with Lion.
  • Launchpad: This is Lion’s fancy name for iOS’s Home Screen. A bunch of app icons.
  • Mail Improvements: Yes, even desktop Mail is getting more like iOS. In this case, adding full height message panes.


So What? Why Should we Care?

So what does this mean, if anything? It is too early to tell, but it could signal a few alternative scenarios:

  • Unity of UI? By uniting many of the UI elements across phone, tablet and computer, quite possibly it could make switching between iPhone, iPad and Mac easier. Also, as advanced HCI techniques like voice and air gesture emerge, do input techniques get even closer? Can one metaphor work across three different sized devices?
  • Easier Switch to Mac from Windows? The logic here says, even if you were brought up on a Windows PC, if you can use an iPhone or iPad, you can use a Mac.
  • Modularity? I’ve always believed that a modular approach could work well in certain regions and consumer segments, but only if the OS and apps morphed with it. For example, a tablet with a desktop metaphor makes no more sense than a desktop with a tablet metaphor. What if they could morph based on their state but keep some unifying elements? For instance, my tablet is a tablet when it’s not docked. When docked, it acts more like you would expect with a keyboard and mouse. The two experiences would be unified visually and with gestures so that they didn’t look like two different planets, but two different neighborhoods in the same city.
  • Desktop OS Dead or Changing Dramatically? What is a desktop OS now? If a desktop OS is a slow-booting, energy-consuming, keyboard-mouse only, complex system, then Microsoft is killing it with Windows 8 next year anyways, so no impact.
  • Simplicity Dead? If phone and tablet OSs are becoming more like desktop OSs, is that good for simplicity? Or are desktop operating systems getting more like phone and tablet operating systems? How do you mask the complexity and still be able to do a lot?

Where We Go From Here

We will all get a front row seat next year to see how users react to one interface on three platforms. Windows 8 will test this next year, and the Metro UI will be on phones, tablets and PCs. The only caveat here is the Windows 8 desktop app for the traditional desktop, which will serve as a release valve for angst and a bridge to the future. Whatever the future holds, it will be interesting.

“PC Free” in iOS 5 Doesn’t Mean “Free from PCs” (or Macs)

There’s a new feature in iOS 5 that’s called “PC Free”.  While the definition is very specific, it conjures up a lot of images I would guess, specifically getting rid of the PC and Mac. So exactly what parts of the PC and Mac is it removing?

“PC Free” is about removing the PC for a few tasks that are frankly awful parts of the iOS experience and primarily administrative. Here is how it’s described on the iOS 5 landing page:

 

“Independence for all iOS devices. With iOS 5, you no longer need a computer to own an iPad, iPhone, or iPod touch. Activate and set up your device wirelessly, right out of the box. Download free iOS software updates directly on your device. Do more with your apps — like editing your photos or adding new email folders — on your device, without the need for a Mac or PC. And back up and restore your device automatically using iCloud.”

It sounds promising, the promise of getting rid of that nasty horrible PC or Mac. :-).  Can you really dump your Mac or Windows PC?

I asked a few people in my family and at work what they liked doing on their PC and didn’t do on their tablet.  Here’s what they said they couldn’t ditch their PC or Mac to do (UPDATED):

  • Text chat with someone on Google Chat at the same time as you are looking at Facebook.
  • Quickly create a somewhat complex spreadsheet or presentation.  You really need a mouse to do this productively and iOS doesn’t support mice with Keynote or Numbers.
  • Download a file from multiple web sites in the background as you do something in the foreground.  There are a few exceptions with some apps, but certainly cannot be done in the iOS browser.
  • Compress a big file and email it.  Zipping or Rarring a file, attaching it, then emailing it.
  • Watch 1080p video. iPad has “768P” display for lack of a better term.  Yes, a user can watch 1080P on the iPad 2 on an extra display like an HDTV.
  • Importing HD video into the iPad that wasn’t taken on an iPhone or another iPad.  I am not aware of HD source video that’s shot to iOS specs.  I’ve had to reconvert gobs of videos on my PC to play on the iPhone or iPad.
  • Storing all your pictures. I am talking about the multiple gigabytes of years and years of pictures. Alternatively, you can rent iCloud space.
  • Store your entire music collection beyond the iPad’s storage.
  • Store lots of personal videos.
  • “Perfect” personal video you’ve downloaded or shot with a camcorder that’s shaky, dark, etc.  Things that software like VReveal can do.
  • Face tagging. You’ll need iPhoto, Picasa, or Windows Live Photo Gallery for this.
  • Display different content on one display and different content on another.  There are a few exceptions, very few.
  • Any web site that uses Flash for navigation, like my local Mexican restaurant.
  • Print. I know, iOS says it can print. Have you gotten it to reliably print?  I didn’t think so. You think people don’t need printers anymore?  Tell my teenagers’ science and English teachers that.

OK, so you get the point here.  PC Free means you don’t need a PC to do some very basic and fundamental things. If you need to do something beyond the very basics, you will still need a PC or Mac.

iCloud is Awesome Yet Incomplete

After release to developers at Apple’s WWDC, the Apple iCloud is available to all consumers today with access to iOS 5 and updated iTunes.  In many ways, it is incredible that millions will have access to the consumer power of the cloud.  It’s very integrated into the experience, but then again, it’s not as complete or comprehensive when compared to the best-in-breed cloud apps and services available today.  Will that make a difference in consumer acceptance?  Let’s see.


What Makes a Great Cloud Experience?

A few applications define by example what a great cloud app or service can provide.  To a consumer, this will change over time and will also depend on their comfort and knowledge.  Some sites that are ahead of the cloud service game are Evernote, Amazon Kindle, and Netflix.  What makes these great examples of a consumer cloud offering?  While very different in terms of usage, they share similar attributes that in aggregate make them awesome:

  • Cross Platform: Windows, OSX, iOS, Android and the web.  Kindle and Netflix are even available on special-purpose devices like the Kindle and Roku.  Consumers can buy into the service and not worry about the platform going away.
  • Continuous Computing: Continuous computing means a few different things. On content consumption, the next device picks up exactly where the last device left off. On Netflix, if I am halfway through a movie on my iPad I can pick up at the same spot on my Roku. When I pick up another Kindle device, it asks me whether I want to go to the latest bookmark.
  • Sync: While a step back from continuous computing, it does assure that the same files are on every system. On Evernote, every change I make is in sync when I open up the next device.
  • Continuous Improvement: Monthly and even weekly updates to add features and functionality.
  • Compatibility and Data Integrity: Even with all these updates, the data keeps its integrity.  If the service has a question about which version is the master, it asks me.  Evernote will tell me that I have a duplicate entry and lets me pick the version or content I want.

iCloud: Cross Platform

As we all know, Apple by design works in its own “walled garden,” but that doesn’t mean it’s completely closed off.  You cannot get iCloud-enabled apps like Pages, Numbers, Keynote or iBooks for Windows or Android.  Even worse, you cannot get to your photos and Photo Stream on any mobile device other than iOS.  To be fair, users can get access to Photo Stream on a Windows PC, but users should at least be allowed access to their own photos over the web if they want. Users can access iWork-compatible documents on all “modern” browsers by going to iCloud.com and downloading files.  Windows users then need to drag and drop the updated file inside the web-based iCloud.com to update the file. – Grade D

iCloud: Sync

iCloud will automatically  “sync” photos (Photo Stream), purchased music and TV shows (iTunes), apps, letters (Pages), spreadsheets (Numbers), and presentations (Keynote), Reading Lists and Bookmarks (Safari), reminders (Reminders), calendar (Calendar), email (Mail), notes (Notes), and contacts (Contacts).

There are some major exceptions.  iWork documents will not auto-sync with the Windows “Documents” folder, as I think users would expect.  SugarSync and Dropbox will automatically sync documents, and any other file type, with Windows.  Also, personal videos and commercial movies do not sync on any iCloud platform, which I don’t fully understand.  Maybe it’s a concern about storage on iOS devices or storage and throughput in the iCloud.  – Grade B

iCloud: Continuous Computing

Within iOS phones and tablets, users can start right where they left off for TV shows (Videos), games (Game Center) and book bookmarks (iBooks).  These are really awesome capabilities, especially for content where it’s hard to know where you left off.

iCloud will not save the “state” for playing music (Music), playing movies (Videos), or web pages (Safari).  Add the PC and Mac into the continuous computing arena and the iCloud experience starts to degrade for almost all use cases for a variety of reasons.  iOS games don’t run or sync on a Mac or PC, and on Windows platforms iWork isn’t available.  Consumers over time will expect continuous computing for every usage model on every platform, the way Evernote does it today.  – Grade C

iCloud: Continuous Improvement

I cannot definitively answer this question, as it will emerge over time, but I can extrapolate from what I have seen in previous drops of Apple software. Apple’s software drops, with iOS in particular, have been consistent, frequent, and very solid code. – Grade A

iCloud: Compatibility and Data Integrity

So far so good, even on difficult-to-manage applications like word processing, spreadsheets, and presentations.  I made a one-line change to a document and, without going back to “Documents” inside iOS or web Pages, the one line changed on every other system. – Grade A

What, not Straight A’s and Does it Matter?

Apple has never needed to achieve a 4.0 in everything to be successful.  Getting all A’s with the core segment of users and building useful solutions that just work has been the Apple hallmark.  The first iPhone proved this, and the iPhone 4S will prove it again as everyone else offers 4G but Apple doesn’t have to. A good fallback from continuous computing is good sync, and I believe that as long as Apple still allows other services with better cloud capabilities into its walled garden, it won’t be an issue for now. Over time, I believe Apple will fill in the gaps in iCloud; they have fully thought through where they could add the most value, and that’s what they hit first.  Your move, Google, Amazon and Microsoft.

A $299 Amazon Kindle Fire – What It Could Be

Last week the industry was engrossed in the Amazon Kindle Fire launch. There was lots of excitement, speculation and many questions about it. The $199 price point was one of the biggest points of excitement, particularly in that it was less than half the price of the Apple iPad 2. What could a $299 Fire look like? What features and use cases could it support over the $199 version?

 

Design Strategy

Every company needs a focused strategy, particularly in the risky tablet market,  and Amazon surely has one.  Amazon must balance inexpensive tablet “must haves” with ways to monetize their store.  That’s why consumers can buy an inexpensive tablet and Amazon doesn’t need to make 40% gross margins.  Their bet is that Fire consumers will buy their books, movies, TV shows, music, magazines, and maybe even durables.  So everything needs to lead to an Amazon purchase or be a required element.

Operating System

Amazon will stick with Android 2.X as their base, as it’s the only version that Google has opened up.  Google has yet to open up Honeycomb, even as Ice Cream Sandwich (ICS) is around the corner.  If Google opens up ICS, Amazon would want to move there for many reasons.  First, they get access to larger screens, 10″ all the way to the TV.  Secondly, they would need to ask less of the developers to modify their apps to work decently on a 10″ display.

Display

The display would most likely be a 10.1″, 1,280×800 IPS display.  This is where the current cost break-point is right now.  The other possibility is a 1,024×600 display, given these are shipped en masse on netbooks and mini-notebooks.  Amazon could claim “HD” with both, but with x800 it would be “more HD accurate,” given it could support real 1,280×720 (720P) movies.  Also, at x800 they can claim that the resolution is better than the iPad 2 at 1,024×768.  That is, until the rumored iPad 3 comes out with a Retina Display.

Web Sites versus Apps

One challenge Amazon will have with a 10.1” display and Android 2.X is the apps’ appearance. It’s a stretch for Android 2.X apps to even look good on a 7” display. Many of them are blocky, because they were designed for a maximum of 5” displays. At a minimum, Amazon would need to write custom apps for mail, calendar, and address books. I can see Amazon encouraging users to use web sites via Silk instead of apps as well, and they would need to beef up Silk to do this. Today’s tablet browsers have limitations, limitations Amazon’s Silk could remove. One simple issue is tablet browsers’ inability to access the file system. The iPad’s browser, for example, is unable to upload photos to Picasa. This is why you need an app for that. Silk could conceivably remove that barrier.

Processor, Storage and RAM

While it doesn’t necessarily need more of this for a better experience, the competitive optics demand a bump, particularly on storage. There’s no reason to move beyond the OMAP 4, particularly if the $199 Fire has the TI 4430, which can easily do 1080P HD video.  RAM could very well stay at 512MB, but for the optics, would most likely move to 1GB.  Storage would definitely bump beyond 8GB to at least 16GB.  Apple has made storage the break point for iPad, and Amazon knows they cannot be at a disadvantage, even with Amazon Cloud Storage as the backup.

Living Room Entertainment with Remote Control

Here’s where it gets interesting.  The $199 Fire is designed for individual video content.  The step-up $299 could be positioned as the living room alternative to the “over the top” set-top box.  By providing a simple HDMI 1.4 port out and a remote control, consumers could watch all the 1080P TV and movies from Amazon Prime and Amazon VOD.  Consumers are always looking for a way to justify that extra $100, and this alone could be the reason.  To accomplish the same thing on the iPad, the consumer needs to buy the expensive HDMI connector, have an iPhone, load the “Remote” app, and set up AirPlay.  The other Apple alternative is to buy an Apple TV, an extra $99.  Amazon could have a cost and simplicity message over Apple in the living room.

Optional Living Room Dock

Taking the living room video usage to the next level, Amazon could offer an optional $29.99 dock which makes living room video even easier.  Place the $299 Fire into the dock and it gets power, HDMI out to the HDTV, speaker out, and Ethernet.  This would be an easy way to connect the Fire to the TV.  It also provides another justification to buy this over an “expensive” $499 tablet that doesn’t provide this option.

Camera and Mic Enable “Entertainment Assistant” App

If the $299 Fire has a front-facing camera and microphone, Amazon could “listen” or “watch” the content you are consuming in your living room. This would be user-driven so as not to be “creepy.” Think of it as Pandora for all types of content, including TV shows and movies. The user could point the Fire at the TV, press a button, and a few seconds later an in-context search result would appear. In addition to the news and social media results, it would also show relevant results from the Amazon store.

All it would take is for Amazon to index what they already have. They have access to 18M pieces of content; TV shows, movies, songs, books, and magazines. With Silk, they will also know every web site you access, where you shop, what you buy and how long you stay there.
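
As a simplified sketch of that flow, the snippet below fingerprints a short clip of living-room audio and looks it up in a pre-built content index. The fingerprint function here is just a placeholder hash; real services such as Shazam or Into_Now use noise-robust spectrogram fingerprints, and the catalog entries are made up.

import hashlib

content_index = {}   # fingerprint -> catalog entry (show, season, episode, store links)

def fingerprint(audio_samples):
    # Placeholder fingerprint: hash the raw samples (not robust to real-world noise)
    return hashlib.sha1(bytes(audio_samples)).hexdigest()

def register(audio_samples, catalog_entry):
    content_index[fingerprint(audio_samples)] = catalog_entry

def identify(audio_samples):
    return content_index.get(fingerprint(audio_samples), "no match")

register([10, 20, 30, 40], {"show": "Example Series", "season": 2, "episode": 5})
print(identify([10, 20, 30, 40]))   # {'show': 'Example Series', 'season': 2, 'episode': 5}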


Even without any access to the rich Amazon data, simple Evernote was able to extract “Dallas” from a snapshot of a TV broadcast, and Google Goggles was able to extract “Fox Sports” too. Now imagine this capability with Amazon’s access to basically all content and everywhere you have ever browsed.

Camera to Improve Shopping

At $299, consumers will expect a camera, maybe even two.  What’s its primary role?  Shopping, of course.  What?  Yes.  Like I said before, everything needs to lead to the Amazon store.  The camera could serve as an augmented reality try-before-you-buy feature.  Amazon is great at selling physical books, DVDs, electronics, and toys, but what about items that are better sold in a retail store?

  • Clothing: In conjunction with the TV and remote, see what different clothes look like on you and get the perfect fit, too.  The camera takes video of you and overlays the clothes on you.  Want to change the color or size?  Just use the remote.
  • Jewelry: Watches are interesting.  Will the face be too big on the wrist?  Is it too masculine or feminine?  Use the Fire to see what it looks like on you before you buy it.
  • Shoes: Afraid of getting the wrong size, or that it will look ugly on you? Print the Amazon Sizing Grid.  Take a picture with the Fire of your feet on the grid.  See how it looks on you and get the right size shoe, including the correct width.  Now that it has this much info, why not introduce custom shoe sizes?
  • Home: How will those towels look in your bathroom?  That patio furniture on your patio? That lamp on your end table?
  • You get the idea; use the camera with augmented reality to make the shopping experience more fun and with less risk.

Camera for Universal Videoconferencing
What if your parents use Skype and you use Apple FaceTime? One of you needs to change programs or you don’t get to communicate with each other. Amazon, with its data center prowess, could become the “universal adapter” for video services, and make money doing it. Skype, FaceTime, Google Video, Yahoo Messenger, it doesn’t matter. If you use Amazon’s service, you can connect to all of them. A stretch? Maybe, but remember, via Silk they know every site you go to and have a login as well. What’s to stop them from “embracing and extending” if they can further lock in customers?

A Note on Living Room Gaming

Amazon could relatively easily use the dock above and the included remote to enter living room gaming.  But they have a big issue.  Android 2.X looks horrible on the big screen.  Even Angry Birds.  I have tried racing games, too.  So Amazon would need to further break, or fork, from stock Android to make this happen.  Developers would need to do this, too.  When or if Google opens up Ice Cream Sandwich could be when this happens.  I cannot imagine Amazon going after living room gaming without ICS, tempting as it is.

Conclusion

I have no inside information whatsoever on any future Amazon Kindle Fire.  BUT, it only makes sense for Amazon to introduce a higher-priced, higher-feature tablet to intercept the 10″ competitors.  Also, given Amazon’s business model, these features must drive Amazon.com store revenue, too.  The $299 Fire as I have laid it out does all of these things.

Unanswered Questions about the Amazon Kindle Fire

Amazon threw its hat into the tablet ring Wednesday with the launch of the Amazon Kindle Fire. On paper, the Kindle Fire seems like a killer value proposition. For $199, you get continuous access to 18 million books, movies, TV shows, songs, and newspapers, plus unlimited cloud content storage and what Amazon promises is faster web browsing. And all this at less than half the price of the Apple iPad 2. There are a few important, unanswered questions, though, that could determine whether that deal is too good to be true.


Delivered Responsiveness

Amazon had a great showing at their launch event, but attendees weren’t able to freely touch the tablet themselves. The demos were carefully scripted to show how good the responsiveness was. I remember how amazingly responsive the TouchPad tablet demos were, only to be disappointed at launch by the lags. The lags were quickly fixed with a patch a few weeks later, but the damage was done. Basic pinch, zoom, page turn, app load and app close must be responsive or it will just feel cheap. Buying a tablet with bad touch is like buying a car with a loose steering wheel and a missing tire.

Display Quality on Videos and Photos

At 7”, to see video content at effectively the same size as on a 10” tablet, users must hold it closer to their face. Will we be able to see pixels? Hold the original iPhone close to your face, play a video, and you can see the pixels. That for me could be a deal breaker, but hey, that’s me. At $199, the Kindle Fire is a less considered purchase, but still considered. Heck, consumers return $5 food items because they didn’t like them, so don’t think they wouldn’t return a $199 Kindle Fire if it didn’t do what they expected.

Video Content Quality

I am one of the few people who own a Google TV. While I like the Amazon streaming service, it can get quite pixelated at times. It happens a lot more often than it does on Netflix, which leads me to surmise that it’s an Amazon issue. Bandwidth won’t be an issue on downloaded content, but, again, what about the quality? I have downloaded movies from Amazon Unbox on my laptop and sometimes they are pixelated in spots. My laptop is 1366×768 on a large display and the Kindle Fire is 1024×600 at 7”, so hopefully the chance of visible artifacts will be small. The final question is how 16:9 content looks on the Fire’s not-quite-16:9 display. Will there be black bars on the top and bottom of the display, or will the content be zoomed in and possibly blurry?
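The aspect-ratio math is easy to check. Assuming the video is scaled to fill the Fire’s full 1024-pixel width, 16:9 content becomes 1024×576, leaving about 24 rows of black split between the top and bottom of the 600-pixel-tall panel:

```python
# Quick arithmetic: fit 16:9 content onto a 1024x600 panel and measure the bars.
def letterbox(screen_w, screen_h, content_w, content_h):
    scale = min(screen_w / content_w, screen_h / content_h)
    video_w, video_h = int(content_w * scale), int(content_h * scale)
    bar = (screen_h - video_h) // 2  # black bar height, top and bottom
    return video_w, video_h, bar


print(letterbox(1024, 600, 16, 9))  # (1024, 576, 12) -> roughly 12-pixel bars
```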

Software Storage Footprint

With 8GB of storage, users will need to be very choosy about what movies, TV shows, music, games, apps and app content they store on the tablet. So the software storage footprint gets important. For example, if the OS and preloaded software take 2GB, that leaves only 6GB for apps and content. The Amazon Cloud storage is great, but who wants to be deleting and re-downloading songs and apps to make room for a downloaded movie or a game that requires a huge, secondary download after install?

Let’s take a look on iTunes at the popular movie “X-Men: First Class”: it packs a 1.79GB download. While I don’t think the Amazon “portable” version will weigh in at this size, users will still need to think about their storage, and that’s never good.
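A quick back-of-the-envelope calculation shows why this matters. The 2GB system footprint below is my assumption, not a published figure, and the movie size is the iTunes number above:

```python
# Back-of-the-envelope storage math for an 8GB tablet.
total_gb = 8.0
system_gb = 2.0   # assumed OS + preloaded software footprint (illustrative)
movie_gb = 1.79   # "X-Men: First Class" download size on iTunes

free_gb = total_gb - system_gb
print(f"Free space: {free_gb:.2f} GB")                 # 6.00 GB
print(f"Movies that fit: {int(free_gb // movie_gb)}")  # 3
```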

Silk Web Acceleration

Silk promises many things; to the user, it promises faster page loads for a more enjoyable browsing experience. It could, potentially, also eliminate browser compatibility issues between the device and a web page. For example, even if the Silk browser didn’t support the latest (or oldest) web standards, by pre-rendering certain elements of the page in the cloud, the user wouldn’t detect a thing; the page would simply work.
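To illustrate the split-browser concept (this is not Amazon’s actual implementation), here is a rough Python sketch of a server-side prefetcher: it pulls a page, finds its sub-resources, and fetches them in parallel so a thin client could receive everything in fewer round trips.

```python
# Conceptual sketch of server-side prefetching, standing in for the cloud half
# of a split browser. Standard library only.
from concurrent.futures import ThreadPoolExecutor
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class ResourceCollector(HTMLParser):
    """Collects URLs of images, scripts, and stylesheets referenced by a page."""
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and attrs.get("src"):
            self.resources.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
            self.resources.append(attrs["href"])


def prefetch(url):
    """Fetch a page and its sub-resources server-side, returning one bundle."""
    html = urlopen(url).read().decode("utf-8", errors="replace")
    collector = ResourceCollector()
    collector.feed(html)
    absolute = [urljoin(url, r) for r in collector.resources]
    with ThreadPoolExecutor(max_workers=8) as pool:
        bodies = list(pool.map(lambda u: urlopen(u).read(), absolute))
    return {"html": html, "resources": dict(zip(absolute, bodies))}
```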


This raises about 100 questions, but I’ll leave most of them for another analysis. A few I will highlight here:

  • Privacy: Amazon knows everywhere I’ve been. Is there a way to opt out? How will it protect my personal information?
  • Standards: Which web standards will it support, and which won’t it?
  • Security: Will it capture my passwords?
  • Control: Will users have any control over which sites get “silked” and which ones don’t? I can’t expect Amazon to pre-render every site correctly, particularly the smaller ones.

Conclusion

On paper, the Amazon Kindle Fire appears to provide an exceptional value proposition for the consumer who is on a budget and cannot afford the iPad 2. There are, however, many unknowns that could impact the user’s experience. My experience with Amazon is that they under-promise and over-deliver, and it’s been that way since their earliest days. I don’t think they are going to stop now, given the importance of the Kindle Fire to Amazon. I ordered mine within 5 minutes of the “doors” opening, and I hope to soon have answers to the questions above.

10 Days, 10 Questions About Windows 8

Last week, I wrote about the many positive experience aspects of the Windows 8 developer tablet. There are, however, areas of the experience that are difficult to evaluate, either because Windows 8 is only a developer preview and not a final product, or because it would take longer than 10 days to gain that insight.

Two User Interfaces

I found it a challenge to bounce between the Metro and Desktop interfaces. This was true for me whether I was using it as a tablet or docked with a large display, mouse and keyboard. Metro is designed for touch and Desktop is optimized for mouse and keyboard. Even on the 11.6” display, I still managed to botch pull-down menus and fine-grained mouse targets.


Another challenge with the two user interfaces was the duplication of certain tasks. For example, there are two ways to join a network, a Metro way and a Desktop way. Likewise, there are two ways to change volume, switch tasks, change settings, and so on.

This could very well just take some training, after which everything will be fine, as it was for me when Windows first launched and I was bouncing between DOS and Windows.

Metro UI and “Deep” Applications

Metro is about beauty, space, and the content. Desktop is all about 100 functions on one screen and quickly bouncing between multiple apps. But what about deep apps like Photoshop, Microsoft Office, and video editors? I cannot yet imagine how these work Metro-fied on a 22” display, but I also understand that in the grand scheme of the global population, they are the exception, not the norm. But what happens to the exceptions? I am leaving that door open for now.


Web Plug-Ins and Metro

The Metro version of Internet Explorer 10 will not work with plug-ins like Adobe Flash. I understand the experiential, security and performance issues with plug-ins, but I also respect that end users expect their systems to work with every site they deem important. I fully expect major web sites to transition to elements like HTML5 video, but many in the “long tail” will not. For example, my local Mexican restaurant’s site uses Flash for its UI, and I had to use Desktop IE 10 for it to work. I can do this, but then again I have been in high-tech for over 20 years with 1,000s of hours clawing through hardware and software. What about those who don’t have the experience or the desire? I haven’t heard too many people complaining about the iPad browser’s inability to do these things, so I am open on this one.

Touch on Desktop Apps

Applications like Microsoft Office 2010 are optimized to work great with keyboard and mouse, maybe even pen, but not a finger. Fact is, I can’t work without Office, as it’s the AMD corporate standard. On the beautiful 11.6” Samsung display, I could easily navigate the larger ribbon icons (e.g., “Paste”), had a difficult time with the smaller icons (e.g., “Format Painter”), and found it extremely difficult to work with text navigation (e.g., “File” > “Open”).


This seems like it could be changed to make Desktop apps friendlier without having to crack the code; but then again, I’m not a software developer.

Various other Questions

  • Footprint: How much hard drive space will the OS and baseline apps occupy? This will be especially important for tablets, where extra storage space comes at a premium.
  • Metro Apps: Obviously at this stage, only the intern-written Metro apps are available. I’m really interested to get my hands on many more Metro apps, particularly those with depth.
  • CPU/GPU and Experience: The developer tablet included a very expensive x86 processor. Will the experience be the same on a tablet based on the ARM processors that power today’s smartphones and tablets?
  • Windows Store: Microsoft was transparent on their plans but I need to use it before I can intelligently discuss it.
  • OS Updates: With Windows 7, it feels like I am receiving weekly updates that are quite large, take a while to install, and sometimes require a reboot. That won’t fly on a tablet that’s targeted for convenience. I don’t need to update that often on my iPad, Xoom, Transformer, Galaxy Tab or PlayBook, and when I do, it’s usually for some cool new feature, not a “fix”.
  • Smaller than 11.6″: My developer tablet has an 11.6″ display.  Will Windows 8 feel different on a smaller tablet, say 10″?  Desktop was manageable on 11.6″ at 1366×768, but I believe it could be very different on a 10″.

Conclusion

There are many positive aspects of the Windows 8 Developer Preview, but given its early state, many elements of the experience remain unknown, as I lay out above. As we get closer to launch, these important pieces of the experience puzzle will be filled in and we will be able to better evaluate the future experience. I have used almost every beta version of Windows since Windows was born, and this version is the farthest ahead of anything I have seen. The biggest difference now is that there are alternatives already in-market for the very products that Windows 8 hopes to replace, and they will also keep improving up until launch.


10 Days with Windows 8 Developer Tablet: the “Plusses”

It has been ten days since I attended Microsoft’s BUILD developer forum, where I listened to many of the public details on Windows 8. The most valuable time I spent was with customers, developers, press and analysts, sharing thoughts about what we had all just heard about Windows’ future. I also picked up a Samsung tablet with the Microsoft Windows 8 Developer Preview on it. I have found that after actually using a product, I can learn 10x more than from any slide deck. I’d like to share my first impressions after using the Windows 8 Developer Preview for 10 days, starting with the positive aspects. In my next blog, I will discuss the less appealing aspects, or areas where it’s just too early to call.

State of Windows 8

Windows 8 is currently in a stage called “developer preview”. How does this relate to the alpha or beta stage? Consider it pre-beta, in that it is almost feature-complete. So my thoughts are in the context that this is a developer preview, not a beta, and certainly not a shipping product.

Start Time

Starting the Windows 8 tablet was nothing short of amazing. Press the power button, and in 3-5 seconds you are at the Start screen. I hope this stays consistent across platforms and once lots of software is installed. I remember Windows Vista seeming good at the beta stage, but then I started installing programs…

Metro Touch UI for Tablets is “Thumbtastic”

I was stunned at how well Metro works and how good it looks on the developer tablet. It is fast, fluid, minimal, and graphical, and it is optimized for a user holding the tablet with two hands in 16:9 landscape orientation.


In fact, most of the important things I wanted to do I could accomplish with my two thumbs.

  • Multitask by scrolling through open programs
  • Go “home” or to the Start screen
  • Initiate a search
  • Share content to a service or to another device
  • Change key settings: connecting to a network, volume, brightness, notifications, and power


No other tablet I have used comes close to that at 10” and above. Android Honeycomb forces me to reach into the center of the screen to change programs, and its thumb actions sit too far down the tablet, in the lower right and left corners. Thumb actions need to be where the thumbs naturally rest.

Live Tiles to Launch Apps and Provide Info

Instead of icons and widgets, Metro uses live tiles. This combines simple navigation with instant access to relevant information. I have always loved Android’s widgets and screens, but the issue with Android widgets is complexity and a lack of uniformity. Windows 8 goes a step further by providing uniform sizes and a simple update methodology.


Dock as PC

I am an unrepentant fan of “smart” modularity, or making a device serve completely different functions when connected to another device. This must be done intelligently; otherwise users just won’t do it because it’s either not obvious, or too difficult.


I was very impressed with my tablet’s ability to dock with off-the-shelf peripherals. Samsung’s tablet dock had ports for USB, HDMI, Gigabit Ethernet, and audio. When I returned from meetings, I connected the tablet to a 22” display, a full-size keyboard and a mouse. In desktop mode, it was like I was at a desktop PC, where I could do heavy-duty work and content creation. When I was done, or when I headed to meetings or home, I would undock it, and it was just as good on the couch.

“Play To” Amped Up

Anyone with a Windows 7 PC can already play content to another Windows 7 PC via a feature called “Play To”. They can also play content to a digital media adapter (DMA) like the WD TV Live Hub, and even to an Xbox 360.

What’s different in Windows 8? First, it’s not buried five layers deep; it’s one thumb swipe away. Second, it supports content from the Internet Explorer 10 browser. For instance, even though it’s a preview version, I streamed HTML5 YouTube videos from my tablet to my HDTV via my WD TV Live Hub.


Finally, at BUILD, Microsoft outlined a new program to certify that the experience would be really good on “certified” Play To devices. For Windows 7, peripherals weren’t certified for experience, only tested for compatibility. This meant a device would work, but might not work well. With Windows 8, I am hopeful we will see many Play To devices that are certified for both compatibility and experience.

Runs Windows 7 Apps

I ran every app I use on my Windows 7 machine in “desktop mode” without any compatibility issues. I used apps like MS Office 2010, Adobe Reader X, Evernote, SugarSync, XMarks for IE, Google Chrome browser, Amazon Kindle for Windows, Hulu Desktop, and Tweetdeck.


Full Screen Internet Explorer 10 Browser

Admittedly, I have been skeptical of full-screen browsing. I’ve tried to like it ever since full-screen browsing options first appeared, but it always felt out of place and awkward because no other apps were full screen. Also, without “chrome” or borders, it was difficult to change programs. Windows 8 and Metro changed all of this.


Compatibility was good, too, as long as I didn’t go to sites where plug-ins like Flash or Silverlight were required. I didn’t encounter many compatibility issues at all, which is surprising given how early this version is. Heck, even LogMeIn worked.

Conclusion

While it’s only been 10 days, it’s easy to get a feel for Microsoft’s Windows 8 Developer Preview operating system, particularly after using so many different tablets over the last few years. There’s a lot to like about Windows 8 so far, particularly the Metro UI on a tablet and its chameleon-like ability to transform into a PC. As in life, there are always downsides to decisions, or it’s just too early to tell how something will end. That’s the case for Windows 8, and I’ll be exploring this in my next analysis.


Metro Could Drive Voice and Air Gesture UI

Last week, I attended Microsoft’s BUILD conference in Anaheim, where, among other things, Windows 8 details were rolled out to the Microsoft ecosystem. One of the most talked-about items was the Metro user interface (UI), the end-user face of the future of Windows. Over the last few days, I have been thinking about the implications of Metro for user interfaces beyond the obvious physical touch and gestures. I believe Metro UI has as much to do with voice control and air gestures as it does with physical touch.


Voice Control

Voice command and control has been a part of Windows for many generations. So why do I think Metro has anything to do with enabling widespread voice use in the future, and why do I think people would actually use this version? It’s actually quite simple. Only a few voice command and control implementations and usage scenarios have been successful, they all adopt a similar methodology, and they all come from the same company. Microsoft Auto voice solutions have found their way into Ford and Lincoln automobiles, branded SYNC, and drivers are actually using them. Fiat uses MS Auto technology as well. Microsoft Kinect delivers a very accurate implementation for the living room using some amazing audio beamforming algorithms and a four-microphone hardware array.


None of these implementations would be successful without establishing an in-context, limited dictionary. Let’s use Kinect as an example. Kinect allows you to “say what you see” on the TV screen, limiting the dictionary of words that need to be recognized. That is key: pattern matching is a lot easier when you are matching 100s of objects versus 100K. The Windows 8 Metro UI limits what users see on the screen, compared with previous versions of Windows, making that voice pattern matching all the easier. One final, interesting clue comes with the developer tablets distributed at BUILD: the tablets had dual microphones, which greatly assist with audio beamforming.
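Here is a minimal sketch of why a small, in-context dictionary helps: match whatever the recognizer heard against only the commands currently visible on screen. The command list is hypothetical, and difflib’s fuzzy matching stands in for a real speech-scoring engine.

```python
# Minimal sketch: constrain recognition to the handful of on-screen commands.
import difflib


def match_command(heard_text, visible_commands):
    """Return the best on-screen command for a recognized utterance, if any."""
    matches = difflib.get_close_matches(heard_text.lower(),
                                        [c.lower() for c in visible_commands],
                                        n=1, cutoff=0.6)
    return matches[0] if matches else None


visible_commands = ["Mail", "Calendar", "Internet Explorer", "Music", "Settings"]
print(match_command("internet explorer", visible_commands))  # internet explorer
print(match_command("open spreadsheet", visible_commands))   # None (not on screen)
```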

Air Gestures

Air gestures are essentially what Kinect users do with their hands and arms instead of using the Xbox controller. When players want to click on a “tile” in the Xbox environment, they place their hand in the air, hover over the tile for a few seconds, and it is selected. Kinect uses a camera array and an IR sensor to detect what your “limbs” are doing and associates them with a tile location on the screen. Note that no more than 8 tiles are shown on the screen at one time, which increases user accuracy.
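As a rough illustration of that dwell-to-select behavior (the tile layout and the 2-second hold below are illustrative values, not Kinect’s actual parameters):

```python
# Toy sketch: if the tracked hand stays over one tile long enough, "click" it.
TILES = {"Video": (0, 0, 200, 150), "Music": (200, 0, 200, 150)}  # x, y, w, h
DWELL_SECONDS = 2.0


def tile_under(x, y):
    for name, (tx, ty, tw, th) in TILES.items():
        if tx <= x < tx + tw and ty <= y < ty + th:
            return name
    return None


def track(hand_positions):
    """hand_positions: iterable of (timestamp, x, y) samples from the sensor."""
    current, since = None, None
    for t, x, y in hand_positions:
        tile = tile_under(x, y)
        if tile != current:
            current, since = tile, t
        elif tile is not None and t - since >= DWELL_SECONDS:
            return tile  # dwelled long enough -> select
    return None


# Example: the hand hovers over the "Music" tile for about 2.5 seconds.
samples = [(i * 0.5, 250, 75) for i in range(6)]
print(track(samples))  # Music
```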


Hypothetically, air gestures on Metro could take a few forms, guided by form factor. In “stand-up” environments with large displays, they would take a similar approach to Kinect’s. In the future, displays will be everywhere in the house, and air gestures would be used when touching the display just isn’t convenient or desired. I would like this functionality today in my kitchen as I am cooking: I have food all over my hands and I want to turn the cookbook page or even start up Pandora. I can’t touch the display, so I’d much rather make a very accurate air gesture.

In desk environments, I’d like to ditch the trackpad and mouse and just use my hand as the gesture device, a virtual trackpad or gesture mouse. I would make all the standard Metro gestures on a flat surface, and a camera would track exactly what my hand is doing and translate it into a physical touch gesture.

Conclusion

Microsoft introduced Metro as the next-generation user interface, primarily for physical touch gestures and secondarily for keyboard and mouse. Metro changes the interface from a navigation-centric environment with hundreds of elements on the screen to a content-first environment with a very clean interface. Large tiles replace multitudes of icons and applets, and the number of words, or the dictionary, is drastically reduced. Sure, this is great for physical touch, but it also significantly improves the potential for voice control and even air gestures. Microsoft is a leader in voice and air gestures with MS Auto and Kinect, and it certainly could enable this in Windows 8 for the right user environments.


Intel and Google – Who Needs Who?

Android is very popular and has made great inroads in smartphones (with more than 50% share), and it is beginning to pick up traction in tablets as well, with a plethora of new devices due out shortly. But Android itself has not always been that good a performer, and some of the SW choices Google has made while developing the various versions have been troublesome.

It is clear Android could use some assistance in optimizing its code and user experience (one of the primary reasons Google is buying Motorola is for engineering talent that has had a major positive impact on the design and tuning of Android). Google needs help improving future versions of Android, and it has a broader vision for Android than today’s phones and tablets.

Although not well understood, Intel is one of the largest SW companies in the world (it has many thousands of SW engineers). It has a unique ability to make SW, and particularly OSes, run extremely well, and it has been doing so for many years, and not just with Windows. It is also a leading provider of development and compiler technology. While Intel won’t necessarily help Android run better on ARM, it can certainly make Android run great on the Intel architecture. It is already well down this path with the Android code porting and optimization work it has been engaged in for some time.

But Google has greater ambitions for Android than powering current mobile devices. Google ultimately wants to be a leading OS provider across the board and on many form factors, including on the x86 platform powering PC and PC-like devices, and competing with Microsoft and Apple. This is an extension of Google’s “service in the cloud” strategy with clients powered by Android and Chrome and productivity apps being “optimized” for its own environment.

So the relationship between Google and Intel is key to both of their long-term strategies. It’s a win-win relationship if done right. It’s quite conceivable that by the time Intel is through optimizing Android code, it will run substantially better on Intel chips than on ARM. That said, any help Intel provides Google on Android reliability and performance optimization for x86 will most likely help Android on ARM as well, since much of the work can be repurposed.

The bottom line is that both companies have a great deal to benefit from a close relationship. Intel gets to show off its upcoming mobile form-factor devices running a version of Android highly optimized for its chips. Google gets a path to higher-end systems and optimized code to access its services. And users get choice and a more compelling experience. So there really are no “junior partners” in this relationship; both have much to gain.