Will Amazon Announce the un-iPad?


Amazon.com has called a press conference for next Wednesday and all expectations are for an announcement of the long-awaited Amazon tablet. You’re going to read a lot of stupid stuff in the next few days speculating about whether Amazon can come up with an iPad killer. Pay no attention to any of it.

Amazon is a very smart company, and one thing they are smart enough not to do is to challenge Apple on its home turf. Nor do they have to. It is important to remember that Amazon and Apple, while perhaps being the two sharpest consumer companies in the world, are in very different businesses. Apple sees content and software as a way to sell hardware, because that is where it makes its money. Amazon needs hardware as a platform for its content, because that is where it makes all its money.

I suspect Amazon is mostly backing into a tablet business (think of it as an un-iPad) that it would rather have left to hardware specialists. It brought out the original Kindle because it felt the time for an e-reader had come and no one else was doing the job, although its Kindle platform quickly became hardware-agnostic as alternatives emerged.

In the field of more capable tablets, Amazon is looking at another vacuum. Android is a mess, with no product gaining significant market share and most of the tablets not very good. The iPad, while an important outlet, became a hostile environment when Apple effectively imposed a 30% tax on all in-app sales, an arrangement that would leave Amazon with little or no gross margin on most purchases. (Amazon has countered with an HTML 5 browser-based version of the Kindle app, but the technology isn’t quite there yet.) Amazon was an early partner for the Hewlett-Packard TouchPad, but we know how that turned out. Windows 8 is a promising tablet operating system, but we are probably a year away from seeing products.

So once again, Amazon seems ready to come out with its own hardware to give its customers a more effective way to consume its content. So rather than a product challenging the iPad in the general tablet market (such as it is), look for a specialized device, running a customized version of Android, that is optimized for Amazon services. That means Kindle, of course, but also Amazon Instant Video, Amazon Music, the Amazon App Store, and a great Amazon shopping app. I also would not be surprised if the tablet has a subscription component, perhaps a discounted price with an Amazon Prime subscription, or a discounted Prime subscription with a tablet purchase.

The key thing to remember is that this is not Amazon vs. iPad. There is a vast market in which both can thrive.

Lack of iPad Competition: A Tale of Missed Opportunity

In a new report on the tablet market, Gartner predicts that the iPad will account for two-thirds of the 103.5 million units it expects to be sold next year and nearly half of the 326 million units in 2015. While it’s easy to quarrel with some of the details in the forecast (not to mention the ridiculous habit of forecasting sales to the nearest thousand), the general drift of the prediction seems dead-on.

It wasn’t supposed to be this way. This year, the iPad was supposed to get three serious competitors in Android, Research In Motion’s PlayBook, and Hewlett-Packard’s TouchPad. Instead, the TouchPad was killed before it had a chance, the PlayBook’s heart is barely beating, and Android, while still promising, is beset by mediocre products, fragmentation of the operating system, and a severe lack of applications. The only really good news is that Microsoft is determined to make Windows 8 tablets succeed when they launch next year, though it is way too early to assess its chances.

For competitors, 2011 was a year of badly missed opportunities, and at least in the case of RIM and HP, these flubs have serious implications for the future of the companies. For RIM, the PlayBook, based on the QNX operating system, was to breathe new life into the slumping BlackBerry line. It showed great promise at the Consumer Electronics Show last January, but quickly flopped when launched in April.

The reasons were pretty obvious. Not only was it buggy, but the PlayBook shipped without native email, calendar, or contact apps. It was usable only if paired with a BlackBerry, which it also relied on for a 3G connection. In practice, its market was limited to existing BlackBerry owners on carriers other than AT&T, because AT&T blocked installation of the software required for PlayBook pairing. To make matters worse, the selection of apps was dismal, even by BlackBerry standards. Summer came and went without promised software improvements appearing. Little wonder that PlayBooks mostly sat on dealers’ shelves.

In fact, the QNX sales forecasts are one of the odder things in the Gartner report (see table below). The analyst firm projects sales of 3 million for all of this year, odd because RIM shipped (to dealers, not sold through to customers) only 900,000 units in the six months ended in August. It would take one spectacular autumn to hit 3 million, and some sort of miracle–or at least a whole new product line–to hit the forecasts of 6.3 million next year and 26 million in 2015.

Worldwide Sales of Media Tablets to End Users by OS (Thousands of Units)

OS                          2010      2011      2012      2015
Android                    2,512    11,020    22,875   116,444
iOS                       14,685    46,697    69,025   148,674
MeeGo                        179       476       490       197
Microsoft                      0         0     4,348    34,435
QNX                            0     3,016     6,274    26,123
WebOS                          0     2,053         0         0
Other Operating Systems      235       375       467       431
Total Market              17,610    63,637   103,479   326,304

Source: Gartner (September 2011)

The failure of the TouchPad was even more tragic. When HP bought Palm and its webOS last year, company executives saw it as a way out of a path in which HP’s software choices were controlled by Microsoft and its hardware was increasingly commoditized. But all the steam, heart, and funding went out of the effort when CEO Mark Hurd was fired and replaced (temporarily, it seems) by Léo Apotheker. What could have been a serious iPad challenger launched this summer as an intriguing but half-finished product. A battle that HP officials once said would take years, not months, ended in abject surrender after six weeks, when HP killed the TouchPad and the rest of the webOS Global Business Unit. The main impact of the whole HP-webOS affair was to set off an existential internal struggle over the future of HP. Gartner wisely projects next year’s sales at 0.

Android’s future as a tablet OS is hard to assess because the present is so muddled. This year saw dozens of products, of widely varying quality, hit the market, but none of them really took off, and none could answer the essential question of why they should be purchased rather than an iPad. Google will try again this fall with a new version of the software, called Ice Cream Sandwich, that is supposed to unify the fragmented Android landscape. But, in fact, further fragmentation may be in store if Amazon.com goes ahead with rumored plans for a custom tablet based on its own modified version of Android. If the rumors are correct, Amazon doesn’t want so much to challenge Apple as to create a new market for a low-cost media consumption tablet.

One place I think Gartner may be seriously off the mark is in its forecast for Windows tablets. An estimate of 4.3 million units might be on target for next year because we don’t yet know when Windows 8 will ship, but 34 million, barely 10% of the total, seems unduly pessimistic for 2015. We’ve just gotten our first real glimpse of Windows 8, but it is clear that this is a very serious effort by Microsoft–the first, really–to design an operating system optimized for PC-like devices that lack mice and keyboards. Many questions, including how well Microsoft will do in attracting developer support, remain, but Windows 8 has the potential to become the iPad’s most serious challenger.


Anti-Google Coalition’s Strange View of Business

FairSearch.org, a coalition of Google competitors that thinks the search giant is being very evil, seems to have a strange idea of how business works.

In a blog post that is part of a running argument with Wall Street Journal columnist Gordon Crovitz, FairSearch attempts to rebut the argument that Google’s practices, while perhaps hurting competitors, have actually helped consumers. Its counterargument is that Google is killing innovation:

Google doesn’t offer free products – advertisers pay for them, to the tune of Google’s $29.3 billion in 2010 revenue. Those billions get passed on to consumers and business customers in the form of less innovation, higher prices, and less free services and information provided by others online.

FairSearch describes this as a “Google tax,” but this is a very odd notion indeed. Advertisers choose to place ads through Google and the prices they pay are established by auction. Companies may argue that Google’s sheer size leaves them with little choice but to advertise through Google, but in the days of dominant newspapers and TV networks, I never heard anyone talk about a Chicago Tribune tax or a CBS tax, and those prices were set a lot more arbitrarily than Google AdWords or AdSense rates. Companies paying other companies for products or services is just the normal course of business, and to call this a “tax” mostly just deprives the word of any meaning. And the assertion that the money flowing to Google reduces innovation is just that: an assertion advanced without evidence. Who among FairSearch’s members has come close to matching Google in innovation?

FairSearch has been a driving force behind a Federal Trade Commission antitrust probe of Google. And there are certainly issues that should be explored, particularly Google’s treatment of Yelp as described in yesterday’s Senate Judiciary Committee hearing. But the government has the burden of showing that Google’s behavior hurt consumers, and not even outraged competitors have made much of a case for that yet.

It’s not particularly cynical to suggest that FairSearch members are more interested in hurting Google than they are in helping consumers. And Microsoft, by far the biggest company in the coalition, knows all too well the crippling effect an antitrust case can have on a business, even when it results in no meaningful penalties.

HP’s Never Ending Drama: Blame the Board

I don’t pretend to know what is going to happen next at Hewlett-Packard, but both Bloomberg and All Things Digital report that the board is meeting to consider ousting CEO Léo Apotheker and perhaps replacing him with former eBay CEO (and current HP director) Meg Whitman.

The HP board has long had a well-earned reputation as one of the worst around, going back at least to the clumsy dumping of CEO Carly Fiorina and the ensuing scandal over spying on reporters to determine the source of boardroom leaks. But its performance in the last year puts it in a class by itself. A board shouldn’t have much involvement in the day-to-day running of a company, but it is responsible for oversight and strategic direction. The HP board has provided neither.

HP common stock price (from MarketWatch)

The big problems started a little over a year ago when then-CEO Mark Hurd got caught up in accusations of sexual harassment and an improper relationship with a contractor. The board decided Hurd hadn’t violated any policies on harassment or relationships, but fired him anyway for falsified expense reports. Hurd was widely disliked within HP for his slash-and-burn approach to improving earnings through stringent cost reduction, but he was a first-rate operations executive who did make the trains run on time.

The choice of Apotheker seemed to signal a major strategic move by the board. For some time, HP had been a three-headed beast, comprising PCs and associated consumer electronics; enterprise servers, services, and software; and printing and imaging, from desktop inkjets to commercial Indigo digital presses. Apotheker, who had spent most of his career at German software giant SAP, seemed to be chosen to focus on the enterprise business, especially since the board picked him over Personal Systems Group chief Todd Bradley and Imaging & Printing Group chief Vyomesh Joshi.

Apotheker made his big move this August when he announced that HP was killing the phone and tablet business acquired (under Hurd) from Palm, was considering selling or spinning out the rest of Personal Systems, and was acquiring Autonomy, a British business analytics software maker, for $10 billion. The Autonomy purchase was unanimously approved by the HP board, according to the announcement; the other moves certainly must have had board approval as well.

But HP has been in a tailspin since the announcements. The stock price, which had been sinking since spring, cratered (though it rose 10% in intraday trading today on rumors of Apotheker’s departure). Competitors such as Dell moved to poach corporate customers unnerved by the uncertain future of the PC division. A fire sale of the inventory of TouchPad tablets added to the ridicule. And the board, it would appear, panicked again.

In the circumstances, the choice of Whitman would be an odd one. She has a solid record of accomplishment and has been cooling her heels at the venture capital firm of Kleiner Perkins Caufield & Byers since losing a bid to become governor of California in 2010. But her business background is largely in consumer services, and she would be taking over an HP that Apotheker has remade into an enterprise company.

Might such a move signal the board’s intention to reverse the enterprise direction? Possibly, though that would only lead to even more turmoil. The Autonomy purchase has been approved by both boards, though not yet by shareholders, and would probably be both very messy and very expensive to undo; such agreements typically carry heavy termination fees if the deal fails to close. Nothing irrevocable has been done yet to sell the Personal Systems Group, but the former Palm operation is well and truly dead–the Apotheker rumors broke the day after HP began wholesale firing of webOS Global Business Unit employees.

However this turns out, we can expect more drama, and probably more missteps by the HP board. Somewhere in tech heaven, Bill Hewlett and Dave Packard are crying.

Windows 8 on a Laptop: A First Look

For the past few days, I have been playing with the developer preview of Windows 8 on a conventional–no touch screen–laptop. My initial reaction is that no matter how good the new Metro user interface may be on tablets, it is nothing but a pain on a keyboard-and-mouse PC.

Some caveats are in order. These are very early days for Windows 8 and we are dealing with pre-beta software. The Metro applications included in the release are not very polished or very useful. And by the time Win 8 ships, probably about a year from now, PCs may have changed to take better advantage of the new interface (a MacBook-style touchpad, which effectively simulates a touch screen with gestures, would be a huge improvement over a typical Windows touchpad). But still, the experience has been sufficiently disconcerting to make me question Microsoft’s strategy of trying to combine tablet and conventional PC operating systems into one package.

Where are my programs? It’s easy enough to move from the tiled, Windows Phone-like Metro UI to a standard Windows desktop. Just move the cursor all the way to the right side of the screen and click the desktop image that appears. The desktop looks sort of normal–until you click the Start button. Instead of seeing a program menu, you are dumped back into the Metro home screen. There is no obvious way to launch a program that doesn’t have either a Metro tile or an icon on the standard Windows desktop.

It would appear that the favored way to launch an app is through the universal search feature. Start typing on the Metro home screen and a search box appears. If the search is set to Apps, a list of matching programs appears in the left panel as you type. Click on the one you want and the program will launch. It works, but it is awkward and weird. It feels almost like a throwback to the command line. (Yes, you can also launch programs this way in Windows 7 by typing a name in the Start menu’s search box, but I’ve only used it for obscure system utilities that don’t appear in the program list.)

There are downloadable utilities that let you toggle the Windows 7-style Start menu on and off and a Registry fix that will turn Metro off altogether, but neither is a very satisfactory solution. Microsoft, of course, has plenty of time to fix things, but for now, the conventional desktop version of Windows 8 feels distinctly like a second-class citizen.

Meanwhile, the Metro interface and its associated apps are awkward and uncomfortable on a PC. When I work on a desktop or laptop, as opposed to a tablet, I typically have a lot of windows open and move between them frequently. In fact, the ability to run multiple apps and to move data among them is a primary reason why I work mostly on a conventional computer. (The illustration above began with Windows 8 running in a virtual machine on an iMac. I used the Grab program to create the screen shot, which I then processed in Photoshop and inserted into this WordPress post in Chrome.)

Metro improves on the standard tablet approach by letting you have two apps open at once, but that’s not nearly enough for desktop work. And bad as it is on the 13.3″ laptop screen, it would be much worse on a 30″ desktop monitor. And while I haven’t had an opportunity to try it, I suspect the Windows 8 conventional desktop is every bit as awkward on a tablet as Windows 7 was, because you are dealing with windows, icons, and menus designed for use with a keyboard and mouse.

The bottom line is that Microsoft has not convinced me this two-headed approach to Windows is going to serve either tablets or mouse-and-keyboard PCs very well. As I said, there’s still lots of time to get it right, but there is an awful lot to be done.

Bitcasa’s Clever Encryption Trick

A startup named Bitcasa made a splash at the TechCrunch Disrupt conference last week by promising unlimited backups to the cloud for just $10 a month. Bitcasa said it provided privacy and security by encrypting all files, but was able to offer a very inexpensive service because it avoided the storage of duplicate files, especially music and movies. The savings from de-duplication could be considerable because with entertainment content, large numbers of people tend to have copies of a small number of movies or songs.

But at first glance, these claims seemed to be contradictory. If files are encrypted, no one but the owner, or someone else given the key, has any idea of what the contents are. And if you don’t know what is in files, the sort of de-duping Bitcasa promises seems impossible.

But, as Bitcasa CEO Tony Gauda explains in this TechCrunch interview, the company is taking advantage of a new and very clever trick called convergent encryption. If you want to get deep into the weeds of the technology, it is explained in detail in this paper, but here is how it works.

The trick is to use the file to create its own encryption key. A mathematical function called a one-way hash reduces the file to a relatively short string of digits, typically 256 or 512 bits. It’s called one-way because while each file generates a unique hash (there is a vanishingly small possibility that two different files could generate the same hash), there is no way to reconstruct the original from the hash. The hash is then used to encrypt the file using the Advanced Encryption Standard.

Say Alice and Bob each have the same song on their hard drives and both use Bitcasa. Alice backs up first, so the song is reduced to a hash and then encrypted on her computer using that hash. When it’s Bob’s turn, his computer goes through the same process and creates an encrypted file identical to Alice’s. Bitcasa checks its server records, finds a match, and realizes it doesn’t need another copy of the file. In fact, it could save the cost and trouble of transmitting the file to its data center, but it’s not clear whether it actually does that.
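
For the technically curious, here is a minimal sketch of the scheme in Python. It assumes the third-party pycryptodome package, and the SHA-256 key derivation and deterministic CTR nonce are my illustrative choices, not Bitcasa’s published design:

```python
import hashlib

from Crypto.Cipher import AES  # pycryptodome


def convergent_encrypt(data: bytes):
    """Encrypt a file with a key derived from its own contents."""
    key = hashlib.sha256(data).digest()       # the file is its own key material
    nonce = hashlib.sha256(key).digest()[:8]  # deterministic nonce: same file in, same bytes out
    ciphertext = AES.new(key, AES.MODE_CTR, nonce=nonce).encrypt(data)
    return key, ciphertext


song = b"the same MP3 bytes, sitting on two different computers"
alice_key, alice_ct = convergent_encrypt(song)
bob_key, bob_ct = convergent_encrypt(song)

# Identical plaintexts yield identical ciphertexts, so the server can
# recognize the duplicate and store (or even skip transmitting) one copy.
assert alice_ct == bob_ct
```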

Of course, it’s not quite as simple as that. With normal encryption, all files are scrambled using the same key, so the user only has to hang on to one vital chunk of information. In the convergent technique, each file uses its own key so the system has to include a separate map file that links each user’s keys to the files they will decrypt. Then all the user needs is the key to the map file.
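
As a rough illustration (my own structure, not Bitcasa’s actual format), the map file can be as simple as a table of per-file keys, with the master key protecting only that one table:

```python
import json

# Each user's map links a server-side locator (say, the hash of the
# ciphertext) to the per-file convergent key that decrypts it.
key_map = {
    "locator-of-song-1": "hex-encoded-convergent-key-1",
    "locator-of-song-2": "hex-encoded-convergent-key-2",
}

# Serialize the map, then encrypt this one blob with the user's master
# key (with AES, as in the sketch above). The user now has to remember
# exactly one secret, no matter how many files are backed up.
blob = json.dumps(key_map).encode()
```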

Convergent encryption has some weaknesses compared with conventional encryption. One is that it makes possible a sort of traffic analysis. An adversary who has access only to the encrypted files could still learn that Alice, Bob, Carol, and Dave all had copies of the same group of files, because they would be storing identical ciphertexts (or, more accurately, identical keys to those files). They might just have the same taste in music, or they might be collaborating on a secret project. More sophisticated attacks may also be possible, but assuming that it is properly implemented–always a huge assumption when dealing with encryption–the approach does seem good enough for most encryption needs.

RIP, Flash (and Silverlight too)

When Apple introduced the iPad last year without support for Adobe Flash, Steve Jobs was accused of everything from crippling his own product to pursuing a personal vendetta against Adobe. Events have proven the Flash ban, like so many of Jobs’s decisions, to be prescient. But if Jobs needed any vindication, he has now gotten it from, of all places, Microsoft, which has struck a probably fatal blow to both Flash and its own competing technology, Silverlight.

In a post to the Building Windows 8 blog, Internet Explorer development chief Dean Hachamovitch made clear that the IE 10 browser in Windows 8 will not support plug-ins. That means that neither Flash nor Silverlight will run in IE (though other apps, including other browsers, may support the Flash and Silverlight players). Instead, Microsoft will follow Apple’s lead and rely on native HTML 5 for rich web applications and media playback.

There are two big problems with Flash. One is that the plug-in has a nasty destabilizing effect on browsers; a large percentage of the browser crashes I have experienced have been attributable to Flash misbehaving. The other is that it is an awful resource hog. This is a minor issue on a modern PC with processing power to burn but a huge problem on more constrained tablets. The ability to run Flash was supposed to be a big selling point for Android tablets, except that it turned out they didn’t actually run Flash very well. The fact that Windows 8 is supposed to work on both PCs and ARM-powered tablets was clearly a major consideration in Microsoft’s decision.

The absence of Flash on the iPad has been a minor nuisance–and the popularity of the Apple tablet has greatly accelerated the development of HTML 5 alternatives. Microsoft is betting that by the time Windows 8 ships, probably about a year from now, HTML 5 will have matured to the point where Flash and Silverlight will not be missed.

Flash was an immensely useful technology in its day. It both enhanced media playback–it’s not clear how YouTube could have happened without it–and enabled the development of richer web pages than were possible with the HTML techniques of the time. So let us mourn its impending passing and celebrate the folks at FutureWave, Macromedia, and Adobe who developed it. And let us move on to a better HTML 5 future.

Why Microsoft’s Development Must Be More Open Than Apple’s

Matt Rosoff at Business Insider writes that a principal reason why Microsoft reveals a lot more about its development process than Apple does is that Apple is a consumer products company while Microsoft is a technology company. That’s somewhat oversimplified, but mostly true as far as it goes. However, it misses some deeper reasons for Microsoft’s greater openness.

The most important reason is that Windows lies at the heart of an extremely complex ecosystem. Microsoft needs to provide its partners, both computer manufacturers and enterprise customers, with a clear development roadmap. For OEMs, this is vitally important if they are to be able to ship optimized hardware, such as the new Windows 8 tablets, when the new software is released. This requires lots of lead time.

Windows also runs on an almost uncountable variety of hardware configurations. Device manufacturers, like computer OEMs, need lead time to have optimized drivers ready when the new OS ships. Fortunately, Windows 8, like Windows 7 and unlike Vista, does not require extensive rewriting of drivers. But there are always issues of fine-tuning the software.

The variety of configurations also calls for extensive beta testing. There’s no way Microsoft can test any but a tiny proportion of the possibilities in-house. It needs debugging input from a large number of users.

Apple, by contrast, tightly controls the ecosystem. It can, and does, regularly release OS versions that render relatively new hardware and software obsolete. Apple can get away with this approach, which enables it to avoid Microsoft’s endless problems with legacy code, largely because it does not have to worry about keeping enterprise customers happy.

Apple’s development secretiveness does cause problems. New OS releases often cause serious compatibility problems. Even a relatively minor upgrade like Lion has produced a long list of hardware and software incompatibilities that probably would have been a lot shorter had Apple been more open with third-party developers. This is a price Apple is willing to pay, but that Microsoft, because of its different position in its own ecosystem, cannot afford.

DEMO Deja Vu

At the TechCrunch Disrupt conference this week, entrepreneurs Peter Thiel and Max Levchin made a splash by declaring the state of innovation to be somewhere “between dire straits and dead.” I think they are fundamentally wrong, but that’s a hard case to make at the opening session of the DEMOfall conference today.

The 14 products demonstrated all seemed relatively worthy. In fact, that may be what was wrong. Each one seemed like it had a chance to succeed, largely because they mostly sounded like minor variations on familiar themes. What was missing, so far at least, was the goofy, off-the-wall nature of the products that have made past DEMO conferences so interesting.

The product that struck me as most interesting was LiveLoop, a plugin that adds real-time collaboration to Microsoft Office. When the most intriguing thing is an Office add-on, it’s hard to believe you are in a hotbed of innovation. (I also found Upverter, a cloud-based collaborative circuit design tool, intriguing, but I don’t know enough about circuit design to assess it.)

But I don’t think the lack of excitement is symptomatic of a basic failure of innovation. It’s more that we are just at an odd place in the cycle. I think, for example, that the experimentation and development going on with big data, sensor networks and other new methods of data collection, and deep analytics is going to lead to deeper understanding of our world and to products we can barely imagine today. But this area of innovation has not yet reached the point where it is producing consumer-facing products. That is going to take at least a couple more years.

Meanwhile at DEMO, I am looking forward to I-TOMB.net–The World Virtual Cemetery, a product that might take me back to DEMO’s goofy glory days.

Should Apple Fear HTML 5? Not a Chance

IDG News Service’s Loek Essers has an article in which a couple of financial analysts predict dire consequences for Apple from the growing adoption of HTML 5, a technology that allows web pages to behave much more like native apps.

Toni Sacconaghi Jr. of Bernstein Research thinks HTML 5 could reduce Apple’s operating profit growth through 2015 by 30%. Jeffrey Hammond of Forrester Research argues that adoption of HTML 5 will squeeze Apple by increasing the commoditization of both hardware and software.

This sort of analysis fundamentally misunderstands the nature of Apple’s success. The first question you have to ask yourself is: if HTML 5 is such a threat to Apple, why is the company embracing the technology so aggressively? When the iPad was introduced in early 2010, Steve Jobs famously rejected Adobe Flash in favor of HTML 5 for providing media content and rich apps in the browser, a stance from which Apple has never wavered.

The fact is that no company is better at resisting commoditization than Apple. It does this through a relentless focus on user experience. “It just works” may be a Jobsian cliché, but it is the essence of Apple. Apple provided a breakthrough user experience with the original iPhone, which relied on web apps not nearly as good as what HTML 5 offers; it did it again with native apps on later iPhones and the iPad, and it will do it again with HTML 5.

The one area where Apple may be hurt a bit is the ability of HTML 5 web apps to go around the iTunes Store and the 30% of sales that Apple takes off the top. But that’s not where Apple makes its money. In the June quarter, all iTunes Store revenues, including apps and content, accounted for only $1.6 billion of Apple’s $28.6 billion in revenues.

A bigger threat, perhaps, is that in a world of HTML 5 web apps, Apple will lose the curatorial control that the App Store has provided. While Apple’s “control-freakery” has been much criticized, this curation has maintained fairly high minimum standards for iPhone and iPad apps and has avoided the chaos of the Android Market. HTML 5 will loosen control somewhat, but I suspect that Apple will find a way to keep that user experience coming.

Mozilla Gets Tough on Digital Certificates

In a preemptive step to protect users from possible attacks based on fake digital certificates, Mozilla has given certificate issuers a week to present proof of the security measures they have taken or have their certificates rejected by Firefox browsers.

Digital certificates are a critical part of the web’s security infrastructure. They are how sites prove that they are what they claim to be, and they are also used to encrypt transactions between browsers and servers. But the integrity of the system was called into question by an attack on DigiNotar, a Dutch certification authority (CA), that allowed the attackers to issue false certificates in the name of a large number of well-known sites, including Google.com. There have also been less serious breaches at other CAs.

In a letter to all CAs whose certificates are accepted by Firefox, Kathleen Wilson, who is responsible for managing certificates in Firefox, gave CAs until Sept. 16 to complete a checklist of security measures, including a full audit of their public key infrastructure, a key security component.

This is a necessary step, and it should be joined by Microsoft, Google, Apple, Opera, and anyone else responsible for software that maintains a list of trusted CAs. But there is still an element of locking the stable door after a fair number of horses have escaped. What is really needed is much tougher standards for CAs on an ongoing basis, and probably a sharp reduction in the number of organizations that can issue trusted certificates.

America Invents: A Step Toward Patent Reform

The America Invents Act, now awaiting President Obama’s signature, will not solve the most serious problems of the U.S. patent system, especially the ugly mess of vague and dubious software patents. But it is a welcome step on the long road to reform.

The most notable change in the law is a new criterion for awarding patents. Until now, to win a U.S. patent, inventors had to prove they were the first to come up with the idea. The new law, following the practice of most of the rest of the world, will award a contested patent to the first party to file for it. This may be a rough sort of justice and could prompt some premature patent filings, but it eliminates one of the most contentious and costly elements of patent litigation. And as engineers and inventors adapt to the new regime, it could ease some of the lab record-keeping and paperwork now deemed necessary to prove primacy of invention in a patent dispute.

The new law also streamlines the patent application process and simplifies fees. New procedures should mean that the U.S. Patent & Trademark Office gets to keep more of the fees it collects and stronger financing could lead to the hiring of more and better patent examiners.

But the mere fact that the bill passed the Senate 89-8 shows that the most controversial issues were left on the table. The only real opposition came from some supporters of small business and independent inventors, who felt the measure tilted too far in favor of big companies. Among the issues that will have to wait for another day–or case-by-case resolution by the courts–is clarification of just what sort of software innovations or business processes are patentable.

It’s the User Experience, Stupid: How iPhone Critics Miss the Point

The title tells it all: “10 Reasons Why iPhone 5 Doesn’t Stand a Chance Against Motorola Droid Bionic.” The article, by Elias Samuel in International Business Times, not surprisingly lists 10 ways in which the Droid Bionic, just announced for Verizon Wireless, is superior to the forthcoming Apple iPhone 5.

I don’t mean to pick on Mr. Samuel, whose other work I am not familiar with. But this article is sadly typical of a common style of tech reviewing.

Never mind that we know very little about the iPhone 5 hardware, though that doesn’t stop anyone from speculating. The problem is that even if all of Samuel’s assumptions about the new iPhone are right, it just doesn’t matter. For example, you can probably count on your fingers the number of potential iPhone buyers who care that the Bionic’s Texas Instruments OMAP 4430 processor has specifications superior to the iPhone’s presumed Apple A5.

Some of the other claims are downright inane. If the lack of support for Flash and the absence of an external memory card slot mattered, they would have killed iPhone and iPad sales by now. Obviously, they haven’t. And the alleged “open source advantage” is of interest mainly to ideologues (not to mention that Android’s open-sourciness is questionable at best).

What is entirely lacking in Samuel’s review, and in many, many others of its ilk, is a discussion of the one thing we do know about the iPhone 5: its iOS 5 software and the improvements it is likely to bring to the iPhone’s already great user experience. There are many Android phones whose hardware equals or beats the iPhone’s. There are none whose user experience comes close. And that, not speeds and feeds, Flash and LTE, is what sells phones.

The Droid Bionic looks to be a fine handset and I expect it will do well. But to say “iPhone 5 may not stand a chance against Motorola’s flagship phone” is just plain silly.


Why No One Can Match the MacBook Air

Peter Bright at Ars Technica has a feature about his frustrating search for a Windows notebook that can match the MacBook Air–and how difficult it will be for Intel to pull off its quest for Air-like Ultrabooks. The big question is why it is so hard for PC makers to compete.

ThinkPad X1 (Lenovo)

The answer clearly has nothing to do with technology. Dell, HP, Lenovo, Acer, Sony, and Toshiba, along with smaller players, have all the skills required to design just about anything. Everyone is building their systems using the same components and, for the most part, the same manufacturing partners.

I think the real problem lies in the marketing DNA of the computer makers, which has evolved to meet the demands of corporate customers and the retail sales channel. While their requirements are entirely different, both drive design away from the clean and simple designs and low-cost, high-quality manufacturing that are Apple hallmarks.

Corporate sales are the lifeblood for many PC makers. Consumers buy a lot more units, but enterprises buy higher-end products and typically provide better margins. But corporations are very picky buyers. Their bid sheets generally include lengthy lists of specifications, often specific classes of processors, specific graphics systems, even specific Wi-Fi radios. They often require legacy ports to be included long after their usefulness has ended. And in most cases, supplying every item on the bid sheet is a minimum requirement to compete.

The result of this need to meet very fine-grained requirements is great complexity. The buyer of a 13″ MacBook Air has one choice to make: a 128- or 256-gigabyte solid-state storage device. The Lenovo ThinkPad X1, one of the most Air-like products, offers three different processors, optional Bluetooth, two flavors of mobile broadband, four Wi-Fi radios, 4 or 8 GB of RAM, and a choice of a conventional hard drive or two different SSDs, making 432 total hardware combinations.
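
The arithmetic is easy to check. Here is one plausible breakdown that reproduces the 432 figure, assuming the mobile-broadband choice includes a “none” option (my assumption; Lenovo’s configurator may count the options differently):

```python
from math import prod

options = {
    "processor": 3,         # three different processors
    "bluetooth": 2,         # with or without
    "mobile broadband": 3,  # none, or one of two flavors (assumed)
    "wi-fi radio": 4,       # four different radios
    "ram": 2,               # 4 or 8 GB
    "storage": 3,           # one hard drive or two different SSDs
}

print(prod(options.values()))  # 432
```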

This much variety complicates every stage of the supply chain, from buying components to stocking finished inventory. It raises costs. It also prevents optimizing the design around a set of component choices. (One consequence of the Air’s sleek, monolithic design–a big part of its esthetic appeal–is that what you buy is what you get; there are no field-upgradeable components.)

In the consumer market, the problem is different but the result is the same. Retailers (including Dell’s mostly online operation) want to have a product, or perhaps a choice of products, at every conceivable price point. This leads to a profusion of overlapping and very similar models and a product line that makes no sense even to very sophisticated buyers. When I asked Dell.com to show me 11″ to 14″ consumer notebooks, the site produced a page offering 12 different versions of two 14″ Inspiron notebooks, the 14R-2nd Gen and the 14z (even the names are messy).

Apple, by contrast, need not satisfy anyone but the ultimate user–and judging by the results, the lack of choice isn’t much of a problem. Even corporations, many of which are reluctantly buying Macs to meet the demands of their internal users, are learning to live with taking what Apple gives. This Apple-knows-best attitude strikes some people as paternalistic, even fascistic. But it produces great products that well-heeled buyers seem to love.

Netflix vs. Starz: Hollywood Keeps a Tight Rein on Content

In a short article today, The Los Angeles Times sheds some fascinating light on the failed negotiations that will result in Netflix losing streaming movie and TV content from Starz at the end of February. Netflix, the paper said, offered $300 million a year to extend the agreement, but Starz wanted nothing less than a change in Netflix’s one-price, all-you-can-eat business model. It wanted Netflix to charge premium prices to viewers who wanted the Starz content.

In recent Tech.pinions posts, Ben Bajarin, Patrick Moorhead, and I have all argued, in somewhat different ways, that winning the cooperation of the studios that control content is the key to realizing the potential of the convergence of information technology and entertainment. But the Starz move, and other developments such as Fox Television’s decision to withhold new episodes of its shows from Hulu.com for eight days, show that, if anything, Hollywood is becoming more resistant to anything that challenges its traditional distribution.

On one level, it’s difficult to argue against the Hollywood position. The studios do not face the challenges that record labels did a decade ago. The DVD business is crumbling, and theatrical distribution, while healthy, is showing slow growth at best, relying mainly on ever-rising ticket prices to drive revenue. But “broadcast” TV (I use quotes because relatively few people see these shows through over-the-air broadcasts anymore) is holding up well, and cable on-demand and premium channels are thriving. The studios know they are fighting a historical tide, but for the time being, they don’t see the various forms of internet distribution replacing the revenues they stand to lose from their existing business models.

Starz is owned by Liberty Media, whose chairman, John C. Malone, and CEO, Gregory B. Maffei, are about as smart and as tough as they come. (If you ever get into a negotiation and see Malone on the other side of the table, run. This is the guy who made a fortune selling cable operator Tele-Communications Inc. to AT&T and another fortune as AT&T spun out Liberty Media when it sold its cable operations to Comcast. AT&T took a bath, and Malone made money on both ends of the deal.) I would not dismiss their shunning of the Netflix money as the act of foolish Luddites who can’t see what’s coming.

No one much likes their cable company, and the notion that we will eventually be able to pick and choose the content we want from the internet is an appealing one. But I think would-be cord-cutters who don’t want to give up current, premium content are going to have to wait at least a few years–and real innovation in entertainment content delivery will probably have to wait with them.

How To Fix the AT&T/T-Mobile Deal

Sascha Segan, the phone guru at PCMag.com, has an interesting suggestion on how AT&T might salvage its challenged purchase of T-Mobile USA. Divesting 25% of the company wouldn’t satisfy Justice Dept. antitrusters, but a binding promise to open its network to competitive mobile virtual network operators (MVNOs) just might. Read Segan’s full post here.

Spectrum Wars: The Bleak Outlook for Wireless

In a post at TechCrunch, Frank Barbieri argues that the entire dispute over AT&T’s acquisition of T-Mobile is really about the failure of spectrum allocation policies in the U.S. He’s right, although AT&T’s short-term case would be stronger if it were moving faster to deploy the unused spectrum it already has. And Barbieri is right that the biggest roadblock to freeing that spectrum is local TV station owners, which in many cases, especially for the most valuable big-city outlets, are CBS, Disney (ABC), NBC Universal (soon to be Comcast), and News Corp. (Fox).

Cover Art: The Dark Side of the Moon, Pink Floyd

One reason for the transition to digital television in the last decade is that digital TV uses spectrum a lot more efficiently than its analog predecessor. TV stations traded their old frequency assignments–the spectrum is now mostly owned by Verizon, which is using it for 4G LTE service, and AT&T, which will eventually do the same–for new assignments. But an HD broadcast channel takes only about a third of the bandwidth assigned. The hope was that stations would create second and third channels, but mostly they are filling them with endless loops of local weather radar or similarly uninspired programming, or leaving them idle altogether.

Unused or underused TV channels do seem to be the easiest part of the 500 MHz of bandwidth that Federal Communications Commission Chairman Julius Genachowski hopes to free for mobile wireless use. But legally and politically things get sticky very fast.

One problem is the question of who actually owns the spectrum. In recent years, users such as wireless phone carriers and satellite operators have been required to buy the rights to spectrum at auctions. Earlier, however, the bandwidth was licensed without charge, to be used by broadcasters “for the public convenience and necessity.” Ownership remains with the people of the U.S., that is, the federal government.

As a matter of law, that is indisputable. But as a matter of equity, broadcasters argue that hardly any current owners are the original licensees and that the spectrum has actually been paid for many times over as the licenses have been bought and sold. Regardless of how the spectrum was originally assigned, current owners have an equity interest in it and, in fact, it is often their most valuable asset.

The FCC proposes that the unused frequencies be sold in an “incentive auction,” in which current licensees will get an incentive to participate by being given a share of the proceeds. This will require congressional approval, and we all know how well that works. One big argument is over just how big that incentive payment will be. The broadcasters seem to think something like 90% of the proceeds would be appropriate, while the government would like to reverse the ratio. Then there are questions about just how voluntary participation in the sale would be and just how to handle the fact that some stations would have to move to new frequencies to allow the freed-up spectrum to be consolidated into more usable blocks.

Barbieri is quite right that local broadcasters have tremendous power in Washington. Getting free media on the local station still matters a lot to every member of Congress.

The one hope for anything happening in the near term may be Congress’ desperation to cut the budget deficit without cutting popular programs or raising taxes. In the end, it was congressional hunger for the revenues from reselling analog TV spectrum that put an end to years of successful foot-dragging by broadcasters in 2009. The same could happen this time, though not without an epic fight.

In Praise of Longevity: The HP 12c Calculator

How many tech products that made their debut in 1981 are still with us today in essentially unchanged form? Today marks the 30th anniversary of the release of the HP 12c financial calculator, a device contemporary with the IBM PC 5150 and the Osborne 1.

Unlike the iconic HP 65, the 12c never really caught the attention of techies. It was designed for financial number crunchers, not engineers or programmers. But its array of built-in financial functions, its ease of use, and even its quirky but efficient reverse Polish notation (RPN) data entry have endeared it to its target audience to this day.

The price of the 12c has come down somewhat since its introduction. It first went on sale for $150, about $274 in today’s dollars. Now you can buy one for about $60, or a modernized Platinum edition (among other things, it allows standard algebraic data entry as well as RPN) for a few bucks more. A special 30th anniversary edition is $80.

Even at that price, it’s probably one of the more profitable products, in margin terms, in HP’s lineup. Its innards have been modernized over the years; the original relied on a lot of discrete components. The bill of materials (a couple of chips, a 10-digit seven-segment LCD display, some plastic injection moldings, and a couple of alkaline batteries) can’t come to more than a few dollars.

But it’s a device that still does its job. Nothing is faster or easier at calculating the monthly payment on a mortgage loan or the present value of a 10-year stream of income. The 12c could easily still be around in another 30 years.
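
For the curious, the heart of what the 12c computes is the standard annuity formula from time-value-of-money math. A few lines of Python reproduce the mortgage-payment calculation (the loan figures here are purely illustrative):

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Fixed-rate loan payment: principal * r / (1 - (1 + r) ** -n)."""
    r = annual_rate / 12  # periodic (monthly) interest rate
    n = years * 12        # total number of payments
    return principal * r / (1 - (1 + r) ** -n)


# A $250,000 loan at 5% over 30 years works out to about $1,342.05 a month.
print(round(monthly_payment(250_000, 0.05, 30), 2))
```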

Obscure Attack Threatens Privacy, e-Commerce

An attack on a Dutch company in the obscure–to most of us–business of issuing digital certificates poses a serious challenge to secure web communications. No, you shouldn’t stop using Amazon.com or Gmail, but the attack opens another front in the never-ending war that threatens the security of the internet. The tale is a bit complicated, but I’ll try to make it simple. (And thanks to Swa Frantzen of the SANS Internet Storm Center for his detailed analysis of the incident.)

Since the early days of the web, the secure hypertext transfer protocol (https) has been used to lock down communications between browsers and web servers. Its use is indicated by the letters “https” in the URL, often a locked browser icon, and sometimes the use of green text in the address bar.

Https depends on something called a digital certificate, which is supposed to do two things. First, your browser checks the certificate for proof that the server it is connecting to is what it claims to be; that is, it asks the server to present a digital ID card proving that it really is mail.google.com. Second, the certificate includes a key that is used to set up encryption of the traffic between the server and the browser.
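
A minimal sketch using Python’s standard library shows both checks in action; the connection fails unless the server’s certificate chains to a trusted CA and matches the requested hostname (mail.google.com here is just an example):

```python
import socket
import ssl

ctx = ssl.create_default_context()  # loads the platform's list of trusted CAs

with socket.create_connection(("mail.google.com", 443)) as raw:
    # wrap_socket verifies the certificate chain and checks that the
    # certificate was actually issued for the hostname we asked for.
    with ctx.wrap_socket(raw, server_hostname="mail.google.com") as tls:
        cert = tls.getpeercert()
        print("issued by:", cert["issuer"])  # the CA vouching for the site
        print("cipher:", tls.cipher())       # the negotiated encryption
```

Of course, this machinery is only as good as the CA list it starts from: a fraudulent certificate that chains to a trusted authority sails right through, which is exactly what the DigiNotar attack exploited.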

All of this, obviously, depends on the integrity of the certificate. Some time in the past–just when is not yet certain–unknown parties breached the systems of DigiNotar, a Dutch certificate authority. The attackers issued a number of fake certificates in July. On July 19, DigiNotar discovered the attack and revoked a number of certificates. However, at least one, for google.com, was missed. This fake certificate was used to connect users, mostly in Iran, to a fake Google site.

All of this had little immediate effect on anyone outside Iran. Microsoft, Mozilla, and Google updated their browsers so that they will no longer automatically trust any certificate issued by DigiNotar (the situation with Safari on Macs is more complicated). This is a problem for DigiNotar and its legitimate customers, but it is the best way to protect everyone else.

Furthermore, the fake certificates were only a problem if users were also directed to a fake site. This required a separate attack on the internet’s domain name system (DNS) to replace the legitimate addresses of Google servers with fake ones. That is why the attack only affected users of Iranian DNS services. (There’s only speculation at this point on why Iran, but one possibility is that the attack was designed to allow the country’s security services to read what users thought were secure, encrypted communications with Gmail and other Google services.)

Still, this is another serious warning shot telling us that major improvements are needed in internet security. Attacks redirecting traffic to fake web sites, either by compromising DNS servers or through a technique known as DNS cache poisoning, are not rare. When combined with undetected fake certificates, they have the potential to be devastating.

One obvious area for improvement is the certificate authority infrastructure. As it exists, the certificate authority is what engineers call a single point of failure: compromise it and the entire security system, which ultimately runs on trust, fails. In particular, a speedy investigation is needed into how the audit that followed DigiNotar’s discovery of the breach failed to find the fake Google certificate.