The End of a Wireless Anomaly

Verizon Wireless has always been a very strange beast. Its DNA incorporates the tangled history of the U.S. phone industry since the 1983 court-ordered breakup of the Bell System.

Verizon Communications, which will finally become the sole owner of Verizon Wireless as a result of a $130 billion deal announced Sept. 2, is itself the successor to several companies created by the breakup. Verizon Wireless was born out of the merger of Bell Atlantic Mobile and GTE Mobile, both Verizon Communications subsidiaries, with AirTouch Plc, the one-time mobile unit of PacTel (which itself was later bought by SBC, which became AT&T–I told you this was tangled). AirTouch was owned by Vodafone, the big British mobile operator, and the deal created Verizon Wireless as a joint venture of which Verizon owned 55% and Vodafone 45%.

Customers can be excused if they thought Verizon Communications–which offers landlines, FiOS internet and television service, and business services–and Verizon Wireless were the same company. They used the same branding and offered unified ordering and billing. But they weren’t. The arrangement was particularly uncomfortable for Vodafone, whose 45% stake was a huge investment that gave it virtually no say in the management of the joint venture.

Vodafone will receive $59 billion in cash, $60 billion in Verizon Communications stock, and $11 billion in other consideration, including Verizon’s stake in Vodafone Italy. Vodafone said it would distribute the stock to its shareholders along with $24 billion of the cash. The remaining cash will be used for a “new organic investment program.”

The deal, which is subject to approval by shareholders of both companies and by US and UK regulators, is expected to close in the first quarter of next year.


The Irony of Microsoft’s Lost Mobile Opportunity

The dozens of commentaries on Steve Ballmer’s impending departure nearly all condemn him for Microsoft’s failure in mobile computing during his tenure. It’s certainly true that Microsoft was run over by an iPhone-iPad-Android train in the last few years, but the basic problem was not that the company failed to understand the importance of mobile.

As John Gruber writes at Daring Fireball:

Look no further than mobile. Microsoft correctly saw that mobile was important to the industry. They saw this early — “Pocket PC” devices first appeared in 2000, and by 2003 they had “Windows Mobile”. They blew it. They had a market lead at some point, but during a time when the handheld market was tiny. The technology wasn’t there yet to make mobile computing desirable to the mass market. By the time the technology was there, when Apple unveiled the iPhone in 2007, Microsoft was not only caught flatfooted, but Ballmer himself seemed incapable of recognizing just how remarkable the iPhone was.

What Microsoft really missed was the importance of touch interfaces. And that is particularly ironic because Microsoft was promoting touch-based Tablet PCs from 2003 on. The problem, as usual, was a combination of a slavish dedication to the design of Windows and the desire to give OEM partners the widest possible choice in form factors.

The Pocket PC user interface was a miniature Windows desktop. Back then, there were no capacitive multitouch screens and the devices, like the Palms that were their major competitors, required use of a stylus. Over time, the UI evolved to become less Windows-like–the screenshot above shows a version from about the time the iPhone was introduced–but they always relied on interface elements that were too small to touch reliably with a finger.

Furthermore, Microsoft allowed OEMs great freedom in configurations. Windows Mobile devices came in a considerable variety of screen sizes and could be designed with or without touch screens and with or without physical keyboards. Supporting non-touchscreen devices on the same basic interface guaranteed that Windows Mobile, like desktop Windows prior to Windows 8 in its Metro mode, would remain basically a keyboard-driven UI with touch features.

The arrival of the iPhone in 2007 convinced a skeptical world that a touch-only device could be not only practical but delightful. Microsoft, probably because it saw BlackBerry as its real competition, paid no attention. As late as the fall of 2009, contemporary with the iPhone 3GS, Microsoft released a final version of the Windows Mobile software that still lacked support for multitouch displays. It did not ship a keyboardless, multitouch phone platform until Windows Phone 7 in late 2010, by which time it was fighting not only Apple but Android.

Microsoft gave every indication of realizing the importance of mobile, in fact, long before Apple. But what it couldn’t do was get mobile right. Perhaps it realized deep down that really great mobile devices would ultimately threaten the desktop Windows franchise it cared much more about. But as Andy Grove always warned, if you won’t cannibalize your own products, someone else will.


About Those NY Times and Twitter Attacks: What Really Happened

General news media mostly do a terrible job covering tech issues. And in the case of the attacks, allegedly by the Syrian Electronic Army, that effectively took nytimes.com offline for a good part of Tuesday, the tech media haven’t done too well either. One of the big problems is the use of the word “hack” to describe any attack, as in “the Times web site was hacked.” In fact, neither the Times site nor twitter.com, which was attacked less successfully, was ever touched.

To understand what happened, you have to know a bit about the Domain Name System, which is both a great strength and a great weakness of the internet. Its strength is that its distributed design has let the net scale seemingly without limits, handling orders of magnitude more sites than were ever envisioned by its designers. Its weakness is that, at least in its standard form, it is insecure.

If I want to load www.nytimes.com, the Times home page, my browser generates a DNS query that polls a hierarchy of DNS servers until it finds one that can report that the name corresponds to the IP address 170.149.168.130. In the case of nytimes.com, access to the DNS records is controlled through an Australian company, Melbourne IT, through which the Times has registered its domain name.
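That lookup is something you can watch happen from any scripting language. Here is a minimal sketch using Python’s standard library, which hands the query to the operating system’s resolver (the resolver, in turn, walks the DNS hierarchy or answers from its cache). The name “localhost” is used so the example works without network access; passing “www.nytimes.com” instead would perform a real DNS query.

```python
import socket

# Ask the OS resolver to map a host name to an IP address.
# Under the hood this is the same DNS machinery a browser uses.
def resolve(name: str) -> str:
    return socket.gethostbyname(name)

# "localhost" resolves locally, without touching the network;
# resolve("www.nytimes.com") would query the public DNS hierarchy.
print(resolve("localhost"))  # 127.0.0.1
```

Nothing in this process verifies that the answer is authentic, which is exactly the weakness DNS hijackers exploit.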

The successful attack was not against the Times itself but against Melbourne IT, a domain registrar and hosting company. According to Timothy B. Lee of The Washington Post‘s The Switch blog, the Syrian Electronic Army got access to Melbourne IT through the credentials of a legitimate customer. Once inside, the attackers were able to change any records that hadn’t been locked down tightly, and that included nytimes.com. All they had to do was change the DNS record to point to a site of their choice–an attack known as DNS hijacking–and nytimes.com effectively disappeared.

This is probably the least effective way to attack a web site. Because of the distributed nature of DNS, changes take hours to percolate through the system. I never lost access to nytimes.com, probably because I go to the site a lot and its correct address was cached locally. It was also quick and easy for the Times to set up an alias that let people route around the damage to find the site.

The attackers did less well with Twitter, whose DNS account, also at Melbourne IT, was locked down. All they were able to do was change Twitter’s record in the whois database to indicate that twitter.com was owned by the Syrian Electronic Army. But since the whois database (accessible through www.whois.com) is not actually used in the DNS lookup process, the Twitter change had no practical effect.

The lesson, of course, is that if you own a domain, make sure it is locked down so that only you can make changes.


TV: A Faster Slow Death

Television as we know it is doomed and has been for some time. But I thought broadcasters and cable operators would be able to hold off the inevitable for a long time because their business model, antiquated as it is, still produced a mighty stream of profits. I’m not so sure anymore. The decline will still play out over years, but there are plenty of straws in the wind that suggest that over-the-top internet distribution is disrupting traditional TV at a much faster pace than I believed possible. Consider these developments:

  • The successful launch of original content on Netflix, such as “House of Cards” and “Orange Is the New Black,” shows that it is possible to circumvent traditional distribution.
  • The protracted fight that has kept CBS off Time Warner Cable, with TWC hinting that its customers could turn to Aereo to avoid the blackout.
  • Google talking with the National Football League about over-the-top distribution of games.

Each of these examples cuts to the heart of a business model that has been controlled by an iron triangle of content producers (studios and sports leagues), programmers (broadcast and cable networks), and distributors (cable and satellite companies and over-the-air broadcasters.) The producers and programmers have been looking at the internet with an increasingly hungry eye for a while now, but they haven’t been hungry enough to be willing to disrupt their very lucrative arrangements.

Death by pecking. I used to think it would take some cataclysmic event, say a decision by HBO to allow over-the-top sales of its programming to non-cable subscribers, to break the TV business model. But it now seems more likely that the model will slowly be pecked to death, one deal at a time.

Of the recent developments, the war between CBS and TWC is the most surprising and perhaps the most significant. The core of the dispute is TWC’s belief that CBS is charging too much for its content. These fights over “retransmission consent” have erupted before, but typically have not lasted for more than a few days. Usually, the prospect of losing a big sports event has undone the resolve of the cable carrier; it will be interesting to see if TWC can continue to hold out through the start of the college football season next weekend and the NFL season the week after.

Retransmission fees have become an increasingly important part of the revenue stream of networks; CBS reportedly is asking TWC to triple its payments, from 66 cents to $2 per subscriber per month. (For a deep dive into the economics of retransmission, read this analysis by Richard Greenfield.) It’s only a matter of time before distributors try to cut out the middleman by negotiating directly with the content providers. TWC, for example, is providing all subscribers free access to the Tennis Channel–an independent operation with close ties to the tennis business–during the U.S. Open. (TWC subscribers will still miss the biggest matches, for which CBS has an exclusive.)

DIY sports production. College teams, from the Big Ten to Brigham Young, are producing telecasts of their own games. Currently, they distribute the content through cable operators, but it is certainly conceivable that their future deals could be with Netflix or Google or Apple.[pullquote]Direct, non-cable channel subscriptions, or even a la carte sales of individual shows, look increasingly inevitable.[/pullquote]

Real change in the television business model will require a realignment of interests in the industry. Local broadcast stations, many of the largest of which are owned by networks, seem at the most risk of disruption. So far, most have been resisting the temptation to give up their over-the-air licenses in the forthcoming “incentive auction” of spectrum, but that option could become increasingly attractive if broadcast economics continue to deteriorate.

The role of traditional broadcast networks in the value chain is growing increasingly dubious as competition with non-traditional distributors, such as Netflix, heats up. More and more cable channels are turning up on over-the-top devices such as Roku, Apple TV, and Xbox. Apple, for example, has just announced deals for the Disney Channel, the Weather Channel, and the Smithsonian Channel. Premium channels generally require that the viewer also have a cable subscription, because of either contractual obligations or the content owner’s desire to maintain good relations with the cable operator, but direct, non-cable channel subscriptions, or even a la carte sales of individual shows, look increasingly inevitable.

Cable operators may have to get more deeply into programming or face becoming pure suppliers of bandwidth. Currently, only Comcast, which owns NBCUniversal, has significant production assets; TWC has been spun off from Time Warner, which owns HBO, CNN, and many other assets.

One big question about the inevitable move to over-the-top distribution is whether the internet can handle the load traditionally carried by cable networks, especially for, say, a major sports event. In one sense, the traffic is all moving over the same network, since most U.S. residences get broadband via cable, but cable’s broadcast model is inherently more efficient than the one-stream-per-viewer required by IP transmission. But the extensive use of content distribution networks has already made the topology of the internet look a lot more cable-like for highly popular sites; the head ends are just a little further away from subscribers. As CDNs continue to improve and as last-mile bandwidth continues to increase, internet logjams seem less and less a threat to over-the-top TV.

Fear and Loathing and the NSA

In the couple of months since the flood of revelations about National Security Agency internet snooping was unleashed by Edward Snowden, we have seen a great deal of knee-jerk reaction on all sides, punctuated by an occasional burst of sanity. We have seen some genuinely frightening things, such as the NSA’s collection of metadata on every phone call in the U.S. and hints that the government is trying to get the master keys from services that offer encrypted email. We have even seen the odd moment of black comedy, such as the announcement by The Guardian that UK officials, in some strange bit of kabuki, had forced the staff to smash hard drives containing one of many copies of material obtained by Snowden.

But we have reached the point where some people are beginning to suffer what I can only call NSA derangement. The best example to date was a post by Groklaw founder Pamela Jones announcing that she was shutting down the site because of NSA snooping. She wrote:

And the simple truth is, no matter how good the motives might be for collecting and screening everything we say to one another, and no matter how “clean” we all are ourselves from the standpoint of the screeners, I don’t know how to function in such an atmosphere. I don’t know how to do Groklaw like this.

Groklaw did yeoman service for years covering every detail of the epic litigation in which a relatively obscure software company called SCO tried to claim that it owned the UNIX operating system and was entitled to vast damages from IBM, Novell, and countless others. After SCO lost on nearly all its claims and went through bankruptcy and liquidation, the lawsuits linger with a sort of half-life and, until today, Groklaw lingered with them (Jones actually announced she was killing Groklaw in 2011, but it refused to die.) But Groklaw is of no more interest to the NSA than is, say, Tech.pinions. To think otherwise is a bit megalomaniacal.

But the fact that someone as knowledgeable about the law and technology as Jones (who does, it should be noted, have a somewhat conspiratorial frame of mind, especially where Microsoft is concerned) goes this far off the deep end is an indication of the impact of the NSA story. She’s hardly the only one. Keith Devlin, an eminent Stanford mathematician, has been filling my Twitter feed with over-the-top political posts, including the claim that the U.S. has become East Germany, a state that not only spied on its citizens in far more offensive ways than anything we have heard about in the U.S., but regularly murdered them:

[Embedded tweet from Keith Devlin]

On an even odder note, he suggested that he might face prosecution if a terrorist enrolled in his Coursera “Introduction to Mathematical Thinking” course, since knowledge of math might allow them to become codebreakers (or something):

[Embedded tweet from Keith Devlin]

When really smart people start talking like this, it is time for everyone to stop and take a deep breath. I am horrified by what we have learned about the extent of government spying on its own citizens and I am disgusted by knee-jerk defenses from the likes of Senator Dianne Feinstein (D-Calif.) and Representative Peter King (R-NY) that it’s all OK because it is just protecting us from terrorists. Some perspective is desperately needed.

At one level, what the government (NSA is convenient shorthand, but it is much broader than that) is doing has caused some fundamental changes in thinking about the relationship between citizens and a democratic government. But unless you deal in leaks of classified information (an important unless, but one beyond the scope of this article), the effect of knowing what you now know on your daily life is minimal to non-existent. For example, I have been preaching for 20 years that email should not be regarded as secure or even private. The fact that the NSA might snoop on it changes very little–and nothing that has come to light shows that the government routinely reads domestic email. I don’t like the government collecting my call data, but it’s not going to change my behavior.

What we need right now is a lot less hysteria and a lot more pressure for a serious political debate on striking a new balance between legitimate investigative needs and the need for privacy and freedom from intrusion–one that moves the needle a long way back toward privacy and transparency. Although the leadership of Congress is generally standing behind government spy programs, there is broad support for change among the congressional rank and file of both parties. This could be a rare moment when it is actually possible to get something done, but only if we focus on what is real and what is important, and avoid sinking into wild charges and paranoia.


The Limits of Cloud Encryption

Revelations of National Security Agency snooping on email and other internet traffic have inspired long-overdue concern about privacy and security–and set off a wave of opportunistic announcements of encrypted services. Adding encryption is a good thing, but you have to understand what it can and cannot do. And what the newly announced services definitely cannot do is keep the government’s eyes off your data.

There are two fundamentally different problems: Protecting data in transit and in storage (sometimes called in flight and at rest in technical literature.) These are subject to different technical requirements and different legal protections–in general, data in transit are better protected.

In-transit protection. There’s no excuse for not encrypting all sensitive data in transit. There are standard protocols for it: secure sockets layer (SSL) and its successor, transport layer security (TLS), ideally configured for the stronger property of perfect forward secrecy. Transactions with web sites involving any sort of personal data should use secure HTTP (HTTPS); if you use a mail client such as Outlook, Mac Mail, or Thunderbird, you should choose encrypted transport under server settings (Microsoft Exchange mail is encrypted by default). If your mail provider doesn’t support encryption in transit, seriously, get a new one.
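As a sketch of what “encrypted transport” means under the hood, Python’s standard ssl module can build a client-side TLS configuration in a couple of lines. The settings shown are the defaults of `create_default_context()` in modern Python; this is illustrative, not a complete mail client.

```python
import ssl

# A client-side TLS context with sane defaults: the server's
# certificate is checked against the system trust store, and the
# certificate's name must match the host being contacted.
context = ssl.create_default_context()

# Refuse obsolete protocol versions outright.
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True
```

A mail client does essentially this before handing the context to something like `smtplib.SMTP(host, 587).starttls(context=context)`, which upgrades the plaintext connection to an encrypted one.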

In-transit encryption is what most providers offer. Some also encrypt mail stored on their servers, but there’s a catch. The government–and sometimes private parties in a lawsuit–can demand that the stored mail be decrypted and, with proper authority, the mail service provider has no choice but to comply.

When can the government look? Exactly what sort of authority the federal government needs, a major issue in the NSA revelations, is not entirely clear. Normally, the government would need a court order for mail less than 180 days old, but a mere administrative subpoena for anything older. (Why the distinction? Because that’s what the 1986 Electronic Communications Privacy Act says.) It was the realization that they could not defy government orders that apparently led Lavabit and Silent Circle to shut down their secure mail services.

It is possible to encrypt email traffic from end to end, but the difficulty makes it seriously impractical. To do it, you have to find a way to get a key to everyone you want to be able to read a message. There are ways to do this using public key encryption, but they are far from easy to implement and far from convenient to use, so almost no one does it. And even if you encrypt message data from end to end, you have to leave header information exposed or the mail system will be unable to deliver your messages–and this metadata can reveal a great deal.

Cloud shortcomings. Automatic encryption of information stored on cloud servers, as recently promoted by Google and Amazon, suffers from the same shortcoming as encrypting stored mail. The service provider has the keys, and can be forced to give them up. This sort of encryption is still a very good idea; if done properly, it is very effective at protecting your data from intruders or other prying eyes. But it won’t work against an adversary with a court order.

The only way to get complete protection of data stored in the cloud is to encrypt it yourself before sending it to the cloud, and keep the keys in your possession. It’s not the most convenient thing in the world and if you lose the keys you are sunk, but there are standard software packages that will do the heavy lifting.


The Feature the Next iPhone Should Have

I know nothing about what Apple is going to announce at an iPhone event expected on Sept. 10. But I do know about one feature that is a strong candidate and definitely should be included in the next generation of iPhones and iPads: a fingerprint reader.

There are two basic reasons why Apple should–and, I believe, likely will–do this. First, as a result of its 2012 purchase of AuthenTec, Apple owns advanced fingerprint sensor technology that gives it a real competitive advantage. Second, incorporating the technology, with the sort of software support that Apple customarily delivers, will bring real benefits to customers and help strengthen the iOS ecosystem.

Biometric reliability. Fingerprint technology has been around for a very long time, but only recently has it gotten reliable enough to be truly useful. Older sensors would reject your finger if your skin was too moist or too dry, or if you swiped too fast or too slow or at an angle. Recent sensor-equipped laptops that I have used read my fingerprint quickly and accurately nearly every time.

Rumor has it that Apple will build a sensor into the home button, a move that seems both likely and practical, since tapping the home button is a common way to wake a sleeping iPhone or iPad. Apple has a patent pending for technology that would embed the sensor in the display itself (illustration above), but it seems unlikely that it would be ready for use any time soon.

The consumer benefit of fingerprint technology is clear. I suspect most iPhone owners have not even activated Apple’s PIN code login protection–not a great idea, although having to enter the code every time you want to use the iPhone is a nuisance. The real issue comes with the greater use of handsets as payment devices. For this to work well, you need something stronger than simple possession of the device to authenticate purchases, unless you want anyone who happens to get hold of your phone to be able to buy stuff with it. Better authentication is a key to making both buyers and sellers more confident about using phone-based payment systems.[pullquote]Better authentication is a key to making both buyers and sellers more confident about using phone-based payment systems.[/pullquote]

A big improvement. Two-factor authentication is a big improvement; the rule of thumb is that you need two of these three: something you have, something you are, and something you know. The phone is something you have and a biometric–your fingerprint–is something you are. The advantage of a fingerprint is that it’s easy and natural to collect, giving the user the benefits of stronger authentication with little or no effort.

But with authentication, as with all security-related matters, the devil is in the implementation. Biometrics pose a special challenge. If your password is compromised, you get a new password. If your fingerprint is compromised, you cannot grow a new finger. For this reason, your fingerprint should never be stored–not in a database (which is why the stories about Apple acquiring a lucrative fingerprint database, and the privacy fears related to it, are nonsense), not even on the device itself.

Keeping it secure. The “fingerprint” collected by a sensor actually consists of a map of “minutiae,” specific features of a print, that can be converted into a numerical code. This code is then hashed–run through a one-way mathematical function that will always produce the same output from a given input, but which makes it impossible to compute the input from the output. All the authentication system, whether on the device or on a server, ever sees is the hash. If it matches, you’re good to go; otherwise, your login is rejected. (Password systems should work the same way; unfortunately, many don’t.) The trick is picking a function that makes it very, very rare for two inputs to produce the same output and, especially, all but impossible to force such a “collision” to occur. The way to do this is to use a standard, well-tested cryptographic hash algorithm, such as the National Institute of Standards and Technology’s SHA-2 family.
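The hash-only comparison described above can be sketched in a few lines of Python using SHA-256, a member of the SHA-2 family, from the standard hashlib module. The minutiae code here is a made-up stand-in for whatever a real sensor pipeline would produce.

```python
import hashlib

# Hypothetical stand-in for the numeric code derived from a
# print's minutiae; a real sensor produces something device-specific.
minutiae_code = b"hypothetical-minutiae-code-12345"

# Only the hash is ever stored; the raw code is discarded.
stored_hash = hashlib.sha256(minutiae_code).hexdigest()

def authenticate(candidate: bytes) -> bool:
    # Hash the freshly scanned code and compare hashes; the original
    # input can never be recovered from stored_hash.
    return hashlib.sha256(candidate).hexdigest() == stored_hash

print(authenticate(b"hypothetical-minutiae-code-12345"))  # True
print(authenticate(b"some-other-finger"))                 # False
```

A production system would typically also mix in a per-user salt before hashing so that identical inputs on different devices yield different hashes; this sketch omits that for clarity.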

Apple certainly has the software chops to implement biometrics in a way that is both secure and very user-friendly. If it pulls it off, the $356 million it paid for AuthenTec could prove to be money very well spent.

NSA, Google Glass, and Confirmation Bias

Two stories that were literally too good to be true made news this week, one of them silly, the other much more serious. But the fact that both got wide circulation before being corrected shows once again the serious flaws affecting much of what passes for journalism these days.

Confirmation bias is the form of wishful thinking that causes us to believe things we want to be true. Everyone, even the most skeptical journalist, is susceptible to it. During my years as an editor, the most important question I asked writers was “how do you know that?” I wasn’t interested in epistemology; I wanted them to convince me as thoroughly as they had convinced themselves. If they couldn’t, I sent them back for more reporting. But with the decline of those annoying but useful editors, this check on confirmation bias creeping into stories is much reduced.

This week’s silly story began with a tweet from MV Police Blotter reporting that “a man wearing Google Glass breaks window after walking into it while watching YouTube.” Anyone who bothered to check @MVPoliceBlotter’s tweets would have recognized it as a parody account. (The tweet immediately preceding this one was about an assault with a baguette, with the Mountain View police collecting croutons as evidence.) Furthermore, the story was lacking such important details as the name of the Glass wearer and the address of the incident–but it sounded good.

The obvious red flags didn’t stop lots of sites, including the San Jose Business Journal (since taken down), from grabbing the story from a flood of retweets. John Markoff of The New York Times expressed some skepticism with a tweet of his own, and @MountainViewPD, the real Twitter account of the Mountain View Police Department, put an end to the nonsense.

The much more serious episode involved a report, circulated by Ars Technica and many others, that malware planted in the browsers of users of the Tor network and used to take down hidden sites hosting child pornography was phoning home to the National Security Agency. Now after all the revelations of the past few weeks, we are prepared to believe pretty much anything about the all-seeing, all-knowing NSA, though the obvious question of why an intelligence agency would be involved in what appeared to be a straight-up law enforcement matter seems not to have been asked.

The reports were the result of some sloppy work by Baneki Privacy Labs, a consortium of security researchers, and Cryptocloud, which traced an IP address in the malware code to intelligence/defense contractor SAIC and ultimately to the NSA. But the work was done with out-of-date tools that were returning incorrect results. Baneki and Cryptocloud backed down after Wired‘s Kevin Poulsen raised some hard questions about the claim. (Ars has a good piece on the unraveling.) In the end, all we know is that the address in question is assigned to Verizon Business Services and very likely ultimately to SAIC through Verizon’s secure government operations data center.

But by the time the story got sorted out, the NSA connection had been widely reported, not as an interesting bit of speculation but as fact, for example, in this (still uncorrected) post by Cory Doctorow at BoingBoing.

Getting stuff right is hard, especially under time pressure. In the absence of editors or anyone else carefully reading copy before it is posted, we are all prone to letting what we think we know get ahead of what we really know. It is a temptation that must be mightily resisted.


OpenPower Consortium: A Trip Down Memory Lane

IBM announced today that it was working with Google and Nvidia in an alliance called the OpenPower Consortium to promote the use of IBM’s Power processor as an alternative to x86 CPUs from Intel and AMD. Power, a descendant of the PowerPC chips that were the heart of Apple’s Macintosh computers until the 2005 switch to Intel, is currently used primarily in IBM’s midrange line of servers, a business that has been shrinking of late.

A tweet from Ian Betteridge tweaked my memory and reminded me that this looks a lot like a road we have traveled before. Back in the mid-90s, when the PowerPC was young, Apple, IBM, and Motorola–the companies behind the chip–came up with something called the Common Hardware Reference Platform.

The idea behind CHRP was a hardware design independent of the software that would run on it. So an IBM CHRP machine could run the UNIX-like AIX (the original version of AIX was a lot further from stock UNIX than what eventually evolved.) Motorola’s system could run Windows NT for PowerPC. And Apple could use Mac OS.

Not much ever came of the scheme. Apple incorporated CHRP-like elements into its Power Macs, but to the best of my knowledge, it never built a CHRP-compliant Macintosh. Motorola, at the time a licensed maker of Mac clones, developed a system called the StarMax 6000. I saw it once, perhaps at Comdex, dual-booting Mac OS and Windows NT. This was very cool, but whatever commercial possibility it had disappeared when Apple abruptly discontinued the Mac OS licensing program before it shipped. The only significant product to come out of the experiment was a version of the IBM RS/6000, a precursor of IBM’s very successful p-series server line.

I wish IBM better luck with the new venture. The support of Nvidia and Google is impressive, especially the latter with its thousands and thousands of x86-based servers. But making a dent in x86 while at the same time holding off a new challenge from ARM servers is a very tall order.

Disclosure: My son for some years developed system software for the p-series servers. He’s still with IBM, but working on an unrelated project.


Bezos and The Washington Post: Mr. Disruption Buys a Newspaper

You’d think a newspaper would be the last business in need of disruption. Yet Jeff Bezos, whose Amazon.com is the most disruptive force in retailing and perhaps in all of American business, has just plunked down $250 million of his own money to take The Washington Post off the hands of the Graham family and the other shareholders of the soon-to-be-renamed Washington Post Co.

I’m not going to pretend to know why Bezos did this. Maybe he has some never-before-expressed desire to be a press lord. Perhaps he had a secret desire to save the Post from the death by a thousand cuts it was suffering. Or maybe, just maybe, he has some notion of how to turn the slowly sinking Post back into a viable business.

National ambition, local reality. The Post has always been a somewhat odd beast. It’s basically a local paper with national ambitions but without the means to realize them. The Post, unlike the Times, never committed the resources needed to go national. At the same time, local news coverage always took a back seat (and it was something like the back seat of a Porsche) to the all-powerful National Desk. The Post tended to regard Washington’s suburbs as foreign territory and showed only a fitful interest in the less-prosperous neighborhoods of the District of Columbia.

Lack of attention to the community is likely one reason the Post failed to derive any benefit from the rapid growth of the Washington metropolitan area. Between 1990 and 2010, the Post’s circulation fell faster than the Times’, even though Washington was growing much faster than New York.[pullquote]Between 1990 and 2010, the Post’s circulation fell faster than the Times’, even though Washington was growing much faster than New York.[/pullquote]

Disruption can sometimes bring renewal, but the Post has seen only pain and decline. When the collapse of classified ads and the disappearance of department stores clobbered ad revenues, the Post responded in the worst possible way: by gutting the paper. Successive rounds of buyouts gradually deprived the staff of many of its most talented writers, reporters, and editors. (If you leave the choice of who gets bought out to the staff, as the Post largely did, you are going to lose your best people.)

Shrinking news. The news hole shrank drastically, the business section disappeared (taking most tech coverage with it), domestic bureaus were shuttered, and arts and cultural coverage was savaged. For reasons that I cannot explain even to myself, I get three dead-tree newspapers–the Post, the Times, and The Wall Street Journal–delivered, but I am hard-pressed these days to spend more than five minutes on the Post each morning. It is still capable of occasional greatness, but mostly it is a pale shadow of its former self.

Until recently, the Post’s online efforts have also been pathetic. The online Post was originally established as a completely separate operation from the regular newsroom, in part to keep the online staff out of Newspaper Guild jurisdiction. Whatever that saved, it was a disastrous decision. The “real” Post staff regarded the online folks as an even lower life form than the Metro staff. While the Times and Journal built their online operations into real businesses, the Post moldered. It has since taken aggressive steps to remedy this, but it is very, very late.

So Bezos has bought himself a badly tarnished, once-great newspaper. What is he going to do with it? I hope he will at least stem its decline. If the Post’s finances stabilize, as they have shown some recent signs of doing (and the Washington Post Company is retaining legacy pension costs), Bezos can afford to absorb the current level of losses for many years to come. Even better, maybe he can find some Amazon magic to revive the Post.

The Motorola Mystery Deepens

The Moto X is a phone.

It’s a little newer than your phone.

It will be available for sale in late August.

This is the true story of the Motorola Moto X.

–“The Amazing True Story of the Moto X,” John Herman’s Buzzfeed story in the form of a poem

Fifteen months ago, Google bought Motorola for $12.5 billion. And we are still trying to figure out why.

The flagship Moto X handset, announced Aug. 1, doesn’t do much to clarify things. It’s the first product that bears the clear stamp of Google management. And it appears to be a very nice phone, with solid specs and fully competitive with the latest offerings from Samsung, HTC, Sony, and whoever else is still in the Android phone mix.

What it isn’t is particularly disruptive. Like other flagship Android phones–the Samsung Galaxy S4, the HTC One, the Sony Xperia Z–it has some distinctive features that set it apart from the pack. But at bottom, it is built from the same specs and will be sold through the same carrier channels at the same price and with the same plans as its competitors.

Its most notable features include customized colors and textures (initially available only on AT&T models), the ability to take a photo by shaking the phone, even if it is asleep, and then tapping the screen, and software that listens for and responds to voice commands at any time. These seem less gimmicky than some of the features of the Galaxy S4, but they don’t feel like enough to shake Samsung’s dominance of the Android market.

In the rumor-laden run-up to the Moto X announcement, there was talk that Motorola’s expertise in sensor technology would yield some revolutionary use of the phone’s ability to sense its surroundings, for example, a phone that might automatically adjust its behavior when it realized it was in a moving car. A lot of this speculation was encouraged by the public statements of CEO Dennis Woodside. So it’s not unreasonable that we hoped for something more interesting than shake-to-shoot.

So the question remains: just what problem does Motorola Mobility solve for Google? The Moto X doesn’t reduce Android fragmentation. Instead, it adds its own mildly customized user interface to the Android 4.x mix. It doesn’t answer the question of who is really in charge of the Android user experience, the OEMs, the carriers, or Google. And I have no particular reason to believe it will significantly reduce Moto’s losses, which are running at a pace of about $1 billion a year.

Maybe Google has some grand scheme for its disparate efforts whose logic will some day be revealed to us. Or maybe it really is just an unfocused collection of efforts, ranging from handsets to self-driving cars, funded by a massively profitable search advertising business. Some day we will know.


A Microsoft Without Client Software: It Could Be the Future

It’s easy to forget today, but Microsoft got its start as a highly disruptive company. The IBM PC running Microsoft software revolutionized business computing in the 1980s, bringing down the priesthood of the mainframe. As cheap clones and better applications flooded the market, Microsoft was everywhere. In the mid-1990s, the friendlier Windows 95 and the explosion of the World Wide Web made personal computing accessible to a mass consumer market, again dominated by Microsoft.

Twenty years later, though, it’s time to contemplate a future in which Microsoft mainly retreats to the back offices it once disrupted so thoroughly. Microsoft’s client business has gotten into deeper trouble faster than I imagined possible. A year ago, the company launched three new client operating systems. Windows RT, the OS designed for tablets with ARM system-on-chip processors, is an unmitigated disaster. Asus, the last OEM with any real commitment to RT products, appears to be bailing out, and Microsoft, after taking a $900 million writedown on unsold inventory of Surface RT tablets, reported that it sold just $853 million worth. Windows 8, the version for traditional PCs and Intel-powered tablets, is not doing much better. For the first time, the introduction of a new version of Windows has not only failed to boost PC sales but may actually have helped accelerate their decline, while Windows tablets and new convertible designs have failed to gain much traction.

In this field, Windows Phone 8 looks like the winner. At least it is gaining market share. But Windows Phone, with about 4% of the U.S. market, is a very distant third in what increasingly appears to be a two-horse race. So where does this leave Microsoft? Nowhere in the rapidly growing space of tablets. Still dominant in PCs, but in a market facing slow but inevitable decline. And hanging on by its fingernails in a smartphone market in which growth may be slowing substantially. The bottom line is that it is not at all clear that Microsoft has a viable future in client systems. To be sure, the Windows PC will be with us for a long time to come, but its future for Microsoft is one of managing decline. What would a Microsoft without clients look like?

On the revenue side, it’s not too bad. The Windows Division (these data do not reflect Microsoft’s most recent corporate reorganization), which consists of, well, Windows, accounted for just over a quarter of Microsoft’s gross in the most recent 12 months. Of course, the decline in clients is also going to affect client applications. Office falls into the Microsoft Business Division, which also includes Microsoft’s growing Dynamics line of enterprise management software. Best guess is that Windows and Office combined provide about half of Microsoft’s revenue. The self-explanatory Server and Tools unit and the Entertainment and Devices Division–mostly Xbox–make up most of the rest.

On a pro forma basis, a clientless Microsoft would still be a $40 billion company. The problems come when you look at profits rather than revenues. Windows and Microsoft Business provide nearly all the profit. This picture is not quite as grim as it appears at first glance. If Microsoft were to contract sharply, the $6.6 billion in overhead (“Corporate-level activity”) should also shrink considerably. And the Online Services Division, which includes Bing Search, Skype, and a variety of other services, appears to be in a serious turnaround, trimming its massive losses to $1.3 billion in the most recent 12 months. Server and Tools turns in a very nice 40% margin.

What you might be left with is a company with $40 billion in revenues and a 30% profit margin in place of one with $78 billion in revenue and a 35% margin. Not great, but clearly viable. And the fact that Microsoft will go on receiving streams of legacy income for years to come buys it the time it needs for change. The big question is how you get from here to there. Corporate reinventions, in which a company sheds what were once thought to be core businesses to set the stage for future growth, are hardly unheard of. IBM is probably the outstanding example of such a successful transformation. But the process is wrenching and brutal. And I have never heard of a reinvention on this scale being carried out by the management that landed the company on the rocks. Whoever will lead Microsoft to calmer waters, it will not be Steve Ballmer.

Semi-random Thoughts on China

I’m writing this as I am flying home from 10 days in China. Not the tech temples of Shenzhen but a concert tour of five of China’s largest cities with the Children’s Chorus of Washington. This opens your eyes to an amazing country in ways that visits to factories and electronics markets do not.

The single most striking thing about China is that the country has reached a point where the explosive growth of large cities simply must slow down because they are hitting two limits: pollution and traffic. Pollution has reached a point where, particularly in places like Beijing and Xi’an, it is a serious public health issue. And choking, immovable traffic is destroying any efficiency of urban life.

It’s not that China fails to recognize these problems. But the situation is very different from more developed countries, especially the U.S. We suffer emissions and traffic problems, too, but our political system seems unwilling or unable to address them. Washington, D.C., for example, is about to get rapid transit most of the way to Dulles Airport 25 years after the need became obvious.

China, by contrast, is building infrastructure as fast as it can. Every city seems to be torn up by cut-and-cover subway construction. The utility industry is moving to retire coal-fired power plants. But the efforts simply cannot keep up with the growth, so the situation grows worse despite heroic efforts to improve it. Our tour guide’s answer to all questions about how long it would take to reach our next destination seemed to be “if no traffic jam, needs one hour.” But there was always a traffic jam, making it nearly impossible to maintain a schedule.

Matters are not helped by the Chinese penchant for American-style big cars. Although SUVs are rare, the streets are choked with Honda Accords, Toyota Camrys, 5-series BMWs, and Audi A6s. Obviously, these are not the vehicles of average Chinese workers, but these luxury cars are seen with a density typical of the plusher areas of Washington or Manhattan. Though there is a plague of scooters and trishaws darting in and out of traffic, bicycles have largely disappeared from the main streets. Nine million bicycles in Beijing is a nice song, but it no longer reflects reality.

Political freedom is a tricky question. Certainly, China is not a free country. Political leadership is picked by the Communist Party, which, except for the doctrine of democratic centralism that concentrates all power in the hands of the Party, bears no allegiance to any communist principles that Marx, Lenin, Khrushchev, or even Togliatti would recognize. There are no meaningful elections and no real citizen input into government. The free flow of information is blocked and treatment of dissidents can be, and often is, harsh.

On the other hand, I have been in real police states, the Soviet Union for one, and China doesn’t come close. The pervasive fear is lacking, and I found ordinary Chinese remarkably willing to talk about their political and social aspirations. There are video cameras everywhere, but these days that hardly distinguishes China from the US or UK.

The real test for China will come with the inevitable slowdown in growth. The Chinese people have made a Faustian bargain since the end of the Cultural Revolution. They have accepted a level of Party control and repression in exchange for a government that delivers boundless growth, growth that has created a vast urban middle class and raised aspirations throughout the countryside. But the pace is unsustainable, and as it slows, will the people stick with the bargain? There are no revolutions more dangerous than revolutions of rising expectations.

When Microsoft Ruled Tech: An Elegy

Almost 20 years ago, when Microsoft was king, I became a full-time tech writer after many years of writing about economics and politics and working as an editor. As I watch Microsoft struggling to get its mojo back, especially in consumer markets, I realize that I really miss the swashbuckling Microsoft of the mid-1990s.

There’s never been anything quite like it, and may never be again. This was a Microsoft that its competitors feared and that many regarded as downright evil. It was at the start of a run of domination that would lead to it being found guilty of civil violations of antitrust laws in the U.S. and Europe. And it was an exciting and dynamic company. (Probably the closest thing to it today is Google. But despite Microsoft’s many sins, it lacked two of Google’s most annoying traits: a lack of focus and a streak of self-righteousness.)

What was this Microsoft really like? By 1994, Microsoft was on its way to ruling the PC world with Windows, and it was developing a never-realized vision in which Windows code would run on everything, from PCs to copiers to coffeemakers. But Windows 3.1, despite its success, was a thin, kludgy layer of code on top of the rickety foundation of MS-DOS. Within Microsoft, two groups were racing to replace it, the Windows 95 team headed by Brad Silverberg and the Windows NT group skippered by Jim Allchin. In the best Microsoft tradition, these groups competed hotly with each other. Windows NT was the more ambitious effort, built on a solid operating system kernel architected by Dave Cutler, who had created VMS for Digital Equipment. Windows 95 was a huge user interface improvement, but still a kludge dependent on a DOS core. Windows 95 was an instant hit, while NT provided Microsoft with its OS of the future: the NT kernel powers all current Windows versions.

Microsoft was a fierce competitor. But until recently, it has had phenomenal luck in the incompetence of its competitors. Apple slowly crumbled through the 90s, turning out lousy Mac hardware running outdated software, and steadily lost market share. The Newton, years ahead of its time, sapped scarce resources. IBM’s attempt to challenge Windows, OS/2, was just the consumer product you would expect from a mainframe maker. The dominant DOS applications software makers, WordPerfect and Lotus, both missed the rise of Windows, leaving the field open. Microsoft Office was born more or less by accident. Microsoft had developed Excel for the Mac, which lacked a good spreadsheet, but was having a hard time getting customers to trade MacWrite for Word. The company created Office by throwing in a copy of Word with Excel, a product that former Office marketing chief Laura Jennings once described to me as “crap in a box.”[pullquote]Microsoft was a fierce competitor. But until recently, it has had phenomenal luck in the incompetence of its competitors.[/pullquote]

That the internet and Internet Explorer would be central to the government’s antitrust case is the great irony of Microsoft history. Bill Gates and other executives of Microsoft were late to recognize the importance of the internet. Windows 95 originally shipped without a browser or any real internet support. This mistake, probably the biggest in the company’s history, helps explain why it came to regard Netscape as an existential threat that had to be destroyed. During the development of Windows 98, there was a fierce battle between Silverberg, who wanted a more net-centric approach for the future, and Allchin. Allchin won, and Silverberg and much of his team left the company. It’s impossible to say whether Microsoft would have done better had the fight gone the other way, but it definitely would have been much different.

Microsoft in the mid–90s was a fun company to cover. It believed in Bill Gates’ mission of putting a PC in every home and on every desktop. Its executives were open and frank, and it dreamed big dreams. Its aggressiveness made it interesting. I used to look forward to my regular trips to Redmond. The antitrust case, a disaster from which Microsoft has never really recovered, sucked most of the fun and a lot of the life out of the company.

Microsoft was going to change anyway: It had become a big company and many of the executives who had led the phenomenal growth period and had grown rich beyond imagination in the process, were starting to move on. In nearly every case, their replacements were more managerial and less adventurous. The prosecution added to the growing sense of caution, and Gates, much of whose time was absorbed by the case, seemed to lose his fire and, gradually, his interest.

The dominant companies of today, Apple and Google, are nowhere near as much fun to write about as Microsoft in its prime. Both are secretive, Apple obsessively so, and neither makes its senior executives available except in very tightly controlled situations. For a writer fresh to the tech business, the Microsoft of 1994 was a dream. In an industry that has grown up a lot in the last 20 years, I doubt we will see its like again.

Microsoft Reorganization and the Future of Xbox

Microsoft today announced a long-awaited reorganization that is aimed at eliminating the company’s often warring business units in favor of a more unified, collaborative organization. Kara Swisher at All Things D had the details first and has reported them in great depth, so I’m not going to go over that ground. But I do want to speculate a bit on what the new arrangement means for the Xbox, the one product that has generated any real excitement out of Redmond in the past few years.

On the one hand, putting Xbox in a consolidated hardware unit, under former Windows engineering chief Julie Larson-Green, shows how mainstream and central to Microsoft’s ambitions the Xbox has become. On the other, it marks the formal end of Xbox as a rebel within the Microsoft camp.

Xbox was the product of a Microsoft skunkworks run by J. Allard and Robbie Bach. It was based in Redmond, Wash., like the rest of the company, but was physically located in offices and labs a few miles away from the main Microsoft campus. At least in its early days, it felt very different, and much less corporate, than other Microsoft operations. The team produced the original Xbox, the Xbox 360, and the revolutionary Kinect sensor.

To be sure, a lot of what made the Xbox unit special ended when Allard and Bach left the company in 2010 and Xbox got reeled in closer to the mother ship. Don Mattrick, who took over for Bach as head of the unit, announced recently he was leaving Microsoft to become CEO of game maker Zynga. After today’s reorganization, it’s official that Xbox is just another product in the Microsoft family.


Apple, Microsoft, and Listening to Customers

Earlier this week, Apple released iOS 7 Beta 3, the third test version in a month of the upcoming software for iPhone and iPad. Users, by definition registered Apple developers, who installed it made a remarkable discovery: The Helvetica iOS system font, widely denounced as too light by earlier users, had been replaced by a slightly heavier version, producing a big improvement in readability. Apple, to the amazement of many who view the company as a design dictatorship, had listened and changed in response to what it heard.

Oddly, it’s Microsoft, once the paragon of listening to customers, that seems to have lost the knack. While the preview version of Windows 8.1 addresses some of the most serious problems with the touch-centric Metro, it does very little to improve the legacy Desktop side of Windows 8. As a result, it does nothing to assuage traditional Windows users’ deep unhappiness with Windows 8. From developers to OEMs to ordinary users, I keep hearing complaints that Microsoft just doesn’t listen.

Apple and Microsoft are treating their previews very differently. Although calling Windows 8.1 a preview, Microsoft seems to think of it as a nearly finished product. In fact, the Windows App Store practically begs you to install the preview.

By contrast, iOS 7 is a true beta. It is only available to registered iOS developers (though anyone can become one by paying Apple $99 a year, it does require skin in the game). The download and installation process seems to be deliberately obscure to discourage the casual. And to download the software, you must agree to a confidentiality agreement that prohibits publication of details that have not yet been made public. All of this is typical of serious software testing.

With Microsoft planning to release Windows 8.1 to OEMs by the end of August, I don’t expect that we will see more than minor tweaks to the software. The legacy Desktop will remain a jarring experience, ill-suited to either touch or non-touch use, with Metro screens and apps that pop up seemingly unbidden at inconvenient moments. The changes from the version released last October are depressingly minor. There is more real change on the Metro side; in particular, Microsoft has drastically reduced the circumstances under which you have to drop back into Desktop. But the Metro apps, especially Mail, are still seriously under-featured and the app store remains a wasteland.

Microsoft seems to be following a similar course for the new Metro-fied versions of Office applications. Office 2013, released last year along with Windows 8, offered only minor concessions to touch, and the applications were largely useless on the Surface or other tablets unless there was a keyboard attached. In response to a strong negative reaction, Microsoft accelerated development of real touch versions. These are supposed to ship before the end of the year, but so far Microsoft is keeping them close to the vest. This is especially concerning because making applications such as Word and Outlook touch-friendly will require a radical simplification of the interface and, as a result, a lot of familiar features will have to be removed or hidden. You would think Microsoft would be seeking as much user input on these decisions as possible, but that does not appear to be the case. The result is likely to be another disappointment, though I really hope I am wrong.

iOS 7, by contrast, seems to be evolving quickly. I don’t think Apple will maintain the pace of a new release every couple of weeks, but many subtle design changes in the two updates we have seen suggest that Apple is heeding the concerns of testers. As Marco Arment, developer of Tumblr and Instapaper, put it in his blog:

Since Apple is just people, they’re usually trying to figure out the best answer to the same decisions and trade-offs we argue about on the outside: what’s best for the user, what’s best for battery life, what apps should be allowed to do, how multitasking should work, how far sandboxing should go, and so on. Almost any decision that causes controversy on the outside has almost certainly caused just as much on the inside, it’s probably still being argued, and the decision probably isn’t set in stone.

We can’t participate directly in those debates, but we can provide ammo to the side we agree with.


A Requiem for WebTV

It’s not too often that I am reminded of something I wrote 17 years ago and am able to read it without cringing, at least not too much. This happened over the weekend, after Microsoft quietly announced that it was shutting down its MSN TV service at the end of September. The Verge linked to a 1996 BusinessWeek column in which I said that the brand-new WebTV might be the “product that could turn the World Wide Web into a mass-entertainment medium.”

That call was premature by about a decade, but the folks who created WebTV–Steve Perlman, Bruce Leak, and the late Phil Goldman–and soon sold it to Microsoft for $425 million were onto something important. WebTV was the first of a long line of set top boxes designed to merge standard television with what we now call over-the-top internet video, an effort that continues today with Apple TV, Google TV, Roku, Xbox, and many others.

WebTV and its successor MSN TV never found much success. It turned out that not very many people wanted to see a mostly text internet on their TV sets, nearly all of which were then CRTs. The WebTV engineers did a brilliant job of making text readable on displays never designed for it, but there were limits to what they could accomplish. Content was a huge challenge, especially with the standard connectivity coming through a 28 kilobit per second dial-up modem (later versions supported broadband). As I wrote:

WebTV also requires a Web that is much more visual than today’s. This is an entertainment appliance, not a research tool. You can’t save or print pages. On the other hand, an online tour of an art gallery was beautiful on WebTV. Support for a number of multimedia technologies, including Java programs, RealAudio sound, and QuickTime movies, was missing from my prototype, but they should be ready soon.

It’s easy to imagine what’s needed to make TV-based browsing take off: shopping, with click-to-order multimedia catalogs. (WebTV comes with a slot for the “smart” credit card of the future.) Movie guides with previews on demand. Travel information, with on-screen booking services. Online games.

Perhaps most important, WebTV was drop-dead simple to set up and use at a time when most people were still struggling with recording on VCRs.

We are still a long way from the perfect, do-everything set top box. But WebTV was an important first step on the path that is getting us there.

Windows 8.1: A Step Forward, a Ways To Go

Windows 8.1 has arrived, at least in preview form. And while it shows that Microsoft has made significant improvements in the eight months since the original version of Windows 8 shipped, it also shows just how far the software has to go before it becomes a truly useful advance.

I have been running 8.1 for the past week on a Lenovo ThinkPad Helix, a convertible design that gave the experience of using it both on a more-or-less conventional touchscreen laptop and on a standalone tablet. I would also have liked to try it on a Hewlett-Packard Envy x2 convertible, but the current preview edition does not work on the Envy’s Atom processor (the Helix is powered by an Intel i5).

Microsoft seems anxious to have as many people as possible try 8.1, an unusual approach to software that has not been officially released. While Apple is restricting access to the preview version of OS X Mavericks to registered, paid members of the Mac development program, Microsoft is advertising 8.1 to all comers in the Windows store. It’s a big download, over two gigabytes, but the installation was painless.

The two most talked-about changes in 8.1 turn out to be no big deal. A simple change in Taskbar properties gives a number of new startup options, including booting directly to the legacy Desktop instead of the new Metro-style startup screen. But since all it ever took to get from the Start screen to the Desktop was a single click or screen tap, this isn’t exactly a revolution.

Similarly, the return of the Start button has been greatly exaggerated (though, in fairness, Microsoft has been making it clear for some time what the new Start button would do.) What’s new is a Windows icon at the far left of the Taskbar, where Windows 7’s round Start button used to be. Tapping it has exactly the same effect as pressing the Windows key on the keyboard or swiping in from the right and tapping Start: It brings up the Start page. If the appropriate property is set, it will take you to the Apps list instead, which is kinda, sorta like the old Start menu. (If this option is chosen, it affects all three methods; all will bring up the Apps list instead of the Start page.) But I never considered the absence of the Start button as anywhere close to the heart of Windows 8’s problems, so I find the value of this change to be modest.[pullquote]I never considered the absence of the Start button as anywhere close to the heart of Windows 8’s problems, so I find the value of this change to be modest.[/pullquote]

Far more useful is a major expansion in your ability to configure and control your system from within the Metro interface. In the original version of Windows 8, all but the simplest tasks required opening a Desktop control panel. 8.1 lets you do most of the chores you encounter with any frequency by tapping the Change PC Settings option you are offered with the Settings charm, from adding or modifying a user account to choosing accessibility options. This is a considerable benefit when working without a keyboard in tablet mode; those Desktop control panels are very difficult to handle with touch. One area where the new approach falls short, though, is networking; dealing with any real connectivity issues, including any troubleshooting, still requires going to Desktop.

Another significant change is greater flexibility in showing more than one app in Metro. The original version let you open a second app, but it was restricted to a vertical strip of a quarter of the screen on the left or right. Now you can choose among a quarter, a third, or half of the screen and, on big enough displays, you can open three apps. But they are still restricted to non-overlapping vertical strips, an arrangement far inferior to traditional windows on larger displays. Choosing which applications get to share the screen is also an unnecessarily fiddly process.

Many of the annoyances from the original Windows 8 remain. The need to switch between Metro and Desktop modes is reduced but not eliminated, regardless of your choice of primary mode, and Desktop is still mostly unusable with touch. (Lenovo’s inclusion of a stylus with the Helix is helpful, but at the same time an admission of failure.) And after eight months, the lack of third-party Metro apps remains a huge problem. The necessity to switch to Desktop could be greatly reduced if there were more native apps available.

There’s also the problem that Windows 8 does not let you choose different default apps in different modes. Where Metro versions exist, they are the defaults; for example, clicking on a picture file in Desktop opens the Metro Photos app rather than the Desktop Photo Viewer. There’s no way to set separate defaults for each mode if that’s what you would prefer. The exception is Internet Explorer 11, where the appropriate version opens in each mode, but only if IE is your default browser. If you switch to, say, Chrome, you will get the Desktop version of IE in both Desktop and Metro. Go figure.

The real test for Windows 8 will come this fall, when Microsoft plans to unveil a touch-optimized version of Office. Its big selling point for Windows 8 and Windows RT tablets such as the Surface Pro and Surface has been the unique availability of Office. But Office, even with the touch enhancements of Office 2013, is a deeply unsatisfactory experience on a tablet.

Tabletizing Office is no easy task. To work well with touch, its interface has to be simplified radically, meaning that many features will have to be eliminated or hidden. With a 20-year history of Office applications providing every option, bell, or whistle that any user might want, this sort of pruning runs deeply against the grain. But including too many features will, ironically, seriously compromise usability. It will be very interesting to see what choices Microsoft makes.

Finally, a plea to Microsoft and its OEM partners: Please fix the behavior of touchpads in Desktop. Laptops designed for Windows 8 generally come with large, no-button touchpads. MacBooks set the standard for these some years ago: A one-finger tap acts like a normal mouse click, a two-finger tap brings up a context menu. This works on Windows touchpads but, in keeping with the Windows philosophy that there must always be more than one way to do anything, a tap on the right side of the touchpad, with one or two fingers, also brings up a context menu. This is disorienting, unnecessary, and symptomatic of Microsoft’s inability to ever let anything go.

Lenovo, to its credit, offers its own solution. A tab buried deep in the Mouse control panel lets you restrict the right-click effect to a small area of the pad. It even lets you set the area in the lower right corner when you are using the touchpad as a pointer and in the upper right corner when you are using the ThinkPad eraser-head TrackPoint. It’s a rare win for tradition, flexibility, and convenience. Windows 8 could use a few more of these.

NSA Spying: Why So Little Outrage?

Since the revelations about the extent of telephone and internet surveillance by the National Security Agency first broke a couple of weeks ago, I’ve been struck by how little outrage there has been aside from activists at the left and right end of the political spectrum. Today, my wife Susan, who is tech savvy but doesn’t live and breathe this stuff the way I do, answered the question:

“I assume that whenever I type something on a computer somebody is watching. How is the government different from Google?”

The fact is that most of us have, without really thinking about it, surrendered our assumptions of privacy. Someone–it may be Big Brother, or a private company that can be forced to share the information with Big Brother without telling us–is watching and we no longer much care. This attitude has seriously interfered with our ability to work up much outrage.[pullquote]Someone–it may be Big Brother, or a private company that can be forced to share the information with Big Brother without telling us–is watching and we no longer much care.[/pullquote]

There’s another factor. The NSA/CIA/FBI abuses of the 1960s and 70s, revealed in detail by the Church Committee and other investigations, did real harm to real individuals and groups. People and groups were targeted for surveillance and sometimes harassment based on their constitutionally protected opinions, speech, and actions. People were outraged because the government’s behavior was outrageous.

So far, at least, no one has been able to point to any harm to individuals or groups that has been caused by NSA surveillance. Most Americans regard their government as mostly benign, and the threat raised by government information collection is very abstract. As Matt Blaze of the University of Pennsylvania pointed out at the Computers, Freedom, and Privacy conference in Washington, most Americans are comfortable with the government having the information as long as Barack Obama or George W. Bush was in charge (though few people are equally comfortable with both), but almost no one would trust Richard Nixon with it. Nixon is safely out of the picture.

Personally, I am far more bothered by the NSA vacuuming up records on every phone call made in the U.S. than I am by the PRISM program for collecting internet data. There is still much we don’t know, and probably never will know, about PRISM, but it sounds mainly like a streamlined system for the NSA to retrieve targeted information, officially only on “non-U.S. persons,” from internet companies.

On the other hand, the collection of phone data gives the government a shockingly complete record of our lives. In many ways, this so-called metadata is more useful than the content of the calls themselves because the data can be parsed by computer. Courts have long imposed a much weaker standard for the collection of call data, which requires only a subpoena, than for content, which requires probable cause and a wiretap warrant. But those rules were written before computers made the analysis of data far more powerful and potentially far more destructive of privacy.

At the CFP conference, NSA whistleblower Thomas Drake had a friendly audience for his attacks on NSA suspicionless surveillance. His bitterness at the agency is understandable, since it hounded and (unsuccessfully) prosecuted him for revealing financial mismanagement more than any intelligence secrets. But he went over the line when he compared the NSA to the Stasi, the East German secret police that set the Warsaw Pact standard for spying on its own citizens.

The Stasi destroyed lives by the thousands for sins, real or imagined, turned up by its snooping. So far, there is no evidence the NSA’s collection of information has been abused, which again accounts for the lack of any real public outrage. But the data is sitting there on the NSA’s computers, and that is dangerous, given our own history of abuse. Maybe this information has been useful for disrupting some terror plots, but we need a discussion of whether it was worth the price–a discussion in which the government has been unwilling to engage.

Why the FAA Slow Walks Electronics in Planes

Last Friday, The Wall Street Journal reported that the Federal Aviation Administration was moving to relax rules banning passengers from using phones, tablets, and other electronics at the beginning and end of flights. But by Monday, the Journal was warning us, not so fast. It will be many months before the rules change and even then not all devices may be allowed on all planes.

Behold the thalidomide effect. In the early 1960s, the U.S. Food & Drug Administration denied approval to thalidomide, a drug designed to treat morning sickness in pregnant women, thus sparing the U.S. from the severe drug-induced birth defects that plagued Europe. The non-approval won heaps of praise for the agency and for Dr. Frances Kelsey, the examiner whose suspicions kept thalidomide off the U.S. market. And it greatly strengthened the already strong belief of regulatory agencies that inaction is the safest course for bureaucrats who live in constant fear of political fallout if a decision goes bad.

That bias toward inaction and extreme caution is why the FAA will now spend the next year or so testing every conceivable device in every known type of commercial aircraft before inevitably concluding that the use of electronics is safe in all phases of flight. It will continue to ban the use of cell-type radios during takeoff and landing–you don’t want to take any chances during these critical phases of flight, and these signals are orders of magnitude stronger than incidental emissions or even Wi-Fi transmissions.

We know the use of these devices is safe because it is going on all the time. I don’t think my use of devices is atypical. I dutifully stow my phone and tablet for takeoffs and landings, but they are in airplane mode, not shut down. I leave them in airplane mode during the flight because there’s generally no reception above 10,000 feet anyway and leaving the radio on just drains the battery as the device searches for a network. So from beginning to end of any flight, there are undoubtedly dozens of devices powered up at all times. [pullquote]The flight crew’s tablets are used in the cockpit, right on top of the instruments whose putative sensitivity to interference was the original reason for the ban.[/pullquote]

The strongest case for allowing device use comes from the airlines themselves. All of the paper documentation traditionally stowed in the cockpit and carried in the pilots’ flight bags takes space, adds weight, and is an enormous pain for both the airline and the crew to keep up to date. All of this can be eliminated by using tablets, mostly iPads in practice, as electronic flight bags, which airlines are doing as fast as they can. American Airlines just became the first U.S. carrier to complete the transition, including its fleet of ancient MD-80s. And these tablets will be used in the cockpit, right on top of the instruments whose putative sensitivity to interference was the original reason for the ban.

Unlike the Transportation Security Administration’s ill-fated attempt to allow small pocket knives and other objects back on planes, the new rules on electronics will eventually go into effect. There doesn’t appear to be any organized pushback to the idea. Cabin crews, whose opposition was instrumental in killing the TSA change, will doubtless be glad to stop enforcing rules they, along with everyone else, regard as pointless.

But all the incentives at regulatory agencies are to move slowly and cautiously. So even though the FAA is a sprinter compared to the glacial Federal Communications Commission, it will take many months before any change happens. In the meantime, you’ll just have to make do with the airline magazine, or Skymall, or, as I do, something on paper that you have brought along to amuse yourself for the first and last few minutes of a flight. It really won’t hurt you to do without your electronics for a few minutes.

How Windows 8 Is Truly Broken

Regular Tech.pinions readers know that I am not a fan of Windows 8. But an experience today brought home just how truly broken the two-operating-systems-in-one design really is.

I have been setting up a Lenovo ThinkPad Helix–one of the new breed of convertible tablet/laptops–for evaluation. I always try to do real work on eval systems, and a project I am helping with requires forms to be filled out in Adobe Acrobat Reader. So I installed Acrobat, no problem (except for Adobe sneaking in Google Chrome and the Google Toolbar in the same installation).

Installing Acrobat changed the default program for handling PDF files from Microsoft’s Reader, a Metro program, to the Desktop Acrobat. And Windows 8 file assignments are global; there is no way to specify one program for use in Metro and another for Desktop.

So once Acrobat was installed, opening up a PDF web page or mail attachment in Metro dumped me into Desktop to use Acrobat. I could manually change the assignment of PDF files back to Reader, but then opening a PDF file on the Desktop switched me to Metro. For a saved PDF file, there’s the clumsy option of Windows’ Open With command.

Microsoft has turned one of the simplest and most natural of operations into a thoroughly annoying pain in the ass. And dozens of other file types, particularly audio, video, and photos, cause similar problems. One way or another, accessing them forces jumps between Desktop and Metro.

The solution to this boneheaded problem is obvious: Allow a separate file association for each mode, and ship the OS with appropriate defaults so that content opens in the right program for each mode. It’s possible that the problem will be fixed in Windows 8.1 when the preview release comes out next week. But commenters on the official Windows blog have been asking for the change and Microsoft has not responded, so I’m not hopeful.

One of the many disturbing things about Windows 8 is the sense that Microsoft has stopped listening to its customers. They didn’t listen during the beta test and they haven’t listened in the nearly eight months that the software has been languishing in the marketplace.

It’s possible that the two-headed nature of Windows 8 is so conceptually flawed that it cannot be fixed. We’ll see shortly how serious Microsoft is about trying.


Apple Could Challenge Microsoft for Desktop Dominance. But It Won’t

Apple’s opportunity to dominate desktop computing probably disappeared the day in 1981 that IBM shipped the Personal Computer. Apple’s first attempt at a “business” computer, the Apple ///, was a technical and commercial flop. The anti-corporate “computer for the rest of us” marketing pitch that accompanied the introduction of the Macintosh in 1984 went over badly with business at a time when businesses were buying most of the computers.

The argument, occasionally still heard, that the better system lost is arguable at best. The Mac was a huge usability breakthrough, but in the early years, the graphical user interface demanded more than the hardware could deliver. Microsoft made a major leap with Windows 3.0 in 1990 and by the mid-1990s, when consumer sales became really important, Windows 95 and Windows NT were moving ahead of the aging Mac OS. Mac continued to slip as Windows forged ahead and it wasn’t until Apple’s big switch to Intel processors in 2005, along with increasingly powerful and stable versions of OS X, that Apple had a real claim to equality, let alone superiority.

But the divergent directions indicated by Windows 8 and OS X Mavericks change everything. Although it was Steve Jobs who began talk of the post-PC world when he introduced the iPad in 2010, it seems like it is Microsoft that has bought into the idea. The attempt with Windows 8 to design an operating system that spans traditional PCs, hybrids, and tablets has resulted in a sub-optimal experience on each. With Windows 8.1, Microsoft seems on its way to fixing some of the worst problems of Windows 8 (and its ill-begotten sibling, Windows RT) on tablets by eliminating some, perhaps most, of the need to drop back into Desktop mode to accomplish key tasks. But only relatively minor changes are planned for Windows 8 on a traditional PC, an experience that leaves many users longing for Windows 7.

Mavericks, by contrast, marks Apple’s renewed commitment to the traditional PC, a commitment that had been at least a little in doubt with the surge of iOS features into Lion and Mountain Lion. Except for improved notifications, an idea that borrows from and builds on iOS, the big changes in Mavericks are Serious PC Stuff: A new tabbed interface for the Finder, tagging for better file location and classification, major under-the-hood changes to cut power consumption, and greatly improved support for multi-display setups. Along with a badly overdue, but radical and exciting overhaul of the Mac Pro, Apple is telling Mac users, “We’ve got your back.” [pullquote]Mavericks, by contrast, marks Apple’s renewed commitment to the traditional PC, a commitment that had been at least a little in doubt with the surge of iOS features into Lion and Mountain Lion.[/pullquote]

Apple is now in a position to claim clear superiority in traditional PCs. The new MacBook Airs are the first computers to ship with Intel’s next-generation Haswell processors, and through a combination of close work with Intel and a lot of software fine tuning, Apple is able to beat the industry by a wide margin on battery life–something made possible by complete control of hardware and software. I expect Apple will do equally well with its MacBook Pros and iMacs this fall when Intel ships the rest of its Haswell line.

Macs could rule the world. Apple’s market share has been rising as Windows PC sales have fallen sharply while Mac sales have been mostly flat. I expect this trend to continue and for Apple’s share to rise. But–in partial answer to the question raised by John Kirk earlier this week–I don’t expect Apple to go after the mass market still dominated by Windows.

The reason is simple. According to NPD, the average selling price of a Windows PC at the end of last year was $420. ((NPD data probably understate the average somewhat because the firm measures retail sales, missing the often more expensive units sold directly to enterprise buyers.)) The cheapest Mac is the $599 Mac mini, and the cheapest laptop is the $999 MacBook Air. Apple will cheerfully sell you an iPad for as little as $329 and provide a first-rate tablet experience, but there is no way it can provide what it regards as a satisfactory Mac experience at the price most Windows machines sell for. ((I don’t mean to perpetuate the myth of an Apple premium. On an equal feature basis, Macs are no more expensive than Windows systems. It’s just that Apple only sells top-of-the-line products.))

The great bulk of buyers is unable or unwilling to spend what Apple commands, and Apple is unwilling to cheapen its products, slash its margins, or both, to meet the market. As a result, Apple will settle for modest gains in share.

This does not mean, however, that Microsoft is home free even to hold on to its share of a shrinking market. The real threat could come from the bottom, from Google’s Chromebook. Chrome OS, whose only application is the Chrome browser and which depends on web apps (key ones modified to work offline) to do anything, performs well on hardware far more modest than required for Windows or Mac OS. For users with relatively modest needs and good internet connectivity, a Chromebook is a viable low-cost alternative to both a tablet and a Windows laptop. And it will only get better as Google converges Chrome OS and Android, potentially bringing a richer store of apps to Chrome.

These days, the fact that Apple is not coming after it as hard as it might is cold comfort to Microsoft.


Google, Motorola, and the Future of Android

To hear both Sundar Pichai, head of Android and Chrome at Google, and Dennis Woodside, CEO of Motorola Mobility, tell it, Motorola is just another Android OEM despite being a wholly owned Google subsidiary. This may be technically true at the moment, but it cannot be true for the long run. And just what Google does with Motorola has huge implications for the future of Android.

Business realities alone say the current arrangement cannot last. Motorola is a hole of at least $10 billion (purchase price plus cumulative losses, less the gain from the sale of the set top box business) in Google’s balance sheet. Although there was speculation at the time of the acquisition that Google was really after Moto’s patents, the standards-essential patents are subject to fair, reasonable, and non-discriminatory licensing requirements and are worth much less than many believed. Sooner or later, Moto has to start paying its way.

Woodside himself suggested, perhaps without intending to, that the relationship has to change during an appearance at the D11 conference a couple of weeks ago. Competitors, he noted, are earning 50% margins on smartphones. ((Of course, the only profitable competitors are Apple and Samsung.)) “We don’t necessarily have the same constraints,” he said. “One of the areas that is open for Motorola is building high-quality low-cost devices. The price of a feature phone now is about $30 on a worldwide basis. The price of a smartphone is about $650. That’s not going to persist.”

The difficulty is that Apple and Samsung, by virtue of their enormous volumes and tightly controlled supply chains, are already the low-cost producers. Motorola is not going to beat them on the cost side. So to underprice them, as Woodside is threatening to do, will require sacrificing gross margins, perhaps selling phones at a unit loss. For a business unit already losing money by the bucket, that would seem to be a suicidal course.

Unless, of course, someone is prepared to subsidize this raid on the business models of Apple and Samsung. And that someone would have to be Google, which certainly has the deep pockets needed for this fight. Taking on Apple, while difficult, doesn’t pose huge problems for Google. Over the past few years, the relationship of the companies has deteriorated from best buddies to frenemies to all-out competitors.

Samsung is a very different matter. The Korean giant is second only to Google itself in importance in the Android ecosystem. It is by far the largest seller of Android handsets, from the iPhone-challenging Galaxy S 4 to low-cost units for emerging markets. And it has to be watching the Google-Motorola relationship with an extremely wary eye.

For now, Google and Samsung are co-dependent. That fact is what lies behind Google’s much trumpeted arms-length relationship with Motorola. But the relationship will be severely tested if Motorola goes at the heart of Samsung’s Android business model. (Microsoft’s OEM partners were very unhappy when it went into hardware competition with the Surface and Surface Pro, but at least it did not try to undercut their pricing. And, for better or worse, poor Surface sales have largely spared it fallout from entering the competition.)

Samsung has options if it comes to view Google as a competitor in a way that makes the current Android arrangements untenable. It could fork Android, going forward with its own flavor of the operating system and its own services, home-grown or developed in partnership with other players, in place of Google’s. It could accelerate the development of Tizen, the Linux-based mobile operating system it has sponsored along with Intel. Or, far less likely, it could move to Windows Phone; while this might be the easiest course to execute, trading one gorilla of a dance partner for another would make it unattractive.

The defection of Samsung from Android would put tremendous strain on Samsung, Google, and the Android world. Software has never been Samsung’s long suit. It can afford to buy a lot of talent, but changing a hardware company’s culture to support the software effort required is very difficult. Android would become largely a Google/Motorola business. The viability of all the profitless Android phone makers is dubious, let alone their ability to provide leadership.

If these hypothetical strategies play out, we could see a very different phone market: Apple would continue to be Apple, mostly riding above the fray. Samsung would be slugging it out with Googlerola. And Microsoft and BlackBerry would be trying to squeeze out some gains from the confusion.


An Old Mystery Solved: Project C-43 and Public Key Encryption

For most of history, it was believed that the only way a message could be encrypted was if the sender and the receiver shared the secret of scrambling and unscrambling the text. That view changed sharply in 1976, when Stanford computer scientists Martin E. Hellman and Whitfield Diffie published a paper called “New Directions in Cryptography” that described what is now known as public key encryption (PKE). Two years later, Ron Rivest, Adi Shamir, and Len Adleman of MIT described a simpler method. When the web came along, the Diffie-Hellman and RSA algorithms became the bedrock of secure communications.
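The Diffie-Hellman trick, agreeing on a shared secret over a channel anyone can listen to, can be sketched with deliberately tiny numbers. Every value below is an illustrative assumption; real deployments use primes thousands of bits long:

```python
# Toy Diffie-Hellman key exchange. The prime, generator, and private
# exponents here are tiny illustrative assumptions, not secure values.

p = 23   # public prime modulus
g = 5    # public generator

# Each party picks a private exponent and publishes g^x mod p.
alice_private = 6
bob_private = 15

alice_public = pow(g, alice_private, p)  # sent in the clear: 8
bob_public = pow(g, bob_private, p)      # sent in the clear: 19

# Each side raises the other's public value to its own private exponent.
alice_shared = pow(bob_public, alice_private, p)
bob_shared = pow(alice_public, bob_private, p)

# Both arrive at the same secret (here, 2) without ever transmitting it.
assert alice_shared == bob_shared == 2
```

An eavesdropper sees p, g, 8, and 19, but recovering the private exponents from them is the discrete logarithm problem, which is believed intractable at realistic sizes.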

But PKE had an unknown pre-history. As early as the 1960s, James H. Ellis of GCHQ/CESG, the British equivalent of the National Security Agency’s Central Security Service, was experimenting with ideas about “non-secret encryption.” He described his work in a 1970 paper entitled “The Possibility of Non-Secret Digital Encryption,” but it remained classified until 1997. In the 1970s, CESG researchers Clifford Cocks and Malcolm Williamson found ways to implement PKE, but this work, too, stayed secret for more than two decades. A 2004 Wired story by Steven Levy gives a detailed account of the British efforts. In his account, “The History of Non-Secret Encryption,” Ellis drops a fascinating hint of earlier work. Reflecting on the “obvious” impossibility of secret communications without a shared secret, he wrote:

The event which changed this view was the discovery of a wartime Bell Telephone report by an unknown author describing an ingenious idea for secure telephone speech… The relevant point is that the receiver needs no special position or knowledge to get secure speech. No key is provided; the interceptor can know all about the system; he can even be given the choice of two independent identical terminals. If the interceptor pretends to be the recipient, he does not receive; he only destroys the message for the recipient by his added noise. This is all obvious. The only point is that it provides a counter example to the obvious principle of paragraph 4. The reason was not far to seek. The difference between this and conventional encryption is that in this case the recipient takes part in the encryption process. Without this the original concept is still true. So the idea was born. Secure communication was, at least, theoretically possible if the recipient took part in the encipherment. ((Ellis, J.H., “The History of Non-Secret Digital Encryption,” p. 1))

Ellis refers to a document titled “Final Report on Project C-43” without any additional identifying information. For years, this passing reference has intrigued the cryptographic community with the possibility that Bell Labs researchers might have made important progress on private key encryption as early as the 1940s. It turns out that the mysterious Final Report exists and is available (if obscurely) online. ((The original document is probably in the archives of the Defense Technical Information Center at Fort Belvoir, Va. Preliminary and progress reports on C-43 are at the National Archives & Records Administration’s Archives II facility in College Park, Md., but the final report is not with them.)) ((Be patient with the download. It’s a 6 MB scanned PDF and can take a while to load.))

Some background on Project C-43 is needed to make sense of this. The name refers to a wartime contract between Bell Labs and the National Defense Research Committee for work on systems for secret speech transmission. The goal was both to devise methods of secure communication for U.S. forces and, more urgently, to find means to unscramble German and Japanese transmissions. (Because voice communications at the time were analog signals, the digital techniques used for encrypting text were not available; purely audio techniques had to be devised.) AT&T’s work ranged from theoretical projects at its West Street lab in Manhattan to running radio intercept stations in Holmdel, N.J., and Point Reyes, Calif.

Project C-43 ran in parallel to, but apparently with little or no contact with, a better-known Bell Labs secret speech effort, Project X. This project, which produced a cumbersome but effective method for secure speech transmissions between fixed locations, is described in detail in the official history of Bell Labs. ((Fagen, M.D., ed., A History of Engineering and Science in the Bell System: National Service in War and Peace (1925-1975), Bell Telephone Laboratories, 1978, pp. 296-312.)) Bell researchers submitted regular progress reports on C-43 to the NDRC, and at the end of the contract in 1944, Walter Koenig Jr., the engineer who headed the project, compiled these into a final report. (I don’t know why Ellis talked about an “unknown author”; Koenig’s name appears on the title page.)

One obvious way to secure speech is to hide it with noise that can then be removed at the receiving end by a technique similar to what is used in today’s noise cancellation systems. But the approach is fraught with many difficulties, not the least of which is securely transmitting to the recipient a copy of the noise that is to be subtracted. The Project X method required courier distribution of noise tracks on phonograph records. Because the noise had to be as long as the speech it masked and each track could only be used once–it was the audio equivalent of a Vernam cipher or a one-time pad–the system was exceedingly cumbersome. In the course of a discussion of masking methods, Koenig, almost as an aside, describes what seems to have been a thought experiment:

Another masking system is shown in figure 21, which uses only one line. In this system, noise is added to the line at the receiving end instead of at the sending end. Again, the noise can be perfectly random. Since the noise is generated at the receiving end, the process of cancellation can, theoretically, be made very exact. This system, however, cannot be used for radio at all because the level of the noise decreases with distance from the receiving station, while the level of the signal increases. The interceptor, therefore, will get good speech signals if he is close to the transmitter. With telephone lines this differential can be kept small. ((Koenig, “Final Report on Project C-43,” pp. 23-24.))

This is what so intrigued Ellis. Alice could speak and transmit in the clear, while Bob would simultaneously inject noise into the same circuit. An adversary intercepting the conversation would hear only the masking noise. Bob, knowing the exact characteristics of the noise, could cancel it and retrieve the signal—encryption with no shared secret. Alas, it proved to be unusable for the reasons stated in the report: the scheme could not work over radio, and Project X, however desperately it needed a solution to its key-distribution problem, existed to secure long-range radiophone transmissions, initially between Washington and London.
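Koenig’s receiver-added-noise idea translates naturally into a short digital sketch. The samples and seed below are my own illustrative assumptions; the original scheme was analog audio, not code:

```python
import random

# A digital sketch of the C-43 "noise at the receiving end" idea.
# Alice sends her signal in the clear; Bob injects random noise into
# the same line. Only Bob, who generated the noise, can subtract it.
random.seed(42)  # fixed seed so the sketch is repeatable

signal = [3, 1, 4, 1, 5, 9, 2, 6]                    # Alice's samples
noise = [random.randint(-100, 100) for _ in signal]  # Bob's masking noise

# What an eavesdropper tapping the line hears: signal buried in noise.
on_the_line = [s + n for s, n in zip(signal, noise)]

# Bob cancels the exact noise he injected and recovers the signal.
recovered = [c - n for c, n in zip(on_the_line, noise)]

assert recovered == signal
```

Like a one-time pad, the masking noise must be truly random and as long as the message; the crucial difference from Project X is that it never has to be shared with anyone.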

It’s safe to say that beyond the inspiration it gave to Ellis, this early Bell Labs work did not contribute materially to the development of PKE. Other than the lack of a shared secret, the audio approach bears no resemblance to any public-key method, since there is no concept of a public key involved. It remained for Ellis, Cocks, and Williamson, and then, independently, Hellman, Diffie, Rivest, Shamir, and Adleman to discover the mathematics that allow a piece of publicly shared information to be used for secure data communications.