Why I Love My Mac

iMac photo (Apple)

My aging 27″ iMac, the system I use most for work, had been acting cranky lately, but I was busy and ignored the symptoms until I couldn’t anymore. On Wednesday evening, I tried to reboot it and it just sat there, twiddling endlessly. With enough patience, I finally got it to boot. I ran a Disk First Aid verify and, when that failed, a repair. Still no happiness. But, as you can guess from the headline, this story has a happy ending.

So first thing yesterday, I decided to make a clean backup before heading off for the Mac emergency room, a/k/a the Bethesda Row Apple Store. I had to postpone my Genius Bar appointment a couple of times because the backup took longer than I expected. When I finally got it in, the young woman at the Genius Bar ran diagnostics and told me that while the drive passed hardware tests and would probably be fixed by reformatting, it was covered by a free replacement policy Apple had put in place for a batch of flaky 1 TB Seagate drives that turned up in iMacs.

The Mac was ready to come home a few hours later, with a new hard drive loaded with OS X Mountain Lion. I went through the preliminary setup, plugged in my Time Machine drive and let the restore run overnight. I had to jump through a few additional hoops, such as reactivating Microsoft Office 2011*, but this morning, it was back to its old happy self.

This experience is the major reason why I continue to use Macs and to recommend them to others. Over the past 20-something years, I have suffered through the failure of many Windows systems, and in every case, getting them fixed and rebuilt was a monumental time-suck of a do-it-yourself job. Even if you back up conscientiously, restoring from a Windows backup is a complicated job that requires both time and skill. At best, you might have a fairly recent system image that will allow restoration of the disk with applications and a more recent incremental backup to restore data.

And, of course, there is nothing like the Apple Store for Windows. Even if I had a convenient Microsoft Store–there’s no full-service store in the state of Maryland–they do not offer the range of services that Apple does. I probably would have spent a day and a half fixing the machine myself had it been a Windows box.

The Apple Store is a huge part of the reason that Macs (and iPhones and iPads) provide a vastly superior customer experience. If you doubt that, head for the Samsung Store or the Google Store the next time you have a problem with your Galaxy S or Nexus 7.

——

*–Reactivating Office could have been a real pain, because it required a product key, something that is becoming an increasingly serious issue as software is downloaded rather than purchased in a box that makes retaining the key easier. Fortunately (and unlike just about any consumer) I was able to retrieve a key from my Microsoft TechNet account. Of course, this problem is caused by Microsoft, not Apple. I was pleasantly surprised to discover that my Adobe Creative Cloud CS 6 applications fired up without licensing glitches.

Spectrum: Sharing Nicely Can Go a Long Way

Dark Side of the Moon album cover

Sharing has been part of U.S. spectrum policy from the beginning. When the government started handing out AM radio licenses in the 1920s and 30s, a relative handful of stations were assigned “clear channels” that they did not share with any other broadcaster in North America. They were allowed to operate at up to 50 kW and, on nights when the atmospheric conditions were right, could be heard hundreds of miles from their transmitters. The rest of the stations got just a local monopoly on their frequencies. This worked fine in daytime, but some broadcasters had to shut down as soon as the sun set to avoid interfering with neighbors.

Still, the model for spectrum use in the U.S. and the rest of the world has been exclusivity. If you had a license, whether you were a TV station, a taxicab company, or a wireless phone operator, no one else within range to interfere was allowed to operate on your patch of spectrum. This worked fine as long as spectrum was relatively plentiful. But as noted in the earlier articles in this series, demand for wireless bandwidth is rising fast and we have run out of spectrum to assign. The tendency has been to view spectrum allocation as a zero-sum game: Anyone’s gain had to be someone else’s loss.

But this may well be a self-defeating process. If every megahertz of bandwidth assigned to wireless data has to be pried from the hands of an incumbent, it’s going to be a very slow and painful process. As the President’s Council of Advisors on Science & Technology (PCAST) put it:

PCAST finds that clearing and reallocation of Federal spectrum is not a sustainable basis for spectrum policy due to the high cost, lengthy time to implement, and disruption to the Federal mission. Further, although some have proclaimed that clearing and reallocation will result in significant net revenue to the government, we do not anticipate that will be the case for Federal spectrum.

The saving grace is that much of the spectrum currently assigned is not used very intensively. Some, for example, is assigned nationwide but used in only specific locations. The government reserves a chunk of spectrum in the 3550 megahertz band for radar use, but it is generally used only in locations along the Atlantic and Pacific coasts. One approach to freeing this spectrum for wireless use would be to set up large coastal exclusion zones and issue wireless data licenses for the middle of the country. Unfortunately, this would exclude the most densely populated parts of the country. A better approach, endorsed by PCAST and being actively pursued by the Federal Communications Commission, is to take advantage of advances in technology to allow much finer-grained sharing, permitting wireless data operations where the spectrum is not being used for radar. There are two possible “smart radio” approaches: One is to have a mobile device check its location against a database and operate on those frequencies only in areas known to be safe. Another is to actively seek out the radar signals and back off if they are detected. The 3550 MHz shared spectrum is likely to be used primarily for small cells, an idea I will explore in the next installment of this series.
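The database-lookup approach lends itself to a simple sketch. The code below is purely illustrative: the exclusion zones, coordinates, and function name are invented for this example, and a real deployment would query an FCC-sanctioned spectrum database rather than a hard-coded list.

```python
# Hypothetical sketch of database-driven spectrum sharing in the 3550 MHz band.
# The zones below are invented for illustration; a real device would query an
# authoritative spectrum database with its precise location.

# Coastal exclusion zones where shipboard radar has priority, expressed
# here as simple (min_longitude, max_longitude) bands for brevity.
RADAR_EXCLUSION_ZONES = [
    (-125.0, -117.0),  # Pacific coast (illustrative)
    (-77.0, -67.0),    # Atlantic coast (illustrative)
]

def allowed_to_transmit(longitude: float) -> bool:
    """Return True if a device at this longitude may use the shared band."""
    return not any(lo <= longitude <= hi for lo, hi in RADAR_EXCLUSION_ZONES)

# A device in mid-continent may transmit; one near the Atlantic coast may not.
print(allowed_to_transmit(-98.0))
print(allowed_to_transmit(-70.0))
```

The second, sensing-based approach would replace the table lookup with live detection of radar energy on the channel, backing off whenever a signal is found.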

Another form of sharing is utilization of locally unused channels in the large swath of 600-800 MHz spectrum reserved for broadcast television. The FCC hopes to scavenge spectrum for Wi-Fi-like unlicensed use in two different ways. One, an idea that has been around for several years, is to allow the use of “white spaces”–television channels that are unassigned in a given location. The problem is that different channels are free in different places. The FCC is in fairly advanced development of rules for the use of white spaces. However, base stations and devices will be responsible for checking which frequencies they can safely use. White spaces are not likely to do much in the biggest cities, where dense channel assignments leave little spectrum available for sharing. In the end, the most important contribution of white spaces is to provide high-speed broadband to rural areas, where TV channel allocations are sparse and good alternatives are few.

A second source of shared spectrum is part of the FCC’s plan to consolidate and sell off unused broadcast spectrum. The analog tuners used for many years in TV sets had a poor ability to reject signals in adjacent channels, so the original channel assignments set up wide “guard bands” to protect signals from interference (these are common throughout spectrum assignments). New digital tuners are much more precise, and the FCC proposes to free TV guard band channels for unlicensed use. Again, exactly which frequencies will be available will vary from market to market.

(A Washington Post story created much excitement around the internet by suggesting that the FCC has a plan to turn this new unlicensed spectrum into a nationwide free Wi-Fi service. The FCC has proposed nothing of the sort. There may be more Wi-Fi-like service available–it would not technically be Wi-Fi and would not work with existing Wi-Fi devices–but it won’t be national and it most likely won’t be free.)

The incumbent carriers continue to prefer exclusive spectrum assignments. It’s the way they are used to operating and besides, their ability to control lots of bandwidth forms a powerful barrier to entry for potential competitors. They also remain deeply ambivalent about Wi-Fi and unlicensed spectrum schemes, not being quite sure whether they are threats or potential saviors for overcrowded networks. As Joan Marsh, AT&T vice-president, federal regulatory, wrote in response to the PCAST recommendations:

The Report’s core recommendations, however, have generated significant controversy.  The Report found that the new norm for spectrum use should be sharing, not exclusive licensing.  While we agree that sharing paradigms should be explored as another option for spectrum management, sharing technologies have been long promised but remain largely unproven.  The over-eager pursuit of unlicensed sharing models cannot turn a blind eye on the model proven to deliver investment, innovation, and jobs – exclusive licensing.  Industry and government alike must continue with the hard work of clearing and licensing under-utilized government spectrum where feasible.

Notwithstanding these misgivings, AT&T, T-Mobile, and Verizon have agreed to cooperate with the Defense Dept. in studying approaches to sharing spectrum in the 1750 MHz band, prime wireless real estate adjacent to frequencies currently used for wireless data. Since no one is willing to part with spectrum they currently hold, one way or another spectrum sharing has to be a key component of any plan to meet growing demand.

 

The Enterprise Is Important. Let’s Get It Right.

BlackBerry z10 photo (BlackBerry)

One of the most striking features of much tech writing today is its near total ignorance about corporate software and systems. Except for sites like ComputerWorld and others that specialize in the enterprise, reporting is sparse and when it appears, often wrong.

This tendency has been glaringly revealed in a lot of the writing about the new BlackBerry and the BlackBerry 10 operating system. The heart of the BlackBerry business has always been the enterprise, and the company’s hopes for revival hinge on its ability to win back customers who have been drifting to other platforms or bring-your-own-device options. But consider this from TechCrunch:

In short, BB10 isn’t built for the way business is done today. When RIM was in its ascendance there weren’t many options for an IT guy. You could install Exchange, sendmail, or Lotus and wait for a crash. BES was a godsend. Now that’s no longer true. 99.9% uptime is the rule, not the exception, and there are hundreds of cloud service providers that can turn a single founder into a mobile powerhouse from the comfort of her phone – her iOS phone.

The writer, the usually solid John Biggs, doesn’t realize that BlackBerry Enterprise Server was never an alternative to Microsoft Exchange or Lotus Notes. It ran (or runs) on top of one of those platforms. BES may have been a godsend, but not for that reason. And major commercial mail platforms such as Exchange or Lotus Domino Mail have provided three nines of uptime for a very long time. And to the extent that I can understand that last sentence, there have been cloud providers for a very long time too, including those that offer hosted BES services.

BlackBerry is making a serious play to regain the corporate market. BlackBerry Enterprise Service 10, released last week, provides two new services: BlackBerry Balance, a software approach that partitions a BlackBerry 10 device into separate business and personal halves, and BlackBerry World for Work, a custom corporate app store. It also brings messaging and mobile device management support to Android and iOS devices, as well as BlackBerry 10s.

BlackBerry faces huge challenges and its success in the enterprise is far from assured. But if you want to analyze its chances, it helps to know how this stuff actually works.

BlackBerry Delivers a Product: Now It Has To Sell It

BlackBerry z10 photo (BlackBerry)

Gartner analyst Michael Gartenberg nicely summed up the first day of the rest of BlackBerry’s life: “Good launch,” he tweeted. “Now it’s all execution.”

After what seemed like an interminable pregnancy, BlackBerry (the new corporate name; Research In Motion is history) delivered some very nice hardware running an impressive new operating system. The all-touch Z10 is available immediately in Great Britain, next week in Canada and in March in the U.S. The Q10, with a traditional BlackBerry keyboard, is due in April.

The new products have a lot to offer. The Z10 looks pretty much like every contemporary smartphone–a black slab with a 4.2″ display that puts it between the iPhone 5 and the somewhat bigger run of Android handsets. But it features a unique gesture-driven, messaging-centric operating system that combines some of the better features of the late, lamented webOS and Windows Phone 8, and which is its main selling point. Unlike the painful compromises of previous BlackBerry software, the QNX-based BlackBerry 10 is fully touch-optimized and is fluid and highly responsive. Its gestures take a bit of learning, but not very much.

But the new BlackBerry is not going to sell itself in a world thoroughly dominated by iPhone and Android. And the marketing message at BlackBerry’s Jan. 30 launch event was a bit muddled. It’s an old truism in marketing that if you are trying to sell to everyone, you are targeting no one, and the BlackBerry approach seems somewhat unfocused.

One symptom of that was the announcement that Alicia Keys would be BlackBerry’s new creative director, but it was far from clear what her task is. Asked about it at a press conference, she offered something vague about increasing its appeal to the entertainment industry and to women. But one market is too narrow and the other too diffuse to be addressed meaningfully. I suspect Keys will have about as much impact for BlackBerry as the Black-eyed Peas’ will.i.am has had in a similar role at Intel.

To have a chance of success (which I define, at least initially, as beating out Microsoft to become the No. 3 smartphone platform, a definition BlackBerry seems to share), the new phones have to win over several key markets.

The Die-hards. The core of dedicated BlackBerry users still hanging on to their Bolds and Torches are the lowest-hanging fruit. BlackBerry has to migrate them to the new platform as quickly as possible. It will be helped in this effort by the fact that, radical as the new software is, it maintains a certain essential BlackBerry-ness. An example: I was annoyed by the fact that the mail program asked for confirmation each time I wanted to delete a message. But I found the toggle to turn that off exactly where I, as a longtime BlackBerry user, expected it to be.

The BYOD Crowd. This is a much tougher audience. Corporate IT managers, while grumbling about the traditional cost of BlackBerry services, have always liked having a platform that offered unified management and proven security. But they have been forced to accept the iPhones and Androids that executives brought into the system and now must manage a melange of devices. They are prime targets for BlackBerry, but winning them over won’t matter unless marketers can also win back the execs who abandoned BlackBerry in the first place. One thing that will help is BlackBerry Balance, software that divides a device into a secure business partition and an open personal partition. Another, which could win me over, is the integration of Evernote, the invaluable note-taking cloud service, into its Remember app, a sort of cross between OneNote and Tripit.

The Message-centric. BlackBerry has always been primarily about messaging, and the new versions do not ignore that heritage. While the rest of the system has been greatly beefed up, messaging remains paramount. If you are the sort of person who wants to know right away when an email message or a response to a tweet comes in, and wants quick and easy access to it, the new BlackBerry is for you. The BlackBerry Hub, a central feature of the user interface, is the ultimate unified inbox. Marketing built around the BlackBerry’s messaging prowess could win over this audience.

BlackBerry has all the other smartphone bases covered, but not generally in a way that makes it a must-have. The supply of available apps, somewhere between 70,000 and 100,000 depending on who was talking and when, is pretty good for the launch of a new system. Most of the major categories are covered, and those that are missing, including Netflix, HBO Go, and Instagram, are rumored to be not that far off. BlackBerry made it relatively easy for developers to port Android apps to BB10, an approach that accounts for about 40% of the initial offerings. The camera is good enough to be competitive, but isn’t a reason anyone will buy a BlackBerry.

One large group of current BlackBerry customers that will not be served by the new phones is the millions of buyers–many in emerging markets–of inexpensive Curves for whom the biggest attraction is BlackBerry Messenger. The Z10 and Q10 are expected to sell for around $200 on a two-year contract in the U.S., and if BlackBerry has plans to come up with a low-cost model for the Curve market, they aren’t talking about it yet.

The bottom line is that BlackBerry has given it a really good shot. To increase their chances of success in a very tough market, they need to refine their message and focus their marketing tightly on the groups they need to win.


Business and Social Media: The Good and the Ugly

I had an interesting little experience this morning with how customer service organizations use social media well and badly.

I was headed to New York for the BlackBerry 10 launch and took the first train on the Washington Metro Red Line to connect with the 6 am Acela at Union Station. The trip proceeded uneventfully to the Metro Center station downtown, where we sat, and sat and sat. Finally, they put us off the train and it headed out without passengers. At no time was there any useful communication with passengers, other than the order to get off.

What was a tight connection became an impossible one and despite hitting Union Station at a dead run, I got to the gate just after they closed the doors. I had sent several tweets on the situation, all using Metro’s #wmata hashtag but saw no response.

Then came the interesting part. After settling down to wait for the 7 am Acela, I got a response from @Amtrak reminding me to change my ticket for the later train. The only mention I had made of Amtrak was a tweet, without the “@,” complaining about the endlessly looping security video.

Metro, which uses a Twitter account to post often stale information about its unceasing delays and breakdowns, does not seem to have any interest in a dialog with customers. (Actually, Metro seems to regard itself primarily as an employment program for its workers, who avoid interactions with riders at all costs. If they succeed in moving some passengers, that’s a coincidental benefit.)

Amtrak, by contrast, seems to monitor Twitter aggressively and respond quickly, even when a response isn’t really required. I ride Amtrak to New York fairly frequently, but I’ve never had strong feelings about it, other than that it is preferable to flying Delta or US Airways or taking the bus. But that little tweet this morning made me feel that someone cared, even if it was only an AI bot.

The lesson here is clear. If you are going to have a presence on social media, you create an expectation of real communication. A little effort can go a long way, and a lack of effort can easily antagonize customers you could easily please (or at least defuse their anger).

Apple and the Burden of Bigness

Apple financials chart

A lot has been written lately about “exponential growth,” nearly all of it wrong. If you want to see what real exponential growth looks like, check out the graph of Apple’s revenues and profits. And this tells an important story about Apple that the commentators, including financial analysts who should know much better, have completely missed. Explosive growth over the past few years has transformed Apple from a scrappy underdog into one of the largest companies in the world–it should break into the top 10 of the Fortune 500 this year. And things become very different when you get this big.

Of course Apple’s growth is slowing down. The thing about exponential growth is that it is inherently unstable and unsustainable. If Apple continued along its 2005-2012 growth track until 2016, its annual sales would be nearly a trillion dollars, a clear impossibility. In fact, the single most remarkable thing about Apple is that it managed to sail smoothly through a growth spurt that would have destroyed many companies. And this is all the more remarkable because it simultaneously went through the transfer of leadership from the late Steve Jobs to Tim Cook.
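To put rough numbers on that claim (the revenue figures below are approximations supplied for illustration, not the author’s): Apple’s revenue went from about $14 billion in fiscal 2005 to about $157 billion in fiscal 2012, roughly 40% a year compounded. Carrying that rate forward just four more years yields well over half a trillion dollars in annual sales:

```python
# Rough illustration of why exponential growth is unsustainable.
# Revenue figures are approximate and assumed for this sketch.
rev_2005 = 14e9    # ~fiscal 2005 revenue, dollars
rev_2012 = 157e9   # ~fiscal 2012 revenue, dollars

# Compound annual growth rate over the 7-year span
cagr = (rev_2012 / rev_2005) ** (1 / 7) - 1
print(f"CAGR 2005-2012: {cagr:.1%}")

# Projecting that rate forward to fiscal 2016
rev_2016 = rev_2012 * (1 + cagr) ** 4
print(f"Projected 2016 revenue: ${rev_2016 / 1e9:.0f} billion")
```

Depending on the base year chosen and whether you extrapolate revenue or unit growth, the projection lands somewhere between this figure and a full trillion; either way, the trajectory clearly cannot continue.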

Regardless of how quickly Apple grows over the next few years, the growth that has already occurred has transformed the company in ways that both its fans and its critics ignore. The Apple that introduced the iPhone in 2007 was a middling-sized company with $24 billion in sales and 24,000 employees. It was a cheeky upstart in the phone business, a fifth the size of the AT&T with which it drove a hard bargain to be the exclusive carrier of the iPhone at launch.

Lots of things happen to a company when it grows this big, most of them bad. A certain amount of bureaucracy is just required to keep the wheels turning. Corporate functions, such as human resources and legal, swell. Apple seems to have done a splendid job of keeping the bloat that comes with rapid growth under tight control; amazingly, Apple’s employment only tripled in a period when its sales increased six-fold (meaning, of course, that sales per employee doubled, a spectacular accomplishment). Above all, the bigger you get, the harder it is to maintain a rapid rate of growth, because the absolute size of the increase you must generate explodes.
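The parenthetical arithmetic is easy to check: if sales grow by a factor of six while headcount grows by a factor of three, revenue per employee scales by the ratio of the two factors and exactly doubles.

```python
sales_growth = 6.0      # sales increased six-fold
headcount_growth = 3.0  # employment tripled

# Revenue per employee scales as the ratio of the two growth factors
per_employee_growth = sales_growth / headcount_growth
print(per_employee_growth)  # 2.0: sales per employee doubled
```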

In fiscal 2007, Apple sold 7 million Macs, 51.6 million iPods, and 1.4 million iPhones. In fiscal 2012, it sold 18 million Macs, 35 million iPods, 125 million iPhones, and 58 million iPads. One thing that stood out in Apple’s “disappointing” first quarter of fiscal 2013 is that sales of the iPhone and iPad were constrained by supply. The supply chain appears to have fallen a bit short, but given the growth experienced, it’s a wonder that the supply chain is functioning at all. Think of the number of components that had to be purchased, assembly lines that had to be brought on stream, and finished product that had to be shipped, often halfway around the world, to accommodate that growth. And all of this was accomplished without any evidence of the quality problems that usually accompany rapid expansion. The supply issues of the past couple of quarters were probably inevitable; they do not diminish the reputation Tim Cook has earned as a supply chain genius for managing the growth.

What mostly needs adjusting is expectations for Apple. All the speculation about whether Apple has lost its mojo, or its cool, or whatever fails to consider that what Apple has lost is its ability to grow quickly, not because of anything its management is doing wrong but as a function of pure size. Look at it this way. From its 2010 introduction through fiscal 2012, the iPad generated about $57 billion in revenue, about half of Apple’s total revenue growth through the period. But for a hypothetical new product to make a similar contribution over the next three years, it would have to be twice as successful as the iPad in dollar value terms. That’s not likely to happen.

What does all of this mean for Apple’s stock price, the source of so much angst of late? The odd thing is that even when Apple was growing most rapidly and its share price was rising quickly, the market didn’t act as though it expected the growth to continue. The ratio of the stock price to trailing 12-month earnings last year peaked at just a bit over 15, very low for a company whose sales and earnings were growing phenomenally quickly.

The market was right: The growth had to slow. But even though the stock was priced seemingly in anticipation of slower growth, investors responded to the seeming reality of a slowdown by driving the price down so hard that the P/E fell below 10 (it has since recovered a bit along with the stock price.) There may have been plenty of reasons for the stock price to stop going up, but none but emotion for a 30% decline.
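The mechanics behind that decline are simple. With trailing earnings roughly flat, the percentage fall in the price equals the percentage fall in the P/E multiple, whatever the earnings figure is; the approximate multiples from the text imply a drop of about a third:

```python
# Illustrative P/E arithmetic; with earnings held flat, the price decline
# equals the decline in the multiple, so no earnings figure is needed.
pe_peak = 15.0    # trailing P/E at the peak (approximate)
pe_trough = 10.0  # trailing P/E after the slide (approximate)

decline = 1 - pe_trough / pe_peak
print(f"Implied price decline: {decline:.0%}")
```

That back-of-the-envelope figure is consistent with the roughly 30% fall the text describes; the exact number depends on where in the slide you measure the trough multiple.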

None of which is to say anything about where the price will go next. Markets will do what they do. Apple remains an extraordinarily well-managed company with a very strong product portfolio and, I suspect, the ability to surprise us again with new and innovative products. But I doubt that we will ever see another spurt of growth like the past five years. Apple has just plain gotten too big for that to happen.

Cheering for BlackBerry

BlackBerry invitation header

On Wednesday, Research In Motion will launch its bid to save itself with a BlackBerry redesigned from the ground up. I’ll be at the launch event and I will judge the new hardware and software on their merits. Still, I have to admit that I am cheering for a BlackBerry comeback.

Apple reinvented the smartphone in 2007, but before that, the most important smartphone innovations came from Palm and RIM. Palm’s Treo (actually, the original version came from Handspring, a company started by Palm’s founders and later merged back into Palm) invented the concept of integrating a mobile phone and a PDA, along with third-party apps. The original BlackBerry, which was not a phone, created true mobile email and calendar. Eventually, this all came together to create the modern smartphone, which Apple took to the next level with the iPhone.

The iPhone did in Palm and nearly killed RIM. The decline of Palm was inevitable. The company was always cursed by under-financing and a lack of stable, competent management. When Apple turned up the heat, Palm lacked the wherewithal to respond successfully; its reinvention after its purchase by Elevation Partners was too little, too late. Its demise after a horribly bungled acquisition by Hewlett-Packard was a somehow fitting ending to a very sad tale.

RIM is a very different story. Palm knew what was happening to it but couldn’t do much about it. RIM rode high as BlackBerry sales continued to soar well into the iPhone era, but lacked the paranoia that Intel’s Andy Grove long ago pointed out was a key to survival in a highly competitive industry. RIM co-CEOs Michael Lazaridis and Jim Balsillie were convinced of the superiority of their product and their business model and failed to respond to the market’s shift toward demanding highly capable handheld computers, not glorified messaging devices.

Fortunately, RIM, unlike Palm, had deep financial resources and significant annuity revenue streams that bought it another chance. It has two reservoirs of strength: the popularity of its low-end devices in emerging markets (where volume sales can be had, but profits are scarce) and enterprises, especially governments and others with the greatest concern about security. Success in neither is a given, but the opportunities exist. And I’ll admit to a long-standing fondness for RIM, particularly Mike Lazaridis’ uncontained enthusiasm when he talked about his newest BlackBerry or showed off a lab at RIM’s Waterloo, Ont., headquarters.

I think that both Android and Apple would benefit from some additional competition. Microsoft, despite heroic efforts, has so far failed to win much traction in the mobile market. Had Windows Phone and Windows RT taken off, there wouldn’t be much room for a RIM comeback. But they haven’t, so there is. It’s going to take a spectacularly good product to succeed in this tough neighborhood. I’m hoping that RIM still has it in them.

Apple and Imperfection

Apple-Think Different

Near the end of the dot-com bubble, smart investors finally realized that a major problem with tech stock pricing was that dozens of companies were priced to perfection: Their stock prices were so high relative to the underlying financials that only a perfect performance could justify the share price for any length of time. Very few companies could deliver perfection and the house of cards folded.

Apple these days seems to be the opposite of a bubble dot-com. Its stock price is depressed: it was trading at a very mediocre 11.6 times trailing earnings before accounting for a sharp after-hours post-earnings plunge. Apple has now given up all the gains of the past year.

And while I am no financial analyst, this is ridiculous. The sharp run-up in the stock that ended abruptly this fall was fully justified by the company’s stellar performance and even at its peak, Apple was still underpriced by most fundamental metrics. Two things have been true about Apple’s performance for some time: Its margins and growth rate were both unsustainable. But in a reasonable world, there was room for both to decline, as they have, and for shares to keep rising, as they most certainly haven’t.

Apple has always been a stock that traded heavily on emotion rather than analysis, and now is no different. If pessimists want to drive it lower, they will, despite a P/E heading for single digits and a price that’s just a bit more than three times the cash on hand.

Disclosure: I do not have any direct position in AAPL stock, though funds I invest in may.

 

Spectrum: Where It Came From, Where It Goes

Dark Side of the Moon album cover

In the beginning, wireless spectrum in the U.S. was free. In 1983, the Federal Communications Commission created the first analog cellular networks by assigning two chunks of airwaves in the 800 MHz band. One chunk was reserved for the incumbent local wireline carrier, or Baby Bell as they were then known; the other went to a non-wireline competitor. This ancient history is important because the leg up that was given to the companies that gradually coalesced into Verizon Wireless and AT&T formed the basis for these carriers’ domination of the U.S. market. The story of spectrum over the past three decades is mostly a tale of the rich getting richer, all the while bemoaning their poverty.

Over time, the government assigned more and more spectrum to wireless voice and (eventually) data. New competitors did arise. Sprint, until then primarily a wireline long-distance operator, created its network out of the 1994 auction of 1900 MHz “personal communications services” spectrum. Wireless phone pioneer Craig McCaw built the Nextel network out of bits and pieces of “special mobile radio” licenses intended for dispatch services. T-Mobile and its predecessors assembled a bunch of smaller carriers using the GSM standard, which was then widely used everywhere but the U.S.

But through auctions and acquisitions, the biggest carriers managed to get even bigger. The last major wireless spectrum auction was the 2007 sale of television bandwidth that had been freed by completion of the transition to digital TV broadcasting. To the surprise of just about no one, the overwhelming winners in the sales were Verizon and AT&T, which have been using the spectrum in the 700 MHz band to build out their fourth-generation LTE networks.

The problem we now face is that after 30 years of freeing bandwidth for mobile data use, we’ve pretty much run out of spectrum that can be reassigned without a major fight. The only sale on the horizon is an additional 100 MHz of TV bandwidth. But among many other complexities, availability of this spectrum will require some stations to give up their licenses (in exchange for a share of the proceeds from the auction) and others to move to new frequencies to create usable blocks of contiguous spectrum. The convoluted process mandated by Congress means that the sales won;t begin until 2014 (at the earliest) and are likely to yield a good bit less than 100 MHz in many parts of the country.The problem we now face is that after 30 years of freeing bandwidth for mobile data use, we’ve pretty much run out of spectrum that can be reassigned without a major fight.

In the absence of new allocations coming down the pike, Verizon and AT&T have been bulking up on spectrum through mergers and acquisitions. AT&T failed to convince Justice Dept. anti-trusters that its need for spectrum justified its proposed 2011 acquisition of T-Mobile. It announced on Jan. 22 that it intends to acquire the remainder of regional carrier Alltel, the bulk of which was bought by Verizon in 2008. Verizon is buying the spectrum of a consortium of cable companies, which once had dreams of building their own wireless networks.

The incumbent wireless carriers insist that the system is header for crisis without additional bandwidth and the the best, and perhaps only, way to get it is by selling them the rights to spectrum currently held by others. In a post on a the AT&T public policy blog, Joan Marsh, vice-president, federal regulatory, responded to a recommendation that sharing spectrum with federal agencies might be a good way to increase capacity, saying:

The Report [of the President’s Council of Advisors on Science & Technology] found that the new norm for spectrum use should be sharing, not exclusive licensing.  While we agree that sharing paradigms should be explored as another option for spectrum management, sharing technologies have been long promised but remain largely unproven.  The over-eager pursuit of unlicensed sharing models cannot turn a blind eye on the model proven to deliver investment, innovation, and jobs–exclusive licensing.  Industry and government alike must continue with the hard work of clearing and licensing under-utilized government spectrum where feasible.

John Marinho, vice-president of technology and cyber security for CTIA-The Wireless Association, which speaks for the wireless incumbents, wrote:

Trust me, the carriers are deploying and using every single technology and “trick” they can to try to solve the looming spectrum crisis in the near-term, but nothing will solve the problem like more spectrum. Claude Shannon proved that there are practical limits to how much bandwidth capacity is available from a limited amount of spectrum. One has to look no further than the father of information theory to realize that the solution is more spectrum.

I’ll have more to say about Shannon’s laws and its implications for wireless networks in future installments, but the truth is that there are lots of techniques for expanding the capacity of wireless networks that have yet to be deployed in any serious way. Martin Cooper, who built the first cellphone for Motorola before there was a network to use it on, says: “I can tell you that the way not to create more spectrum is to redistribute it. And that is what the government is proposing to do now, take it away from some people and give it to others. That’s not going to do it.”

The next articles in this series will explore some better ways.

 

Spectrum: The Shortage Is a Crisis, but Not Serious

Dark Side of the Moon album cover

The late economist Herb Stein used to say that “if something cannot go on forever, it will stop.”

A profound economic truth lies behind that seeming flip statement. The world is forever on the verge of running out of vital commodities–oil, food, water, and many more–but somehow we never do. In the worst case, as a commodity grows scarce, its price rises and demand shrinks. The real world, however, human ingenuity triumphs over shortages. We find alternatives to whatever we are running out of, or, better, we find ways to use what we have much more efficiently. So it is with the spectrum we need to move ever-growing volumes of wireless data to our proliferating mobile devices.

In the short run, available spectrum is more or less fixed, creating an atmosphere of shortage. The established carriers, especially Verizon Wireless and AT&T, warn of “exponential”* growth in demand and use claims of shortage both to lobby for new allocations of spectrum for wireless data use and to justify data caps and higher rates. Critics argue that while dedicating more spectrum to wireless data is desirable, much can be accomplished through greater efficiency in the use of what we have.

In this an subsequent articles in this series on spectrum, I will examine the claims and look at possible solutions. Perhaps the biggest issue is just what is happening with demand for spectrum. The truth appears to be that it is still growing very quickly, but at a decelerating rate. Cisco’s Visual Networking Index, which has often been criticized for exaggerating the growth rate, indicates this clearly. It shows the growth rate for mobile data slowing from 133% in 2011 to an estimated 78% in 2014. A growth rate of nearly 80% is still staggeringly fast, but the effect of this deceleration is enormous. At a 133% compound annual growth rate, consumption would increase 240-fold over a decade; at 78%, just 60-fold. The difference: More than 100 exabytes of data per month.Stein’s Law: “If something cannot go on forever, it will stop.”

But even if we discount the more breathless and self-serving estimates of growth in wireless data use, it is clear that the amount of spectrum allocated to wireless data will be, at some point in the not too distant future will be inadequate to meet demand, based on today’s technologies. It is also clear that to meet this demand, we must both find additional spectrum and find ways to use it more efficiently. Fortunately, both are eminently doable.

The actions that can be taken to improve the availability of spectrum for data include:

  • Auctioning spectrum currently used for other purposes. This is the course favored by the incumbent carriers and, to a considerable extent, by Congress and the Federal Communications Commission. The big problem is that it is extremely difficult to get anyone–public or private–who currently holds spectrum to part with it. Legislation passed last year provides for the auction of 100 MHz of unused or under-used television spectrum for  data, with the current broadcast licensees sharing in the proceeds. The rules for these “incentive auctions” are extremely complex. No spectrum will actually be sold until next year at the earliest, and it seems unlikely that the amount freed will ever come up to 100 MHz. Prying spectrum from the vast hoard held by government agencies, particularly the Defense Dept., is even more difficult.
  • Speeding buildout of unused spectrum. Even while complaining of spectrum shortages, the incumbent carriers still have a lot of spectrum in the bank. Neither Verizon nor AT&T has completed the build-out of LTE networks on the 700 MHz-band spectrum they bought in 2007, a Verizon has just acquired considerable additional spectrum in a deal with Comcast and other cable companies. The biggest chunk of barely used spectrum is nationwide coverage at 2.5 GHz held by Clearwire, whose financial woes have allowed only a small portion of the network to be built out. Both Sprint and Dish Networks are bidding for control of Clearwire with the fate of this spectrum in the balance.
  • Spectrum sharing. A lot of spectrum is assigned to entities, usually government agencies, that sue it only sparingly. For example, Defense Dept. operates a scattering of military radars in the 3.5 GHz band. The FCC is currently implementing a plan that will allow commercial use of this spectrum by devices and base stations specially designed to operate only where and when they will not interfere with the radar.
  • White spaces. This is a Wi-Fi-like spectrum-sharing variant that operates on unused portions of the television band. Unfortunately, white space is most available in rural areas and scarce in crowded cities where it is really needed. It is most likely to have its main impact as an alternative to wired broadband service in rural areas.
  • Small cells. The basic principle  of cellular communication is that limiting the range of base stations to fairly small areas allows spectrum to be reused, as long as the cells are far enough apart to avoid interference. Cell sizes, which depend on transmit power and the height of the antenna, range from a radius of 30 kilometers in the country to 1 km or less in dense cities. But reuse of spectrum can be increased greatly by using very small cells in the densest areas.
  • Wi-Fi offload. Unlike other wireless technologies, Wi-Fi operates on spectrum that is free for anyone to use, and Wi-Fi access points serve areas with a radium of 100 m or less. The load on crowded cellular data networks can be reduced greatly if as much traffic as possible is shifted to Wi-Fi, and new technologies are enhancing the ability of this offload to be handled automatically and seamlessly.
  • Smart antennas. While small cells reduce the radius of coverage, smart antennas can reduce the angle of the sector covered. Current cellular antennas typically cover a 120° sector. Smart antenna technology can allow base stations to beam their transmissions to the devices to which they are connected, again allowing for greater resuse of spectrum.

Most or all of these technologies are going to be needed in combination to deal with the growing demand for wireless data,  but the fact is that the spectrum “crisis” is a challenge we can meet with a combination of sound policy and good technology. I’ll be looking at each of these options in more detail in coming articles in this series.

*–Truth in mathematics time. The essential characteristic of exponential growth is that it increases at an ever increasing rate. (For those of you who remember your calculus, all derivatives are positive.) This never happens in the real world, at least not for long, because growth is always constrained by something. As noted below, there is, in fact, evidence that the growth in demand for wireless data is already decelerating.

Why Hardware, and CES, Still Matter

DEC system board (Wikipedia)An odd notion that hardware no longer matters has lately taken hold in the world of tech commentary. For example, in a well-argued piece explaining his decision not to attend the Consumer Electronics Show, Buzzfeed’s Matt Buchanan  writes:

[S]oftware and services have become the soul of consumer technology. Hardware (seriously doesn’t the word “electronics” in the conference’s dusty title make your eyes instantly droop a bit?) has become increasingly commoditized into blank vessels that do little more than hold Facebook and Twitter and the App Store and Android and iOS. And the best and most interesting vessels, increasingly, are made by the very companies making the software.

It’s true that the relationship between software and hardware is changing, but this is happening in much more complicated and interesting ways. If hardware were a pure commodity, sales of phones, tablets, and PCs would behave the way commodity markets do, with all business flowing to the lowest cost  and no-name Chinese manufactures of good enough handsets, tablets, and PCs dominating even in advanced economies. Instead, the premium producers, especially Apple and Samsung, are winning. (Samsung makes lots of low-end phones, but it is enjoying its greatest success with its top-of-the-line products.)

What is happening is that hardware and software are becoming more and more integrated, to the point where it is difficult to tell where one ends and the other begins. This integration is at the heart of Apple’s success and the need for it is driving Google and Microsoft into the hardware business and may push Samsung to break with Google’s control of Android or to develop an alternative to it.

The integration of hardware and software also makes the meme started by Google’s Eric Schmidt and repeated by many others, that the only companies that really matter to consumers are Apple, Amazon, Google, and Facebook. Of these four, only Apple comes anywhere close to full vertical integration. All of them depend on a sprawling infrastructure of companies, including Intel, Qualcomm, Nvidia, and ARM Holdings, that design the non-commoditized components on which everything else depends. These companies, as it happens, were very well represented at CES.

Tech is a complicated business. But the tech commentariat is hopelessly addicted to simpleminded generalities. The consumers of punditry would be better served if we all stopped to think a bit more.

The ThinkPad X1 Carbon Touch: Windows 8 Is Tough Even on a Great Windows 8 Laptop

ThinkPad X1 Carbon Touch photoI have been using Windows 8 through its several beta and preview versions on equipment designed for earlier editions, and I have been wondering for many months whether my unhappiness with it resulted from shortcomings of the hardware. I’ve now had a chance to spend some time with a first-rate Windows 8-optimized touchscreen laptop and while it works much better than older hardware, the new operating system remains an uncomfortable two-headed beast.

If you want a conventional clamshell laptop with a touchscreen, Lenovo ThinkPad X1 Carbon Touch (from $1,349) strikes me as an ideal workhorse. It features a 14″ 1600×900  display, an Intel i5 or i7 processor, and SSD storage to 256 GB. It weighs 3.4 lb., is .74 in. thick–just a hair thicker than the non-touch version–and provides a solid 5 hours of battery life. The keyboard is outstanding, as you would expect from a ThinkPad, and both the multitouch display and the big trackpad work very well with the full repertoire of Windows 8 gestures (there is also the traditional ThinkPad TrackPoint stick, which remains great for pixel-precise pointing.)

Windows 8 is certainly happier on the X1 Carbon Touch  experience than any older laptop I have tried. Most important is that the swipe-from-the-side gestures so important to effective use of Windows 8 now work flawlessly on both the screen and the trackpad. But that’s not nearly enough to overcome the essential clumsiness.

Windows 8 still feels like two operating systems loosely bolted together. In fact, what the experience of working both in the traditional Windoiws Desktop and what, for lack of a better name, I still call Metro, most felt like was switching between virtual machines under a system such as Parallels or VMware. The two user interfaces share storage and a clipboard—and not much else.

This separation manifests itself in many annoying ways. For example, if you start typing while in the Metro start screen, you initiate a search for applications, including Desktop apps. Indeed, with the disappearance of the Start button, this is the standard way to launch Desktop applications not on your task bar or desktop. You would expect that if you put the cursor outside any desktop window and began typing, you’d get the same search. Instead, nothing at all happens.

Then there is the failure of of Metro and Desktop apps to communicate. The Metro Calendar and People apps and Outlook don’t seem to know anything about each other. So adding an appointment or contact in one has no effect on the other unless you sync through an external service.

The usefulness of the touch display is badly damaged by the wildly inconsistent behavior of applications. For example, pinch and stretch works just as you would expect in the Desktop version of Internet Explorer. But in the Google Chrome browser, the gesture works on the trackpad, but not on the screen.

Adobe Photoshop CS6 would seem to be an application that could benefit greatly from touch, but it just plain doesn’t work. You can use touch to select items from menus or palettes. But when you touch the screen inside a picture, whatever tool you are using simply disappears and moving your finger has no effect at all. The tool cursor reappears as soon as you touch either the TrackPoint or trackpad.

Obviously, this situation will improve if and when third-party software vendors add proper Windows 8 touch support to their products. But it’s not as though Windows 8 sneaked up on them, and their failure to work properly with touch is depressing.

Outlook 2013 context menu screenshotMicrosoft hasn’t done that great a job itself with making its most important applications touch-ready. Office 2013 works better with touch than earlier versions, but that’s not saying much and the effort has a half-hearted feel to it. For example, the “ribbon,” Office’s do-everything menu bar offers a choice between “touch” and “mouse” modes. In the latter, the menu items and icons are bigger and further apart and therefore much easier to hit accurately with your finger. But the same courtesy does not extend to other interface elements. In particular, context (right-click) menus are much too small for comfortable touch use. (In general, context menus are evil with a touch interface.)

Office applications also have a strange proclivity to pop open an onscreen keyboard, for example, in Outlook whenever a search box is selected. This makes sense on a pure tablet or a convertible or hybrid when the physical keyboard is not available. But it makes no sense at all on a laptop where the keyboard is permanently attached, and Windows ought to know better.

I think the conventional touch laptop ought to be a truly useful tool. The undoubtedly will become more common since Intel has decreed that touchscreens will required for the next generation of lightweight notebooks to carry the Ultrabook label. I’ve spent enough time working with a tablet and a keyboard that the idea of reaching to touch the screen no longer feels odd. But the deficiencies of the software keep the hardware far short of its potential. This will change in time, though there is no excuse for Microsoft launching either Windows 8 or Office 2013 half-ready for touch. For now, the fact that you pay a $200 premium for the touch version of the X1 Carbon–other touch models carry a similar premium–is a bet on the come.

Google vs. Microsoft: Just Cut It Out


YouTube screen shot

Hostilities between Google and Microsoft are heating up, and users are being caught in the crossfire.

Microsoft, of course, has spent the last couple of years trying to bring the wrath of the federal government down on Google. This campaign failed last week, when the Federal Trade Commission let Google off with a mild admonishment because it did not have a case it thought it could win.

There’s no way to know if this is retaliation, but Google seems determined to make life difficult for Microsoft customers. The latest evidence is Google’s apparent decision to block access to Google Maps from Windows Phone 8 handsets. The issue is shrouded in a bit of confusion. Gizmodo first reported the blockage. Google responded by saying that the problem was that the mobile version of Google Maps is designed to work with Webkit browsers and the Windows Phone 8 browser is based on the non-Webkit Internet Explorer. But this explanation fell apart when Microsoft pointed out that the Windows Phone 8 browser is essentially the same as the Windows 8 version of IE, which works just fine with Google Maps.

App developer Matthias Shapiro seemed to settle the argument with a YouTube video  that shows calls from Windows Phone 8 to Google Maps failing until the browser-agent string is changed to disguise the browser. With the phony browser-agent string, Google Maps worked just fine (in what appears to be a Windows Phone 8 emulator).

Fortunately, Windows Phone 8 users have other mapping options. I supposed Google has the right to deny its Maps service to any device it wants to block, but this just seems dumb and petty.

In other Google annoyances, yesterday I entered a search string in my Chrome browser and when the search page came up, I got an odd popup asking me if I wanted to share my results on Google+. Thinking that no one could conceivably be interested in my search for information on Fermat’s Little Theorem, I closed the window, unfortunately before I thought to capture a screen shot.  I have not yet been able to replicate this behavior, but Google popping up a G+ interstitial every time I do a search could just drive me to Bing.

Qualcomm and the Birth of the Smartphone

Qualcomm pdQ photoQualcomm CEO Paul Jacobs’ appearance on the Charlie Rose Show brought back memories of the earliest days of smartphones. Jacobs told rose that he originally proposed adding a cellular radio to the Apple Newton MessagePad. When Apple demurred, Jacobs headed to Palm, then owned by 3Com, where he negotiated a license for Qualcomm to build a phone based on Palm OS.

The original Qualcomm pdQ wasn’t very good–I later described it as “a Palm glued to a phone.” It had all the functionality of a Palm 3 PDA and a typical CDMA phone of the late 1990s, but virtually no integration between the two sets of features. As I recall, you couldn’t even dial the phone by looking up a contact on the Palm and tapping the number. The only real advantage was that you got to carry one big device instead of two smaller ones. Needless to say, it sold poorly.

The followup pdQ a couple of years later was a more interesting product. By then, Qualcomm had sold its handset business to Kyocera, including the in-development pdQ 2. The revamped pdQ was a much more appealing product. It was much smaller than the original and offered some real integration of PDA functionality. It also borrowed the primitive Web-browsing capability of the Palm VII. Data communication in those days was limited to a theoretical maximum of 14.4 kilobits per second and you often did much worse than that, so the Palm system relied on pre-digested an condensed web snippets.

Interestingly, in the same BusinessWeek column in which I wrote about the Kyocera pdQ, I also dealt with what turned out to be the true ancestor of the modern smartphone. The Handspring VisorPhone was pretty terrible product from the company set up by Palm’s founders to build licensed Palm-compatibles. The VisorPhone, $299 with contract (!), was a GSM phone module that slid into the accessory slot of a Visor PDA and added phone and SMS apps to the standard Palm repertoire. Not many people bought it, but Handspring used the design experience to build the Treo 300, the first trule integrated smartphone, and the Treo 600, the first successful one.

CLARIFICATION: Turns out folks at Qualcomm in addition to Paul Jacobs have fond memories of the pdQ. Engineers who worked on the project point out that there was some significant integration between the phone and the Palm including the ability to place a call from the Palm Address Book, a “find and dial” search for phone numbers across apps, Address Book search from the phone dialpad, and APIs to give third-party Palm developers access to pdQ phone features. These features don’t sound terribly exciting today, but they were breakthroughs in 1999.

CES 2013: Where’s the Excitement?

Photo of Panasonic CES booth (Wildstrom)Next week, as I have in early January for I can’t remember how many years, I’ll head for Las Vegas and the International Consumer Electronics Show. I’ll admit I don’t much like either Vegas or CES, but this year, I’m anticipating the endless cab lines and paying $350 for a $150 hotel room with even less enthusiasm than usual. That’s because the prospects for anything resembling news from CES are pretty grim.

This is less a rap on CES than a comment on the current state of the industry. True, CES is to some extent a victim of its own success, a cacophony of announcements, most of them of little interest to anyone beyond the people making them. And with the shift to mobile, some of the attention has moved to Mobile World Congress, the annual February phone fest in Barcelona.

The fact is, however, that the tech industry has a problem: There undoubtedly will be a Next Big Thing, but it isn’t in sight at the moment. There aren’t even rumors about it. Television sets have long been the preeminent product at CES, dominating the massive “booths” of Samsung, LG, Panasonic, and Sony. But it’s really hard to generate any excitement about TVs at the moment. The industry featured 3D sets with much fanfare in 2011, but consumers have been lukewarm at best and the hoped-for flood of 3D content has failed to materialize. 3D is now just another feature of high-end sets, not a new and exciting product category.,The 4K “super HD” format is interesting for the future of very large displays but the high cost and lack of content make it mostly a curiosity for now.

“Smart TV,” another big industry hope, also hasn’t gone much of anywhere. Many TVs come with apps and internet connectivity to access over-the-top services, but poor user interfaces and limited programming choices have left users less than excited. The industry my be ripe for disruption, but the content owners and distributors remain in firm control. Maybe some day the long-awaited Apple TV will materialize to reinvent television. Or maybe the rumors about a big Intel TV effort will amount to something. I’m hoping, but I’m not holding my breath.

Since the death of Comdex a decade ago, CES has become, more or less by default, the country’s premier PC show. But the PC business is in the doldrums and without the presence of either Apple, which has never been a CES exhibitor, or Microsoft, which ended its heavy presence after last year, there is little prospect for major news. Besides, the major PC manufacturers unveiled all their new products for early 2013 in conjunction with the Windows 8 launch last fall.

For the past two years, CES has been overrun by dozens of tablets, most of which are never seen again, at least outside of China. I’m sure the same will be true this year. But the big players won’t be making any news.

Automobiles have become rolling computers and I don’t follow the area as closely as I should. I’m looking forward to catching up with the latest in automotive electronics and informatics. Of course, I could probably learn more and save a lot of time and money by skipping CES and going to the North American International Auto Show in Detroit the following week.

I still expect to learn a fair amount at CES. I always like to prowl the smaller, less expensive booths on the fringes of the big exhibit halls. Mostly, they are populated by component manufacturers showing products that could only draw someone with a deep interest in connectors or power supplies but every once in a while, something offbeat and novel turns up. And I’ll cruise the show-within-a-show exhibitions put on by Pepcom and ShowStoppers that allow a sort of a speed dating view of dozens of products, many of which won’t turn up on the CES floor. If I find anything interesting, I’ll be sure to let you know

 

 

 

Why Mourn the Death of Pirates?

Cnet screenshot

At some level, I have a bit of grudging admiration for CNET for publishing an article so obviously hostile to the the interests of its corporate parent, CBS. But on the other hand, it is long past time for anyone who want to be considered a responsible commentator on tech to praise common thievery. Christopher MacManus writes:

For many years, Installous offered complete access to thousands of paid iOS apps for free for anyone with a jailbroken iPhone, iPad, and iPod Touch. Think of it as being able to walk into a fancy department store, steal anything you want, and never get caught.

In my personal experiences with the app, I could often download the latest iOS applications and games for free from a variety of sources within mere seconds. After downloading, you could then install the app on your iDevice as if you purchased it from Apple’s App Store. Additionally, during its prime, it wasn’t unrealistic to expect expensive App Store apps hitting Installous mere hours after release.

I suppose it would also be nice to be able to shoot people I don’t like, but we don’t allow that sort of thing. Folks who download commercial apps they haven’t paid for don’t even have the lame excuse of those who torrent movies or TV shows that aren’t otherwise available for download or streaming. It is stealing pure and simple, and most of the time it isn’t ripping off some big corporation (another lame excuse for theft) but picking the pocket of a developer.

Maybe CNET intended this as some sort of New Year’s joke, in which case it isn’t very funny. MacManus identifies himself as a freelancer, so I imagine he expects to be paid for his work. He should extend the same courtesy to others.

 

Patents: That Word Does Not Mean What You Think It Means

USPTO logo

Yesterday, the internet was abuzz with reports that the U.S. Patent and Trademark Office had “rejected” another Apple iPhone patent. Many commentators jumped to the conclusion that, since this patent figured heavily in Apple’s recent legal victory in a case claiming infringement by Samsung, Apple had been dealt a heavy legal blow. But, it turns out, not so fast. Patent law speaks its own language in which you have to forget about the plain English meaning of words.

What are we to make of a statement like this, in the USPTO finding?

No rejection of the claims, as presently written, are made in this Office action based on the  Hill and Ullmann references because the teachings of those references are essentially cumulative  to the teachings cited in the rejections below. However, in order for claims to be found patentable and/or confirmed in this ex parte reexamination proceeding, the claims must be patentable over every prior art patent and printed publication cited in the order granting the request.

I think I know what all those words mean, but the passage as a whole reads like something from a nightmare version of a reading comprehension test. I am not a patent expert, and I count on folks like the Verge’s Nilay Patel and Matt Macari, intellectual property lawyers by training, to illuminate the dark ways of patent law. And, as Macari pointed out with regard to a similar USPTO ruling on another Apple patent, the rejection of claims following a request from reexamination, also known as a “first Office action,” is the first step in a very long process.

In this case, the challenge was filed by Samsung and, as is the normal practice, its challenge was considered without any response from Apple (that’s what ex parte means.) Macari cites USPTO statistics that such request are granted over 90% of the time. Apple now gets to come in an defend its patent before the UYSPTO–Samsung isn’t actually a party to the case. In a bit under 70% of such cases, some of the claims of the original patent are invalidated in reexamination while the rest are upheld; the patent in question contains 21 claims. About 11% of the time, all claims are rejected, leaving the patent invalid.

Although the USPTO reconsideration order came to light because Samsung filed it as part of its attempt to change or overturn the recent judgment in favor  of Apple, the action is not light to have any impact on that case, at least not any time soon. Under U.S. law, a patent is presumed valid until the USPTO says otherwise. At least for now, the reexamination order should not change anything.

 

The Shape of 2013: Predictions for the Year Ahead

Crystal ball graphic
After 15 years of making predictions, with a track record that would have made you rich if you’d bet on them, I’ve been away from the practice for a couple of years. But as the regulars at Tech.pinions have agreed to end the year with a set of predictions each, I’m back at the game. My best guesses for 2013:

A Modest Rebound for BlackBerry. Like many others, I was prepared to write off BlackBerry during the last year as its market share cratered. And if Windows Phone 8 had really taken off or if Android had made a serious play for the enterprise, it would be very hard to see where there might be room in the market for Research In Motion, no matter how promising BlackBerry 10 looks. But I think there is room for at least three players in the business, and right now the competition for #3 is still wide open. BlackBerry still enjoys a lot of residual support in the enterprise IT community, and some key federal agencies that had been planning to move away from the platform, such as Homeland Security’s Immigration & Customs Enforcement, have indicated they are open to a second look. The challenge Research In Motion faces is that BlackBerry 10, which will be released on Jan. 30, needs to be appealing enough to users, not just IT managers, that it can at least slow the tide of bring-your-own devices into the enterprise.

A Windows Overhaul, Sooner Rather Than Later. Even before Windows 8 launched to distinctly mixed reviews, there were rumors that Microsoft was moving toward a more Apple-like scheme of more frequent, less sweeping OS revisions. Microsoft sometimes has a tendency to become doctrinaire in the defense of its products; for example, it took many months for officials to accept that User Account Control in Vista was an awful mess that drove users crazy. But Microsoft has had some lessons in humility lately and the company knows that it is in a fight that will determine its relevance to personal computing over the next few years. I expect that, at a minimum, Windows 8.1 (whatever it is really called) will give users of conventional PCs the ability to boot directly into Desktop mode, less need to ever use the Metro interface, and the return of some version of the Start button. On the new UI side, for both Windows 8 and RT, look for a considerable expansion of Metrofied control panels and administrative tools, lessening the need to work in Desktop. In other words, Microsoft will move closer to what it should have done in the first place: offer different UIs for different kinds of uses. The real prize, truly touch-ready versions of Office, though, is probably at least a year and a half away.

Success for Touch Notebooks. When Windows 8 was first unveiled, I was extremely dubious about the prospects for touch-enabled conventional laptops. The ergonomics seemed all wrong. And certainly the few touchscreen laptops that ran Windows 7 weren’t very good. Maybe it’s my own experience using an iPad with a keyboard, but the keyboard-and-touch combination no longer seems anywhere near as weird as it once did. And OEMs such as Lenovo, Dell, HP, and Acer are coming up with some very nice touch laptops, both conventional and hybrid. Even with a premium of $150 to $200 over similarly equipped non-touch models, I expect the touch products to pick up some significant market share.

Significant Wireless Service Improvements. We’ll all grow old waiting for the government’s efforts to free more spectrum for wireless data to bear fruit. The incentive auctions of underused TV spectrum are not going to be held until 2014, and it will be some time before that spectrum actually becomes available. The same is true for a new FCC plan to allow sharing of government-held spectrum in the 3.5 GHz band. But the good news is we don’t have to wait. Technology will allow significant expansion of both the capacity and coverage of existing spectrum. Probably the two most important technologies are Wi-Fi offload, which will allow carrier traffic to move over hotspots set up in high-traffic areas, and femtocells and small cells, which can greatly increase the reuse of the spectrum we already have. Unlicensed white space–unused free space between TV channels–should begin to make a contribution, especially in rural areas where TV channels are sparser. And the huge block of mostly idle spectrum that Sprint is acquiring with its proposed purchase of Clearwire will also ease the congestion, probably starting next year. (Stay tuned for a Tech.pinions series on spectrum issues in January.)

Intel Will Make a Major ARM Play. It’s hard to believe today, but Intel was once a major player in the ARM chip business. In 1997, it bought the StrongARM business from a foundering Digital Equipment. Renamed XScale, the Intel ARM chips enjoyed considerable success with numerous design wins as early smartphone applications processors. But XScale was always tiny compared to Intel’s x86 business and in 2006, Intel sold its XScale operations to Marvell. A year later, Apple introduced the ARM-based iPhone. Today, ARM-based tablets are in the ascendancy, x86-based PCs are in decline, and Intel is struggling to convince the world that a new generation of very low power Atom systems-on-chips is competitive. Maybe the Clover Trail SoCs and their successors will gain a significant share of the mobile market, but Intel can’t afford to wait very long to find out. With its deep engineering and manufacturing skills, Intel could become a major ARM player quickly, either through acquisition or internal development.

Maps for iOS: What Does Google Have Against Tablets?

Google maps iPad screenshot

Google’s failure to understand that a tablet is something other than a really big phone is becoming one of the great mysteries of the technology world. The Android tablet business has been crippled by a lack of dedicated tablet apps, a situation that Google has done almost nothing to correct. Now Google has confirmed my worst fears with the release of the long-awaited Google Maps for iOS.

Google Maps for the iPhone is lovely. It’s better than the old Google-based iOS Maps app, adding vector maps and turn-by-turn directions. And it draws on slick search abilities and deep geographic data knowledge, the lack of which can make using Apple’s own Maps app an adventure. And Google Maps integrates transit information (a feature sadly not available in the Washington, DC, area.)

But the iPad is a very different story. For whatever reason, Google did not bother to come up with a separate iPad-optimized version. Like any other iPhone app, Maps will run on the iPad, but like any other iPhone app, it looks ghastly. The picture above shows Google Maps on a third-generation iPad in 2X mode (the alternative would be to display an iPhone-sized image in the middle of the screen.) Scrolling and zooming are not as smooth as on the iPhone, and notice the enormous amount of screen area that is wasted by simply scaling up the various on-screen controls.

This is all rather hard to understand, since Google should have had no trouble developing an iPad version in parallel with the phone edition. Much smaller developers do this all the time. I can only hope that Google will realize that the iPad is something more than a larger iPhone and correct the error quickly.

How To Make Windows 8 Great

Dell XPS Duo 12 Convertible
There has been a lot of discussion here lately, both in posts such as “Why IT Buyers Are Excited About Convertibles and Hybrids” and “Microsoft Surface: How Relevant Are Legacy Apps and Hardware?” and in readers’ comments on them, about the failings and the potential of Windows 8. So inspired by those discussions, here is a radical, if only partially baked, idea: How about a hybrid operating system for hybrid devices?

In Metro (I’m going to go on calling it that until Microsoft comes up with a real alternative), Microsoft has designed a very good user interface for tablets and touch-based apps. The legacy Windows Desktop is still an excellent UI for a traditional mouse-and-keyboard PC. But in bolting the two together in Windows 8 and, to a lesser extent, Windows RT, Microsoft has created a very ugly two-headed calf. The tendency of Metro to pop up while you are working in Desktop, and for Desktop to be necessary for some tasks even while in touch mode, renders both interfaces far from optimal.

Microsoft should do three things. The easiest is to get Metro out of Desktop by allowing booting into Desktop and restoring traditional UI elements, such as a start menu, that were removed from Windows 8. Fixing Metro is harder. Basically, Microsoft has to finish the job by creating features, utilities, and apps that allow the user to do everything in the touch interface. The toughest challenge is Metrofying Office. It would be extremely difficult to recreate all the functionality of Word, Excel, and the rest in a tablet app and almost certainly unwise to try. Instead, Microsoft has to pick a core feature set that can work in a touch interface on relatively small screens and build the applications around these. (If reports are to be believed, Microsoft is doing this for iOS and Android anyway; why not Windows?)

But the really cool thing would be hybrid Windows for hybrids, a shape-shifting operating system designed for a new generation of devices that can convert from traditional PCs to tablets (the forthcoming Surface Pro probably belongs in this class.) Why not an OS that presents the traditional Desktop UI when the device is being used with a keyboard and touchpad or mouse, then converts instantly and automatically to a touch-first Metro-type UI when the device transforms?

The key to making this work is the use of solid state storage, which allows for very fast saving and restoration of state. I envision a system where you could be editing a Word file in Desktop, then switch to tablet mode, where you make some changes to the file in the touch version of Word. When you switch back to Desktop, Word would still be open with your file, but it would include the edits made in tablet mode. I suspect that the Desktop and Metro versions of programs would still have to be different applications and that this would require closing and reopening of files when switching modes. But SSDs can make this happen so quickly that the user will barely notice.
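The save-close-reopen cycle described above can be sketched in a few lines. Everything here is hypothetical: the `Editor` class, the JSON state file, and the `switch_mode` helper are illustrative inventions, not any real Windows mechanism.

```python
import json
from pathlib import Path

class Editor:
    """Hypothetical editor that persists its open document so the
    other-mode version of the app can pick up where it left off."""

    def __init__(self, mode, state_path):
        self.mode = mode                      # "desktop" or "tablet"
        self.state_path = Path(state_path)
        self.document = self._load()

    def _load(self):
        # Restore saved state if a previous session left one behind.
        if self.state_path.exists():
            return json.loads(self.state_path.read_text())
        return {"text": "", "cursor": 0}

    def edit(self, text):
        self.document["text"] += text
        self.document["cursor"] = len(self.document["text"])

    def close(self):
        # On an SSD this write is fast enough to feel instantaneous.
        self.state_path.write_text(json.dumps(self.document))

def switch_mode(editor, new_mode):
    """Close the current-mode app and reopen the same file in the other mode."""
    editor.close()
    return Editor(new_mode, editor.state_path)
```

In this toy model, edits made in tablet mode survive the round trip back to Desktop because both versions read and write the same state file; the hard part in practice would be keeping two real applications’ file formats in sync.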

I’m not suggesting this is at all a trivial job or that it can be done very quickly. The Office project alone is a very large undertaking, one that I can only presume is already underway, although Microsoft has been totally silent about it. There is a great deal of work beyond that, and third-party software vendors would have to get on board with mode-switchable versions of their applications. But the result would be a new and exciting computing experience.

How I Met My Computer: Technologies That Changed My Life

IBM 7090 (IBM Corp.)

I got into computers because I was fascinated by a friend’s programming manual. It was at the University of Michigan, probably in early 1966, when I had my first look at The MAD Manual, a beguiling guide to the Michigan Algorithm Decoder that borrowed from both Mad magazine and Alice in Wonderland. I was hooked, got a student account, and taught myself to program, which in those days meant pounding out your code on an IBM 026 keypunch, turning in your program deck, and getting output in a day or so.

MAD, a derivative of the now largely forgotten Algol language, was uncommonly elegant and powerful for its day. But the IBM 7090 mainframe on which it ran was soon replaced by a monster System/360-67. It came with time-sharing terminals, but no MAD, so we all had to learn Fortran IV and some 360 Assembler. (Bonus question: What was a Green Card? Hint: It had nothing to do with immigration law.) By then I was working on a thesis that used a big data set and while we had access to lots of canned subroutines to do data analysis (the collection became SPSS), you had to write the code to glue them together and handle input and output. I became a programmer–not a terribly good one–in spite of myself.

Apple 2 (Columbia University)

After graduating from college, I mostly left computers aside for a few years. Journalists in those days wrote on typewriters–manuals, mostly. Editors cut and pasted, literally, with scissors and rubber cement, then sent their copy off to typesetting or gave it to a Teletype operator for transmission. But in 1979 or ’80, I played with an Apple ][ at a friend’s house and knew I had to have one. I didn’t know what I might do with it, but I knew I needed one. I got an Apple ][+ with a monochrome monitor and two floppy drives. One of the first things I did with it was the “shift-key mod,” jumpering a pin on the keyboard connector to a pin on a chip on the motherboard; this allowed the Apple to type and display lower-case letters.

I spent many happy hours with my ][+; I mastered Zork, my kids learned to program on it, and it was the only computer I ever understood at a deep level. For some reason, I still remember that you generated a beep from the internal speaker by writing to memory location $C030, and you could make a kind of music by controlling the frequency of POKEs. By the mid 80s, I was doing most work-related stuff on an MS-DOS machine, but through many successor PCs and Macs, I never lost my affection for that original Apple. For me, the personal computer has never again been as personal.

XyWrite III screen shot

By the mid 1980s, BusinessWeek’s New York editorial operations were running on a VAX-based Atex system but bureaus were still using typewriters. The time had come to computerize. We had no IT staff resident in Washington. I didn’t know much, but I knew more than anyone else, so the job of supervising setup of the computers and installation of a local area network fell to me. We used proprietary 3Com networking software and servers. But the key tool was XyWrite, a DOS text word processor created by folks from Atex. It was by far the most advanced word processor of its time, fast, flexible, and powerful. It was ideally suited to the needs of the publishing industry and let us mimic many of the functions of the expensive Atex system on cheap hardware. XyWrite was amazingly customizable. An acquaintance created a XyWrite template that could be used to create weaving drafts for setting up looms. Internally, we customized it to the point where a story in XyWrite could be fit against a magazine layout.

The big problem was that our LAN in Washington had no way to communicate properly with the Atex system in New York. We could send files into the system and it sent output back to our printers. The two systems communicated, after a fashion, using custom software and hardware created by a company that had long since disappeared. No one knew quite how it worked and we just had to treat it as a black box. The crowning achievement of my career as a developer was overseeing creation of a program that solved the communication problem by making our LAN look like a single Teletype machine. Incoming messages were received by a server, parsed for addresses (four-character Teletype codes), and in addition to being printed, were directed to the inboxes of recipients on our 3Com local-only mail system. It was a hideous kludge, but it worked and was used for several years.
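The routing logic is simple enough to reconstruct in outline. To be clear, this is a sketch from memory, not the original program: the `TO:` line format and the sample four-character codes are assumptions invented for illustration, and the sketch covers only the mailbox side, not the printing.

```python
import re
from collections import defaultdict

# Assumed message format: an address line like "TO: WASH" at the start
# of a line. The real wire format is long gone; this is illustrative only.
ADDRESS = re.compile(r"^TO:\s*([A-Z]{4})\b", re.MULTILINE)

def route(messages, directory):
    """File each incoming message in the inbox of every recipient whose
    four-character Teletype code appears in its address lines."""
    inboxes = defaultdict(list)
    for msg in messages:
        for code in ADDRESS.findall(msg):
            if code in directory:
                inboxes[directory[code]].append(msg)
    return dict(inboxes)
```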

cc:Mail box

In time, we moved to a real email program, cc:Mail, that not only linked all our offices but provided gateways that connected us to the world. I had been using MCI Mail and CompuServe mail for some time, but cc:Mail (later acquired by Lotus) was a huge breakthrough. Without the help or, indeed, the knowledge of our corporate IT folks, we set up an internet connection with a small local ISP and an SMTP gateway. (I also registered the domain mh.com. It’s still owned by McGraw-Hill, but inactive. They ought to sell it–two-letter .com domains are worth something.) Email quickly went from being a better way to send messages internally to a primary way of doing business. Our network also gave us very early access to the rest of the internet, including the nascent World Wide Web, though it was some years before that became very useful.

In 1994, I gave up managing this stuff (as a sideline to my day job as deputy bureau chief) and took up writing about technology full time. Finally what had been more or less a hobby for nearly 30 years became my occupation. It was a time of tremendous change in the personal technology business, which was just on the cusp of moving from early adopters to a true mass market. (My very first “Technology & You” column took a look at the updated Apple Newton MessagePad 110 and a would-be competitor called the Motorola Envoy. I presciently predicted that such devices required reliable, reasonably fast wireless communications to be useful. Less prescient was my belief that the Magic Cap operating system that powered the Envoy had a future.)

In all my years reviewing devices, there were very few products that I believed at first glance would really change the game. One turned up in early 1996, when Ed Colligan stopped by my office with an early production unit of the Palm Pilot. Those first Palms had no wireless communications, but the computer sync (over an RS-232 serial cable!) was, for its time, a masterpiece of simplicity, as was the device itself. It was a very Apple-esque design–many of the Palm folks had deep roots in the Apple world–and it set the standard for what Apple would accomplish a decade later with the iPhone and iPad.

BlackBerry 850 photo (Research In Motion)

The second came in 1999, when Mike Lazaridis showed up with the first BlackBerry. It was basically a two-way pager–it ran on a paging network–that differed from existing devices in two critical ways: It had a tiny, curved keyboard that you could actually type on with reasonable speed. And it had a network back end that reliably and securely exchanged mail with corporate servers. The BlackBerry, probably more than any other device, changed the way I and millions of others worked. It meant that for better or worse, if you had network coverage–and the coverage of that slow pager network was in many ways more ubiquitous than today’s 3G and 4G networks–you were in the office. At a time when remote access to corporate networks from a laptop was a very iffy thing, this anytime, anywhere connectivity was a huge breakthrough.

The last few years in tech have really been wonderful ones, but I have been saddened by the demise of Palm and the decline of Research In Motion. These are companies that produced products that changed my life, and the world.


Why Apple Manufacturing Needs Few Workers

Old photo of women on assembly line (National Park Service)

Apple CEO Tim Cook’s announcement that the company would do some Mac assembly in the U.S. brought on a flurry of publicity vastly disproportionate to the importance of the development. It’s good that manufacturers see opportunities for U.S. operations for a variety of reasons, but a big surge of employment isn’t one of them. Dan Luria, a labor economist with the Michigan Manufacturing Technology Center, was quoted by Bloomberg as saying that the Apple operation is likely to add only 200 jobs.

That’s not surprising to anyone who has visited a modern manufacturing facility, though it may be to those who have seen only pictures of crowded Chinese assembly lines. Most factory work these days, especially in high-tech operations, is done by machines, not people (this is how a manufacturing company like Intel achieved revenues of more than half a million dollars per employee last year.)

The change is most striking in electronics assembly. Circuit board manufacture used to require humans to mount components on boards and solder them in place. Today, components have shrunk to the point where it is difficult at best for humans to place them with sufficient accuracy and impossible to solder by hand. Instead, high-precision robots place the parts on boards, which are then soldered in a quick trip through a reflow oven. Many Chinese factories still use lots of people for final assembly jobs because labor has been cheaper than robots; this is changing fast as Chinese wage rates rise.

A narrowing wage differential is one reason manufacturing in the U.S. is becoming more attractive. Rising shipping costs are another. As Quentin Hardy wrote in the New York Times Bits blog:

“The labor cost on a notebook, which is about 4 to 5 percent of the retail price, is only slightly higher than the cost of shipping by air. Soon even that is likely to change because of the twin forces of lower manufacturing costs from automation and higher transportation costs from rising global activity.”

The good news is that while the jobs are fewer, they are much better than most old factory work. Machines have taken over the heavy, dirty, dangerous jobs. (During my one summer of factory work, I spent a couple of weeks on the shipping line, sealing boxes and applying shipping labels and postage. Back then, this was all done with glued tape and labels and I ended each shift covered head to waist in glue. I would have paid the robot myself to escape.) The jobs that remain are more for technicians than operatives. They require higher skills and generally offer higher pay and certainly better working conditions.

Why Windows 8 Drives Me Nuts

Windows 8 screen shot

I had a hard drive fail in a couple-year-old ThinkPad this week, so I decided to use the opportunity to install Windows 8 on a completely clean system. The installation was painless except for a bit of difficulty in getting Wi-Fi working. But there was one problem: the system went to sleep after an annoyingly short interval.

I’ve changed this setting dozens of times on previous versions of Windows. In Windows 7, you select Control Panel on the Start menu, choose Power Options, and click on “Change when the computer sleeps.” This works, albeit in a clunky way, in Windows 8. You open Desktop, bring up the Charms bar, select the Settings charm, and click Control Panel. It takes a few extra clicks and is not at all intuitive, but it’s not too bad once you have figured it out.

But it seems to me that if Metro–or whatever Microsoft wants us to call it–is the user interface of the future, there ought to be some way to perform a basic function like this without falling back on the desktop. This is especially true on a Windows 8 tablet, where the touch-unfriendliness of the Desktop becomes a real issue.

The best I could do to stay in Metro was: From the Start screen, bring up the Charms bar and select the Search charm. Pick Settings as the search domain and start typing “sleep.” “Change when the computer sleeps” pops up; click it and the control panel opens. Of course, at this point, you are back in Desktop. Again, this method of performing a simple task seems totally unintuitive, especially since if you type “screen” or “display” in the search box you are not offered the sleep option.
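For anyone comfortable with a command prompt, there is a third route that skips both UIs entirely: the `powercfg` utility that ships with Windows. The timeout values below (30 and 15 minutes) are just examples.

```shell
REM Sleep after 30 minutes when plugged in, 15 minutes on battery.
REM A value of 0 disables sleep for that power state.
powercfg /change standby-timeout-ac 30
powercfg /change standby-timeout-dc 15
```

That this is arguably the quickest method of all rather proves the point about the graphical routes.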

This is just one more example of how Windows 8 often feels like two operating systems roughly bolted together. If you could work consistently in one of the UIs, say Desktop on a conventional laptop and Metro on a tablet, Windows 8 wouldn’t be bad. But if there’s a way to avoid jumping back and forth (without resorting to third-party UI modifications), I haven’t found it. And that makes Windows 8 a trying experience.


Microsoft, IBM, and AT&T: History Comes Around

IBM, AT&T, and Microsoft logos

In his post “Why the Wheels Are Falling Off at Microsoft,” John Kirk paints a bleak picture of the company’s future. It got me thinking about a relevant bit of history: how rich companies handle existential challenges. Around 1990, IBM and AT&T found themselves in similar, difficult positions. The iconic companies had been among the dominant forces of the 20th century, but their world was changing in very unpleasant ways. Each had been through a long and wrenching antitrust battle with the government; AT&T’s loss cost it the local phone business in a breakup, IBM’s victory cost it more than a decade of heavy distraction. Each was seeing its core business eroded by technological change: Satellites and new networking technologies were lowering the barriers to entry into AT&T’s lucrative long distance business, while minicomputers and PCs were eating away at IBM’s mainframe dominance. But each company also had a tremendous advantage–the enormous cash flow from its legacy businesses could buy the time needed for reinvention. It’s what happened next that is important for the future of Microsoft.

IBM turned to new leadership, hiring Louis Gerstner, who had earned his stripes at RJR Nabisco and American Express. He put IBM through a meat grinder that included the dumping of whole divisions and massive layoffs of employees, many of whom had been with the company for years. But the IBM that emerged was fierce and focused, ready to take advantage of a booming technology market. Today IBM is again one of the country’s most successful companies.

AT&T, by contrast, used its money for what turned out to be a calamitous series of acquisitions. The post-breakup AT&T desperately wanted to get into the computer business and in 1991, it bought NCR Corp. for $7.4 billion. The company launched an unsuccessful series of minicomputers and lost billions getting into, then out of, the PC business. NCR was spun out in 1997. In 1994, AT&T bought the two-thirds of McCaw Cellular it didn’t already own for $11.5 billion. This acquisition, too, withered under new ownership and AT&T ended up spinning the wireless business out as an independent company that eventually became part of Cingular.

In the most humiliating deal of all, AT&T in 1998 bought Tele-Communications Inc., the country’s second-largest cable operator, for $48 billion. After spending many billions to upgrade the network, AT&T sold its cable operations to Comcast for $45 billion. These failed attempts to get into new businesses left AT&T an empty husk with a proud history, a valuable brand, and an aging backbone network. In 2005, SBC, a company born of the merger of former Bell System subsidiaries, bought what was left of its former corporate parent and assumed its name. The AT&T name and its T stock symbol lived on, but the company founded by Alexander Graham Bell was gone.

Like AT&T and IBM, Microsoft was battered by a long antitrust battle with the government. Like them, it is having serious problems coming up with an adequate response to technological and competitive change eating away at its core businesses. And like them, it still has a lot of money coming in that will make a transition possible.

The question is, which model will Microsoft follow, AT&T or IBM? Will it emerge as a chastened, perhaps smaller, but very competitive company? Or will it just slowly fade away? The money gives it time to fix things, but it has to make key decisions about what sort of future it wants soon, and whether the leadership the company now has can get it there.

Can Big Data Make Us Healthier?

Photo of Mobile Health Summit exhibition

I spent some time this week at the Mobile Health Summit, an annual Washington event featuring the latest in mobile health-related technologies. The exhibition hall was filled with sensor-based devices that can track blood pressure, body weight, blood glucose, pill-taking behavior, and just about any other facet of human life. There was even a $199 sled from AliveCor that turns an iPhone into an electrocardiogram.

Nearly all of these devices feature either Wi-Fi or cellular wireless capability, making them part of an ever-growing machine-to-machine network, also known as the Internet of Things. There is little doubt that they are becoming an important part of individualized treatment that can help keep us healthier, albeit at a sometimes creepy loss of privacy. (One company was showing connected motion sensors that could alert a caregiver if they didn’t sense you moving about your room when you were supposed to be up and about. I can see the usefulness, but still find the idea disturbing.)

To be truly useful, the data from these sensors should feed into the patient’s medical record in a way that gives a health care provider a big-picture idea of what is going on. Infrastructure providers, including Verizon, AT&T, and Qualcomm, are building systems that can consolidate data from a variety of sensor sources.

But the question in my mind is whether we can go beyond individual medicine and use the staggering mass of data that will be produced by our quantified futures to improve health in general. The practice of medicine remains, in many ways, stunningly unscientific. Treatments are often selected without solid statistical knowledge of outcomes because data is hard to come by. Many decisions are based more on instinct and custom than on evidence. What studies do exist too often reach sweeping conclusions on the basis of painfully small numbers of patients.

I have no doubt that researchers could gain tremendous insight into medicine, particularly what does and does not work to keep us healthy, if they could use big data approaches to study treatments and outcomes from an aggregation of the information that is starting to flow. However, many challenges–technical, business, and regulatory–have to be met before this can happen.

Today, what data does exist is likely to be stored in completely disconnected silos. Changes in technology, insurance company practice, and government regulations are forcing the adoption of electronic medical records (EMR) at a rapid rate, but EMR systems often cannot talk to each other. If you land in the hospital, you will be very lucky if its records system can communicate directly with your doctor’s. The government’s Centers for Medicare and Medicaid Services, the payment agent for two massive programs, has a vast collection of data on treatments and outcomes hidden away in an assortment of mutually incompatible legacy databases. (CMS has launched a modernization program mandated by Obamacare, but it could take years to bear fruit.)

There are many obstacles in the way of turning a big collection of individual medical records into useful big data. An obvious one is privacy. Medical records are about as sensitive as personal data gets and we have to make sure that the identity of individuals is not exposed when the information is aggregated. There are already extensive protections in place, most significantly in the U.S. the Health Insurance Portability and Accountability Act (HIPAA). Some experts, notably Jane Yakowitz Bambauer, fear that excessive concern with making sure that data remain anonymous threatens to cripple valuable research.

There are also major issues in making sense of the data. If you are researching outcomes, it does help to be able to find all the patients with the condition you are studying. But the metadata accompanying today’s medical records, often designed more for the needs of insurance companies and other payers than for doctors, can make identifying the relevant data hard. “People are carefully coding the financial side, but that provides very little help on the clinical side,” says Dr. David Delaney, chief medical officer for SAP Healthcare. A new system called ICD-10 is in use in much of the developed world, but won’t be fully implemented in the U.S. for another two years.
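To make the coding problem concrete, here is a toy sketch of the kind of query outcomes researchers would want to run. The records and outcome labels are invented for illustration (E11 is the ICD-10 block for type 2 diabetes mellitus), and real EMR data, with its payer-oriented metadata, is vastly messier.

```python
from collections import defaultdict

# Hypothetical, already-anonymized records; real data rarely carries
# diagnosis fields this clean.
records = [
    {"patient": "A", "icd10": "E11.9",  "outcome": "improved"},
    {"patient": "B", "icd10": "E11.65", "outcome": "unchanged"},
    {"patient": "C", "icd10": "I10",    "outcome": "improved"},
]

def outcomes_by_condition(records, code_prefix):
    """Tally outcomes for every record whose ICD-10 code falls under
    the given prefix (e.g. "E11" for type 2 diabetes)."""
    tally = defaultdict(int)
    for r in records:
        if r["icd10"].startswith(code_prefix):
            tally[r["outcome"]] += 1
    return dict(tally)
```

The point of the sketch is how much it depends on consistent clinical coding: if one system recorded the diabetic patients under ICD-9 or under a billing code instead, the simple prefix match above would silently miss them.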

Venture capitalist Vinod Khosla, writing for Fortune, argues that data analytics will eventually replace 80% of what doctors now do. Fortunately for his prediction, Khosla does not put a date on “eventually.” I have no doubt the time will come, but given the myriad difficulties, I suspect it will take a lot longer than any of us would like.