Cord-cutters Beware: It’s Going To Get Expensive

Photo of cable cutting (Fotolia)

There are many reasons why television viewers choose to cut the cord, to be among the 15% or so of Americans who get by without a cable or satellite feed. But probably the most important is the desire to pay less for content by grabbing it out of the air or finding it on the internet.

I have some bad news for penny-pinching cord-cutters. The more people choose to do without a cable subscription, the more expensive the over-the-top alternative is going to become.

The economics of this are stark. Charlie Ergen, CEO of Dish Network, describes the current U.S. environment as “90 million [households] paying $1,000 a year. I don’t think in my lifetime that number goes up.”

Broadcast stations still rely heavily on advertising, but the retransmission fees paid by cable and satellite providers, estimated at about $3 billion a year, are an increasingly important part of their revenues. That’s why broadcasters are so scared of, and are fighting so hard against, Aereo, which provides over-the-air TV over the internet without compensating the stations.

Basic and lower-tier cable-only networks also get advertising revenue, but their rates are often much lower and cable fees represent a larger part of their income. Premium channels depend mainly on subscription fees. Except for the relatively small amounts they pay through subscription internet services such as Netflix and Hulu+, cord cutters are not part of this economy.

Comcast, which plays in pretty much all aspects of the TV game, provides a good look at these economics. Last year, its cable operations generated $20 billion in revenues from video services. Of that, by far the biggest chunk, $8.4 billion, went to pay for content. Its NBC Universal unit received $8.8 billion from its cable channels, $8.2 billion from broadcast (including both the NBC network and 10 owned-and-operated local stations), $5.1 billion from movies, and $2.1 billion from theme parks.

By contrast, total Netflix revenues last year were $945 million, the great bulk of it from 27 million U.S. streaming subscribers.
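A back-of-the-envelope comparison, using only the figures quoted above (Ergen’s numbers are round estimates, so treat the results accordingly), shows just how lopsided these revenue streams are:

```python
# Rough comparison of the pay-TV revenue pool vs. Netflix,
# using the figures cited in this column (estimates, not audited numbers).
pay_tv_households = 90_000_000     # Ergen's estimate of U.S. pay-TV homes
annual_fee = 1_000                 # dollars per household per year
pay_tv_total = pay_tv_households * annual_fee          # $90 billion

netflix_revenue = 945_000_000      # dollars per year, per the article
netflix_subscribers = 27_000_000   # U.S. streaming subscribers

print(f"Pay TV pool:            ${pay_tv_total / 1e9:.0f} billion/year")
print(f"Netflix revenue:        ${netflix_revenue / 1e9:.3f} billion/year")
print(f"Pay TV / Netflix ratio: ~{pay_tv_total / netflix_revenue:.0f}x")
print(f"Per Netflix subscriber: ~${netflix_revenue / netflix_subscribers:.0f}/year")
```

Even granting generous error bars, over-the-top revenue is roughly two orders of magnitude smaller than the pay-TV pool it would have to replace.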

Somewhere, money will have to be found to feed the content beast. Otherwise, it will be goodbye Game of Thrones, hello Survivor: Mall of America.

A number of things are going to have to happen if a significant fraction of customers cut the cord. There’s going to be a lot less free stuff available and the paid content is going to cost more. TV show owners who now regard whatever they get from Netflix for old seasons of Mad Men as gravy will have to drive up the cost if that becomes the primary channel of distribution—and the $9 all-you-can-eat monthly Netflix subscription will be a thing of the past.

HBO will eventually decide to sell content to viewers without cable companies acting as intermediaries; it already has the infrastructure in place with HBO Go. But don’t expect the cost to be much less than the $10 a month or so you’d currently pay for an HBO subscription (the bundling practices of both the cable operators and the content providers make it very hard to figure out the charge for any particular service). Similarly, access to ESPN sports is going to cost at least as much as the $5 or so per customer per month that cable operators pay for the service.

A second thing that will happen is that content providers are going to become a lot less sanguine about account sharing. Trust me, the services know that you and your three closest friends are sharing one Netflix streaming account or that your college-student son has “borrowed” your Verizon FiOS login credentials to watch HBO Go and ESPN 3. For now, it’s more trouble than it is worth to stop these practices. But as over-the-top revenues grow in importance, the content providers’ tolerance of such cheating will shrink. A crackdown is inevitable.

Fortunately, getting your television over-the-top has some advantages other than price. You get to watch what you want, where you want, and on the device you want. But all those charges are going to add up, and by the time you’re done, you could be paying as much as you do for a cable subscription. Or more. The byzantine cross-subsidies created by bundled prices mean that at least some customers are getting more than they pay for. In the end, that one payment a month for a big bundle of content may look a lot better than it does today.

 

Office for Tablets: Delay Could Be Death

Office logo

The usually very well-informed Mary Jo Foley reports at ZDNet that we won’t see major improvements in Microsoft Office for tablets until next year: spring of 2014 for Windows RT, and fall for the long-awaited iOS and Android versions. If true, this is big trouble for Microsoft’s cash-cow Office franchise.

The huge threat is that this long wait gives everyone a year to 18 months to continue to learn to live without Office. In tech time, that is more-or-less forever. The longer people go without Office on their tablets and the more that tablets become the dominant computing tools, the less people will want or need the Microsoft software. It will hold on in the enterprise in those roles where Office is indispensable, but that will be a steadily shrinking market.

The bizarre thing is that Microsoft foresaw the future of tablets with the development of the Tablet PC in 2001, but utterly failed to recognize their importance once Apple released the iPad in 2010. The brand-new version of Office relies on Windows’ classic Desktop user interface, and its applications are unsuited for use even on Microsoft’s own tablets unless they are effectively configured as laptops with keyboards and a stylus or mouse. Outlook, a key component of the Office suite, is not available at all for RT, the tablet version of Windows. (Foley says Outlook RT is due in the fall of next year.)

As the latest sales figures suggest, the world is moving decisively to tablets. To the extent that people need Office-like apps, companies more nimble (and less riven by internal politics) than Microsoft will provide them. If Microsoft doesn’t get around to releasing tablet versions of the applications until the fall of 2014, it is likely that very few people by then will care.

The School Standards Debate: Time for Tech To Weigh In

School kids (© Monkey Business - Fotolia)

 

Tech people are very fond of whining about the U.S. educational system, complaining that it is not producing the sort of workers they need. With a few notable exceptions–Bill and Melinda Gates and Dean Kamen come quickly to mind–they are much less good when it comes to doing anything about the problems of schools.

OK, here’s your chance. It won’t even cost you anything–calls for better education seem to die quickly in places like Silicon Valley when the talk turns to taxes–except some leadership.

The Common Core State Standards are the most important school reform to come along in many years. The standards for mathematics and language arts lay out what we expect students to learn, year by year, from kindergarten through high school. They are not a curriculum, but a set of mileposts for what a curriculum should cover, and they inject a badly needed dose of rigor into education. If you have any interest in K-12 education, you should take the time to read them here.

Despite a studied effort by their authors and sponsors at the National Governors Association and the Council of Chief State School Officers to avoid political pitfalls, the standards have come under increasing attack from both the left and the right. CCSS was initially adopted by 48 states and the District of Columbia, but three states have withdrawn their support and there is pressure in many others to do the same.

On the left, opposition to CCSS is closely tied to opposition to standardized testing, based on the assumption, not necessarily warranted, that the standards will lead to increased testing. The anti-testing advocacy group FairTest argues:

More grades will be tested, with more testing per grade. [No Child Left Behind] triggered an unprecedented testing explosion (Guisbond, et al., 2012). The Common Core will compound the problem….

Lured by federal funds, states agreed to buy “pigs in a poke.” The new tests do not yet exist except for a few carefully selected sample items, so it is not possible to judge their quality. Nevertheless, states are committing large sums of taxpayer money for the equivalent of “vaporware”—much hype, little substance. New drugs must be carefully tested before release lest they do more harm than good. Yet, these new measures are being pushed through with at most one year of trials. There’s no guarantee that they will function as advertised and many reasons to believe they will not.

The argument that more study is needed is especially pernicious. CCSS has been in development for more than a decade and, unlike the radical math and science curriculum reforms of the early 1960s (remember New Math?), the new standards are mostly a compilation of best practices already in use. Then there’s the obvious paradox of demanding more evaluation while opposing the testing that could provide the data. (The National Education Association and the American Federation of Teachers, which oppose the use of standardized tests to assess teacher performance, both are on the record in support of CCSS.)

But the truly fevered opposition to CCSS is coming from the right, and this is what is threatening implementation in the states, largely through interference by state legislatures. The main objection, despite evidence to the contrary, is that CCSS represents a federal takeover of local education. Then there’s the complaint that CCSS is untested and, at the same time, that the government is trying too hard to test it. Tiffany Gabbay, writing on the conservative site The Blaze, says:

According to the conservative think tank American Principles Project, Common Core’s technological project is “merely one part of a much broader plan by the federal government to track individuals from birth through their participation in the workforce.” As columnist and author Michelle Malkin has pointed out, the 2009 stimulus package included a “State Fiscal Stabilization Fund” to provide states incentives to construct “longitudinal data systems (LDS) to collect data on public-school students.”

With attacks, often ill-informed (or completely uninformed; many of the people attacking CCSS show no sign of any knowledge of what the standards contain), coming from all sides, CCSS could use some friends, and I think it’s time for the tech industry to step up. I am much more familiar with the math standards than the language arts ones, both because math is my area of interest and because, by the nature of the beast, the language arts standards are vaguer and harder to interpret. The math standards, if properly implemented, would represent a huge step forward. They aim both at increased computational skills, largely deprecated in the standards in use for the past 25 years, and at a deeper understanding of the connectedness of critical topics in mathematics. Curriculum based on these standards should produce students better able both to do math and to think more deeply and critically.

This is exactly what tech companies are looking for in their future labor force. So instead of complaining about the deficiencies of American students, get out there and work for some constructive change.

 

Fox, Aereo, and the End of TV

Aereo antenna array (Aereo, Inc.)

News Corp. Chief Operating Officer Chase Carey’s threat to pull the Fox network from the airwaves if Aereo wins its legal battle to retransmit over-the-air TV signals without paying for them is probably nothing more than bluster. But the fact that he could make such a threat with a straight face, and in front of the National Association of Broadcasters, no less, is a clear indication that the end of TV as we have known it is approaching.

The broadcast networks–Fox and the old big three of ABC, CBS, and NBC–are still tremendously important players in the TV world. Far more people watch their content than any of the cable-only channels. They still dominate news and live sports (though ESPN and, to a lesser extent, Fox have made significant inroads in the latter).

But over-the-air is no longer how they reach most of their viewers. And while we still think of broadcast TV as ad-supported, the retransmission consent fees paid by cable carriers–and avoided by Aereo–have become a tremendously important source of revenue to local stations. In a sense, they already are pay TV stations from the point of view of most viewers, and that is why Carey’s threat is not an empty one.

What would it mean if over-the-air broadcast TV disappeared? For one thing, we could forget about the hideously complex incentive auction now being planned by the FCC to free a bit of the prime spectrum now occupied by TV stations for wireless data use and just turn the whole thing over to wireless.

Some of the more interesting consequences would be for politicians. Members of Congress depend on local stations to keep their names and faces in front of voters, especially as local newspapers fade away. Politicians are also the beneficiaries of regulations that require local stations to sell advertising to candidates for federal office at the lowest rates they charge any customer. In fact, if stations stopped broadcasting over the air, the Federal Communications Commission would lose essentially all ability to regulate their content, rates, or much of anything else.

Even the most anti-regulation Republican doesn’t really want that to happen. That’s why Carey’s real audience may have been Congress. If Aereo wins in court, as seems increasingly likely, the broadcasters are likely to turn to Congress for relief. Carey’s statement was likely a shot across the bow in that fight.

But history has shown us that depending on favorable treatment from government to save you from the forces of change can work, but only for a little while. The times they are a-changing for television.

How To Beat Patent Trolls: Fight

Troll image (© DM7 - Fotolia.com)

When faced with a lawsuit that has even a slim chance of success, lawyers almost always urge businesses to settle rather than fight. Litigation is extremely expensive, and unless the suit raises an issue of principle that is important to defend, the game simply isn’t worth the candle.

Unfortunately, in the world of patents, this attitude has led to a proliferation of patent trolls, companies that buy up unused and generally vague software patents and then claim infringement against businesses, often smaller companies without big legal budgets, that actually make things. The U.S. District Court for the Eastern District of Texas, which has been remarkably friendly to trolls, is the heart of the racket.

It would be nice if the U.S. Patent and Trademark Office would revoke the thousands of ill-considered patents it granted, especially in the early days after software patents were first allowed. It would also be nice if Congress changed the laws to make it harder for so-called non-practicing entities to engage in a legal shakedown. But neither of these things is likely to happen any time soon.

So it is time for businesses to stand up and fight. Patent trolling will persist as long as it is a profitable activity. By raising the cost to the trolls, admittedly at some short-term cost to themselves, businesses can destroy the economics of the shakedown.

Rackspace Hosting, an infrastructure-as-a-service company beset by trolls, is leading the way. Last month it won a signal victory by obtaining summary judgment against a company that claimed a patent on rounding off floating-point numbers. (Rackspace was supported in the case by Red Hat Software, whose Linux implementation contained the allegedly infringing code.)

Now Rackspace has gone on the offensive, filing a breach of contract suit against “patent assertion entities” Parallel Iron and IP Nav. The case, described in detail in this Rackspace blog post, is legally complicated. Parallel Iron is suing Rackspace for infringement of a patent it claims covers the open-source Hadoop Distributed File System. Rackspace argues the suit violates the terms of an earlier stand-off agreement it negotiated with Parallel Iron and IP Nav.

Rackspace, which says it has seen its legal bills rise 500% since 2010, explains why it has decided to fight:

Patent trolls like IP Nav are a serious threat to business and to innovation. Patent trolls brazenly use questionable tactics to force settlements from legitimate businesses that are merely using computers and software as they are intended. These defendants, including most of America’s most innovative companies, are not copying patents or stealing from the patent holders. They often have no knowledge of these patents until they are served with a lawsuit. This is unjust.

The rest of the tech industry shouldn’t leave this battle to the Rackspaces of the world. In particular, big companies with deep pockets should stop paying trolls to go away, a tactic that makes sense in the short run but is ruinous in the long run. As independent software developer Joel Spolsky argues:

In the face of organized crime, civilized people don’t pay up. When you pay up, you’re funding the criminals, which makes you complicit in their next attacks. I know, you’re just trying to write a little app for the iPhone with in-app purchases, and you didn’t ask for this fight to be yours, but if you pay the trolls, giving them money and comfort to go after the next round of indie developers, you’re not just being “pragmatic,” you have actually gone over to the dark side. Sorry. Life is a bit hard sometimes, and sometimes you have to step up and fight fights that you never signed up for.

 

Facebook Home: The Death of Android

Facebook Home chat heads (Facebook)

As a core operating system, Android is thriving. As a brand–and a user experience–it is dead. Facebook just killed it.

Android’s brand demise has been coming for a long time. Phone makers have been taking advantage of Android’s open architecture to install their own modified versions, such as Samsung’s TouchWiz. The most recent Android launches, the Samsung Galaxy S 4 and the HTC One, have barely mentioned Android. And in announcing Facebook Home, Mark Zuckerberg talked about Android only to say that Facebook was taking advantage of the openness of both Android and the Google Play Store to let anyone with a fairly recent Android phone replace the Android experience with the Facebook Home experience.

I don’t know how many people will want Facebook completely dominating their phone experience. I’m out of the target demographic by more than a generation, so I’m probably a poor judge. But I’m pretty sure Facebook’s announcement won’t be the last of its sort. Maybe we’ll see a Twitter Home, or a Microsoft Home built around a growing suite of Windows/Skype/Xbox/SkyDrive products.

All of this seems to leave Google in some difficulty. Facebook is a direct competitor to Google’s primary business of delivering customers’ eyeballs to advertisers. Google’s considerable difficulty in monetizing Android just got considerably worse, and things are likely to go downhill from here.

Of course, one thing Google could do, at the risk of being evil, is lock down future releases of Android. That, however, might well be locking the barn door too late. Open-source and free (as in speech) versions of Android are out there, and a locked-down Google release might well come to be viewed as just another fork of Android.

Google never seemed to know just what it wanted to do with Android. Now it may be too late to figure it out.

The TV Cartel Is Starting To Crack

Aereo antenna array (Aereo, Inc.)

By any reasonable standard, Aereo is a ridiculous service. But the rules and contracts that cover the distribution of television content are anything but reasonable. And that means that Aereo, silly as it is, could be the beginning of the end for the cartel of studios, sports leagues, broadcasters, networks, and cable and satellite distributors that has a headlock on content.

Aereo, which is backed by IAC/InterActiveCorp and its wily CEO, Barry Diller, invented a new way of distributing broadcast television. If you subscribe (currently available only in New York) for $12 a month, you are assigned a tiny TV antenna in an array of antennas (pictured above) in a Brooklyn data center. The content–all over-the-air broadcast stations in the area–is converted to an internet stream and delivered to your iPhone or iPad, computer browser, Apple TV, or Roku box. The service also functions as a DVR in the cloud so you can time-shift your viewing.

The silliness is that broadcasters ought to be able to cut out the middleman and stream broadcasts themselves. But local stations can only stream their own content, mostly local news. Networks could stream a lot more, but only content they own outright or have the streaming rights for (a restriction that excludes most sports and much else). Besides, local stations, networks, studios, sports leagues, and cable companies are locked into a system of contracts, often long term, which no one wants to break because, in the immortal words of Milo Minderbinder*, “everyone has a share.”

It’s obvious why Aereo poses a threat to this cozy relationship. So it’s not surprising that pretty much every station in New York filed suit, claiming that Aereo violated their copyrights. They argued that Aereo was essentially acting as a cable company and was required to negotiate what is called “retransmission consent,” a privilege that typically requires a hefty fee. But Aereo carefully exploited every corner and loophole in the law. Those individual antennas–technically quite unnecessary–allowed it to argue that it was merely piping over-the-air content to customers from their own antennas. And it made sure to deliver content only to subscribers within stations’ service areas, thereby honoring local exclusivity requirements.
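Aereo’s legal design can be pictured as a simple resource-allocation scheme. This toy Python model (the names and numbers are mine, not Aereo’s) captures the two constraints the company engineered around: one dedicated antenna per subscriber, and delivery only within the broadcast market:

```python
# Toy model of Aereo's legal architecture (illustrative assumptions only):
# each subscriber is assigned a dedicated antenna, and streams are delivered
# only to subscribers inside the broadcast market.
class AntennaArray:
    def __init__(self, size, market):
        self.market = market
        self.free = list(range(size))   # unassigned antenna IDs
        self.assigned = {}              # subscriber -> dedicated antenna ID

    def subscribe(self, subscriber, subscriber_market):
        if subscriber_market != self.market:
            # honoring local exclusivity: no out-of-market delivery
            raise ValueError("subscriber outside the broadcast market")
        if subscriber not in self.assigned:
            # the key legal maneuver: a private antenna per subscriber
            self.assigned[subscriber] = self.free.pop()
        return self.assigned[subscriber]

array = AntennaArray(size=1000, market="New York")
alice = array.subscribe("alice", "New York")
bob = array.subscribe("bob", "New York")
assert alice != bob   # no two subscribers ever share an antenna
```

One shared antenna would have made the streams look like a public performance; thousands of private ones let Aereo argue that each subscriber was simply watching her own antenna.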

Aereo won the first round of the legal battle when a district judge denied an injunction blocking the service. And in a potentially much more important decision, the Second Circuit Court of Appeals, in a 2-1 decision, affirmed the lower court decision. The only bright spot for the broadcasters was the dissent of Judge Denny Chin, who called the approach of individual antennas “a sham” and “a Rube Goldberg-like contrivance over-engineered in an attempt to avoid the reach of the Copyright Act and to take advantage of a perceived loophole in the law.”

It’s not clear what will happen next in the case. The TV stations could request an en banc review by the full Court of Appeals or appeal to the Supreme Court, but both are fairly long shots legally. Aereo CEO Chet Kanojia told The Verge he expects the broadcasters will turn to Congress for legislation blocking Aereo. Local broadcasters still carry considerable heft on Capitol Hill, primarily because members count on local news to provide vital free media during campaigns.

But the loss of the Aereo case is not the only ill omen for broadcasters and networks. Another major blow to the status quo was the success of House of Cards, the slick, high-budget original series on Netflix. While Netflix won’t give out viewer numbers, the company is clearly pleased with the effort and plans to expand it. Original internet programming that can compete straight-up with HBO and Showtime has to make those networks start rethinking their dependence on cable and satellite companies for distribution. For now, they make their content available online only to viewers who are already subscribers. They know full well that a lot of people are viewing pirated versions of their shows–the season premiere of HBO’s Game of Thrones set a BitTorrent volume record–and they know that subscribers are sharing their IDs and passwords with non-subscribers. For now, they are prepared to tolerate the loss (calculated on the assumption that folks getting content illegally but free would be willing to pay for it if it were available a la carte). But this is purely an economic calculation, not a conviction, and it will change when the economics tip.

The condition of the television business shouldn’t be confused with the collapse of the record industry. The music business was in trouble before it was hit with large-scale piracy and the record companies made things worse through denial, resistance, and the idiotic strategy of suing customers. The TV industry knows it has to move into a new era. But the current arrangements are highly profitable and it wants to proceed with all deliberate speed.

In the end, that may not be possible. Dish Network CEO Charlie Ergen sees the end coming. “One of two things will happen,” he said at the D: Dive Into Media conference in February. The rising cost of content will present an incumbent distributor “with a deal they just can’t stomach” and they’ll blow the system up. “But more than likely, they’ll just die because somebody will come in underneath them on price. The likeliest candidates are Amazon or Netflix. Possibly Apple. And Microsoft could do it.”

——–

*–If you don’t know who this is, you should stop whatever you are doing and read Joseph Heller’s Catch-22.

 

Want To Sell Used Digital Content? Not So Fast

Just two weeks after the Supreme Court stopped a publisher’s attempt to impose tight limits on the ability of purchasers to resell books, a federal judge in New York has reminded us of the limits on our resale rights when it comes to digital products. In Kirtsaeng v. John Wiley & Sons, the Supreme Court ruled 6-3 that the “first sale” doctrine applies to goods made outside the U.S. and that a purchaser has the right to resell a book no matter where it was published.

Today’s decision by Judge Richard J. Sullivan of the U.S. District Court in Manhattan appears to end the effort by ReDigi to create a market in used digital music. The judge granted Capitol Records’ motion for summary judgment and while he did not immediately issue an injunction against ReDigi’s operations, that seems likely to follow.

The decision is highly technical and turns on a distinction between what copyright law calls a “phonorecord” and a sound recording. If you own a vinyl or CD recording–a phonorecord–you are free to sell it, but not so with a digital copy. In essence, the judge said that if Congress wants to create a right to resell digital content, it may do so, but absent such action, forget about it: “[T]he Court cannot of its own accord condone the wholesale application of the first sale defense to the digital sphere, particularly when Congress itself has failed to take that step.”

Apple’s Cloud Conundrum

Photo of tornado (© James Thew - Fotolia.com)

 

Apple is really bad at the cloud. And while that is not hurting the company much today, it is going to become a bigger and bigger problem as users rely on more devices and come to expect that all of their data will be available on all of their devices all of the time.

Apple’s cloudy difficulties are becoming apparent through growing unhappiness among developers about the many flaws of Apple’s iCloud synchronization service. Ars Technica has a good survey of developers’ complaints about the challenges iCloud poses. This long Tumblr post by Rich Siegel of Bare Bones Software is a deeper dive into some moderately technical detail.

These developer issues matter both to Apple and to its customers because iCloud is not being integrated into third-party apps, and some apps that have integrated it are abandoning it. This leaves users with limited and often complicated solutions for access to their data. Like most tech writers, I’m an extreme case, working regularly on a large assortment of devices in different ecosystems. I rely on a variety of tools to sync my data, an approach that can be a configuration nightmare. But even someone living entirely within the Mac-iOS ecosystem cannot count on iCloud to provide anything near a complete solution. Just try to move a PDF document from a Mac to an iPad.

The fact is that both Microsoft and Google are far ahead of Apple in cloud services. Microsoft has drawn on its years of experience with SharePoint and Exchange, plus such commercially unsuccessful but technically important projects as Groove and Live Mesh, to build SkyDrive and its associated services. Google has always lived in the cloud and has put its expertise behind Google Drive. Smaller vendors, such as Dropbox and SugarSync, also offer solutions far superior to Apple’s. But all of these companies have taken years to get where they are, in large part because this stuff is really, really hard. None of them offers a complete multiplatform, multidevice, multi-application solution, but they are getting there.

Cloud information management solutions are only going to get more important as users choose among multiple devices to pick the one best suited to the need at hand. For many, these devices will be heterogeneous, perhaps an Android phone, an iPad tablet, and a Windows PC. The winners will be service providers who make a full range of services available to all devices on all platforms. Microsoft and Google come close, working hard to look beyond Windows and Android, respectively. Apple provides only grudging iCloud support to non-Apple devices, another self-imposed handicap.

Apple has the advantage of starting in this new multidevice world with the best-integrated solutions. But it is in serious danger of blowing that lead unless it can drastically improve its cloud offerings.

And one more thing: The cloud imposes new security challenges for service providers. This is a problem no one has solved yet, but Apple has failed particularly miserably. Check out this Verge article for a good rundown on iCloud security failings.


Uniloc v. Rackspace: A Rare Patent Win in East Texas

Patent shingle (USPTO)

The U.S. District Court for the Eastern District of Texas has a well-earned reputation as a place where non-practicing entities, more colorfully known as patent trolls, use their dubious patents to extort money from companies that actually do things and make stuff. So it was deeply gratifying to see infrastructure-as-a-service provider Rackspace Hosting win a summary dismissal of a patent claim brought by Uniloc USA.

Uniloc claimed a patent on a general method for rounding floating-point numbers and argued that the Red Hat Linux used by Rackspace infringed upon it. Red Hat defended Rackspace as part of its program for indemnifying customers against such claims.

The Uniloc patent was silly and clearly should never have been granted; the method claimed is neither novel nor non-obvious–two of the three legs on which all patents rest. But mere silliness often fails to stop patent claims from dragging on for years, at tremendous expense to all concerned. So the quick end to this case is something of a miracle, especially in a district where patent holders have a very strong chance of winning.

The case does not address the general issue of software patentability, nor does it, as some reports have held, determine that mathematical algorithms are not patentable. But Judge Leonard Davis found (PDF of ruling courtesy of Groklaw.net) that the patent in dispute failed to comply with the rules for patentability of algorithms as laid out by the Supreme Court, mainly in Gottschalk v. Benson and Bilski v. Kappos. Judge Davis wrote:

 [A]ccording to the patent itself, the claims’ novelty and improvement over the standard is the rounding of the floating-point number before, rather than after, the arithmetic computation… Claim 1 merely constitutes an improvement on the known method for processing floating-point numbers… Claim 1, then, is merely an improvement on a mathematical formula. Even when tied to computing, since floating-point numbers are a computerized numeric format, the conversion of floating-point numbers has applications across fields as diverse as science, math, communications, security, graphics, and games. Thus, a patent on Claim 1 would cover vast end uses, impeding the onward march of science.

A few more Judge Davises and the patent mess could look a whole lot less messy.


The FCC: After Four Frustrating Years, Tough Work Ahead

Work Ahead (© iQoncept - Fotolia.com)


Julius Genachowski was one of President Obama’s original tech warriors, so hopes were high when he became chairman of the Federal Communications Commission in 2009. He leaves the post with some modest accomplishments, some bigger disappointments, and a general sense of stasis that has replaced the excitement of 2008.

This situation is not Genachowski’s fault, and there is not much chance that his successor, no matter who it is, will be able to speed the process. Inertia is a powerful force in Washington, and few institutions are harder to get moving than the FCC. Why else would the commission still be arguing over rules prohibiting cross-ownership of newspapers and television stations–an issue likely to come to a boil again if Rupert Murdoch goes ahead with a bid for The Los Angeles Times–even as both sets of institutions fade into irrelevance?

The commission has two huge problems. First, the FCC’s actions are governed by a terrible and hopelessly obsolete law, the Telecommunications Act of 1996. Any time the commission seeks to stretch its authority, say, by trying to regulate network neutrality, it can count on being sued and probably slapped down by the courts.

Second, major industry constituencies—big telecommunications companies and wireless carriers, broadcasters, cable companies—see much to lose and little to gain from change, and the opposition of any one constituency can cause things to drag on interminably.

A good example is freeing unused or underused television broadcast spectrum for wireless data use. The fight stems from the transition to digital TV mandated in the mid-1990s and completed in 2009. TV stations ended up with more spectrum than they had good use for. The result was a plan for “incentive auctions,” in which stations would receive part of the proceeds from the sale of spectrum (which they didn’t pay for in the first place). The FCC plan was complex, and Congress, at the behest of broadcasters, made it even more baroque in 2011 legislation authorizing the sales. Broadcasters continue to throw up roadblocks, and it now appears that the auction process, originally expected to start next year, is unlikely to get going until 2015. The TV fight is also holding up a plan to make some of the unused TV spectrum, the so-called TV whitespaces, available for unlicensed wireless data. Not surprisingly, the broadcasters oppose that plan too.

Unfortunately, there’s not a lot an FCC chairman can do to speed the agency’s glacial pace. Federal law creates endless possibilities for delay. Any time the commission tries to push its boundaries, it will be sued and objectors have generally found a friendly ear at the conservative D.C. Circuit Court of Appeals, which hears all challenges to FCC actions.[pullquote]Unfortunately, there’s not a lot an FCC chairman can do to speed the agency’s glacial pace. Federal law creates endless possibilities for delay.[/pullquote]

This is troubling, because the FCC has some major items on its agenda. The most urgent is finding more spectrum for wireless data. It has become clear that the traditional approach of transferring spectrum from incumbents to new users has limited potential to increase bandwidth, at least in any reasonable amount of time. What’s needed is sharing of spectrum—especially between government agencies and private users–and new technologies to use the spectrum we have more efficiently. Steps to do both, sometimes simultaneously–as in the sharing of the 3.5 gigahertz band between military radars and small-cell wireless data–are underway. But incumbent holders of spectrum don’t give it up easily, even for sharing, while established service providers will maneuver to prevent competitors from gaining any perceived advantage. Look for a long slog.

Another major issue is mundane, even boring, but very important. The nearly 140-year-old public switched telephone network has nearly reached the end of its useful life; internet technology is a far more efficient way to move voice traffic than traditional circuit switching. The prospect of this happening has been looming for some years, but AT&T has forced the issue with a formal petition to transition its land-line services to an IP network. A lot of money is at stake–there is a huge investment tied up in the existing network. The FCC has to make sure that the transition balances the interests of customers and shareholders of the carriers–this mostly affects AT&T and Verizon Communications–and guarantees a reliable and affordable landline network for the future. (Much as techies disparage it, the landline network is still tremendously important and, of course, the same IP network that will carry voice calls forms the backbone of the internet.)

That’s a big agenda for change coming up against a system strongly biased to inertia, complicated by a Congress whose passion for meddling is exceeded only by its lack of understanding of the issues.

When Calculators Had Gears

Photo of Friden D10
“Naked” Friden D10 (Photo: Jake Wildstrom)

My son, a mathematician at the University of Louisville, has a new hobby: restoring mechanical calculators. These machines were obsolete long before he was born, but a visit this past weekend brought back a wave of nostalgia. I was a student just before the advent of electronic desktop calculators (the ubiquitous personal calculators came along a few years later) and I spent a substantial part of my life doing statistical analysis on a mechanical calculator and a spreadsheet, which in those days was a ledger-sized sheet of ruled paper.


Mechanical calculators were conceptually simple and mechanically extremely complex. They were basically glorified adding machines that did multiplication as repeated addition and division as repeated subtraction. On the early manual machines, to multiply, say, 135 by 25, you would enter the number to be multiplied from the keyboard or by rotating pinwheels. You would then turn the crank 5 times, move the carriage one place to the right, and turn the crank twice. The answer would appear in the accumulator register. Division was a bit more complicated, but it was basically the same process in reverse and could be carried out to as many decimal places as you had digits in the register. Later models replaced the crank with an electric motor and moved the carriage automatically. The last generation of Friden mechanical calculators had a square root function built in, an enormous benefit in stat work, which requires computing lots of square roots.
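For readers who think in code, the crank-and-carriage procedure is just shift-and-add multiplication. Here is a rough sketch in Python; the function name and structure are mine, purely to illustrate the mechanics:

```python
def crank_multiply(multiplicand, multiplier):
    """Simulate a pinwheel calculator: each crank turn adds the
    multiplicand, shifted by the carriage position, to the accumulator."""
    accumulator = 0
    carriage = 0  # carriage position = which decimal place is being worked
    for digit_char in reversed(str(multiplier)):  # low-order digit first
        digit = int(digit_char)
        for _turn in range(digit):  # one crank turn per unit of this digit
            accumulator += multiplicand * 10 ** carriage
        carriage += 1  # shift the carriage one place for the next digit
    return accumulator

print(crank_multiply(135, 25))  # 5 turns, shift, 2 turns -> 3375
```

Division ran the same loop in reverse: repeatedly subtract at one carriage position until the remainder would go negative, record the count, and shift.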

Disassembled Odhner 227 (Photo: Stephen Wildstrom)
Pinwheels, setting rings, and shaft of an Odhner 227

These calculators made a marvelous noise–motors whirring, gears meshing, bells ringing. The stat labs at the University of Michigan where I worked were big rooms filled with dozens of machines, all going at once, making a glorious racket. The space at the Rackham graduate school also included a plugboard-programmed IBM accounting machine (ancient even then), a counter-sorter, and a printer, and when everything was running at once, it sounded more like a stamping plant than a lab.

Curta calculator (Image: vcalc.net)

Mechanical calculators were painfully slow by electronic standards and needed lots of maintenance (the levers in the photo of the Friden, top, represent the complex mechanical logic of the machine). It was generally a blessing when they were replaced by electronic calculators and, ultimately, by software such as Mathematica, Maple, Matlab, and Sage.

But those wonderful old machines created an intimacy between analyst and data that doesn’t exist anymore. So I am happy that some in the younger generation are willing to do the work of restoring these beauties.

(Jake recently acquired a Curta, the Ferrari of crank-operated calculators. It looks a bit like a slightly oversized peppermill, if Swiss watchmakers made peppermills. Plus, there’s the amazing story of its inventor, Curt Herzstark, who designed the machine while a prisoner in the Buchenwald concentration camp. And the Curta is proudly marked “Made in Liechtenstein,” possibly the only industrial export of that postage stamp-sized country, which, of course, issued a postage stamp to honor the device.)


Google Keep: Bleeding from Self-inflicted Wounds

Google keep icon


I don’t know how much Google is saving by killing off Reader, but it is rapidly becoming clear that it wasn’t worth it.

Most people don’t know what an RSS reader is, and Reader never became a popular offering on the scale of, say, Gmail. But it was heavily used by techies, especially tech writers who counted on it to provide easy access to a broad variety of industry information. I definitely count myself among them. So the decision to kill Reader after years of neglect caused widespread dismay among industry influencers.

The cost of this became clear when, a week after the Reader announcement, Google rolled out Keep, a competitor to Evernote, Microsoft OneNote, and other note-taking and syncing apps. GigaOm’s Om Malik led the charge with a post headlined “Sorry Google; you can Keep it to yourself.” His argument: “It might actually be good, or even better than Evernote. But I still won’t use Keep. You know why? Google Reader.” IDG’s Jason Snell chimed in with a tweet: “Can’t wait for Google to cancel Google Keep in four years after it’s decimated Evernote’s market.”

Google, of course, has the right to kill off any service it wants, especially when it provides the service without charge and has no contractual relationship with users. But Google wants to be something new in the world: a company that can be a trusted partner providing services at little or no cost. Gaining that trust requires confidence on the part of customers that the services will still be there after they have come to depend on them. The termination of Reader did grave damage to that trust. The price was a rocky launch for Keep, even though the product itself generally got good reviews.

Like Malik and Snell, I’m sticking with Evernote, which is offered in both free and premium paid versions. The company is doing well and note-taking and related services are its only business, so I have confidence it’s not going to abandon the market. But Google is going to have to work hard to convince me it can be trusted.


Apple: Time To Come Out Swinging


Portrait of Steve Jobs (Matt Yohe/Wikimedia Commons)

The dominant picture of Apple in the media today is of a company on the ropes: out of ideas, falling behind the competition, stock price battered, doomed. It’s reached the point where the CEO of BlackBerry, of all people, is criticizing the iPhone as stale.

The reality is a company enjoying record sales and earnings, dominant in its most important markets, with products continuing to be the envy of customers and competitors alike.

How did a company that spent so many years successfully managing and polishing its image reach this point? And how does it change a growing perception of failure in the face of actual success? These questions matter because, over time, perceptions have a way of infiltrating reality, making the negativity surrounding Apple a long-term threat to the company.

Apple has succeeded by figuring out what consumers want before they know they want it and by making superior products. A little bit of magic has surrounded Apple products for the past 15 years or so, and that helped make iPods and MacBooks and iPhones objects of desire. Today, every major Apple product is the top seller in its category, often by a wide margin. But without the perception of magic, that becomes harder to sustain.

This is where Apple misses Steve Jobs. Today’s Apple is run by a highly competent crew of executives. But Jobs was the magician and no one can replace him. A Tim Cook keynote can be interesting and informative, but it will never be the sort of cosmic event that Jobs presided over two or three times a year. This loss is irreplaceable.

But something else has changed. For the first time in many years, and certainly for the first time since the incredible iPhone run began in 2007, Apple has a competitor that truly matters. When Apple entered the phone market in 2007, it gained nearly all of the mindshare before shipping a product. Once the iPhone was a reality, and especially after the iPhone 3G and the App Store debuted a year later, Apple brushed aside the incumbent smartphone makers without their putting up much of a fight.

Samsung is different. The company is nowhere close to Apple’s seamless integration from components to software, and the dog’s breakfast that was the Galaxy S 4 launch shows it still has a ways to go in its presentation skills. But it is making first-rate products that are more and more Samsung and less and less Android, and backing them with a lavishly funded and effective marketing campaign. If Samsung ever learns how to overcome Google’s tablet cluelessness, it could be a formidable competitor to the iPad too.[pullquote]If Samsung ever learns how to overcome Google’s tablet cluelessness, it could be a formidable competitor to the iPad too.[/pullquote]

With Samsung on the prowl and Apple, fairly or unfairly, getting beaten up daily by both the tech and financial media, the company can no longer afford its long-time strategy of floating serenely above the noise of the tech and financial worlds. Apple never responded to rumors or much of anything else. Routine inquiries to PR staff received polite no-comments or, as often as not, no response at all. Apple was the honey badger of tech companies. That strategy served it well for more than a decade, but it clearly is not working very well anymore.

One thing Apple should strongly consider is giving the world some sense of its direction. Rampant speculation about Apple products while Cupertino sat in stony silence used to work in Apple’s favor, but now it is being interpreted as a sign that Apple has nothing to say. Critics will say Jobs would never have considered giving even hints about products in development, but Jobs was against many things–including Apple making a phone–until he found a good reason to be for them. “What would Steve do?” is not a good guide for Apple today. The company should let the world know that it is still on top of its game.

Apple also needs to throw a bone to the financial markets. With a cash hoard of more than $150 billion and growing, Apple is beginning to look a bit like a bond fund with a consumer electronics company attached. It successfully fought off an effort by investor David Einhorn to force it to give a good chunk of the money back to shareholders, but it has given no indication of what it plans to do instead, and there’s a good argument to be made that all that cash sitting around is doing investors no good whatever. It could, as Brian S. Hall suggested here the other day, set up an endowment for future product development. More realistically, it could pay a much larger dividend or use the cash to buy back its own stock. It could find something worth acquiring (maybe it’s saving up to buy Samsung, but it needs about another $100 billion). But one way or another, it owes investors some explanation of its intentions if it hopes to win back their confidence.

There are some signs that Apple understands it is in a new environment. It has added a new section to its web site that goes after the competition, declaring: “There’s iPhone. And then there’s everything else.” Marketing chief Phil Schiller took on Samsung in an interview on the eve of the Galaxy S 4 launch (though he diluted its impact with an incorrect claim that the phone used a year-old version of Android). Such steps are a good start, but Apple will have to do more. It’s a different, and much more competitive, world out there.

Google and Amazon: Doing It All Wrong

Google Glasses (Google)


By the conventional standards of business, it would be hard to find two companies with a greater tendency to do things wrong than Google and Amazon. Yet both are regarded as outstanding success stories. What is going on here, and what does it tell us about how corporations ought to be run?

Each company violates a fundamental rule of business. In the case of Google, it’s a failure to diversify its sources of revenue and profits while at the same time displaying a woeful lack of discipline in how it enters new businesses. For Amazon it’s a persistent, almost stubborn refusal to maximize profits.

A glimpse at Google’s income statement reveals just how narrow the company’s success is. Google took in $50.2 billion in the year ended Dec. 31. Of that revenue, $31 billion came from advertising on Google web sites and another $12.5 billion from ads on Google Network affiliate sites. This means that Google’s original revenue-producing activities, AdWords and AdSense, accounted for 87% of its gross. Motorola brought in another $4.1 billion. Everything else–the Google Play Android store, sales of Google Nexus branded Android devices, paid Google Apps, whatever else the company does to produce revenue–generated a mere $2.4 billion. Considering that Motorola suffered a hefty net loss from continuing operations, it’s safe to say that search-based advertising was responsible for well over 100% of Google’s profits.
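A quick check of the arithmetic behind that 87% figure, using only the numbers quoted above:

```python
# Google FY2012 figures quoted above, in $ billions
total_revenue = 50.2
google_sites_ads = 31.0     # advertising on Google's own sites
network_sites_ads = 12.5    # ads on Google Network affiliate sites

# Share of gross revenue from search-based advertising
ad_share = (google_sites_ads + network_sites_ads) / total_revenue
print(f"{ad_share:.0%}")  # -> 87%
```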

The unprofitability of nearly everything else Google has tried does not seem to discourage the company. Under CEO Larry Page, Google has purged a number of its least successful products. But it continues to add efforts that have little hope of generating profit in the near term, or perhaps ever. It is spending a good bit of money developing self-driving cars, though the technology seems years away from commercialization. It’s far from clear that many people away from such hotbeds of geekdom as the Googleplex or the MIT campus will ever be willing to wear, let alone pay for, Google Glasses (above). Who but a Google engineer is going to put down $1,299 for a Chromebook Pixel, a laptop that cannot run any programs other than a Chrome browser? And why is it messing around with same-day-delivery retail, a business that seems far outside its core competency–and a logistical and business challenge that no one has cracked?[pullquote]Classical economic theory says corporations try to maximize profits. Amazon and Google prove there are exceptions.[/pullquote]

Of course, the ad business is so profitable that Google doesn’t have to worry in the near term. Its net margin was 21%, down from recent years but still very healthy. And investors seem happy. Its stock is trading just a bit below its 52-week high of 844, and the price is 26 times 12-month trailing earnings, a sign that investors believe growth will remain healthy into the future.

So while Google’s attention deficit approach to new projects may defy business school wisdom, it isn’t hurting the company. And it is certainly benefiting consumers. We get goodies like Google Maps and Gmail for free, while Google funds the sort of research–self-driving cars–that once was the province of the government and that could have a big payoff for society, if not for Google.

If Google’s problem is a flurry of innovation that has produced little revenue and no profit, Amazon is a tale of profitless growth. Classical economic theory says the purpose of a corporation is to maximize profits, and while the research of scholars like A.A. Berle and Herbert Simon long ago dismissed taking that notion too literally, profit motivation is still supposed to have something to do with business decisions.

Not, it would seem, at Amazon. The company’s revenues in the fourth quarter of 2012 grew 21%, and that was the worst performance in three and a half years. But profits are another story. In its best year, 2010, it netted just over 4% of sales while it actually recorded a loss last year. Amazon has relentlessly pursued growth with little regard to profitability. It has disrupted one market after another by undercutting the prices and business models of competitors.

And its investors love it. Like Google, it is trading near a 52-week high. Its trailing price-to-earnings ratio can’t be calculated because of the loss, but Amazon is trading at a staggering 76 times expected 2013 earnings.

And customers love it too. Unless you are in a retail business that Amazon has demolished, you are most likely a beneficiary of Amazon’s predatory nature. Amazon has not only saved me money, it has saved me countless hours I would have wasted shopping. (Once you get Amazon Prime, the tendency to order stuff online rather than pick it up at the store becomes overwhelming. It’s a rare day we don’t get at least one Amazon package.) And while Amazon’s impact on retailing has been the most obvious, Amazon Web Services has drastically lowered the cost of starting any sort of online business.

So let’s hear it for Amazon and Google and their impossible business models. Eventually, Google will have to find a moneymaking business to supplement search ads, whose growth is slowing. And Amazon investors’ patience with tiny or nonexistent profits won’t last forever. But for the rest of us, let’s enjoy it while we can.


Heading for a Hollow Victory on Phone Unlocking

Photo of chained phone

The news that the ranking Republican and Democratic members of the House Judiciary Committee plan to introduce legislation to legalize the unlocking of mobile phones by consumers greatly increases the chances that Congress will reverse the refusal of the Library of Congress’ Copyright Office to approve the practice. But despite the excitement among tech activists, the victory is likely to be largely meaningless in practice. The only real beneficiaries will be owners of very new phones who are looking to use them on foreign networks without paying exorbitant roaming charges.

The problem is that while altering the software that ties a phone to a specific network removes artificial barriers to interoperability, formidable technical barriers remain in place. The convergence of fourth-generation networking technology on a standard called LTE was supposed to ease or eliminate the problem, but if anything, it has made matters worse. U.S. carriers are all implementing LTE differently, specifically on different frequencies, and phone makers have not been able (or perhaps willing) to incorporate enough radios to make the phones interoperate. For example, Apple sells different LTE iPads for Verizon and AT&T, and even though they are sold unlocked, they cannot be used on each other’s networks.

Verizon and AT&T are both rolling out LTE on spectrum in the 700 MHz band formerly used for analog television. But they are using different portions of the band. And the Federal Communications Commission has not required AT&T and Verizon to make their networks or their phones compatible, although it would take no great technical effort to do so. Sprint is deploying LTE at the same 1900 MHz frequency it uses for existing 3G and voice services. T-Mobile will be offering service at 1700 MHz. This means that no two carriers are offering compatible services.

If you are willing to forgo LTE, you can get some partial compatibility. AT&T and T-Mobile phones will work on each other’s networks, though you can’t count on the fastest data performance. Those phones can change networks simply by swapping SIM cards. You should be able to get an unlocked Verizon phone registered on the Sprint network, and vice versa, but performance of a Sprint phone on Verizon may suffer if the phone does not support Verizon’s 800 MHz service. And giving up LTE means giving up a lot.

In other words, the hubbub over phone unlocking has been disproportionate to what is likely to be achieved. Consumers may win a purely symbolic victory, but in practical terms, their phones will be as locked as ever. The time to have made a fuss was back in 2007, when the FCC was setting the rules for the 700 MHz auction. Everyone knew going in that AT&T and Verizon were going to emerge as the winners, and requiring interoperability of their LTE services would have made a huge difference. But that ship has now sailed.

The carriers will make a big show of opposing any unlocking legislation. I think they do this partly because they simply do not like being told what to do and partly to keep their lobbyists in practice. In fact, the bill that is likely to emerge from the Judiciary leadership will require them to do things they are mostly or entirely already doing. It will be a feel-good victory for advocates, but it will change little or nothing.


Can We Call Windows RT a Flop Yet?

samsung_ativ_tab

Windows RT was a bold move by Microsoft to make its mark in the world of ARM-powered tablets. But five months after launch, it is looking more and more like an expensive flop.

The German site Heise Online reports (h/t to The Verge) that Samsung has cancelled its plans to roll out the RT-powered ATIV Tab in Germany and elsewhere in Europe, due to weak demand. This is the latest blow in what has been a steady pullback of OEMs from the RT market. In addition to Microsoft’s own Surface RT, there appear to be just three RT tablets available in the U.S.: the Asus VivoTab RT, the Dell XPS 10, and the Lenovo Yoga 11. Hewlett-Packard has announced that it is skipping the RT market, and other OEMs seem to have no interest in expanding their product lines.

Tablets running full Windows 8 seem to be doing considerably better, with Microsoft still having difficulty keeping the Surface Pro in stock. The big question is whether these tablets, and the relatively slow sales of traditional Windows 8 PCs, give developers enough incentive to create apps specifically for the user interface formerly known as Metro, or whether developers will prefer to build more touch-friendly versions of apps using the traditional Windows UI.

The End of Purchased Software (Updated)

For Rent and For Sale (© Kristina Afanasyeva - Fotolia.com)

Buying software has always been an illusion. When you bought a program in a box, it seemed like you were purchasing something like a book or a music CD. But if you looked closely at the terms and conditions you had to agree to before installing the program, you realized that what you really had was a conditional license to use the software in ways the seller deemed proper.

For most people, this was a distinction without much of a difference. You could do pretty much what you wanted with the software, even sell it used (though you might run into trouble with a package that used an activation key). That may be why hardly anyone bothered to read the terms and conditions.

But now the illusion of software ownership is fast disappearing. The big change is Microsoft moving to a subscription model for Office 2013. Yes, you can still buy the software (or, more properly, buy a perpetual license to use it). Microsoft Office 2013 Home and Business (Word, Excel, PowerPoint, OneNote, Outlook) costs $220. The Professional version costs $399 and adds Publisher and Access. Office Home and Student, at $140, subtracts Outlook and, at least technically, may not be used for commercial purposes.

These prices offer a lot less value than earlier editions. You used to be able to activate each copy of Office on two computers. These were supposed to be a desktop and a laptop, but the requirement was not enforced in practice. And if you replaced a computer, you could uninstall Office from the old system and activate it on the new one.

No more. All editions of Office 2013 are licensed for a single computer and are tied to that system forever. (Microsoft has the technical means to enforce that, though it remains to be seen how rigorously they will do so.) SEE UPDATE BELOW.

Microsoft is making it very clear through these unattractive terms that it doesn’t want you to buy software anymore. It wants home users to spend $99 for an annual Office 365 Home Premium subscription that offers all of the core Office applications on up to five computers (Windows or Mac). The package also includes 20 gigabytes of SkyDrive storage and 60 minutes a month of Skype calling. As with the old Home and Student version, commercial use is theoretically prohibited. Business versions come in a complex variety of plans depending on the number of seats and the applications and back-end services covered, but the basic Small Business Premium offering, which includes hosted Exchange email and Lync conferencing, costs $15 per month per user for up to 25 users.[pullquote]Microsoft is making it clear through these unattractive terms that it doesn’t want you to buy software anymore.[/pullquote]

Microsoft is hardly the first software company to go down this path. Last year, Adobe rolled out its Creative Cloud, a subscription service for its Creative Suite applications, including Photoshop, Dreamweaver, Audition, Premiere, and other creative tools. Access to the full CS6 suite on up to two computers simultaneously costs $50 a month on an annual contract; a single application is $20 a month.

Get used to this. Software vendors, Microsoft in particular, are recasting their business models to become service providers. Microsoft no longer wants just to sell you Office; it wants you to use Office as part of its rapidly growing cloud infrastructure. If you are a small- to medium-sized business, someone who in the past might have been a Windows Small Business Server customer, it wants to sell you hosted SharePoint for collaboration; hosted Exchange for mail, calendar, and contacts; Lync for conferencing; and anything else it can dream up. And it wants to stay a step or two ahead of Google Apps for Business, a cheaper, but in many ways less capable, offering.

None of this is necessarily a bad thing for businesses or consumers. If you had been in the habit of upgrading Office every three years or so, as Microsoft brought out new versions, an Office 365 subscription could end up being less expensive than the old purchase model, especially if you want to install the software on more than two computers or if you have a mixture of Windows PCs and Macs.
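A back-of-the-envelope comparison makes the point. It uses the prices quoted above; the three-year upgrade cycle and one-license-per-machine model are assumptions for illustration:

```python
PERPETUAL_PRICE = 220.0       # Office 2013 Home and Business, one license per PC
SUBSCRIPTION_PER_YEAR = 99.0  # Office 365 Home Premium, covers up to 5 machines
YEARS = 3                     # assumed upgrade cycle

def cheaper_option(machines: int) -> str:
    """Compare buying per-machine perpetual licenses against
    one household subscription over the upgrade cycle."""
    buy = PERPETUAL_PRICE * machines
    rent = SUBSCRIPTION_PER_YEAR * YEARS
    return "buy" if buy < rent else "subscribe"

for n in (1, 2, 5):
    print(n, cheaper_option(n))  # buying wins only for a single machine
```

On these assumptions, a household with two or more machines comes out ahead with the subscription ($297 over three years versus $220 per machine).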

My main concern is with making sure the system works smoothly. When you subscribe to Office 365, the programs reside on your computer and there are no issues working offline. However, since Office is only licensed for as long as your subscription is current, it has to check your activation status with a server from time to time, and this process can go awry. Adobe, which has been checking activation status of its creative apps for years, has had occasional problems with activation servers that have led to software becoming temporarily unusable.

Another issue is what happens if the company from which you rent your software goes out of business or stops supporting a product. Adobe recently did provide for permanent activation of some very old versions of Creative Suite for which it wanted to shut down the activation servers. It’s a far-fetched worry that Microsoft would ever leave Office users high and dry, but it is worth thinking about, since you could end up with unusable software and files in an unsupported format.

UPDATE: The policy of tying purchased copies of Office 2013 to a single computer forever met immediate resistance from customers and didn’t last long. In a March 6 post to the official Office Blog, Microsoft announced that purchasers would be allowed to transfer the software to a different computer and that the original purchaser of a copy of Office would be able to sell it, provided the purchaser agreed to the original terms and conditions. However, activation is still limited to one computer, not the two allowed for previous Office versions. (Tip of the hat to Ed Bott of ZDNet for flagging the change.)

How Not To Win in Washington

Photo of U.S. Capitol (© imel-fotolia.com)   In November, a young staffer named Derek Khanna won brief notoriety for writing a report for the House Republican Study Committee urging a change in copyright policy to favor consumers rather than rights holders. The effort got him fired, and earned him a fellowship at Yale Law School’s Information Society Project. Now Khanna is back with a manifesto, published by BoingBoing, entitled “Cellphone unlocking is the first step toward post-SOPA copyright reform.” Unfortunately, it betrays about as many misconceptions about how Washington works as an episode of “House of Cards.”

Phone unlocking should be allowed. But the issue is a very poor choice for a showdown on the future of copyright. First, hardly anyone understands what unlocking actually does. In the U.S., it is mainly useful for using an AT&T phone on the T-Mobile network (if the phone itself operates on all the frequencies used by the two semi-compatible carriers), or, more practically, for allowing the phone to work with a non-U.S. SIM card when outside the country. Early termination fees for subsidized phones bought on contract (unsubsidized phones should be sold unlocked and usually are) mean there’s little advantage to unlocking a phone while it is still in contract, and most carriers will unlock the phone for you once the contract is over. These realities make it hard to build up a lot of outrage over the Copyright Office’s failure to extend an exemption that allowed phone unlocking, and in fact, were a major consideration in the Library of Congress’ decision.

Understand the fight. Second, this proposed fight is based on what are, at best, half-true premises. The fact is, there is no law criminalizing phone unlocking–not as such, anyway. Section 1201 of the Digital Millennium Copyright Act makes it illegal to “circumvent a technological measure that effectively controls access to a work protected under this title,” and, for complicated legal reasons, unlocking a mobile phone constitutes circumvention. There is a provision for criminal penalties, but only for willful violation for commercial gain, and, notwithstanding those FBI warnings on DVDs, criminal prosecutions for copyright violation are very rare and nearly nonexistent for consumers.

There are more problems with Khanna’s manifesto, summarized by him in this open letter to Congress:

Dear Congress, Please remove these items from your DMCA contraband list (both for developing the technology, selling and using the technology):

• Technology for unlocking and jail-breaking (currently allowed for iPhone, not allowed for iPad).

• Adaptability technology for the blind to have e-books aloud (currently subject to triennial review by the Librarian of Congress – it’s legal to use the technology but illegal to develop or sell).

• Technology to back-up our own DVD’s and Blue-Ray discs for personal use (current law makes this illegal and injunctions have even been used to shut down websites discussing this technology).

Signed,  The people

As noted above, there is no DMCA contraband list. There is a blanket ban on circumvention, with the Librarian of Congress empowered to grant specific exemptions. All 4G iPads are sold unlocked, so they don’t need an exemption. Jail-breaking (modifying an iOS device to let it accept non-App Store software) is a separate issue from unlocking, and it is not even clear that it is prohibited by DMCA. Apple has warned that jail-breaking will void the warranty on an iPhone or iPad, but has never claimed a copyright violation. The exception for adaptive technology for the blind is the least controversial of all the DMCA waivers and does not seem to be under any threat. It would be good if Congress got around to including the exemption in law, but the practical effect would not be significant.

Looking backward. The suggestion regarding DVDs and Blu-ray (not Blue-Ray) is particularly odd and backward-looking, given that distribution through physical media is slowly going away. The fight over the ripping of DVDs is one of the oldest disputes under DMCA. But the fact that courts found DVD ripping, even for personal use, violated the law does not seem to have stopped anyone from doing so for the past dozen years. Software that extracts video files from protected DVDs is readily available, and no one seems to be making any serious effort to stop it. Copying Blu-rays is somewhat more difficult, but mainly for technical rather than legal reasons.

I wrote, at the time an internet-based revolt killed the Stop Online Piracy Act, that this fight was easy. The legislative process is designed for failure, especially in the current partisan environment, and this makes stopping legislation simple. The real challenge is to get something passed, and if we are going to make the effort, we might as well look to the needs of the future rather than the fights of the past.

The issues must be understandable and capable of drawing broad support. One thing worth fighting for is guaranteeing a right for purchasers to resell digital media. U.S. copyright law follows the first-sale doctrine; once you have bought a copyright-protected product, you can do anything you want with it except reproduce it. But copyright owners have used technological means to prevent resale, and as more and more media goes digital, first sale is effectively disappearing. We need our first-sale rights restored (with reasonable protections for copyright owners to make sure that copies are being sold and not multiplied).

Another area of concern for the future is protecting the potential of 3D printing. This is a bit tricky, because the current legal environment is actually favorable to 3D printing–there generally is no copyright protection for physical objects other than works of art, but some assault on the technology seems likely. You have to be very careful about copyright legislation because of the real possibility you will end up worse off than before.

You also have to frame the fight correctly. Khanna somewhat oddly sees the phone unlocking issue as an infringement of the property rights of phone owners. In fact, this, like so many tough fights, is a case of rights in conflict: the rights of the intellectual property creator vs. the rights of phone owners. But framing this as a property rights issue is a formula for getting lost in the weeds of an arcane legal dispute.

It’s also worth consulting the history of DMCA adoption. The tech industry, represented mainly by Microsoft, generally favored the law and, in particular, the anti-circumvention provisions because of the fear of software piracy. This was consensus, not “crony capitalism.” There are fights worth having, and Khanna is correct that the best strategy is to build a powerful coalition through small victories, and the time to get started is now.

 

webOS: Phil McKinney Hopes Third Time Could Be the Charm

Photo of Phil McKinney (HP)

webOS is the best mobile operating system that never really had a chance. It was developed by a struggling Palm but sank under the weight of poor execution and inadequate capital. Hewlett-Packard acquired it with great hopes and even greater ambitions, but top management intrigue undermined and ultimately killed the project. With the code released to open source, it languished for the past year and a half, unused except for the occasional hobbyist experiment, until its surprise purchase by LG, which plans to use it in televisions and other devices.

Is there really hope that the Korean electronics maker can revive an operating system that maintains an intense, if small, fan base? One expert who thinks there might be is Phil McKinney, president and CEO of CableLabs, the cable industry’s research arm. McKinney was CTO of HP’s Personal Systems Group at the time of the Palm acquisition and was one of the architects of the aborted plan to make webOS a full-fledged rival to Google’s Android and Apple’s iOS.

“The attraction of webOS is that it is HTML5 to the core,” McKinney said in an interview. “It’s a great platform for all kinds of systems. Early on, there was interest from TV manufacturers and others to find a way to use it.” Despite the software sitting on the shelf for many months, McKinney says “a lot of the features are still advanced. It has true multitasking, a great multi-application user interface, and a notification system that was years ahead of Apple and Android.”

The big question in McKinney’s mind is whether LG can assemble the right team of engineers, make the required investments, and give the project the breathing room it needs to deliver results. “Will LG put the resources behind it? Can they get the expertise?” he asks. “If they think they are buying it all tied up with a bow and ready to go, they aren’t right. Innovation always takes longer than you anticipate.”

McKinney, who at HP had dreams of webOS running seamlessly on everything from cellphones to PC desktops, hopes LG is in it for the long haul. “Every time I tweet about webOS I get flooded with emails. There’s still a lot of passion out there. I’m still a fan. It’s a great platform.”

 

Why the iPad Needs a File System

Finder screenshot

Over at Monday Note, the always perceptive Jean-Louis Gassée writes about how the lack of a true user-accessible file system is holding back more intensive use of the iPad as a creation tool. Jean-Louis has been on this kick for a while, and I couldn’t agree with him more.

My creative process, and I expect many of your workflows too, consists of creating documents by combining original text with bits and pieces from a variety of sources, including web pages, image files, Word documents, PDF files, email messages, tweets, and who knows what else. On a Mac or a Windows PC, this is very easy to do by opening multiple windows and cutting and pasting between them. The lack of multiple windows can’t easily be overcome on a tablet; at best, you could manage two small windows on the limited display real estate.

But Apple makes this much harder than it has to be by imposing tight restrictions on communications between iOS apps and by denying users access to any sort of listing of files available on the system. There are lots of ways around this, using third-party apps such as SugarSync and Documents to Go, but like all workarounds, they are clumsy, halfway solutions. I love traveling without a laptop, but even writing a simple blog post on an iPad is a lot more challenging than it ought to be.

I like Gassée’s suggestion of a two-tier user interface for the iPad, with the advanced version exposing features such as the file system while the standard mode keeps them hidden. I don’t think Apple would ever offer this–it violates the canon of iOS simplicity–but it sure would be a big help to some of us.

 

Chinese Hacking: What Is To Be Done?

Photo of People's Liberation Army parade (Wikimedia Commons)

We’ve known for a long time that hackers in China have been responsible for massive intrusions into both government and commercial networks in the U.S. Most recently, the New York Times, Washington Post, and Wall Street Journal all reported sustained Chinese spying on their networks.

Now, working with Mandiant, the firm it hired to investigate the attacks, the Times has collected and published conclusive evidence of official Chinese involvement in the hacking (full Mandiant report here). Specifically, the efforts all seem to be coming from a Shanghai office building that houses People’s Liberation Army Unit 61398.

The question now is what are we going to do about it? And what is the role of the tech industry in dealing with the problem?

Spying is a reality. Countries, even friendly countries, spy on each other. But there are limits to acceptable espionage behavior, and the Chinese have gone well beyond them. The most problematic behavior described in the Mandiant report is the PLA’s role in the theft of massive amounts of intellectual property from U.S. companies, presumably for the benefit of Chinese competitors.

The U.S. government can and should step up efforts to improve Chinese behavior. For a long time, I believed that as Chinese industry developed and Chinese companies began developing valuable IP of their own, the nation would come to have an interest in an international rule of law. This same belief shaped U.S. policy under both Democratic and Republican administrations and was a major reason that the U.S. supported Chinese membership in the World Trade Organization more than a decade ago. It hasn’t worked.

The revelation of the PLA’s role—hardly unexpected—in the attacks may be grounds for a strong demarche from Washington to Beijing, but it is not going to change the fundamental economic and political relationship between the two countries. Our economies are too deeply entangled and our security interests too enmeshed for open hostility to be desirable, or even possible.

So beyond hoping for China to clean up its act, what should we do? The best answer lies in a much better defense, but that is going to require some significant changes in attitudes. Much of U.S. business still is not very serious about information security. Witness the endless vulnerabilities to attacks far less determined and sophisticated than those mounted by government entities. Business, including the tech industry, has mightily resisted any efforts to impose security regulations, but it has failed badly to act on its own. If it takes regulation to get the job done, so be it.

But the government also needs additional weapons in this fight. The reintroduction of the Cyber Intelligence Sharing and Protection Act provides the platform for a healthy debate on the subject. Last year, unfortunately, CISPA became hopelessly conflated with the Stop Online Piracy Act, and the notion has now pervaded much of the tech world that because SOPA was an awful idea, all measures designed to protect IP are bad. There are problems with CISPA, particularly with respect to privacy protections for individuals, but the charge, echoed in many quarters of the tech world, that it is “son of SOPA” (this, for example, from BoingBoing) is misguided. Instead of mounting knee-jerk opposition, the tech community should work to make it a better bill that will help the government deal with real threats.

The government also needs to refocus its priorities. There has been far too much talk of “cyberwar” and far too little of “cybercrime.” The U.S. does need to act to protect vital infrastructure from electronic attack, but the threat as of now is purely notional. It is hard to imagine a state—even an Iran or a North Korea—committing an act of naked cyber-aggression against the United States, because any serious attack on infrastructure has to be regarded as an act of war. To quote the late Omar Little, “You come at the king, you best not miss.” The chances that any state could successfully launch a knockout cyber-blow are vanishingly small. And it is difficult to conceive of a non-state opponent, which would have less to fear from retaliation, with the wherewithal to do serious damage.

On the other hand, the threats to U.S. assets are real and ongoing, and their sponsorship by the government (or the PLA, to the extent there’s a difference) is becoming impossible to deny. If gangs sponsored by the Chinese (or Russian, or Canadian) government were robbing banks in the U.S., you can bet the FBI and the banking industry would be working together to end the assault. A similar concerted effort needs to get top priority, both in Washington and in corporate boardrooms.

The reality is that even the best defense will not completely protect us against the online theft of assets. Attackers have too big an inherent advantage in this game, mostly because it is impossible to fully secure systems without destroying their usefulness. But the threats can be mitigated significantly, and it’s time we got cracking.

HTC One, Android Zero

Photo of HTC One (HTC)

There was a word missing from HTC’s unveiling of its impressive new HTC One phone. HTC executives talked about the BlinkFeed streaming home screen, the redone Sense user interface, the BoomSound audio system, and the Zoe photo-plus-video app. But there was no mention of the phone’s Android software. Even on the One’s web page, you have to drill down to specs to learn that it runs Android.

This downplaying says a lot about the branding efforts of both HTC and Google. HTC, having come through a very rough patch that saw its market share and profits tumble, is anxious to relaunch itself as a premium smartphone provider. Talking about Android cannot do this; mentioning Android just makes it look like a provider of commodity hardware running commodity software.

So instead, HTC is promoting the One brand as well as the subbrand it has chosen for the proprietary apps and services that it hopes will distinguish it from the Android pack. HTC isn’t alone. Samsung has established Galaxy as its premium brand and is spending heavily in both development and marketing to establish a unique hardware identity. About the only one promoting Android these days other than Google is Verizon, with its Droid franchise. (Probably not coincidentally, Verizon is the only one of the four major U.S. carriers not offering the HTC One.)

Of course the One does run Android (version 4.1.2, to be precise) and HTC, which doesn’t offer much in software beyond the apps it has developed for the One, needs the Android app ecosystem and the Google Play store to make the phone valuable to users. But HTC’s handling of the announcement makes it clear how much the brand value of Android has eroded even as its market share has grown. For Google, which seems to be struggling to find a way to make money off Android, that cannot be a good thing.

 

Spectrum: The Wheels of the FCC Grind Slowly

Dark Side of the Moon album cover

Remember back at CES in January when Federal Communications Commission Chairman Julius Genachowski announced a plan to free 195 MHz of spectrum in the 5 gigahertz band for expanded Wi-Fi? I hope you weren’t planning on using it anytime soon.

As outlined by wireless guru Steven Crowley, it’s going to be at least 2015 before the new unlicensed spectrum becomes available. The FCC has put a Notice of Proposed Rule Making (FCC-speak for its snail-paced official decision-making process) on the agenda for its Feb. 20 meeting.

One problem is that the FCC shares responsibility for spectrum allocation with the Commerce Dept.’s National Telecommunications and Information Administration–and NTIA doesn’t seem terribly enthusiastic about the idea and certainly is in no rush to implement it. The big complication is that federal government radars and ground-to-air communications systems operate in the 5 GHz band, and any new Wi-Fi uses will have to protect those operations. A preliminary NTIA report on spectrum allocation warns of interference risks and concludes “that further analysis will be required to determine whether and how the identified risk factors can be mitigated through, for example, the promulgation of new safeguards in addition to the FCC’s existing requirements.” That report is not due until the end of 2014, and then it is anyone’s guess how long it will take for whatever safeguards are recommended to be implemented.

Another threat to the additional Wi-Fi spectrum comes from, of all places, the auto industry. One of the proposed new Wi-Fi channels is adjacent to spectrum allocated for Dedicated Short-Range Communications, a very promising technology that can be used by cars to communicate with each other and with roadside sensors. The Intelligent Transportation Society of America has written to Genachowski warning of possible interference with DSRC and asking for further study before the expansion of Wi-Fi is approved.

Nobody ever said this spectrum stuff was easy.

Spectrum: Multiplication Beats Addition


Martin Cooper recalls the days of mobile radio-telephones before cellular service:

You’d have one station in a city and you could conduct in that city 12 phone calls at one time. During the busy hour, the probability of connecting, of getting a dial tone, was about 10%. Of course, the reason was a city with 12 channels could support perhaps 50 people with reasonable service. They put 1,000 people on it. So the service was abominable.

The solution had been developing for a long time before Cooper made the first cellular call in 1973. Back in 1947, engineers at Bell Labs came up with a scheme for using relatively low-powered transmitters to serve hexagonal cells. With some care and cleverness in assigning channels, the same spectrum could be reused, provided the cells were far enough apart. Over time, AT&T developed the technology that allowed a call to stay connected as a mobile phone moved from one cell to another and Motorola created the mobile handsets. An industry and a new way of life was born.

The sort of subdivision that made the cell phone possible will also enable a vast expansion of the amount of data that wireless networks can carry without a commensurate increase in wireless spectrum. Get ready for heterogeneous networks, or hetnets, which will use a variety of techniques to chop up spectrum and space into smaller chunks that allow for greater reuse.

Wi-Fi handoff will be a key part of the hetnet. It’s being used that way today, albeit in a somewhat random and uncoordinated way. Nearly all Wi-Fi-capable mobile devices are designed to switch to Wi-Fi for data whenever it is available. One big problem is that the device has only a vague idea of what “available” means. This works fine when I come home and my devices automatically connect to my network, whose password is stored in memory. My iPhone connects automatically to AT&T hotspots, and my iPad does the same for Verizon.

Many other networks, however, require a login. Sometimes it’s a password that you can enter once and have remembered from then on. Sometimes it’s a popup page that just wants you to agree to terms and conditions. And sometimes it’s a page that requires a username, a password, and often a credit card number for payment. While these methods vary in the annoyance they cause, all are a serious impediment to a seamless handoff. Even worse, your device will try to use a Wi-Fi network to which you haven’t connected, either because you lack a password or don’t care to pay. Sometimes you have to manually turn Wi-Fi off to get your phone or tablet to work properly.

Change is coming, through a technology known as Passpoint or Hotspot 2.0. This will allow truly seamless handoffs between cellular and Wi-Fi (and perhaps, in the future, white space) networks, with the device itself providing authentication. The standards are nearing final ratification. Once that happens, says Doug Lodder, vice president for business development of hotspot provider Boingo, “the carriers will run it through their labs and will negotiate roaming agreements. It’s starting to roll out, but we won’t see widespread availability until 2014.”

Small cells. Traditional cell antennas, mounted on towers or other structures, typically serve a radius from several kilometers down to several hundred meters, depending mostly on the height of the tower. Small cells, also known as microcells, picocells, and femtocells, serve ranges from a couple hundred meters down to a few tens of meters. Home femtocells are designed to provide connectivity to otherwise unserved places and connect to the network through a residential broadband connection. But other small cells are a fully managed part of a cellular network, intended to multiply the use of spectrum by chopping areas into very small cells.

You can’t just plop small cells down in areas already covered by standard cell service, at least not using the same frequencies. The Federal Communications Commission is proposing that the shared 3500 megahertz band be dedicated to small-cell use. Higher-frequency signals have shorter range and less ability to penetrate obstructions than the 700 to 2100 megahertz signals typically used for wireless data, making them well suited to small cells.
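The range penalty at higher frequencies falls straight out of the standard free-space path loss formula. This sketch compares a 1-kilometer link at 700 MHz and at 3500 MHz; the distances are illustrative, and real-world propagation adds building and terrain losses on top of the free-space figure.

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB for distance in km and frequency in MHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

loss_700 = fspl_db(1.0, 700)     # roughly 89 dB at 1 km
loss_3500 = fspl_db(1.0, 3500)   # roughly 103 dB at 1 km
print(round(loss_3500 - loss_700, 1))  # ~14.0 dB extra loss in the higher band
```

An extra 14 dB means the 3500 MHz signal arrives about 25 times weaker over the same path, which is exactly why that band favors cells tens or hundreds of meters across rather than kilometers.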

Small cells, and a related technology known as distributed antenna systems (DAS), have the advantage of making it much easier to provide good coverage inside buildings. As Cooper says, “It’s kind of an anomaly that if you think about it, most of our cellular conversations are in buildings and in offices, because that’s where we spend most of our time. But all the stations that provide services, almost all of them are outside. It’s kind of backwards.” Whereas small cells use multiple miniature access points, not unlike a Wi-Fi network, DAS splits the signal of a single base station among multiple antennas, each serving a small region. “You have smaller pipes, but fewer people attached to each pipe,” says Boingo’s Lodder. A single DAS array can also carry signals for several cellular networks.

Smart antennas. Cellular communication is a broadcast service. A single cell antenna typically covers a 120° sector of its cell. But smart antenna technology makes it possible to focus that beam and steer the signal to a recipient, allowing closer reuse of spectrum. There has been a lot of research on smart antennas, but limited deployment in the field. A version, called multiple-input, multiple-output (MIMO), is used with Wi-Fi and LTE, but the purpose has been more to extend range than to increase spectrum reuse. Smart antennas are one more tool in the engineering toolbox that can allow us to move a lot more data on the spectrum we have.

More wireless data spectrum is always welcome, and the growth of demand for bandwidth probably cannot be met entirely within existing spectrum allocations. But new spectrum is getting harder and harder to find, and the politics of prying it loose are exhausting and not terribly productive. Our best hope for meeting demand is to do more with what we have. And, fortunately, there is a great deal more that can be done.