The Sad State of Consumer 3D Printing

Makerbot and models (Makerbot)

CES this year had a huge area given over to 3D printing. Sadly, I found it about the most depressing area of the show. True, there were some interesting high-end machines and some very cool samples of work from commercial 3D printing service bureaus in a variety of materials. But the bulk of the space was devoted to consumer machines printing out the same tchotchkes we have been seeing for several years now. Prices for low-end machines continue to fall, dropping from the low four digits to the mid threes, but this nascent industry has yet to make a case for why anyone but a relatively small group of hobbyists might want one.

3D printing has been the object of a lot of wild technological enthusiasm for a while. Googling “3D printing will change everything” turns up 2.6 million hits. Starry-eyed futurists saw a 3D printer in the basement replacing the need to buy manufactured goods while a different model in the kitchen would print out food from edible powders. But look at a consumer-grade 3D printer and more likely than not, it will be printing a bunny or a Yoda head.

I believe 3D printing has tremendous commercial potential. It, along with the functionally related technique of CNC machining, has created a way to fabricate prototypes of manufactured objects much faster and much cheaper than traditional model making. For mass production, traditional manufacturing techniques such as injection molding, die casting, and stamping will always be cheaper and faster, but 3D printing holds great promise for very short-run custom manufacturing, especially with the development of increasingly capable printers that can laser-sinter metal powders into complex metal objects.

Medical researchers are conducting fascinating experiments with 3D printers, particularly the construction of printed objects that can be used as scaffolds to grow body parts. Artists are using 3D printers to construct objects that would be difficult or impossible to create with conventional techniques. (A number of these were on display at the mathematical art exhibit at the recent Joint Mathematics Meetings in Baltimore.)

The techno-optimism surrounding 3D printing just won’t go away. Nick Graham, founder of Joe Boxer, wrote for Business Insider:

The world will transform from a macro-manufactured supply chain to a micro-manufactured supply chain, or what is known as distributed manufacturing.  And this supply chain will not be thousands of miles long. Rather than one factory producing 10 million toys a month, there could be 10 million factories producing one toy a month, and those factories will not be overseas, but in your kitchen, your garage or wherever you feel like putting your 3D printer.

He also cited a CNN report in which $18 worth of plastic and electricity was used to 3D print consumer goods with a retail value of as much as $1,900. But as a business guy, he surely knows that this is like valuing a BMW based on its weight in steel, aluminum, and plastic. I suspect Joe Boxer never priced shorts based on their weight in cotton.

The fact is, that printer in your kitchen or garage isn’t happening unless you are a hobbyist. I think there are a number of reasons for this:

  • Most people can’t really think of anything they want to print in 3D. This is a problem that won’t go away.
  • The output quality of consumer-grade printers is poor. 0.3 mm resolution sounds good, but you can clearly see lines that mark the layers of deposition, not the smooth surfaces produced by molds. Increasing the resolution to improve quality leads to bad tradeoffs between tighter tolerances (read: more expensive) and even slower printing than the already painfully slow process.
  • It is very hard to design an object to be printed. Most people have no idea of how to use CAD software, or of how to design an object that is actually printable. All they can really do is run designs they download from sites such as Thingiverse. The software is thankfully getting better, and steps such as Adobe’s announced plan to add 3D support to its Creative Cloud applications will help, but 3D design will always be difficult.
  • The printers themselves are very fidgety and require a lot of calibration and adjustment and close monitoring of temperatures. This should get better as the machines improve.

The bottom line is that 3D printing is just not a very attractive proposition for consumers and not likely to become one. The scenario I think is more likely is much wider availability of 3D printing service bureaus. If you know how to do designs today, you can already send your print file off to a company such as Shapeways and have it produce your model in any of a wide range of materials, including plastics, ceramics, and metals. Soon, you may be able to take a broken object from home to a 3D scanner-equipped shop, have it scanned, and order a custom-manufactured replacement part, though in financial terms, this will probably only make sense for things for which conventional replacements are not available. There is real benefit in this, but it is hardly revolutionary.

Mac at 30: The Shadow of a Smile

happy-mac

Ben Bajarin points out that a key characteristic of Apple for the past 30 years has been to make things as simple as possible for users, and that the same spirit that motivated the Mac in 1984 drives the iPad today. I’ll agree and go further: Apple’s dedication to user experience extends to making its customers feel happy.

As Steven Levy notes in his outstanding reflection in Wired on the launch of the Mac, “it opened with a smile.” To be precise, with the friendly “happy Mac” icon, designed, along with the rest of the original system icons, by Susan Kare. The disk would spin for a while and eventually a “desktop” would appear, filled with more of Kare’s icons. Click one, using that other novel device, the mouse, and something interesting would probably happen.

Unless you were using computers back in the early 80s, you probably don’t realize how stunningly different the Mac was. When you fired up an IBM PC, you heard some beeps (the power-on self test). Then some cryptic configuration information appeared on the screen. Finally, if all went well, you would be presented with a line on the screen that looked like:

A:>

or, if you had a hard drive

C:>

followed by a blinking cursor. If you typed in a valid DOS command, something would happen.

The Mac wasn’t always happy. If the boot disk was missing or unreadable, it would show this puzzled icon:

mac-question-mark

mac-bomb

And if the Mac crashed, as happened not infrequently in those days, you would get the dreaded system bomb. This was the Mac at its most DOS-ish. The Resume button, like the Continue button on early Windows error messages, did nothing useful, even when it wasn’t greyed out. And the ID number, more often than not negative, provided no useful information, at least not to ordinary mortals. But, at least, there was always that whimsical bomb.

The original Mac belonged to what was still a primitive era of personal computing. Things went wrong at a rate we would not tolerate today. But the Mac managed to mostly make its users happy by making things easy and friendly, while IBM PCs remained hostile, intimidating devices (the first usable version of Windows was six years in the future when the Mac launched).

Apple has never lost this impulse. The original iPhone was far more complex and capable than the smartphones then on the market, but no one needed an instruction manual. You picked it up and you could figure out how to use it. The iPad, by virtue of being at some level just a big iPhone, was even more obvious.

Microsoft, by contrast, has never quite gotten the hang of this art of making users happy. The Windows 7 and Mac UIs are closer than they have ever been, and Windows Phone, while introducing a whole new UI metaphor, was relatively comfortable. Unfortunately, the effort to translate it to the PC with Windows 8 produced a hybrid mess, in which you can never live completely in either the familiar world of Windows 7 or the new but well-conceived world of the Phone-like Metro UI. It does not open with a smile, and it doesn’t make many users smile either.

Netflix and Neutrality

netflix-button-568x411
When the D.C. Circuit Court of Appeals struck down the Federal Communications Commission’s network neutrality rules, many commentators cited Netflix as the poster child for the horrors that await. Left free to discriminate, internet service providers would either throttle Netflix streaming traffic to favor their own cable content, or would charge Netflix extortionate fees.

Funny thing is, Netflix doesn’t seem particularly concerned, and thereby hangs a tale. After saying nothing for a week, Netflix discussed the issue in a letter to shareholders that shows it has a much clearer understanding of how markets really work than the net neutrality advocates do. The complete section is worth reading:

Unfortunately, Verizon successfully challenged the U.S. net neutrality rules. In principle, a domestic ISP now can legally impede the video streams that members request from Netflix, degrading the experience we jointly provide. The motivation could be to get Netflix to pay fees to stop this degradation. Were this draconian scenario to unfold with some ISP, we would vigorously protest and encourage our members to demand the open Internet they are paying their ISP to deliver.

The most likely case, however, is that ISPs will avoid this consumer-unfriendly path of discrimination. ISPs are generally aware of the broad public support for net neutrality and don’t want to galvanize government action.

Moreover, ISPs have very profitable broadband businesses they want to expand. Consumers purchase higher bandwidth packages mostly for one reason: high-quality streaming video. ISPs appear to recognize this and many of them are working closely with us and other streaming video services to enable the ISPs subscribers to more consistently get the high-quality streaming video consumers desire.

In the long-term, we think Netflix and consumers are best served by strong network neutrality across all networks, including wireless. To the degree that ISPs adhere to a meaningful voluntary code of conduct, less regulation is warranted. To the degree that some aggressive ISPs start impeding specific data flows, more regulation would clearly be needed.

What Netflix knows is that the ISPs, who are monopolists or duopolists in most markets, do not operate in a vacuum. Yes, in theory a monopolist can do anything it wants, but in practice it is constrained by its customers, who have their own ways of dealing with unconscionable actions. (The exception would be a monopolist who has total control of an essential good for which there is no substitute, say water. Governments either prevent such monopolies from forming or regulate them closely. Governments, too, have to worry about their customers, i.e., voters.)[pullquote]Mess with us, says Netflix, and face the righteous wrath of our customers, who like us a lot more than they like you.[/pullquote]

Netflix, while avoiding the panicky reactions of some of its erstwhile supporters, is putting down a marker: Mess with us and face the righteous wrath of our customers, who like us a lot more than they like you. And if the customers don’t get riled up on their own, we’ll see to it that they are riled. And these angry customers can cause a lot of trouble. Ask a cable operator that faced a revolt by customers when it kept a big football game off its service in a dispute over retransmission fees. They always cave.

A slightly more realistic fear on the part of neutrality regulation advocates is that monopolistic carriers could crush innovative startups by providing discriminatory rates that protect incumbents. Never mind that this is the exact opposite of their first fear; it is not at all clear why such an action would ever be in an ISP’s interest. And if an ISP were to collude with a Netflix against a challenger, the two would quickly find themselves in antitrust trouble (see U.S. v. Apple).

What Netflix is saying is that some sort of reasonable neutrality is in everyone’s interest, even in the absence of a regulatory requirement. If the ISPs act irrationally (or if we are misreading what is in their best interest), there is plenty of time for a regulatory response. The course favored by the strongest net neutrality advocates, common carrier regulation of ISPs, might have a solid legal basis, but would be far more intrusive than the relatively modest rules the court struck down. Let’s wait and see before we urge drastic action.


Windows 8 is Worse Than Vista (for Microsoft)

Hewlett-Packard created a bit of a stir this week when it promoted its PCs by announcing that it was bringing back Windows 7, the operating system that Microsoft replaced nearly a year and a half ago. Despite claims such as “HP really wants people to buy a Windows 7 PC instead of a Windows 8 machine” by The Verge‘s Tom Warren, the promotion was more of a marketing stunt than a retreat from Microsoft’s flagship operating system by one of its most important partners.

Still, HP’s willingness to trade on the perceived unpopularity of Windows 8 is an indication of the steep challenge facing Microsoft as it considers the design of the next version of Windows, which may or may not be called Windows 9 but which is expected to be introduced, in at least preliminary form, at Microsoft’s Build developers’ conference in April.

The Vista challenge. The last time Microsoft faced a somewhat similar challenge was in 2007, after it released Vista as an overdue replacement for Windows XP. Vista opened to less-than-enthusiastic reviews, made worse by the fact that the launch, the first major update of Windows in more than five years, had been heavily hyped by Microsoft.

Vista was not as bad in reality as it is in memory, but it did have some very serious problems. First, Microsoft, as it usually did, grossly understated the hardware requirements. Customers who upgraded older systems faced serious performance issues and even some new machines weren’t up to the job, even though Vista automatically disabled some processor-intensive graphics features on slow systems. A lot of user interface features were changed for no apparent reason. And Windows XP’s notoriously promiscuous willingness to install any software it was offered was replaced with a nagging feature called User Account Control that required an administrative password for the simplest of configuration changes. Bottom line: People hated it.

But there were two saving graces for Microsoft in the situation. First, computer users saw no real alternative to Windows. Mac market share was growing, but not so much as to be threatening. The dislike for Vista might cause people to delay PC purchases or to demand machines that could be downgraded to Windows XP (sound familiar?), but the customers weren’t going anywhere.

Superficial problems. The second saving grace was that the problems of Vista were mainly superficial. Some tuning and upgrades to faster systems, whose prices were falling quickly, took care of performance. The more objectionable user interface issues were fixed and UAC was tamed. Windows 7, released in mid-2009, was a fairly minor reworking of Vista, a fact revealed by its internal version number, Windows 6.1. It was a relatively easy fix and was an immediate critical and popular success.

There is no easy fix for Windows 8. The Windows 8.1 update dealt with some of the most obvious issues: The UI formerly known as Metro is now somewhat more flexible and less space-wasting on big displays, Metro users have less need to run the Desktop, and users of traditional desktop apps on traditional desktops or laptops now get to spend more of their time in the legacy Desktop environment without bouncing out to Metro.

But the vexatious reality is that Windows 8 remains a two-headed operating system that does everything, but nothing well. Apple has wisely understood that the worlds of touch devices and of keyboard-and-pointer devices are separate and irreconcilable. The iPad can’t do everything a Mac can do, the Mac can’t do everything an iPad can do, and Apple and its mostly very happy customers are just fine with that.

Fundamental duality. It would be a major shock if Microsoft announced that Windows 9 would change the fundamental dual nature of Windows. I think Microsoft really should pull the two halves of Windows 8 apart and come out with two operating systems (or at least two user interfaces, not quite the same thing), each optimized for its own usage. Tablets should get a touch interface–son of Metro. Traditional PCs, likely to be the smaller market in the future, need a UI designed to work primarily with a keyboard and a pointing device, and that would probably look more like legacy Windows than Metro.[pullquote]It would be a major shock if Microsoft announced that Windows 9 would change the fundamental dual nature of Windows.[/pullquote]

I’m not convinced there is much of a future for touchscreen notebooks. I have used Windows 8 and 8.1 on both conventional clamshell touch notebooks and convertibles of varied design, and none of them comes close to a MacBook. In fact, running Windows on a MacBook is a superior experience to most Windows notebooks because of the superiority of Apple’s touchpad.

The big question is just what will make a Windows 9 tablet an attractive proposition. The answer has to be what Microsoft has always thought it was: Office. Though consumers have learned to live without Office, the productivity suite remains extremely important to business and professional users. Unfortunately, Office 2013, Microsoft’s companion to Windows 8, changed just enough to be annoying to Desktop users while being all but unusable with a purely touch interface. If Office is the big selling point for the Surface (Pro or otherwise), it is also the reason you never see a Surface without a keyboard attached.

Office is the key. The mysterious Office for Metro, about which Microsoft has been very, very quiet, is the key to the whole project. If Microsoft can come up with versions of Word, Excel, Outlook, and PowerPoint that provide the features users demand while working well on a touch device, it has a chance for a dramatic revival of the franchise. Of course, this is a very difficult thing to do and Microsoft, working in Apple-like secrecy, has thus far provided almost no clue about where it is headed.

The Build conference, to be held in enemy territory in San Francisco April 2-4, is looking to be the most important milestone for Microsoft in a long time.


CES: The Company That Wasn’t There


Steve Ballmer in happier times, at CES 2012

For many years, Microsoft set the tone for CES with a keynote the evening before the annual consumer electronics extravaganza opened. It was always heavily attended and heavily covered, and there was usually at least one piece of significant news. On the show floor, Microsoft had a huge, prominent booth, conveniently located at the point where attendees were most likely to enter the sprawling Central Hall of the Las Vegas Convention Center.

At its 2012 appearance, CEO Steve Ballmer (above) announced Microsoft would not be coming back. But last year, even with its keynote slot taken up by Qualcomm’s Paul Jacobs and its floor space occupied by Chinese TV maker Hisense, Microsoft managed to be a presence at CES. There was considerable interest in the newly released Windows 8 and Windows RT, and in the Surface and Surface Pro tablets and Ultrabooks.

This year: Nothing. Microsoft, of course, had a presence in Las Vegas, a suite of meeting rooms at one of the hotels. But far more notable than the lack of official participation in CES is the near-zero mindshare Microsoft had among both attendees and its erstwhile partners. After all, Apple has managed to be a looming presence at CES for years without ever taking an official role. In 2007, it notoriously drew a huge chunk of CES media from Las Vegas to San Francisco to cover the mid-CES launch of the original iPhone. In an astonishingly short time, Microsoft has gone from being the great, feared bully of the tech world to being a company that most people rarely think about.[pullquote] To the extent that CES is a reflection of the viability of Microsoft’s consumer offerings, the company has some big decisions to make. [/pullquote]

The fact that Microsoft was neither seen nor talked about at CES is probably a reasonable reflection of the company’s current place in the consumer world. The new Xbox One is selling fairly well, though not as well as the Sony PlayStation 4, but it doesn’t seem to be generating a great deal of excitement. I saw Xbox Ones here and there around CES, along with a larger number of Xbox 360s, but without Microsoft’s sponsorship, there was no one to generate Xbox buzz. By contrast, Sony dedicated a substantial part of its exhibit to the PS4, and the display included a stunning video wall on which a PS4 FIFA World Cup game was being played (photo below).

sony-fifa-wall

For Microsoft, the Intel exhibit may have been the low point of the show. Intel and Microsoft were long neighbors on the show floor and effectively promoted each other’s products. There were a fair number of systems, mostly Ultrabooks, running Windows in the Intel booth. But Intel grabbed a lot of attention by announcing that it would be promoting laptops running both Windows and Google’s Android software. And the center of interest of its display was a section promoting its new ultra-small, ultra-low-power Edison system-on-chip and, in particular, its entry into the internet of babies: the Mimo baby monitor from Rest Devices, a tiny plastic turtle that slips into a specially designed onesie and beams data on your baby’s motions and vital signs to your iOS or Android device. Edison is based on an x86 processor, but devices based on it are going to run Android or Linux, not Windows.

Among leading laptop makers, only Lenovo was present on the floor. ((The original version erroneously said Lenovo was among the absent.)) HP and Dell skipped the show floor altogether. And while companies such as Toshiba, Samsung, Sony, and Panasonic showed laptops, they did not get very prominent placement.

Microsoft, of course, still has a healthy enterprise business, and you would not expect that to be reflected at CES. But to the extent that CES is a reflection of the viability of Microsoft’s consumer offerings, the company has some big decisions to make. Its acquisition of Nokia seemed to represent a decision to stick with and rebuild the consumer market, but so far it is not helping. As Microsoft goes through its protracted selection of a new CEO, it has to decide whether it really wants to be in consumer markets–and just what sort of investment it will take to become relevant again.

The Net Neutrality Slap Down: Time to Move On

The decision from the Court of Appeals for the District of Columbia was hardly unexpected, but it was sharp and unequivocal. Writing for a three-judge panel, Judge David Tatel told the Federal Communications Commission that its Open Internet Order, designed to preserve network neutrality, exceeded its authority: “[E]ven though the Commission has general authority to regulate in this arena, it may not impose requirements that contravene express statutory mandates.”

You would think that by now, the FCC would have figured out that the courts mean it. Every time the Commission tries to stretch beyond the letter of the law, it gets shot down. In the case preceding this one, the FCC sanctioned Comcast for violating its network neutrality principles. The same appeals court struck down that action. The FCC morphed the principles into the Open Internet Order, and now the court has said no again. The White House and the FCC may appeal the decision to the Supreme Court, but their chances are not good. Judge Tatel, a Clinton appointee, is a highly respected jurist who carries a lot of weight with Supreme Court liberals. (Judge Judith Rogers, also a Clinton appointee, concurred; Senior Judge Laurence Silberman, a Reagan appointee, dissented in part, but in a way that would have further restricted FCC powers.)

Frankly, FCC Chairman Tom Wheeler ought to move on. The theory behind network neutrality is sound. The argument is that unless neutrality is enforced, carriers such as Verizon and AT&T will adopt discriminatory practices or pricing policies that favor the largest and most powerful content providers. Startups and little guys will be hurt and innovation will suffer.

The problem is that network neutrality violations are unicorns. Everyone knows what they are supposed to look like, but no one (well, hardly anyone) has ever seen one. Free Press, a pro-neutrality advocacy group, reacted predictably to the decision: “Right now there is no one protecting Internet users from ISPs that block or discriminate against websites, applications or services. Companies like Verizon will now be able to block or slow down any website, application or service they like. And they’ll be able to create tiered pricing structures with fast lanes for those who can afford the tolls and slow lanes for everyone else.”

But why would they? What actually would be in it for them? One reason it is hard to make a compelling case for net neutrality regulation is that in the two decades since the internet was turned over to private carriers, this simply has not happened. If carriers really started messing with traffic, customers and regulators would have ample opportunity to respond.

att-sponsored-data

AT&T recently announced that it would arrange deals with content providers who would subsidize traffic on the AT&T 4G wireless network, and those bits would not count against customers’ data caps. Opponents, as expected, promptly denounced the idea as a violation of net neutrality and the FCC said it would at least take a close look. The real question is whether this will help or hurt consumers, and I think the best we can say at this point is that it depends on just how it is implemented. I could certainly see a benefit to me from being able to watch Netflix without worrying about data consumption–provided nothing else I depend on is lost in the process. I think it is time for a bit of regulatory humility; let the FCC stand back and see how things work out. If they go badly for consumers, there will be plenty of time for government intervention.

The appeals court left the door open to a more draconian alternative. The FCC has classified broadband service providers as Title I information services, putting them in the same lightly regulated class as cable TV operators. It could, as the court noted, reclassify them under Title II, which would regulate them as common carriers, the same as voice providers. Such a course has been advocated by Free Press and others.

I think it would be a terrible mistake. For one thing, it would embroil the FCC in a huge fight with Congress and would likely freeze progress on frankly more important issues, such as freeing more wireless data spectrum and furthering the transition of telephone services to IP networks. Second, it would create a regulatory regime in which the only companies likely to thrive are the spiritual and literal heirs of the Bell System, AT&T and Verizon (and, to a lesser extent, CenturyLink). Survival under that sort of regulation requires a special skill set that these carriers have spent over a century refining. Title II reclassification would end up being a far more profoundly anticompetitive move than any retreat from network neutrality.


CES: Android’s Big Business Bid

Android is turning up in the strangest places. The Google mobile operating system, already the numerically dominant platform for smartphones and tablets worldwide, is making a move to desktops and laptops.

It’s not clear that this is something Google envisioned or much desires. Google has had a fair amount of success with PC-like Chromebooks using the browser-based Chrome OS. But OEMs are opting instead to use Android in systems that often also incorporate Microsoft Windows in some form.

At CES, both Hewlett-Packard and Lenovo are showing all-in-one desktop units running Android that can also double as standard desktop monitors. I’m still having a bit of trouble figuring out the use case for these systems, as well as the case for Android rather than Chrome OS, but the manufacturers are pressing ahead.

The Hewlett-Packard Slate 21 Pro All-in-One is a $399 touchscreen Android desktop aimed at business markets. With its 21.5-inch 1080p display, it looks like a seriously oversized tablet. A hinged prop gives a continuous range of screen adjustment from near vertical to near horizontal, and a USB mouse and keyboard are standard.

The Slate 21 runs Android 4.3 (Jelly Bean) and connects to the Google Play Store to run all standard Android apps. Since the Slate 21 is more-or-less permanently fixed in landscape position, a modification to the OS lets portrait-only apps run scaled up and pillarboxed on the horizontal screen.

A couple of features let the Slate 21 function as a business thin client. Built-in software supports printing to network printers. And the unit is certified to run Citrix Receiver, letting it function as a virtual Windows desktop in a Citrix XenMobile environment. Skype and HP MyRoom teleconferencing apps are preloaded. And you can convert the Slate to a standard desktop monitor by plugging in a PC with an HDMI cable and pushing a button to switch.

HP is targeting the Slate 21 primarily at small and medium-size businesses as well as hospitality and other verticals, including kiosk use. The device is compatible with standard VESA accessories for wall or swing-arm mounting.

HP is also selling a version of the Slate aimed at consumers (without the ability to double as a monitor). Acer also offers a similar consumer device.

Lenovo is taking a somewhat different tack with the ThinkVision 28. This $1,199 Android all-in-one features a stunning 28-inch 4K touch display that can double as a PC monitor. Its primary market is likely to be creative professionals, such as photographers and graphic artists, though it’s still a bit unclear to me what they will do with the Android part. Lenovo also offers a cheaper and smaller consumer Android all-in-one, the $399 N308 with a 19-inch display.

Both Intel and AMD are pushing a somewhat more curious idea, laptops that can dual-boot Windows and Android. Technically, this is not particularly difficult, though dual boots have never been terribly popular outside of some niche markets. Asus has announced the Transformer Book Duet, a 13-inch convertible laptop, starting at $599, that can boot both Windows 8 and Android Jelly Bean.

I’m no great fan of Android tablets to begin with, and it will take some convincing to get me to believe there is a real market for these products. (I think similar devices based on Chrome OS might make more sense.) Still, it’s good to see experimentation continuing in traditional form factors.

When is a Core Not a Core?

Nvidia generated some pre-CES excitement by announcing its Tegra K1 processor, which looks to be the most powerful graphics engine ever designed for mobile use. But the company also spread a lot of confusion by describing it as a “192-core processor” chip. How did we go from two- and four-core system-on-chip processors to 192 in one enormous leap?

Of course, we didn’t. There are cores, and there are cores. Specifically, there are general-purpose (CPU) cores and graphics (GPU) cores, and Nvidia was not terribly clear about the distinction in the announcement. The Tegra K1 in fact comes with either two 64-bit or four 32-bit CPU cores, plus a 192-core GPU unit based on Nvidia’s much-admired Kepler architecture.

There’s a big difference between how these two types of cores work. CPU cores are designed to handle the general run of processing, with sophisticated units using technologies such as out-of-order execution and branch prediction to speed things along. Multiple cores work independently of each other, and some CPUs can handle more than one process thread per core.

GPU cores use a very different architecture called single instruction, multiple data (SIMD). All of the cores execute the same instructions on parallel streams of data. This approach was, of course, developed for processing graphics where, for example, rotating an image requires performing the same mathematical operation on every pixel, a job that naturally lends itself to massive parallelism.
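
The every-pixel idea is easy to see in code. Here is a minimal Python sketch using NumPy’s vectorized arrays as a CPU-side stand-in for SIMD hardware (the image data and brightness factor are invented for illustration):

```python
import numpy as np

# A 1080p RGB image: about 2 million pixels, each needing identical arithmetic.
image = np.random.randint(0, 256, size=(1080, 1920, 3), dtype=np.uint8)

# A scalar CPU loop would visit one pixel at a time, millions of iterations.
# The SIMD-style version issues one instruction stream over all the pixels;
# NumPy dispatches the multiply across the entire array in bulk.
brightened = np.clip(image.astype(np.float32) * 1.2, 0, 255).astype(np.uint8)
```

One operation, two million data elements: that, in miniature, is the kind of lockstep work a bank of GPU cores is built for.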

But GPU processing in recent years has moved well beyond graphics as it turns out there are many other computing chores that lend themselves to SIMD organization. A whole branch of computer science has arisen to take advantage of general-purpose computing on GPUs (GPGPU). Nvidia offers a set of programming language extensions called CUDA to help developers create GPGPU programs, while OpenCL is an open-standard equivalent.

SIMD processing is not practical for every problem, but where it is good, it is very, very good. Nvidia should have been more precise in its nomenclature, but bringing a Kepler-class GPU to a mobile system-on-chip could create a new world of high-powered mobile computing.


CES: Wearables Yes; Watches, Not So Sure

As the CES pre-announcements pour in, it’s clear that a leading feature of this year’s electronics extravaganza will be wearable devices of all types. Smart watches will make the biggest splash, but I have grave doubts about this category’s chances for success. On the other hand, small, no-display wearables built on sensor technologies are likely to have a big impact over time.

I have been using a Qualcomm Toq watch for the past couple of weeks, and while I find it a very interesting technology demo, it fails a very basic test as a product: I cannot figure out what problem I have to which the Toq is an answer. Unlike many of my younger colleagues, I don’t feel fully dressed without a watch, even though my omnipresent mobile phone serves perfectly well as a timepiece. But I am not about to trade the elegant Baume et Mercier that I have worn for years for a big chunk of plastic.

The Toq does a fine job of relaying selected information–incoming calls, messages, email headers, stock prices, weather, and more–from an Android phone, currently a Moto G, to my wrist. But I can get all that information and a great deal more just by fishing the phone out of my pocket. I just don’t gain enough from a smart watch to justify wearing one.

Not that the Toq is without its interesting features. At the top of the list is its Mirasol display. This is a reflective dichroic technology that, like E Ink, requires no power to maintain a persistent image but, unlike E Ink, displays full color. This makes it possible to have a display that is always on, as a watch should be, and still be able to go for up to a week without recharging (wirelessly, in the case of the Toq). The color is considerably less vibrant than an LCD or OLED display (and less saturated than in the Qualcomm photo above), but it’s not bad, and the reflective technology means that the quality of the image actually improves in bright light. The big problem with Mirasol is that Qualcomm, which has been working on the technology for several years, has had a lot of trouble manufacturing it at scale. Even now, it’s not clear that Mirasol displays can be produced competitively in sizes larger than the Toq’s 1.55 inches.

Pure sensor wearables have a lot more appeal. So far, the field has been dominated by two types of devices: fitness sensors, such as those from Fitbit, Nike, and Jawbone, with very limited or no displays, and medical sensors, such as heart rate or respiration monitors, which are mostly regulated as Class 1 or Class 2 medical devices.

The potential for big improvements in sensors is coming from technologies like the M7 chip in the Apple iPhone 5s and the X8 chipset in the Moto X. These combine accelerometers, gyroscopes, and compasses to give six-axis motion sensing with memory and a low-powered processor. The result is a device that is able to log motion over a long period of time with very little power consumption. If these technologies are moved into a separate device and combined with a Bluetooth LE (low energy) radio, you have a freestanding device that can record sensor measurements for an extended period and upload the data for analysis only when it is convenient, as in the sketch below. If the devices get small enough and cheap enough, they could be built into clothing, athletic equipment, or pretty much anything else you can imagine.
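
To make the record-now, upload-later architecture concrete, here is a minimal Python sketch of such a logger (the class and the radio interface are hypothetical; a real device would implement this in firmware, not Python):

```python
import time
from collections import deque

class MotionLogger:
    """Buffers sensor samples cheaply; uploads only when a radio link appears."""

    def __init__(self, capacity=100_000):
        # A ring buffer: the oldest samples fall off the end, so the logger
        # can run indefinitely within a fixed, tiny memory budget.
        self.buffer = deque(maxlen=capacity)

    def sample(self, accel, gyro, compass):
        # Called at a low rate by the sensor hub; no radio power spent here.
        self.buffer.append((time.time(), accel, gyro, compass))

    def flush(self, radio):
        # Called opportunistically, e.g., when a Bluetooth LE central connects.
        batch = list(self.buffer)
        self.buffer.clear()
        radio.send(batch)  # hypothetical radio interface
```

The design choice is the whole point: sampling is nearly free, and the power-hungry radio wakes only for brief, batched uploads.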

Thinking hard about the smart watch, I have not been able to come up with a reason why I would want one (which is why, despite persistent rumors, I don’t think Apple will go beyond internal experiments with the design). But let your imagination roam and you can come up with all manner of uses for wearable or embeddable sensors. That’s where the action will be.

Quantum Computing, the NSA, and Reality

Washington Post, 1/3/2014

The Washington Post led today’s front page with a very curious choice: an article, by Steven Rich and Barton Gellman, that said the National Security Agency is sponsoring research into quantum computing that, if successful, would break public key encryption. The story is odd for two reasons. First, it would be very strange if the NSA were not doing this, since quantum computing is a hot area of cryptologic research and cryptology is at the core of the NSA’s mission. Second, except for revealing a contract between the NSA and an obscure University of Maryland physical sciences lab, the article contained essentially nothing new. In fact, if you read to the end of its fourth paragraph, it told you “the documents provided by [Edward] Snowden suggest that the NSA is no closer to success than others in the scientific community.”

To understand why this is important, you’re first going to have to put up with a brief lesson in cryptography and math. Traditional “symmetric” encryption algorithms, such as the Advanced Encryption Standard, are very efficient, but have a big problem: You must have a copy of the secret key, typically a 128- or 256-bit number (roughly 40 or 80 digits), to either encrypt or decrypt the data. This isn’t much of a problem when you are, say, encrypting the data on your own hard drive. But if secret information needs to be shared, securely transmitting the secret key has traditionally been encryption’s Achilles’ heel.

That’s the problem asymmetric, or public key, encryption was designed to solve. Data encrypted with one key can be decrypted with another and only one of the keys need be kept secret. The two keys are related by a mathematical technique with the interesting property, called a trap door, that makes it simple to compute in one direction but all but impossible to reverse. In the case of the RSA algorithm, key security depends on the fact that it is easy to multiply two large prime numbers–typically about 350 or 700 digits–together, but very hard to factor their product to find the primes. In a more abstruse technique called elliptic curve encryption, the challenge lies in solving something called the discrete logarithm problem over an elliptic curve (which I am not going to attempt to explain, though you can read about it here.)
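
A toy Python experiment shows the trap-door asymmetry even at laughably small scale (the primes below are seven digits; real RSA primes run to hundreds):

```python
import time

def smallest_factor(n):
    # Brute-force trial division: the generic "reverse" a classical
    # computer is stuck with, and the reason factoring-based crypto works.
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f
        f += 2
    return n

p, q = 1_000_003, 1_000_033            # two small primes

t0 = time.perf_counter()
n = p * q                              # forward direction: one multiplication
forward = time.perf_counter() - t0

t0 = time.perf_counter()
factor = smallest_factor(n)            # reverse direction: ~500,000 divisions
backward = time.perf_counter() - t0

print(f"recovered {factor}; reversing took {backward / forward:,.0f}x longer")
```

The gap only widens with size: double the number of digits and the multiplication barely notices, while trial division slows by a factor of a million.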

Public key encryption has one very big drawback: It is orders of magnitude slower than symmetric techniques, making it practical only for encryption of very short messages. So, public key and symmetric encryption are used together to get both speed and convenience. For example, to protect a financial transaction on the internet, public key encryption, built into your browser or app, is used to protect a “session key.” Once the session key has been transmitted, a symmetric technique such as AES is used to protect the actual data.
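
In code, the hybrid dance is only a few lines. Here is a minimal sketch using Python’s third-party cryptography package (Fernet is AES under the hood; real TLS adds certificates, handshakes, and much more):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.fernet import Fernet

# The server's long-lived RSA key pair; only the public half travels.
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Client side: slow public-key encryption protects only the short session key,
session_key = Fernet.generate_key()
wrapped_key = server_key.public_key().encrypt(session_key, oaep)

# ...then the fast symmetric cipher carries the actual transaction data.
ciphertext = Fernet(session_key).encrypt(b"pay merchant 42 the sum of $19.99")

# Server side: unwrap the session key, then decrypt the bulk data.
recovered = server_key.decrypt(wrapped_key, oaep)
plaintext = Fernet(recovered).decrypt(ciphertext)
```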

What does any of this have to do with quantum computers? In 1994, a Bell Labs (now MIT) mathematician named Peter Shor developed an algorithm that could factor numbers many, many times faster than the best classical technique. The difficulty was that it relied on quantum effects and could only be carried out on a quantum computer. And while the theory of quantum computing is well understood, the machines have proven devilishly difficult to build. In 2001, an IBM Research team succeeded in using Shor’s algorithm to factor a number for the first time. Unfortunately, the problem it solved was 15=3×5. In 2012, this result was improved to 21=3×7. While these were important theoretical results, they leave us a long way from being able to factor the 1,024-bit product of two large primes. And a variant of Shor’s algorithm that can be used to solve the discrete logarithm problem is even further from practicality.

The reason why the NSA would be interested in quantum computing is obvious, but so is the fact that the current state of the art does not pose a threat to anyone. In recent years, there have been suspicions among researchers that the NSA might have achieved a secret breakthrough that would put it well ahead of academic researchers. At least to the extent we can tell from the documents obtained by Edward Snowden, that does not appear to be the case, and our current techniques of encryption are safe from at least this type of attack.


Eight Innovators That Shook the World

Note: This article was updated to correct the omission of Google.

There’s no more tedious subject on the internet than an endless discussion of which companies are or are not innovative. If you doubt it, pick a random Tech.pinions comment thread; if the thread is of any length the subject is sure to come up.

The main reason these arguments are so fruitless is that people are not bothering to define their terms, so they end up arguing more about what innovation is than who does it. So to end the year by rushing in where angels fear to tread, I want to take a look at the most innovative companies of the personal computer era, going back to around 1980.

But to start with, I am going to define just what I mean by innovation. Unlike invention, innovation does not require major technological breakthroughs. Instead, it is the process by which inventions, perhaps yours, perhaps those of others, are turned into novel and useful products and services. The companies I am talking about here created products or services that changed the world in important ways, though many of them invented little or nothing. Here, in no particular order, is a look at eight companies whose personal electronics innovations changed the world for the better.

Apple ][+

Apple: Apple may, as its critics claim, not be much of an inventor, but the company has an unparalleled record as an innovator. From the Apple ][ to the Mac to the iPod to the iPhone, the iPad, and the new Mac Pro, Apple has (except for a few grim years in the mid-1990s) simply made everything it touched work better. Even its occasional flops, the Newton MessagePad and the QuickTake camera, for example, were interesting products that made significant contributions. And when Apple wasn’t doing breakthrough products, it was revolutionizing the retail experience and, as Harry C. Marks points out here, customer service.

Google: In some ways, Google is the anti-Apple. Where Apple is tightly focused and highly selective in its product development, Google seemingly will try anything, and many of its projects go nowhere. Its spectacular innovative success, of course, is web search. Sergey Brin and Larry Page did not invent the mathematical approach behind the PageRank algorithm, but they tamed it and made it usable, and Google has never stopped refining search. Nor has Google ever stopped finding new ways to put search to work, both providing services and making money. The outstanding example of a search extension was Google Maps. There were plenty of digital maps before Google came along, but it took the combination of location awareness and search to make them truly useful. Google’s mobile maps on the iPhone and later Android phones helped turn smartphones into indispensable information tools.
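
For the curious, the heart of the PageRank idea fits in a dozen lines. A toy power-iteration sketch in Python (the four-page link graph is invented; 0.85 is the damping factor from Brin and Page’s paper):

```python
import numpy as np

# Toy web: page i links to the pages listed in links[i].
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, damping = len(links), 0.85

# Column-stochastic matrix: M[j, i] is the chance a surfer hops from i to j.
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1.0 / len(outs)

rank = np.full(n, 1.0 / n)       # start with all pages equal
for _ in range(50):              # power iteration converges fast on small graphs
    rank = (1 - damping) / n + damping * (M @ rank)

print(rank)  # pages with more, and better-connected, inbound links score higher
```

Taming this computation at the scale of billions of pages, and keeping it fresh, was the innovation.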

Intel: Intel is an exception, both a major inventor and innovator. The company invented the microprocessor and, if you count the work co-founder Robert Noyce did at Fairchild Semiconductor, it can claim the integrated circuit as well. Intel’s microprocessors condensed the complex computational guts of computers onto a single chip and enabled the personal technology revolution. But Intel added innovation by developing a production process focused relentlessly on manufacturing efficiency. The company was often not the first to use the newest chip technologies, but it was organized so that once a technology was adopted, it could move into production very rapidly and at massive scale. The result was a steady increase in computing power and decline in price that transformed the industry.

kindle

Amazon.com: Amazon’s most significant invention is the notorious “one-click” patent. But as an innovator, it has revolutionized retailing. It also turned a decade of failed attempts to create e-readers on its head by making the purchase and consumption of digital books a simple and seamless experience. And along the way, it turned some surplus computing and storage capacity into Amazon Web Services, a multibillion-dollar business that has allowed countless startups to get off the ground and scale, sometimes to spectacular size (see Netflix), with minimal capital investment.

Microsoft: No leading tech company has been more reviled for lack of innovation than Microsoft. It’s true that the company has not been a deep fount of invention, though it’s done more than most critics will allow, but innovation is another story. The most significant contribution of Microsoft was the democratization of business computing, which in turn made the explosion of personal computing (and the commercial internet) possible. Having cleverly negotiated a non-exclusive license deal for MS-DOS with IBM, Microsoft worked with Compaq and other clone makers to make computing cheap enough to put a PC on every desktop. The development of Windows, especially Windows 95, dramatically increased the accessibility of computers to non-technical users. And though it was late in recognizing the importance of the internet, it was Microsoft that gave hundreds of millions of users the wherewithal to connect.

dynatac

Motorola: Motorola didn’t quite invent the cellphone by itself. AT&T Bell Labs scientists came up with the idea of cellular networks years before there were phones to use on them. But a Motorola team headed by Martin Cooper developed the first practical cell phone, the DynaTAC. Because of the size and cost of early handsets, most of the first cell phones were permanent installations in cars. But Motorola came up with the pocketable (if you had a fairly big pocket) MicroTAC and then the “miniature” StarTAC (still a lot bigger than today’s smartphones) that turned cellular telephony into a true consumer industry. After dominating the industry in the early years, Moto lost its way during the transition from feature phones to smartphones and now is a division of Google, but its innovative contribution is undeniable.

Hewlett-Packard: No, not the PC operations, which turn out competent, mostly boring machines by the millions. HP’s big innovation was making laser printing universally available. Apple actually produced the first desktop laser printer, the LaserWriter, but it was very expensive and worked only with Macs. HP took the same Canon printing engine and produced a hit for offices of every size and soon after, for home use too. As a bonus, the early HP LaserJets, especially the LaserJet 4 series, were monsters of reliability and durability. HP’s big fail: doing little to make connecting computers and printers simpler since about 1990.

Handspring Treo

Palm/Handspring: Neither Palm nor its offspring and later acquirer Handspring were ever phenomenally successful companies. They were chronically underfunded, and Palm suffered from terrible corporate ownership. Yet for all of their soap-opera struggles they managed to bring to market two tremendously important firsts: the first useful PDA and the first practical smartphone. The Palm Pilot wasn’t the first PDA, but it was the first one people wanted to use, mostly because of designer Jeff Hawkins’ relentless focus on a simple user experience. Qualcomm came out with the first smartphone by combining a Palm with a cell phone, but the sleek, integrated Handspring (later Palm) Treo set the stage for the revolution.

Apple, iPhone, and the NSA: A Tale of Sorry Journalism

Copy of NSA document from Der Spiegel

Watching CNN on New Year’s Eve, I learned that the National Security Agency was able to snoop on everything I did or said on my iPhone. Actually, I had been reading this for a couple of days on an assortment of web sites, whose idea of reporting seems to consist pretty much entirely of reading and borrowing from other web sites, with, or more likely without, attribution.

If you dig back through the sources here, you find a fascinating dump of documents in Der Spiegel (German original) about the NSA’s Tailored Access Operations, including a 50-page catalog of snooping devices worthy of MI-6’s fictional Q. One, called DROPOUTJEEP, claimed the ability to compromise an iPhone by altering its built-in software. “The initial release of DROPOUTJEEP will focus on installing the implant via close access methods,” the 2008 document said. “A remote capability will be pursued in a future release.” In other words, before any snooping took place, the NSA first needed to get its hands on your iPhone and replace its software ((It shouldn’t come as a surprise that a device that falls into the hands of an adversary can be compromised in this way. The ability to jailbreak iPhones is as old as the iPhone itself, and once you can modify the firmware, you can make it do pretty much whatever you want.)) .

This extremely important qualification quickly disappeared from subsequent reports. For example, an Associated Press story (which appeared on the Huffington Post under the headline “The NSA Can Use Your iPhone To Spy On You, Expert Says”) said: “One of the slides described how the NSA can plant malicious software onto Apple Inc.’s iPhone, giving American intelligence agents the ability to turn the popular smartphone into a pocket-sized spy.” Forbes.com reported: “The NSA Reportedly Has Total Access to the Apple iPhone.”

Part of the problem is that Jacob Appelbaum, an independent journalist allied with Wikileaks and a co-author of the Spiegel article, went well beyond the cautious printed piece in a speech to the Chaos Computer Club congress in Hamburg, Germany. Unlike more circumspect accounts of NSA disclosures, such as those by Barton Gellman in The Washington Post ((Very interestingly, the Spiegel articles made no mention of Edward Snowden, the source of the recent flood of NSA revelations.)) , Appelbaum was quite willing to speculate far beyond what was supported by his texts. As quoted by the Daily Dot, he said in his CCC speech: “Either [the NSA] have a huge collection of exploits that work against Apple products, meaning they are hoarding information about critical systems that American companies produce, and sabotaging them, or Apple sabotaged it themselves.”

Apple was typically slow to respond to the charges. In a statement released Dec. 31, after the story had been percolating for a couple of days, it said:

Apple has never worked with the NSA to create a backdoor in any of our products, including iPhone. Additionally, we have been unaware of this alleged NSA program targeting our products. We care deeply about our customers’ privacy and security. Our team is continuously working to make our products even more secure, and we make it easy for customers to keep their software up to date with the latest advancements. Whenever we hear about attempts to undermine Apple’s industry-leading security, we thoroughly investigate and take appropriate steps to protect our customers. We will continue to use our resources to stay ahead of malicious hackers and defend our customers from security attacks, regardless of who’s behind them.

I’m not sure how upset we should be about NSA’s Tailored Access Operations, of which DROPOUTJEEP was a part. A lot of this is the stuff of spy movies and is the sort of thing intelligence agencies are expected to do. ((One thing not quite clear from the Spiegel story is whether the NSA was designing the exploits and leaving them to others, such as the FBI, to execute, or whether NSA was running its own “black bag” operations. The latter would be disturbing, as it appears to be outside the NSA’s charter.)) On the whole, I agree with University of Pennsylvania security expert Matt Blaze, who tweeted: “Given a choice, I’d rather force NSA to do expensive TAO stuff to selected targets than let them weaken the infrastructure for all of us.”

But I have no doubts at all about the quality of much of the journalism. The idea that the government can tap into any iPhone anywhere, anytime, makes great clickbait, but sorry reporting. Too many writers, it seems, couldn’t be bothered to track the story back to the original sources or even read the NSA document that many plastered on their sites. There’s no excuse for this.


Most Read Columns of 2013: Learning to Love Chromebooks and Succeeding

I have been a skeptic about Chromebooks since Google announced them. What could you really do on a pseudo-laptop whose only native application was the Chrome browser and which depended on an internet connection for most of its functionality? But I avoided sharing my opinion because I had never used one for more than a few minutes.

Now I have remedied that situation and you can count me as a convert. For the past couple of weeks I have been spending a lot of time with a Chromebook. Not the drool-worthy $1,299 Google Pixel but a humble $250 Acer C710 with an 11.6” non-touch display, 4 GB of RAM, a 1.1 GHz dual-core Intel Celeron, and an almost pathetically old-fashioned rotating hard drive.

A Chromebook is far more restricted than a regular laptop or even a tablet. Without the ability to load standard applications, you must make do with web apps, which are limited both in scope and in functionality. But it is a good 80% or 90% solution, perfectly acceptable for the great bulk of what most people want to do most of the time. The applications and the operating system are both lightweight, so performance feels snappy despite the modest specs.

Most important for those of us who live in a world where we are disconnected at least some of the time, the key Google apps, especially Docs, work offline. A Gmail add-on, officially still in beta testing, lets you read, edit, and reply to email messages offline.[pullquote]The Chromebook is very good at what it does well, and for a large number of people, it would be a more than adequate replacement for a conventional PC.[/pullquote]

I wrote this post mostly on the Chromebook, much of the time offline. The WordPress editor is not offline-friendly, so I composed in Google Docs, then copied and pasted into WordPress. The image was downloaded from the Web, saved as a local copy, and uploaded to WordPress. In terms of the apps I used, the experience was much like working on an iPad (or an Android tablet) except for the convenience, for writing, of working on a laptop form factor.

I used the image I found as-is. Chrome features a very limited built-in picture editor. Anything more sophisticated would have required using one of a number of on-line picture editors, such as Pixlr. Though it requires a live internet connection, it’s fine for occasional use and designed to be familiar to a Photoshop user. (Oddly, Google does not offer a Chrome version of its own Picasa photo tool.)

But I wouldn’t want to use the Chromebook to process a large number of images from my camera. It can’t handle the RAW format I like to use on my DSLR and there is nothing–at least that I know of–like Adobe Lightroom for batch processing of photos. And even with a fast internet connection, moving a large number of multi-megabyte photos to and from web servers would get old quickly.

Similarly, I really wouldn’t want to do much audio or video editing on the Chromebook. I have too much invested in my familiar tools (Apple Final Cut and Adobe Audition) for these complicated chores, and any serious video editing would be tedious on the low-powered C710.

But this is all a little like complaining that a good bicycle isn’t a Lexus. A Chromebook cannot do everything that a Windows PC or a Mac (or even a Linux PC) can do. It can’t even do everything that a tablet can do. For one thing, the selection of games is very limited, though there is, of course, Angry Birds. But it is very good at what it does well, and for a large number of people, it would be a more than adequate replacement for a conventional PC.


Hail and Farewell, All Things Digital

so_long_farewell

Next Tuesday, New Year’s Eve, will mark the end of one of the most important journalistic tech industry web sites, Dow Jones’ All Things Digital. Those of you who have been following this drama in recent months know that its disappearance will only be momentary. Walt Mossberg and Kara Swisher, who have run the site and associated conferences as an independent unit (but with no equity stake), are parting company with News Corp. after failing to reach agreement on a new contract.

The site, with its crack team intact, will re-emerge with a new name and new corporate investors, including NBC Universal, at the start of the new year. But the end of its six-year run is worth a moment of reflection. (Walt and Kara seem to agree; ATD is offering a series of posts (here’s the first) with summaries of and links to some highlights.)

Many of the ATD crew have been friends, colleagues, and competitors, some for more years than any of us want to admit. But when ATD started in 2007, it brought to the web the journalistic standards of The Wall Street Journal with the timeliness, aggressiveness, and attitude that befitted a post-print publication. It quickly became the go-to site for, among many other things, Kara’s exhaustive (and occasionally exhausting) coverage of the decline and, maybe, rebirth of Yahoo, Peter Kafka on media, Ina Fried on mobile, and Arik Hesseldahl on the enterprise.

It will be very interesting to watch what changes and what remains the same in the new ATD. Equally interesting will be what Dow Jones plans to do to fill the giant hole left by the departures. It clearly has major plans, and has been hiring a lot of staff.

(So long and thanks for all the fish. The picture of the dolphins, originally from icanhascheezburger.com, is from the allthingsd.com home page.)


A Tale of Two Ads: “Misunderstood” vs. “Scroogled”

Screen shot from commercial (Apple via YouTube)

If you want to know why Apple keeps winning in consumer markets and Microsoft keeps losing, you can find much of the answer in the ads the two companies use to present themselves to the world. This week, Apple channeled Frank Capra and Vincente Minnelli into an iPhone ad in the form of a perfect 90-second nano-feature film. Microsoft, meanwhile, spends its ad dollars to trash the competition and comes across as combining the worst features of Mr. Potter and the Grinch. I have worked with both companies for many years and can assure you that while they are very different from each other, both are fiercely competitive, touchy, and as huggable as hedgehogs. But there can be a big difference between what you are and the persona you choose to present to the world.

The iPhone ad (left), titled “Misunderstood,” blows away the memory of the rather odd ads Apple has run lately. In it, a sullen boy of 13 or so seems totally absorbed by his iPhone during the family Christmas celebration. But the kid has really been making a video documenting the family that, when shown via Apple TV, reduces his mother and grandmother to tears. Yes, it sounds sappy as can be, but set against a soulful version of “Have Yourself a Merry Little Christmas ((The only real fault I can find in the ad is a terrible jump cut in mid-song. I have been unable to identify the performer, but she’s wonderful.)),” it packs a powerful emotional punch.

Microsoft’s 90-second anti-Chromebook ad (left), part of a recent extended attack on all things Google, is the complete opposite. A young woman walks into a pawn shop hoping to trade her “laptop” for enough money to buy a ticket to Hollywood. The man behind the counter laughs at her and tells her that because it is a Chromebook and not a real laptop, “it’s pretty much a brick.” “See this thingy,” the man says, pointing to the Chrome logo. “That means it’s not a real laptop. It doesn’t have Windows or Office.” After some of Microsoft’s by-now familiar attacks on Google tracking, pawn shop guy says, “I’m not going to buy this one. I don’t want to get Scroogled.”

I’m going to leave aside the ad’s numerous misrepresentations and outright falsehoods (apparently news of standalone Chrome apps has not yet made it to Redmond) and focus on its tone. It is, in a word, nasty. Apple’s ad leaves you with the warm fuzzies; Microsoft’s leaves you wanting a shower. I don’t think it is a coincidence that this bullying tone of advertising and the general attack on Google were born after Microsoft brought Mark Penn aboard as executive vice president for advertising and strategy. Penn, a longtime Democratic operative and a veteran of Hillary Clinton’s 2008 presidential campaign, knows negative advertising inside and out.

There are two things well known about negative political ads. One is that voters absolutely hate them. The other is that they work. But selling a consumer product is very different from selling a candidate. U.S. elections, even primaries by the time they get serious, are zero-sum, binary affairs. If you can convince voters that the other guy is a bum, your guy will benefit. Microsoft’s problem, though, is that consumers don’t seem to want to buy its products. I cannot see how telling them that Chromebooks are bad and Google is evil makes them want to run out and buy Windows 8 or a Surface 2. Considering how thuggish that ad makes Microsoft look, they are probably just as likely to head for the nearest Apple Store. (One very odd criticism of the Chromebook in the Microsoft commercial is that it doesn’t run iTunes.) ((You could argue that the Mac vs. PC ads of a few years ago were Apple’s own foray into negative advertising, but there were two critical differences. One is that the ads were done with a light and humorous touch. The second is that they favorably compared Macs to Windows rather than simply trashing the competition.))

Microsoft desperately needs people to want Microsoft products (other than Xboxes). This is not a problem that marketing can solve–better products have to come first–but ads that drip aggression and hostility are only going to make things worse.

Follow up: Adobe, Apple, and Bad Error Messages

video_play_modules_error

In a Tech.pinions Insiders article yesterday, I wrote that one reason for the success of tablets is that they do not regularly befuddle users with error messages. No sooner had I written it than I got a lesson in the worst of traditional software error messages.

I was planning to use Adobe Premiere Pro to get a couple of short videos I had shot at a Children’s Chorus of Washington concert ready for YouTube posting. When I started Premiere on my iMac, it hung at the splash screen, which indicated it was trying to load a module called ExporterQuickTimeHost.bundle. No error message was generated; it just would not get beyond that point in the startup process.

I turned to Google and found this was a well-known issue affecting a number of Creative Cloud/Creative Suite components that use this module and a companion called ImporterQuickTime.bundle. The problem was that the advice was all over the place; these are official Adobe support forums, but Adobe does not (at least not reliably) provide any support on them. And crowdsourcing is not necessarily a great way to find the solution to subtle problems.

I tried following the suggestions in some of the posts, did a bit of fiddling on my own, and eventually got Premiere Pro past its hang point. Instead, it generated an error message saying it “could not find any capable video play modules. ((Sharp-eyed readers looking at the error message above will notice from the Hangul characters that this is from the Korean version of Premiere Pro. I neglected to capture the error message when it occurred and didn’t want to mess up my system to replicate it once I finally got it fixed. So I scrounged the web for the closest copy I could find.))” Once again, it was off to Google for an answer.

This time I found an official Adobe support page dealing with the problem. Its advice, none of which seemed to have much to do with the problem at hand, was:

  • Make sure the current user account has administrative rights. (User software should never require administrative rights, but the fact is I was running an administrative account.)
  • Update graphics drivers. (Just try to do that on an iMac less than five years old.)
  • Two other suggestions that only applied to Windows systems with switchable graphics.

So no help there. I prowled around the support forums some more and got some suggestions, none of which worked. Finally I decided to uninstall and reinstall Premiere Pro. Fortunately, I knew that Creative Cloud applications, unlike most Mac programs, require that you run an uninstaller rather than just dragging the application to the Trash. When I fired up the reinstalled Premiere Pro, it loaded just fine. ((For the record, I turned to FinalCut Express to edit and title the videos because there was some time pressure to get them up. When they were finished, I wanted to use Adobe Media Encoder to transcode them for YouTube. For unknown reasons, I have both Media Encoder CS6 and Media Encoder CC on my system. The CC version generated the same error as Premiere, but the CS6 version worked fine. Go figure.))

updater-error

As I was starting to work on this article, I saw a tweet from my friend Rob Pegoraro about a problem he was having with the Mac updater for Mavericks. His Mac was telling him it could not update the Mac App Store app because the code was not signed by Apple. Again, I perused the support forums and found that this problem goes back to the release of Mavericks. There were lots of suggestions, many of them contradictory (they centered on it being either a file permissions or a digital certificate problem; I vote for the latter) and none of them authoritative.

There are several problems here. First, the error message, like the Premiere message, does not give even a knowledgeable user a clue about what caused the problem. Second, it uses incomprehensible terminology. I have actually encountered “preflight file” before as a technical term in publishing, but I have no idea if this usage is related. Finally, this problem has been out there for weeks, and Apple owes its customers better than to leave them flopping around in search of an answer.

I’m not a great fan of Microsoft customer service, but at least when a known issue arises with Microsoft software–often even with an interaction between third-party software and Windows–you can generally get help, and often an authoritative answer, by searching the Microsoft Support Knowledge Base. Apple and Adobe both could do a lot better.

Hits, No Errors: The Secret of Mobile Success


fatal-error-no-error

When Jeff Hawkins was designing the original Palm Pilot, he had a simple rule for his team. If a feature was generating error messages, it either got fixed fast or it was removed.  The Palm user experience was designed to be error free. ((Here’s a note Hawkins wrote explaining his design philosophy in response to a 1998 column of mine in BusinessWeek.))

The Palm was designed to do just a few things, but do them very well. Unlike just about every other high tech device of its era, it almost never threw error messages. And nearly 20 years later, I still believe this commitment to user experience helped Palm successfully create the category that eventually became today’s smartphones.

A great deal has been written about the reasons for the consumer success of today’s tablets, particularly the iPad, at the expense of traditional PCs. Of course, there are the obvious factors of their ultra-portability, relatively low price, and the availability of a plethora of clever apps that are either free or very inexpensive. But I think there is another, at least equally important, factor: Tablets, to steal a phrase, just work. They don’t offer a lot of complexity. They don’t scare or confuse users with incomprehensible and vaguely threatening error messages.

Things have gotten better since the days when Mac and Windows users frequently saw messages like these:

errors

But after 30-plus years of dealing with these things, I still see Windows and OS X error messages that I do not understand. An example: when editing a complex document in Word 2011 (Mac), it is not unusual, after a lot of changes have been made, to get a message saying that Word has run out of room to store the document. Since I am working on a system with 12 gigabytes of physical memory and an all but unlimited amount of virtual memory, something other than the stated cause is behind the message. And I’ve learned that the correct response is to close and reopen the document. Despite the message, it always saves correctly. But why should I have to put up with this? And on a tablet, I don’t.

Features, not bugs. The iPad is the leader in tablet simplicity. A number of design decisions, for which Apple has been roundly criticized by those who dislike the locked-down nature of iOS, contribute to the iPad’s error-free nature. There’s no USB port, no installable device drivers, no user-accessible file system, no direct way to print to standard printers, no way to install apps not approved by Apple, very limited communication between apps, which run in a sandbox. This Apple-knows-best approach to software design has eliminated a large number of ways that things can go wrong and cause baffling errors. (In the early days of Mac and Windows, a common source of crashes was one program overwriting another’s memory. Throughout the history of Windows, error analysis has shown that the overwhelming majority of application and system crashes were generated by installable device drivers.)[pullquote]A number of design decisions, for which Apple has been roundly criticized by those who dislike the locked-down nature of iOS, contribute to the iPad’s error-free nature. [/pullquote]

This is not to say that apps and even the OS in tablets never crash. But they do it quietly and gracefully, without generating an error message or requiring any action. When an iOS or Android app crashes, it usually just quietly shuts down and restarts itself, generally without loss of data and without affecting any other running apps. Even a system crash, rare in my experience, causes a reboot in which the tablet mostly or completely restores its pre-crash state.

App updates are another place where tablets shine. Both Android and iOS automatically install app updates in the background. By contrast, as I was working on my Mac today, a window popped up informing me that SkyDrive needed to be updated. I gave permission for the update, which then proceeded to open at least six more windows–I lost count–each of which required some action on my part. If the software needs updating, just go ahead and update it. (The auto-update feature can be disabled on both Android and iOS, but I doubt that many users do.)

Good behavior. I think the geekiest among us underestimate how important this well-mannered behavior is to a lot of users. Folks who write code, especially, are used to complex and hard-to-diagnose errors and consider them part of a day’s work. The Mac system bomb was always a bit of a joke, though on some particularly nasty versions of Mac OS, it was rare to get through a day without seeing it at least once. But I remember people who became genuinely upset after getting that Windows “illegal operation” message, believing they had done something seriously wrong. Microsoft made matters worse by sometimes including a “Continue” button in the dialog box that invariably did nothing when clicked.

People like tablets because they don’t behave this way. They just do what you want them to do. And system requirements, UI limitations, and the prevailing ethos of app design all cause developers to write apps that do only one thing and, more often than not, do it well.

Windows struggles. This may be one reason why Windows tablets struggle so badly. Metro-style apps, for the most part, behave like tablet apps should. But there is still Windows underlying the whole thing, and the ability to run any (Windows 8) or a few (Windows RT) legacy Windows apps makes the tablets prone to all the ills that Windows is heir to.

I sometimes chafe at the restrictions imposed on the iPad (and to a lesser extent, Android tablets) compared to a traditional PC. I would love for it to be easier to print, easier to share files among apps, easier to load content. But when I think about what I have gained by giving a few things up, I realize it is a trade I would make again in a second. I love the power that a traditional PC gives me when I need it, but I value the simplicity a tablet offers when I don’t.

The Freemium Model May Be Going Away

sugarsync

SugarSync, one of the pioneers of freemium cloud storage, announced today it was ending its free service. From now on, the minimum account will be 60 gigabytes of storage for $7.49 a month or $75 a year. SugarSync had offered a permanently free 5 GB account.

“There are many companies in this space that are giving away free storage, however, most of these companies will not be viable,” SugarSync CEO Mike Grossman said in a statement. “We are already in a solid financial position and this shift will further strengthen our business. Also, this change will allow us to better serve loyal customers and expand our service offerings.”

SugarSync will continue to offer a 90-day free trial of a 5 GB account or a 60 GB plan free for 30 days.

Unless free accounts generate a high conversion rate to the paid service, free just isn’t a very good business model for businesses not supported by ads. Storage has gotten cheap, but it is not free, and the bandwidth required to move data in and out of storage is even more expensive. Other freemium services, such as Dropbox, which offers a 2 GB free account, are likely feeling similar pressures. (Free services are more likely to persist where they are part of larger offerings with broader monetization goals, such as Google Drive and Microsoft SkyDrive.)
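To see why the arithmetic is unforgiving, consider a toy break-even calculation. Every figure below is an assumption I invented for illustration (neither SugarSync nor Dropbox publishes its costs), except for the $7.49 price of SugarSync’s new minimum plan:

```python
# A hedged, back-of-envelope sketch of freemium economics. All figures
# are assumptions except the $7.49/month price of the new minimum plan.
free_users = 100_000
cost_per_free_user = 0.25   # assumed $/month for storage plus bandwidth
paid_price = 7.49           # $/month, SugarSync's new entry-level plan

monthly_cost_of_free_tier = free_users * cost_per_free_user
# Share of free users who must convert just to cover the free tier:
breakeven_conversion = monthly_cost_of_free_tier / (free_users * paid_price)
print(f"break-even conversion rate: {breakeven_conversion:.1%}")  # ~3.3%
```

Freemium conversion rates are commonly reported in the low single digits, so even under these friendly made-up assumptions, the free tier barely pays for itself.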

If you use more than one computer with any regularity, SugarSync, which provides many-to-many sync, not just cloud storage, is a terrific service well worth the cost of a paid account. I use it as a complement to Dropbox (and occasionally GoogleDrive and SkyDrive.) I use SugarSync to keep specified directories synced between different systems. I use Dropbox for ad hoc sharing of files among my own systems, and for selective sharing with others, especially for files too big to move by email.


The Pitfalls of Techno-optimism (and the Ambition of Amazon)

Photo of Amazon drone (Amazon.com)

Jeff Bezos’ interview with Charlie Rose on 60 Minutes accomplished three things. It told the world that Amazon is a true technology company, not just a giant retailer.  It took attention away from unpleasant subjects, such as working conditions in Amazon’s fulfillment centers or the company’s chronic lack of profits. And it established beyond a doubt that Bezos is the true successor to Steve Jobs as the tech world’s premier visionary and magician.

The interview showed Bezos to be even better than Jobs in one way: Steve could only create his reality distortion field in person. Judging by the sometimes rapturous reception given to Bezos’ promise of drone-driven Amazon Prime Air, Bezos can do it just fine over the airwaves. Although there were some misgivings in the cold light of Monday, most initial responses sounded as though Bezos had made a real announcement of a real product. “Amazon Chief Reveals Drone Delivery System: Unmanned delivery aircraft could be ready within five years,” reported the normally sober Time Tech. “Amazon’s Drones for Delivery,” read the headline in an unquestioning Wall Street Journal report. And Bezos biographer Brad Stone, while expressing at least a dose of skepticism, wrote for Bloomberg Businessweek:

The aerial drone is actually the perfect vehicle—not for delivering packages, but for evoking Amazon’s indomitable spirit of innovation. Many customers this holiday season are considering the character of the companies where they spend their hard-earned dollars. Amazon would rather customers consider the new products and inventions coming down the pipeline and not the ramifications of its ever-accelerating, increasingly disruptive business model.

In fact, the Bezos announcement belongs to the same absurd-but-taken-seriously genre as Udacity founder Sebastian Thrun’s proclamation that the success of massive open online courses would eliminate the need for all but 10 universities in the world, and the reporting of it mostly without a bit of critical analysis reveals a major failing of tech journalism.

Economic sense. For example, just about no one who wrote about the Amazon idea bothered to consider the economics of drone delivery. Until we have fully autonomous drones, and that is a lot further off than Amazon’s five-year horizon, each of those cute octocopters is going to need a remote-control pilot. Piloting a drone to make a single delivery to a customer’s front porch (and we have no idea how Amazon plans to make deliveries to multifamily residences; maybe the drones will be able to open the apartment building door and fly straight to your doorstep) is vastly less efficient, and thus far more expensive, than a traditional truck route. The number of purchases for which drone delivery could make sense will never be more than minuscule.
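A quick back-of-envelope comparison makes the point. Every number below is my own assumption, made up purely for illustration; Amazon has published nothing of the kind:

```python
# Toy labor-cost comparison: one remote pilot flying one package per trip
# versus a driver on a conventional delivery route. All numbers assumed.
pilot_cost_per_hour = 25.0    # assumed fully loaded cost of a drone pilot
minutes_per_drone_trip = 30   # assumed round trip for a single package

driver_cost_per_hour = 25.0   # assumed cost of a truck driver
stops_per_hour = 15           # assumed deliveries on a dense truck route

drone_labor = pilot_cost_per_hour * minutes_per_drone_trip / 60
truck_labor = driver_cost_per_hour / stops_per_hour
print(f"labor per drone delivery: ${drone_labor:.2f}")  # $12.50
print(f"labor per truck delivery: ${truck_labor:.2f}")  # $1.67
```

You can quarrel with any of those numbers, but as long as the arrangement is one pilot per package, the gap is hard to close.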

Small, cheap drones are a fascinating technology with huge potential. But their most likely use seems to be in a large variety of remote-sensing roles (which themselves could be good or evil), not delivering packages.

The techno-enthusiasm that greeted the Bezos interview is hardly unique. We have seen the same sort of reaction to 3-D printing, which at least has the advantage of being real and available today. 3-D printing is a very exciting new technology that enables many things once thought impossible. But it has also inspired a vast quantity of tech journalistic nonsense: 3-D printing will replace conventional manufacturing; families will meet all their needs for manufactured objects with home printers; or, my favorite, we will solve the problem of hunger by printing food. These breathless predictions uniformly ignore the limitations of both technology and economics, not to mention the fact that after 40 years, old-fashioned 2-D printers remain the most unreliable pieces of tech equipment that most of us own.

Interesting experiments. Similarly with the crypto-currency Bitcoin. It’s a fascinating experiment in a privately, indeed collectively, issued fiat currency with no government or central bank to stand behind it. But despite a lot of techno-enthusiasm, the chances that Bitcoin will replace the dollar or the Euro, or even become an important medium of exchange, are nil.

I give Jeff Bezos all the credit in the world for the PR coup of 60 Minutes. He launched the holiday shopping season by getting everyone talking about Amazon, in a mostly good way, at the cost of producing a clever video. But it’s time for the tech commentariat to show the ability to do more than parrot outlandish claims.


Wheeler’s FCC: Ring Out the Old Fights, Bring in the New

The list of recent accomplishments at the Federal Communications Commission is pretty short. Although President Obama and his FCC chief, Julius Genachowski, took office with a lot of bold talk, very little has happened in the past five years. One reason is that the FCC has gotten mired in the same partisanship that has crippled policymaking in general.

But the FCC has wasted a tremendous amount of time and energy fighting old battles that no one was willing to let go of. Should anyone really care about rules governing the concentration of broadcast ownership at a time when online media are exploding and broadcasters are losing their relevancy? And the FCC’s network neutrality rules, which are likely to be struck down by the D.C. Circuit Court of Appeals, deal with a threat that, after 20 years of the commercial internet, remains much debated but almost entirely theoretical.

A savvy insider. Things may be about to change. After his confirmation was held up for months for no particular reason, Tom Wheeler has taken over as FCC chair. A savvy Washington insider who has won the respect of most of the interest groups that shape communications policy, Wheeler is off to a fast start. And he is setting an agenda of issues that matter for the future.

The two items at the top of Wheeler’s list both deal with the way internet and wireless technologies have revolutionized communications and left policy gasping to catch up. The one that certainly has the tech industry’s attention is the need for more spectrum to support the burgeoning use of wireless data. The other, which has been largely off the public radar but which is vitally important to the future of communications, is what to do about the expensive and increasingly obsolete public switched telephone network. Wheeler declared his interest in taking on the telephone issue in a blog post in which he announced it is time for what he calls a “Fourth Network Revolution.”

AT&T Gets the Ball Rolling. The process was actually set in motion last year when AT&T formally petitioned the FCC for permission to abandon its traditional wireline phone network. Wheeler plans to begin formal consideration of this in January. There is little doubt that the public switched telephone network (PSTN) and much of the regulatory regime that has grown up around it over more than a century are relics of an earlier technology.

There’s a widely held misconception that the phone system is analog. In fact, AT&T (the old AT&T, not the company that now uses the name) began experimenting with digitizing trunk calls in the 1930s. Today, only the “local loop,” the bit of the system that connects “plain old telephone service” ((This is actually a standard industry term describing traditional residential service.)) subscribers, is analog. All the links between switches that form the backbone of the system, as well as most business connections, are digital. The question is, what kind of digital?

Packets or Circuits? The internet runs on a technology called packet switching. Messages, including digital voice services such as Skype or Vonage, are broken up into short packets. Each packet finds its way to its destination independently; the packets need not follow the same path, nor need they arrive in order. A TCP/IP network is based on what is known technically as “best effort delivery”: the network will do its best to deliver each packet in a timely fashion, but no promises.
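For the curious, here is a minimal sketch of what “best effort” means in practice, using Python’s standard UDP sockets on the local machine. The sequence numbers and the one-second timeout are my own additions for illustration; real voice protocols handle reordering and loss in much the same spirit:

```python
# A minimal sketch of "best effort delivery": the sender numbers each
# packet so the receiver can reassemble the message even if packets
# arrive out of order, and can tell which ones never arrived at all.
import socket

MESSAGE = "the quick brown fox jumps over the lazy dog".split()
ADDR = ("127.0.0.1", 9999)

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(ADDR)
receiver.settimeout(1.0)  # give up on missing packets; no retransmission

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq, word in enumerate(MESSAGE):
    # Each datagram carries its own sequence number; UDP itself makes
    # no promise that it arrives, or arrives in order.
    sender.sendto(f"{seq}:{word}".encode(), ADDR)

received = {}
try:
    while len(received) < len(MESSAGE):
        data, _ = receiver.recvfrom(1024)
        seq, word = data.decode().split(":", 1)
        received[int(seq)] = word
except socket.timeout:
    pass  # best effort: anything still missing is simply lost

print(" ".join(received.get(i, "<lost>") for i in range(len(MESSAGE))))
```

On a loopback connection every packet will almost certainly arrive; across a real network, the receiver simply lives with whatever shows up, which is why internet voice calls degrade gracefully rather than failing outright.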

The PSTN works very differently. When you make a traditional call (or when your wireless call is connected to the PSTN), an SS7 switch at your central office contacts a chain of SS7 switches at other central offices along the way to create a dedicated circuit for the call. This circuit, which has a bandwidth of 64 kilobits per second, is devoted exclusively to your call until you hang up. ((The circuit is not actually a physical wire. AT&T developed a technology called time division multiplexing decades ago to allow multiple calls to share a wire. And these days, the “wires” are almost all optical fibers that can handle thousands of calls.)) The legacy landline telephone system delivers very high reliability and good voice quality (though the sound fidelity is artificially degraded by frequency limits that go back decades), but makes very inefficient use of the network.
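Incidentally, that 64-kilobit figure is not arbitrary; it falls out of how the voice band is digitized:

```python
# Where the 64 kbps voice circuit comes from: the telephone voice band is
# capped at roughly 4 kHz, so it is sampled at twice that rate (the Nyquist
# criterion) with 8 bits per sample, the classic G.711 PCM encoding.
voice_band_hz = 4_000
samples_per_second = 2 * voice_band_hz       # 8,000 samples per second
bits_per_sample = 8
print(samples_per_second * bits_per_sample)  # 64000 bits/s: one DS0 channel
```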

us_phone_lines
Data: ITU

Although the number of land lines in the U.S. has dropped considerably from its peak, the decline has flattened out in recent years and there are still about 150 million lines in use (see chart). A fair number of these lines are already IP-based. If you get a land line from a cable company, AT&T Uverse, or Verizon FiOS, you already have IP phone service.

What Sort of Regulation? The biggest question facing the FCC as it considers the IP transition is what sort of regulatory regime should apply to the new system. Even most libertarians will agree that some things will still need to be regulated, such as 9-1-1 emergency services. Universal service, the idea that telephone service should be available to every American and, if necessary, at a subsidized price that even the poorest can afford, is a political reality that will not go away. There will be arguments, however, about just what sort of service is required. There have already been disputes about Verizon’s efforts to substitute wireless service for landlines in some isolated areas in the wake of Hurricane Sandy.

What should the rules be for interconnections between phone systems? Should they be like the unregulated market for internet peering, or like the heavily regulated system of traditional phone interconnects? How reliable does the system have to be? The legendary “five nines” of the Bell System, 99.999% reliability, allowed for an average of only five minutes of downtime per year. No internet provider offers a service level anything like that because the cost of doing so is so high. How much reliability is enough in an era when nearly everyone can pick up a mobile phone when the landline is down?
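About those five minutes: the figure falls directly out of the arithmetic of the fifth nine, as a quick sanity check shows:

```python
# 99.999% uptime leaves 0.001% of the year for outages.
minutes_per_year = 365 * 24 * 60          # 525,600 minutes
allowed_downtime = minutes_per_year * (1 - 0.99999)
print(f"{allowed_downtime:.1f} minutes per year")  # about 5.3 minutes
```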

Questions like these, and many, many others, will dominate the debate over the IP transition, and the answers will shape U.S. telecommunications policy for decades to come. It is probably the most important question the FCC will face for a long time, and it’s a good thing that Wheeler is giving it a very high priority.


X-ray Glasses Are for Real

Evena Eyes-On Glasses (Evena Medical)

Readers of a certain age remember comic book ads for mail-order glasses that would let you see through people’s (read: girls’) clothing. They were, of course, a pure rip-off, but their real world counterpart is about to hit the market.

Evena Medical and Epson have developed Eyes-On glasses that can see through your skin to locate blood vessels. The glasses actually illuminate tissue with infrared light. The light penetrates the skin to a depth of up to 10 mm and is selectively reflected back by deoxygenated hemoglobin, the sort that is in the red blood cells flowing through your veins. The goal is to make it much easier for doctors or nurses to find veins to draw blood or insert an intravenous line.

The subcutaneous veins are imaged–the imaging system is Epson’s contribution–in the Eyes-On head-up display, allowing precise location of a vein without the usual (and for me, often unsuccessful) effort to get it to “pop.” And because it is actually venous blood that is being imaged, Eyes-On will also spot any leakage at the injection site, allowing for quick corrective action.

The Evena Eyes-On glasses are a bit more expensive than the comic book specs that went for a buck back in the 1950s. Around $10,000, in fact. But they work, and are actually something of a bargain since they will replace a much less mobile and more inconvenient cart-mounted version that cost twice as much.

The glasses are scheduled to go into full production in April, with general availability later in the spring.

Education and the Future of MOOCs

Photo of empty lecture hall (© SeanPavonePhoto - Fotolia.com)

Maybe online courses aren’t going to remake the face of higher education after all.

After a fast start, reality seems to be closing in on the world of the massive, open, online courses that were supposed to replace traditional lectures and recitations and make free, or at least very cheap, higher education available to everyone. San Jose State University has slowed down a move to deliver introductory undergraduate courses through MOOC provider Udacity.

Udacity itself, one of several MOOC providers that have sprung up in the last couple of years, is refocusing its activities on corporate training. Sebastian Thrun, Stanford computer scientist and founder of Udacity, told Fast Company, “We were on the front pages of newspapers and magazines, and at the same time, I was realizing, we don’t educate people as others wished, or as I wished. We have a lousy product. It was a painful moment.”

No surprise. I’m not surprised. I’ve been a skeptical enthusiast for online education since MIT started its OpenCourseWare initiative a few years back, and over the last year or so, I have enrolled in several offerings, mostly from Coursera (like Udacity, a for-profit provider of open courses). My experience has been a very mixed bag, but one that has taught me a lot about where the approach does and doesn’t work.

A couple of general observations. First, the technology has a long way to go, and no one seems to have figured out a completely effective way to deliver lectures on video. I’ve seen a number of approaches, from video recordings of regular blackboard lectures, to slide-based presentations in which the instructor only occasionally appears, to a course that used cartoony “virtual students” to ask questions in computer-synthesized voices. None worked completely, though the last was the most annoying. Online lectures today remind me of the earliest days of television, when shows were “radio with pictures.” No one has quite cracked the medium yet.

Second, and more important, MOOCs seem to work best for those who need them least. Surveys show that those who enroll in MOOCs, and especially those who complete them–typically no more than 5% to 10% of the original enrollees–tend to be people who already have undergraduate degrees and often more. “If you’re looking to really move the needle on fundamental educational problems, inside and outside the United States, you’re going to need to help people reach the first milestone, which is getting their degrees to begin with,” Daphne Koller, a founder of Coursera, told The Chronicle of Higher Education. My experience is that MOOCs require very highly motivated students. It can be very easy to slough off, fall behind, and drop out. You have paid little or nothing, so there’s little at stake.[pullquote]If you’re looking to really move the needle on fundamental educational problems … you’re going to need to help people reach the first milestone, which is getting their degrees to begin with.–Daphne Koller, Coursera[/pullquote]

Where MOOCs work. MOOCs work best for the sort of course that mainly consists of transferring information. I am finishing up a Coursera course in basic financial accounting, taught by Professor Brian J. Bushee of the University of Pennsylvania’s Wharton School. I signed up because I had just become treasurer of a non-profit and had embarked on a project of getting improved financial reporting for better decision support. Accounting isn’t the world’s most interesting subject and this wasn’t a terribly interesting class. But it gave me the knowledge I was looking for. Introductory accounting is a very good subject for online training: It consists mostly of a lot of sometimes arbitrary rules and how to apply them. And it lends itself well to effective multiple-choice evaluations. (This, however, was the course with the annoying virtual students.)

In a similar vein, I took an introduction to cryptography course from Stanford Professor Dan Boneh. Again, I thought the course succeeded because it focused on straightforward information, not deep ideas. It was mostly aimed at teaching programmers what to do, and probably more important, what not to do, when implementing encryption. Some of the concepts were quite difficult, and I thought that multiple-choice evaluation was not terribly satisfactory; questions with longer free-response answers would have been much better. But the need to keep costs down drives machine-scorable evaluations.

The big surprise was another Penn-Coursera course, Calculus: Single Variable, taught by Professor Robert Ghrist. This was not a standard freshman calculus course; the course description suggested prior familiarity with introductory calculus. Ghrist took a fresh, almost idiosyncratic approach to a familiar subject (he focused heavily on Taylor series and included a week on discrete calculus, a subject rarely touched in an introductory course). Above all, he showed that multiple-choice or numerical-answer questions could be both interesting and challenging. This is the course to watch for anyone who wants to see how to do it.

A disappointment. My biggest disappointment was a course called Introduction to Mathematical Thinking taught by Professor Keith Devlin of Stanford. Devlin is a distinguished mathematician and an excellent lecturer, but this Coursera project was tedious. The lectures consisted mainly of him talking while writing, a sort of online version of the blackboard talk, but it did not work well for me. My biggest problem was that the problem sets, which I expected to deal with ideas, consisted mainly of slogging my way through endless truth tables. And Devlin’s attempt at supplementing machine scoring with peer evaluations just did not work. (In fairness, I took the course the first time it was offered; Devlin has taught it a couple of times since and it may well have improved.)

There’s no question that the rising cost of higher education is a big challenge to U.S. society and that the inefficiency of the current system is a big contributor. But MOOCs as they exist today do not seem to be the answer, or at least not more than a small part of the answer.


UnderArmour Deal Shows Fitness Tech Going Mainstream

mapmywalk

If there was any doubt that fitness measurement, whether using wearable devices or sensor-equipped smartphones, is going mainstream, it should have been settled Nov. 14 by UnderArmour’s purchase of startup Map My Fitness for $150 million.

Of course, devices such as the Fitbit or Nike FuelBand and associated fitness apps for iPhone and Android have been around for a while. But each of these existed in its own cozy ecosystem, with the devices mostly talking to dedicated apps. Device-agnostic approaches such as Map My Fitness, whose combination of fitness tracking and social networking communities includes Map My Run, Map My Ride, and Map My Walk, are integrating the sensor of your choice into a broader system of fitness and lifestyle tools. They are your personal Internet of Things.

We are just at the beginning of this trend, which is being driven both by culture and technology. Some key technology developments are making fitness sensors cheaper, more ubiquitous, and more versatile. The M7 chip in the iPhone 5s (also in the new iPads, though their larger size makes them less practical as fitness devices) can continually record motion data from the accelerometer, gyroscope, and compass while consuming only a minuscule amount of power. The Motorola Moto X has similar sensor-based capabilities. Dedicated fitness sensor devices are also getting better, with the $150 Jawbone UP24 the latest to hit the market.

Even relatively diminutive smartphones are too big for a lot of fitness uses. Do you really want a phone in your pocket while you work out on an elliptical trainer? But the tiny size and low power consumption of these new sensors enables the design of very small wearable devices capable of continuous monitoring and logging. Another technology, Bluetooth LE (for low energy) lets wearables upload data to smartphones, tablets, or PCs as soon as they come into range.

Built-in GPS is still impractical for small wearables because of the size of radios and antennas, the need for a clear view of the sky, and, most important, the significant power demand. But a wearable device could download an initial position from a phone’s GPS, then use inertial navigation information from the gyroscope, accelerometer, and compass to track its course.
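In rough outline, the dead-reckoning arithmetic involved is simple. Here is a minimal sketch; the starting fix, step lengths, and headings are all hypothetical values I made up, and a real device would have to contend with sensor noise:

```python
# A minimal dead-reckoning sketch, assuming a fix handed over from the
# phone's GPS and a stream of (heading, distance) step estimates derived
# from the wearable's compass and accelerometer. All values hypothetical.
import math

lat, lon = 38.8895, -77.0353          # initial fix from the phone
METERS_PER_DEG_LAT = 111_320.0        # rough conversion near the surface

steps = [(90.0, 0.8), (90.0, 0.8), (45.0, 0.75)]  # (heading deg, meters)

for heading, dist in steps:
    rad = math.radians(heading)
    north = dist * math.cos(rad)      # component of the step moving north
    east = dist * math.sin(rad)       # component of the step moving east
    lat += north / METERS_PER_DEG_LAT
    lon += east / (METERS_PER_DEG_LAT * math.cos(math.radians(lat)))

print(f"estimated position: {lat:.6f}, {lon:.6f}")
```

The catch, and the reason this is harder than it looks, is that errors from cheap accelerometers and gyroscopes accumulate quickly without periodic GPS corrections.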

App developers have only begun to scratch the surface of the opportunities provided by a new generation of sensors. Apple has enhanced the iOS programming interfaces to let developers take advantage of M7 data, but relatively few apps use it just yet. Developing more advanced software will be the key to making sensor-based technology a part of everyday life. “There’s lots of sensor data,” said Kevin Callahan, vice president for innovation strategies and co-founder of Map My Fitness, in an interview before the UnderArmour acquisition was announced. “The question is how to give it back to the user in a compelling way.”

With the increasing miniaturization of sensors and radios, the dedicated device may eventually disappear altogether as inexpensive sensors are built into our bicycles, running shoes, and jackets. Says Callahan, “Two years from now I expect everything to be embedded.”


Apple Support: The Good, the Bad, and the Ugly

movie poster (IMDB)

See update below.

Apple is a company that inspires both delight and dismay in its customers, sometimes both in the same person and on the same day.

First the good news. My 27″ iMac had been acting weird since I installed Mavericks. It would occasionally lock up for no reason and then take forever to reboot. Then it started loading an obscure process at bootup (something to do with mounting an audio CD) and I could watch in the Activity Monitor as it swallowed up all my physical memory. I decided to try reinstalling Mavericks, but it would repeatedly hang during the installation. I couldn’t go forward and I couldn’t go back.

I made an appointment at the local Apple Store, where the tech at the Genius Bar said that if I had a current backup, the simplest solution would be to reformat the hard drive, reload the OS from the store network, then restore from backup. It took less than 30 minutes to complete a clean install, and the service was free.

Try doing that with any Windows machine. Apple Store service is, without doubt, one of the best reasons for buying an Apple product. If Apple charges a premium for Macs–and that’s a dubious contention on a feature-for-feature basis–the Genius Bar alone is worth it. I’m pretty good at fixing Macs, but it has saved me several times.

Then there’s the bad and ugly: Apple’s total lack of transparency or honesty regarding problems with its software. Mavericks users have reported a range of issues, not terribly surprising for a new OS release, and by far the worst of them seem to involve the Mail application. Gmail users reported that the new application was incapable of handling Gmail folders properly. Whether this was a bug or a feature is not entirely clear–Mac Mail has always had a tenuous relationship with Google’s idiosyncratic approach to the IMAP protocol. But it obviously left a large number of Mac users upset. The official response from Apple: silence.

snell_tweet

There were reports of many other issues. Jason Snell, the editorial director of Macworld, was horrified to discover that the entire contents of his Exchange mailbox had simply vanished. I found that Mail was no longer appending a signature to my outgoing messages on one of the two Macs I have updated. When I tried to fix the problem, the program would not let me choose any of the signatures it knew existed.

A search of Apple support forums showed I was far from alone. But if Apple monitors its own forums, it doesn’t bother to respond. I tried a couple of the workarounds other users suggested, but so far no happiness.

Microsoft may not be willing or able to help you with a malfunctioning PC, but it is a lot more forthright about bugs. Serious issues get acknowledged in the Microsoft Knowledge Base, along with word on fixes and, if necessary, workarounds. In particular, Microsoft is far more forthcoming about security issues. (Apple typically issues security patches once a month without detailing what has been fixed; Microsoft issues patches on a similar schedule, but publishes a detailed list of what issues are addressed.)

Apple Insider reports that Apple has begun letting developers test a new version of Mail.app that addresses problems whose existence it has not yet acknowledged. Hopefully, it will show up someday soon as OS X 10.9.1 with little or no explanation. And maybe it will fix the Gmail problem and maybe it won’t.

It’s amazing that a company that is so good at delighting customers at the Genius Bar can be so pigheaded about helping users with the sorts of software problems that plague every major new release. A simple acknowledgement of “we know about it and we are working on it” would go a long way toward assuaging frustrated users. But that’s not the Apple way.

UPDATE

About the time I was posting this, Apple began pushing out an update to Mavericks Mail. As usual, Apple did it without announcement (other than this terse bulletin posted at Apple support) and this update notification (if you are subscribed to auto updates):

mavericks-mail-update

Preliminary reports suggest it addresses most Gmail issues. It does not fix the signature problem I encountered.

While I salute Apple for addressing the Gmail problem promptly, I continue to be puzzled by the company’s insistence on being so damn mysterious about such things.


Tablets, Desktops, Laptops: How the Tools Fit My Life

© Sergey Nivens - Fotolia.com

With the endless arguments about tablets’ productivity or lack thereof, I decided to take a close look at the computing tools in my life. The result is a seemingly contradictory conclusion: We truly live in a post-PC era in which the traditional PC remains a vital player.

I think my habits are fairly typical of a knowledge worker in 2013. The main differentiators are probably that I am older than average and am self-employed, working from home. I spend pretty much all of my waking hours with some sort of connected device readily at hand. My primary tools are an oldish 27″ iMac, a 13″ MacBook Air, and an iPad (as of last Friday, an Air; before that, an iPad 3). I use an aged Windows 7 desktop less frequently and a Windows 8 ThinkPad less still. I use a Samsung Note 10.1 tablet only when I want to check something on Android. At any given time, I have assorted other equipment in for evaluation. And a Kindle Fire, which I use exclusively as an ebook reader.

A desktop for the desk. Most of each working day when I am in town is spent at my desk, and that means in front of my iMac, equipped with an aged USB keyboard that I think is left over from a Macintosh G4. For many things, it is my tool of choice. I do technical writing that requires having lots of windows open at once and the use of Word, Excel, PowerPoint–not those functions but the actual Microsoft Office programs–and SharePoint. I make a lot of use of Adobe’s Creative Cloud–Photoshop for pictures, Premiere Pro for video, Audition for audio. All of that is work for a legacy PC, in my case mostly a Mac.

But there are many things for which I much prefer the iPad, even if the desktop Mac is available. I have the Tweetbot Twitter client on my Mac, iPad, and iPhone, but the iPad version is by far my favorite. On the Mac, when I click on a web link in a tweet, it opens the page in a tab that appears on the far right of my tab list. When I’m done and close the tab, I’m left in the browser in what had been my rightmost tab. The iPad version, by contrast, makes great use of the single-window, one-app-at-a-time interface. When I click a link, the page fills the screen. When I’m done, I click the Close button and am back in Tweetbot, exactly where I left off. (The iPhone version works the same way, although web pages, of course, are harder to read on the small screen.) The Mac and iPad versions of the Feedly RSS reader work more similarly, but the iPad version is still slicker at opening web pages.

An iPad away. When I’m away from my desk, the iPad is generally with me. Mostly, I use it to keep up with incoming mail, my tweet stream, the odd game, and whatever else needs doing. For more serious work, I have a Zagg Flex keyboard. The iPad, over time, has largely replaced the MacBook, with Dropbox, Google Drive, and SugarSync giving me access to key documents. But it isn’t quite a laptop replacement.

I have done many Tech.pinions posts on the iPad, but it has its deficiencies. I usually write in the Byword markdown editor and then transfer the contents to WordPress, because the browser-based WordPress editor is not very well suited to touchscreen use. Handling artwork remains a lot clumsier than it ought to be, but I can do it in a pinch.

My biggest frustration is trying to moderate Tech.pinions comments on the iPad. The Disqus moderation page really, really doesn’t like mobile Safari, and handling comments is painful. It’s weird, but the need to moderate comments can be the one thing that causes me to take a laptop on a trip where I otherwise might leave it at home. There’s an interesting distinction here. Some tasks, such as spreadsheets or video production, are inherently unsuited to the tablet. But many, such as Disqus moderation, are being held back simply because no one has optimized the software yet. In time, more and more of these chores will become accessible.

I find there are plenty of tools for writing on the iPad. Pages works fine for the sort of simple document you might want to create on a tablet, and both Byword and Editorial are great for straight text or HTML. I don’t do slide presentations much, but Keynote is fine.

The pain of Numbers. I haven’t used Numbers much, but I tried last weekend to use it to create a not-too-complicated budget document on the iPad. It quickly sent me scurrying back to my Mac and Excel. Trying to enter spreadsheet data from the on-screen keyboard was horrible. I found Apple’s system of modal keyboards–one for pure numerical entry, one for text, one for functions–slowed me down insanely. I understand why Apple does it–using the full regular iPad keyboard covers too much of the screen–but I just couldn’t get used to it. Using an external keyboard helped some, but Numbers just is not a very good program; it’s a case where simplicity actually gets in the way and the minimalist user interface makes things harder. But, in general, spreadsheets, unless they are very small and simple, are one of those things that really belong on a traditional PC; the bigger the display, the better.

Would Microsoft Office on the iPad make it even more useful? I can see some edge cases where it would be nice to have, but only if Microsoft could produce apps that really fit the device. Its inability so far to do this for Windows tablets is not encouraging. I agree with my colleague Tim Bajarin (Tech.pinions Insiders only) that this ship has sailed.

You’ve probably noticed that the device that gets lost in this workflow is my MacBook. Most days it just sits on the desk or in a bag, closed and forlorn. It gets used on longer trips when I know I am going to need the power of traditional PC applications, or when I have to work on something that must be done in Word because of the need to handle long, highly formatted documents or a requirement for Word-compatible change tracking. But most of the time, the desktop and the iPad handle my workflow (with the iPhone filling in), and the laptop has become the tweener that gets left out.