Why Regulators Should Approve the T-Mobile/Sprint Deal

on May 3, 2018
Reading Time: 3 minutes

On the heels of the T-Mobile/Sprint merger announcement (Round 3), the market has been pessimistic, with consensus on the Street that the chances of approval stand at less than 50%. I disagree. If T-Mobile and Sprint play their cards right, the chances of getting the deal through this time ‘round are much better. Here are some of the main points I believe regulators should consider.

  1. The Market Has Changed. These two companies first broached a deal nearly 5 years ago, not long after SoftBank had acquired Sprint. Much has changed since then. There have been some serious vertical market integrations involving other operators (Comcast-Universal, Verizon-AOL-Yahoo, and pending AT&T-Time Warner). Cable now seems serious about being in mobile, and DISH sits on a treasure trove of spectrum. So even though the number of national facilities-based wireless providers will go from four to three, we’re likely to have 1-3 additional major MVNO/resale players in the future: cable, some incarnation of DISH, and possibly some Internet giant (Google, Facebook, Apple, Amazon…take your pick).
  2. Why the Focus on Wireless When It’s Broadband That Needs More Competition? I’ve always been curious about the FCC’s maniacal focus on the level of competition in wireless, even though the U.S. has less broadband competition than nearly any other OECD country. Only 50% of U.S. households have a choice of more than one broadband provider. The new T-Mobile, with far greater spectrum breadth and capacity, would be in a much better position to offer a competitive broadband service via wireless, in some contexts.
  3. What Would Have Become of Sprint? I’m surprised there hasn’t been more focus on this. Sprint has been losing share, has huge debt, and still lags on network coverage and quality. Even the wunderkind Masayoshi Son has not been able to turn the company around. Without a merger, what are Sprint’s real prospects? Wouldn’t it be better for network investment, the market, and consumers to have three strong competitors, rather than two giants, a sort of strong #3, and a weak #4? Can regulators point to any other countries where there are four healthy and profitable national wireless carriers?
  4. This is good for 5G. The challenge of having four strong national competitors is even greater when one considers 5G, given the level of investment that will be required. There’s a very real risk that T-Mobile, and especially Sprint, would fall further behind as 5G gets built. And it’s not only about having a war chest of dollars and spectrum. With the number of small cells that will be required for 5G, especially in urban areas, the zoning/siting/permissioning process alone, spread across four operators, would be a nightmare.
  5. This Is Partially The Fault Of Our Existing Spectrum Policy. It’s ironic to me that the feds might try to block this deal, while at the same time, it’s been their objective for the past 20 years to maximize the $$ intake from spectrum auctions. T-Mobile, Sprint, and other potential upstarts/new competitors would have a far better chance of building a good network and successfully competing in wireless if they didn’t have to spend tens of billions of dollars to merely acquire the spectrum. In several of the more recent spectrum auctions, well-heeled folks from Comcast to Google have bowed out because, well, it got too expensive for them. Tom Wheeler, former FCC Chairman, now at the Brookings Institution, has been pushing the idea of spectrum sharing for a number of years. He wrote this week that perhaps the best way to compete in 5G would be for the carriers to build a shared 5G network. This is the sort of creativity we need, rather than government’s current approach of talking out of both sides of its mouth.
  6. Perhaps Some Creative Concessions Would Be In Order. In the wake of closing arguments at the AT&T-Time Warner trial, it’s been suggested that one possible outcome might be that AT&T wins approval to move forward, but must make some concessions, possibly divesting the Turner assets or providing some assurance against discriminatory pricing. In previous wireless mergers, concessions have mainly revolved around divestment of spectrum. What about something more creative here? For example, perhaps the new T-Mobile has to agree to offer wholesale rates on some reasonable basis, thereby encouraging a more vibrant resale market. This has been the approach in some other countries, especially some broadband markets in Western Europe, where it has resulted in more competition and lower prices. Naturally, the new T-Mobile would argue that the same rules should apply to Verizon and AT&T, which could be a condition attached to existing deals and likely future ones.

There are valid arguments on both sides of this one, from a regulatory perspective. But given important changes in the market’s structure, plus the road ahead from a strategic and financial perspective, the benefits of this proposed combination outweigh the potential downsides.

Apple’s Unsurprising Earnings

on May 3, 2018
Reading Time: 4 minutes

After Apple’s earnings, the most common sentiment I saw was surprise, largely because the prevailing narrative was that demand for the iPhone X had dropped significantly. Apple’s fiscal Q2 iPhone sales are never as large as those of the quarter before, which is a holiday quarter. But the March quarter has consistently been their second largest when it comes to iPhone sales.

VMware’s CEO Shares His View on Why Tech Is Still Exciting

on May 2, 2018
Reading Time: 5 minutes

I spent a good part of this week at Dell Technologies World, their big customer event in Las Vegas that highlights Dell, EMC, VMware, Pivotal and all of the businesses that are now part of Dell. In the past, this event has been called Dell World and Dell EMC World but this year, given that Dell is now multiple companies with many different businesses all working together, they decided to change the name of the event to Dell Technologies World to reflect their updated company focus.

Technology and Dating

on May 2, 2018
Reading Time: 4 minutes

Most of my friends know that my husband and I met over 18 years ago on a dating site. At the time, all my friends were in a relationship and not interested in going out much, plus the “bar scene” had failed to deliver anyone close to Mr. Right. The internet represented a great way to meet busy professionals looking for a relationship. Even then, of course, you had the odd fake profile or photoshopped picture, but by and large, people on the site were looking to meet someone they could date. If you equate online dating to online communication, I suppose that, to today’s generation that has grown up on Tinder, Zoosk, and Grindr, what my husband and I used would look a lot like email: slower, more structured and not that different from analog mail.

Technology is changing us

Technology changed us in many ways. Thanks to tech, our world got bigger and faster. Just think of how we shop and communicate with people. That fast and furious pace does not always do much for our social life, however. The app and service economy has impacted many aspects of our life from transportation to hospitality. So it should be no surprise that dating has been impacted by both the technology itself and how technology changed us.
We want to meet someone fast, we might not necessarily be looking for a long-term relationship, and apparently, we are quite happy to outsource a lot of the work that goes into attracting and getting to know someone. It seems that our busy lives leave little time for trial and error. As much as I sometimes found it uninspiring to read profiles and exchange initial chit-chat with people I quickly found out I was not interested in, I would never have thought to delegate the process to someone else. That process helped me find out more about myself and what I truly wanted.

Assistant, get me a Date!

Over the weekend I came across a terrific article by Chloe Rose Stuart-Ulin (@chloerosewrites) on Quartz that talked about her experience as a “Closer” for the service Virtual Dating Assistant (ViDA). The idea behind it is quite simple: if you are busy but want a date, ViDA takes care of all the chit-chat in the middle to get you the phone number of the prospect you are interested in. This is not, however, a case of your human secretary sending roses or a bottle of wine on your behalf to the woman or man you are trying to take out on a date, like in any good romantic movie. This is someone impersonating you on Tinder and getting $1.75 for any phone number they can secure. All, of course, without the other person knowing they are not chatting with you but with a professional.

We are relying more and more on assistants, so why not? Right? Well, the article establishes that the practice is legal in the US but raises ethical questions, which I wonder about too. That said, the existence of the service and its success is not really what got me thinking. As I was reading the article and its mention of the training manual that Mrs. Stuart-Ulin was given, I thought, once again, about the danger of bias in AI.

What Women want

The manual written by the company founder calls for an alpha male attitude and states as rule number one:

“Don’t make her think too hard,” the manual says. “When writing sales copy…the goal is to reduce her ‘cognitive load’ so she’s more likely to reach the end and still have energy to write out a reply.”

The undercover reporter, in her role as Closer, was reminded that her approach was too female and she was encouraged to:

“use shorter sentences, ask fewer questions, use fewer smileys, wait longer to reply, and set up dates before even asking if the woman is interested. If a woman doesn’t respond to our cheesy pick-up lines or cough up her number by the third message.. move on, as the match is no longer cost-effective”

In the ViDA case, it was “just a manual,” but you could certainly imagine, as we move more and more into a world driven by AI, that ViDA could move part of the process, if not all of it, to a bot trained with that same misogynistic understanding of what a woman wants. How terrifying is that idea if you think more broadly, go from dating to a work environment, and imagine that a bot sharing the same beliefs is your first exchange with HR for a job interview? We have started to talk about bias in AI, but I would really like to hear more companies focus on this topic and make eradicating it a priority.

Dating and AI

The other thought I had after reading the article was about how technology could help enhance online dating services in a meaningful way. What if these services could access the huge amount of data we currently share on social media, of course with our permission, along with any other information we would feel comfortable sharing, and then use that data to create your profile as well as to come up with better matches? I realize that I am making the big assumption that our social persona is actually true to reality, which is often not the case, but I give people the benefit of the doubt.

Interestingly, as I was getting ready to queue up this column for publication, Facebook announced at F8 that it will soon launch a Dating feature, not quite what I suggest but more focused around local events and activities you participate in. It is a tricky time to launch a feature that apparently has been in the works for years, given the current scrutiny Facebook is under and the lack of trust that some consumers now have.

If you want to push my idea further without getting too “Black Mirror” (watch the episode “Be Right Back” and you will know what I am talking about), you can even think about a bot with access to all that information standing in for you through the initial portion of the connection. While not the same as doing that yourself, I would find it more ethical than paying a writer to impersonate you.

The bottom line: the limit to how good technology can be is us, humans, with our understanding, or lack thereof, of what is needed, our biases and our inherent desire to cut corners. Hence I remain optimistic about the power technology has to improve our lives, but I remain highly skeptical that we will use our best judgment in deploying it.

Peak Smartphone and Implications for What’s Next

on May 1, 2018
Reading Time: 4 minutes

There are so many wonderful parallels between the PC market and the smartphone market. Well, not everyone may agree they are wonderful, but as one who studies the industry, I find them wonderful. The PC market took approximately 30 years to peak, from early sales to mass market. The smartphone followed a slightly shorter cycle, reaching the mass market in roughly 20 years. But the smartphone market has, without a doubt, peaked.

The Shifting Enterprise Computing Landscape

on May 1, 2018
Reading Time: 4 minutes

Some of the most challenging technologies to understand are those created to serve as the backbone of modern businesses. The sheer number of different enterprise software and hardware tools, the number of different vendors, and the range of different capabilities all come together to create an enormous concoction of technical choices that can overwhelm even the savviest of technical minds.

It’s no wonder that there’s such an enormous services business focused on integrating this range of options into functioning solutions that people can use to operate their organizations. It’s complicated stuff.

Part of the challenge is trying to put all the various pieces into the proper context. When you start to do that, you begin to realize that some of the confusion stems from the fact that there are lots of different specialized tools for unique business situations. Not all companies share the same requirements, have the same situations, have the same existing resources and talents, etc. As a result, there’s an enormous range of options that are available to address these different needs—they don’t all try to serve the same basic functions. Some deal with older applications, some with newer cloud-style mobile apps, some are for computing and storing locally, some are for doing the same in the cloud, and many are about transitioning between these different worlds.

At the Dell Technologies World event in Las Vegas this week, founder and namesake Michael Dell laid out a vision for how he’s been able to piece together a variety of different companies and technologies to address these diverse needs. In the process, he managed to link together these various elements in a way that helped explain the company’s overall long-term strategy—as well as the success they’ve had in pursuing what many perceived was a risky plan. One particular point that stood out is that since its founding 34 years ago, Dell (the company) has generated a staggering $1 trillion in revenues and now leads the market in quite a few of the infrastructure product categories. Say what you want about old-school hardware companies, but that’s impressive.

One of the other key messages that became clear at the event was the ongoing evolution of the overall enterprise computing landscape. In particular, the move from a centralized cloud-based world back towards a distributed edge computing model, driven by IoT and new paradigms in computing, is now becoming mainstream. (A topic I wrote about in a previous column on Edge Servers Redefining the Cloud.) In addition, as part of that shift, there’s a recognition that for certain applications and certain organizations, the need to maintain their own core data centers (more of a private cloud approach) is also not going away or completely migrating to the cloud.

What you end up with, then, is the concept of edge to core to cloud, in which different aspects of an organization’s computing efforts are done through different resources, in different locations, by different types of applications and, often, even by different combinations of technologies and partner companies. Fundamentally, it’s all about building a flexible range of options that can accommodate nearly any type of existing environment, tools, and skills and can move them to entirely new sets of these factors. Pragmatically, it’s about making the transition to a software-defined world—from software-defined and managed data centers (or centers of data, as VMware CEO Pat Gelsinger smartly recognized in his keynote speech at the event), to dynamically created and updated cloud-native applications.

These concepts become particularly important as you start to see the real-world influence of major technology investments in companies outside the tech world. While people have been touting (and often “overhyping”) the notion of “digital transformation” for quite some time, the truth is, it’s just now that we’re starting to see companies beyond the early adopters within the tech industry start to deploy these new types of computing architectures. From impressive new companies like AeroFarms, which provides hydroponic-based “vertical farms” that can live inside warehouses within cities, to old-school companies like Traveler’s Insurance, a wide range of traditional industry companies highlighted at the Dell event were shown to be on the path towards becoming software-based, or software-defined organizations.

A key part of the solution for many organizations revolves around automation and various forms of AI. Though it’s easy to write off automation as little more than fancy scripting, the truth is that properly orchestrated automation routines can save organizations large amounts of time and money. In fact, in some ways, the types of simple AI that many organizations are just starting to experiment with, or run simple trials of, are arguably a more advanced form of automation that relies on real-world data input to create and evolve over time. As Michael Dell himself pointed out, there’s a big difference between the kinds of artificial specific intelligence that’s driving today’s vertically-focused AI implementations, and the artificial general intelligence that many people are worried could have such a catastrophic impact on society. Realistically, it’s the focused, specific AI applications that we’re going to be seeing worked on and deployed over the next several years, while the more general AI is still likely many years away.

Making sense of all the various types of enterprise computing technologies and the transformations they are inspiring is no easy task and, in some ways, communicating how all the various pieces work together and how these digital makeovers are actually occurring is even harder. To their credit, even though the path that Dell Technologies has embarked upon is unquestionably a tough one, they’ve begun to demonstrate that there is a strong, market-driven logic to it. They still have a number of challenges to overcome—and a lot of debt to pay back—but it’s starting to look like the long term integrated technologies bet was a good one.

Warning: The iPhone X Could Be A Problem For Analysts

on May 1, 2018
Reading Time: 5 minutes

WARNING: Angry critic of Apple’s critics ahead. Proceed with caution. You have been warned!

On April 30, 2018, Daniel Howley, wrote: “The iPhone X could be a problem for Apple.”

Rhetoric

First, let’s discuss the rhetorical device used both in the author’s headline and throughout the article. The author says the iPhone X “COULD” be a problem for Apple. Well, chocolate chips COULD be a problem for pancakes. Probably won’t be, of course. In fact, they’ll probably make the pancakes DELICIOUS. But they COULD BE a problem. So let’s all panic, right? Right?

Wrong. Here’s a new rule of thumb for all you aspiring Apple analysts. If you start your article with a headline that says that something “could be a problem”, that could be a problem, and you probably shouldn’t start your article.

Could

And since iPhone sales still make up the bulk of Apple’s revenue, any hit to that could be a problem for the tech giant.

“Could” again, huh? Not hedging our bets much, are we? Why don’t you just write an article that says: “Anything could be anything. We just don’t know.”

Correlation vs Causation

Apple’s stock price has taken a hit in recent weeks, as reports point to lower demand for the iPhone X.

You know who else’s stock price took a hit in recent weeks? Everybody’s. The whole market dropped. So are we saying that lower demand for the iPhone X brought down the whole stock market? Or are we saying that the author of this article doesn’t understand the difference between correlation and causation?

The stock market predicts the future in the same way that a weather vane predicts the direction of the wind. So if you think you can tie Apple’s stock price to any one rumor, then you really, really, really need to keep your money out of the stock market.

Supply Chain Pain

Those reports come as Taiwan Semiconductor Manufacturing, which makes chips for iPhones, reported lower than expected guidance for the next quarter

Hmm. A report of lower than expected quantities being ordered from one of Apple’s suppliers. Well, THAT’s never happened before. Oh, wait. It happens all the time.

Lower than expected guidance could mean that Apple is seeing decreased demand for the iPhone X.

There’s that “could” word again. Lower than expected guidance from a single supplier of hardware “could” mean decreased demand. Or it “could” mean that Apple was shifting to a new supplier or using slightly different parts in their new iPhone models.

NAH, that’s just crazy talk!

Iteration

(T)he fact that the iPhone 8 and 8 Plus featured relatively few big changes from the iPhone 7 and 7 Plus, and you can begin to understand why analysts and investors are on edge.

Gee, where have I heard that the newest iPhone “featured relatively few big changes from” its predecessor? Oh, I know. Every single year since the iPhone debuted.

iPhone iteration is like climbing a mountain. The uphill climb seems painfully slow, but when you look back, you realize that you’re a long, long way from where you started and that the device you’re now holding has scaled some pretty impressive technological heights.

I Don’t Think

“I don’t think the X is doing as well as [Apple] would have hoped,” explained Gartner personal technology analyst Tuong Nguyen.

Well, that’s cool and all, except that you don’t know what Apple’s expectations were, and you don’t yet know what the actual sales are, which makes your speculation, well, not really very cool at all.

Rumors

“The reason I believe that is based on a lot of the rumors I’ve been reading about what they’re planning for the next iteration [of the iPhone] later this year.

Oh, I heard a rumor. Now THAT’s some great reporting. Because rumors about Apple are ALWAYS true. So long as you define the word “always” as “almost never.”

The Price Is Right

Nguyen said he believes that while Apple has been able to steadily introduce new devices at slightly higher price points, $999 is too high for consumers to justify spending on a smartphone —especially one that doesn’t change the market as monumentally as the original iPhone did.

What? What? What?

Well, first off, NOTHING is going to change the market as monumentally as the original iPhone did. Asking Apple to come up with another iPhone every year — or any year — is like asking Ford to introduce the original Model-T every year. The iPhone was a major paradigm shift in technology. If you’re expecting Apple — or anyone — to come up with something as big as the original iPhone any time soon, then you seriously need to lay off the shrooms.

Secondly, consumers have ALREADY shown they’re willing to spend the kind of money Apple is asking for in order to buy an iPhone X. How do we know this? Because THEY BOUGHT THE iPHONE X, that’s why. Were you not paying attention when Apple announced that it’s been their best-selling iPhone since it was introduced? Just a suggestion here, but perhaps it’s not the best of ideas to write articles that say that customers are unwilling to do what customers have already willingly done.

You Can Never Go Home Again

(Apple) might be considering bringing back the Home button and lowering the price to make it more palatable.

(Spits coffee on screen).

Say what now? You think — for even one second — that Apple is going to bring back the home button? Okay, that’s it. Turn in your analyst cap, collect your severance pay at the door, and don’t let the facts hit you where the Good Lord split you on your way out the door.

Average Sales Price

The best way to determine how well Apple’s iPhone X is selling is to watch for the iPhone’s ASP. A higher average selling price could mean that consumers did indeed opt for the high-priced iPhone X. A lower average selling price, though, would mean that fewer consumers spent their money on the X and instead went for the iPhone 8, 8 Plus or previous generation models like the 7 and 7 Plus, 6s and 6s plus or SE.

This is the first bit of analysis in the article that actually makes sense (except for the “could” part). So — and I’m just spitballing here — how about we wait UNTIL THIS AFTERNOON to find out what the ASP is before writing what meandering stock market prices, unreliable supply chain changes, and unsupported rumors mean to Apple’s iPhone future?

Nah, who am I kidding? That’s never going to happen.
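Since the passage above leans on ASP as the telltale metric, a minimal sketch of the arithmetic behind it may help. The models, prices, and unit mixes below are hypothetical, chosen only to show how a richer iPhone X mix lifts the blended number; none of these are Apple's actual figures.

```python
def average_selling_price(units_by_model, price_by_model):
    """Blended ASP = total revenue / total units sold."""
    revenue = sum(units_by_model[m] * price_by_model[m] for m in units_by_model)
    units = sum(units_by_model.values())
    return revenue / units

# Hypothetical list prices (in dollars) for three models.
prices = {"iPhone X": 999, "iPhone 8": 699, "iPhone 7": 549}

# Two hypothetical quarterly mixes, each totaling 50 units (in millions).
heavy_x_mix = {"iPhone X": 30, "iPhone 8": 15, "iPhone 7": 5}
light_x_mix = {"iPhone X": 10, "iPhone 8": 25, "iPhone 7": 15}

print(round(average_selling_price(heavy_x_mix, prices)))  # 864
print(round(average_selling_price(light_x_mix, prices)))  # 714
```

The point the quoted passage makes falls out directly: a quarter where buyers skew toward the $999 model pushes the blended ASP up; a skew toward cheaper models pulls it down.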

A Nice Problem To Have

Look, you don’t have to be a seer to see Apple’s future. All you have to do is to look at the iPhone X, talk to the people who actually own one and you’ll know that you’re already looking at the future. The iPhone X’s signature feature is its ability to recognize your face. It’s a wonderful feature, with endless potential and it will be YEARS before Apple’s competitors have anything like it.

While Daniel Howley — and so many like him — think that “The iPhone X could be a problem for Apple”, I’m predicting the iPhone X could be a problem for them. Why? Because there isn’t a company in the world that wouldn’t want a “problem” like the iPhone X, proving — once again — that when it comes to Apple, these naysayers haven’t got a clue.

Apple’s Obsession with Thinness

on April 30, 2018
Reading Time: 3 minutes

While recently shopping at Costco, I strolled by the notebook computer aisle, all Windows machines, and stopped in my tracks. I was struck by how sleek and compact some of the new Windows machines had become, particularly the Lenovo X1 Carbon with its matte black carbon enclosure and the Dell XPS 13 with its impressive edge-to-edge display.

Having used MacBooks over the past decade, I’ve paid less attention to the progress of Windows notebooks, generally pleased with MacOS software and tolerating the lack of progress of the Apple hardware: the mediocre keyboards, the loss of useful ports, and the elimination of the iconic MagSafe connector. I had accepted Apple’s message that I needed to give up these features for small and light.

What struck me most about these notebooks at Costco was that they still had most of the varied ports, and their keyboards were so much better. They were still lightweight and compact. Compared to the MacBook 12-inch I’ve been using, the keyboards were like night and day. The Lenovo and Dell keyboards both had greater travel, a better click profile and much better response compared to my MacBook. Granted, I may be more sensitive than others, having been part of the team that developed the Stowaway keyboard for the Palm, but Apple’s recent spate of keyboards has been notoriously fragile and mediocre, as I recently experienced.

A few months ago, my keyboard had to be replaced. One of the keys failed to work, and I brought my computer to a local Apple store. A technician tried blowing out dust. He explained how the new keyboards are so sensitive that just one piece of dust or a particle of sand can cause a failure. While in the past the keys could be disassembled or, as a worst case, the keyboards could be replaced, he explained that it’s no longer possible to do so on the new generations of MacBooks and MacBook Pros.

The key could not be fixed, and the computer was sent off for repair. I was surprised to learn that replacing the keyboard required replacing all of the electronics because they were all one assembly, apparently glued together. Without my AppleCare, the cost would have been about $700. That’s $700 for a problem caused by a piece of dust! For reference, a good quality keyboard costs less than $10 to produce.

I checked a couple of teardown sites and confirmed that the keyboards and other components could be replaced on the Dell and Lenovo, although not on Microsoft’s Surface computers. But replacing a keyboard or battery on a new MacBook requires replacing a major portion of the computer.

As a former hardware design engineer and director of PowerBooks at Apple in the 90s, I wondered how Apple strayed so far from creating products with good reliability and reparability, the inclusion of useful ports, and other features that once caused MacBooks to stand apart from their competition.

I’ve been trying to imagine what went through the engineers’ minds. After all, the engineers I’ve worked with take pride in developing reliable products that provide great consumer satisfaction. What decisions were made along the way that caused intelligent engineers to design these troublesome products?

I’m convinced it must be Apple’s obsession with thinness. It appears to be an obsession so strong that it discards good design practices to create a design language that impacts reliability and performance. And this focus has impacted not only the MacBooks but the iPhones as well.

It’s the same obsession that’s led to iPhones with underpowered batteries to make the iPhones thinner and thinner. The results are phones with lower capacity batteries that degrade to an unusable level much sooner than a larger capacity battery. With batteries dropping to about 70% capacity after 300 cycles, they fail to keep the phone running reliably, requiring Apple to slow down the processor or replace the batteries sooner than on other phones.

You’d think by now it would be clear to Apple that they’ve gone too far with thinness, seriously affecting the functionality of their products, increasing repair costs and reducing customer satisfaction. I would hope Apple realizes that these sacrifices are not necessary, and that functionality should not suffer for a design statement that few care about, or that many hide with a case that makes the product thicker anyway. If Apple doesn’t address this obsession, they are providing a golden opportunity for even diehard Mac users such as myself to consider a Windows notebook computer.

Disruption Targets Intel

on April 30, 2018
Reading Time: 4 minutes

While there are many interesting companies’ earnings to discuss, and we will hopefully get to all of them, Intel’s stood out to me the most. It is no secret that my concern for Intel has been growing, but I remained cautiously optimistic up until this point. On the surface, most everything looks great. They just hired one of the best chip designers in the world in Jim Keller, who came from Tesla but before that was responsible for amazing work at AMD. Intel also hired Raja Koduri, who is one of the leading engineers in graphics design and worked with Jim at AMD as the head of Radeon’s engineering team. These two are rockstars in the silicon industry on every level.

Don’t Call it a Comeback: Convertibles Shine in Growth-Challenged PC Market

on April 27, 2018
Reading Time: 3 minutes

There haven’t been many bright spots in the PC industry over the last few years. With year-over-year shipment declines, things can often seem a little bleak. But one PC category enjoying strong growth is the convertible notebook. In 2017 the convertible category grew by 28% year over year at the worldwide level. Compare that to traditional notebooks, which declined by nearly 4%, and traditional desktops, which declined by more than 6%. Even more notable: the convertible category’s five-year compound annual growth rate (CAGR) through 2017 was 72%.
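As a sanity check on growth figures like these, the CAGR formula is easy to sketch. The unit volumes below are made up for illustration, not the article's underlying data; they simply show what kind of five-year trajectory produces a roughly 72% CAGR.

```python
def cagr(begin_value, end_value, years):
    """Compound annual growth rate: (end / begin) ** (1 / years) - 1."""
    return (end_value / begin_value) ** (1 / years) - 1

# Hypothetical: a category growing from 1.0M to 15.0M units over five years
# works out to a CAGR of roughly 72%.
rate = cagr(1.0, 15.0, 5)
print(f"{rate:.0%}")  # 72%
```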

Old School Form Factor
Convertible notebooks have a special hinge that lets a user “convert” from a traditional clamshell orientation, rotating the screen all the way around into a tablet-like configuration. The form factor itself has been around for a very long time. The first convertibles came on the scene when touch was effectively a bolt-on feature of Windows, and as a result, they were slow to catch on with a mainstream audience.

The category saw some increased interest when Microsoft leaned into touch with the ill-fated Windows 8. However, that OS was very much a reaction to the rise of the tablet. Or, more specifically, the iPad. So while some PC vendors were experimenting with convertibles, much of the industry’s focus was on what we now call detachables, which are devices with a removable first-party keyboard. Led by Microsoft’s push into hardware with the original Surface, detachables seemed at the time a much more sensible response to the iPad. They focused on being a good tablet most of the time and could function as a notebook when you attached a keyboard. The problem with the convertibles of that time was that while they were good notebooks, they made lousy, oversized tablets.

True Believers: Lenovo and Best Buy
Between 1999 and 2006 convertibles grew from a 20K unit per year market to one that moved about 900K units. From 2006 until 2011, volumes increased to well over one million units per year. In 2012, the year Microsoft launched Windows 8, volumes dropped dramatically, down to about 800K units, before rebounding in 2013 to over 2.2 million units. Lenovo owned nearly 39% of the market that year and enjoyed a year-over-year shipment growth of 254% versus a market increase of 174%.
And while other vendors chased Microsoft’s Surface with their own versions of detachables, driven in part by strong detachable forecasts from firms such as IDC, Lenovo continued to argue, both in private and in the market, that the convertible also represented a strong opportunity. In 2014, the convertible market doubled again to more than 4.8 million units (Lenovo owned 38%). The company’s market share in the category peaked at 42% in 2015, on total market shipments of over 7 million units. By this time, HP, Dell, and other PC vendors had recognized the opportunity and shifted more resources toward convertibles.

By 2017, the form factor had very much come into its own. Silicon advances meant convertibles could be increasingly thin, and as the tablet wave receded, more home and commercial PC buyers realized that the convertible’s ability to be a great notebook and a serviceable tablet was what they needed. In 2017 the total market reached 12.2 million units, led by Lenovo, HP, and Dell. (It’s worth noting that during that same year, detachables grew to 21.9M units, led by Apple, Microsoft, and Samsung.)

While Lenovo’s commitment to the convertible form factor was key, there was another instrumental player in the growth of the convertible market: Best Buy. The giant U.S. retailer showed interest in both detachables and convertibles, but saw the latter as the larger opportunity, and pushed its vendor partners to help it grow the category. Since 2014, the U.S. region has represented anywhere from 40 to 49% of the worldwide market for convertibles, and Best Buy has moved a sizeable portion of that total each year.

Strong ASPs and a Bright Future
So, in a market where most categories are trending downward, convertibles have been the rare growth story. But what makes convertibles even more interesting to the PC industry is the fact that they also tend to carry a notably higher average selling price (ASP) than most other PCs. For example, in 2017 the worldwide ASP for a convertible was $796, versus $645 for a traditional notebook and $505 for a traditional desktop. The only PC form factor with a higher ASP in 2017 was the ultraslim category at $936.

And the convertible category shows no signs of slowing down. Most of the major PC vendors continue to push new products here, and IDC shows continued strong growth throughout the five-year forecast. In fact, IDC’s latest numbers show convertibles with the strongest five-year CAGR in the PC market at 10%. By 2022 the category should grow to nearly 20 million units per year.
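Those growth figures are internally consistent; here is a quick sketch of the arithmetic (unit volumes in millions, using the rounded 2012 and 2017 shipment figures cited in this piece):

```python
def cagr(start, end, years):
    """Compound annual growth rate between two shipment volumes."""
    return (end / start) ** (1 / years) - 1

def project(start, rate, years):
    """Project a volume forward at a constant annual growth rate."""
    return start * (1 + rate) ** years

# Historical check: ~0.8M units in 2012 grew to 12.2M in 2017.
historical_cagr = cagr(0.8, 12.2, 5)   # ≈ 0.72, i.e. the ~72% five-year CAGR

# Forecast check: 12.2M units in 2017 growing at IDC's 10% CAGR through 2022.
units_2022 = project(12.2, 0.10, 5)    # ≈ 19.6M units, "nearly 20 million"
```

Both checks land on the figures quoted above, which suggests the 72% CAGR and the 2022 forecast come from the same underlying data.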

And honestly, this may be too conservative. Many in the industry believe that at some point in the future, the bill of materials on a convertible will drop low enough to let vendors turn all but the cheapest notebooks into convertibles. If that happens, these numbers will be much higher, although the resulting ASPs will undoubtedly be much lower. Either way, I look forward to seeing where the industry takes this form factor next.

News You might have missed: Week of April 27, 2018

on April 27, 2018
Reading Time: 4 minutes

Amazon.com Announces First Quarter Sales up 43% to $51.0 Billion

Net sales increased 43% to $51.0 billion in the first quarter, compared with $35.7 billion in first quarter 2017. Excluding the $1.6 billion favorable impact from year-over-year changes in foreign exchange rates throughout the quarter, net sales increased 39% compared with first quarter 2017.

Operating income increased 92% to $1.9 billion in the first quarter, compared with operating income of $1.0 billion in first quarter 2017.

Net income was $1.6 billion in the first quarter, or $3.27 per diluted share, compared with net income of $724 million, or $1.48 per diluted share, in first quarter 2017.
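The reported growth rates follow directly from the headline numbers; since the press-release figures are rounded, a quick check lands within a point of each reported rate:

```python
def yoy_growth(current, prior):
    """Year-over-year growth rate between two dollar figures."""
    return current / prior - 1

# Net sales: $51.0B vs $35.7B a year earlier.
sales_growth = yoy_growth(51.0, 35.7)        # ≈ 0.43, the reported 43%

# Stripping the $1.6B favorable FX impact from the current quarter.
ex_fx_growth = yoy_growth(51.0 - 1.6, 35.7)  # ≈ 0.38, close to the reported 39%

# The implied diluted share count is consistent across both quarters.
shares_2018 = 1.6e9 / 3.27                   # ≈ 489M shares
shares_2017 = 724e6 / 1.48                   # ≈ 489M shares
```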

Facebook Post Earnings

on April 26, 2018
Reading Time: 4 minutes

There are narratives out there examining Facebook’s latest earnings and saying “see, no impact on the business.” Any knowledgeable person would understand that recent events could not yet have had an impact on Facebook’s business, because any potential threats will take many months, if not years, to manifest. Even then, Facebook is unlikely to face any grave threats to its business. Our prediction was never that Facebook would see serious threats to its fundamental business any time soon. The big question is what, if anything, could impact the fundamentals of their model? I’ll outline a few things, but then also share why it will be easy for Facebook to navigate around such threats.

TSMC and 7nm will revitalize major chip players

on April 26, 2018
Reading Time: 3 minutes

Though discussion of chip technology often centers on the likes of Intel, AMD, NVIDIA, and Qualcomm, one of the biggest players is contract foundry TSMC. Taiwan Semiconductor Manufacturing Company represents more than 50% of fabless semiconductor production in the world, building for those same companies listed above, and even for Intel on select projects like modems.

There is competition in the field in terms of leading-edge technology capability, mostly from the likes of Samsung and GlobalFoundries. Samsung has targeted TSMC’s market share as an area for its own growth, with dramatic investments in R&D and production capability. GlobalFoundries is smaller, but spunky, pushing ahead with new tech like EUV and hoping to become a “fast follower” to the Taiwanese giant. Even Intel has talked about opening its fabs to external production, but the impact there has been minimal thus far.

Despite the pressure from other companies, TSMC continues to be the leader in both revenue and, debatably, roadmaps. Last week during an analyst call the company announced it had started high volume production of 7nm FinFET silicon, with 18 different products taped out from its customers. A tape-out is the term for the final chip validation that enables volume production to begin. TSMC expects to have 50 total 7nm tape-outs by the end of 2018.

7nm is of particular interest to the semiconductor industry because it will see wide adoption across a host of different applications. For years, 16nm FinFET technology has been the stalwart of TSMC’s portfolio and is where the bulk of high performance chips like graphics and CPUs have remained, despite the fact that 10nm process technology has existed since late 2016. 10nm has only been utilized by a few key partners, including Qualcomm and Apple, targeting power efficiency more than raw performance. Chips that demanded higher performance capability (frequency) stuck with the 16nm node.

But with 7nm, that changes. This is where we will find NVIDIA’s next generation graphics chips for gaming, machine learning, and AI. AMD is going to be building its upcoming graphics family with TSMC 7nm (while the next-gen CPU products will stay with GlobalFoundries). TSMC also mentioned other 7nm customers, including cryptocurrency ASIC designers and neural processing engine makers, and even mobile processors from Qualcomm and Apple will find their way to the node.

Performance claims for TSMC 7nm FinFET technology are impressive. The company stated on its analyst call that moving from the current 16nm node to 7nm will result in a 70% die shrink, saving customers dramatically on the area per chip for each wafer. As cost is based on a per-wafer model, this is a big advantage for vendors like Qualcomm that are building small chips and allows someone like NVIDIA to design a more powerful GPU in the same area. 7nm will also provide either a 60% power consumption drop at the same frequency levels or a 30% improvement in frequency at the same power level, allowing engineers to decide between these traits on a per-use basis.
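To see why a 70% die shrink matters so much under a per-wafer cost model, here is a rough back-of-the-envelope sketch. The 100 mm² die size is a hypothetical, and the calculation ignores edge loss, yield, and 7nm’s higher per-wafer cost:

```python
def dies_per_wafer(wafer_area_mm2, die_area_mm2):
    """Rough candidate die count per wafer (ignores edge loss and yield)."""
    return wafer_area_mm2 // die_area_mm2

WAFER_AREA = 70_686        # usable area of a 300mm wafer: pi * 150^2 mm^2

die_16nm = 100.0           # hypothetical 100 mm^2 chip at 16nm
die_7nm = die_16nm * (1 - 0.70)  # a 70% shrink leaves 30% of the area

count_16nm = dies_per_wafer(WAFER_AREA, die_16nm)  # ~706 candidate dies
count_7nm = dies_per_wafer(WAFER_AREA, die_7nm)    # ~2356 candidate dies
```

Even in this idealized model, the same wafer yields more than three times as many candidate dies, which is why small-chip vendors benefit so directly and why a GPU designer can spend that area budget on a bigger design instead.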

In short, this means longer battery life, more processor capability, and finally some observable performance improvements coming to consumer products in 2019.

For TSMC itself, the potential timing and technology lead it appears to have with 7nm FinFET comes at a perfect time. With the market for new smartphone chips softening based on many reports, the demand for the 10nm process will be following the same trend. With 7nm, TSMC will be able to balance and manipulate customer demand between mobile vendors, graphics vendors, AI vendors, etc. As the shift in silicon demand moves, TSMC will potentially have the answer for all of them.

The first generation of TSMC’s 7nm process technology uses existing manufacturing hardware, though it does require additional production steps (multiple patterning). So, while this lowers the barrier to entry and the capex requirements for TSMC, it does mean that each wafer spends more physical time in machines, increasing demand and decreasing throughput (absent production capacity investments). That means a cost increase, though TSMC and its partners haven’t talked specifics. For flagship products like GPUs and cryptocurrency miners this won’t be a problem, as the cost can be absorbed or prices increased to compensate. For less expensive processors like budget-market cell phone chips, it could be a concern.

The net result of this 2018-2019 ramp of 7nm process technology is that hardware is about to get interesting once again. The stagnant areas of graphics and high performance mobile devices will likely see big changes in 2019 as utilization of TSMC’s (and Samsung’s and GlobalFoundries’) new 7nm tech ramps. It also means that nascent areas like AI processing engines, machine learning chips, and even crypto/blockchain processors will have the room to grow and expand in capability in a way we have yet to witness.

 

Silicon Valley’s Great Divide

on April 25, 2018
Reading Time: 4 minutes

I have been watching with keen interest the most recent verbal battle between Tim Cook and Mark Zuckerberg. At the center of this battle are privacy issues: a free service paid for by ads versus one tied to selling products or services, and how each model deals with individuals’ private data.

Alexa, Who is Your Favorite Kid?

on April 25, 2018
Reading Time: 4 minutes

As of Saturday, thanks to Alexa Blueprints, when my daughter asked Alexa that question she got an answer she did not expect, one that made us chuckle and made her mad: “Either one of the dogs!” said Alexa in her cheerful voice.

Alexa Blueprints are personalized skills that any Echo user can now create, thanks to a very easy to use set of templates covering different topics in the areas of storytelling, home automation, learning and knowledge, and of course fun and games.

Creating a skill is very straightforward and does not require any coding. You are basically writing a script, and all the work is done in the background for you. It took me five minutes to create five family questions with some funny answers for temporary amusement. As soon as you run the skill, it becomes available on all your Echo devices.

Making it Personal

Alexa has already built a close relationship with her users through her name and personality. Cementing that relationship by creating an even more personal bond is very important as other digital assistants try to enter our homes.

While it might seem silly, our relationship with Alexa is no different from a relationship with a human being. Knowledge, familiarity, and humor all play a role in making you feel comfortable with a person, so that you feel you can rely on them. Amazon wants the same for Alexa and us.

Creating their own skills gives users a sense of control over information they might not necessarily be happy to share with a third-party skill but are comfortable entrusting to Amazon. At the end of the day, they have already let Amazon into their lives. Third-party skill developers are also unlikely to create a skill with bedtime stories or math quizzes tailored to the one person you have in mind.

Making it Useful

The opportunity that Alexa Blueprints offers is more than cuteness, of course. Take a look at the first templates Amazon designed, and you see how it is all about making Alexa part of the family. As she becomes more helpful in the home and sounds more personal, our trust and reliance will grow. Interestingly, it is also about growing awareness outside the family. You have an Echo in your home, so clearly you are interested in Alexa, but what about all the people who come into your home, from guests to your dog sitter or babysitter? Alexa can prove useful to them by sharing your Wi-Fi password, telling them where you keep the dog leash, or having an emergency number ready. These might be people who have yet to experience Alexa, and instead of having you explain what she can do, they experience first-hand how Alexa can be helpful. House guests can also see the fun side of Alexa by playing games with her at a party.

With Blueprints, you can also expand your knowledge of things you actually care about. There are many trivia skills already available for Echo, but they are meant to entertain more than to grow your knowledge. The Blueprints templates can be used to learn or revise a topic. I set up a set of flashcards to review my knowledge of American government in preparation for my U.S. citizenship test. It took a bit longer than the family questions did, and it also showed that the complexity of what you want to do can escalate pretty quickly; at least for now, these templates are not meant to handle complicated questions.

We are planning to create revision quizzes for our daughter’s upcoming tests. If we sit her down to ask questions, it’s homework; if Alexa does it, it’s fun. Anything that gets our daughter to learn is welcome in our home, and of course, the fact that she can be entertained and learn without staring at a screen is a plus. I also see Blueprints as a hook to get kids interested in creating content themselves. Though clearly geared at kids who can read and type, there is potential in letting them create their own sci-fi story or their own flashcards. Using the storytelling template, a child can let her or his creativity loose while learning about the actual structure of a story.

Wanting More

I enjoyed my initial experience with Blueprints, but I was left wanting more. Isn’t that human nature, after all? I want more templates, which I am sure will come, but mostly I want more intelligence. After I set up my revision test, I quickly found there was no room for error in my answers. Alexa would not apply any intelligence in processing my answers and recognize that what I said was close enough to count as right. This happens with third-party trivia skills I have used, but it is more obvious here because I am the one who entered the answers.

On the family front, while Alexa can answer a question like “Who is the most annoying kid?” with “Grace has her moments!”, she cannot say “You have your moments!” when it is my daughter asking. So all in all, while she is more a part of the family, you are still very much aware she is not human.

While I see big potential for Alexa Blueprints, the big question, of course, is how many people will bother creating their own skills. Discovery, I think, is the bigger hurdle here, rather than usability. When you open the Alexa app now, under Skills you can see the skills you created, and you might be curious to see what that is; but if you click now, you get a high-level ad that does not really do justice to the experience. I hope to see Amazon start using some of these personalized skills in its ads to encourage more usage. I am convinced the return, for both the user and Amazon, will be worth the investment.

Spotify vs. The Integrators

on April 24, 2018
Reading Time: 4 minutes

Spotify intrigues me in many ways. It’s easy to be bearish on Spotify; that is at least the most common narrative I see from Wall Street and pundits. Spotify makes a great product, but it is also up against dynamics that are hard to overcome.

The “Not So” Late, “And Still” Great Desktop PC

on April 24, 2018
Reading Time: 3 minutes

Talk about a category that doesn’t get much love. Desktop PCs are considered by many to be the dinosaurs of the device world. After all, they’re big, bulky, typically heavy beasts that don’t exactly fit the mobility profile with which everyone seems obsessed.

And yet, they continue to lumber on. Sure, shipments have slipped from their peak and will likely continue to do so for the foreseeable future. However, there were still just under 100 million desktop PCs shipped worldwide in 2017. No matter how you look at it, that’s still a big number.

More importantly, desktops continue to evolve and improve, and they continue to be the form factor of choice for a wide variety of applications, from professional eSports and PC gaming through professional audio, music and video digital content creation (and let’s not forget cryptocurrency mining). In their fortified workstation versions, desktops still dominate for applications such as 3D modelling, scientific research, and much more.

Plus, for those who love to tinker with and build their own compute devices, absolutely nothing beats a desktop PC. Whether it’s the range of RGB light-equipped fans or the auto engine-style heat pipes, there’s no shortage of ways to customize the look of your custom desktop rig.

The customization possibilities continue “under-the-hood” as well, with an enormous range of hardware components and software utilities designed to wring the absolute maximum potential performance out of a given desktop PC system.

The latest entry into the desktop component fray is AMD’s new second-generation Ryzen (though not Ryzen 2) family of desktop CPUs, topped by the 3.7 GHz, 8-core, 16-thread, Ryzen 7 2700X. Long a sentimental favorite of the DIY PC crowd, AMD has had difficulty competing against Intel from a performance perspective for many years, but last year’s Ryzen launch and the additional refinements in this year’s CPUs have made things interesting again in the world of PC benchmarks.

Thanks to a variety of refinements to algorithms that dynamically boost clock speed based on workloads and power efficiency (Precision Boost 2 and XFR, or Xtended Frequency Range, respectively), as well as some reductions in latencies to on-chip caches and system memory, these new CPUs offer mid-single digit percentage improvements versus last year’s models, despite having very similar overall architectures.

More important, in my mind, are the refinements AMD has made to its Ryzen Master CPU tuning and overclocking software. Like Intel’s Extreme Tuning Utility, Ryzen Master provides an overview of the performance, temperature, and various other settings of each core in the CPU. While its primary purpose is to enable overclocking and other performance enhancements (with the help of some liquid nitrogen, it can apparently enable speeds up to 6 GHz per core), the refined UI of Ryzen Master offers an IoT-like snapshot of the physical characteristics of the different Ryzen CPU cores. It’s a fascinating example of how people can now get a much more detailed view of their technology devices at work.

Desktop PCs are clearly not the right choice for everyone, but they are a great choice for a significant, and often overlooked, group of people. Given the renewed competitive energy between Intel’s Coffee Lake generation desktop CPUs and these new second generation AMD Ryzen chips, there’s also a surprisingly strong but typically overlooked set of technologies benefitting today’s desktop market.

Thanks to these advancements, as well as the continuously growing range of workloads being performed on both consumer and commercial PCs, it’s safe to say we’ll likely still be talking about a desktop PC market for decades to come.

iPhone X Study Follow Up

on April 23, 2018
Reading Time: 4 minutes

I’m sure by now most of you have seen that the iPhone X survey I published went viral. Many thanks to John Gruber, who linked to the article and made it the second most read article in Tech.pinions history. As of now, over 75,000 people have read that article, and at this rate it will be 100,000 by the end of the day. On the back of that article, I intended to add some follow-up commentary, given there is plenty of data from the study I have not shared yet. But the public reaction to my article brought up some new thoughts worth sharing as well.

Top Takeaways From Studying iPhone X Owners

on April 20, 2018
Reading Time: 4 minutes

This article was originally published for subscribers of the Tech.pinions Think.tank. To learn more, or sign up for your daily dose of tech industry insight, click here.

Last month, in collaboration with Aaron Suplizio from Experian, we conducted a study on iPhone X owners. Most of the respondents in our survey were from the US, but we did have pockets of respondents from many parts of Europe. Our study intentionally focused on the early adopter part of the market because this cohort makes up one of the larger groups of iPhone X owners. We knew focusing on this cohort would yield the highest volume of owners, and we were right. That said, we did capture enough non-early adopters to generate some insights on mainstream views of the iPhone X, but for this article I will focus on early adopters.

Customer Satisfaction
It’s tempting to believe that when the results of a study lean heavily on one particular profile, the results are so skewed you can’t use them. This thinking is entirely false. For many years I’ve been extensively studying every type of consumer profile, and the real insights come when you examine these different groups separately, under a microscope. There is some value in looking at the topline representative results of a study, but there is more value in breaking those results up into consumer profiles and seeing what different types of consumers have to say on the subject you are studying. Very few people in research know and understand this nuanced point.

Interestingly, when it comes to customer satisfaction with a product, we have not seen much variance between how early adopters and mainstream consumers rank products they like. In fact, if anything, early adopters tend to be more critical and less satisfied overall than mainstream consumers. Which is why when we see customer satisfaction from the early adopter profile come in as quite high, we know the product in question is quality.

When it came to overall customer satisfaction, iPhone X owners in our study gave the product a 97% overall customer satisfaction rating. While that number is impressive, what really stands out when you do customer satisfaction studies is the percentage who say they are very satisfied with the product. Because you add up the very satisfied and satisfied responses to get the total customer satisfaction number, a product can have a high number of satisfied responses and a lower number of very satisfied responses and still achieve a high total. The higher the very satisfied responses, the better a product truly is. In our study, 85% of iPhone X owners said they were very satisfied with the product.

That number is amongst the highest I’ve seen in all the customer satisfaction studies we have conducted across a range of technology products. To contrast that with the original Apple Watch research with Wristly I was involved in: 66% of Apple Watch owners indicated they were very satisfied with Apple Watch, a product which also scored 97% overall customer satisfaction in the first Apple Watch study we did.
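To make the arithmetic explicit: the headline satisfaction number is just the sum of the top two response buckets, which is why the mix matters. Percentages are shown as fractions; the 12% merely-satisfied figure is implied by the two reported numbers, not reported directly:

```python
def total_satisfaction(very_satisfied, satisfied):
    """Total customer satisfaction: the sum of the top two response buckets."""
    return very_satisfied + satisfied

# iPhone X figures from the study: 85% very satisfied out of 97% total,
# implying roughly 12% merely "satisfied."
iphone_x = total_satisfaction(0.85, 0.12)    # ≈ 0.97

# A product could post the same headline number with a much weaker mix,
# which is why the very-satisfied share is the more telling signal.
weaker_mix = total_satisfaction(0.55, 0.42)  # also ≈ 0.97
```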

On Apple’s last earnings call, Tim Cook reported a 99% customer satisfaction number for the iPhone X. An observant person may have caught this and wondered why we have different numbers. There are some possible explanations, like different panel makeups, but I think the big one is that we had a significantly higher number of iPhone X owners in our study than 451 Research did in theirs. The higher number of responses led to a slightly more balanced number, but with a +/- 2% margin on either survey, I’m confident the number holds up.

Where things got interesting in our survey was when we looked at customer satisfaction of iPhone X by specific features. I have created the following chart to help with the visual.

Looking across the board at satisfaction with the main features of the iPhone X, it appears Apple has nailed the benchmark features. It was encouraging to see the two major behavior-changing features of the iPhone X, the new home button-less UI and FaceID itself, both score over 90% customer satisfaction. Toward the latter end of the curve were portrait photos and portrait selfies (note that some of the newer features, like portrait lighting, are in beta). Both are areas where it seems Apple still has some headroom to improve, but both still scored solid satisfaction numbers. Then there was Siri.

I could do a whole post on early adopters’ opinions of Siri, but since it’s on the chart, I just want to make a few points. First, you may think it odd to include Siri here since it’s not a feature unique to the iPhone X. While this is true, we included it because it is designed to be a core feature of the iPhone, and also because of the unique on-device performance and machine learning optimizations that exist for Siri on the iPhone X thanks to the new processor design. The main point, however, reflects an insight I mentioned earlier: early adopters are more critical of technology than mainstream consumers. This shows up in the chart but also highlights something important. Even though this demographic is tech-leading, and even fanatical about Apple, Siri’s low ranking with this cohort shows they are also quite pragmatic and ready to criticize when necessary.

Overall, the data we collected around the iPhone X shows that if Apple is truly using this product as the baseline for innovation for the next decade, then they are off to a strong start and have built a solid foundation. The big exception is still Siri, but I’m optimistic Apple is changing its priorities around Siri, and I am hopeful we will see progress in the next few years. If Apple can bring Siri back to a leadership position and, in combination, continue to build on the hardware and software around the iPhone X foundation, then they will remain well positioned for the next decade.

News You might have missed: Week of April 20, 2018

on April 20, 2018
Reading Time: 4 minutes

ZTE’s Very Bad Week

The U.S. Commerce Department on Monday banned U.S. companies from providing components, software, and other technology to ZTE for seven years, as punishment for violating agreements reached with the department after ZTE illegally sold phones and equipment to Iran and North Korea. After admitting to busting the sanctions in 2017 and being fined US$1.2 billion, ZTE agreed to take action against the employees involved but failed to do so. The U.S. ban could affect the company’s ability to build smartphones and other equipment because it relies on American processors and Google’s apps.

New AMD Ryzen Chips Put Pressure on Intel, Again

on April 19, 2018
Reading Time: 3 minutes

Today marks an important day for AMD. With the launch of the Ryzen 2000-series of processors for consumer DIY enthusiasts, gamers, and OEM partners, AMD is showing not only that it got back into the race with Intel, but also that it is confident enough in its capability and roadmap to start on the journey of an annual cadence of releases.

The Ryzen 2000-series is not the revolutionary step forward we saw with the first release of Ryzen. Before last year, AMD was seemingly miles behind the technology Intel provided to the gaming market, and the sales results showed it. Not since the release of the Athlon had AMD proved it could be competitive with the blue-chip giant that built dominating technology under the Core family of brands.

While the first release of Ryzen saw a massive IPC improvement (instructions per clock, one of the key measurements of peak CPU performance) of roughly 50% over the previous architectural design, Ryzen 2000 offers a more modest 3-4% uplift in IPC. That’s obviously not going to light the world on fire, but it is comparable to the generation-to-generation jumps we have seen from Intel over the last several years.

AMD does have us looking forward to the “Zen 2” designs that will ship (presumably) in this period next year. With it comes a much more heavily revised design that could close remaining gaps with Intel’s consumer CPU division.

The Ryzen 2000-series of parts do have some interesting changes that stand out from the first release. These are built on a more advanced 12nm process technology from GlobalFoundries, down from the 14nm tech used on the 1000-series. This allows the processors to hit higher frequencies (+300 MHz) without drastic jumps in power consumption. Fabs like GF are proving that they can keep up with Intel in the manufacturing field, and that gives AMD more capability than we might have previously predicted.

AMD tweaked the memory and cache systems considerably in this chip revision, claiming cache latency reductions of 13-34% depending on the level. Even primary DRAM latency drops by 11%, based on the company’s measurements. Latency was a sticking point for AMD’s first Ryzen release, as its unique architecture meant that one segment of cores could only talk to the other segment over an inter-chip bus called Infinity Fabric. This slowed data transfer and communication between those cores, and it impacted specific workloads, like lower-resolution gaming. Improvements in cache latency should alleviate this to some degree.

The company also took lessons learned from the first generation’s Precision Boost feature and improved it in the 2000-series. Meant to give additional clock speed to cores when the workload is only utilizing a subset of available resources, the first iteration used a very rigid design, improving performance in only a few scenarios. The new implementation creates a gradual curve of clock speed headroom against core utilization, meaning that more applications that don’t fully utilize the CPU will be able to run at higher clocks, based on the available thermal and electrical capabilities of the chip.
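The difference between the two approaches can be sketched with a toy model. This is purely illustrative: the frequencies here are hypothetical, and the real algorithm weighs per-chip thermal and electrical limits rather than core count alone:

```python
def boost_v1(active_cores, base=3.7, boost=4.1):
    """First-generation Precision Boost (simplified): full boost
    only when one or two cores are active, base clock otherwise."""
    return boost if active_cores <= 2 else base

def boost_v2(active_cores, total_cores=8, base=3.7, boost=4.3):
    """Precision Boost 2 (simplified): clock headroom tapers
    gradually as more cores become active."""
    headroom = (total_cores - active_cores) / (total_cores - 1)
    return base + (boost - base) * headroom
```

Under the rigid scheme, a four-core workload falls straight to base clock; under the gradual curve it keeps a meaningful share of the boost headroom, which is exactly the class of partially-threaded workload the new design targets.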

There are other changes with this launch as well that give it an edge over the previous release. AMD is including a high-quality CPU air cooler in the box with all of its retail processors, something Intel hasn’t done in a few generations. This saves consumers money and lessens the chance of getting all the new hardware home only to find you don’t have a compatible cooler. StoreMI is a unique storage solution that uses tiered caching to combine the performance of an SSD with the capacity of a hard drive, essentially getting the best of both worlds. It supports a much larger caching SSD than Intel’s consumer offerings and claims to be high-performance and low-effort to set up and operate.

AMD saw significant success with the Ryzen processor launch last year and was able to grab a sizeable jump in global market share because of it. At some retailers and online sales outlets in 2017, AMD had as much as 50% share among PC builders, gamers, and enthusiasts. AMD will need many consecutive successful product launches to put a long-term dent in Intel’s lead in the space, but the Ryzen 2000-series shows that AMD is capable of keeping pace.

Chromebooks, iPads, and the Desire for New Computing Platforms

on April 19, 2018
Reading Time: 4 minutes

I recently got my hands on Google’s Pixel 2 Chromebook. I have been wanting to use the Pixel 2 for some time and test it in my everyday computing workflows. There is so much to like about the Chromebook platform. It’s fast, fresh, and feels extremely modern, much more so than Windows or OS X. But it is really the speed, lack of clutter, and overall fresh feel of the OS that I like best. After a few weeks with the device, I can see how you could make a strong case that an operating system like this has more legs for the future of notebooks, and maybe desktops, than Windows or OS X. With one exception: apps.

AI is no Knight in Shining Armor fighting to save Humanity

on April 18, 2018
Reading Time: 4 minutes

Last week, during Mark Zuckerberg’s congressional hearing, we heard Artificial Intelligence (AI) mentioned time and time again as the one-size-fits-all solution to Facebook’s problems of hate speech, harassment, and fake news. Sadly, though, many agree with me that we are a long way from AI being able to eradicate all that is bad on the internet.

Abusive language and behavior are very hard to detect, monitor, and predict. As Zuckerberg himself pointed out, many different factors make this particular job hard: language, culture, and context all play a role in determining whether what we hear, read, or see should be deemed offensive.

The problem we have today with most platforms, not just Facebook, is that humans are determining what is offensive. They might be using a set of parameters to do so, but they ultimately use their judgment. Hence, consistency is an issue. Employing humans also makes it much harder to scale. Zuckerberg’s 20,000-people number sure is impressive, but when you think about the content 2 billion active users can post in an hour, you can see how futile even that effort seems.

I don’t want to get into a discussion of how Zuckerberg might have used the promise of AI as a red herring to take some pressure off his back. But I do want to look at why, while AI can solve scalability, its consistency and accuracy in detecting hate speech in the first place are highly questionable today.

The “Feed It Enough Data” Argument

Before we can talk about AI and its potential benefits, we need to talk about Machine Learning (ML). For machines to be able to reason like a human, or hopefully better, they need to be able to learn. We teach machines with algorithms that discover patterns and generate insights from the massive amounts of data they are exposed to, so they can make decisions on their own in the future. If we input enough pictures and descriptions of dogs, and hand-code the software with what a dog can look like or how one might be described, the machine will eventually be able to recognize the next designer “doodle” as a dog.
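The learn-from-labeled-examples pattern described above can be sketched in a few lines of Python. This is a deliberately tiny toy, counting which words co-occur with each label, with an invented four-example “dataset”; real systems learn from millions of examples with far richer features.

```python
from collections import Counter

# Toy sketch of supervised learning: count which words appear with
# each label in training data, then label new text by which class
# its words were seen with more often. The "dataset" is invented.

training = [
    ("furry four-legged pet that barks and fetches", "dog"),
    ("loyal puppy loves walks and barks at squirrels", "dog"),
    ("small whiskered pet that purrs and climbs", "cat"),
    ("independent kitten purrs and naps in the sun", "cat"),
]

word_counts = {"dog": Counter(), "cat": Counter()}
for text, label in training:
    word_counts[label].update(text.split())

def classify(text: str) -> str:
    # Score each label by how often the input's words appeared
    # alongside it during training; highest score wins.
    scores = {
        label: sum(counts[w] for w in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(classify("a doodle is a curly-haired pet that barks"))  # → dog
```

Even this crude counter generalizes to a “doodle” it has never seen, because the surrounding words match its dog examples. That same property is exactly what breaks down with hate speech, as the next section shows.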

So one would think that if you fed a machine enough swear words and racial, religious, or sexual slurs, it would be able not only to detect but also to predict toxic content going forward. The problem is that a lot of hate speech uses very polite words, just as plenty of harmless content is loaded with swear words. Innocuous words such as “animals” or “parasites” can be charged with hate when directed at a specific group of people. Users engaging in hate speech might also misspell words or use symbols instead of letters, all aimed at preventing keyword-based filters from catching them.
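A naive keyword filter of the kind described fails in exactly these ways. In the sketch below, the word list and sample comments are invented; real systems use far larger lists, which doesn’t fix the underlying problem.

```python
# A naive keyword blocklist filter. The word list and comments are
# invented for illustration; bigger lists don't fix these failures.

BLOCKLIST = {"idiot", "moron", "trash"}

def keyword_flag(comment: str) -> bool:
    # Strip basic punctuation, lowercase, and check for any match.
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return bool(words & BLOCKLIST)

# Polite wording slips through even when the intent is hateful:
print(keyword_flag("People like you are animals and parasites"))  # False

# A simple misspelling defeats the filter too:
print(keyword_flag("what an id1ot"))                              # False

# While a friendly, harmless jab gets flagged:
print(keyword_flag("haha you absolute idiot, great goal!"))       # True
```

The filter misses the genuinely hateful comment and the obfuscated one, while flagging banter between friends: both false negatives and false positives, from a single sentence each.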

Furthermore, training the machine is still a process that involves humans, and consistency on what is offensive is hard to achieve. According to a study published by Kwok and Wang in 2013, there is a mere 33% agreement between coders from different races when tasked with identifying racist tweets.

In 2017, Jigsaw, a company operated by Alphabet, released Perspective, an API available to developers that uses machine learning to spot abuse and harassment online. Perspective assigns comments a “toxicity score” based on their similarity to comments human raters had labeled toxic, and uses that score to predict how toxic new content is. The results were not very encouraging. According to New Scientist:

“you’re pretty smart for a girl” was deemed 18% similar to comments people had deemed toxic, whereas “I love Fuhrer” was 2% similar.
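Why would a similarity-based score misfire like that? A crude stand-in makes it easy to see. The sketch below is emphatically not Perspective’s actual model; it just scores a comment by its maximum word overlap (Jaccard similarity) with a couple of invented “labeled toxic” examples.

```python
# Simplified stand-in for similarity-based toxicity scoring. This is
# NOT Perspective's real model; the labeled examples are invented.
# A comment is scored by its best Jaccard word overlap with comments
# that humans (hypothetically) labeled toxic.

LABELED_TOXIC = [
    "girls are too dumb for this",
    "you people are so stupid",
]

def words(text: str) -> set:
    return set(text.lower().split())

def toxicity(comment: str) -> float:
    c = words(comment)
    return max(
        len(c & words(t)) / len(c | words(t)) for t in LABELED_TOXIC
    )

print(round(toxicity("girls are great at this"), 2))          # 0.38
print(round(toxicity("people like you belong in cages"), 2))  # 0.22
```

The benign comment outscores the hostile one simply because it shares more surface vocabulary with the training examples. Real models are far more sophisticated, but the New Scientist result above suggests they still inherit a version of this surface-similarity problem.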

The “Feed It the Right Data” Argument

So it seems it is not about the amount of data but rather about the right kind of data. But how do we get it? Haji Mohammad Saleem and his team at McGill University in Montreal tried a different approach.

They focused on content from Reddit, which they defined as “a major online home for both hateful speech communities and supporters for their target groups.” Access to a large amount of data from groups now banned on Reddit allowed the McGill team to analyze the linguistic practices hate groups share, avoiding the need to compile word lists while providing plenty of data to train and test the classifiers. Their method resulted in fewer false positives, but it is still not perfect.
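The core idea, learning which language is distinctive to hateful communities rather than starting from a hand-compiled word list, can be sketched crudely. The two one-line “corpora” below are invented placeholders, not the team’s Reddit data, and the ratio used is far simpler than their actual method.

```python
from collections import Counter

# Rough sketch of community-based detection: learn which words are
# distinctive to a known hateful community relative to an ordinary
# one, instead of hand-writing a word list. Corpora are invented.

hateful_corpus = "they are vermin they should leave they ruin everything".split()
neutral_corpus = "they are fans they should win they love everything".split()

hate_counts = Counter(hateful_corpus)
neutral_counts = Counter(neutral_corpus)

def distinctiveness(word: str) -> float:
    """Relative frequency ratio between the two corpora, with
    add-one smoothing so unseen words don't divide by zero."""
    h = (hate_counts[word] + 1) / (len(hateful_corpus) + 1)
    n = (neutral_counts[word] + 1) / (len(neutral_corpus) + 1)
    return h / n

print(distinctiveness("vermin") > distinctiveness("they"))  # True
```

Words like “vermin” surface automatically because the hateful community uses them and the neutral one doesn’t, while common words score neutrally; no one had to put “vermin” on a list in advance.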

Some researchers believe that AI will never be able to be totally effective in catching toxic language as this is subjective and requires human judgment.

Minimizing Human Bias

Whether humans will be involved in coding or will remain mostly responsible for policing hate speech, it is really human bias that I am concerned about. This is different from the question of approach consistency, which considers cultural, language, and context nuances. This is about humans’ personal beliefs creeping into their decisions when they are coding the machines or monitoring content. Try searching for “bad hair” and see how many images of beautifully crafted hair designs for Black women show up in your results. That, right there, is human bias creeping into an algorithm.

This is precisely why I have been very vocal about the importance of representation across tech overall, but in particular when talking about AI. If we have fair representation of gender, race, religious and political beliefs, and sexual orientation among the people trusted to teach the machines we will entrust with different kinds of tasks, we will have a better chance at minimizing bias.

Even when we eliminate bias to the best of our ability, we would be deluded to believe Zuckerberg’s rosy picture of the future. Hate speech, fake news, and toxic behavior change all the time, making the job of training machines a never-ending one. Ultimately, accountability rests with platform owners and with us as users. Humanity needs to save itself, not wait for AI.