Despite PC Market Consolidation, Buyers Still Have Plenty of Options

on May 25, 2018
Reading Time: 3 minutes

The traditional PC market’s long-term decline gets plenty of press, but one of the less-talked-about trends inside the market’s slide is the massive share consolidation among a handful of players. A typical side effect of this type of market transformation is fewer options for buyers, as larger brands swallow up smaller ones or force them out of business. While this has certainly occurred, we’ve seen another interesting phenomenon appear to help offset it: new players entering this mature market.

Top-Five Market Consolidation
The traditional PC market consists of desktops, notebooks, and workstations. Back in 2000, the market shipped 139 million units worldwide, and the top five vendors constituted less than 40% of the total market. That top five included Compaq, Dell, HP, IBM, and NEC. Fast forward to 2010, near the height of the PC market, and shipments had grown to about 358 million units worldwide for the year. The top five vendors were HP, Dell, Acer, Lenovo, and Toshiba, and they represented 57% of the market. Skip ahead to 2017, and the worldwide market had declined to about 260 million units. The top five now represent about 74% of the total market and consist of HP, Lenovo, Dell, Apple, and Acer.

Market consolidation in mature markets such as Japan, Western Europe, Canada, and the United States has been even more pronounced. In 2017 the top five vendors in Japan represented 77% of shipments; in Western Europe, it was 79%; in Canada, it was 83%; and in the U.S. it was 89%. Markets traditionally considered emerging, however, weren’t far behind. In Asia Pacific (excluding Japan), the top five captured 69% of the market in 2017; in Latin America, it was 71%; and in Central and Eastern Europe plus the Middle East and Africa it was 76%.

Category Concentration
If we drill down into the individual device categories at the worldwide level, we can see that desktops remain the area of the market with the least share concentration among the top five in a given year. In 2000 the top five represented 38% of the market; in 2010 it was 46%; and in 2017 it was 61%. Desktops continue to be where smaller players, including regional system integrators and value-added retailers, can often still compete with the larger players. In notebooks, the consolidation has been much more pronounced. In 2000 the top five represented 57% of the market; in 2010 it was 67%; and in 2017 it was 82%. Interestingly, in the workstation market, which grew from 900,000 units in 2000 to 4.4 million in 2017, the top five have always been dominant, with greater than 99% market share in each period.

Another trend visible inside each category over the years is the evolution of average selling prices. At a worldwide level, the average selling price of a notebook in 2000 was $2,176; in 2010 it declined to $739; and by 2017 it had increased to $755. During those same periods, the desktop went from $1,074 to $532, to $556. Workstations were the only category whose ASP continued to decline, dropping from $3,862 to $2,054 to $1,879. I’d argue consolidation itself has played a relatively minor role in the ASP increases in notebooks and desktops, as competition in the market remains fierce. The larger reason for these increases is that both companies and consumers now expect to hold on to their PCs longer than they have in the past, and as a result, they’re buying up to get better quality and higher specifications.
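To make the math concrete, here’s a quick back-of-the-envelope check of those percentage swings, using only the ASP figures cited above:

```python
# Percent change in worldwide ASPs, 2000 -> 2010 -> 2017, using the
# figures cited in this article (no new data).
asp = {
    "notebook":    (2176, 739, 755),
    "desktop":     (1074, 532, 556),
    "workstation": (3862, 2054, 1879),
}

for category, (y2000, y2010, y2017) in asp.items():
    first = (y2010 - y2000) / y2000 * 100
    second = (y2017 - y2010) / y2010 * 100
    print(f"{category:12s} 2000->2010: {first:+.1f}%   2010->2017: {second:+.1f}%")

# notebook     2000->2010: -66.0%   2010->2017: +2.2%
# desktop      2000->2010: -50.5%   2010->2017: +4.5%
# workstation  2000->2010: -46.8%   2010->2017: -8.5%
```

The output confirms the pattern: notebooks and desktops ticked back up after 2010, while workstation ASPs kept falling.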

New Entrants in the Market
All of this market consolidation might lead you to believe that today’s PC buyers have fewer choices than they did in the past. And this is true, to some extent. A consumer browsing the aisles at their local big box store or an IT buyer scanning their online options will undoubtedly notice that many of the vendors they purchased from in the past are no longer available. But the interesting thing is that a handful of new players have moved in, and many are putting out very good products.

There’s Google’s own Pixelbook, which demonstrates just how good the Chromebook platform can be. Microsoft continues to grow its product line, now offering notebooks and desktops in addition to its Surface detachable line, showcasing the best of Windows 10. And there are the mobile phone vendors such as Xiaomi and Huawei, each offering notebook products, with the latter in particular fielding a very good high-end product. It’s also notable that none of these vendors has targeted the high-volume, low-margin area of the market. All are shipping primarily mid to high-end products in relatively low volumes.

As a result, none of these newer entrants has come close to cracking the top five in terms of shipments in the traditional PC market. But I’d argue that their presence has helped keep existing vendors motivated and has increased competition. Consequently, I’d also say that the top five vendors are producing some of their best hardware in years.

As the traditional PC market decline slows and eventually stabilizes (the first quarter of 2018 was flat year over year), competition will intensify, and consolidation is likely to continue. It will be interesting to see how these newer vendors compete to grow their share, and how the old guard will fight to gobble up even more, utilizing their massive economies of scale. Regardless, the result will be a boon for buyers, especially those in the growing premium end of the market, who should continue to have plenty of good hardware options from which to choose.

Training My iPhone

on May 24, 2018
Reading Time: 4 minutes

I’ve often referred to Apple’s iOS as a learning operating system. When you look beneath the surface, you see iOS starting to adapt to your habits, constantly learning about your behavior. A simple example: if you go to a specific location every day at the same time, the iPhone will give you a notification saying how long it will take to get there. For people who go to an office every day, you don’t need to tell it where your office is, as the device simply recognizes your patterns. The Siri suggested apps similarly look for patterns of location or time of day where you use certain apps and give you quick access to apps you use regularly. These are two simple examples of how the iPhone, and the underlying iOS, is learning about the habits and behaviors of its owner.
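For illustration only, here’s a minimal sketch of how this kind of pattern learning could work, assuming a simple frequency-counting model; Apple hasn’t published how iOS actually implements it, so every name below is hypothetical:

```python
from collections import Counter
from datetime import datetime
from typing import Optional

class HabitModel:
    """Toy habit learner: count visits bucketed by place, weekday, and hour."""

    def __init__(self, threshold: int = 5):
        self.visits = Counter()      # (place, weekday, hour) -> visit count
        self.threshold = threshold   # visits required before trusting a pattern

    def record_visit(self, place: str, when: datetime) -> None:
        # Bucketing by weekday and hour lets "office, weekdays, 9am"
        # emerge as a distinct, recognizable pattern.
        self.visits[(place, when.weekday(), when.hour)] += 1

    def predicted_destination(self, now: datetime) -> Optional[str]:
        # Suggest the most-visited place for this weekday/hour slot,
        # but only once it has been seen often enough.
        candidates = [
            (count, place)
            for (place, day, hour), count in self.visits.items()
            if day == now.weekday() and hour == now.hour and count >= self.threshold
        ]
        return max(candidates)[1] if candidates else None
```

Feed it a few weeks of visits and it starts predicting the office every weekday morning, with no one ever telling it where the office is.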

Dell Cinema Proves PC Innovation Still Vital

on May 24, 2018
Reading Time: 4 minutes

The future of the consumer PC needs to revolve around more than just hardware specifications. Which Intel or AMD processor powers the system, and whether an NVIDIA or Radeon graphics chip is included, are important details to know, but OEMs like Dell, HP, Lenovo, and Acer need to focus more on the end-point consumer experiences that define the products they ultimately hand to customers. Discrete features and capabilities are good check-box items, but in a world of homogenous design and platform selection, finding a way to differentiate your solution in a worthwhile manner is paramount.

This is a trend I have witnessed at nearly every stage of this product vertical. In the DIY PC space, motherboard companies faltered when key performance features (like the memory controller) were moved out of their control and onto the processor die itself. This meant the differences between motherboards were nearly zero in terms of what a user actually felt, and even in what raw benchmarks would show. The market has since evolved to a features-first mentality, focusing on visual aesthetics and add-ons like RGB lighting as much as, or more than, performance and overclocking.

For the PC space, the likes of Dell, HP, and Lenovo have been fighting this battle for a number of years. There are only so many Intel processors to pick from (though things are a bit more interesting with AMD back in the running) and only so many storage solutions, etc. When all of your competition has access to the same parts list, you must innovate in ways outside what many consider to be in the wheelhouse of PCs.

Dell has picked a path for its consumer product line that attempts to focus consumers on sets of technology that can improve their video watching experience. Called “Dell Cinema”, this initiative is a company-wide direction, crossing different product lines and segments. Dell builds nearly every type of PC you can imagine, many of which can and do benefit from the tech and marketing push of Dell Cinema, including notebooks, desktop PCs, all-in-ones, and even displays.

Dell Cinema is flexible as well, meaning it can be integrated with future product releases or even expanded into the commercial space, if warranted.

The idea behind this campaign, one of both marketing and technology innovation, is to build better visuals, audio, and streaming hardware that benefit the consumer as they watch TV, movies, or other streaming content. Considering the size of the market and the time spent streaming media on PCs and mobile devices, Dell’s aggressive push and emphasis here is well placed.

Dell Cinema consists of three categories: CinemaColor, CinemaSound, and CinemaStream. None of these have specific definitions today. With the range of hardware the technology is being implemented on, from displays to desktops to high-end XPS notebooks, there will be a variety of implementations and quality levels.

CinemaColor focuses on the display for both notebooks and discrete monitors. Here, Dell wants to enhance color quality and provide displays with better contrast ratios, brighter whites, and darker black tones. Though Dell won’t be able to build HDR-level displays into every product, the goal is to create screens that are “optimized for HDR content.” Some of the key tenets of the CinemaColor design are 4K screen support, thinner bezels (like the company’s Infinity Edge design), and support for Windows HD Color options.

For audio lovers, Dell CinemaSound improves audio on embedded speakers (notebooks, displays) as well as connected speakers through digital and analog outputs, including headphones. Through a combination of hardware selection and in-house software, Dell says it can provide users with a more dynamic sound stage with clearer highs, enhanced bass, and higher volume levels without distortion. The audio processing comes from Dell’s MaxxAudio Pro software suite that allows for equalization control, targeting entertainment experiences like movies and TV.

The most technically interesting of the three might be CinemaStream. Using specific hardware selected with this in mind, along with a software and driver suite tuned for streaming, this technology optimizes the resources on the PC to deliver a consistently smooth streaming video experience. Intelligent software identifies when you are streaming video content and then prioritizes it on the network stack as well as in hardware utilization. This equates to less buffering and stuttering in content, and potentially better resolution playback with more bandwidth available to the streaming application.
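Dell hasn’t published how CinemaStream is implemented, so as a hedged illustration of the general technique only, here’s one common way software can ask the network stack to prioritize a connection: tagging it with a DSCP class that QoS-aware routers and Wi-Fi gear may honor (the host name below is hypothetical):

```python
import socket

# DSCP AF41 (a class commonly used for multimedia streaming),
# shifted into position within the IP TOS byte.
AF41_TOS = 34 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask the OS to mark this socket's packets; support varies by platform,
# and the network must be configured to honor the marking.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, AF41_TOS)
sock.connect(("video.example.com", 443))  # hypothetical streaming host
# From here, QoS-aware equipment can queue this traffic ahead of bulk
# transfers, which is the kind of effect CinemaStream is aiming for.
```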

These three features combine to form Dell Cinema. Though other OEMs and even component vendors have programs in place to tout capabilities like this for displays or sound, only Dell has succeeded in creating a singular, easily communicated brand around them.

Though it seemingly wasn’t possible in the first rollout, I would like to see Dell better communicate the details of the technology behind Dell Cinema. Or better yet, let’s place specific lines in the sand: CinemaColor would mean any display (notebook or monitor) has a certain color depth rating or above, or a certain HDR capability or above, and so on. The same goes for audio and streaming; Dell should look to make these initiatives more concrete. This would provide more confidence to consumers, who would no longer have to worry about nebulous language.

While great in both theory and in first-wave implementations, something like Dell Cinema can’t be a one-trick pony for the company. Dell needs to show iteration and plans for future improvement, not only for Dell Cinema to be taken seriously, but for any other similar program that might develop in the future. Trust is a critical component of marketing and technology initiatives like this one, and consumers are quick to shun any product or company that abandons them.

The Foreshadowing of Sony’s Pivot

on May 23, 2018
Reading Time: 4 minutes

I remember when Apple was near its bottom. Investors, pundits, analysts, and many commentators suggested that Sony should buy Apple. At the time, Sony was the biggest consumer electronics brand around, and nearly every category the company entered had tremendous success. There was a culture of innovation, attention to detail in hardware design, and a laser focus on quality. In many ways, Apple and Sony were extremely similar, but Apple was not a consumer electronics company at the time; it was just a personal computer company making Macs.

Esports must do Right by Female Athletes

on May 23, 2018
Reading Time: 4 minutes

Earlier this year, I was sitting at a Dell press conference at CES when Frank Azor, one of the co-founders of Alienware and now responsible for the gaming and XPS business at Dell, announced a collaboration with Team Liquid to build the first two Alienware training facilities for esports. I was not ashamed to admit on Twitter that I had no idea gaming had grown up so much as to become comparable to an Olympic sport.

I had witnessed the rise of game streaming through my own kid, who spends as much time watching people play Minecraft as she does playing. But I had no idea that many gamers in the world train that way and earn a living. I am pretty sure she does not know either!

While esports has been around for decades, it has taken on a truly global role over the past ten years, and over the past couple it has started to look more and more like traditional sports, with significant investments and broadcasting interest from channels such as ESPN. John Lasker, ESPN’s VP of programming, compared adding esports to the network’s earlier move to add poker, something nobody questions today but that initially met with some skepticism.

There is still a lot of work to be done to change the mass-market perception of what esports athletes look like, and to explain why one should think of them as sportspeople with unique capabilities rather than kids sitting in front of a PC in a bedroom eating lousy food and drinking soda. Esports is becoming so big that it will be an official medal sport at the 2022 Asian Games in China. The Olympic Council of Asia (OCA) announced a partnership with Alisports, the sports arm of Chinese online retail giant Alibaba, to introduce esports as a demonstration sport at next year’s games in Indonesia, with full-fledged inclusion in the official sporting program at the Hangzhou Games in 2022. The OCA said the decision reflects “the rapid development and popularity of this new form of sports participation among the youth.”

The Esports Audience and Athletes

According to a recent GWI report, one in three esports fans is aged between 20 and 25. Overall, esports fans now represent around 15% of internet users, and 71% of them are male. This mirrors the pro-gamer crowd quite well. Yet, gaming overall has not been so male-dominated for quite some time. Already in 2012, a study by the Entertainment Software Association showed that gamers were split 53% male and 47% female.

So why are we not seeing more female pro-gamers? The answer is pretty straightforward: culture and gatekeeping. Female gamers are often made to feel they do not belong, that they do not have the skills. I am sure some of you will quickly run through a list of heroines like Cortana, Lara Croft, and Sonya Blade, but as you continue and pay attention to their outfits, you quickly spot one of the problems: objectification.
As pro-gaming added streaming as a big part of the experience, as well as a revenue opportunity, being a pro gamer got even harder for women, as they are often harassed. To stay in the game, many female players “hide” by avoiding voice chat, cutting themselves out of the streaming revenue opportunity, which in turn limits their ability to grow their followers and showcase their skills. It is a vicious circle that is hard to break.

A Big Business for Some

Newzoo estimates global esports revenues will reach $905.6 million in 2018, an increase of more than $250 million compared to 2017. North America will generate the most revenue, contributing 38% of the global total. Sponsorship is the highest-grossing esports revenue stream worldwide, contributing $359.4 million in 2018, up from $234.6 million in 2017.

This fantastic growth does not seem to benefit all, however. Esports, like many traditional sports, has a pay gap problem. Earlier in the year, the winners of China’s Lady Star League took home roughly $22,000, while this year’s LoL spring split champions took home $100,000; second place, $50,000; third place, $30,000; and fourth place, $20,000.

In traditional sports, we have some great examples, like Wimbledon, where the organizers have offered equal prize money for over ten years now. The Australian Open followed suit. Looking at tennis as a whole, however, the gap persists, and the same is true for soccer, golf, and cricket. For some, the issue is deeply rooted in the rules of the game, which favor men. Think about advertising, which is where much of the money comes from. Female esports chases the same sponsors and TV channels as male sports, but because of the male-biased demographics of those channels, it does not reach viewing figures similar to those of male sports. More recently, sponsors have started to realize that they can reach a pretty good demographic of women for a relatively low price, and that those women are more often than not the decision makers on many household purchases.

This week, Epic Games announced it would inject $100 million into Fortnite esports competition for the 2018-2019 season. The money, according to a company blog, will fund prize pools for Fortnite competitors in a more inclusive way that focuses on the joy of playing and watching the game, and will not be limited to the top players.

While no details have been shared, I am really hoping we will see some of the money go to support female-only tournaments with prizes that match what we see in the male tournaments. Female-only tournaments will not only give access to money but, equally important, will provide a safe space for female athletes to compete without feeling isolated. By all means, I do not expect Epic Games to be a silver bullet for all that is wrong with the lack of female empowerment in esports, but wouldn’t it be good if the joy of playing they talked about could involve some effort in making female athletes more welcome?

I saw some people pointing out that girls should be encouraged to game, that they should be included. In a way, they are making it sound like esports has a pipeline issue, as tech does. Yet, when I look at my daughter’s 4th-grade class, I see boys and girls gaming together and boys acknowledging the skills of their top player, who happens to be a girl. The same can be said about soccer or basketball. So it seems to me that, for once, we do not have a pipeline issue, not until the kids grow up and are told they cannot play together!

Apple and Google’s Quest to Quench Fake News

on May 22, 2018
Reading Time: 5 minutes

When I started out as a tech analyst in 1981, there were no PC analysts. I had come to Creative Strategies, which at that time was the tech arm of a global consulting firm called Business International, to cover mini-computers and sell reports on that technology. But when IBM introduced the IBM PC in 1981, I was asked to switch over and add PCs to my workload. Consequently, I became known as a PC analyst, and since PCs were new and gaining attention, all of a sudden I started getting calls from traditional business reporters to help them make sense of this new PC trend.

The World of AI Is Still Taking Baby Steps

on May 22, 2018
Reading Time: 3 minutes

Given all the press surrounding it, it’s easy to be confused. After all, if you believe everything you read, you’d think we’re practically in an artificial intelligence (AI)-controlled world already, and it’s only a matter of time before the machines take over.

Except, well, a quick reality check will easily show that that perspective is far from the truth. To be sure, AI has had a profound impact on many different aspects of our lives—from smart personal assistants to semi-autonomous cars to chatbot-based customer service agents and much more—but the overall real-world influence of AI is still very modest.

Part of the confusion stems from a misunderstanding of AI. Thanks to a number of popular, influential science fiction movies, many people associate AI with a smart, broad-based intelligence that can enable something like the nasty, people-hating world of Skynet from the Terminator movies. In reality, however, most AI applications of today and the near future are very practical—and, therefore, much less exciting.

Leveraging AI-based computer vision on a drone to notice a crack on an oil pipeline, for example, is a great real-world AI application, but it’s hardly the stuff of AI-inspired nightmares. Similarly, there are many other examples of very practical applications that can leverage the pattern recognition-based capabilities of AI, but do so in a real-world way that not only isn’t scary, but frankly, isn’t that far advanced beyond other types of analytics-based applications.

Even the impressive Google Duplex demos from their recent I/O event may not be quite as awe-inspiring as they first appeared. Amongst many other issues, it turns out Duplex was specifically trained just to make haircut appointments and dinner reservations—not doctor’s appointments, coordinating a night out with friends, or any of the multitude of other real-world scenarios for voice assistant-driven phone calls that the demo implied were possible.

Most AI-based activities are still extraordinarily literal. So, if there’s an AI-based app that can recognize dogs in photos, for example, that’s all it can do. It can’t recognize other animal species, let alone distinct varieties, or serve as a general object detection and identification service. While it’s easy to presume that applications that can identify specific dog species offer similar intelligence across other objects, it’s simply not the case. We’re not dealing with a general intelligence when it comes to AI, but a very specific intelligence that’s highly dependent on the data that it’s been fed.
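A minimal sketch makes the point. Assume a hypothetical dog-breed classifier with a fixed label set; the scores here are random stand-ins for a real network’s output:

```python
import random

DOG_BREEDS = ["beagle", "border collie", "dachshund", "poodle"]

def model_scores(image_path: str) -> list:
    # Stand-in for a trained model's per-label confidence scores;
    # a real model would compute these from the image's pixels.
    return [random.random() for _ in DOG_BREEDS]

def classify(image_path: str) -> str:
    # The model must answer with a dog breed even for a photo of a
    # cat or a car: it has no notion of "none of the above" unless
    # it was explicitly trained with such a class.
    scores = model_scores(image_path)
    best = max(range(len(DOG_BREEDS)), key=lambda i: scores[i])
    return DOG_BREEDS[best]

print(classify("photo_of_a_cat.jpg"))  # confidently wrong, by construction
```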

I point this out not to denigrate the incredible capabilities that AI has already delivered across a wide variety of applications, but simply to clarify that we can’t think about artificial intelligence in the same way that we do about human-type intelligence. AI-based advances are amazing, but they needn’t be feared as a near-term harbinger of crazy, terrible, scary things to come. While I’m certainly not going to deny the potential to create some very nasty outcomes from AI-based applications in a decade or two, in the near and medium-term future, they’re not only not likely, they’re not even technically possible.

Instead, what we should concentrate on in the near-term is the opportunity to apply the very focused capabilities of AI onto important (but not necessarily groundbreaking) real-world challenges. This means things like improving the efficiency or reducing the fault rate on manufacturing lines or providing more intelligent answers to our smart speaker queries. There are also more important potential outcomes, such as more accurately recognizing cancer in X-rays and CAT scans, or helping to provide an unbiased decision about whether or not to extend a loan to a potential banking customer.

Along the way, it’s also important to think about the tools that can help drive a faster, more efficient AI experience. For many organizations, that means a growing concentration on new types of compute architectures, such as GPUs, FPGAs, DSPs, and AI-specific chip implementations, all of which have been shown to offer advantages over traditional CPUs in certain types of AI training and inferencing-focused applications. At the same time, it’s critically important to look at tools that can offer easier, more intelligible access to these new environments, whether that be Nvidia’s CUDA platform for GPUs, National Instruments’ LabVIEW tool for programming FPGAs, or other similar tools.
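As one concrete, hedged illustration of this tooling trend, frameworks such as PyTorch let the same model code target a CPU or a GPU with a one-line device selection (a toy example, not tied to any specific vendor announcement above):

```python
import torch

# Pick the best available device; the rest of the code is unchanged
# whether it ends up on a CPU or a CUDA-capable GPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(128, 10).to(device)  # toy model
batch = torch.randn(32, 128, device=device)  # toy input batch
output = model(batch)                        # runs on whichever device was picked
print(output.shape, device)
```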

Ultimately, we will see AI-based applications deliver an incredible amount of new capability, the most important of which, in the near term, will be to make smart devices actually “real-world” smart. Way too many people are frustrated by the lack of “intelligence” in many of their digital devices, and I expect many of the first key advances in AI to be focused on these basic applications. Eventually, we’ll see a wide range of very advanced capabilities as well, but in the short term, it’s important to remember that the phrase artificial intelligence actually implies much less than it first appears.

Apple, Amazon, Google, and the New Subscription Bundles

on May 21, 2018
Reading Time: 3 minutes

A very interesting bit of research came out about Amazon’s ability to drive third-party subscription content. Variety published data from TDG, which I sense is directionally true, about how Amazon is succeeding in driving subscriptions to direct-to-consumer services from HBO, Showtime, and Starz in particular. The data comes off suggesting a somewhat more dominant position for Amazon in driving video subscriptions, and I have to imagine Apple is not that far off in driving subscriptions to a vast number of services like Netflix, HBO Now, Hulu, etc., as well. This article highlights a bigger trend and a potential shift in who may control the content bundle ecosystem going forward.

Why the Maker Movement is Critical to Our Future

on May 21, 2018
Reading Time: 4 minutes

This past weekend the granddaddy of Maker Faires was held at the San Mateo Event Center, and close to 100,000 people went to check out all types of maker projects. When Dale Dougherty, the founder of Make Magazine, started his publication, the focus was really on STEM and tech-based ideas.

In the early days of the magazine, you would find all types of projects for making your own PCs, robots, 3D-printed designs, and the like. It reminded me a bit of my childhood, when we had erector sets, Tinkertoys, and Lincoln Logs: toys that were educational and, in their own way, tried to get kids interested in making things in hopes it would help guide them to future careers.

Over time, the Maker Movement has evolved well beyond just STEM projects and now includes just about any do-it-yourself project you could think of. At the Faire this year I saw quilting demos, a bee-keeping booth, and an area teaching you how to ferment foods, alongside stalls with laser cutters, 3D printers, wood lathes, robotic kits, and a lot of other STEAM-based items and ideas.

Going to a Maker Faire is fun and fascinating in many ways, but the thing I love most is watching the excited faces of the boys and girls who attend. Seeing them go from booth to booth, gathering ideas that could help them create their own maker projects, is rewarding in itself.

The Maker Movement comes at one of the most critical times in our history. When I was in junior high and high school in the 1960s, the world we were being prepared for had little to do with tech. My elective options were auto shop, drafting, and metal shop, and I even took a home economics class. These courses were designed to prepare us for blue-collar jobs. Of course, these types of jobs still exist today, but in the information age, the majority of jobs now and in the future will be focused more and more on skills related to math, engineering, and science.

The Maker Movement, and especially the Maker Faires that are now held all over the world, serve as an essential catalyst to help kids get interested in STEM and STEAM. They are designed to instill a real interest in pursuing careers in technology and the sciences, as well as to introduce kids to the idea that anyone can be a “maker.”

At this year’s event in the Bay Area, the Maker Faire held a college and career day on Friday morning, before the Faire itself opened that afternoon. I had the privilege of moderating a panel about career journeys, with five panelists telling their stories about how they got to where they are in their careers today.

This was the Maker Faire’s first college and career day, and it was very successful. The various speakers Mr. Dougherty brought in to talk to hundreds of students shared all types of stories about what got them into STEM-related jobs and offered valuable career advice to those who attended this special day.

Of the many speakers at the career day event, two stood out to me. The first was Sarah Boisvert, the founder of the Fab Lab Hub. Ms. Boisvert shared that when President Trump asked IBM CEO Ginni Rometty for her thoughts on the future of jobs in America, she told him that “we do not need more coal workers; what we need are ‘New Collar Workers,’” referring to the current and future demand for a technically skilled labor force to meet the needs of America’s job market. Ms. Boisvert has written a book entitled “The New Collar Workforce: An Insider’s Guide to Making Impactful Changes to Manufacturing and Training.”

An overview of the book states:

The “new collar” workers that manufacturers seek have the digital skills needed to “run automation and software, design in CAD, program sensors, maintain robots, repair 3D printers, and collect and analyze data,” according to the author. Educational systems must evolve to supply Industry 4.0 with new collar workers, and this book leads the reader to innovative programs that are recreating training programs for a new age in manufacturing.
The author’s call to action is clear: “We live in a time of extraordinary opportunity to look to the future and fundamentally change manufacturing jobs but also to show people the value in new collar jobs and to create nontraditional pathways to engaging, fulfilling careers in the digital factory. If the industry is to invigorate and revitalize manufacturing, it must start with the new collar workers who essentially make digital fabrication for Industry 4.0 possible.”
This book is for anyone who hires, trains, or manages a manufacturing workforce; educates or parents students who are searching for a career path; or is exploring a career change.

Ms. Boisvert told the students in the audience that when she hires people, the first thing she looks for is if they have solid problem-solving skills. She sees that as being a fundamental part of “New Collar” jobs.

The other speaker who stood out to me was on my panel. Janelle Wellons is a young African American woman who initially wanted to be a theoretical mathematician. Here is her bio:

Janelle Wellons graduated from the Massachusetts Institute of Technology with a B.S. in Aerospace Engineering in 2016. After graduating, she moved from her home in New Jersey to Southern California to work at the NASA Jet Propulsion Laboratory (JPL) in Pasadena. At JPL, Janelle works as an instrument operations engineer on the Lunar Reconnaissance Orbiter, the Earth-observing Multi-Angle Imager for Aerosols, and previously on the Saturnian Cassini mission. Her job consists of creating the commands for and monitoring the health and safety of a variety of instruments ranging from visible and infrared cameras to a radiometer. She also serves on an advisory board for Magnitude.io, a nonprofit that creates project-based learning experiences designed for STEM education. When she isn’t working, you can find her playing video games, reading, enjoying the outdoors, and working on cool projects out in the Mojave.

As a young African American woman, she is an inspiration to kids of all ages and ethnic backgrounds, and she reminded me of Katherine Johnson, the woman in Hidden Figures who also worked for NASA and was instrumental in John Glenn’s Earth orbit in 1962.

As she spoke, I was watching the kids in the audience, and they were spellbound listening to her tell them that anyone can achieve their goals if they put their minds to it.

The Maker Movement and Maker Faires are critical to our future. Our world is changing rapidly. Job skills of the past need to be updated to meet the changing needs of a world driven by information and analytics, and manufacturing jobs will require new skills to operate. If you get a chance to go to a Maker Faire in your area, I highly recommend you check it out. You won’t be disappointed and, like me, will learn a lot and perhaps be inspired to become a maker yourself.

A Lot Needs to Happen Before Self-Driving Cars Are A Reality

on May 18, 2018
Reading Time: 3 minutes

Self-driving vehicles represent one of the most fascinating fields of technology development today. More than just a convenience, they have the potential to radically alter how we work and live, and even the essential layout of cities. Their development involves a coalition of many disciplines, a marriage of the auto industry’s epicenters with Silicon Valley, and is attracting hundreds of billions of dollars in annual investment globally. Exciting as all this is, I think the viability of a full-fledged self-driving car in the day-to-day real world is further off than many believe.

My ‘dose of reality’ is driven by two main arguments. First, I think that we’ve underestimated the technology involved in making true autonomous driving safe and reliable, especially in cities. Second, there are some significant infrastructure investments that are required to make our streets ‘self-driving ready’. This requires significant public sector attention and investment which, except for a select few places, is not happening yet.

Self-driving cars have already logged lots of miles and have amassed an impressive safety record, with a couple of notable and unfortunate exceptions. But most of this has occurred on test tracks, along the wide-open highways of the desert southwest, and in Truman Show-esque residential neighborhoods.

For a sense of the real world that the self-driving car has to conquer, I encourage you to visit my home town of Boston. Like most of the world, it’s not Phoenix or Singapore. It has narrow streets, an unintuitive, non-grid layout, and a climate where about half of the year’s days feature wet, snowy, or icy roads. Sightlines are bad, lane lines are faded, and pedestrians and cyclists compete for a limited amount of real estate. In other words, a fairly typical older city. How would a self-driving car do here?

To get even more micro, I’ll take you to a very specific type of intersection that has all of the characteristics designed to trump a self-driving vehicle. In this situation, the car has to make a left turn from a faded turn lane with no traffic light, and then cross a trolley track, two lanes of oncoming traffic, and a busy crosswalk. So we have poor lane markings, terrible sightlines, and pedestrians moving at an uneven pace and doing unpredictable things, before even getting into the wild cards of weather, glare, and so on. My heart rate quickens every single time I have to make this turn. I would want to see a car successfully perform this left turn a Gladwellian 10,000 times before I’d sign that waiver.

I’m sure each of you can provide countless examples of situations that would prompt the question, “Can a self-driving car really handle that?” It shows the complexity and the sheer number of variables involved in pulling this off. Think of all the minor decisions and adjustments you make when driving, particularly in a congested area. Rapid advancements in AI will help.

This is not to diminish the mammoth progress that has been made on the road to the self-driving vehicle. The technology is getting to the point where self-driving cars will be viable in many situations and contexts within the next five years. It’s that last 10-20% of spots and situations that will prove particularly vexing.

If we believe the self-driving car could be a game-changer over the next 20 years, I think we need to be doing a lot more thinking about the infrastructure required to support its development. We all get excited about the potential benefits self-driving/autonomous vehicles will usher in, such as changes to the entire model of car ownership, less congested roads, the disappearance of parking lots, etc. This exciting vision assumes a world where the self-driving car is already mainstream. But I think it’s a bit naïve with regard to the type of investment that is needed to help make this happen. This is going to require huge public sector involvement and dollars in many fields and categories. As examples: improvements to roads to accommodate self-driving cars (lane markings, etc.); deployment of sensors and all sorts of ‘smart city infrastructure’; a better ‘visual’ infrastructure; a new regulatory apparatus, and so on. And of course, we will need advanced mobile broadband networks, a combination of 5G and the vehicle-centric capabilities envisioned by evolving standards such as V2X, to make this a reality.

This will be a really exciting field, with all sorts of job opportunities. There’s the possibility of a game-changing evolution of our physical infrastructure not seen in 100+ years. But worldwide transportation budgets are still mainly business-as-usual, with sporadic hot pockets of cities hoping to be at the bleeding edge.

Getting to this next phase of the self-driving car will require a combination of pragmatism, technology development, meaningful infrastructure investment, and a unique model of public-private cooperation.


News You Might Have Missed, Week of May 18, 2018

on May 18, 2018
Reading Time: 5 minutes

Microsoft rumored to be planning Low-Cost Tablet

According to Bloomberg, Microsoft is rumored to be planning to add a low-end model to its Surface line. The new tablet line will feature a 10-inch screen and will be priced around $400. The tablets are expected to be about 20 percent lighter than the high-end models but will have around four fewer hours of battery life. Intel will supply the main processor and graphics chips for the devices, according to Bloomberg’s sources, who asked not to be identified because the plans aren’t public.

Ways to Think About Crypto’s Potential

on May 17, 2018
Reading Time: 5 minutes

I have purposely avoided writing about the crypto craze, mostly because of the full range of mixed opinions, many of them quite strong one way or another. Crypto is either the future or a fraud; apparently, there is no middle ground. I often get asked my opinion on what is going on with crypto, so I figured it was time to share a few points that will help us understand what is happening and what the future may hold for crypto and the blockchain.

HP Isn’t Standing Still as Top Market Share PC OEM

on May 17, 2018
Reading Time: 4 minutes

If you ask around the tech industry, you’ll hear stories about PC vendors and technology companies that lost their edge after becoming market leaders. Competition sharpens minds and accelerates research and design initiatives to gain that one foothold over the other guy that puts you firmly in the leading position. But too often that edge slips away as stagnation and complacency roll in.

With Q1 results from IDC available, HP continues to maintain market share leadership in the PC space, pulling in 20.9%, holding above Dell and Lenovo. Recent announcements also indicate that the company is attempting to avoid any stalling of growth by continuing to innovate and push forward with new designs and product programs.

The new philosophy at HP focuses on the “one-life” design ideal, where commercial and consumer users share hardware between personal and business use. For a younger generation that blurs the lines between work time and play time, having devices that can fill both roles and permit a seamless transition between those states is ideal.

Just this month, the company announced updates to its premium lines of notebooks and desktops in an attempt to showcase its ability to provide products fitting of both roles.

Perhaps the most interesting is the new HP Envy Curve, an all-in-one PC that is more than your typical tabletop design. The system’s internals are impressive but don’t do the rest of the design justice. Yes, a powerful 8th-gen Intel CPU, SSD storage, and 16GB of memory are requirements, but it’s the smaller touches that make the Envy Curve stand out.

In the base of the unit, HP has embedded four custom Bang & Olufsen speakers angled upward at 45 degrees to better direct audio to the consumer. All-in-ones have traditionally included the most basic of speaker implementations, but HP is hoping it can add value to the Curve and provide high-quality sound without the need for external hardware.

The curved 27-in or 34-in QHD display rests on a thin neck and is coupled with a wood-grain finish on the back, giving the PC a distinct aesthetic that few others in the computing market offer. If computing is under threat of commoditization, then customization and style will be key drivers of upgrades and purchases.

Two other innovations help the Envy Curve stand out. The base includes an embedded wireless Qi charging ring, meaning you can charge your phone without the annoyance of cables or USB ports, maintaining a clean footprint. HP has also integrated Amazon Alexa support, giving PC users access to the most popular digital assistant in the home and an alternative to Cortana on Windows 10. It all adds up to a unique product for a shifting desktop landscape.

Though the Envy Curve is part of it, the Envy family is better known for its notebook line, which rests between the company’s budget-minded Pavilion and the ultra-high-end Spectre options. Attempting to further drive the premium notebook space, where HP gained 3.2 share points just last quarter, the Envy lineup will be seeing a host of new features and options courtesy of the innovation started with Spectre.

These design changes include thinner bezels around the screen, shrinking them to get nearer to the edge-to-edge designs that have overtaken smartphones. HP was quick to point out that despite this move, it kept the user-facing camera at the top of the device, even though that means slightly wider bezels on that portion, to prevent the awkward angles of other laptops.

Sure View is a technology that makes displays unreadable to anyone viewing from off-angle, preserving the privacy of data and content; it is a nice addition stemming from the company’s business line. It can be enabled with the touch of a button and doesn’t require the consumer to semi-permanently enable it with a stick-on film.

Both the 13-in and 17-in Envy models will be using 1080p displays but have a unique lift hinge that moves the keyboard to an angle more comfortable for typing. HP was able to make the device slim and attractive but still maintain connectivity options users demand by implementing a jawed USB Type-A port.

The convertible Envy x360 13-in and 15-in improvements are similar, and both now offer AMD Ryzen APU processor options, giving consumers a lower-cost solution with a very different performance profile from the Intel processors.

HP EliteBooks, known for their enterprise capabilities, got some updates this month as well. The new EliteBook 1050 G1 is the first 15-in screen in the segment and includes pro-level features like Sure Click, Sure Start, and Sure View, all aimed at keeping commercial hardware secure and reliable. The EliteBook x360 1030 shrinks the device footprint by 10%, squeezes a 13-in screen into a form factor typical of 12-in models, and has a direct-sunlight-capable display that reaches brightness levels as high as 700 nits, perfect for contractors and sales teams that need to work outdoors.

To be fair and balanced, nothing that was announced amounts to a revolutionary shift, but attempting that in the mature PC space is nearly impossible to pull off. Design changes like thinner bezels, smaller footprints, brighter screens, and even Amazon Alexa integration do show that there is room left in the tank to tweak and perfect designs. HP is using its engineering and product teams to do just that, while trying to maintain the market share position it has earned over Dell and Lenovo.

For those in the space that thought the PC was dead and innovation was over, HP has a few devices it would like to show you.

The DOS Era of Virtual Assistants

on May 16, 2018
Reading Time: 3 minutes

Not a week goes by in which an industry conversation I have with a major tech company does not include virtual assistants. When talking about assistants like Alexa, Siri, Google Assistant, and Cortana, I often have to remind folks that we are still in the very early days. One of the ways I’ve started doing that is to talk about these assistants as if we are in the DOS era of computer interfaces.

Will the Gig Economy help Moms to have it All?

on May 16, 2018
Reading Time: 4 minutes

This past Sunday was Mother’s Day in the US and across many countries in Europe, including my home country of Italy. As I was waking up in a hotel room miles away from my family, I felt a whole bunch of emotions: sad I was not home, blessed to have a husband who supports me in my career, and extremely lucky to be in a job I love.

Thanks to the jet lag, I had plenty of time to think about my fellow moms and how much things have changed since I was growing up and my mom was a working mom. At the same time, some of the stigmas of being a working mom are still there. Whether you are working, like my mom did, to contribute to the family income, or because you want a career, some people still see you as not putting your children first. And if you are taking a break to be with your kids in their foundation years, you are dealing with the judgment of not putting yourself first. I thought of my circle of fellow moms and made a mental list of how many successful businesswomen I know, how many are the primary breadwinner in the family, and how many, now that the kids are grown up, would like to get back to work. It is a good, healthy mix of women who, no matter where they sit in the group, support one another.

The “Motherhood Penalty”

Whether you are a working mom or you are a mom who took time off to be with her kids as they grew up, I am sure you have stories about management taking for granted that you would not be giving one hundred percent after you gave birth, or assuming that if you were leaving your career, you had never been committed to it in the first place. If you have been lucky enough to have a supportive work environment, it might come as a surprise to hear about the “motherhood penalty.”

Data shows that being a woman is only part of the pay gap we currently see across so many segments. The Institute for Fiscal Studies has found that before having a child, the average female worker earns 10% to 15% less per hour than a male worker; after childbirth, that gap increases steadily, reaching 33% after around 12 years. This has financial and economic implications but also emotional ones. The “motherhood penalty” helps explain why women overall make 81 cents for every dollar a man earns. Conversely, research has shown that having children raises wages for men, even when correcting for the number of hours they work.

What is the Gig Economy?

Simply put, the gig economy is the new economy developing outside traditional work models, the one evolving beyond the constraints of conventional employment. Services enabled by the app economy have opened up opportunities for people to earn a living in a much more flexible work environment. While in Silicon Valley many participating in the gig economy do so out of necessity, to afford the high cost of living, which has drawn heavy criticism and accusations of exploitation, the concept is indeed one that opens up opportunity.

According to a recent study by the McKinsey Global Institute, up to 162 million people in the United States and Europe are involved in some form of independent work. Members of the gig economy, from ride shares to food deliveries to dog walking and child care services, are not employees of the company that pays them; rather, they are independent contractors. Instead of working 9-to-5 for a single employer, they leverage their advantages to maximize their earning opportunity while balancing it around their personal needs.

While, of course, many jobs in the gig economy do not include traditional benefits, they might be the best fit for moms returning to work.

Be Your Own Boss

Mothers returning to work are chronically underpaid and undervalued for their experience and ability. PwC’s November 2016 report on women returning to work found that nearly 65% of returning professional women work below their potential salary level or level of seniority.

According to new research, that gap hasn’t narrowed at all since the 1980s. And for some women, it’s even increased. The study found that when correcting for education, occupation and work experience, the pay gap for mothers with one child rose from 9% in the period between 1986 and 1995 to 15% between 2006 and 2014. For mothers with two kids, the gap remained steady at 13% and stayed at 20% for mothers with three or more kids. The researchers point to a lack of progress on family-friendly policies in the United States, such as paid parental leave and subsidized childcare. Other countries, including Sweden, have narrowed their gender pay gaps after instituting such laws.

Considering how little regulation and companies’ attitudes toward child care and parental leave have progressed, and accounting for the changes the workplace is undergoing to appeal to younger millennials, getting back in the game must be daunting for moms who took a break from their careers. The gig economy might offer them the best opportunity, not just in terms of flexibility but also in rediscovering what they want to do and earning the best money.

From marketing to payment methods to service delivery, technology advancements can make being your own boss much easier than it ever was. This option, of course, does not mean big companies are off the hook when it comes to improving the support moms get at work and closing the pay gap. All it means is that women returning to work after having kids no longer have to settle for a job that is not adequately paid or does not help them fulfill their full potential.

Adding AI and ML to Speech-to-Text and Language Translations Are Game Changers

on May 15, 2018
Reading Time: 3 minutes

At Google I/O, Sundar Pichai showed off an AI-based technology called Duplex, in which a computer called a restaurant in a natural human voice to make a reservation and interacted directly with the person taking reservations.

This particular AI announcement got a lot of coverage, and given its importance and the technology breakthrough it delivered, it deserved to be highlighted as one of the most important announcements coming out of this year’s Google developer conference. However, for those of us at the conference, it was clear that the theme of AI and machine learning was prevalent in all the products and services Google showed at the event.

Device Independence Becoming Real

on May 15, 2018
Reading Time: 5 minutes

For decades, compute devices and the tech industry as a whole were built on a few critical assumptions. Notably, that operating systems, platforms, applications, and even file formats were critical differentiators, which allowed companies to build products that offered unique value. Hardware products, software, and even services were all built in recognition of these differences and, in some instances, to bridge or overcome them.

Fast forward to today, and those distinctions are becoming increasingly meaningless. In fact, after hearing the forward-looking strategies of key players like Microsoft, Google, and Citrix at their respective developer and customer events of the past week, it’s clear the world of true device and platform independence is finally becoming real.

Sure, we’ve had hints at some of these developments before. After all, weren’t browser-based computing and HTML5 supposed to rid the world of proprietary OSes, applications, and file types? All you needed was a browser running on virtually any device, and you were going to be able to run essentially any application you wanted, open any file you needed, and achieve whatever information-based goal you could imagine.

In reality, of course, that utopian vision didn’t work out. For one, certain types of applications just don’t work well in a browser, particularly because of limitations in user interface and interaction models. Plus, it turned out to be a lot harder to migrate existing applications into that new environment, forcing companies to either rebuild from scratch or abandon their efforts. The browser/HTML5 world was also significantly more dependent on network throughput and centralized computing horsepower than most realized. Yes, our networks were getting faster, and cloud-based data centers were getting more powerful, but they still couldn’t compare to loading data from a local storage device into onboard CPUs.

Since then, however, there have been a number of important developments not just in core technologies, but also in business models, software creation methodologies, application delivery mechanisms, and other elements that have shifted the computing landscape in a number of essential ways. Key among them is the rise of services that leverage a combination of both on-device and cloud-based computing resources to deliver something that individuals find genuinely valuable. Coincident with this is the growing acceptance of paying for software, services, and other information on an ongoing basis, as opposed to a single one-and-done purchase, as was typically the case with software in the past.

Admittedly, many of these services do still require an OS-dependent application at the moment, but with the reduction of meaningful choices down to a few, it’s much easier to create the tools necessary to make the services available to an extremely wide audience. Plus, ironically, we are finally starting to see some of the nirvana promised by the original HTML5 revolution. (As with many things in tech—timing is everything….) Thanks to new cloud-based application models, the use of containers to break applications into reasonably-sized parts, the growth in DevOps application development methodologies, the rise in API usage for creating and plugging new services into existing applications, and the significantly larger base of programmers accustomed to writing software with these new tools and methods, the promise of truly universal, device-independent services is here.
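Here’s a minimal sketch of what that device independence looks like in practice: a service exposed as a plain HTTP API. The endpoint, payload, and key below are all hypothetical; the point is the pattern, since any device with a network stack consumes the service identically:

```python
import requests

API_URL = "https://api.example.com/v1/summarize"  # hypothetical endpoint

def summarize(text: str, api_key: str) -> str:
    # The "application" is just an HTTP call; the device making it
    # (laptop, phone, microcontroller gateway) is irrelevant.
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["summary"]
```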

In addition, though it may not appear that way at first glance, the hardware does still matter—just in different ways than in the past. At a device level, arguably, individual devices are starting to matter less. In fact, in somewhat of a converse to Metcalfe’s Law of Networks, O’Donnell’s Law of Devices says that the value of each individual digital device that you own/use decreases with the number of devices that you own/use. Clearly, the number of devices that we each interact with is going up—in some cases at a dramatic rate—hence the decreased focus on specific devices. Collectively, however, the range of devices owned is even more important, with a wider range of interaction models being offered along with a broader means of access to key services and other types of information and communication. In fact, a corollary of the devices law could be that the value of the device collection is directly related to the range of physical form factors, screen sizes, interaction models, and connectivity options offered to an individual.
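As a toy formalization of these two “laws” (a sketch for intuition only, not a rigorous model):

```python
def metcalfe_network_value(nodes: int) -> float:
    # Metcalfe: a network's value grows with the square of its nodes.
    return float(nodes ** 2)

def per_device_value(device_count: int, base: float = 100.0) -> float:
    # O'Donnell: each individual device matters less as you use more
    # of them (an assumed inverse relationship, for illustration).
    return base / device_count

def collection_value(form_factors: set) -> float:
    # Corollary: the collection's value tracks the range of distinct
    # form factors and interaction models, not the raw device count.
    return 100.0 * len(form_factors)

print(per_device_value(1), per_device_value(5))                   # 100.0 20.0
print(collection_value({"phone", "watch", "laptop", "speaker"}))  # 400.0
```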

The other key area for hardware is the amount and type of computing resources available outside of personally owned devices. From the increasing power and range of silicon options in public and private data centers powering many of these services, to the increasing variety of compute options available at the network edge, the role of computing power is simply shifting to a more invisible, “ambient” type role. Ironically, as more and more devices are offered with increasing computing power (heck—even ARM-based microcontrollers powering IoT devices now have the horsepower to take on sophisticated workloads), that power is becoming less visible.

So, does this mean companies can’t offer differentiated value anymore? Hardly. The trick is to provide the means to interconnect different pieces of this ambient computing background (as Microsoft CEO Satya Nadella said at Build last week, the world is becoming a computer) or to perform some of the specific services that are still necessary to bridge different aspects of this computing world. This is exactly what each of the companies mentioned at the beginning of this article discussed at their respective events.

Microsoft, for their part, described a world where the intelligent edge was growing in importance and how they were creating the tools, platforms, and services necessary to tie this intelligent edge into existing computing infrastructures. What particularly struck me about Microsoft’s approach is that they essentially want to serve as a digital Switzerland, brokering connections across a wide variety of what used to be competitive platforms and services. The message was a very far cry from the Microsoft of old that pushed hard to establish its platform as the one true choice. From enabling compelling, intriguing connections between their Cortana assistant and Amazon’s Alexa, to fully integrating Android phones into the Windows 10 experience, the company was clearly focused on bridging the gaps between devices.

At I/O, Google pushed a bit harder on some of the unique offerings and services on its platforms, but as a fundamentally cloud-focused company, it has been touting a device-independent view of the world for some time. Like Microsoft, Google also announced a number of AI-based services available on its Google Cloud that developers can tap into to create “smarter” applications and services.
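To make that concrete, here is a minimal sketch of calling one such Google Cloud AI service, the Cloud Vision API, from Python; it assumes the google-cloud-vision client library is installed and credentials are configured, and the image file name is a placeholder:

```python
# Minimal sketch: label detection via Google's Cloud Vision API.
# Assumes `pip install google-cloud-vision` and configured credentials;
# "photo.jpg" is a placeholder image file.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Ask the service to label the image's contents ("smarter" app behavior
# without training or hosting a model yourself).
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, round(label.score, 2))
```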

Last, but certainly not least, Citrix did a great job of laying out its vision for overcoming the platform and application divides that have existed in the workplace for decades, and the work it has done to get there. Through their new Citrix Workspace app, they presented a real-world implementation of essentially any app running on any device from any location. Though that concept is simple—and clearly fits within the device independence theme of this column—the actual work needed to achieve it is very difficult. Arguably, the company has been working toward this vision for some time, but what was compelling about their latest offering was the elegance of the solution they demonstrated and the thoroughness with which they covered the details.

A world that is less dependent on individual devices and more dependent on a collection of devices is very different from where we have been in the past. It is also, to be fair, not quite a reality just yet. However, it’s become increasingly clear that the limitations and frustrations associated with platform or application lock-in are going away, and we can look forward to a much more inclusive computing world.

The Missing Link in VR and AR

on May 14, 2018
Reading Time: 4 minutes

VR and AR are big buzzwords in the world of tech these days. At Tech.pinions we have been covering these technologies for over five years and have shared perspectives on significant AR and VR products when we feel they move the technology forward.

All of our team members have tried out or tested most of the AR and VR products on the market today, and at least in my case, I only see their value at the moment in vertical markets. This is especially true for VR. Apple and Google have tried to bring AR to a broader audience, but here too, AR delivered on a smartphone is still a novelty, most at home in games like Pokemon Go and in some vertical markets.

As I have written in multiple columns over the last year, I am excited about AR, especially after seeing some cool AR applications in the works that should be out by the end of the year. Although they are still delivered via a smartphone, ARKit and ARCore are giving software developers the tools to innovate on iOS and Android, and in that sense I see the possibility of broader interest in AR later this year. I also expect Apple to make AR one of the highlights of its upcoming developer conference in early June.

However, I feel the most effective way to deliver AR will be through some form of mixed reality glasses. While the intelligence to power these AR glasses may still come from the smartphone, the glasses will act as an extension of the smartphone screen and deliver a better way to view AR content than a smartphone screen alone can provide.

I see glasses as the next evolution of the man-machine interface and a technology that will be extremely important to billions of people over the next ten years. In my recent Fast Company column, I shared how I believe Apple will tackle the AR opportunity and how it could be the company that defines the AR-based glasses market.

But if you have used any of the VR or mixed reality headsets or glasses so far, you understand that interacting with the current models is difficult when you have to use a joystick or handheld wand to communicate with the features or actions in a given VR or AR application. Even more frustrating, these handheld interfaces do not yet deliver pin-point precision, which often makes it difficult to activate an AR or VR application’s functions.

I believe there are three high hurdles to clear before AR is valuable to, and accepted by, mass-market users. The first is creating the types of glasses or eyewear that are both fashionable and functional. Today’s VR and AR glasses or goggles make anyone who uses them look like a nerd. In our surveys, this type of eyewear is panned by the people we talk to about what is acceptable to wear for long periods of time.

The second significant hurdle will be designing the wireless technology in these smartphones to communicate with what I call “skinny glasses,” in which the glasses rely almost entirely on the smartphone for their intelligence. Getting the wireless connections right and extending the smartphone’s functions and intelligence to the glasses will be difficult, but it is critical if we want the types of AR glasses that people will actually wear rather than ones that make them stand out as some tech dweeb.

But the missing link that gets little attention when we talk about VR and AR is the way we will interact with these glasses to get the kinds of functions that make these headsets valuable. Undoubtedly, voice commands will be part of the interface solution, but there are too many occasions where calling out commands will not be acceptable, such as in a meeting, at church or a concert, or in a class, to name just a few.

Indeed, we will need other ways to activate applications and interact with these glasses. These will most likely include gestures, object recognition via sensors, and virtual gloves or hand signals, such as those Magic Leap created to navigate its specialized mixed reality headset.

However, I believe this is an area ripe for innovation. For example, a company called Tap just introduced a Bluetooth device that fits over four fingers and lets you tap out actual words and characters as a way to input data into existing applications such as Word, or eventually into virtual applications on a mixed reality headset.

The folks from Tap came by and gave me a demo of the product, and I found it very interesting. There is a real learning curve involved in tapping out the proper letters and punctuation marks, but they have great teaching videos as well as a teaching game to help a person master this unique input system. Check out the link I shared above to see how it works. They are already selling thousands to vision-impaired folks and others for whom a virtual keyboard like Tap is needed for a specific app or function.
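To give a feel for how such a system can work, here is a purely illustrative sketch of chord-to-character decoding; the finger combinations and mapping below are invented for the example and are not Tap’s actual encoding:

```python
# Illustrative only: treat each of the four fingers as one bit, so a
# single tap produces a 4-bit chord. This mapping is invented for the
# example and is NOT Tap's actual encoding.
CHORD_MAP = {
    0b0001: "a",  # index finger only
    0b0010: "e",  # middle finger only
    0b0100: "i",  # ring finger only
    0b1000: "o",  # little finger only
    0b0011: "t",  # index + middle together
    0b1111: " ",  # all four fingers at once
}

def decode(chords):
    """Turn a sequence of tap chords into text."""
    return "".join(CHORD_MAP.get(chord, "?") for chord in chords)

print(decode([0b0011, 0b0001]))  # -> "ta"
```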

But after seeing Tap, I realized that creating a powerful way to interact with AR apps on glasses should not be limited to joysticks, virtual gloves, voice commands, or gestures. This missing link needs the kind of out-of-the-box thinking Tap has shown. Hopefully, we will see many other innovations in this space as tech companies eventually deliver mixed reality glasses that are acceptable to all users and drive the next big thing in man-machine interfaces.

Podcast: Microsoft Build, Citrix Synergy, Google I/O

on May 12, 2018
Reading Time: 1 minute

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the news and impact of Microsoft’s Build developer conference, the Citrix Synergy customer conference, and Google’s I/O developer conference.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Microsoft Pushes Developers to Embrace the MS Graph

on May 11, 2018
Reading Time: 3 minutes

Microsoft talked about a lot of interesting new technologies at this week’s Build developer conference, from artificial intelligence and machine learning to Windows PCs that work better with Android and Apple smartphones, to some smart new workflow features in Windows 10. But one of the underlying themes was the company’s push to get developers to better leverage the Microsoft Graph. This evolving technology shows immense promise and may well be the thing that keeps Microsoft front and center with consumers even as it increasingly focuses on selling commercial solutions.

Understanding the Graph
The Microsoft Graph isn’t new (it originated in 2015 within Office 365 as the Office Graph), but at Build the company did a great job of articulating what it is, and more importantly what it can do. The short version: the Graph is the API for Microsoft 365. More specifically, Microsoft showed a slide that said the Graph represents “connected data and insights that power applications. Seamless identity: Azure Active Directory sign-in across Windows, Office, and your applications. Business Data in the Graph can appear within your and our applications.”

Microsoft geared that language to its developer audience, but for end users, it means this: Whenever you use Microsoft platforms, apps, or services (or third-party apps and services designed to work with the Graph), you’ll get a better, more personalized experience. And that experience will get even better over time as Microsoft collects more data about what you use and how you use it.

The Microsoft Graph may have started with Office, but the company has rolled it out across its large and growing list of properties. Just inside Office 365 there are SharePoint, OneDrive, Outlook, Microsoft Teams, OneNote, Planner, and Excel. Microsoft’s Azure is a cloud computing service, and Azure’s Graph-enabled Active Directory controls identity and access management within an organization. Plus, there are Windows 10 services, as well as a long list of services under the banner of Enterprise Mobility and Security. And now that the company has rolled the Graph into many of its own products, it is pushing third-party developers to begin utilizing it, too.
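For developers, “the API for Microsoft 365” is quite literal: the Graph is exposed as a REST endpoint. Here is a minimal sketch of a call that lists the signed-in user’s five most recent email subjects; it assumes you have already obtained an OAuth 2.0 access token from Azure Active Directory, and the token value is a placeholder:

```python
# Minimal sketch of a Microsoft Graph call. ACCESS_TOKEN is a placeholder;
# obtaining it via an Azure Active Directory auth flow is out of scope here.
import requests

ACCESS_TOKEN = "<token-from-azure-ad>"

resp = requests.get(
    "https://graph.microsoft.com/v1.0/me/messages?$top=5",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()

# The Graph returns JSON with the results in a "value" array.
for message in resp.json()["value"]:
    print(message["subject"])
```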

Working Smarter, and Bridging Our Two Lives
The goal of the Microsoft Graph is to drive a truly unique experience for every user. One that recognizes the devices you use, and when you use them. One that figures out when you are your most productive and serves up the right tools at the right time to help you get things done. One that eventually predicts what you’ll need before you need it. None of it is quite as flashy as asking your digital assistant to have a conversation for you, but it’s the type of real-world advance that technology should be good at delivering.

What’s also notable about the Microsoft Graph is that while it focuses almost entirely on work and productivity, these advances should help smooth friction outside of work, too. If we work smarter, perhaps we can work less. Inside this is Microsoft’s nod to the fact that while it still has consumer-focused businesses such as Xbox, most people will interact with its products in a work setting. That said, most of us have seen the lines between our work and non-work life blur, and the Graph should help drive continued and growing relevance for Microsoft as a result.

Don’t Forget Privacy
Of course, for all this to work Microsoft must collect a large amount of data about you. In a climate where people are starting to think long and hard about how much data they are willing to give up to access next-generation apps and services, this could be challenging. Which is why throughout Build Microsoft executives including CEO Satya Nadella made a point of driving home the company’s stance on data and privacy. Nadella called privacy a human right, and in discussing the Microsoft Graph both on stage and behind closed doors, executives Joe Belfiore and Kevin Gallo noted that this information ultimately belongs to the end user and it is up to Microsoft to keep it private and secure.

The privacy angle is one I expect to see Microsoft continue to push as it works to leverage the Graph in its ongoing battles with Google and Facebook. (I expect Apple will hammer home its stance on the topic at the upcoming WWDC, too.) In the meantime, it will be interesting to see if Microsoft’s developers buy into the promise of the Graph, and how long it will take for their subsequent work to come to fruition. By next year at this time, we may be hearing less about the potential of this technology, and more about end users enjoying the real-world benefits.

News You Might Have Missed: Friday, May 11, 2018

on May 11, 2018
Reading Time: 5 minutes

Spotify and Hate Content

To identify hate content, Spotify said that it has partnered with a range of rights advocacy groups, such as The Southern Poverty Law Center, The Anti-Defamation League, Color Of Change, Showing Up for Racial Justice (SURJ), GLAAD, Muslim Advocates, and the International Network Against Cyber Hate. Spotify has also created an automated monitoring tool called Spotify AudioWatch to find content already on its platform that has been flagged as hate content around the world. Spotify also said it doesn’t believe in censoring content because of an artist’s behavior, but it wants its editorial decisions to reflect its company values.

Google and the Machine Learning Product

on May 10, 2018
Reading Time: 4 minutes

Google has always been a company well positioned to capitalize on the machine learning/artificial intelligence age. A central service whose sole focus is to organize the world’s data and make it easily accessible is a perfect match for deep learning. Two companies have benefitted the most from the last few years of breakthroughs in deep learning algorithms: Google and NVIDIA.

Google creates some spin with TPU 3.0 announcement

on May 10, 2018
Reading Time: 3 minutes

During the opening keynote of Google I/O yesterday, the company announced a new version of its Tensor Processing Unit, TPU 3.0. Though details were incredibly light, CEO Sundar Pichai claimed that TPU 3.0 would have “8x the performance” of the previous generation and that it was going to require liquid cooling to reach those performance levels. Immediately, much of the technical media incorrectly asserted an 8x architectural jump without thinking through the implications or how Google might have arrived at those numbers.

For those who might not be up on the development, Google announced the TPU back in 2016 as an ASIC specifically targeting AI acceleration. Expectedly, this drew a lot of attention from all corners of the field, as it marked not only one of the first custom AI accelerator designs but also came from one of the biggest names in computing. The Tensor Processing Unit targets TensorFlow, a machine learning and deep neural network library developed by Google. Unlike other AI training hardware, that focus limits the TPU’s use cases to customers of Google Cloud products running TensorFlow-based applications.
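For context, TensorFlow expresses models as graphs of tensor operations, and the TPU is an ASIC built to accelerate exactly that kind of math. A trivial sketch, in the TF 1.x style current at the time of writing:

```python
# Trivial TensorFlow (1.x-style) sketch: the dense matrix multiply below
# is the kind of tensor operation TPUs are designed to accelerate at scale.
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
c = tf.matmul(a, b)  # a 2x2 matrix multiply

with tf.Session() as sess:
    print(sess.run(c))  # [[19. 22.] [43. 50.]]
```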

The chips are proprietary and are not available for external purchase. Just a few months ago, the New York Times reported that Google would begin offering access to TPUs through Google Cloud services. But Google has no shortage of internal AI workloads that TPUs can address, from Google Photos to Assistant to Maps.

The liquid cooling setup for TPU 3.0

Looking back at the TPU 3.0 announcement yesterday, there are some interesting caveats to the claims and statements Google made. First, the crowd cheered when it heard this setup was going to require liquid cooling. In reality, that requirement means either that there has been a dramatic reduction in efficiency with the third-generation chip or that the chips are being packed much more tightly into these servers, without room for traditional cooling.

Efficiency drops could mean that Google is pushing the clock speed of the silicon past the optimal point on the efficiency curve to get that extra frequency. This is a common tactic in ASIC designs to stretch the performance of existing manufacturing processes or close the gap with competing hardware solutions.
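A first-order CMOS power model shows why that tactic is expensive; this is a textbook approximation, not anything Google has disclosed about TPU 3.0:

```python
# Textbook approximation of CMOS dynamic power: P ~ C * V^2 * f.
# Higher clocks usually require higher voltage, so power climbs much
# faster than frequency once you push past the efficiency sweet spot.
def dynamic_power(capacitance, voltage, frequency):
    return capacitance * voltage ** 2 * frequency

base = dynamic_power(1.0, 1.0, 1.0)
pushed = dynamic_power(1.0, 1.1, 1.3)  # +30% clock needing +10% voltage
print(round(pushed / base, 2))  # ~1.57x the power for 1.3x the clock
```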

Liquid cooling in enterprise environments isn’t unheard of, but it is less reliable and more costly to integrate.

The exciting performance claims should be tempered somewhat as well. Though the 8x improvement and the claim of 100 PetaFLOPS of performance are impressive, they don’t tell the whole story. Google was quoting numbers for a “pod,” the term the company uses for a combination of TPU chips and supporting hardware that consumes considerable physical space.

A single Google TPU 3.0 Pod

TPU 2.0 pods combined 256 chips, but for TPU 3.0 it appears Google is collecting 1,024 into a single unit, four times as many. Besides the physical size increases that go along with that, this means the relative performance of each TPU 3.0 chip versus TPU 2.0 is about 2x. That’s a sizeable jump, but not unexpected in the ever-changing world of AI algorithms and custom acceleration. There is likely some combination of clock speed and architectural improvement that adds up to this doubling of per-chip performance, though given the liquid cooling requirement, I lean more toward clock speed jumps.
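A quick sanity check of that per-chip math, using the pod-level figures above (the TPU 3.0 chip count is still an apparent figure, not confirmed by Google):

```python
# Per-chip performance implied by the pod-level numbers above.
tpu2_pod_chips = 256
tpu3_pod_chips = 1024  # apparent TPU 3.0 pod size; unconfirmed by Google
pod_speedup = 8.0      # Google's claimed pod-level improvement

chip_ratio = tpu3_pod_chips / tpu2_pod_chips  # 4x more chips per pod
per_chip_speedup = pod_speedup / chip_ratio   # ~2x per individual chip
print(per_chip_speedup)  # 2.0
```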

Google has not yet shared architectural information about TPU 3.0 or how it has changed from the previous generation. Availability for TPU 3.0 is unknown, and even Cloud TPU (using TPU 2.0) isn’t targeted until the end of 2018.

Google’s development in AI acceleration is certainly interesting and will continue to push the industry forward in key ways. You can see that exemplified in NVIDIA’s integration of Tensor Cores in its Volta GPU architecture last year. But before the market gets up in arms thinking Google is now leading the hardware race, it’s important to put yesterday’s announcement in the right context.

Why Apple and Microsoft are Moving to Become Powerhouses in Services

on May 9, 2018
Reading Time: 4 minutes

In a recent column, I wrote about a new great divide coming out of Silicon Valley, in which one side believes the best business model is to sell products and services, while the other side gives away services subsidized by advertising.

As I pointed out in that article, both business models are valid, but the companies that rely on advertisers for their revenue have the biggest challenge when it comes to protecting their customers’ privacy, as advertisers need as much data as possible on prospective customers to target their ads more accurately.