Why the Maker Movement is Critical to Our Future

This past weekend the granddaddy of Maker Faires was held at the San Mateo Event Center, and close to 100,000 people went to the Faire to check out all types of maker projects. When Dale Dougherty, the founder of Make Magazine, started his publication, the focus was really on STEM and tech-based ideas.

In the early days of the magazine, you would find all types of projects for making your own PCs, robots, 3D-printed designs, and the like. It reminded me a bit of my childhood, when we had erector sets, Tinkertoys, and Lincoln Logs: toys that were educational and, in their own way, tried to get kids interested in making things in hopes it would help guide them to future careers.

Over time, the Maker Movement has evolved well beyond just STEM projects and now includes just about any do-it-yourself project you could think of. At the Faire this year I saw quilting demos, a beekeeping booth, and an area teaching you how to ferment foods, alongside stalls with laser cutters, 3D printers, wood lathes, robotic kits, and a lot of other STEAM-based items and ideas.

Going to a Maker Faire is fun and fascinating in many ways, but the thing I love most is watching the excited faces of the boys and girls who attend. Seeing them go from booth to booth gathering ideas for their own maker projects is rewarding in itself.

The Maker Movement comes at one of the most critical times in our history. When I was in junior high and high school in the 1960s, the world we were being prepared for had little to do with tech. My elective options were auto shop, drafting, and metal shop, and I even took a home economics class. These courses were designed to prepare us for blue-collar jobs. Of course, these types of jobs still exist today, but in the information age, the majority of jobs now and in the future will increasingly focus on skills related to math, engineering, and science.

The Maker Movement, and especially the Maker Faires that are now held all over the world, serve as an essential catalyst to help kids get interested in STEM and STEAM. They are designed to instill in them a real interest in pursuing careers in technology and the sciences, as well as introduce them to the idea that anyone can be a “maker.”

At this year’s event in the Bay Area, the Maker Faire held a college and career day on Friday morning before the Faire itself opened that afternoon. I had the privilege of moderating a panel about career journeys, with five panelists telling the stories of how they got to where they are in their careers today.

This was the Maker Faire’s first college and career day, and it was very successful. The various speakers Mr. Dougherty brought in to talk to hundreds of students told all types of stories about what got them into STEM-related jobs and offered valuable career advice to those who attended this special career day.

Of the many speakers at the career day event, two stood out to me. The first was Sarah Boisvert, the founder of the Fab Lab Hub. Ms. Boisvert shared that when President Trump asked IBM CEO Ginni Rometty for her thoughts on the future of jobs in America, she told him that “we do not need more coal workers; what we need are ‘New Collar Workers,’” referring to the current and future demand for a technically skilled labor force to meet the needs of America’s job market. Ms. Boisvert has written a book entitled “The New Collar Workforce: An Insider’s Guide to Making Impactful Changes to Manufacturing and Training.”

An overview of the book states:

The “new collar” workers that manufacturers seek have the digital skills needed to “run automation and software, design in CAD, program sensors, maintain robots, repair 3D printers, and collect and analyze data,” according to the author. Educational systems must evolve to supply Industry 4.0 with new collar workers, and this book leads the reader to innovative programs that are recreating training programs for a new age in manufacturing.
The author’s call to action is clear: “We live in a time of extraordinary opportunity to look to the future and fundamentally change manufacturing jobs but also to show people the value in new collar jobs and to create nontraditional pathways to engaging, fulfilling careers in the digital factory. If the industry is to invigorate and revitalize manufacturing, it must start with the new collar workers who essentially make digital fabrication for Industry 4.0 possible.”
This book is for anyone who hires, trains, or manages a manufacturing workforce; educates or parents students who are searching for a career path; or is exploring a career change.

Ms. Boisvert told the students in the audience that when she hires people, the first thing she looks for is whether they have solid problem-solving skills. She sees that as a fundamental part of “New Collar” jobs.

The other speaker who stood out to me was on my panel. Janelle Wellons is a young African American woman who initially wanted to be a theoretical mathematician. Here is her bio:

Janelle Wellons graduated from the Massachusetts Institute of Technology with a B.S. in Aerospace Engineering in 2016. After graduating, she moved from her home in New Jersey to Southern California to work at the NASA Jet Propulsion Laboratory (JPL) in Pasadena. At JPL, Janelle works as an instrument operations engineer on the Lunar Reconnaissance Orbiter, the Earth-observing Multi-Angle Imager for Aerosols, and previously on the Saturnian Cassini mission. Her job consists of creating the commands for and monitoring the health and safety of a variety of instruments ranging from visible and infrared cameras to a radiometer. She also serves on an advisory board for Magnitude.io, a nonprofit that creates project-based learning experiences designed for STEM education. When she isn’t working, you can find her playing video games, reading, enjoying the outdoors, and working on cool projects out in the Mojave.

As a young African American woman, she is an inspiration to kids of all ages and ethnic backgrounds, and she reminded me of Katherine Johnson, the woman portrayed in Hidden Figures, who also worked for NASA and was instrumental in John Glenn’s Earth-orbit flight in 1962.

As she spoke, I was watching the kids in the audience, and they were spellbound listening to her tell them that anyone can achieve their goals if they put their minds to it.

The Maker Movement and Maker Faires are critical to our future. Our world is changing rapidly. Job skills of the past need to be updated to meet the changing needs of a world driven by information and analytics, and manufacturing jobs will require new skills to operate. If you get a chance to go to a Maker Faire in your area, I highly recommend you check it out. You won’t be disappointed and, like me, will learn a lot and perhaps be inspired to become a maker yourself.

Podcast: HP PCs, WiFi Mesh Standard, Blockchain And Cryptocurrency, Autonomous Cars

This week’s Tech.pinions podcast features Ben Bajarin and Bob O’Donnell discussing HP’s new PCs, the WiFi Mesh standard, blockchain and cryptocurrencies, and the outlook for autonomous cars.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

A Lot Needs to Happen Before Self-Driving Cars Are A Reality

Self-driving vehicles represent one of the most fascinating fields of technology development today. More than just a convenience, they have the potential to radically alter how we work and live, and even the essential layout of cities. Their development involves a coalition of many disciplines and a marriage of the auto industry’s epicenters with Silicon Valley, and it is attracting hundreds of billions of dollars in annual investment globally. Exciting as all this is, I think the viability of a full-fledged self-driving car in the day-to-day real world is further off than many believe.

My ‘dose of reality’ is driven by two main arguments. First, I think we’ve underestimated the technology involved in making true autonomous driving safe and reliable, especially in cities. Second, significant infrastructure investments are required to make our streets ‘self-driving ready.’ That requires significant public sector attention and investment which, except in a select few places, is not happening yet.

Self-driving cars have already logged lots of miles and have amassed an impressive safety record, with a couple of notable and unfortunate exceptions. But most of this has occurred on test tracks, along the wide-open highways of the desert southwest, and in Truman Show-esque residential neighborhoods.

For a sense of the real world the self-driving car has to conquer, I encourage you to visit my home town of Boston. Like most of the world, it’s not Phoenix or Singapore. It has narrow streets, an unintuitive, non-grid layout, and a climate where about half of the year’s days feature wet, snowy, or icy roads. Sightlines are bad, lane lines are faded, and pedestrians and cyclists compete for a limited amount of real estate. In other words, a fairly typical older city. How would a self-driving car do here?

To get even more micro, I’ll take you to a very specific type of intersection that has all the characteristics designed to stump a self-driving vehicle. In this situation, the car has to make a left turn from a faded turn lane with no traffic light, then cross a trolley track, two lanes of oncoming traffic, and a busy crosswalk. So we have poor lane markings, terrible sight lines, and pedestrians moving at an uneven pace and doing unpredictable things, before even getting into the wild cards of weather, glare, and so on. My heart rate quickens every single time I have to make this turn. I would want to see a car successfully self-perform this left turn a Gladwellian 10,000 times before I’d sign that waiver.

I’m sure each of you can provide countless examples of situations that would prompt the question, “Can a self-driving car really handle that?” They show the complexity and the sheer number of variables involved in pulling this off. Think of all the minor decisions and adjustments you make when driving, particularly in a congested area. Rapid advancements in AI will help.

This is not to diminish the mammoth progress that has been made on the road to the self-driving vehicle. The technology is getting there for self-driving cars to be viable in many situations and contexts within the next five years. It’s the last 10-20% of spots and situations that will prove particularly vexing.

If we believe the self-driving car could be a game-changer over the next 20 years, I think we need to be doing a lot more thinking about the infrastructure required to support its development. We all get excited about the potential benefits self-driving/autonomous vehicles will usher in, such as changes to the entire model of car ownership, less congested roads, the disappearance of parking lots, etc. This exciting vision assumes a world where the self-driving car is already mainstream. But I think it’s a bit naïve with regard to the type of investment needed to make this happen. This is going to require huge public sector involvement and dollars in many fields and categories. As examples: improvements to roads to accommodate self-driving cars (lane markings, etc.); deployment of sensors and all sorts of ‘smart city infrastructure’; a better ‘visual’ infrastructure; a new regulatory apparatus; and so on. And of course, we will need advanced mobile broadband networks, a combination of 5G with the vehicle-centric capabilities envisioned by evolving standards such as V2X, to help make this happen.

This will be a really exciting field, with all sorts of job opportunities. There’s the possibility of a game-changing evolution of our physical infrastructure not seen in 100+ years. But worldwide transportation budgets are still mainly business-as-usual, with sporadic hot pockets of cities hoping to be at the bleeding edge.

Getting to this next phase of the self-driving car will require a combination of pragmatism, technology development, meaningful infrastructure investment, and a unique model of public-private cooperation.


News You Might Have Missed, Week of May 18, 2018

Microsoft Rumored to Be Planning Low-Cost Tablet

According to Bloomberg, Microsoft is rumored to be planning to add a low-end model to its Surface line. The new tablet line will feature a 10-inch screen and will be priced around $400. The tablets are expected to be about 20 percent lighter than the high-end models but have around four fewer hours of battery life. Intel will supply the main processor and graphics chips for the devices, according to people familiar with the plans, who asked not to be identified because the plans aren’t public.

Via Bloomberg

  • If this is true, I think it is a good idea for Microsoft to try this price point again. The market has changed a lot since 2012, when Surface RT was introduced, and the Surface team has also learned a lot since then.
  • Surface has established itself as the leading 2-in-1 PC. Most buyers see it that way rather than as a tablet, despite the tablet form factor. The iPad Pro shares a very similar addressable market even though, mostly due to its OS, it is seen more as a tablet than a PC. A study we conducted earlier this year across the US clearly showed the strong brand share Surface has gained among Windows users as well as Mac users.
  • When Surface RT came to market, Surface had yet to establish itself as a premium brand. The combination of that plus an inferior experience with touch and a much smaller ecosystem made it impossible for Surface RT to compete with iPad.
  • While still nowhere near the iPad ecosystem, Windows 10 delivers a much more competitive proposition today, mostly thanks to Microsoft’s primary apps. It is interesting that we have not seen much interest in this space from Microsoft partners, which are concentrating their 2-in-1 efforts on the high-end segment where the Surface Pro sits.
  • It will be interesting to hear the pricing for the Surface Pen and Surface Type Cover, but I doubt it will be much different from current pricing despite the smaller size of the Cover. From a consumer perspective, while these two accessories are not key to the experience, they would certainly add to it; hence, some promotional bundles would be good.
  • Speaking as someone who has been using the 10.5” iPad Pro together with the Surface Pro as my on-the-go devices, I am excited about the 10” Surface. While you feel like you are sacrificing a little screen real estate when producing content, you benefit from a much more portable form factor for consuming content, from books to video.
  • The form factor also calls for integrated LTE which might give Microsoft some flexibility on price point if they are able to strike some deals with the carriers.

YouTube Music

Next week, YouTube is launching YouTube Music — a revamped version of its existing music service that adds some new features like personalized playlists based on your YouTube history and other usage patterns. That service, which is supposed to soft-launch on Tuesday, will cost $10 a month after a trial period. YouTube Music’s $10 a month removes ads from music videos – but not the rest of YouTube. It also allows you to download music for offline listening, and to play music in the background while you do other things.

With the launch of YouTube Music, the current YouTube Red will be renamed YouTube Premium, and it will require a YouTube Music subscription plus an extra $2 a month. So for $12 a month, you will get both YouTube Music and YouTube Premium.

Via Recode

  • Launched in October 2015, YouTube Red has always been positioned by YouTube as three services in one: It offers ad-free access to all of YouTube; it’s a music streaming service that also gives access to Google Play Music; and it’s consistently releasing original movies and TV shows, starring Hollywood talent and homegrown stars that users already subscribe to.
  • With the launch of YouTube Music, Google Play Music will no longer exist in its current form. It will be a cloud locker service, to which users can upload their music for portable streaming.
  • It seems to me that this new name puts an end to an identity problem the service has had for some time. Back in February, YouTube CEO Susan Wojcicki referred to the service as a music streaming service. This was quite far from how head of content Susanne Daniels had described the service in the past: a premium subscription streaming service that offers Hollywood-quality shows and movies.
  • When looking at the new pricing structure it seems clear that the value is put on the music service rather than the video service. This might reflect Wojcicki’s position on not wanting to compete with Netflix, Hulu, and Amazon on producing original content.
  • It is unclear how many paying subscribers YouTube Red has. About a year after launch, the service was said to have around 1.5 million subscribers, with another million on a free trial. Subscriber numbers are, of course, limited by the geographical coverage of the service, which currently includes the US, Mexico, South Korea, Australia, and New Zealand.
  • The new service will roll out on May 22 to those same countries but it will add another 14 countries soon.
  • YouTube Premium looks to be closer to Apple Music than Spotify, as it will blend music and video. It seems, however, that Apple has much stronger aspirations when it comes to original content. The reason might simply be that while Apple is thinking more about Apple TV when it thinks about content, YouTube is thinking more about the smaller screens with embedded Google Assistant that Google introduced at Google I/O. That kind of content would have to be much more disposable than the big productions we see coming out of Netflix, Amazon, and Hulu.

ZTE: From Being Punished to Being Used as a Bargaining Chip

After being banned from buying irreplaceable US components for seven years, a penalty that caused it to cease operations, ZTE finds itself in the middle of commercial negotiations between China and the United States. Earlier this week, President Trump tweeted: “President Xi of China, and I, are working together to give massive Chinese phone company, ZTE, a way to get back into business, fast. Too many jobs in China lost. Commerce Department has been instructed to get it done!” After some strong reactions, this was followed later in the week by another tweet: “The Washington Post and CNN have typically written false stories about our trade negotiations with China. Nothing has happened with ZTE except as it pertains to the larger trade deal. Our country has been losing hundreds of billions of dollars a year with China…”

Via TechCrunch

  • Earlier in the week, Politico also reported that the FBI chief is said to be concerned about telecom companies like ZTE, which is closely tied to the Chinese government, being granted access to the US market.
  • The ZTE ban, however, had nothing to do with security, contrary to the case with Huawei. Yet the blurred reasoning around why ZTE and other Chinese companies should not operate in the US seems to confirm my initial feeling that the ban had more to do with the current political climate between the US and China than with actual security concerns.
  • There is no question that China is making huge progress in many of the technologies that will shape the course of the world. From 5G to AI, China is investing heavily and growing a strong talent pool.
  • The ZTE case might have made Chinese officials more determined to achieve self-sufficiency in key high-tech sectors. This, however, is not a good enough reason for going back on the initial ban, as lightening ZTE’s punishment would come across as a sign of weakness at this point.
  • What I struggle to understand is why the ramifications of the ban were not obvious and therefore not considered beforehand. I am not suggesting for a minute that ZTE should have gone scot-free, but I am sure the government could have come up with a punishment that would not have resulted in the company going out of business.


Ways to Think About Crypto’s Potential

I have purposely avoided writing about the crypto craze, mostly because of the full range of mixed opinions, many of them quite strong one way or the other. Crypto is either the future or a fraud; apparently, there is no middle ground. I often get asked my opinion on what is going on with crypto, so I figured it was time to share a few points that will help us understand what is happening and what the future may hold for crypto and the blockchain.

The Early Stages
This point I’m about to make comes from having been around the block a few times. Granted, I’m not quite 40 yet (I will be very soon), but I’ve been in this industry for almost 20 years and, more importantly, have learned from industry leaders who have been in tech since the very beginning. There is a constant pattern that every major new technology follows. Crypto and the blockchain are no different.

The early part of an innovation cycle is highly fragmented. A lot of money and investment goes into the new innovation, but there are not yet any standards or protocols to help unite or define it so mass adoption can take place. This pattern has played out with every major technology cycle since the industrial revolution. The cycle generally creates a bubble, which bursts, which then paves the way for a better version of the whole technology in the following period. I have written about this before, in this article, where I outline the boom, bust, and buildout cycle of innovation.

The challenge with crypto today lies in the realization that we may be in the boom period of this cycle. I’ve seen many argue that Bitcoin, in particular, will stick around in the long term, and it may. Or it may fail like many early technologies in the boom period. This part is irrelevant in my opinion. Succeed or fail, it will pave the way for something much more significant. The critical point, and one that I and many other brilliant people I respect are convinced of, is that the blockchain is the underlying technology that matters here, not necessarily the early implementations of blockchain we see today.

If you are a fan of the HBO show Silicon Valley, the current season has the perfect analogy for Bitcoin. During an episode where Pied Piper attempts an ICO (initial coin offering) as an alternative to fundraising, the argument was presented that Bitcoin could be like Myspace or Friendster: things that ultimately didn’t succeed but paved the way for Facebook to become the single largest human network the world has ever seen. Understanding Bitcoin in this fashion, succeed or fail, is exactly how I think about it.

Trust and Verification
I don’t want to get into a detailed explanation of how blockchain works, but two fundamental parts of it are the most compelling to me when it comes to the preservation of information. Essentially, in very simple terms, blockchains are databases (records of information) that encrypt every layer of information from the first to the last entry. This fundamentally makes information stored on a blockchain nearly impossible to alter and makes it very easy to see if any information has been tampered with. When this is accomplished, the information can then be more easily distributed (decentralized) yet still carry with it a record of trust. Meaning, even though information may be added by many different parties, we can still trust that it is accurate. Thus the blockchain carries with it a level of verification and authenticity even though it is decentralized. The result is perhaps one of the most powerful value propositions of the blockchain: trust.
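
To make the tamper-evidence idea concrete, here is a minimal sketch in Python (an illustration of hash chaining only, not any production blockchain; the field and function names are my own). Each block’s hash covers its data plus the previous block’s hash, so altering any earlier entry breaks every hash that follows:

```python
import hashlib
import json

def block_hash(body: dict) -> str:
    # Hash a block body (its data plus the previous block's hash).
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    # Link the new block to the chain by embedding the previous block's hash.
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash({"data": data, "prev_hash": prev_hash})
    chain.append(block)

def verify(chain: list) -> bool:
    # Recompute every hash; any altered entry breaks the chain.
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i > 0 else "0" * 64
        recomputed = block_hash({"data": block["data"], "prev_hash": block["prev_hash"]})
        if block["prev_hash"] != expected_prev or block["hash"] != recomputed:
            return False
    return True

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(verify(chain))                     # True: the record is intact
chain[0]["data"] = "Alice pays Bob 500"  # tamper with an early record
print(verify(chain))                     # False: the tampering is immediately visible
```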

The more I have studied and learned about blockchain technology through the years, the more I’m convinced it will touch every industry: finance, healthcare, retail/commerce, real estate, government, and so on. Every industry is dealing with legacy database and information recording systems that are often subject to human error, intentional tampering, and fraudulent practices. While I am exaggerating somewhat, the current way critical information is stored comes at the expense of the consumer. Centralized systems concentrate power in the organization that holds and manages the information. Blockchain and the decentralization of this information place the power back in the hands of consumers. You can see why the powers that be may not be super pumped about the blockchain.

Now, what is left out of the current discussion is the role our devices will play in this future, in particular the critical role blockchain will play in trust and verification. In the truly decentralized digital age, our identity is going to be an essential part of this evolution. In a trust-based situation, as much as I, the consumer, want to know I can trust the information regarding healthcare, finance, retail, real estate, etc., the service provider also needs to be able to trust that I am who I say I am and that all information regarding me is intact and unaltered. Interestingly, there is one company that makes devices and software that align pretty well with the blockchain and crypto trend, and that is Apple.

Apple and The Blockchain
In all my discussions with investors and pundits on the blockchain/crypto trend, I’m always surprised the iPhone is not used more often as an example. My iPhone is essentially a database of information about me and my life, and it is also encrypted. Every layer may not yet be encrypted the way a blockchain database is, but it could be. One area that bears the strongest resemblance is Apple Pay. Apple Pay is essentially an encrypted and tokenized payment method in every sense of the word. Apple hard-encrypts my payment information into the Secure Enclave (which, like the blockchain, is designed to be effectively unhackable). When I go to make a purchase, a random token is generated that authenticates and verifies my payment information and authorizes the purchase. I then use Touch ID or Face ID as an added layer of two-factor security to assure the merchant that I am who I say I am and that no fraudulent commerce activity is taking place. It’s a beautiful system for commerce and one that is highly aligned with the blockchain. It puts the power, security, and mechanisms of trust all into the hands of the consumer.
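
As a rough sketch of the tokenization pattern described above (emphatically not Apple’s actual protocol; the key handling, token format, and function names are invented for illustration), the core idea is that a one-time token, bound to the purchase with a keyed MAC, lets the network verify a payment while the card number never leaves the device:

```python
import hashlib
import hmac
import os
import secrets

# Toy illustration of device-side tokenization (NOT Apple's real protocol).
DEVICE_KEY = os.urandom(32)          # stands in for a key sealed in secure hardware
CARD_NUMBER = "4111111111111111"     # never leaves the "device" in the clear

def make_payment_token(amount: str) -> dict:
    # A single-use token: a random nonce plus a MAC binding it to this purchase.
    nonce = secrets.token_hex(16)
    mac = hmac.new(DEVICE_KEY, f"{nonce}:{amount}".encode(), hashlib.sha256).hexdigest()
    return {"nonce": nonce, "amount": amount, "mac": mac}

def network_verify(token: dict) -> bool:
    # Simplification: the payment network shares the key and checks the MAC;
    # the merchant only ever sees the token, never the card number.
    expected = hmac.new(DEVICE_KEY, f"{token['nonce']}:{token['amount']}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["mac"])

token = make_payment_token("19.99")
print(network_verify(token))  # True: authorized without exposing the card number
```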

While that is just one example involving a transaction, it applies as well to our digital identity. Our devices, like our smartphones and eventually our wearable computers, will play important roles in securing and protecting our identity. If we view our personal technology devices as the ultimate decentralization of our identity, then we can make the case that Apple is as well positioned as any company to capitalize on this new way information is stored, managed, authenticated, verified, and trusted.

Again, what’s going on with the blockchain is the big-picture narrative. It has the potential to change information systems at every part of the stack. What we see with Bitcoin and other special-purpose tokens and networks may fail or succeed, but ultimately they will pave the way for something much bigger, I’m sure.

HP Isn’t Standing Still as the Top Market Share PC OEM

If you ask around the tech industry, you’ll hear stories about PC vendors and technology companies that lose their edge after becoming a market leader. Competition sharpens minds and accelerates research and design initiatives to gain that one foothold over the other guy that puts you firmly in the leading position. But too often that slips away as stagnation and complacency roll in.

With Q1 results from IDC available, HP continues to maintain market share leadership in the PC space, pulling in 20.9%, holding above Dell and Lenovo. Recent announcements from the company also indicate that the company is attempting to avoid any stalling of growth by continuing to innovate and push forward with new designs and product programs.

The new philosophy at HP focuses on the “one-life” design ideal, where commercial and consumer users share hardware between personal and business use. For a younger generation that blurs the lines between work time and play time, having devices that can fill both roles and permit a seamless transition between those states is ideal.

Just this month, the company announced updates to its premium lines of notebooks and desktops in an attempt to showcase its ability to provide products fitting of both roles.

Perhaps the most interesting is the new HP Envy Curve, an all-in-one PC that is more than your typical tabletop design. The internals of the system are impressive but don’t tell the whole story. Yes, a powerful 8th-gen Intel CPU, SSD storage, and 16GB of memory are requirements, but it’s the smaller touches that make the Envy Curve stand out.

In the base of the unit, HP has embedded four custom Bang & Olufsen speakers angled upward at 45 degrees to better direct audio to the consumer. All-in-ones have traditionally included the most basic of speaker implementations, but HP is hoping it can add value to the Curve and provide high-quality sound without the need for external hardware.

The curved 27-in or 34-in QHD display rests on a thin neck and is coupled with a wood grain finish on the back, giving the PC a distinct aesthetic that few others in the computing market offer. If computing is under threat of commoditization, then customization and style will be key drivers of upgrades and purchases.

Two other innovations help the Envy Curve stand out. The base includes an embedded wireless Qi charging ring, meaning you can charge your phone without the annoyance of cables or USB ports, maintaining a clean footprint. HP has also integrated Amazon Alexa support, giving PC users access to the most popular digital assistant in the home and an alternative to Cortana on Windows 10. It all adds up to a unique product for a shifting desktop landscape.

Though the Envy Curve is part of it, the Envy family is better known for its notebook line, which sits between the company’s budget-minded Pavilion and the ultra-high-end Spectre options. Attempting to further drive the premium notebook space, where HP gained 3.2 share points just last quarter, the Envy lineup will see a host of new features and options courtesy of the innovation started with the Spectre.

These design changes include thinner bezels around the screen, shrinking them to get nearer to the edge-to-edge designs that have overtaken smartphones. HP was quick to point out that despite this move, it kept the user-facing camera at the top of the device, even though that means slightly wider bezels on that portion, to prevent the awkward angles of other laptops.

Sure View is a technology that makes displays unreadable to anyone viewing off-angle, preserving the privacy of data and content; it is a nice addition stemming from the company’s business line. It can be enabled with the touch of a button and doesn’t require the consumer to semi-permanently apply a stick-on film.

Both the 13-in and 17-in Envy models will be using 1080p displays but have a unique lift hinge that moves the keyboard to an angle more comfortable for typing. HP was able to make the device slim and attractive but still maintain connectivity options users demand by implementing a jawed USB Type-A port.

The convertible Envy x360 13-in and 15-in improvements are similar, and both now offer AMD Ryzen APU processor options, giving consumers a lower cost solution that provides a very different performance profile to the Intel processors.

HP Elitebooks, known for their enterprise capabilities, got some updates this month as well. The new Elitebook 1050 G1 is the first 15-in screen in the segment and includes pro-level features like Sure Click, Sure Start, and Sure View, all aimed at keeping commercial hardware secure and reliable. The Elitebook x360 1030 shrinks the device footprint by 10%, squeezes a 13-in screen into a form factor typical of 12-in models, and has a direct-sunlight-capable display that reaches brightness levels as high as 700 nits, perfect for contractors and sales teams that need to work outdoors.

To be fair and balanced, nothing that was announced is a revolutionary shift, but attempting that in the mature PC space is nearly impossible to pull off. Design shifts like thinner bezels, smaller footprints, brighter screens, and even Amazon Alexa integration do show that there is room left in the tank to tweak and perfect designs. HP is using its engineering and product teams to do just that while trying to maintain the market share position it has earned over Dell and Lenovo.

For those in the space that thought the PC was dead and innovation was over, HP has a few devices it would like to show you.

The DOS Era of Virtual Assistants

Not a week goes by in which some industry-related conversation I have with a major tech company does not include virtual assistants. When talking about assistants like Alexa, Siri, Google Assistant, and Cortana, I often have to remind folks that we are still in the very early days. One of the ways I’ve started doing that is to talk about these assistants as if we are in the DOS era of computer interfaces.

For many of you, the DOS command prompt brings back memories. That was the command-line era of computer interfaces, a time when you had to know exactly what to type to interface with the computer and get what you wanted. It was entirely unintuitive and highly programmatic. This is where we are with virtual assistants today. While it is true the speech interface is vastly more intuitive than the command line, you still have to speak to all the current assistants in a certain way. These assistants are still limited in what they can understand, in a similar way that DOS limited our input options to a set list of commands. When the graphical interface came along, it opened the door for more people to intuitively use a computer. While some training was still necessary, it was a huge leap forward from DOS when it came to computer interfaces.
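
Here is a toy contrast in Python to make the analogy concrete (both “interfaces” are invented for illustration): the DOS-era model accepts only exact commands, while today’s assistants stretch further with keyword spotting, yet still fail on phrasings outside their expected patterns:

```python
# Toy contrast (illustration only): rigid command matching vs. a slightly
# more forgiving "intent" matcher, mirroring the DOS-era vs. assistant gap.
COMMANDS = {"SET ALARM 7AM": "alarm set", "WEATHER": "sunny, 72F"}

def dos_style(utterance: str) -> str:
    # DOS era: the input must match a known command exactly.
    return COMMANDS.get(utterance.upper(), "Bad command or file name")

def assistant_style(utterance: str) -> str:
    # Today's assistants: keyword spotting gets you part of the way,
    # but phrasing outside the expected patterns still fails.
    text = utterance.lower()
    if "alarm" in text:
        return "alarm set"
    if "weather" in text:
        return "sunny, 72F"
    return "Sorry, I didn't understand that."

print(dos_style("weather"))                        # works: exact command
print(dos_style("what's the weather like?"))       # fails: Bad command or file name
print(assistant_style("what's the weather like?")) # works: keyword match
print(assistant_style("do I need an umbrella?"))   # still fails: outside the pattern
```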

While voice interfaces are still met with some skepticism, I believe that when we truly get to a conversational interface, it will be a bigger leap in human-computer interaction than the move from DOS to Windows.

Voice May Take Computer Interfaces Farther Than Touch
I’ve been a strong advocate of touch computing interfaces. My position all along has been that touch interfaces like those on smartphones and tablets are more intuitive and natural to use than mouse-and-keyboard interfaces. In the early days of tablets, many would comment on how their preschool-age kids would naturally pick up an iPad and instinctively be able to start using it. I have often used the phrase “the end of computer literacy classes” because touch interfaces are so easy to use that we don’t need to be taught how to use them the way we did with mouse-and-keyboard computers.

While I still think screen- and visual-based computers will play a key role in the future, I also believe the addition of voice-based interfaces will take computing even farther and make computers even easier to use than touch computing did.

Even just looking at smart speakers like Amazon’s Echo, we hear countless stories of parents describing how their kids are using Alexa and beginning to rely on Amazon’s assistant for common workflows: setting alarms, asking for the weather, getting information, and so on. The parallel stories of how kids, the elderly, and people who are not tech-savvy have taken to touch and voice interfaces are a clear example of the potential of this technology to help consumers do more with computers.

That is the ultimate goal: help consumers do more, be more productive, and stop wasting time on the frustrating and difficult parts of operating a computer. The reality, however, is that we still have a long way to go. To use my analogy, we are still in the command-line stage of voice assistants. This is why the strides Google, Amazon, Microsoft, and Apple make year over year to render their assistants more conversational, and thus eliminate the limitations on how we interact with them, are paramount to their futures.

Will the Gig Economy Help Moms Have It All?

This past Sunday was Mother’s Day in the US and across many countries in Europe, including my home country of Italy. As I was waking up in a hotel room miles away from my family, I felt a whole bunch of emotions: sad I was not home, blessed to have a husband who supports me in my career, and extremely lucky to be in a job I love.

Thanks to jet lag, I had plenty of time to think about my fellow moms and how much things have changed since I was growing up and my mom was a working mom. At the same time, some of the stigma around working moms is still there. Whether you are working, as my mom did, to contribute to the family income, or because you want a career, some people still see you as not putting your children first. And if you are taking a break to be with your kids in their foundation years, you are dealing with the judgment of not putting yourself first. I thought of my circle of fellow moms and made a mental list of how many successful businesswomen I know, how many are the primary breadwinner in the family, and how many, now that the kids are grown up, would like to get back to work. It is a good, healthy mix of women who, no matter where they sit in the group, support one another.

The “Motherhood Penalty”

Whether you are a working mom or a mom who took time off to be with her kids as they grew up, I am sure you have stories about management taking for granted that you would not be giving one hundred percent after you gave birth, or assuming that if you were leaving your career you had never been committed to it in the first place. If you have been lucky enough to have a supportive work environment, it might come as a surprise to hear about the “motherhood penalty.”

Data shows that being a woman is only part of the pay gap we currently see across so many segments. The Institute for Fiscal Studies has found that before having a child, the average female worker earns 10% to 15% less per hour than a male worker; after childbirth, that gap widens steadily, reaching 33% after around 12 years. This has financial and economic implications but also emotional ones. The “motherhood penalty” helps to explain why women overall make 81 cents for every dollar a man earns. Conversely, research has shown that having children raises wages for men, even when correcting for the number of hours they work.

What is the Gig Economy?

Simply put, the gig economy is the one evolving beyond the constraints of conventional work models. Services enabled by the app economy have opened up opportunities for people to earn a living in a much more flexible work environment. While in Silicon Valley many who participate in the gig economy do so out of necessity, to afford the high cost of living (a fact that has drawn heavy criticism and charges of exploitation), the concept is indeed one that opens up opportunity.

According to a recent study by the McKinsey Global Institute, up to 162 million people in the United States and Europe are involved in some form of independent work. Members of the gig economy, from ride-sharing to food delivery to dog walking and child care, are not employees of the companies that pay them; they are independent contractors. Instead of working 9-to-5 for a single employer, they leverage their advantages to maximize their earning opportunities while balancing work around their personal needs.

While, of course, many jobs in the gig economy do not include traditional benefits, they might be the best fit for moms returning to work.

Be Your Own Boss

Mothers returning to work are chronically underpaid and undervalued for their experience and ability. PwC’s November 2016 report on women returning to work found that nearly 65% of returning professional women work below their potential salary level or level of seniority.

According to new research, that gap hasn’t narrowed at all since the 1980s. And for some women, it’s even increased. The study found that when correcting for education, occupation and work experience, the pay gap for mothers with one child rose from 9% in the period between 1986 and 1995 to 15% between 2006 and 2014. For mothers with two kids, the gap remained steady at 13% and stayed at 20% for mothers with three or more kids. The researchers point to a lack of progress on family-friendly policies in the United States, such as paid parental leave and subsidized childcare. Other countries, including Sweden, have narrowed their gender pay gaps after instituting such laws.

Considering how little regulation and companies’ attitudes toward child care and parental leave have progressed, and accounting for the changes the workplace is undergoing to appeal to younger millennials, getting back in the game must be daunting for those moms who took a break from their careers. The gig economy might offer them the best opportunity, not just in terms of flexibility but also for rediscovering what they want to do and earning the best money.

From marketing to payment methods to service delivery, technology advancements can make being your own boss much easier than it ever was. This option, of course, does not mean big companies are off the hook when it comes to improving the support moms get at work or closing the pay gap. It simply means that women returning to work after having kids no longer have to settle for a job that is not adequately paid or does not help them fulfill their full potential.

Adding AI and ML to Speech-to-Text and Language Translation Is a Game Changer

At Google I/O, Sundar Pichai showed off an AI-based technology called Duplex, in which a computer called a restaurant to make a reservation in a natural human voice and interacted directly with the person taking down the reservation.

This particular AI announcement got a lot of coverage at Google I/O and, given its importance and the technological breakthrough it delivered, it deserved to be highlighted as one of the most important announcements coming out of this year’s Google developer conference. However, for those of us at the conference, it was clear that the theme of AI and machine learning was prevalent in all the products and services Google showed at the event.

The day before Google I/O opened, the company held a special analyst event that focused specifically on AI. A chart shared at this event underlined the fact that AI is used across all Google products.

While the media mostly highlighted things like Duplex and the way AI is used in Android P, servers, and G Suite, there were two other things shown at the analyst event that I consider potential game changers.

The first is how AI is applied to voice-to-text translation. Google’s goal is to reach 99% accuracy using AI and ML over the next few years. That said, the demos they showed us, in which they dictated comments into various G Suite applications, were pretty accurate even now. They also gave us a more in-depth dive into the new AI feature called Smart Compose, where a person writes a sentence and it suggests the next sentence based on the first sentence’s context. Smart Compose will work with either keyboard or voice input.
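
As a loose illustration of the idea behind next-text suggestion (Smart Compose itself relies on large learned language models, not a lookup table like this toy), here is a bigram-based suggester in Python:

```python
from collections import Counter, defaultdict

# Toy corpus; a real system learns from billions of sentences.
corpus = "thanks for the update . thanks for the invite . see you at the meeting".split()

# Count which word follows which across the corpus.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def suggest(prev_word: str) -> str:
    # Suggest the most frequently seen continuation of prev_word.
    counts = bigrams.get(prev_word)
    return counts.most_common(1)[0][0] if counts else ""

print(suggest("thanks"))  # "for": the only continuation seen in this corpus
print(suggest("the"))     # "update": first among equally frequent continuations
```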

We have had various voice recognition products, such as Dragon Dictate, on the market for years. But these programs relied on localized software and took advantage of the processing power available at the time of each release. These programs did get better over the years, but if you add AI and ML to this problem, the accuracy rate is bound to improve further.

Google understands the importance of speech-to-text as it relates to our everyday lives. An accurate voice-to-text interface is critical when answering a message while driving. It is a meaningful way to respond to an email or text message on wearables or smartphones. It will eventually become a valuable input method for mixed reality glasses, where voice is part of the navigation process and voice-to-text is needed for various types of AR applications.

The second is how AI and ML are used in Google’s translation programs. Most of us are familiar with Google Translate by now, but Google said that applying AI and ML to the translation program in future versions will increase the accuracy rate dramatically. Google Translate does a good job today but will deliver more precise translations thanks to Google’s own AI- and ML-based technology.

But where AI-based translation could be genuinely transformative is with actual language translation in real time. As an international traveler who speaks only English, I would find this type of translation a godsend when communicating with locals in Japan, China, South Korea, France, Spain, Greece, Italy, and the other places I travel to. There are some handheld devices on the market today that attempt to translate what you say into a local language, but if you have ever used one, you know they do not work well and have a lot of limitations regarding what they understand and can translate.

Google has its eye on this type of translation too, and it is safe to say that with its AI- and ML-based research going strong in this area of text and voice translation, we could see some real breakthroughs in more accurate language translation on Android phones shortly. Apple also has AI and ML research going on around various aspects of voice and text translation, and it too, along with potential partners, could deliver a mobile language translation solution on iOS someday.

AI and ML will have a dramatic impact on voice-to-text translation, and the most prominent effect may be in the UI of AR, VR, and mixed reality glasses, where voice is part of the information input system. Personally, the language translations excite me the most, as they would make my world travels easier since I could speak with locals and have our conversation translated in real time.

As I stated above, Google is integrating AI and ML into all of its products, and they will impact everything the company brings to market. But voice-to-text and language translation may be two of the most practical ways to use AI and ML to enhance our digital lifestyles.

Device Independence Becoming Real

For decades, compute devices and the tech industry as a whole were built on a few critical assumptions. Notably, that operating systems, platforms, applications, and even file formats were critical differentiators, which allowed companies to build products that offered unique value. Hardware products, software, and even services were all built in recognition of these differences and, in some instances, to bridge or overcome them.

Fast forward to today, and those distinctions are becoming increasingly meaningless. In fact, after hearing the forward-looking strategies of key players like Microsoft, Google, and Citrix at their respective developer and customer events of the past week, it’s clear the world of true device and platform independence is finally becoming real.

Sure, we’ve had hints at some of these developments before. After all, wasn’t browser-based computing and HTML5 supposed to rid the world of proprietary OSes, applications, and file types? All you needed was a browser running on virtually any device, and you were going to be able to run essentially any application you wanted, open any file you needed, and achieve whatever information-based goal you could imagine.

In reality, of course, that utopian vision didn’t work out. For one, certain types of applications just don’t work well in a browser, particularly because of limitations in user interface and interaction models. Plus, it turned out to be a lot harder to migrate existing applications into that new environment, forcing companies to either rebuild from scratch or abandon their efforts. The browser/HTML5 world was also significantly more dependent on network throughput and centralized computing horsepower than most realized. Yes, our networks were getting faster, and cloud-based data centers were getting more powerful, but they still couldn’t compare to loading data from a local storage device into onboard CPUs.

Since then, however, there have been a number of important developments not just in core technologies, but also in business models, software creation methodologies, application delivery mechanisms, and other elements that have shifted the computing landscape in a number of essential ways. Key among them is the rise of services that leverage a combination of both on-device and cloud-based computing resources to deliver something that individuals find worthy of value. Coincident with this is the growing acceptance of paying for software, services, and other information on an ongoing basis, as opposed to a single one-and-done purchase, as was typically the case with software in the past.

Admittedly, many of these services do still require an OS-dependent application at the moment, but with the reduction of meaningful choices down to a few, it’s much easier to create the tools necessary to make the services available to an extremely wide audience. Plus, ironically, we are finally starting to see some of the nirvana promised by the original HTML5 revolution. (As with many things in tech—timing is everything….) Thanks to new cloud-based application models, the use of containers to break applications into reasonably-sized parts, the growth in DevOps application development methodologies, the rise in API usage for creating and plugging new services into existing applications, and the significantly larger base of programmers accustomed to writing software with these new tools and methods, the promise of truly universal, device-independent services is here.

In addition, though it may not appear that way at first glance, the hardware does still matter—just in different ways than in the past. At a device level, arguably, individual devices are starting to matter less. In fact, in somewhat of a converse to Metcalfe’s Law of Networks, O’Donnell’s Law of Devices says that the value of each individual digital device that you own/use decreases with the number of devices that you own/use. Clearly, the number of devices that we each interact with is going up—in some cases at a dramatic rate—hence the decreased focus on specific devices. Collectively, however, the range of devices owned is even more important, with a wider range of interaction models being offered along with a broader means of access to key services and other types of information and communication. In fact, a corollary of the devices law could be that the value of the device collection is directly related to the range of physical form factors, screen sizes, interaction models, and connectivity options offered to an individual.
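
One illustrative way to write the contrast down (my own notation, not the author’s):

```latex
% Metcalfe's Law: a network's value grows with the square of its n nodes.
V_{\text{network}} \propto n^{2}
% The Law of Devices, per unit: each device's value falls as devices multiply.
v_{\text{device}} \propto \frac{1}{n}
% Corollary: the collection's value rises with its diversity D
% (form factors, screen sizes, interaction models, connectivity options).
\frac{\partial V_{\text{collection}}}{\partial D} > 0
```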

The other key area for hardware is the amount and type of computing resources available outside of personally owned devices. From the increasing power and range of silicon options in public and private data centers powering many of these services, to the increasing variety of compute options available at the network edge, the role of computing power is simply shifting to a more invisible, “ambient” type role. Ironically, as more and more devices are offered with increasing computing power (heck—even ARM-based microcontrollers powering IoT devices now have the horsepower to take on sophisticated workloads), that power is becoming less visible.

So, does this mean companies can’t offer differentiated value anymore? Hardly. The trick is to provide the means to interconnect different pieces of this ambient computing background (as Microsoft CEO Satya Nadella said at Build last week, the world is becoming a computer) or to perform some of the specific services that are still necessary to bridge different aspects of this computing world. This is exactly what each of the companies mentioned at the beginning of this article discussed at their respective events.

Microsoft, for their part, described a world where the intelligent edge was growing in importance and how they were creating the tools, platforms, and services necessary to tie this intelligent edge into existing computing infrastructures. What particularly struck me about Microsoft’s approach is that they essentially want to serve as a digital Switzerland and broker connections across a wide variety of what used to be competitive platforms and services. The message was a very far cry from the Microsoft of old that pushed hard to establish its platform as the one true choice. From enabling connections between their Cortana assistant and Amazon’s Alexa in a compelling, intriguing way, to fully integrating Android phones into the Windows 10 experience, the company was clearly focused on overcoming any kinds of gaps between devices.

At I/O, Google pushed a bit harder on some of the unique offerings and services on its platforms, but as a fundamentally cloud-focused company, it has been touting a device-independent view of the world for some time. Like Microsoft, Google also announced a number of AI-based services available on Google Cloud that developers can tap into to create “smarter” applications and services.

Last, but certainly not least, Citrix did a great job of laying out the vision and effort it has done to overcome the platform and application divides that have existed in the workplace for decades. Through their new Citrix Workspace app, they presented a real-world implementation of essentially any app, running on any device from any location. Though that concept is simple—and clearly fits within the device independence theme of this column—the actual work needed to do it is very difficult. Arguably, the company has been working on delivering on this vision for some time, but what was compelling about their latest offering was the elegance of the solution they demonstrated and the details they made sure were well covered.

A world that is less dependent on individual devices and more dependent on a collection of devices is very different than where we have been in the past. It is also, to be fair, not quite a reality just yet. However, it’s become increasingly clear that the limitations and frustrations associated with platform or application lock-in are going away, and we can look forward to a much more inclusive computing world.

The Missing Link in VR and AR

VR and AR are big buzzwords in the world of tech these days. At Tech.pinions we have been covering these technologies for over five years and have shared solid perspectives on significant AR and VR products when we feel they move the technology forward.

All of our team has tried or tested most of the available AR and VR products on the market today, and at least in my case, I see their value at the moment only in vertical markets. This is especially true for VR. Apple and Google have tried to bring AR to a broader audience, but here too, AR delivered on a smartphone is still a novelty, most at home in games like Pokémon Go and in some vertical markets.

As I have written in multiple columns over the last year, I am excited about AR, especially after seeing some cool AR applications in the works that should be out by the end of the year. Although they are still delivered via a smartphone, ARKit and ARCore are giving software developers the tools to innovate on iOS and Android, and in that sense I see the possibility of broader interest in AR later this year. I also expect Apple to make AR one of the highlights of its upcoming developer conference in early June.

However, I feel the most effective way to deliver AR will be through some form of mixed reality glasses. While the intelligence to power these AR glasses may still come from the smartphone, the glasses will be an extension of the smartphone screen and deliver a better way to view AR content than a smartphone screen alone can provide.

I see glasses as the next evolution of the man-machine interface and a technology that will be extremely important to billions of people over the next ten years. In my recent Fast Company column, I shared how I believe Apple will tackle the AR opportunity and how it could be the company that defines the AR-based glasses market.

But if you have used any of the VR or mixed reality headsets or glasses available so far, you understand that interacting with the current models is difficult: you have to use a joystick or handheld wand to communicate with the features or actions in any given VR or AR application. Even more frustrating, these handheld interfaces do not yet deliver pinpoint precision, which often makes it difficult to activate an AR or VR application’s functions.

I believe there are three high hurdles to clear before AR is valuable to, and accepted by, mass-market users. The first is creating the types of glasses or eyewear that are both fashionable and functional. Today’s VR and AR glasses or goggles make anyone who uses them look like a nerd. In our surveys, this type of eyewear is panned by the people we have talked to about what is acceptable to wear for long periods of time.

The second most significant hurdle will be how the wireless technology in smartphones is designed to communicate with what I call “skinny glasses,” where the glasses rely almost entirely on the smartphone for their intelligence. Getting the wireless connections right and applying the smartphone’s functions and intelligence to these glasses will be difficult, but it is critical if we want the types of AR glasses that people will actually wear rather than ones that make them stand out as some tech dweeb.

But the missing link that gets little attention when we talk about VR and AR is the way we will interact with these glasses to get the kinds of functions that make these headsets valuable. Undoubtedly voice commands will be part of the interface solution, but there are too many occasions where calling out commands will not be acceptable, such as in a meeting, at church or a concert, or in a class, to name just a few.

Indeed, we will need other ways to activate applications and interact with these glasses, which will most likely include gestures, object recognition via sensors, and virtual gloves or hand signals such as those Magic Leap created to navigate its specialized mixed reality headset.

However, I believe this is an area ripe for innovation. For example, a company called TAP just introduced a Bluetooth device that fits over four fingers and lets you tap out actual words and characters as a way to input data into existing applications such as Word, or eventually virtual applications on a mixed reality headset.

The folks from Tap came by and gave me a demo of this product, and I found it very interesting. There is a real learning curve involved in mastering how to tap out the proper letters and punctuation marks, but they have great teaching videos as well as a teaching game to help a person master this unique input system. Check out the link I shared above to see how it works. They are already selling thousands to vision-impaired folks and others for whom a virtual keyboard like TAP is needed for a specific app or function.

But after seeing TAP, I realized that creating a powerful way to interact with AR apps on glasses should not be limited to joysticks, virtual gloves, voice commands, or gestures. This missing link needs out-of-the-box thinking like TAP’s. Hopefully, we will see many other innovations in this space as tech companies eventually deliver mixed reality glasses that are acceptable to all users and drive the next big thing in man-machine interfaces.

Podcast: Microsoft Build, Citrix Synergy, Google I/O

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the news and impact of Microsoft’s Build developer conference, the Citrix Synergy customer conference, and Google’s I/O developer conference.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Microsoft Pushes Developers to Embrace the MS Graph

Microsoft talked about a lot of interesting new technologies at this week’s Build developer conference, from artificial intelligence and machine learning to Windows PCs that work better with Android and Apple smartphones, to some smart new workflow features in Windows 10. But one of the underlying themes was the company’s push to get developers to better leverage the Microsoft Graph. This evolving technology shows immense promise and may well be the thing that keeps Microsoft front and center with consumers even as it increasingly focuses on selling commercial solutions.

Understanding the Graph
The Microsoft Graph isn’t new; it originated in 2015 within Office 365 as the Office Graph. But at Build the company did a great job of articulating what it is and, more importantly, what it can do. The short version: the Graph is the API for Microsoft 365. More specifically, Microsoft showed a slide that said the Graph represents “connected data and insights that power applications. Seamless identity: Azure Active Directory sign-in across Windows, Office, and your applications. Business Data in the Graph can appear within your and our applications.”

Microsoft geared that language to its developer audience, but for end users, it means this: whenever you use Microsoft platforms, apps, or services, or third-party apps and services designed to work with the Graph, you’ll get a better, more personalized experience. And that experience will get even better over time as Microsoft collects more data about what you use and how you use it.
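
For developers, the entry point is deliberately simple: the Graph is exposed as one REST surface. Here is a minimal sketch of what calling it looks like. The /me and /me/messages routes are real Graph v1.0 endpoints, but the token acquisition is elided and the variable names are purely illustrative.

```python
# Minimal sketch: querying the Microsoft Graph REST API over plain HTTP.
# Assumes an OAuth 2.0 token has already been obtained from Azure Active Directory.
import requests

GRAPH_BASE = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<bearer token from Azure Active Directory>"  # assumption: acquired elsewhere
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# One identity across Windows, Office, and your own applications.
me = requests.get(f"{GRAPH_BASE}/me", headers=headers).json()
print(me.get("displayName"))

# The same surface exposes Outlook mail, OneDrive files, Teams, and more.
inbox = requests.get(f"{GRAPH_BASE}/me/messages?$top=5", headers=headers).json()
for message in inbox.get("value", []):
    print(message.get("subject"))
```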

The Microsoft Graph may have started with Office, but the company has rolled it out across its large and growing list of properties. Just inside Office 365 there are SharePoint, OneDrive, Outlook, Microsoft Teams, OneNote, Planner, and Excel. Microsoft’s Azure is a cloud computing service, and Azure’s Graph-enabled Active Directory controls identity and access management within an organization. Plus, there are Windows 10 services, as well as a long list of services under the banner of Enterprise Mobility and Security. And now that the company has rolled the Graph into many of its own products, it is pushing its developers to begin utilizing it, too.

Working Smarter, and Bridging Our Two Lives
The goal of the Microsoft Graph is to drive a truly unique experience for every user. One that recognizes the devices you use, and when you use them. One that figures out when you are most productive and serves up the right tools at the right time to help you get things done. One that eventually predicts what you’ll need before you need it. None of it is quite as flashy as asking your digital assistant to have a conversation for you, but these are the types of real-world advances that technology should be good at.

What’s also notable about the Microsoft Graph is that while it focuses almost entirely on work and productivity, these advances should help smooth friction outside of work, too. If we work smarter, perhaps we can work less. Inside this is Microsoft’s nod to the fact that while it still has consumer-focused businesses such as Xbox, most people will interact with its products in a work setting. That said, most of us have seen the lines between our work and non-work life blur, and the Graph should help drive continued and growing relevance for Microsoft as a result.

Don’t Forget Privacy
Of course, for all this to work Microsoft must collect a large amount of data about you. In a climate where people are starting to think long and hard about how much data they are willing to give up to access next-generation apps and services, this could be challenging. Which is why throughout Build Microsoft executives including CEO Satya Nadella made a point of driving home the company’s stance on data and privacy. Nadella called privacy a human right, and in discussing the Microsoft Graph both on stage and behind closed doors, executives Joe Belfiore and Kevin Gallo noted that this information ultimately belongs to the end user and it is up to Microsoft to keep it private and secure.

The privacy angle is one I expect to see Microsoft continue to push as it works to leverage the Graph in its ongoing battles with Google and Facebook. (I expect Apple will hammer home its stance on the topic at the upcoming WWDC, too.) In the meantime, it will be interesting to see if Microsoft’s developers buy into the promise of the Graph, and how long it will take for their subsequent work to come to fruition. By next year at this time, we may be hearing less about the potential of this technology, and more about end users enjoying the real-world benefits.

News You Might Have Missed: Friday, May 11, 2018

Spotify and Hate Content

To identify hate content, Spotify said that it has partnered with a range of rights advocacy groups, such as The Southern Poverty Law Center, The Anti-Defamation League, Color Of Change, Showing Up for Racial Justice (SURJ), GLAAD, Muslim Advocates, and the International Network Against Cyber Hate. Spotify has also created an automated monitoring tool called Spotify AudioWatch to find content already on its platform that has been flagged as hate content around the world. Spotify also said they don’t believe in censoring content because of an artist’s behavior, but they want their editorial decisions to reflect their company values.

Via VentureBeat

  • This comes as a response to the Time’s Up movement, which called for the label RCA, Spotify, Apple Music, and Ticketmaster to drop R Kelly as a client and his music from their platforms. Ticketmaster did remove R Kelly from a concert in Chicago without explaining why, and now Spotify is saying that while it will not remove his music from the platform, it will not be promoting it or adding it to playlists.
  • While Apple has not commented publicly, sources have told me that R Kelly’s music has not been promoted nor added to any playlist by the Apple Music algorithms for quite some time. Having asked for an R&B playlist a couple of times today I can report that no R Kelly song was included.
  • If you think that deciding what constitutes hate speech on social media is hard, think how much harder it is to do so with artistic content, be it music or video.
  • Where do you draw the line on songs that sexualize women, incite violence, or criticize leadership? Context plays a big role in determining what is acceptable and what is not. Words used by an artist of a specific race or gender might be OK but, when used by others, can be an insult.
  • And where do you draw the line on what is acceptable behavior from an artist? Sure, there are things that are clearly wrong, with no shades of grey, but there are many that are not. Hopefully, Spotify will not take a cue from the NFL to determine what is right or wrong.
  • It seems to me that Spotify acted more out of concern for the possible impact on its brand if it were seen as not acting on this than anything else, which is why I do not expect its AudioWatch monitoring feature to result in any real censorship.

Google Maps Goes Beyond Directions

During Google I/O, Maps was revamped not only to add features within navigation, like using AR to determine the direction you need to take when walking up to an intersection, but also to become a one-stop shop for discovering content. Here are some of the new features:

Group Planning: Say you are searching for a restaurant to have dinner with friends. You can now long-press each result in the new Maps app to create a shortlist. After your first long-press, a badge pops up on the side of your screen, with a number indicating how many items have been added to the list. Once you’re done picking your candidates, you can tap the badge to bring up your list. From that page, you can share your selection with your friends, and they can upvote and downvote each option.

Explore: When you open up the app, you’ll see lists of things people typically look for in the area you’re in. Tapping each of these cards brings up a list of related places and a progress bar tracking how many items on the list you have been to. There will also be recommendations for popular places under a section called Trending This Week, which Google curates by aggregating content from trusted publishers, algorithmic data from its existing Trending Weekly list, and where users have been visiting.

For You: The new Maps will have a For You tab that pulls up a page with recommendations tailored to you based on your preferences. For You will recommend events and activities as well as restaurants. Another personalized feature is your Match score. If you have searched for new restaurants and can’t decide between a 4.2-star-rated spot and a 4.3-rated one, your Match score will be particularly useful. Google’s algorithm uses your preferences, previous reviews, and ratings, but also the places you frequent, to understand your taste.
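
To make the idea concrete, here is a toy sketch of how such a blended score could be computed. To be clear, this is not Google’s actual algorithm; the weights, fields, and values are invented purely for illustration.

```python
# Toy illustration of a "match score": a weighted blend of taste fit,
# the place's rating, and whether the user already frequents the area.
def match_score(user, place):
    # user["prefs"]: cuisine preferences inferred from reviews/visits (0..1 weights)
    # place["tags"]: cuisines the place is known for; place["rating"]: star rating
    taste_fit = sum(user["prefs"].get(tag, 0.0) for tag in place["tags"]) / max(len(place["tags"]), 1)
    visits_boost = 0.1 if place["neighborhood"] in user["frequented"] else 0.0
    return round(100 * (0.6 * taste_fit + 0.3 * place["rating"] / 5 + visits_boost))

user = {"prefs": {"thai": 0.9, "pizza": 0.4}, "frequented": {"SoMa"}}
place = {"tags": ["thai"], "rating": 4.3, "neighborhood": "SoMa"}
print(match_score(user, place))  # prints 90
```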

Via Engadget 

  • With AI being such a focus for Google across its entire business, it was no surprise that AI came to play a big part in the updates to one of the most popular Google services: Maps.
  • Google is using AI to provide additional value for its map content and to the data it has about its users. This is a very smart move for two reasons. First, of course, it makes Google Maps stickier for its users. Second, if using data for its algorithm provides valuable suggestions to its users it will increase their propensity to continue to share data with Google both in Maps and outside of Maps.
  • Starting to use AI effectively will put pressure on apps such as Yelp and TripAdvisor, which will not have the same level of information about the user as readily available. Even though you can input your preferences in these apps, the results you get are a little hit and miss.
  • What I particularly like about Google’s approach with these new features is that they are trying to address real-life frictions. If you have ever organized a group dinner, you know exactly how useful it will be to pull a list of restaurants together in Maps, where you can also assess their location relative to where you are, and then have your friends vote on which one they want to go to. This saves a considerable amount of the effort and time spent today going back and forth on messaging to come to a decision.
  • One problem I see with all this added information is that the Maps app might become overwhelming for some users. Google is using tabs for these new features which should keep it simple to navigate but this is certainly something to keep in mind.
  • The new AR feature that lets Maps users know which direction to take when they are walking is another example of Google focusing on solving real-life problems. The concept is not new. Nokia, when it still owned the mapping business that came to be called Here, had a Compass feature that could tell you which direction you were facing. What is different is the use of AR, which makes the experience much more immersive. Google Maps will now display a superimposed arrow through the camera on your phone that points in the direction you need to take. There is even a cute factor, with a fox leaping from the arrow off your screen to show you the way.
  • I think this feature has the potential to make AR mainstream and really help people understand the value of the technology. When I saw the demo at Google I/O, I equated this feature to adding mobile payment support to transit: use something every day and get value every time you do, and you will use it more and more.
  • It goes without saying that all these enhancements put pressure back on Apple Maps, which had closed some of the gaps with Google Maps after WWDC last year. Apple users are generally very comfortable sharing their information with Apple, which means that Apple could also use data to deliver more tailored information to its Maps users thanks to ML and AI. We will have to wait until WWDC to see if this is indeed the case.

Google and the Machine Learning Product

Google has always been a company well positioned to capitalize on the machine learning/artificial intelligence age. A central service whose sole focus is to organize the world’s data and make it easily accessible is a perfect match for deep learning. Two companies have benefitted the most from the last few years of breakthroughs in deep learning algorithms: Google and NVIDIA.

It makes sense in this machine learning era to think of Google’s product as being, broadly, artificial intelligence. That product manifests itself as search, email, video like YouTube, Google Photos, Google Assistant, and many other current and future products and services. Google is essentially THE AI/ML company. I say that because I don’t see any other company that is solely oriented around AI/ML. Focus, and more importantly how a company focuses its R&D and CapEx, is very telling about its priorities. All other companies with aspirations in AI/ML are generally treating it as a complementary feature for other parts of their business or products. But for Google, AI/ML is the product, and because of that, the vast majority of their spending is focused on having the best machine learning tools at their disposal. Given Google’s absolute focus on AI/ML, it is very tough to see any other company being better at general purpose artificial intelligence than Google.

Establishing Trust Remains the Center
If you caught some of the news from Google I/O, you might have seen that one of the more talked-about demos was one where Google Assistant can simulate a human making a call to book a reservation or an appointment on your behalf. This demo was pretty mind-blowing, and as I watched it I couldn’t believe it was real. It was also met with mixed reactions, and for good reason. While the promise of AI has a lot to offer in helping us be more productive, it will also take us a bit of time to wrap our minds around the whole experience. Which leads to the question of trust.

AI is inevitable, but whose AI we trust to be our computing companion is the central question of the next decade. It may be somewhat easy to think that, because of Google’s business model, most people will not trust Google. I’m not sure that is entirely true. There is a dramatic difference between how the mainstream public perceives Google vs. Facebook around the topic of privacy. In our own primary study, and I’ve seen similar results from other companies’ research, Google ranks much higher on the trust scale than Facebook. In fact, in our research study, Google and Amazon were nearly tied for third place among all the companies we tested, and Facebook was toward the bottom.

One of the things Google has going for it is that its services generally add much more value, or are perceived as adding more value and convenience, than Facebook’s. This is an important point in this conversation. It has long been argued that most consumers don’t mind giving up some privacy for convenience: essentially receiving what they perceive as great value for free in exchange for some information. The big difference between Google and Facebook, in my opinion, is that consumers see more value in Google’s services holistically than they do in Facebook’s. Which means they will be more tolerant, and perhaps more trusting, that Google will not abuse their information. Google also differs from Facebook in that what you use Google’s services for is relatively private. What I search for is not in the public domain for everyone to see, whereas most of what I do on Facebook is. This is a distinct difference in the services and in a consumer’s mindset around using and trusting them.

Therefore, I believe a consumer’s tolerance for Google services, and openness to trading some privacy for convenience, will be much higher with Google than with other companies with a similar business model. This is a primary reason I think Google’s AI assistant is a force to be reckoned with when it comes to assistant platforms.

Competitive Conclusions
While I maintain Google will be the best at general purpose AI, that does not mean Google’s AI will be the only one we use in our daily lives. While predicting the future is impossible, it does seem highly likely we will use several assistants in our lives for what each does best. We are still a long way off from this reality, and the initial implementations and coordination of assistants will be awkward and clunky at best. But how these digital assistants like Alexa, Siri, Cortana, and Google Assistant collaborate and work on behalf of the needs and requests of the consumer will be critical. These assistants are platforms, and humans have already managed using multiple platforms in their daily computing lives, so it makes sense they can handle using multiple assistants to get stuff done.

But a key point about all of these assistants that I think is important to understand is the positioning Google used: the ultimate goal of Google Assistant is to help you get stuff done. That mission statement is not unique to Google Assistant, but I did like how Google positioned it as something that can help you get time back.

This, I think, is still the ultimate direction the technology industry is slowly heading and where I think AI will transform how we use computers. We do not, and should not, need to sit in front of these screens all day to be productive. There have to be better ways for technology to increase our productivity, save us time, and give us hours of life back.

When it comes to future platforms and buzzwords like augmented reality and voice and visual computing, this is the direction that I think will be paradigm-shifting in how we live our lives. Technology can enhance our lives and be less of a burden. It can work more on our behalf, allowing us to spend less time wrapped up in our existing workflows and to build new ones that give us more of our life back. That’s a future of technology I’m all for.

Google creates some spin with TPU 3.0 announcement

During the opening keynote of Google I/O yesterday, the company announced a new version of its Tensor Processing Unit, TPU 3.0. Though details were incredibly light, CEO Sundar Pichai claimed that TPU 3.0 would have “8x the performance” of the previous generation and that it was going to require liquid cooling to get to those performance levels. Immediately, much of the technical media incorrectly asserted an 8x architectural jump without thinking through the implications or how Google might have come to those numbers.

For those that might not be up on the development, Google announced the TPU back in 2016 as an ASIC specifically targeting AI acceleration. Expectedly, this drew a lot of attention from all corners of the field, as it marked not only one of the first custom AI accelerator designs but also came from one of the biggest names in computing. The Tensor Processing Unit targets TensorFlow, a library for machine learning and deep neural networks developed by Google. Unlike other AI training hardware, that does limit the TPU’s use case to customers of Google Cloud products and to TensorFlow-based applications.

They are proprietary chips and are not available for external purchase. Just a few months ago, the New York Times reported that Google would begin offering access to TPUs through Google Cloud services. But Google has no shortage of use cases for internal AI processing that TPUs can address, from Google Photos to Assistant to Maps.

The liquid cooling setup for TPU 3.0

Looking back at the TPU 3.0 announcement yesterday, there are some interesting caveats to the claims and statements Google made. First, the crowd cheered when it heard this setup was going to require liquid cooling. In reality, this means that either there has been a dramatic reduction in efficiency with the third-generation chip OR the chips are being packed much more tightly into these servers, without room for traditional cooling.

Efficiency drops could mean that Google is pushing the clock speed of the silicon beyond the optimal efficiency curve to get that extra frequency. This is a common tactic in ASIC designs to stretch the performance of existing manufacturing processes or close the gap with competing hardware solutions.

Liquid cooling in enterprise environments isn’t unheard of, but it is less reliable and costly to integrate.

The extremely exciting performance claims should be tempered somewhat as well. Though the 8x improvement and the statement of 100 PetaFLOPS of performance are impressive, they don’t tell us the whole story. Google was quoting numbers for a “pod,” the term the company uses for a combination of TPU chips and supporting hardware that consumes considerable physical space.

A single Google TPU 3.0 Pod

TPU 2.0 pods combined 256 chips, but for TPU 3.0 it appears Google is collecting 1024 into a single unit. Besides the physical size increases that go along with that, this means the relative performance of each TPU 3.0 chip versus TPU 2.0 is about 2x. That’s a sizeable jump, but not unexpected in the ever-changing world of AI algorithms and custom acceleration. There is likely some combination of clock speed and architectural improvement that equates to this doubling of per-chip performance, though with that liquid cooling requirement I lean more towards clock speed jumps.
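
A quick back-of-envelope check shows where that roughly 2x figure comes from. Google has previously quoted TPU 2.0 pods at 11.5 PFLOPS; the per-chip numbers below are estimates derived from the pod-level claims, not Google-published specs.

```python
# Rough per-chip math behind the pod-level "8x" claim.
tpu2_pod_pflops, tpu2_chips = 11.5, 256    # stated TPU 2.0 pod performance
tpu3_pod_pflops, tpu3_chips = 100.0, 1024  # claimed TPU 3.0 pod, with 4x the chips

per_chip_2 = tpu2_pod_pflops * 1000 / tpu2_chips  # ~45 TFLOPS per chip
per_chip_3 = tpu3_pod_pflops * 1000 / tpu3_chips  # ~98 TFLOPS per chip
print(f"pod-level gain: {tpu3_pod_pflops / tpu2_pod_pflops:.1f}x")  # ~8.7x
print(f"per-chip gain:  {per_chip_3 / per_chip_2:.1f}x")            # ~2.2x
```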

Google has not yet shared architectural information about TPU 3.0 and how it has changed from the previous generation. Availability for TPU 3.0 is unknown, but even Cloud TPU (using TPU 2.0) isn’t targeted until the end of 2018.

Google’s development in AI acceleration is certainly interesting and will continue to push the industry forward in key ways. You can see that exemplified in NVIDIA’s integration of Tensor Cores in its Volta GPU architecture last year. But before the market gets up in arms thinking Google is now leading the hardware race, it’s important to put yesterday’s announcement in the right context.

Why Apple and Microsoft are Moving to Become Powerhouses in Services

In a recent column, I wrote about a new great divide coming out of Silicon Valley, in which one side believes the best business model is to sell products and services, while the other side gives away services that are subsidized by advertising.

As I pointed out in that article, both business models are valid, but the one that relies on advertisers for revenue has the biggest challenge when it comes to protecting customers’ privacy, as advertisers need as much data as possible on the prospective customer to more accurately target their ads.

Although the advertising-supported model is exciting and still important, it is becoming clear that those selling products and services, such as Apple, Amazon, and Microsoft, have perhaps the most sustainable models.

Apple’s earnings call on May 1, 2018, was quite important in that it showed that Apple’s services revenue during the last quarter was $9.2 billion; if services were a stand-alone company, it would be a Fortune 300 company. This was a 31% increase in services revenue from the same quarter a year ago. To be clear, 62% of Apple’s revenue still comes from iPhone sales. But overall sales were strong in the last quarter, and Apple had record sales and revenue in iPhones, Apple Watch, and especially services.

The chart below puts Apple’s latest quarter sales and revenue in the context of their overall earnings.

I don’t think I can emphasize enough that Apple is making services a much more critical part of its revenue mix and the second cornerstone of its profitability. More importantly, the actual products they create are extremely important in that they become the mechanism to deliver these services. You can bet that Apple will double down on existing hardware and very likely introduce even more devices that funnel services to customers before long.

At the moment, Apple leads the pack by providing a complete ecosystem of hardware, software, and services, which is what makes them hard to compete with in this new age where services are becoming increasingly important to the overall user experience.

But if you have been watching Microsoft’s strategy over the last four years, you know that they are rapidly moving to be more of a services company too. In their case, they use their partners’ hardware as the funnel to deliver apps and services like Office 365 and, more recently, Microsoft 365, in which their OS, applications, and other services encompass a new bundle of products for which they charge an annual fee.

This transition to a services company is as essential for Microsoft as it has been for Apple. And by creating a dedicated sandboxed store in which only approved apps are allowed, Microsoft gains more control over the security of these apps and services, as well as a new set of revenue streams.

While this new strategy is good for Microsoft, it is not necessarily good for their partners. In the past, PC vendors had the freedom to bundle software and even security services on top of Windows, from which they received legitimate revenue that helped them make money on PCs with small margins. But with Microsoft forcing all apps and services through its sandboxed store, partners are now blocked in many cases from delivering their value-added software and services outside of this store.

To say that PC vendors are not happy with this new Microsoft program would be an understatement. With PC margins shrinking, and now being blocked from adding additional value without it going through the store and paying Microsoft part of that transaction, these PC vendors are starting to look for other ways to make extra money on a PC sale.

Unless Microsoft changes its tune and becomes more lenient in allowing vendors to add services and make additional money that goes to them and not Microsoft, I see these vendors being more aggressive in supporting an OS platform that does allow for this type of freedom.

That OS platform will most likely be Google’s Chrome OS. What PC vendors have found is that Google’s G Suite is getting strong interest and adoption in some big enterprise accounts like Salesforce.com, and they see an opportunity to use Chrome OS to regain some control, with the ability to add their own apps and services to a laptop platform. How widespread this move will be is up for debate, but you can expect the big PC vendors to more actively promote Chromebooks as a real alternative to Windows-based PCs, to regain at least some of their ability to make extra money on laptops in light of this problem they now have with Microsoft. They already have Chromebooks in education, but a move to offering them in the enterprise is what makes this interesting.

While there is still money to be made in hardware, the number of PCs and laptops sold each year continues to shrink, and this move by Apple, Microsoft, and others to add services to their offerings is critical for their long-term survival. That is why you should look for more and more subscription-based services from these companies as they steer more and more of their R&D in this direction.

Microsoft Build & Google I/O: Compare and Contrast

It would make an excellent title for a school essay, don’t you think? As much as this week was a bit of a logistical nightmare due to the two developer conferences overlapping, it was a great opportunity to directly compare the approaches these two companies are taking on what has become a core area of their business: AI.

There was a lot covered in both companies’ keynotes, but for me, there were three key areas where similarities and differences were clear.

Assistants as Active Players in Human Interactions

In very different ways, assistants on both stages played an active role in human interactions. These assistants were not just voices we ask questions of; they were providing proactive assistance in particular situations.

Microsoft showed a compelling workplace demo where Cortana was able to greet meeting participants by name as they entered the room or joined the call. With visual support (another common theme) she was able to help the conversation by taking notes for a hearing-impaired participant, scheduling a to-do list, and setting up reminders.

Google showed a more conversational assistant that now can perform linked requests and continues to improve its voice to sound more human, even giving you the choice of John Legend as one of the voices. The demo that stole the show and pushed our current idea of the assistant to the next level was Google Duplex.

With Duplex, Google Assistant helped find a hairdresser and a restaurant and then placed a call to book an appointment. This feature will be available later in the summer but had the whole audience divided between “wow” and “eek.” There was also some skepticism, given how basic Google Assistant at home still feels in comparison.

The exchange with both the hairdresser and the restaurant was very natural, but when you listen to it a few times you can easily spot that the Assistant was picking up the key points of the task, like date and time. It was a focused interaction, which is very different from what we have at home, where we can invoke Google Assistant to ask anything from the weather to how to make fresh pasta.

Some people were apparently uncomfortable with the call because the people on the other end of the line were unaware they were talking to a bot. In the hairdresser case, Google Assistant did say she was making an appointment for her client, which I thought was a nice way to let the receptionist know she was talking on someone else’s behalf. It is tricky in this initial phase to balance disclosure with having the service used and accepted. I am sure most people would hang up if the conversation started with: “Hi, I am Google Assistant.”

I found it interesting, as I spoke to others about this demo, that the reason they wanted to be made aware they were talking to Google Assistant was that they wanted to know what data was captured and shared, not necessarily because they did not want to interact with a bot. Google did not talk about that aspect on stage.

This area is still so new to us that companies will need a lot of trial and error to figure out what we are comfortable with. Already today, Microsoft’s Cortana can schedule a meeting for you in Outlook by contacting the people who need to participate and finding a time that works for everybody. The email that is exchanged lists Cortana as the sender, but people seem more comfortable with that, maybe because it is confined to email and does not set off our “rise of the machines” alarm bells.

There is no question in my mind, though, that agents, assistants, bots, call them what you like, will become more active players in our lives as they get smarter and improve their context awareness. We humans will have to learn how to make them part of that mix in a way that is socially acceptable not only to us but also to others involved.

Privacy and Ethics

Since the Facebook Cambridge Analytica debacle, the level of scrutiny tech companies are under when it comes to privacy has increased. This, coupled with the General Data Protection Regulation (GDPR) rollout in Europe, has also raised expectations for the level of transparency displayed by companies in this area. There was a definite difference between how Microsoft and Google addressed privacy on stage. Microsoft made a clear and bold statement about safeguarding users’ privacy, with CEO Satya Nadella saying:

“We need to ask ourselves not only what computers can do, but what computers should do,” he said. “We also have a responsibility as a tech industry to build trust in technology.”

He added that the tech industry needs to treat issues like privacy as a “human right.” Nadella’s statement echoed what Apple’s CEO Tim Cook said during a recent MSNBC interview:

“We care about the user experience. And we’re not going to traffic in your personal life. I think it’s an invasion of privacy. I think it’s – privacy to us is a human right.”

Their stand should not be a surprise as both companies share a business model based on making a profit directly from the services and products they bring to market.

At Google I/O there was no explicit mention of privacy, but Sundar Pichai stated at the very start of the keynote:

“It’s clear that technology can be a positive force, but we can’t just be “wide-eyed” about the potential impact of these innovations…We feel a deep sense of responsibility to get these things right.”

When he said that, Pichai was referring to AI in particular.

Some were quick to argue that Google did not make a clear statement on privacy simply because it cannot do so given the nature of its business model. I would, however, argue that it was not so much a case of not throwing stones from a glass house as of not wanting to come out with an empty promise, or a statement that would have seemed defensive when Google has not breached our trust. I think this point is important. Even when talking about what was introduced at Google I/O, many spoke of the price users pay for these new smart apps and services: personal data. While this is true, it is also true that you have a choice not to use those services or not to share that data. That is quite different from Facebook, where people were aware their information would be used to target advertising but not that they were being tracked outside of Facebook. I also believe consumers see a greater ROI from sharing data with Google than they do with Facebook, which raises their level of tolerance for what they are willing to let Google access.

Where Google felt comfortable going after Facebook was fake news and their revamped Google News service that will double down on using AI to deliver better quality, more targeted stories.

In my mind, asking Google to make a statement on privacy was the same as asking Google to change its business model. This is not going to happen any time soon, and certainly not on a developer conference stage. What I would like to hear more from Google, however, is what data is seen, stored, and retained across all the interactions we have. GDPR will force their hand a little in doing that, but a more proactive approach might score them brownie points and keep them from being lumped in with Facebook in the time-out corner.

Consumer vs. Business

Both companies had several announcements over the course of their opening keynotes that will impact business and consumers alike. Both had “feel good” stories that showed how wonderful tech can be when it positively affects people’s lives. Yet the strong consumer focus we saw at Google I/O had the audience cheering, wowing, and clapping with a level of engagement that Microsoft did not see. This is not about how good the announcements were but rather how directly the audience could relate to them.

Given the respective core focus of the two companies, it is to be expected that Microsoft focused more on business and Google on consumers. That said, I have always argued that talking about users is not just good for the soul; it is good for business, especially when you are talking to developers. Business-class apps are important and, in some cases, can be more remunerative for developers than consumer apps. Most business apps are, however, developed in-house by enterprises or by third parties that grant access to them through their services.

For Microsoft, being able to talk about consumers is vital to attracting developers, and there was not much mention of that on stage despite the bigger interest in the user within the enterprise. Microsoft did announce an increase in revenue share, with developers now keeping 95% of the revenue compared to the 85% they get from Apple and Google. Convincing developers that Microsoft is still committed to consumers, though, might go a long way toward getting them on board in the first place.


We have a little longer to wait to hear what Apple will offer developers, but I have a sneaking feeling that privacy, AI, and ML will all be core to their strategy, while I am less clear about the extent to which cloud will play a role.

Workplace Transformation

There is a theme, or industry buzzword, that doesn’t get a great deal of attention: workplace transformation. It is how enterprise-focused companies describe their efforts to bring the workplace into the modern age. Some broad themes came before this one, like the consumerization of the enterprise and bring-your-own-device/apps/services, and all were an admission that the balance of power has shifted from IT manager to employee. People were exposed to a plethora of great technology in their personal lives, and the desire to use those devices, apps, and services to help them in their professional lives became too influential for companies to ignore.

Having been a professional industry analyst since 2000, I have found this paradigm change fascinating to watch. For a good part of my first ten years analyzing the tech industry, there was a clear divide between how consumer companies worked and thought and how enterprise companies worked and thought. Doing analysis and working at a firm that specializes in strategy, it was easy to see how the different end-market focuses impacted our clients’ strategic agendas. The whole world has changed, and there is no longer a separation of consumer and commercial strategy. The consumer mindset in the workplace has flipped the model, and companies are now fully embracing this trend with workplace transformation to try to bring the workplace into the modern era.

Microsoft’s Focus
Without using too many buzzwords, with the exception of the modern workplace, Microsoft is laser-focused on workplace transformation, and they fully understand the need to build consumer-grade product experiences for the workplace. This is why many of Microsoft’s demos around things like mixed reality and software and workflow collaboration had a consumer product feel even though the use cases were all workplace-specific. Understanding workplace transformation also means taking into account the multi-device world we all now live in. There is no single device for productivity anymore, and consumers want to use the best tool, and all the tools, to get the job done.

Microsoft is not alone in this focus. Every company that sells into the commercial market has turned its eyes to workplace transformation, and a big part of that transformation is the heterogeneous computing environment the world now finds itself in. There is no single solution a company can offer its employees. Just as in consumer markets, choice and options remain critical. Not every employee wants to use the same thing to be best equipped to get their job done. Yet that is how it worked for many years: employees were given a Windows PC, they used Microsoft Office, and that was about it. The world has moved on, and it is much bigger.

IT departments are now working to provide a suite of options, which are all secured and approved, and are essentially becoming master curators for their company. It is their job to make sure employees have quality choices for the hardware, software, and services they can use to get their job done but they also need to make sure those products pass their company standards and security protocols.

The Challenge of a Consumer Focused Enterprise Approach
As many of you probably know, most of my analyst work is in consumer markets. But the trends I’m outlining bring me into situations where enterprise-focused clients are interested in better understanding the consumer mindset. The more I’ve watched companies like Microsoft, Google, Amazon, Cisco, IBM, etc., start to think about a consumer-centric approach to their strategy, the more I realize how difficult it is. The reality is, this strategy is much more difficult than a pure consumer play because the customer is both the commercial business (and IT manager/buyer) and the consumer.

Pure-play consumer companies can get away with being much more liberal with things like security and privacy, and they have more options for their business model. There is less red tape and less protocol to deal with. I would argue that a pure-play consumer company has it easier than a commercially focused business that has to serve both the enterprise customer and the consumer who ends up using the hardware, software, or service in the end. Which is a statement unto itself, given how hard the pure consumer market is to start with.

This is one reason Apple has been so interesting to watch, particularly as they continue to grow in the enterprise. Granted, they are growing with hardware and an OS platform, not necessarily first-party apps or services, but Apple has always stayed true to its consumer focus, and that focus is what helped them gain traction in the enterprise. Not many companies that dominate the consumer landscape like Apple, in fact I can’t think of any, can make inroads into the commercial market. Google will try, and they may succeed, but they still have a long, hard road ahead.

The same may be true of historically enterprise-focused companies as well. They may have a hard time truly understanding the consumer, and a hard time succeeding, as the workplace transforms and becomes more modern. Microsoft has as good a chance as any, and while many of the announcements from Build were not the sexiest from a consumer standpoint, Microsoft is making important strides in this direction.

Bringing Vision to the Edge

In case you haven’t been paying attention recently, one of the hottest topics in the tech industry is the concept of “the edge.” Virtually every major tech company has been talking about their strategic approach to this new method of computing recently, and the announcements are bound to keep coming.

Standalone PCs are an example of edge devices, but when most companies talk about the edge, they really mean things like drones, smart home gadgets, sensor-based industrial devices, autonomous cars, and so on. While these devices can—and often do—connect to the Internet, they can also compute independently, courtesy of built-in x86 processors or Arm-based microcontrollers and embedded software.

At Microsoft’s Build developer conference in Seattle, the company had a particularly strong focus on what they term the “intelligent edge” and touted it as one of the next major revolutions in computing. The “intelligent” moniker stems from the use of AI, machine learning and other advanced computing concepts in these edge devices, bringing a whole new level of capability—and attention—to them.

As appealing as the concept of intelligent edge computing may be, however, there have been some real challenges in enabling the potential of these new devices. The biggest issue is around creating software to run on the often novel architectures used inside of them. To that end, Microsoft has been making a great deal of effort both on the platform side, as well as the application development side.

Because there are a wide range of different devices with various levels of sophistication and diverse amounts of hardware, Microsoft now has several platform choices, including Windows 10 IoT Core, Windows 10 IoT Enterprise, and Azure IoT Edge Runtime, which the company is now open-sourcing. The Azure Runtime offering, in particular, is ideally suited for the enormous array of smart products appearing everywhere from our homes to hospitals to factories, farms and more.

Technically, the Runtime is a collection of programs that runs on top of the various types of Linux, Windows, or other embedded operating systems found in these smart devices, hence the name. What it does is create a common platform for which applications can be created. Without the consistent capabilities enabled by the Azure IoT Edge Runtime, developers would have to create new or different versions of their applications for each potential hardware/software combination—an impossible task.

In addition to this common base, Microsoft’s Azure IoT Edge Runtime leverages the same cloud-based computing platform offered by “regular” Azure that the company uses in its cloud computing offerings. This is very important for developers because it means they can use the same development tools and methodologies that they use for cloud-based programming models, including containers, to create software for these intelligent edge devices. This, in turn, makes it much easier for companies to build edge applications without needing people with entirely new types of programming skills.
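
To make that concrete, here is a minimal sketch of what an edge “module” might look like, assuming the azure-iot-device Python SDK; the output name and message payload are invented for illustration. On a device, code like this runs inside a container that the Edge runtime manages.

```python
# Sketch of an Azure IoT Edge module (assumes the azure-iot-device SDK).
from azure.iot.device import IoTHubModuleClient, Message

def main():
    # The Edge runtime injects connection details via environment variables,
    # so the module itself stays hardware-agnostic.
    client = IoTHubModuleClient.create_from_edge_environment()
    client.connect()

    # Forward a reading to a named output; routing to other modules or to the
    # cloud is declared in the deployment manifest, not hard-coded here.
    reading = Message('{"temperature": 21.5}')  # illustrative payload
    client.send_message_to_output(reading, "sensorOutput")

    client.disconnect()

if __name__ == "__main__":
    main()
```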

Microsoft used their Azure platform approach to build and release a set of AI-based computer vision processing tools called Custom Vision that can leverage cameras and Qualcomm-based image processing silicon on edge devices. Part of what Microsoft calls Azure Cognitive Services, Custom Vision essentially brings “eyes” to the edge, letting companies build applications that react to visual information that these smart edge devices see—without needing a connection to the cloud.

A great example of this came from Microsoft’s new partnership with DJI, the world’s largest drone maker. Using a DJI drone running Custom Vision on the Azure IoT Edge Runtime, the two companies showed how you could create an app capable of seeing and visually annotating (in real time) anomalies or other faults on a pipe being inspected by a drone. Microsoft is also working with DJI to build a Software Development Kit (SDK) for Windows 10 devices that allows for the creation of flight control and real-time vision or sensor-based applications for DJI drones.

With Qualcomm, Microsoft announced a vision AI developer kit that uses Azure Machine Learning services to create vision-aware applications for edge devices, accelerated by Qualcomm silicon via the Qualcomm Vision Intelligence Platform and Qualcomm AI Engine software tools. In practical terms, this means that developers could create applications such as smart home visual doorbells or security cameras equipped with certain Qualcomm chips that recognize specific objects, or individuals, and react appropriately—such as providing security notifications only if a person isn’t “known” to the household, and so on.
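
As a hedged illustration of the local-inference idea (not Microsoft’s or Qualcomm’s actual code): Custom Vision models can be exported in portable formats such as ONNX and run on-device, avoiding a cloud round-trip for each frame. The model file name, input size, and labels below are assumptions.

```python
# Running an exported Custom Vision model locally with onnxruntime.
import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession("customvision_export.onnx")  # assumed export
input_name = session.get_inputs()[0].name

# Preprocess a camera frame to the network's expected shape
# (assumed here: 224x224 RGB, NCHW layout).
frame = Image.open("frame.jpg").resize((224, 224))
tensor = np.asarray(frame, dtype=np.float32).transpose(2, 0, 1)[np.newaxis, ...]

scores = session.run(None, {input_name: tensor})[0]
labels = ["person", "package", "vehicle"]  # illustrative label set
print(labels[int(np.argmax(scores))])
```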

What’s particularly intriguing about this example is that it’s one of the first of what will likely be many AI edge applications that can take advantage of the unique characteristics of certain types of silicon. Look for many more accelerated AI edge applications to come.

The potential opportunities with intelligent edge computing are already enormous, and that’s why there is now tremendous excitement within the tech industry about how these technologies can be applied. There are still several hurdles ahead and a fair amount of work involved, but real progress is being made. As Microsoft demonstrated, we’re starting to see some critical strategic steps toward simplifying what can be a very complex topic. As a result, it shouldn’t be long before a much broader range of developers can start leveraging capabilities such as vision on the edge. When they do, the kinds of applications and services we can enjoy are going to be amazing.

SNAP’s Struggle

Despite a fair amount of negative sentiment toward SNAP, I do believe they play an important role in the industry and may play a bigger one in the future. SNAP’s positioning of itself as a camera company is one of the most interesting focal points for me as I study them. Despite their overall challenges, I know one thing for certain that should not be underestimated: SNAP has an absolute lock on US teens and most US millennials under 30. For this reason, SNAP’s growth story being tied to Android is a pipe dream.

Android Won’t Help SNAP
I’ve always wondered how much SNAP’s executive management truly believes in the Android story they keep telling Wall St. SNAP CEO Evan Spiegel seemed confident that their growth story was held back by a poor implementation of the SNAP Android app. Now that the Android app experience has been improved, on par with the iOS experience, there is still no indication of a user growth story. The reason is simple: SNAP’s target audience is on iOS.

As I stated above, the vast majority of SNAP’s userbase is in the US and in the generation under 30, which is largely on iOS. SNAP has a lock on this demographic, and interestingly, I hear over and over again from parents with kids entering their teen years how their kids are pressuring them for a Snapchat account. So it seems the upcoming Gen Zers will all likely contribute to SNAP’s userbase. In fact, I could make a case at this point that Gen Z will be more invested in Snapchat than millennials are as a whole. Notably, this demographic also longs for iPhones.

In short, I don’t believe the Android story will help SNAP. Their target market simply isn’t and doesn’t want to be on Android.

SNAP Ads Upside
When I evaluate these social platforms, I pay close attention to the ads, and specifically to the types of brands and how those brands are using the platform. It has always intrigued me that SNAP has been able to attract more brand-building ads, like what you see on TV, than Facebook or Instagram, where ads are direct-to-consumer and aimed at driving conversion. More often than not, I’ve never heard of the brand or product in an ad I see on Facebook, which psychologically is a hurdle because I am not sure I can trust the company or the product to be of any quality. The result is that if I’m interested in the product, I have to spend a lot of time researching it to see if I can trust it.

SNAP not only shows more brand-centric ads from known companies looking to drive their brand strategy, but those ads are also much more entertaining. The quality and format of SNAP’s ads are dramatically more tolerable than those of Facebook. Looking at the ads specifically, I compiled a list of the ads I saw on Snapchat and Facebook in a five-minute period of using each service.

Snapchat:
Bud Light
Exxon Mobil
State Farm
Gatorade
Hotel Artemis Movie Trailer
ZzzQuil
Twine
Dairy Queen

Facebook:
Headspace
Code42 (GDPR compliance company)
Bonobos (men’s clothing)
Portalus (shoe insert to help with pain in feet)
Thrive (a health pill supplement)
Solo Stove (small metal outdoor fire pit)
JetSmarter (charter jet service)
Most of the brands and products on Snapchat are known to me, and most of what I see on Facebook I have never seen or heard of before. Again, very different strategies in how companies use each platform. While it is true that Facebook and Google own the vast majority of digital ad spend, when it comes to social media advertising we still need to wait and see how companies gauge the return on their investment when advertising on Facebook and Snapchat. But the types of ads Snap is getting tend to have higher budgets, so they may not need as much volume to still get a good chunk of a brand’s budget.

The Spectacles Experiment
When news came out that SNAP was working on a new set of Spectacles, the overall sentiment was quite negative. Yes, the previous version did not sell well, and no, version two will not sell well either, but I consider these efforts important learning missions for SNAP. SNAP is not a hardware company, but they know hardware will play a role in their future one way or another. SNAP’s efforts to make hardware, in this case faceware, will yield valuable lessons they can build and iterate on. Their challenge will be more about fighting those who want to take the short view when Spectacles V2 bombs in sales. And the short view is what most have taken with SNAP.

I still have an optimistic take on SNAP, and even if they struggle with the business on their own, they make an attractive acquisition target for many, and I have no doubt the service will thrive in one way or another.

News You Might Have Missed: Week of May 4th, 2018

Given how much was announced during Facebook’s F8 developer conference, I thought I would share some thoughts on the major announcements.

Facebook Dating

Facebook announced it will soon be launching a dating feature. By clicking on a heart at the top right of your profile, you will be able to create a profile that’s only visible to non-friends who’ve also opted into the feature. Facebook will match you based on all its data, and messaging will happen in a dedicated inbox rather than Messenger.

  • As I pointed out in my article on Wednesday, this is a tough time for Facebook to be launching something as personal as a dating service, given the recent privacy questions raised by the Cambridge Analytica breach.
  • Theoretically, the data Facebook has is quite valuable in creating a set of matches based on information shared on users’ profiles, as long as, of course, people are being honest about what they post. I assume the algorithms used for matchmaking will also be based on actual information volunteered by the individual.
  • Internet dating is not necessarily something you want to share with family, friends, and coworkers, which is something Facebook thought of and addressed by not making the profile you create visible to your connections. That said, I do have to wonder whether using the data you share, coupled with your location and the events you attend, would dramatically increase the chance of having your profile shared with someone you know in real life.
  • Competition in this segment is quite strong, with apps that show a higher level of sophistication. Yet, if I look at the sweet spot of the Facebook demographic, I do wonder if the appeal will be mostly with older users who might feel comfortable with the platform precisely because they already use it and because it offers a simpler approach to dating.

Clear History

The company will soon launch a new privacy feature that will allow users to see and delete the data Facebook has collected from websites and apps that use its ads and analytics tools.

  • The exact availability of the feature was not announced, but Zuckerberg said it is under development and will launch soon. Clearly, this was done in response to some of the questions raised during the recent congressional hearings.
  • While actions should speak louder than words, the way Zuckerberg talked about the feature says a lot about his lack of belief that this is the right thing to do.
  • Zuckerberg pointed out that the feature will make Facebook feel less personal but that this is what consumers were asking for. I thought the wording was quite condescending and positioned the rollout of the feature as a response to a consumer request he did not believe to be right.
  • Warning about a less personal Facebook is, by all means, a fair point, but the proper focus of the statement should have been on the fact that giving consumers this option was the right thing to do.

Oculus Go and Oculus TV

The wire-free VR headset Oculus Go is now on sale at $199 for the version with 32GB of onboard storage and $249 for the 64GB variety.

  • Most of the reviews I have read so far can be summed up as decent but not ground-breaking. That said, I think the ease of use that comes from the combination of an easy setup and the lack of cable connections should not be underestimated.
  • The short battery life is somewhat disappointing, as this is one of the shortfalls of mobile solutions like Daydream and Galaxy Gear VR, which drain the phone quite rapidly and overheat it, making prolonged sessions difficult.
  • It will be interesting to see how quickly the price point drops, as this, for me, would be the best indication of how committed Facebook really is to driving adoption.
  • Resolution is similar to what can be found with Daydream and Galaxy Gear VR, coupled with three degrees of freedom rather than the six found in the upcoming Oculus Santa Cruz. Basically, you can look side to side and up and down, but you can’t move closer to or further away from an object; if you try, you’ll start to feel a little nauseous.
  • Together with the availability of Oculus Go, Facebook announced Oculus TV. The app puts a TV experience into your virtual environment with specially adapted on-screen controls, turning the virtual screen in the virtual room into a sort of TV streaming box experience. The service, available later this month, will have Facebook Watch, Red Bull TV, and Pluto TV directly integrated at launch.
  • Despite what Facebook said on stage about currently working with networks to bring native integrations of Netflix, Hulu, and Showtime, I am just not sure what the incentive is for these brands, as they already have standalone apps in the Oculus store.
  • VR in the consumer space remains a chicken and egg problem with content developers waiting for a bigger addressable market and potential buyers waiting for more content. Hopefully, Oculus Go will help broaden that base, although I remain concerned that while the price is more appealing the experience is still not good enough.

Artificial Intelligence

There were a few announcements around AI at F8:

On Wednesday, Facebook revealed that it has formed a special team and developed discrete software to ensure that its artificial intelligence systems make decisions as ethically as possible, without biases.

  • Facebook is not the first big company talking about the importance of ethics in AI, but of course this does not make it less important. As I will never tire of saying, the industry should prioritize ethics in AI because we cannot afford to train machines to be as biased as we are. Microsoft’s research organization established a Fairness, Accountability, Transparency, and Ethics group over a year ago, and last year Alphabet’s DeepMind AI group formed an ethics and society team.
  • A couple of weeks ago I talked about how AI is not going to be Facebook’s magic wand for solving hate speech on the platform, and this announcement is particularly relevant to the points I was making in my article.
  • Facebook also developed a piece of software called Fairness Flow, which is now integrated into Facebook’s FBLearner Flow. The software analyzes data, taking its format into consideration, and then produces a report summarizing it. Facebook does not plan to release Fairness Flow to the public under an open-source license, but the team might publish academic papers documenting its findings, Facebook said. (A hypothetical sketch of this kind of bias report follows below.)
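
Facebook has not published how Fairness Flow works internally, so the following is only a guess at the kind of per-group analysis such a report might contain. It is a minimal sketch in Python; the function name, the accuracy metric, and the 5% flagging threshold are all my assumptions, not Facebook’s design.

    # Hypothetical sketch of a per-group bias report, loosely in the spirit
    # of what Facebook describes Fairness Flow doing. The real tool's
    # internals are not public; names, metric, and threshold are assumptions.
    from collections import defaultdict

    def bias_report(examples):
        """examples: list of (group, predicted_label, true_label) tuples.
        Summarizes model accuracy per demographic group and flags big gaps."""
        correct = defaultdict(int)
        total = defaultdict(int)
        for group, predicted, truth in examples:
            total[group] += 1
            correct[group] += int(predicted == truth)
        accuracy = {g: correct[g] / total[g] for g in total}
        gap = max(accuracy.values()) - min(accuracy.values())
        return {
            "per_group_accuracy": accuracy,
            "max_gap": gap,
            "flagged": gap > 0.05,  # arbitrary threshold, illustration only
        }

    # Toy usage: group "B" is classified perfectly, group "A" half the time.
    print(bias_report([("A", 1, 1), ("A", 0, 1), ("B", 1, 1), ("B", 0, 0)]))

A real tool presumably looks at far more than raw accuracy gaps, but even a report this simple makes disparities visible before a model ships.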

Instagram’s hashtagged images help with machine learning

  • The idea makes a lot of sense. You take all the pictures tagged as #dog or #cat and use those to train the machine, rather than having to go through a set of pictures of animals to tag dogs and cats before showing them to a machine. Facebook used around 3.5 billion Instagram photos (from public accounts) and 17,000 hashtags to train a computer vision system that it says is the best it has created yet.
  • When humans tag pictures and then feed them to a machine, the process is said to be “fully supervised.” As you can imagine, this system is fairly error-proof but might be difficult to scale or might not be representative enough of the types of cats and dogs in the world.
  • The Instagram hashtagged pictures offer a “weakly supervised” process, in which machines can be trained on users’ own tags.
  • This “weakly supervised” training is much more “noisy,” which creates some challenges for accuracy. For instance, you might have tagged a cat as a dog as a joke, or you could have tagged a picture of you and your dog as “#dog” even though the dog is not visible in the shot. Facebook’s computer vision lead, Manohar Paluri, said, however, that the results they achieved are very encouraging. Measured by one benchmark, the system, trained on those billions of Instagram pics, was on average about 85% accurate, he said. (A minimal code sketch of this approach follows this list.)
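
To make the “weakly supervised” idea concrete, here is a minimal training sketch in Python using PyTorch. The dataset, the linear “model,” and the three-hashtag vocabulary are toy stand-ins of my own; Facebook has not released its training code. The key point is in hashtags_to_target: the labels come straight from users’ hashtags, with no human verification, which is exactly what makes them “weak.”

    # Minimal sketch of "weakly supervised" training on hashtag labels.
    # Illustrative only: dataset, model, and hashtag vocabulary are toy
    # stand-ins, not Facebook's actual pipeline.
    import torch
    import torch.nn as nn

    HASHTAGS = ["#dog", "#cat", "#sunset"]  # toy hashtag vocabulary

    def hashtags_to_target(tags):
        # Turn a photo's user-supplied hashtags into a multi-hot label
        # vector. No human checks these labels, which is what makes this
        # "weakly" rather than "fully" supervised (and why a joke #dog
        # tag on a cat photo simply becomes label noise).
        return torch.tensor([1.0 if t in tags else 0.0 for t in HASHTAGS])

    # Stand-in for (image_features, hashtags) pairs from public posts.
    dataset = [
        (torch.randn(512), ["#dog"]),
        (torch.randn(512), ["#cat", "#sunset"]),
        (torch.randn(512), ["#dog", "#cat"]),
    ]

    model = nn.Linear(512, len(HASHTAGS))  # toy stand-in for a vision model
    loss_fn = nn.BCEWithLogitsLoss()       # multi-label: hashtags co-occur
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    for epoch in range(5):
        for features, tags in dataset:
            optimizer.zero_grad()
            loss = loss_fn(model(features), hashtags_to_target(tags))
            loss.backward()
            optimizer.step()

Scaled up to 3.5 billion photos, with a real convolutional network in place of the linear layer, a loop of this shape is essentially how noisy hashtags substitute for expensive human labeling.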

Apple building a VR headset is good news for Facebook, Qualcomm

Though there have been rumblings of its existence for years, a more substantial report on Apple’s development of a virtual reality product was released last week. The story indicates that Apple is targeting a 2020 release for the headset and that it will use in-house developed chip, screen, and wireless technologies. True or not, the anticipation of the Cupertino giant’s entry into the VR market will spark development acceleration by competitors and drive consumer interest in the current state of the technology.

With an expected 22 million VR headsets selling in 2018, increasing to 120 million units by 2022 according to CCS Insight, there is a $10B market up for grabs.

Despite some of the language in the CNET story, what Apple is aiming to do isn’t a revolutionary step ahead of the current offerings and roadmaps from other VR technology providers. Apple has a reputation, however, for waiting for the “right time” to introduce new product lines, and it will likely put its specific touch of refinement and focus on a segment that is viewed as moving in too many directions.

The source for this Apple information, which appears to be strong, believes that Apple is building a wireless configuration for a combined VR/AR headset. Virtual reality and augmented reality are related but differ in that AR overlays information on the real world while VR completely blocks out your surroundings.

Unlike currently shipping detached VR headsets like the Oculus Go, Apple appears to be utilizing an external box that will provide the majority of the computation necessary. In theory, this allows the company to provide better visuals, longer battery life, and lower costs than if it had decided to go with a fully integrated solution.

Apple will have hurdles to clear. Wireless data transfer from an external unit to the headset isn’t as easy as it sounds, as the bandwidth and low latency required put specific constraints on the technology Apple can use. Taking advantage of 60GHz WiGig or millimeter-wave frequencies (the same used for some versions of 5G cellular) means that objects like glass and even the human body can impact performance dramatically. Hiccups in the data stream from the external box to the VR headset will result in nausea and general discomfort.

[Photo: The Oculus Go from Facebook]

One drawback to using an external box for processing is that it limits the portability of the device. The new Oculus Go can travel with the consumer from work, to home, to the plane. A wirelessly “tethered” system is definitely an upgrade over the wired systems that exist on PCs today, but it doesn’t change the usage model. Apple could decide to go both ways, allowing the headset itself to handle lower computational tasks like video playback and basic games but requiring the external box for more intense applications and productivity.

Leaked specs from the CNET story included the use of 8K displays for EACH EYE of the headset. This would be a tough undertaking for a few reasons. First, 8K is incredibly nascent, only showing up at CES this year as a technology demo. In 2020, these panels will still be prohibitively expensive. There are hints that Apple will be building its own displays in the near future, and this could be the first target rather than the next generation of iPhone.

I would expect the VR/AR product to use Apple’s in-house developed silicon. The company has clearly shown that it prefers to develop toward and prioritize specific architectural directions on its roadmap. Knowing that Apple also plans to replace Intel processors in its notebooks in the same time frame, there will be overlap in the chips being used. The source story believes that Apple is waiting for 5nm process technology for this jump, which is at least two process generations ahead. Getting to mass production for chips of that size by 2020 is another hurdle.

Of course, Apple is definitely not the first player in the field. Other major players have been developing virtual reality hardware, algorithms, and software systems, creating the capability for software to evolve. Despite the attention that Apple is getting (and will get as more rumors surface), there is a lot to gain for the early movers.

Oculus and HTC brought the first mainstream VR headsets to the world, but they required powerful PCs and were hardwired to the system. Snap-in designs like the Samsung Gear VR and Google Daydream let users double up the use of a smartphone with head-mounted units. Though the headset itself is reasonably priced, the phones that can power them are often $600 or higher, and they limit the battery life and capabilities.

Qualcomm started developing chips and reference designs for standalone VR headsets back in 2016, and they utilize a lot of the same technology found in modern flagship smartphones. Just this week, Oculus, now owned by Facebook, launched the Oculus Go, the first mainstream, high-quality, sub-$200 VR device that does not require a PC. Early reviews have been very positive, and having used one myself for a few days, I support the idea that this is the way VR should be utilized going forward.

The current players in the VR market will see an uplift in interest thanks to the strong rumor of Apple getting into the fold within two years. To some degree, it validates the VR/AR markets. Even though there is a sizeable audience for virtual reality products today, growth has been much slower than expected. Solid information that Apple will be entering the field will force these companies to increase investment, hoping to solidify a leadership position before Apple gets involved. It will also drive consumers to take notice of VR and AR, moving a subset of the audience to buy in early once interest is piqued.

Why Regulators Should Approve the T-Mobile/Sprint Deal

On the heels of the T-Mobile/Sprint merger announcement (Round 3), the market has been pessimistic, with consensus on the Street that the chances of approval stand at less than 50%. I disagree. If T-Mobile and Sprint play their cards right, the chances of getting the deal through this time ‘round are much better. Here are some of the main points I believe regulators should consider.

  1. The Market Has Changed. These two companies first broached a deal nearly 5 years ago, not long after SoftBank had acquired Sprint. Much has changed since then. There have been some serious vertical market integrations involving other operators (Comcast-NBCUniversal, Verizon-AOL-Yahoo, and the pending AT&T-Time Warner). Cable now seems serious about being in mobile, and DISH sits on a treasure trove of spectrum. So even though the number of national facilities-based wireless providers will go from four to three, we’re likely to have 1-3 additional major MVNO/resale players in the future: cable, some incarnation of DISH, and possibly some Internet giant (Google, Facebook, Apple, Amazon…take your pick).
  2. Why the Focus on Wireless When It’s Broadband That Needs More Competition? I’ve always been curious about the FCC’s maniacal focus on the level of competition in wireless, even though the U.S. has less broadband competition than nearly any other OECD country. Only 50% of U.S. households have a choice of more than one broadband provider. The new T-Mobile, with far greater spectrum breadth and capacity, would be in a much better position to offer a competitive broadband service via wireless, in some contexts.
  3. What Would Have Become of Sprint? I’m surprised there hasn’t been more focus on this. Sprint has been losing share, has huge debt, and still lags on network coverage and quality. Even the wunderkind Masayoshi Son has not been able to turn the company around. Without a merger, what are Sprint’s real prospects? Wouldn’t it be better for network investment, the market, and consumers to have three strong competitors, rather than two giants, a sort-of-strong #3, and a weak #4? Can regulators point to any other countries where there are four healthy and profitable national wireless carriers?
  4. This Is Good for 5G. The challenge of having four strong national competitors is even greater when one considers 5G, given the level of investment that will be required. There’s a very real risk that T-Mobile, and especially Sprint, would fall further behind as 5G gets built. And it’s not only about having a war chest of dollars and spectrum. With the number of small cells that will be required for 5G, especially in urban areas, the zoning/siting/permissioning process alone, spread across four operators, would be a nightmare.
  5. This Is Partially the Fault of Our Existing Spectrum Policy. It’s ironic to me that the feds might try to block this deal while, at the same time, it has been their objective for the past 20 years to maximize the $$ intake from spectrum auctions. T-Mobile, Sprint, and other potential upstarts/new competitors would have a far better chance of building a good network and successfully competing in wireless if they didn’t have to spend tens of billions of dollars merely to acquire the spectrum. In several of the more recent spectrum auctions, well-heeled folks from Comcast to Google have bowed out because, well, it got too expensive for them. Tom Wheeler, the former FCC Chairman now at the Brookings Institution, has been pushing the idea of spectrum sharing for a number of years. He wrote this week that perhaps the best way to compete in 5G would be for the carriers to build a shared 5G network. This is the sort of creativity we need, rather than the government’s current approach of talking out of both sides of its mouth.
  6. Perhaps Some Creative Concessions Would Be in Order. In the wake of closing arguments at the AT&T-Time Warner trial, it has been suggested that one possible outcome might be that AT&T wins approval to move forward but must make some concessions, possibly divesting the Turner assets or providing some assurance against discriminatory pricing. In previous wireless mergers, concessions have mainly revolved around divestment of spectrum. What about something more creative here? For example, perhaps the new T-Mobile has to agree to offer wholesale rates on some reasonable basis, thereby encouraging a more vibrant resale market. This has been the approach in some other countries, especially in some broadband markets in Western Europe, where it has resulted in more competition and lower prices. Naturally, the new T-Mobile would argue that the same rules should apply to Verizon and AT&T, which could be a condition attached to this deal and to others likely in the future.

There are valid arguments on both sides of this one, from a regulatory perspective. But given important changes in the market’s structure, plus the road ahead from a strategic and financial perspective, the benefits of this proposed combination outweigh the potential downsides.