Making AI Real

Back in the 1980s and ‘90s, General Electric (GE) ran a very successful ad campaign with the tagline “We Bring Good Things to Life.” Fast forward to today, and there’s a whole new range of companies, many with roots in semiconductors, whose range of technologies is now bringing several good tech ideas—including AI—to life.

Chief among them is Nvidia, a company that began life creating graphics chips for PCs but has evolved into a “systems” company that offers technology solutions across an increasingly broad range of industries. At the company’s GPU Technology Conference (GTC) last week, they demonstrated how GPUs are now powering efforts in autonomous cars, medical imaging, robotics, and most importantly, a subsegment of Artificial Intelligence called Deep Learning.

Of course, it seems like everybody in the tech industry is now talking about AI, but to Nvidia’s credit, they’re starting to make some of these applications real. Part of the reason is that the company has been at it for a long time. As luck would have it, some of the early, and obvious, applications for AI and deep learning centered around computer vision and other graphically intensive applications, which happened to be a good fit for Nvidia’s GPUs.

But it’s taken a lot more than luck to evolve the company’s efforts into the data center, cloud computing, big data analytics, edge computing, and the other applications they’re enabling today. A focused long-term vision from CEO Jensen Huang, solid execution of that strategy, extensive R&D investments, and a big focus on software have all allowed Nvidia to reach a point where they are now driving the agenda for real-world AI applications in many different fields.

Those advancements were on full display at GTC, including some that, ironically, have applications in the company’s heritage of computer graphics. In fact, some of these developments finally brought to life a concept for which computer graphics geeks have been pining for decades: real-time ray tracing. The computationally-intensive technology behind ray tracing essentially traces rays of light that bounce off objects in a scene, enabling hyper-realistic computer-generated graphics, complete with detailed reflections and other visual cues that make an image look “real”. The company’s new RTX technology leverages a combination of their most advanced Volta GPUs, a new high-speed NVLink interconnect between GPUs, and an AI-powered software technology called OptiX that “denoises” images and allows very detailed ray-traced graphics to be created in real-time on high-powered workstations.
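
To make the idea concrete, here is a deliberately tiny, illustrative sketch of the core ray-tracing loop: shoot a ray for each pixel, test it against a single sphere, and shade the hit point by the light direction. The scene, resolution, and shading are invented for illustration; Nvidia’s RTX pipeline layers hardware acceleration, far more sophisticated light transport, and the OptiX denoiser on top of this basic principle.

```python
# Toy ray tracer: one sphere, one light, Lambertian shading.
# Purely illustrative; not Nvidia's RTX/OptiX pipeline.
import numpy as np

WIDTH, HEIGHT = 64, 48
sphere_center = np.array([0.0, 0.0, -3.0])   # hypothetical scene: a single sphere
sphere_radius = 1.0
light_dir = np.array([1.0, 1.0, -1.0])
light_dir /= np.linalg.norm(light_dir)

def hit_sphere(origin, direction):
    """Return the distance along the ray to the sphere, or None if it misses."""
    oc = origin - sphere_center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - sphere_radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

image = np.zeros((HEIGHT, WIDTH))
for y in range(HEIGHT):
    for x in range(WIDTH):
        # Map each pixel to a ray leaving a camera at the origin.
        u = (x / WIDTH) * 2.0 - 1.0
        v = 1.0 - (y / HEIGHT) * 2.0
        direction = np.array([u, v, -1.0])
        direction /= np.linalg.norm(direction)
        t = hit_sphere(np.zeros(3), direction)
        if t is not None:
            hit_point = t * direction
            normal = (hit_point - sphere_center) / sphere_radius
            # Lambertian shading: brightness follows the angle to the light.
            image[y, x] = max(np.dot(normal, light_dir), 0.0)

print(image.max(), image.mean())  # crude sanity check of the rendered shading
```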

On top of this, Nvidia announced a number of partnerships with companies, applications, and open standards that have a strong presence in the datacenter for AI inferencing applications, including Google’s TensorFlow, Docker, Kubernetes and others. For several years, Nvidia has offered tools and capabilities that were well-suited to the initial training portion of building neural networks and other tools used in AI applications. At this year’s GTC, however, the company focused on the inferencing half of the equation, with announcements that ranged from a new version (4.0) of a software tool called TensorRT, to optimizations for the Kaldi speech recognition framework, to new partnerships with Microsoft for WindowsML, a machine learning platform for running pre-trained models designed to do inferencing in the latest version of Windows 10.

The TensorRT advancements are particularly important because that tool is intended to optimize the ability of data centers to run inferencing workloads, such as speech recognition for smart speakers and object recognition in real-time video streams, on GPU-equipped servers. These are the kinds of capabilities that real-world AI-powered devices have begun to offer, so improving their efficiency should have a big influence on their effectiveness for everyday consumers. Data center-driven inferencing is a very competitive market right now, however, because Intel and others have had some success here (such as Intel’s recent efforts with Microsoft to use FPGA chips to enable more contextual and intelligent Bing searches). Nevertheless, it’s a big enough market that there are likely to be strong opportunities for Nvidia, Intel and other up-and-coming competitors.

For automotive, Nvidia launched its Drive Constellation virtual reality-based driving simulation package, which uses AI to both create realistic driving scenarios and then react to them on a separate machine running the company’s autonomous driving software. This “hardware-in-the-loop” based methodology is an important step for testing purposes. It allows these systems to both log significantly more miles in a safe, simulated fashion and to test more corner case or dangerous situations, which would be significantly more challenging or even impossible to test with real-world cars. Given the recent Uber and Tesla autonomous vehicle-related accidents, this simulated test scenario is likely to take on even more importance (and urgency).

Nvidia also announced an arrangement with Arm to license its Nvidia Deep Learning Accelerator (NVDLA) architecture into Arm’s AI-specific Trillium platform for machine learning. This allows Nvidia’s inferencing capabilities to be integrated into what are expected to be billions of Arm core-based chips built into IoT (Internet of Things) devices that live on the edge of computing networks. In effect, this allows the extension of AI inferencing to even more devices.

Finally, one of the more impressive new applications of AI that Nvidia showed at GTC actually ties it back with GE. Several months back, the healthcare division of GE announced a partnership with Nvidia to expand the use of AI in its medical devices business. While some of the details of that relationship remain unknown, at GTC, Nvidia did demonstrate how its Project Clara medical imaging supercomputer could use AI not only on newer, more capable medical imaging devices, but even with images made from older devices to improve the legibility, and therefore, medical value of things like MRIs, ultrasounds, and much more. Though no specifics were announced between the two companies, it’s not hard to imagine that Nvidia will soon be helping GE to, once again, bring good things to life.

The promise of artificial intelligence, machine learning and deep learning goes back decades, but it’s only in the last few years and even, really, the last few months that we’re starting to see it come to life. There’s still a tremendous amount of work to be done by companies like Nvidia and many others, but events like GTC help to demonstrate that the promise of AI is finally starting to become real.

Smart Home Competition Fuels Innovation and Creativity

Although still in its infancy, investment and engagement in artificial intelligence (AI) research continues to grow. A recent Consumer Technology Association (CTA) report citing International Data Corporation (IDC) estimates found global spending on AI was nearly 60 percent higher in 2017 than in 2016 and is projected to grow to $57 billion by 2021. And almost half of large U.S. companies plan to hire a chief AI officer in the next year to help incorporate AI solutions into operations.

As exciting as these changes are, however, one of the most exciting examples of AI right now hits a little closer to home – in fact for many of us, it’s in our living rooms.

Digital assistants are one of the hottest trends in AI, in large part thanks to the vast array of functions they offer consumers. These helpful, voice-activated devices can answer questions, stream music and manage your calendar. What’s more, when connected to compatible home systems, they can turn off the lights, lock the doors and start your appliances. Budding support for digital assistants across the smart home ecosystem shifts the entire landscape of control from a web of apps to the simplicity of the human voice.

At CES® 2018, we saw many different digital assistants in action, from well-known players such as Google Assistant, Apple Siri and Amazon Alexa to other disruptive options such as Samsung’s Bixby, Microsoft’s Cortana and Baidu’s Raven H. Competition has spurred creativity and boosted innovation, as more and more products that connect with these virtual helpers emerge on the scene.

Competition in the smart speaker category, for example, has prompted greater differentiation among these devices as brands deploy unique features to attract consumers. The strategy is expected to pay off. CTA research projects U.S. smart speaker sales will increase by 60 percent in 2018 to more than 43.6 million units. Almost overnight, smart speakers powered by digital assistants have become the go-to smart home hub, a key component of the Internet of Things (IoT) and the catalyst driving smart home technology revenue growth of 34 percent to a predicted $4.5 billion this year.

The smart speaker category is also boosting other categories of smart home innovations. The rise of smart home technology – expected to reach 40.8 million units in the U.S. in 2018, according to CTA research – creates a new space for digital innovators to connect more devices, systems and appliances in more useful ways. This, in turn, is redefining the boundaries of the tech industry. Competition has fueled creativity, and creativity has expanded convenience – and Americans love it.

Fifteen years ago, we didn’t necessarily think of kitchen and bath manufacturers such as Kohler or Whirlpool as tech companies. Today, these companies are finding ways to integrate their products into the IoT, such as Whirlpool’s “Scan-to-Cook” oven and Kohler’s Verdera smart mirror. And Eureka Park™ – the area of the CES show floor dedicated to startups – hosted dozens of smart home innovators from around the world in January, launching their products for the first time to a global audience. Part of what’s so amazing about these technologies is they work together across platforms to create more efficient, more economical, more livable homes.

For example, South Carolina-based Heatworks developed a non-electric system for heating water, along with an app that lets system users control water temperature and shower length from their phones. New York-based Solo Technology Holdings has created the world’s first smart safe that sends you mobile alerts when it opens. Lancey Energy Storage, out of Grenoble, France, introduced the first smart electric heater, which saves more money and energy than traditional space heaters. And Israeli startup Lishtot showcased a keychain-sized device that tests drinking water for impurities and shares that data wirelessly via Bluetooth. These are just a few of the innovations made possible by IoT.

The IoT revolution has leveraged what I like to call the four C’s: connectivity, convenience, control and choice. Just as we experience the physical world with our five senses, we experience the digital world through the four C’s – they’ve become organic to our modern daily life, yet they are subtle enough that we often take them for granted. Consumers expect the four C’s to be ubiquitous. They are the default settings that anchor our digital experiences, which now increasingly include our homes and our appliances.

The smart home phenomenon at CES represents what the tech industry does so well: companies big and small leading the IoT charge, crafting unique innovations that can be implemented across ecosystems. And everyone – from the largest multinational companies to the smallest, most streamlined startups – has an opportunity to redefine what it means to be at home.

It’s a redefinition that consumers embrace. Over the course of this year, I have no doubt that we’ll see the efficiencies and improvements technology delivers expanding beyond the home, into our workplaces and our schools. This remarkable evolution – driven by visionary innovation and fierce competition – is proof that technology is changing our lives for the better, saving us time and money, solving problems large and small and raising the standard of living for all.

Podcast: Apple Education, NVidia Tech Conference, Microsoft Reorg, Facebook Memo

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing the news from Apple’s Education event, analyzing the NVidia GPU Technology Conference, chatting about the recent Microsoft reorganization, and debating the impact of the recent Facebook memo release.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Facebook’s Oculus Go Looks Great, But Will People Buy It?

I recently had the opportunity to test Facebook’s upcoming Oculus Go virtual reality headset. Announced last year and due to ship later this year, the device made waves because Facebook plans to sell the standalone headset, which works without a PC or smartphone, for $200. My hands-on testing showed a remarkably polished device that yields a very immersive experience. But Facebook has yet to articulate just how it plans to market and sell Oculus Go, so its success is far from assured.

High-quality Optics and Sound
When Facebook first announced Oculus Go, and its price point, many presumed that the device would drive a VR experience more comparable to today’s screenless viewers (such as Samsung’s Gear VR) than a high-end tethered headset such as Facebook’s own Oculus Rift. While it’s true that the hardware that constitutes Oculus Go may not measure up spec-for-spec to high-end rigs connected to top-shelf PCs, the device itself is a testament to what’s possible when one vendor produces a product that tightly integrates the hardware and the software. It’s clear that Facebook and hardware partner Xiaomi have done a masterful job of tuning the software to utilize the hardware’s capabilities.

I spent about 20 minutes in the headset and was amazed at how easy it was to wear, how great the optics looked, the high quality of the integrated audio, and the functionality of the hand-held controller. I have tested most of the VR hardware out there, and this was among the most immersive experiences I’ve had in VR. That’s an incredible statement when you consider the cost of the hardware, and the fact that it is inherently limited to three-degrees-of-freedom motion-tracking capabilities (high-end rigs offer six degrees).

Facebook has slowly been rolling out details about the hardware inside Oculus Go, including details about the next-generation lenses that significantly reduce the screen-door effect that mars most of today’s VR experiences. The company has also talked about some of the tricks it employs to drive a higher-quality optical experience while hammering the graphics subsystem less, leading to better battery life and less comfort-destroying heat.

One of my key takeaways from the demonstration was that with the Oculus Go, Facebook had created an immensely comfortable VR headset, and I can’t overstate the importance of that. Today, even the most die-hard VR fans must contend with the fact that if they’re using a screenless viewer such as the Oculus-powered Gear VR with a Samsung smartphone, they can only do it for short periods of time before the heat emanating from the smartphone makes them want to take it off. Heat is less of an issue with tethered headsets, but the discomfort of the tether weighing down the headset means there are limits to just how much time you can spend fully immersed in those rigs, too.

But Can They Sell It?
So the Oculus Go hardware is great, and the standalone form factor drives a unique and compelling virtual reality experience. But the question remains: How is Facebook going to market and sell this device, and is there enough virtual reality content out there to get mainstream customers to lay down $200?

To date, Facebook hasn’t said much publicly about the way it intends to push Oculus Go into the market, and through which channels. The company undoubtedly learned a great deal about channels from its successes (and failures) with the Oculus Rift. The bottom line is that, for the foreseeable future, people really want to try out virtual reality before they buy it. Oculus Go should be significantly easier to demonstrate in store than a complicated headset tethered to a PC, but how will Facebook incentivize the channel? What apps will it run? Who will ensure that the devices are clean and operational?

When I talk to Oculus executives, their belief that virtual reality is an important and vital technology is immediately clear. Often it feels as if they see its ascension as a certainty and just a matter of time. But for the next few years, moving virtual reality from an early adopter technology to something the average consumer will want to use is going to take herculean marketing, education, and delivery efforts. With Oculus Go, Facebook has a key piece of the puzzle: a solid standalone device at a reasonable price. Now it needs to put into place the remaining pieces to ensure a successful launch.

NVIDIA DGX-2 solidifies leadership in AI development

During the opening two-plus-hour keynote to NVIDIA’s GPU Technology Conference in San Jose this week, CEO Jensen Huang made announcements and proclamations on everything from autonomous driving to medical imaging to ray tracing. The breadth of the company’s coverage is now substantial, a dramatic shift from its roots solely in graphics and gaming. These kinds of events underscore the value that NVIDIA has created as a company, both for itself and for the industry.

In that series of announcements, Huang launched a $399,000 server. Yes, you read that right – a machine with a $400k price tag. The hardware is aimed at the highest end, most demanding AI applications on the planet, combining the best of NVIDIA’s hardware stack with its years of software expertise. Likely the biggest customer for these systems will be NVIDIA itself as the company continues to upgrade and improve its deep learning systems to aid in development of self-driving cars, robotics, and more.

The NVIDIA DGX-2 makes the claim of being the world’s first 2 petaFLOPS system, generating more compute power than any competing server of similar size and density.

The DGX-2 is powered by 16 discrete V100 graphics chips based on the Volta architecture. These sixteen GPUs have a total of 512GB of HBM2 memory (now 32GB per card rather than 16GB) and an aggregate bandwidth of 14.4 TB/s. Each GPU offers 5,120 CUDA cores for a total of 81,920 in the system. The Tensor cores that make up much of the AI capability of the design breach the 2.0 PFLOPS mark. This is a massive collection of computing hardware.
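
As a quick sanity check, the aggregate figures follow directly from the per-GPU numbers. The 900 GB/s per-GPU memory bandwidth below is inferred by dividing the quoted 14.4 TB/s across 16 GPUs, so treat that line as an assumption rather than a spec-sheet citation.

```python
# Re-deriving the DGX-2 aggregates from the per-GPU figures cited above.
num_gpus = 16
hbm2_per_gpu_gb = 32          # GB of HBM2 per V100
cuda_cores_per_gpu = 5_120
hbm2_bw_per_gpu_gbs = 900     # GB/s; assumed, inferred from the 14.4 TB/s aggregate

print("Total HBM2:", num_gpus * hbm2_per_gpu_gb, "GB")                        # 512 GB
print("Total CUDA cores:", num_gpus * cuda_cores_per_gpu)                     # 81,920
print("Aggregate memory bandwidth:", num_gpus * hbm2_bw_per_gpu_gbs / 1000, "TB/s")  # 14.4 TB/s
```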

The previous DGX-1 V100 system, launched just 6 months ago, ran on 8 GPUs with half the memory per GPU. Part of the magic that makes the DGX-2 possible is the development of NVSwitch, a new interconnect architecture that allows NVIDIA to scale its AI integrations further. The physical switch itself is built on 12nm process technology from TSMC and encompasses 2 billion transistors all on its own. It offers 2.4 TB/s of bandwidth.

As PCI Express became a bottleneck for multi-GPU systems crunching the enormous data sets typical of deep learning applications, NVIDIA developed NVLink. First released with the Pascal GPU design and carried over to Volta, NVLink gives the V100 chip support for 6 connections and a total of 300 GB/s of bandwidth for cross-GPU communication.

NVSwitch builds on NVLink as an on-node design and allows any pair of GPUs to communicate at full NVLink speed. This facilitates the next level of scaling, moving beyond the number of NVLink connections available on a per-GPU basis, and allows a network to be built around the interface. The switch itself has 18 links, each capable of eight 25 Gbps bi-directional connections. Though the DGX-2 uses twelve NVSwitch chips to connect 16 GPUs, NVIDIA tells me that there is no technological reason they couldn’t push beyond that. There is simply a question of need and physical capability.

With the DGX-2 system in place, NVIDIA claims to see as much as a 10x speedup in just the 6 months since the release of DGX-1, on select workloads like training FAIRSEQ. Compared to traditional data center servers using Xeon processors, Huang stated that the DGX-2 can provide computing capability at 1/8 the cost, 1/60 the physical space, and 1/18 the power. Though the repeated line of “the more you spend, the more you save” might seem cliché, NVIDIA hopes that those organizations investing in AI applications see value and adopt.

One oddity in the announcement of the DGX-2 was Huang’s claim that it represented the “world’s largest GPU”. The argument likely stems from Google’s branding of the “TPU” as a collection of processors, platforms, and infrastructure into a singular device and NVIDIA’s desire to show similar impact. The company may feel that a “GPU” is too generic a term for the complex systems it builds, which I would agree with, but I don’t think co-opting a term that has significant value in many other spaces is the right direction.

In addition to the GPUs, the DGX-2 does include substantial hardware from other vendors that acts as a support system. This includes a pair of Intel Xeon Platinum processors, 1.5 TB of system memory, eight 100 GigE network connections, and 30TB of NVMe storage. This is an incredibly powerful rackmount server that services AI workloads at unprecedented levels.

The answer I am still searching for is to the simple question of “who buys these?” NVIDIA clearly has its own need for high performance AI compute capability, and the need to simplify and compress that capability to save money on server infrastructure is substantial. NVIDIA is one of the leading developers of artificial intelligence for autonomous driving, robotics training, algorithm and container set optimization, etc. But other clients are buying in – organizations like New York University, Massachusetts General Hospital, and UC Berkeley have been using the first-generation device in flagship, leadership development roles. I expect that will be the case for the DGX-2’s sales targets as well: that small group on the bleeding edge of AI development.

Announcing a $400k AI accelerator may not have a direct effect on many of NVIDIA’s customers, but it clearly solidifies the company’s position of leadership and internal drive to maintain it. With added pressure from Intel, which is pushing hard into the AI and machine learning fields with acquisitions and internal development, NVIDIA needs to continue down its path and progression. If GTC has shown me anything this week, it’s that NVIDIA is doing just that.

Apple Renews its Love for Education, not the Education Market

Forty years on from the first device ever targeted at education, Apple held an event in Chicago at the largest single-building high school in the country. Apple did this not just to introduce an iPad but to tell their story on education and to show the end-to-end solution they now have to help teachers really transform their curriculum.

I came into this event hoping to see three things: hardware pricing, an improved productivity and collaboration suite and a bigger focus on managing the classroom. Apple addressed my three points but in true Apple fashion it did so in a way that was not obvious to me.

Pricing

I am sure nobody was expecting Apple to hit Chromebook pricing. Apple’s approach was to deliver more at the same price. So the new 9.7” iPad has a Retina display, an A10 Fusion chip, and support for Apple Pencil, all for $329 to the consumer and $299 after the education discount. It’s interesting that Apple continued to partner with Logitech for the rugged case and Crayon rather than designing its own. I understand why Apple would not try to do a rugged case; after all, not even Apple could make a rugged case that looks attractive. But I do see the benefit of having something designed by Apple as a companion to Pencil aimed at younger users. There would have been no cannibalization of the current Pencil. Of course, Apple could still do something for the consumer market, where price elasticity is less of an issue than in education. I think of this in the same way I do Apple Watch bands: there is an opportunity to bring in color and design to accessorize with a more kid-friendly stylus.

I know many will evaluate Apple’s opportunity only by considering the price of the new iPad, which is of course high when compared to most Chromebooks, especially once you add the case and Pencil or Crayon. But the way I think about it is that with iPad you get more than a computing device. You get a camera, a video camera, musical instruments, and now a drawing board and support for AR. AR support in particular is interesting, as it would allow schools to experiment with more immersive teaching without having to invest in a separate headset, as is the case with VR and Mixed Reality. Seen this way, justifying the price is much easier. If you are investing in an iPad only to continue focusing on traditional work with text, charts, and slides, you would be better served by a lower-priced device with a good productivity suite.

Cost is not just about hardware, however. Total cost of ownership of hardware deployment and maintenance plays a big part, and this is where Apple has made great progress. I had a chance to see Apple School Manager in action and it took only a few minutes to set up my classroom, my lessons and my books. For teachers to be able to handle deployment and management helps to lower the overall TCO, something that Chromebooks have been very good at.

Productivity and Collaboration

Now that Pencil support goes across the entire iPad line, Apple has finally updated iWork to support it. I have not played with these apps much, but, from what I have seen, it will help create richer documents. I see how much my daughter uses the Pen with her Surface Pro, and not just to draw but to interact with text on the web and in Word. Apple’s new Smart Annotation feature for Pages, now in beta, is also interesting, as it is not static but changes as you update your document. I will be curious to play around with it more to see how useful it is in a real collaboration workflow.

Speaking of collaboration, Apple also increased education storage to 200GB per Apple ID. This is key, as collaboration without the cloud just does not happen. It is also key for using all the different media Apple is suggesting, like video, photography, drawing and so on, as project file sizes will grow considerably.

Interestingly, the Logitech Crayon also addresses collaboration within the classroom. Because no pairing is required, multiple kids can draw or write on the same iPad.

While I am not sure yet if these changes are enough for a consumer to switch from Microsoft Office or G-suite, I think they are welcome additions in education.

Classroom Management

This was for me the most important part of the day and what really shows that Apple now has a full solution rather than a series of features. There are different layers to it. First, Classroom, which has been available since 2016 and helps with managing iPads within the classroom. Then Apple School Manager which, as I explained, helps with setting up IDs, lessons, and books. And finally ClassKit, which helps teachers get more out of the apps they are using by delivering an assessment of students’ interaction with the apps and their progress. The teacher not only gets a single-app view but also an across-the-board view, so they can get a more precise picture of how their students are doing. This last component speaks to Apple wanting to make sure that apps are integrated into the curriculum in a way that is productive and rewarding for both teacher and student. It also pushes management from people and hardware to content, which is as important, if not more so. Of course, Apple underlined that they do not see, nor do they want to see, any of that data.

After creating a curriculum for coding called “Everyone Can Code,” Apple has now created one called “Everyone Can Create,” focused on video, photography, drawing and music. This is another tool that Apple is giving teachers to help them think more outside the box and integrate the pillars of STEAM into their everyday teaching.

Apple has stayed true to its core strength, the app ecosystem, by providing tools that make teachers’ everyday job of selecting the apps they want to use and assessing their effectiveness easier. I think this will improve learning, but it will also help with validating their teaching choices and ultimately justifying the investment in these tools.

My Big Takeaway

Back in 1996, Steve Jobs said in an interview with Wired:

“When you have kids you think, What exactly do I want them to learn? Most of the stuff they study in school is completely useless. But some incredibly valuable things you don’t learn until you’re older — yet you could learn them when you’re younger. And you start to think, What would I do if I set a curriculum for a school? God, how exciting that could be! But you can’t do it today. You’d be crazy to work in a school today. You don’t get to do what you want. You don’t get to pick your books, your curriculum. You get to teach one narrow specialization. Who would ever want to do that?”

Sadly, I think that what Steve Jobs said 22 years ago is still relevant today. You have heard me be very critical before of the current school system, where kids are measured on standardized tests and teachers teach in the same way they have for the past several years. That is what Apple is setting out to change. The new tools in ClassKit allow teachers to build a much more personalized teaching environment. Not all teachers might have the flexibility to build their own curriculum, but they can take a more tailored approach to teaching based on what teaching method resonates best with a student. It might not be an approach that results in the biggest market share grab, but it sure is strongly impactful.

There were many teachers at the Apple event today, and seeing how much they bring subjects to life in school is exciting for me as a mom. At the end of the day, it is all about providing our kids with the right skills to be successful, and to do that we must provide teachers with the right tools to be successful.

Of course, in this process all the tech companies addressing education are trying to engage children as early as possible to hook them into their ecosystems, so while there is deep care for education across the board, there is also the realization that you are influencing the next generation of buyers and users. This is why, ultimately, in my view, aiming at the user (teacher, student or parent) rather than the institution is the key to long-term success.

Will Apple IBM Deal Let Watson Replace Siri For Business Apps?

Even though it wasn’t the first time that Apple and IBM have announced partnerships in the enterprise space, as a long-time tech industry observer, there’s still part of me that finds it surprising to see an Apple executive speak at an IBM event.

Such was the case at last week’s IBM Think conference in Las Vegas, where the two announced that IBM’s Watson Services was going to be offered as an extension to Apple’s CoreML machine learning software. Essentially, for companies that are creating custom mobile applications for iPhones (and iPads), the new development means that enterprises can get access to IBM’s Watson AI tools in their iOS business applications.

On the surface, it’s easy to say that this is just an extension of some of the work IBM and Apple announced several years back to bring some of IBM’s industry-specific vertical applications to the iPad. In some ways, it is.

But in many other ways, this announcement is arguably more important and will generate more long-term impact than whatever new products, software and/or services that Apple announces later today at their education event in Chicago.

The reasons are several. First, the likely focus of much of this work will be on the iPhone, which has a larger and more important presence in businesses throughout the world than iPads. Depending on who you ask, iPhones have nearly 50% share of smartphones in US businesses, for example, which is several points higher than their share of the overall US smartphone market.

More important than that, however, is the new dynamic between Apple’s machine learning software and the capabilities offered by IBM. At a basic level, you could argue that there may be future battles between Siri and Watson. Given all the difficulties Apple has had with Siri, versus the generally much more positive reaction to Watson, that could prove to be a significant challenge for Apple.

The details of the agreement specify that Watson Services for CoreML will allow applications created for iOS to leverage pre-trained machine learning models/algorithms created with IBM’s tools as a new option. As part of CoreML, Apple already offers machine learning models and capabilities of its own, as well as tools to convert models from popular neural network/machine learning frameworks, such as Caffe and TensorFlow, into CoreML format. The connection with IBM brings a higher level of integration with external machine learning tools than Apple has offered in the past.
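
For a sense of what “converting a model into CoreML format” looks like in practice, here is a rough sketch using Apple’s open-source coremltools package on a throwaway Keras model. The model, file name, and converter options are placeholders, the exact entry points vary between coremltools versions, and this illustrates the generic Core ML conversion flow rather than the Watson Services for CoreML integration itself.

```python
# Rough sketch: convert a trained model into Core ML format with coremltools.
# The tiny Keras network below is a placeholder standing in for whatever model
# was actually trained in TensorFlow, Caffe, etc.
import coremltools as ct
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert so the model can run on-device inside an iOS app.
mlmodel = ct.convert(
    model,
    inputs=[ct.ImageType(shape=(1, 224, 224, 3))],
    convert_to="neuralnetwork",
)
mlmodel.save("Classifier.mlmodel")  # drop the .mlmodel file into an Xcode project
```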

Initially, the effort is being focused on the visual recognition tools that IBM has made available through its Watson services. Specifically, developers will be able to use Watson Visual Recognition to add computer vision-style capabilities to their existing apps. So, for example, you could point your iPhone’s camera at an object and have the application recognize it and provide exact details about specific characteristics, such as determining a part number, recognizing whether a piece of fruit is ripe, etc. What’s interesting about this is that Apple already has a Vision framework for letting you do similar types of things, but this new agreement essentially lets you swap in the IBM version to leverage their capabilities instead.

IBM also has voice-based recognition tools as part of Watson Services that could theoretically substitute for Apple’s Foundation Natural Language Processing tools that sit at the heart of Siri. That’s how we could end up with some situations of Siri vs. Watson in future commercial business apps. (To be clear, these efforts are only for custom business applications and are not at all a general replacement for Apple’s own services, which will continue to focus on Siri for voice-driven interactions in consumer applications.) The current announcement specifically avoids mentioning voice-based applications, but knowing that ongoing machine learning efforts between Apple and IBM are expected to grow, it’s not too hard to speculate.

If you’re wondering why Apple would agree to create this potential internal software rivalry, the answer is simple: legacy. Despite earlier efforts between the two companies to drive the creation and adoption of custom iOS business applications, the process has moved along slowly, in large part because so much of the software that enterprises already have is in older “legacy” formats that are difficult to port to new environments. By working with IBM more closely, Apple is counting on making the process of moving from these older applications or data sets to newer AI-style machine learning apps significantly easier.

Another interesting aspect about the new Apple IBM announcement is the IBM Cloud Developer Console for Apple, which is a simple, web-based interface that lets Apple developers start experimenting with the Watson services and other cloud-based services offered by IBM. Using these tools, for example, lets you build and train your own models in Watson, and even create an ongoing training loop that lets the on-phone models get smarter over time. In fact, what’s unique about the arrangement is that it lets companies bridge between Apple’s privacy-focused policies of doing on-device inferencing—meaning any incoming data is processed on the phone without sending data to the cloud—and IBM’s focus on enterprise data security in the cloud.

Another potentially interesting side note is that, because IBM just announced a deal with Nvidia to extend the amount of GPU-driven AI training and inferencing that IBM is doing in their cloud, we could see future iOS business apps benefitting directly from Nvidia chips, as those apps connect to IBM’s Nvidia GPU-equipped cloud servers.

More than anything, what the news highlights is that in the evolution of more sophisticated tools for enterprise applications, it’s going to take many different partners to create successful mobile business applications. Gone are the days of individual companies being able to do everything on their own. Even companies as large as Apple and IBM need to leverage various skill sets, work through the legacy of existing business applications, and provide access to programming and other support services from multiple partners in order to really succeed in business—even if it does make for some friendly competition.

Is it Time for an Ad-Free, Subscription-Based Version of Facebook?

Facebook is undergoing a great deal of scrutiny these days. Over the last 18 months, they have come under fire for being a service that rampantly spread fake news without any checks and balances and that failed to protect their customers’ data from being hijacked. Some have even gone as far as to accuse them of having an impact on the 2016 presidential election.

Then, last week, the news that Cambridge Analytica abused Facebook in egregious ways put Facebook CEO Mark Zuckerberg in the crosshairs of US and EU officials and has brought Facebook’s leadership under great pressure to clean up their act, or they will come under stiff regulation from US and EU legislative bodies very soon.

In talking to users of Facebook, I am starting to understand their frustrations with this social media service, but I also sense that it still provides real value to them in the way they can connect with family and friends and get legitimate information and even ads that are actually useful to them. However, the thing I keep hearing from these discussions is that they are losing faith in Facebook and are not sure they can trust them to protect them from nefarious actors who prey on them in multiple ways.

If you look deeply at Facebook’s business model, the culprit, so to speak, that allows this unrestrained flow of information is the fact that Facebook is ad driven. Everything they do is tied to the ways they can use data collected from their users to serve them targeted ads. This is at the heart of Facebook’s overall financial growth and keeps them profitable.

In my conversations with Facebook and some Twitter users, there is one theme that keeps coming up: they want to have these connections with friends and family, and they understand ads are the way this service is free to them. However, what they want is for these services to protect their privacy and keep them from being exploited or misused through their data being harvested and used beyond an ad model they are willing to accept.

In a recent insider post about Apple’s purchase of Texture, I pointed out that the acquisition of this magazine service is tied to magazines that are trusted sources. In most cases, the magazines use highly accepted journalistic practices, and people trust what they read in them. I called Texture a “Safe Haven” for content and noted that Apple now has a type of service that will be important to their customers and extends the content reach of their ecosystem.

The idea of trusted content or sources is really important in this day of fake news. People are tired of being duped and are willing to pay for a trusted service like the one Texture delivers for $9.99 a month that gives them access to over 200 magazines.

Given what Facebook is going through and has had to deal with, perhaps it is time that they offer what I call a “trusted” or Safe Haven version of Facebook and do it as a subscription service. I realize this idea has been kicking around, most likely inside Facebook as well as with other parties who look at Facebook’s future. But given what happened recently with Cambridge Analytica, and the fact that it does not seem Facebook really knows how to deal with protecting our data given their ad-based business model, I am beginning to think that millions of people may be willing to actually pay for an ad-free version of Facebook if it does not use their data for ads and ad targeting.

So how much would Facebook have to charge each user? Chris Wilson, Director of Data Journalism at Time Magazine, answers this question in a recent post, and I share the relevant part of his article below:

“So how much would this cost users? Facebook estimates that it pulled down $20.21 in revenue per active user worldwide last year, for a total of $40.65 billion. That sum amply covered the $20.45 billion the company paid in costs and expenses. After taxes, Facebook posted $15.92 billion in profits. That per-user revenue was considerably higher in the U.S. and Canada, where a more developed and monetized audience netted $84.41 per user. That’s still less than what you might pay for Netflix or HBO, though perhaps more than many of Facebook’s 2 billion monthly users would be willing to shell out.

But even if subscriptions were prorated by market, and even if a privacy-positive network were to grow to Facebook-sized proportions, it would take less than $84.41 a month in the U.S. and Canada to turn a healthy profit. The trouble with relying on a lot of ads is that you need to store a lot of content. These days, that includes lots of video, with its lucrative “pre-roll” and “mid-roll” ads. To keep up with demand, Facebook has aggressively tried to get users and companies to post their content directly on the social network. When you get into the business of hosting large amounts of original content, however, your bill for storing and indexing rises quickly. Facebook doesn’t disclose its exact obligation for running massive warehouses around the world, but it’s safe to assume it’s a fair portion of its $20 billion in expenses. Without the burden of being a major content platform or pouring money into sophisticated algorithms to serve ads based on rigorous analysis of a person’s profile, we can estimate that $75 a year would cover the operating costs and generate a healthy profit.”

If Mr. Wilson’s math is correct, a monthly subscription price of $6.25 that served a user no ads but gave them the valued connections to friends, family, and information they specifically want would still net Facebook a healthy profit.
As the Time article points out, Facebook would have to drastically change their business model and cut the costs of hosting video-intensive ads for this to work. But I have to believe that for a significant portion of users, especially in the US, Canada and much of the EU, a subscription-based Facebook could be acceptable.
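
For what it’s worth, the arithmetic behind that $6.25 figure is straightforward; the quick check below simply re-derives it from the numbers quoted in the Time excerpt.

```python
# Back-of-the-envelope check of the figures quoted above from the Time piece.
total_revenue = 40.65e9        # 2017 Facebook revenue, dollars
revenue_per_user = 20.21       # worldwide ad revenue per active user, dollars
us_canada_arpu = 84.41         # annual ad revenue per user in the US and Canada
subscription_per_year = 75.00  # Wilson's estimated subscription price per year

implied_users = total_revenue / revenue_per_user
print(f"Implied active users: {implied_users / 1e9:.2f} billion")  # ~2.01 billion
print(f"Monthly subscription: ${subscription_per_year / 12:.2f}")  # $6.25
print(f"Monthly US/Canada ad ARPU: ${us_canada_arpu / 12:.2f}")    # ~$7.03
```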

If I were going to pay a subscription fee for Facebook, here is what I would want in that service:

1-Facebook would not serve me any ads of any type and would not use any of my personal data for this purpose or share that data with advertisers, even if it was to be used for the market research purposes of any advertiser. My data is my data, and it is not for Facebook to use in any way other than to serve me.

2-I would want a secure connection to my friends and family and be assured that I have complete control over what I can send them and what they can see as part of this service. More importantly, Facebook would not be profiling me or my friends and family, just allowing me to communicate with them and share things of interest to me in a secure, safe haven environment.

3-I would want the Facebook messaging service to be encrypted and as secure as Apple’s current messaging service. While Facebook is the medium for making these connections, I do not want them to store that data or have any access to my messages for any purpose.

4-I would be open to them curating the kind of information I want to see based on my preferences. I am willing to let them know I like Golden Retrievers and stories about dogs. I am OK that they know I am a scuba diver and send me stories about scuba diving. But I would set the preferences for what type of information I am interested in, and Facebook would use AI and machine learning to check articles related to my interests to make sure they are legitimate and not fake news.

This is the minimum I would want in a Facebook subscription that I pay for monthly.

Given Facebook’s business model today, I doubt that they would create this kind of service for their users, even if millions of us want this type of privacy and control over Facebook. However, if they don’t deliver this type of truly secure safe haven for existing Facebook users, give us more control of our data, make sure our data is truly secure and protect us from fake news, millions upon millions of users will #deletefacebook and opt instead for some other type of social media platform like the one I describe above.

Tech.pinions Podcast: iPhone 10, Not EX

In this week’s podcast, Ben Bajarin is joined by Aaron Suplizio to discuss some recent research they collaborated on around Apple’s iPhone X. The survey covered things like customer satisfaction, net promoter score, features most loved about their iPhone X.

Follow Aaron on Twitter @aaronsuplizio

Wearable Device and Fitness App Companies Should Share More Trend Data

This might be murky territory to wade into this week, considering all the news around Facebook. But consider this: some 200 million people worldwide own a connected wearable device, such as a Fitbit, Apple Watch or a Garmin. Hundreds of millions use apps such as MapMyRun, Runkeeper, or Apple Health to track steps taken, miles walked or run, hours slept, and calories both consumed and expended. But some ten years into this wearable device/fitness app market, remarkably little is shared from the treasure trove of information that the leading companies possess. By comparison, we see these aggregated data reports in so many other corners of tech and media. Akamai has its ‘State of the Internet’ report. App Annie releases all sorts of reports on app data and usage.

I think this is a missed opportunity for the wearables industry, in two respects. First, as the devices improve and the data becomes more accurate, it’s likely there are findings or trends that could prove valuable from a health outcomes perspective. Second, the leading companies could use some of the data they collect, responsibly and at an aggregate level, in fun and interesting ways, and to differentiate their offerings.

Let’s start by giving credit to the fitness/wearables industry. The leading companies have, so far, acted responsibly with respect to their customers’ data. You haven’t received an email from your health insurance provider raising your premiums because your Fitbit step count went down by half last year. Nor is Asics trying to sell you high-end sneakers, based on your 7-minute miles tracked by their Runkeeper App. So, prior to discussing what these companies might do with this data, the ground rule should be that individual fitness and health data should never be shared or sold, especially without the user’s express permission and with 100% transparency regarding what’s being made available. A good example of how this is done right is the Strava segments and leaderboard, where their subscribers opt-in to compare their performance on a particular route or ride.

With that out of the way, I’d love to see companies such as Fitbit, Under Armour, and Apple release some aggregate data from all the activity they track. Some of it can be fun and trivial, such as what is the highest number of steps taken by someone in a single day last year? Are there some cities or countries where people walk or exercise more? How might it differ seasonally? Does walking more have any impact on heart rate measurements?
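
To illustrate the kind of aggregate-only reporting I have in mind, here is a small sketch in which individual records stay private and only summary statistics are published. The data, columns, and values are entirely made up for illustration.

```python
# Aggregate-only reporting sketch: individual records never leave the analysis,
# only city-level summaries do. The DataFrame below is invented for illustration.
import pandas as pd

steps = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5, 6],
    "city":    ["Boston", "Boston", "New York", "New York", "Seattle", "Seattle"],
    "date":    pd.to_datetime(["2018-03-01"] * 6),
    "steps":   [11200, 8400, 9700, 10100, 12300, 7600],
})

# Publish only aggregates: per-city averages and the single-day maximum.
by_city = steps.groupby("city")["steps"].agg(["mean", "max", "count"])
print(by_city)
print("Highest single-day step count:", steps["steps"].max())
```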

I also think these companies could use the data to increase engagement with their customers. There are primitive examples already, such as ‘run 10 miles this week and get 50% off a T-shirt’. But how about some more interesting contests, such as a Boston vs. New York ‘step challenge’ to see which city, adjusted for population, has the highest average number of steps over a certain period? There are all sorts of opportunities to create competitions between not only users but across cities, companies, and so on. This could be a fun incentive to get people outside and active.

For the fitness set, there are myriad geeky possibilities. Take running, as an example. What’s the most frequent time of day people run? Across the zillions of miles tracked every day, what’s the average distance for a run, or time? How many people run more than twice a week? That sorta stuff.

And at an individual level, I’d love to know more. Right now, the only comparison I can set up on my Fitbit is steps compared to other friends with Fitbits I’ve selected. But how do my steps/sleep/other activity compare across other cohorts, such as age, gender, location, season, weather, length of device ownership, and so on?

Then there’s the health side of the equation, which could become more interesting over time as there are more devices that are tracking sleep, ongoing heart rate, etc. How do sleep patterns vary by age?  What % of those over the age of 50 get up more than once a night? How do some of the fitness/exercise patterns tracked by these devices/apps correlate to what we know about national health outcomes?

The enterprise piece is also interesting. Companies have been buying Fitbits and other like devices for their employees, in rather large numbers, for years. They do fun things like run step count contests, and so on. But it would be interesting to hear, even at an anecdotal level, the impact of these devices on employee health. Do companies that run wearable ‘programs’ see any benefits in terms of employee health and wellness? Does this translate into cost savings?  The wearable firms might already share some of this data in their ‘pitch’ to corporate accounts, but little is known beyond that closed circle.

There’s a sense that the wearable device and fitness app category is stagnating. Fitbit had a crummy fourth quarter. Perhaps this data opportunity can give the industry a jolt. And over time, as these devices and apps track more categories of activity, and with higher levels of accuracy, the data will similarly evolve from being merely fun and interesting to compelling and useful.

As Tech in Education Matures, it calls for more than Hardware

Our kids might not remember what schools were like before so many started focusing on STEM and coding. Most schools had a computer room, but technology, or computer science, was very much a subject rather than a tool to use throughout the school day. We can argue whether technology made things better or worse, but that deserves a totally separate discussion. As it did in the enterprise market, the iPad, since its launch in 2010, started making its way into education, bringing technology into classrooms. It did not take long for the first one-to-one iPad school to get established.

The first Chromebooks hit the market in 2011, but it was not until 2013 that they started to make a considerable impact in K-12 education. They have been growing across American schools ever since, mostly at the expense of Apple.

When the iPad was first brought into the classroom, it was done in schools where, by and large, budget was not an issue and teachers were empowered to invest time in finding the best way to use technology to reinvent and energize teaching. It was really about rethinking how to teach and connect with students. As technology became more pervasive, schools discovered that it was not just about teaching; it was also about managing the classroom. This is what Google was able to capitalize on. Yes, schools turn to Chromebooks because the hardware is cheaper, but also because the total cost of ownership, when it comes to deployment, management, and teachers’ involvement, is much lower.

A very Different Approach

When analyzing a go-to-market strategy, I always point to how the “why” of the approach rests on the core strength of the brand in question. In this case, Apple’s strength is in its ecosystem and its weakness is in the cloud. For Google, the opposite is true. So it makes sense that Apple built its education strategy around the ecosystem of developers that jumped at the opportunity to sell into the education market. Google, in the meantime, built on its cloud strength to enable a device that could be easily shared and managed, as well as strong collaborative tools in G Suite for Education.

I am focusing on Apple and Google because Microsoft’s efforts in K-12 are more recent, at least outside of administration and inside the classroom. Yet, even with Microsoft, the approach fits their strength, which is Office first and then cloud.

There is no right or wrong approach, but there is more or less scalable, and that is linked to total cost of ownership, which starts with hardware. While it would be belittling to Google’s effort to say they are winning in education just because Chromebooks are cheap, it is fair to say that they get in the door because of that, and from there the conversation is certainly easier.

Three Areas that Would Help Apple Grow Share in Education

Apple just announced an education event in Chicago for March 27 and, as you know, I know better than to try to predict what they will and will not do. That said, there are three areas I would like Apple to address when it comes to their education offering: hardware pricing, an improved productivity and collaboration suite, and a bigger focus on managing the classroom.

Hardware Pricing

Apple already cut the price of the 9.7” iPad to $329 in March last year, but even with education discounts, there is still a considerable gap with Chromebooks. Of course, I do not expect Apple to ever reach the sub-$200 prices we see with some Chromebooks, but I do think prices still need to be lowered, considering most schools also have to factor in the cost of rugged cases, storage carts, and styluses on top of the iPad itself.

Considering the design of the invite, I do wonder if there might be some bundling of Pencil, as this will of course be a great tool across the board, from writing to art. This would, in turn, imply that Pencil support would be expanded outside the Pro family, which makes perfect sense given its popularity.

For higher education, I would also expect an updated MacBook Air at a very competitive price. A sub-$900 MacBook would certainly put pressure on Windows manufacturers as well as make it harder to justify a Pixelbook, Google’s second attempt to show Chromebooks don’t all have to be lower-end hardware.

Improved Productivity and Collaboration Suite

Apple giving up on iWork in favor of Microsoft Office 365 might be OK with people like me (AKA older users!) who have grown up using Office for most of their careers. But Gen Z and Millennials live and breathe G Suite. While Google has been platform-agnostic when it comes to consumer tools, I very much doubt they will support education SKUs on other devices as deeply as they do on Chromebooks.

This opens up an opportunity for Apple to create a productivity suite that competes with G Suite. Of course, this will require a bigger investment in iCloud, as Apple will need to push collaboration to the next level. iMessage is such a powerful workflow tool for many that I am surprised Apple has yet to integrate it into the Classroom tools.

iWork, which could do with a different name altogether, should be the toolset the next generation wants to work with, in the same way the Mac has been the computer students wanted to have when they went off to college and then to work.

Lastly, given the recent focus on families, I would like to see how Apple can better address parents, who are often left out of the classroom both physically and digitally. G Suite is great for kids to log in and do their homework or projects from any device they have at home, but parents are not necessarily included in that loop. Thinking about how to better serve parents will certainly help Apple be more top of mind in family choices.

Classroom Management Tools

In 2016, Apple released Classroom, which allows for automatic connectivity across iPads and helps manage iPads in schools that are not one-to-one by letting the teacher log students into the most recent iPad they used. Classroom also allows teachers to launch apps, websites, or books and push that content to students, locking their devices to a specific view.

With the release of iOS 11.3, it also seems that Apple has developed a framework called ClassKit that aims to help developers of educational apps create student evaluation features such as questionnaires that students can fill in and automatically send to their teacher. It also seems there will be a “kiosk mode” so that students cannot access anything else on the device while they are taking a test. These are features that Chromebooks already offer, and I am sure they will be welcomed by educators using iPads.

It appears that with the addition of ClassKit, Apple will check many boxes in a feature-by-feature showdown with Google.

Aside from not talking about their education offering as much as they should, I fear that Apple comes across more as a DIY solution. While this allows flexibility for every teacher and lets them pick best-of-breed apps for their specific classroom needs, it can feel like quite a daunting task compared to Google’s approach. Of course, now that Chromebooks support Android apps there is a choice of apps there as well, but the core of what Google brings to schools is nicely wrapped up in G Suite for Education, and most teachers do not even look past that.

I am sure that if you ask any school, aside from budget, time is the one other thing they will tell you they do not have enough of. As Chromebooks get more and more established in education, the biggest issue Apple will face is getting schools to consider a change. The advantage Google had here was cost. For Apple to have schools make the switch, convenience and ease of use should be the key. Freeing up teachers’ time from administrative tasks and empowering them to teach is a great selling point.

Edge Servers Will Redefine the Cloud

Talk to most people about servers and their eyes start to glaze over. After all, if you’re not an IT professional, it’s not exactly a great dinner party conversation.

The truth is, in the era of cloud-driven applications in which we now live, servers play an incredibly vital role, functioning as the invisible computing backbone for the services upon which we’ve become so dependent.

Most servers live either in large cloud-hosting sites or within the walls of corporate data centers. The vast majority of them are Intel x86-based computing devices that are built similarly to and essentially function like large, powerful PCs. But that’s about to change.

Given the tremendous growth in the burgeoning world of edge computing—where computing resources are being pushed out towards the edge of the network and closer to us and our devices—we’re on the cusp of some dramatic changes in the world of servers. The variations are likely to come in the size, shape, capabilities, number, and computing architecture of a whole new category of devices that some have started to call gateways or, in more powerful forms, edge servers.

The basic idea driving edge computing is that current centralized cloud computing architectures are simply not efficient enough for, nor capable of, meeting the demands that we will soon be placing on them. Thanks to new types of applications—everything from voice-based personal assistants that use the cloud for translation, to increasingly connected cars that use the cloud for mapping and other autonomous features—as well as the continued growth of existing applications, such as streaming media, there’s an increasing recognition that new types of computing infrastructure are necessary. Distributing more computing intelligence out to the edge can reduce latencies and other delays, improve network efficiencies, reduce costs, enhance privacy, and improve overall capacity and performance for intelligent services and the connected devices which rely on them.

Because this intelligence is going to be needed in so many places, for so many devices, the opportunity for edge servers will be tremendous. In some instances, these edge servers may end up being downsized versions of existing servers, with similar architectures, similar applications, and similar types of nearby connected infrastructure components, such as storage and networking.

In many more cases, however, edge computing applications are likely going to demand a different type of server—at many levels. One likely scenario is best exemplified by hyperconverged server appliances, which essentially provide the equivalent of a complete data center in a single box, offering intelligent software-controlled storage and networking components in addition to the critical compute pieces. The beauty of hyperconverged devices is that they require significantly less space and power than traditional servers, yet their software-based architectures make them just as flexible as large data centers. That flexibility will be critical for edge servers, which will need to be reconfigured on the fly to meet rapidly shifting application demands.

Another likely scenario is a shift towards other types of computing architectures. While Intel-based x86 dominates the very conservative traditional server market, the fresh approach that edge-based servers and applications are likely to take removes the onus of legacy support. This will free companies to choose the types of architectures best suited to these new applications. A clear potential winner here is Arm, whose power-efficient designs could find a whole new set of opportunities in cracking the server market via edge-based devices. A number of vendors, including HPE, Cavium, and others, are just starting to deploy Arm-based servers, and edge computing applications will likely be a strong new market for these products.

Even within x86, we’ll likely see variations. AMD’s well-received Epyc line of server chips will likely find more acceptance in edge server applications. In addition, because many edge computing applications are going to be connected with IoT (Internet of Things) devices, new types of data and new types of analytics applications are going to become increasingly important. A lot of these new applications will also be heavy users of machine learning and artificial intelligence. Nvidia has already built a strong business providing GPUs to traditional servers for these kinds of AI and machine learning applications, and they’ll likely see even more use in edge servers.

On top of GPUs, we’ll likely see the introduction of other new architectures in these edge servers. Because they’re different types of servers, running new types of applications, they’re the perfect place for vendors to integrate other chip architectures, such as the AI-specific chips that Intel’s Nervana group is working on, as well as a host of others.

Software integration is also going to be critical for these new edge servers, as some companies will opt to transition existing cloud-based applications to these new edge servers, some will build tools that serve as feeders into cloud-based applications, and some will build new applications entirely, taking advantage of the new chip architectures that many of these new servers will contain. This is where companies like IBM have an opportunity to leverage much of their existing cloud and IoT work into products and services for companies who want to optimize their applications for the edge.

Though most of us may never physically see it, or even notice it, we are entering a phase of major disruption for servers. The degree of impact that edge-focused servers will ultimately have is hard to predict, but whether that impact will be real is now a foregone conclusion.

Podcast: Digital Assistants, AMD Chip Flaws, Apple Education Event, Fitbit

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing recent developments around digital assistants such as Apple’s Siri, discussing AMD chip flaws, chatting about the upcoming Apple Education event, and talking about Fitbit’s new smartwatch.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

AMD Security Concerns Overshadowed by Circumstances

On Tuesday, a security research firm called CTS Labs released information regarding 13 security vulnerabilities that impact modern AMD processors in the Ryzen and EPYC families. CTS launched a website, a couple of explanatory videos, and a white paper detailing the collection of security issues, though without details of implementation (which is good).

On the surface, these potential exploits are a serious concern for both AMD and its customers and clients. With the recent tidal wave caused by the Spectre and Meltdown security vulnerabilities at the beginning of the year, which has led to serious talk of hardware changes and legal fallout such as lawsuits against chip giant Intel, these types of claims are taken more seriously than ever before. That isn’t by itself a negative for consumers – placing more of the emphasis on security, and more of the culpability, on the technology companies will result in positive changes.

CTS Labs groups the vulnerabilities into four categories that go by the names Ryzenfall, Fallout, Masterkey, and Chimera. The first three affect the processor itself and the secure processor embedded in it, while the last one (Chimera) affects the chipset used on Ryzen motherboards. The heart of the processor exploits centers on the ability to overwrite the firmware of the “Secure Processor,” a dedicated Arm Cortex-A5 core that runs a separate OS. Its job is to handle security tasks like password management. Being able to take control of this part has serious implications for essentially all areas of the platform, from secure memory access to Windows secure storage locations.

Source: CTS Labs

The Chimera vulnerability stems from a years-old exploit in a portion of the ASMedia-designed chipset that supports Ryzen processors, allowing for potential man-in-the-middle attacks on network and storage traffic.

In all of these cases, the exploits require the attacker to have physical access to the system (to flash a BIOS) or elevated, root privileges. While not a difficult scenario to set up, it does put these security issues into a secondary class of risk. If a system is already compromised to that degree, it is at risk from a significant number of other exploits as well.

It is interesting to note from a technical standpoint that all of the vulnerabilities center on the integration of the Secure Processor, not the fundamental architecture of the Zen design. It is a nuanced difference, but one that separates this from the Spectre/Meltdown category. If these concerns are valid, it’s possible that AMD could fairly easily swap out this secure processor design for another, or remove it completely for some product lines, without touching the base architecture of the CPU.

For its part, AMD has been attentive to the new security claims. The company was given less than 24 hours’ notice of the security vulnerabilities, a significant departure from common security research practice. For Spectre/Meltdown, Intel and the industry were given 30-90 days’ notice, giving them time to do research and develop a plan to address the issues. CTS Labs claims that the quick release of its information was to keep the public informed. Without the time to do validation, AMD is still unable to confirm the vulnerabilities as of this writing.

CTS is holding back details of implementation for the vulnerability from the public, which is common practice until the vendor is able to provide a fix.

There is more to this controversy, unfortunately, than the potential security vulnerabilities themselves. CTS Labs also talked with other select groups prior to its public data release. The research entity pre-briefed some media outlets, which is not entirely uncommon. Secondary security researchers were given access to the POCs (proofs of concept) to validate the vulnerabilities. Again, that is fairly expected.

But CTS also discussed the security issues with a company called Viceroy Research, which has been accused in the past of creating dicey financial situations for companies in order to make a short-term profit. In this case, Viceroy published a paper on the same day as the release of CTS Labs’ own report, calling for AMD to file for bankruptcy and claiming the stock should have a $0.00 value.

To be frank, the opinions contained in the paper are absurd and show a clear lack of understanding of both the technical concerns surrounding security issues and the market conditions for high-tech companies. Calling for a total recall of products over what CTS has detailed about AMD’s Ryzen hardware, without understanding the complexity of the more direct hardware-level concerns of Spectre/Meltdown that have been in the news for three months, leaves me scratching my head.

Because of this secondary paper and the implications of finances in play regarding the news, it paints the entire CTS Labs report and production in a very bad light. If the security concerns were as grave as the firm claims, and the risk to consumers is real, then they did a disservice to the community by clouding the information with the circus that devoured it.

With all that said, AMD should take, and appears to be taking, the security concerns raised in this report with the level of seriousness they demand. AMD is working against a clock that might be unfair and against industry norms, but from my conversations with AMD personnel, the engineering and security teams are working around the clock to get this right. With the raised level of scrutiny around chip security after the Meltdown and Spectre disclosures, no company can take the risk of leaving security behind.

Digital Assistants: When the Pretty Voice Makes You Forget about the Brains!

Since digital assistants entered our homes and settled more or less comfortably among us, there has been a discussion about whether they should be personified or whether we are better off thinking of them as bots.

Brands have adopted different strategies in this area, with Amazon clearly betting on personification with Alexa, who not only has a name but a personality too. Apple and Siri: maybe a more abstract name, and some personality. Samsung with Bixby: another abstract name, without much of a personality. Microsoft and Cortana, who to me seems like a cross between the Halo game character and Helen Mirren. And of course Google, which decided its endeavor in this space was not even worthy of a name, despite having personality.

Back in 2016, I was adamant that humanizing digital assistants was going to help cement the bond between the user and the agent.

 

“Personifying the assistant might also make it easier for some people to understand what exactly the role is it has in their life….Giving it a name allows for it to change shape and form like a genie in a bottle – one moment being in your home speaker, the next in your phone, the next in your car helping you with different tasks throughout the day. If the digital assistant is very successful, you might even forget who is powering it. Alexa might indeed become bigger than Amazon. 

It seems to me Google’s approach wants to make sure that, whatever I do, whatever I use, and whoever I use as a medium, especially on a non-Google product or service, I am very clear Google is the one making it possible… 

Yet, while I entrust my life to Google, I am still very aware it is a corporation I am dealing with. Building an emotional connection would be much harder. After the initial Echo set up, my eight-year-old daughter asked Alexa to play a song and, as soon as the song started, she said excitedly, “Oh mom! She is awesome! Can we keep her, please?” I very much doubt “Amazon” would get that level of bonding.”

Two years on, I still think that personification does indeed help with engagement, but I am also starting to believe this bond might make it difficult for brands to have us users go beyond the voice.

Digital Assistants are a Battle in the AI War

Digital assistants have been the easiest way for brands to show off their smarts. The problem is that AI goes way beyond the voice that replies to you through your phone or speaker. There is intelligence impacting many aspects of the devices and services we use every day, whether it is called out in marketing as “AI enabled” or we simply notice that some things are easier to do than they used to be.

Our obsession with digital assistants, however, seems to make it harder for brands to just talk about the smarts, and this is true for some more than others.

Samsung struggled with positioning Bixby as an intelligent interface rather than an assistant: one that involves voice but also AR. Giving it a name made it a personal assistant to some extent, which drove industry watchers and consumers to make direct comparisons with Alexa and Google Assistant, even though what Samsung was trying to accomplish, at least to start with, was slightly different.

In my column last week I talked about the latest “Make Google Do It” commercial and how:

“It is all about Google and the relationship you, as a user, have with Google….Interestingly, the commercial also cements the different approach Google is taking to the digital assistant by not personifying it. The assistant is a means to get to Google, a clear separation of voice and brains.”

In a way, by separating the voice and the brains, Google ensures that users give it credit across the board. This means Google can capitalize on AI even when Google Assistant is not involved; think of Google Photos, Google Translate, or Google Lens.

For Amazon, the dynamic is quite different. Amazon did not “own” an operating system, nor did it control an ecosystem, so Alexa became both of those things. Alexa started as the point of engagement with the user and quickly developed into an ecosystem enabler, in a similar way to what iOS and Android have been for Apple and Google.

For Microsoft, AI is a much bigger game than Cortana has ever been. But truth be told, Cortana has not been given the attention it deserves by management. Perhaps precisely because of my previous point about Amazon, Microsoft does not see Cortana as an enabler but merely as a feature of an operating system. While Cortana has been criticized for not being competitive with other digital assistants, it seems that most have written her off as a contender in the race. This does not seem to be hindering people’s perception of Microsoft in AI, which of course is good news for Microsoft. One has to wonder, however, whether people’s belief that Cortana does not stand a chance is rooted in the assumption that Microsoft does not stand a chance in the consumer market.

The Peculiar Case of iOS and Siri

Apple does not quite fit the mold of any of the companies I mention above. It rarely does, of course. Apple has a healthy and widely adopted operating system, iOS, as well as an ecosystem with highly engaged users.

Siri was born before any other digital assistant we have in our homes today. Siri was born out of Apple’s belief that voice would play a role in the future of interfaces, but not necessarily that voice would be a platform in itself. It was 2011, and if you go back and watch the iPhone 4s launch event when Siri made her debut, you will hear a more robotic voice but very much recognize the Siri of today. And for many, herein lies the issue: Siri has not changed much over the past seven years. While my statement might be more perception than reality, most would agree that it feels that way when you compare Siri to the fast pace of innovation around Alexa.

Siri’s development pace, however, does not reflect the development we have seen in iOS, especially since Apple has doubled down on machine learning and AI. The platform is getting smarter even though our exchanges with Siri do not seem to. What people do not seem to realize is that the brain that powers those iOS improvements is shared with Siri, but its reach goes well beyond her.

In a world where the digital assistant is not only personified but is also the personification of intelligence, is Apple running the risk of being perceived as behind across the board? Siri has grown to mean more than a voice assistant. Siri is an “intelligent power” that impacts many aspects of our platform and ecosystem interactions.

Plenty of iOS users continue to be happy with their choice of phone or tablet but just decide not to engage with Siri. As Apple starts to talk about how Siri is behind some of the tasks we perform every day – like picking our favorite chill music – is the perception we have of her impacting our appreciation of other services and experiences?

It seems to me that, for the industry overall, advances in natural speech will take much longer to materialize than other AI-driven improvements around context, search, cameras, home automation, and more. Trying to separate voice and brains might be a smart step to take, so that brands make sure consumers look for intelligent solutions beyond the voice.

Is it Too Late for Data Privacy?

The numbers are staggering. Last year’s Equifax breach, along with more recent additions, has resulted in nearly 150 million Americans—more than half of all those 18 and older—having essential identity data exposed, such as Social Security numbers, addresses, and more. And that’s just in the past year. In 2016, 2.2 billion data records of various types were poached via Internet of Things (IoT) devices—such as smart home products. Just yesterday, a judge ruled that a class action case against Yahoo (now part of Verizon) regarding the data breach of all 3 billion (yes, with a “B”) of its Yahoo mail accounts could proceed. Is it any wonder that, according to a survey by the National Cybersecurity Alliance, 68% of Americans don’t trust brands to handle their personal information appropriately?

The situation has become so bad, in fact, that some are now questioning whether the concept of personal privacy has essentially disappeared into the digital ether. Talk to many young people (Gen Z, Millennials, etc.) and they seem to have already accepted that virtually everything about their lives is going to be public. Of course, many of them don’t exactly help their situation, as they readily share staggering amounts of intimate details about their lives on social media and other types of applications, but that’s a topic for another day.

Even people who try to be cautious about their online presence are starting to realize that there’s a staggering amount of information available about virtually every one of us, if you bother to look. Home address histories, phone numbers, employment histories, group affiliations, personal photos, pet’s names, web browsing history, bank account numbers, and yes, Social Security numbers are all within relatively easy (and often free) reach for an enormous percentage of the US population.

Remember all those privacy tips about shredding your mail or other paper documents to avoid getting your identity stolen? They all seem kind of quaint (and, unfortunately, essentially useless) now, because our digital footprints extend so much farther and deeper than any paper trail could possibly go that I doubt anyone would even bother trying to leverage paper records anymore.

While it may not be popular to say so, part of the problem has to do with the enormous amounts of time that people spend on social media (and social media platforms themselves). In fact, according to a survey of cyberstalkers reported by the Identity Theft Resource Center, 82% of them use social media to gather the critical personal information they need to perform their identity thefts against potential victims.

My perspective on the extent of the problem with social media really hit home a few weeks ago as I was watching, of all things, a travel program on TV. Like many of these shows, the host was discussing interesting places to visit in various cities—in this case, one of them was a museum in Nuremberg, Germany dedicated to the Stasi, the infamous (and now defunct) secret police of former East Germany. A guide from the museum was describing the tactics this nefarious group would use to collect information on its citizens: asking friends and family to share the activities of one another, interceding between people writing to each other, secretly reading letters and other correspondence before they got passed along, and so on.

The analogies to modern social media, as well as website and email tracking, to generate “personalized” ads, were staggering. Of course, the difference is that now we’re all doing this willingly. Plus, today it’s in easily savable, searchable, and archivable digital form, instead of all the paper forms they used to organize into physical folders on everyone. Frankly, the information that many of our modern digital services are creating is something that these secret police-type organizations could have only dreamt about—it’s an Orwellian tragedy of epic proportions.

So, what can we do about it? Well, for one, we all need to pull our collective heads out of the sand and acknowledge that it’s a severe problem. But beyond that, it’s clear that something needs to be done from a legislative or regulatory perspective. I’m certainly not a fan of governmental intervention, but for an issue as pervasive and unlikely to change as this one, there seems little choice. (Remember that companies like Facebook, Google and others are making hundreds of billions of dollars every year leveraging some of this data for advertising and other services, giving them absolutely zero incentive to adjust on their own.)

One interesting idea to start with is the concept of data labelling, a la the food labelling standards now in place. With data labelling, any online service, website, application, or other data user would be required to explain exactly what information they were collecting, what it was used for, who it was sold to, etc., all in plain, simple language in a very obvious location. Of course, there should also be options to disallow the information from being shared. In addition, an interesting twist might be the potential to leverage blockchain technology to let each person control and track where their information went and potentially even benefit financially from its sale.
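To make the idea a bit more concrete, here is a purely illustrative sketch of what a machine-readable “data label” might contain. The field names, values, and structure are hypothetical; no such standard exists today, and this is not tied to any real service.

```python
# Hypothetical "data label" a service could be required to publish alongside
# its sign-up page. All fields and values below are illustrative only.
data_label = {
    "service": "ExampleSocialApp",
    "data_collected": ["email address", "location history", "contact list"],
    "used_for": ["ad targeting", "product analytics"],
    "shared_with_or_sold_to": ["advertising partners", "analytics vendors"],
    "retention_period_days": 730,
    "user_controls": {
        "opt_out_of_sharing": True,
        "request_deletion": True,
    },
}

# A regulator, browser, or app store could render this the way nutrition
# facts are rendered on food packaging.
for field, value in data_label.items():
    print(f"{field}: {value}")
```

The point is less about the format itself than about the requirement: one obvious, standardized place where collection, use, sharing, and opt-out options are spelled out in plain language.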

The problem extends beyond the more obvious types of information to location data as well. In fact, even if all the content of your online activity were blocked, a tremendous amount of information can still be gathered just by tracking your location on a regular, ongoing basis, as the January story about the tracking of US military personnel through the Strava fitness app on their wearables so glaringly illustrated. Even outside military situations, the level of location tracking that can be done through a combination of smartphones, GPS, connected cars, ride sharing applications, WiFi networks, Bluetooth, and more is staggering, and there’s currently no legislation in place to prevent that data from being used without your permission.

All of us can and should be smarter about how we spend our time online, and there are organizations like Staysafeonline.org that offer lots of practical tips on things you can do. However, the issues go way beyond simple tricks to help protect your digital identity. It’s time for Congress and other representatives to take a serious look at things they can do to protect our privacy and identity from the digital world in which we live. Even legislative efforts won’t solve all the data privacy issues we face, but the topic is just too important to ignore.

Apple’s Programming and Content Challenge

One of the most important growth businesses for Apple has been their services division. It brings in about $7.5 billion a quarter now, and it could be a Fortune 100 business if it were ever spun off to be a business on its own.

As I have been thinking about Apple’s services business over the last few weeks, two key conversations I had with Sony Co-Founder Akio Morita and Steve Jobs many years ago came to mind.

Not long after Sony purchased a movie studio, I had the privilege of interviewing Mr. Morita on one of my trips to Japan. Sony was known primarily as a hardware company that made TVs, portable music players, and stereo equipment at that time. I was curious as to why Mr. Morita bought a movie company, and he told me that he saw movies as just “digital bits,” and to him, they represented important content that could be shown or used on his devices. Keep in mind this was over a decade before the idea of content tied to devices was really in focus, and it showed the incredible foresight Mr. Morita had as Sony’s CEO.

It’s sad that, once Mr. Morita retired, Sony’s leadership never showed the forward thinking he brought to the role of CEO, and Sony lost its portable music lead to Apple and the iPod. They also missed out when it came to laptops, smartphones, and tablets. They are being challenged again by competition in smart TVs in a big way, and even their game console is coming under greater pressure as more and more gamers move to PC gaming and start to leave their console systems behind. Sony’s constant restructuring and cost-cutting, and leadership that does not plan for the long range, will continue to challenge their market positions.

Steve Jobs was a real fan of Mr. Morita, and he had a similar view of content being digital, especially music content. On numerous occasions when I spoke with Jobs about his focus for Apple’s future, he made it very clear that Apple is first and foremost a software company and that the hardware they create is there to be the vehicle for their software and content to be deployed. It is essential to look at Apple holistically, since their software drives hardware designs and becomes the way they also deliver content and services.

However, services have become even more critical to their overall business since it is not only a major revenue source, but it is one of the ways they are future proofing their business for the long run. Indeed, Apple’s goal is to use software, hardware, and services to tie people to their overall ecosystem and continue to give them solid reasons to either stay with Apple products or entice users of alternative operating systems to switch to Apple products.

Given that Jobs understood the role content plays in tying software to devices as part of Apple’s ecosystem, it has been surprising how far behind competitors Apple is when it comes to how much they are investing in content beyond their current music offerings.

The chart below shows Apple investing about $1.0 billion on non-sports video programming in 2017 compared to Netflix who spent $6.3 billion and Amazon who spent $4.5 billion. And Netflix is said to be planning about 700 original series in 2018 and could spend up to $8 billion this year on programming alone.

Given Steve Jobs’ strong position on content, and Apple knowing they need more of it to keep people in their ecosystem, this current spending on content and programming seems pretty unaggressive. That said, if you look at what they spend in contrast to competitors, and the fact that they need to be more aggressive in obtaining the kind of programming that will keep people coming to or staying in their ecosystem, it leads one to think that perhaps Apple has their eye on some bigger prize in the content space.

Apple could create more original content and also go after some existing shows to add to their video programming. However, it might make sense for Apple to take a page from Sony’s playbook and buy a major movie studio, or at the very least, perhaps acquire some dedicated production companies that already have proven content and the ability to create more shows quickly to help add to Apple’s overall programming for their customers.

However, with Amazon and Netflix also bidding for more content and pushing production companies to create new shows for their services, the competition for Apple to get great programming for themselves will be fierce. That is why buying a major studio with an existing library, and the means to create more original movies and TV shows might be the best way for Apple to gain more control of their content future.

Podcast: Qualcomm-Broadcom, Gaming Industry Meets Trump, Machine Learning Software

This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell analyzing the latest developments in the proposed Qualcomm-Broadcom merger, discussing this week’s meeting between gaming industry executives and President Trump, and chatting about the latest machine learning software developments from Microsoft, Intel, Arm, Qualcomm and others.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

It’s the U.S. vs. China in the 5G Olympics

Russia might be winning the cyberwars, but it’s China that is emerging to challenge the United States for global 5G dominance. This issue has crystallized in the days before and after the 5G-themed Mobile World Congress. Huawei continues to be blocked from competing in the U.S. wireless infrastructure market, and the major U.S. operators were pressured not to sell its phones. Earlier this week, the Committee on Foreign Investment in the United States (CFIUS) stepped in to review Broadcom’s purchase of Qualcomm, over concerns about Broadcom’s relationships with foreign entities and the possibility that it would sell off pieces of Qualcomm to…China.

Much of this revolves around concerns about threats to national security, and it looks like 5G is going to be an important battleground. While Europe led the 3G revolution and the U.S. led 4G LTE development and deployment, China is emerging as a major force in the nascent 5G market. First, Huawei gained significant global share during the 4G era, mainly due to aggressive pricing that made it difficult for companies such as Ericsson and Nokia to compete in many markets outside the U.S. Now, Huawei is seen as an innovator, and it offers 5G kit that is competitive with, and in some respects exceeds, that of its global competitors. It is also doing leading-edge work in nearly every other telco/Internet infrastructure segment you can think of, from IoT to NFV and cloud.

Second, the Chinese government is playing an active role, investing in infrastructure, and promoting the 3.5 GHz spectrum as a global 5G band. In fact, the pressure being exerted on the FCC to allocate more mid-band spectrum is largely the result of what’s happening in China. And while we dither over issues such as small cell siting and can’t find a way to invest in infrastructure projects, the Chinese are running laps around us with initiatives such as ‘One Belt One Road.’ You can bet that all those road and rail projects will pave the way (or lay the track) for lots of telecom infrastructure deals.

Third, the sheer size of China’s market and workforce has become an incontrovertible force. It is the world’s largest wireless market, by far. And the country’s growing wealth is allowing Chinese students to study at leading U.S. universities and take that knowledge back home with them.

There are huge complexities here. China is a huge market for U.S. tech companies. On the other hand, companies such as Facebook and Google are largely blocked from doing business there, which has allowed home-grown firms such as Alibaba to achieve outsized market share in China.

Why the focus on 5G in the U.S.-China economic war and the evolving chilly-if-not-cold war/cyberwar? Well, it’s going to be a multi-trillion-dollar market over the next 15 years: not just 5G infrastructure but all the devices and billions of connected things that form the business case for 5G. And there are the adjacent markets, such as connected/driverless cars, that are enabled by 5G and are yet another important U.S. vs. China battleground.

So, what should we do about this? It might take something akin to a national industrial policy, which is anathema to those who promote free market forces. But throw national security concerns into the mix, and at least we might get their attention.

First, a review of the Broadcom-Qualcomm deal is warranted. I’m not saying kill it, but let’s make sure there are conditions that address not only Qualcomm’s interests but U.S. national interests. Despite its occasionally icky practices, Qualcomm is a very important company to U.S. interests from a patent and innovation standpoint. I was concerned when I saw activist investors complaining that Qualcomm spends an outsized 25% of its revenues on R&D. Particularly as the U.S. government seems to be relinquishing its support of science and technology, we need the Qualcomms and the Googles of the world to invest in the frontiers of tech such as 5G and AI.

Second, if past behavior predicts future results, we need to step back and think about the national security issues related to 5G, and not in the ad-hoc way we’ve been dealing with them. We should define the safeguards that need to be put in place given the risks in infrastructure, in chipsets, and in all those billions of connected devices. It is good practice to define the rules, and the steps and precautions that must be taken, for foreign companies and governments that want to do business here.

Finally, we need to think seriously about the education/talent aspect here. I hear almost daily from tech execs about the lack of suitable talent to fill jobs in emerging areas such as AI. In 5G, there is enormous turnover, and a different skill set needed, for the jobs that will be involved in building next-generation networks. There is a deficiency of Higher-Ed programs in these areas. Greater public-private cooperation is warranted. Wealthy foreign nationals are coming here and getting their pick of programs at universities where they’re paying full freight, while the average U.S. college student has to spend zillions and get saddled with debt to get the same education that is nearly free (or even sponsored) elsewhere.

In June 2017, China very publicly announced its plans to become the world leader in AI by 2025. China’s ambitions about 5G are similar, if less overt. It will take a coalition of forces – private, public, and institutional – to counter that.

Windows ML Standardizes Machine Learning and AI for Developers

During its Windows Developer Day this week, Microsoft took the covers off of its plans to help accelerate and dominate in the world of machine learning. Windows ML is a new API that Microsoft will be including in the RS4 release of Windows 10 this year, enabling a new class of developer to access the power and capability of machine learning for their software.

Microsoft already uses machine learning and AI in Windows 10 and on its Azure cloud infrastructure. This ranges from analyzing live camera feeds to AI for game engines and even indexing for search functionality on your local machine. Cortana is the most explicit and public example of what Microsoft has built to date, with the Photos app’s facial recognition and image classification being a close second.

Windows ML allows software developers to utilize pre-trained machine learning models to power new experiences and classes of apps. The API allows for simple integration with existing Microsoft development tools like Visual Studio. Windows ML supports direct importing of ONNX (Open Neural Network Exchange) formatted files that represent deep learning compute models, allowing for easy transferal and sharing between application environments. This format was introduced by Microsoft and Facebook back in September of last year. Frameworks like Caffe2, PyTorch, and Microsoft’s Cognitive Toolkit support ONNX export, so models trained in them can be used for inference through any system that integrates ONNX.
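To make the ONNX hand-off concrete, here is a minimal sketch, assuming PyTorch is installed, of how a model trained in a Python framework can be exported to an .onnx file that an ONNX-aware runtime such as Windows ML could then import for inference. The tiny model and file name are illustrative stand-ins, not anything Microsoft or Facebook ships.

```python
# Minimal, illustrative export of a trained PyTorch model to the ONNX format.
import torch
import torch.nn as nn

# A tiny classifier standing in for a real, fully trained model.
model = nn.Sequential(
    nn.Linear(784, 128),  # e.g., a flattened 28x28 image as input
    nn.ReLU(),
    nn.Linear(128, 10),   # 10 class scores as output
)
model.eval()  # switch to inference mode before exporting

# Export traces the model with a representative input tensor and writes a
# portable .onnx file that ONNX-compatible runtimes can load for inference.
dummy_input = torch.randn(1, 784)
torch.onnx.export(model, dummy_input, "classifier.onnx")
```

Once exported, the same file can, in principle, be loaded by any ONNX-compatible inference engine, which is exactly the portability Windows ML is counting on.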

To be clear, Windows ML isn’t intended to replace the training activity that you would run on larger, high-performance server clusters. Microsoft still touts its Azure Cloud infrastructure for that, but it does see benefits to pairing that with the Windows ML enabled software ecosystem on edge devices. Software that wants to support updating training models with end-user input can do so with significantly less bandwidth required, as only the much smaller, pre-defined Windows ML result would need to be returned.

With Windows ML, an entire new class of developer will be able to utilize machine learning and AI systems to improve the consumer experience. We will see spikes in AI-driven applications for image recognition, automated text generation, gaming, motion tracking, and so much more. There is a huge potential to be fulfilled by simply getting the power of machine learning into the hands of as many software developers as possible, and no one can offer that better than Microsoft.

Maybe the most exciting part about Windows ML to me is the support for hardware acceleration. The API will be able to run on CPUs, GPUs, and even newer AI-specific add-in hardware like the upcoming Intel Movidius chip. Using DirectX 12 hardware acceleration and the DX12 compute capabilities that were expanded with Windows 10, Microsoft will allow developers to write applications without worrying about code changes for the underlying hardware in a system to ensure compatibility. While performance will obviously scale from processor to processor, as will the user experiences based on that, Windows ML aims to create the same kind of API-layer advantage for machine learning that DirectX has provided for gaming and graphics.

Microsoft will support not only discrete graphics solutions but also integrated graphics from Intel (and, I assume, AMD). Windows ML will be one of the first major users of Intel’s AVX-512 capabilities (vector extensions added to consumer hardware with Skylake-X) and the Movidius dedicated AI processor. Qualcomm will also support the new API on its upcoming Always Connected PCs using the Snapdragon 835 platform, possibly opening up the first use case for the company’s dedicated on-chip AI Engine.

This new API will be supported with both Windows UWP apps (Windows Store) and Win32 apps (classic desktop apps).

We are still in the early phases of development when it comes to the true AI-driven future of computing. Microsoft has been a player in the consumer market with the Cortana integration in Windows, but it has seen limited success compared to the popularity of Google, Amazon, and even Apple systems. By enabling every Windows application developer to take advantage of machine learning with Windows ML, Microsoft will see significant movement in the space, much of it likely using its Azure cloud systems for training and management. And for consumers, the age of artificial intelligence ubiquity looks closer than ever.

What the Commercials aired during the Oscars say about Tech Brands

As I do every year when the Super Bowl is on, I watched the Oscars with an eye out for any exciting new ads from tech brands. This year was the 90th edition of the Academy Awards, and it took place in the midst of the #MeToo and Time’s Up movements. Some spoke out before the event about being tired of all the politics and controversy, calling for the ceremony to be just about the movies. Needless to say, some brands might have preferred to take a pass when it came to running an ad during the broadcast of the show on ABC.

Preliminary Nielsen numbers show that the TV audience dropped 16% compared to 2017. This would suggest an overall viewership below 32 million, which was the previous low point, recorded in 2008. Nevertheless, the Oscars are expected to be the most watched non-sporting event on American TV.

As the ads rolled, I thought it was very interesting to see how clearly they matched the most significant focus of the brands they represented, mostly transcending any single product to highlight underlying enablers.

Samsung

This was not the first Academy Awards presence for Samsung. Aside from being the main sponsor they also ran a new ad featuring the Galaxy S9 as part of the “Do what you can’t” campaign. In the product placements during the red carpet, Samsung chose to highlight the slow-motion video feature of the S9 camera.

The commercial is full of celebrities and influencers and shows the clear target audience Samsung is trying to attract with its new smartphone: Gen Z. Without being political, the “Make It Yours” commercial highlights the work of women who are firsts in their fields. Dee Rees, who directed and wrote the adapted screenplay for “Mudbound,” is the first black woman to be nominated for best adapted screenplay, and she directed the commercial. Rachel Morrison is the first woman ever to receive a nomination in cinematography, and she was the cinematographer for the campaign. Both women are also openly gay, which is quite a forward-looking pick for Samsung, a conservative brand thus far.

This is certainly a departure for Samsung from the traditional tech-focused or competition focused ads, and I have to say I like it a lot. The feeling you get is quite similar to the recent “what’s a computer” iPad ad, but it touches on more personal issues like having regrets, making mistakes and being passed over while also focusing on overcoming obstacles.

I always thought Samsung as a brand lacked a clear identity, and I hope this ad is a first step in finding a different voice for a company that plays a significant role in the lives of millions of people across the world. Especially for millennials and older Gen Z, knowing what a brand stands for, its values, and its social responsibility is important and can make or break a brand.

Google

Google aired a star-studded ad around its digital assistant capability, with the words “Make Google Do It” appearing any time the person in the commercial was trying to do something, from ordering some “dope tape” to turning the lights on in the dark, remembering the alarm code, or making an action list.

The commercial is cute, but I thought the choice of words was very telling. First, the assistant is not mentioned until the very end when on a white screen you see: “Get the Google Assistant and make Google do it.”

It is all about Google and the relationship you, as a user, have with Google. The ad could have said “Let Google do it,” but that would have implied some form of permission you as a user grant Google. Saying “make” implies a position of control for the user over Google. You are not letting Google do something that you could do; you make it do something, as you would with a subordinate. I think that is very clever. It aims to shift the perception, which many have, that you are working for Google because Google wants to know more and more about consumers in order to better monetize them.

Interestingly, the commercial also cements the different approach Google is taking to the digital assistant by not personifying it. The assistant is a means to get to Google, a clear separation of voice and brains.

Microsoft

Microsoft is a Super Bowl sponsor through its Surface and Xbox businesses, and the company has run ads during the TV broadcast of the game. I could not find any evidence, however, that Microsoft had run an ad during the Oscars before this year.

Featuring Common, the ad is an ode to technology and artificial intelligence and what they empower. Empowerment is a common thread in Microsoft CEO Satya Nadella’s presentations. He firmly believes that technology should enable humanity to do more, be better, and fulfill its potential.

AI and mixed reality are two key areas for Microsoft in business. After missing mobile, it was clear they did not want to miss out on any technology that will empower the next generation, and they moved in early with HoloLens. Their business-first approach, however, is limiting their exposure to consumers. This is particularly true for AI which, rightly or wrongly, is often equated with digital assistants. Here, Microsoft’s Cortana is trailing in adoption compared to Alexa, Google Assistant, and Siri. The ad raises awareness among a wide range of people that Microsoft is not only about Windows and PCs. The commercial alone, though, will not help consumers think there is a role for Microsoft AI in their lives any more than they think there is one for IBM Watson. That is totally fine, of course, if Microsoft is not interested in the consumer market. But AI will touch every aspect of their portfolio, including Windows, which might be perceived as lagging other platforms if consumers simply do not know how it is made better by AI.

Twitter

This was Twitter’s first-ever TV commercial. The ad featured a poem written and performed by Denice Frohman, a New York City-born poet, over black and white still pictures of prominent media and marketing executives as well as filmmakers Ava DuVernay and Julie Dash, documentarian Jennifer Brea, and “Insecure” director and actress Issa Rae. The commercial ended with the hashtag #HereWeAre, which first appeared in December when Twitter chief marketing officer Leslie Berland announced that a group of female leaders would appear during Twitter’s event at the CES technology show.

Twitter has been under fire for a long time over calls to do more to monitor and police users who engage in hate speech and sexual harassment. So it is no surprise that the response to the commercial was mixed. Some praised the poem for being powerful and appreciated the effort. Some gave Twitter the benefit of the doubt but said they now want to see the company put its money where its mouth is and do more on the platform. Others outright criticized the choice of investment, pointing out that the money spent on the commercial would have been better spent on improving the platform itself, either by hiring more engineers or by considering new AI-driven tech to help with the monitoring.

I was in the in-between group. I hope that the effort is more than a beautiful ad and I am sure Jack is very well aware that after that commercial the stakes are now even higher. There is no question in my mind that abuse can kill engagement.

 

There were more ads during the Academy Awards from T-Mobile, Walmart, GE, and Nest, but the ones I picked for this article are the ones that I thought best represented where the brand is in its business and brand identity. You can find them all here.

The Hidden Technology Behind Modern Smartphones

Sometimes it’s not just the little things, but the invisible things that matter.

This is even true in the world of technology, where the focus on the physical characteristics of devices such as smartphones, or the visual aspects of the applications and services that run on them, is so dominant.

At the Mobile World Congress (MWC) trade show last week, the importance of this observation became apparent on many different levels. From technologies being introduced to create the invisible wireless networks that our smartphones are so dependent on, to the semiconductor chip innovations hidden inside our smartphones, to the testing required to make all of this work, these “invisible” developments were some of the biggest news to come out of the show.

On the network side, discussions focused entirely on the equipment necessary to create next-generation 5G networks and real-world timelines for delivering them. Historically, MWC was a telecom equipment show, and its heritage shone through strongly this year. Traditional network infrastructure companies such as Ericsson, Nokia, and Huawei were joined by relative newcomers to this particular area, including Intel and Samsung, to talk about how they’re planning to deliver 5G-capable equipment to telco operators such as AT&T, T-Mobile, and Verizon later this year.

The details behind 5G network equipment technologies, such as millimeter wave, network slicing, and others, become extraordinarily complex very quickly. The bottom line, however, is that they’re going to enable 5G networks to support not only significantly faster upload and download speeds for 5G-enabled smartphones, but also much more consistent speeds. This translates into smoother experiences for applications such as high-resolution video streaming, as well as new kinds of applications that haven’t been possible before, such as self-driving cars.

To make this work, new types of 5G-capable modems are needed inside next generation smartphones, automobiles and other devices, and that’s where chip companies like Qualcomm and Intel come in.

One of the great things about the forthcoming transition to 5G is that existing 4G LTE modems and the devices they currently are used in (specifically, all our current smartphones) will be able to connect to and work with these new 5G networks.

New telecom industry standards were specifically designed to initially layer new 5G enhancements on top of existing 4G ones, thereby guaranteeing compatibility with existing devices. In fact, there are even some situations where our current smartphones will get a bit faster on 5G networks as 5G phones become available, because the new phones will essentially move their traffic onto new lanes (radio bands), reducing congestion for existing 4G devices. Eventually, we will move beyond these so-called “non-standalone” (NSA) networks to standalone (SA) 5G-only networks, but for the next several years, the two network “generations” will work together.

At last year’s MWC, Qualcomm introduced a prototype of the world’s first 5G-capable modem (the X50). With the recent finalization of the 5G radio standard (called 5G NR) last December, this year they discussed the first successful 5G tests using the X50 to connect to network equipment from providers like Ericsson and Nokia. More importantly, they announced that the X50 would be shipping later this year and that the first 5G-capable smartphones will be available to purchase in early 2019.

Intel and Huawei also joined the 5G modem fray this year. Intel discussed their own successful trials with their prototype 5G modems and said that they would provide both a 5G modem for PCs and, thanks to work with chip company Spreadtrum, a 5G modem and applications processor for smartphones by the end of 2019. Huawei’s new modem is much larger and won’t be in smartphones initially, but instead will be used for applications such as 5G-based fixed wireless broadband services for home or business internet connections.

Another “hidden” technology is the testing that’s necessary to make all these devices and networks work together. Companies like National Instruments (NI) have worked silently in the background for the last several years creating test equipment and test beds that allow chipmakers, device makers, network equipment makers and telecom carriers to assure that the new standards at the heart of 5G actually work in their simulations of real-world environments. At MWC last week, NI showed a new 5G NR radio emulator, a new millimeter wave test bed in conjunction with Samsung network equipment, and an analog RF front end for 5G modems done in conjunction with Qorvo.

As we bury ourselves in our daily smartphone usage, it’s easy to forget how much technology is working behind the scenes to ensure that they deliver all the capabilities to which we’ve become accustomed. While there’s little need for most people to think about how it all works, it’s still important to know that the kinds of “invisible” advancements that were presented at this year’s MWC offer a strong path for smartphones’ continued and growing use.

Facebook’s Declining Value

When I first discovered Facebook, I thought it was so clever and useful. The company created a new way to connect with friends and business associates, something that was less intrusive than a phone call or text and timelier than a holiday card. It enabled me to get an update from my daughter while she was on vacation and for me to make contact with old friends, business associates and schoolmates I had lost touch with.

I loved the product and, as an early adopter and tech columnist, I raved to others about how useful it was. But over the past year, it’s become much worse, at least when judged by the way I was using it. Instead of seeing a stream of messages from my friends around the world, I see all sorts of other ads and posts that have no value to me and get in the way of Facebook’s original value. The ads are understandable, considering the company needs a way to pay its way. But I also see stories that hold very little interest for me. I subscribe to the news organizations I choose, and I don’t like seeing news from ghost sources with no news-gathering organization behind them.

But putting aside these intrusions, my original feed of posts from friends is gone. Posts no longer appear in chronological order; I see the same post repeated a day or two later for no apparent reason, and I miss many posts that I should be seeing. My hunch is I’m seeing an older post because I liked it and someone later commented on it, but that’s just a guess, and really, I don’t care about a comment from a stranger.

In my years working in product development, I’ve rarely seen a company turn a great product into a mediocre one. Yet the quality of the Facebook product, by all measures, is so much worse than it once was. Yes, I know that Facebook spends lots of time adding features to extend the time we spend on the site. But spending more time is a lot different than spending less time to do what you want and then leave. In fact, the amount of time Facebook’s changes get us to spend on the site is not necessarily a measure of how much we’re enjoying it.

When I get into a rental car and need to spend 10 minutes to figure out how the controls work instead of three minutes, that’s not a better experience.

But there’s another reason that Facebook has lost its appeal: I’ve lost all confidence in the company to address the issues with the Russian troll farms. It seems each day Facebook has another crisis and is clearly in cover-up mode. They use biometric data without permission, they deceive us about how badly Instagram was compromised, and they just continue to cover up. So not only have I lost confidence in the product but also in the company.

If they had been more forthcoming and given me some assurance that they recognized these issues, I might have stuck around. And one more thing is so puzzling: where is Sheryl Sandberg, their once articulate and outspoken COO? She was so impressive when I heard her on her book tour, yet she seems completely missing in action now.

It’s a shame, because if they weren’t so obtuse and stubborn, I might stick around. But I just can’t. I’ve just left Facebook, as has the rest of my family. All of us are just tired of the cesspool we need to endure to see the posts from our friends, associates, and each other.

Podcast: MWC 2018, Samsung Galaxy S9, Qualcomm, Intel, Huawei 5G Modems

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing important developments from the Mobile World Congress trade show, analyzing the impact of Samsung’s new Galaxy S9 smartphones, and describing the 5G modem introductions from Qualcomm, Intel and Huawei and what they mean for the rollout of 5G.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

The Evolution of the Wearables Market

The Apple Watch had a very good 2017, with shipment volumes growing 56% year over year, catapulting the product to the top of the wearables market in terms of both shipment volumes and revenues according to IDC data. This has led many to suggest that there’s no real wearables market, just an Apple Watch market. But that’s far from the truth, as recent year-end data proves. While Apple clearly leads the market, there are plenty of other interesting developments occurring in this still-growing market.

Smart vs. Basic
Apple’s strong year helped accelerate one evolutionary change that’s occurring in the wearables market: a market-share shift toward smart wearables. A smart wearable is one that can run third-party apps; a basic wearable runs only integrated apps. Most of the original fitness trackers shipping into the market were basic wearables, whereas the Apple Watch, with its third-party-app-supporting watchOS, entered the market as a smart wearable back in 2015.

IDC began tracking the wearables market in 2013, and during that year basic wearables constituted 83.5% of the market, with smart wearables making up the remaining 16.5%. By 2017 the smart wearable segment had grown to encompass almost 30% of the market. Market pioneer Fitbit shipped exclusively basic wearables until 2016, when it launched its first smart wearables in the form of smartwatches. Later that year Fitbit bought assets from smartwatch pioneer Pebble to help accelerate its evolution. In 2017, 4.4% of the company’s total shipments fell into the smart category. While Fitbit’s shift toward offering smart wearables hasn’t been without its challenges, the company is reacting to two key market forces: consumer demand for smartwatches and the rapid decline in the average selling price of basic wearables.

Back in 2013, the average basic wearable sold for about $119, but as more vendors have entered the space (including a flood of China-based vendors), the average selling price has declined to $88. China’s Xiaomi, which overtook Fitbit to grab the number one spot in the basic wearable category, had an average selling price of $16 in 2017. During that same time span, the ASP for smart wearables has gone the opposite direction, increasing from $218 in 2013 to $375 in 2017.

Interestingly, while consumer demand for smart wearables has grown, the developer appetite for creating the third-party apps that define the category has seemingly declined. When Apple launched the first Apple Watch, many developers rushed to put out mini-versions of their iOS phone apps. But the hardware and software limitations of that first device led to poor performance for those apps. Today’s Apple Watch hardware and watchOS 4.0 offer a much-improved platform for apps, but many developers have slowed or stopped development of Apple Watch apps. It’s not clear yet whether Apple can reverse this trend, or if it’s even a priority for the company. Over at Fitbit, the company continues to work to integrate features of the Pebble smartwatch ecosystem into its own smartwatch platform.

Shifting Form Factors
In addition to tracking smart versus basic wearables, IDC also captures a wide range of form factors. It’s the growth of new types of wearables that has kept the basic wearable category from ceding more share to smart wearables than it has. To date, the only wearables to qualify for the smart category have been smartwatches and smart wristbands. The broader basic wearable category includes not just basic watches and wristbands but also clothing, earwear, and modular products.

Modular wearables are products that can be worn on different parts of the body depending on the accessory used. Fitbit’s third shipping product was the Fitbit One, a modular product that you inserted into a clip and wore on your belt. The Misfit Shine could be worn on a strap on the wrist or ankle, or around the neck as a pendant. Back in 2013, the modular segment of the market constituted about 37% of total basic shipments; by 2017 it represented just 1.6% of total basic shipments. Basic watches and basic wristbands have seen their share of the market decline too, although not as dramatically.

The two categories of basic wearables that have seen dramatic growth are clothing and earwear. Clothing with wearable technology typically focuses on fitness or health features; earwear covers products that offer wearable functionality beyond standard Bluetooth connectivity. So, for example, today IDC counts the Bose SoundSport Pulse in the wearable category because it includes heart-rate tracking features. To date, we’ve excluded the Apple AirPods from the category, but future iterations with additional functionality could change that.

In 2014, clothing represented just 0.1% of basic wearable shipments, and earwear was 0.3%. At the close of 2017, clothing represented 2.8% of total shipments (2.3M units) for a year-over-year growth rate of 79% and an average selling price of $62. Earwear increased 129% to reach 2.1% of total shipments (1.7M units) with an ASP of $198. It’s early days in these categories, and looking ahead IDC is forecasting dramatic growth for both.

A Growing Market, Not Just for Apple
The wearables market may no longer be considered the next big thing by many market watchers, but growth here continues. Between 2015 and 2016 the entire market grew by 27.3%; in 2017 that growth slowed to 7.7%, reaching 115.4M units. Wearables face new competition for share of the consumer wallet from emerging categories such as smart home and virtual reality. Some consumers have entered and already exited the market; many others are still figuring out how these products best fit into their lives.
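
For readers who like to sanity-check those numbers, here is a minimal sketch, in plain Python, that backs out the implied 2015 and 2016 market sizes from the 2017 total and the growth rates cited above. The derived prior-year volumes are rough approximations for illustration only, not IDC’s published figures.

    # Back out prior-year wearable shipment volumes from the 2017 total
    # and the year-over-year growth rates cited above (approximate figures).
    units_2017_m = 115.4   # million units shipped in 2017
    growth_2017 = 0.077    # 7.7% growth from 2016 to 2017
    growth_2016 = 0.273    # 27.3% growth from 2015 to 2016

    units_2016_m = units_2017_m / (1 + growth_2017)   # roughly 107M units
    units_2015_m = units_2016_m / (1 + growth_2016)   # roughly 84M units

    print(f"Implied 2016 shipments: {units_2016_m:.1f}M units")
    print(f"Implied 2015 shipments: {units_2015_m:.1f}M units")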

New technologies and capabilities will bring wearables back into the spotlight over the next few years, and I also expect them to play an increasingly important role on the commercial side of the market over time. And as I’ve noted before, I’m also convinced that wearables will play an important role in the evolution of augmented reality technologies. So, while Apple may well own a significant chunk of the wearables market for years to come, there are still plenty of opportunities for other vendors in this space, and it is much more than just an Apple Watch market.