Police released video footage of the incident in Tempe, Arizona, where an Uber self-driving car hit and killed a pedestrian. The video is not conclusive but shows Elaine Herzberg coming into view a number of seconds before the crash, as well as the operator behind the wheel looking down toward the side of the steering wheel for several seconds while the car was driving itself. Uber's autonomous cars remain grounded while the investigation continues.
This might be murky territory to wade into this week, considering all the news around Facebook. But consider this: some 200 million people worldwide own a connected wearable device, such as a Fitbit, Apple Watch or a Garmin. Hundreds of millions use apps such as MapMyRun, Runkeeper, or Apple Health to track steps taken, miles walked or run, hours slept, and calories both consumed and expended. But some ten years into this wearable device/fitness app market, remarkably little is shared from the treasure trove of information that the leading companies possess. By comparison, we see these aggregated data reports in so many other corners in tech and media. Akamai has its ‘State of the Internet’ report. App Annie releases all sorts of reports on app data and usage.
I think this is a missed opportunity for the wearables industry, in two respects. First, as the devices improve and the data becomes more accurate, it’s likely there are findings or trends that could prove valuable from a health outcomes perspective. Second, the leading companies could use some of the data they collect, responsibly and at an aggregate level, in fun and interesting ways, and to differentiate their offerings.
Let’s start by giving credit to the fitness/wearables industry. The leading companies have, so far, acted responsibly with respect to their customers’ data. You haven’t received an email from your health insurance provider raising your premiums because your Fitbit step count went down by half last year. Nor is Asics trying to sell you high-end sneakers based on the 7-minute miles tracked by its Runkeeper app. So, before discussing what these companies might do with this data, the ground rule should be that individual fitness and health data is never shared or sold without the user’s express permission and 100% transparency regarding what’s being made available. A good example of how this is done right is Strava’s segments and leaderboards, where subscribers opt in to compare their performance on a particular route or ride.
With that out of the way, I’d love to see companies such as Fitbit, Under Armour, and Apple release some aggregate data from all the activity they track. Some of it can be fun and trivial: what was the highest number of steps taken by a single person in one day last year? Are there cities or countries where people walk or exercise more? How might that differ seasonally? Does walking more have any impact on heart rate measurements?
I also think these companies could use the data to increase engagement with their customers. There are primitive examples already, such as ‘run 10 miles this week and get 50% off a T-shirt”. But how about some more interesting contests, such as a Boston vs. New York ‘step challenge’, where, adjusting for population, which city has the highest number of average steps over a certain period? There are all sorts of opportunities to create competitions between not only users but across cities, companies, and so on. This could be a fun incentive to get people outside and active.
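The population adjustment in a challenge like that is straightforward arithmetic. Here is a minimal sketch of how per-capita scoring might work; the city totals and participant counts are made-up, illustrative numbers, not real Fitbit data:

```python
# Score a city-vs-city step challenge on a per-capita basis,
# so larger cities do not win simply by having more participants.
# All figures below are hypothetical.

cities = {
    # city: (total steps logged over the challenge period, number of participants)
    "Boston": (1_200_000_000, 150_000),
    "New York": (5_500_000_000, 800_000),
}

def per_capita_steps(totals):
    """Return average steps per participant for each city."""
    return {city: steps / people for city, (steps, people) in totals.items()}

scores = per_capita_steps(cities)
winner = max(scores, key=scores.get)
print(scores)   # {'Boston': 8000.0, 'New York': 6875.0}
print(winner)   # Boston
```

Dividing by the number of participants rather than comparing raw totals is what keeps a Boston vs. New York contest from being a foregone conclusion for the larger city.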
For the fitness set, there are myriad geeky possibilities. Take running, as an example. What’s the most frequent time of day people run? Across the zillions of miles tracked every day, what’s the average distance for a run, or time? How many people run more than twice a week? That sorta stuff.
And at an individual level, I’d love to know more. Right now, the only comparison I can set up on my Fitbit is steps compared to other friends with Fitbits I’ve selected. But how do my steps/sleep/other activity compare across other cohorts, such as age, gender, location, season, weather, length of device ownership, and so on?
Then there’s the health side of the equation, which could become more interesting over time as there are more devices that are tracking sleep, ongoing heart rate, etc. How do sleep patterns vary by age? What % of those over the age of 50 get up more than once a night? How do some of the fitness/exercise patterns tracked by these devices/apps correlate to what we know about national health outcomes?
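Questions like these only require aggregate statistics, never individual records. A minimal sketch of how a vendor might compute such cohort averages responsibly, suppressing any cohort too small to stay anonymous (the records and the minimum-cohort threshold are illustrative assumptions, not real data):

```python
from collections import defaultdict

# Each record is (age_band, nightly_sleep_hours); values are illustrative.
records = [
    ("18-29", 7.2), ("18-29", 6.8), ("30-49", 6.5),
    ("30-49", 7.0), ("50+", 6.1), ("50+", 5.9),
]

def aggregate_sleep(records, min_cohort_size=2):
    """Average sleep hours per age band, dropping cohorts smaller
    than min_cohort_size so no small group can be re-identified."""
    buckets = defaultdict(list)
    for band, hours in records:
        buckets[band].append(hours)
    return {
        band: round(sum(vals) / len(vals), 2)
        for band, vals in buckets.items()
        if len(vals) >= min_cohort_size
    }

print(aggregate_sleep(records))
# {'18-29': 7.0, '30-49': 6.75, '50+': 6.0}
```

In practice the suppression threshold would be far higher than two, but the principle is the same: only statistics over sufficiently large cohorts ever leave the vendor's systems.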
The enterprise piece is also interesting. Companies have been buying Fitbits and other like devices for their employees, in rather large numbers, for years. They do fun things like run step count contests, and so on. But it would be interesting to hear, even at an anecdotal level, the impact of these devices on employee health. Do companies that run wearable ‘programs’ see any benefits in terms of employee health and wellness? Does this translate into cost savings? The wearable firms might already share some of this data in their ‘pitch’ to corporate accounts, but little is known beyond that closed circle.
There’s a sense that the wearable device and fitness app category is stagnating. Fitbit had a crummy fourth quarter. Perhaps this data opportunity can give the industry a jolt. And over time, as these devices and apps track more categories of activity, and with higher levels of accuracy, the data will similarly evolve from being merely fun and interesting to compelling and useful.
Like many of my tech friends, I am over 55 but started in the world of technology in my early 20s, so technology is second nature to me. Those of us who grew up with tech often forget that the large majority of people in the US and around the world (especially in the over-50 age bracket) have not been as fortunate as we have been. In most cases, they have only embraced a technology if it makes their lives easier or provides new forms of services, such as mobile telephony, instant messaging, and, in much older demographics, a lifeline to emergency services should they need one.
To say Facebook has been in the news lately is an understatement. I added some additional commentary in my article from a few months ago, The Beginning of the End of Facebook – Part 2. A few main points I made in that article are worth repeating so I can build out a few more components from recent events. Here is the excerpt we will build upon.
Our kids might not remember what schools were like before so many started focusing on STEM and coding. Most schools had a computer room, but technology, or computer science, was very much a subject rather than a tool to use throughout the school day. We can argue whether technology made things better or worse, but that deserves a totally separate discussion. As it did in the enterprise market, the iPad, since its launch in 2010, started making its way into education, bringing technology into classrooms. It did not take long for the first one-to-one iPad school to be established.
The first Chromebooks hit the market in 2011, but it was not until 2013 that they started to make a considerable impact in K-12 education. They have been growing across American schools ever since, mostly at Apple’s expense.
When the iPad was first brought into the classroom, it was done in schools where, by and large, budget was not an issue and teachers were empowered to invest time in finding the best way to use technology to reinvent and energize teaching. It was really about rethinking how to teach and connect with students. As technology became more pervasive, schools discovered that it was not just about teaching but also about managing the classroom. This is what Google was able to capitalize on. Yes, schools turn to Chromebooks because the hardware is cheaper, but also because the total cost of ownership, when it comes to deployment, management, and teachers’ involvement, is much lower.
A Very Different Approach
When analyzing a go-to-market strategy, I always point to how the “why” of the approach rests on the core strength of the brand in question. In this case, Apple’s strength is its ecosystem and its weakness is the cloud. For Google, the opposite is true. So it makes sense that Apple built its education strategy around the ecosystem of developers who jumped at the opportunity to sell into the education market. Google, in the meantime, built on its cloud strength to enable a device that could be easily shared and managed, as well as strong collaborative tools in G Suite for Education.
I am focusing on Apple and Google because Microsoft’s efforts in K-12 are more recent, at least outside of administration and inside the classroom. Yet, even with Microsoft, the approach fits the company’s strength, which is Office first and then cloud.
There is no right or wrong approach, but there is more or less scalable, and that is linked to total cost of ownership, which starts with hardware. While it would belittle Google’s effort to say it is winning in education because Chromebooks are cheap, it is fair to say that Chromebooks get in the door because of that, and from there the conversation is certainly easier.
Three Areas that Would Help Apple Grow Share in Education
Apple just announced an education event in Chicago for March 27 and, as you know, I am wiser than to try to predict what they will and will not do. That said, there are three areas I would like Apple to address when it comes to its education offering: hardware pricing, an improved productivity and collaboration suite, and a bigger focus on managing the classroom.
Apple already cut the price of the 9.7” iPad to $329 in March last year, but even with education discounts, there is still a considerable gap with Chromebooks. Of course, I do not expect Apple to ever reach the sub-$200 prices we see on some Chromebooks, but I do think prices still need to come down, considering most schools also have to factor in the cost of rugged cases, storage carts, and styluses on top of the iPad itself.
Considering the design of the invite, I do wonder if there might be some bundling of the Pencil, which, of course, would be a great tool across the board, from writing to art. This would, in turn, imply that Pencil support would be expanded outside the Pro family, something that makes perfect sense given its popularity.
For higher education, I would also expect an updated MacBook Air at a very competitive price. A sub $900 MacBook would certainly put pressure on Windows manufacturers as well as make it harder to justify a PixelBook, Google’s second attempt to show Chromebooks don’t all have to be lower end hardware.
Improved Productivity and Collaboration Suite
Apple giving up on iWork in favor of Microsoft Office 365 might be OK with people like me (AKA older users!) who have grown up using Office for most of our careers. But Gen Z and Millennials live and breathe G Suite. While Google has been platform agnostic when it comes to consumer tools, I very much doubt it will support education SKUs on other devices as deeply as it does on Chromebooks.
This opens up an opportunity for Apple to create a productivity suite that competes with G Suite. Of course, this will require a bigger investment in iCloud as Apple will need to push collaboration to the next level. iMessage is such a powerful workflow tool for many that I am surprised that Apple has yet to integrate it into the Classroom tools.
iWork, which could do with a different name altogether, should become the toolset the next generation wants to work with, much as the Mac was the computer students wanted to have when they went off to college and then to work.
Lastly, given the recent focus on families, I would like to see how Apple can better address parents, who are often left out of the classroom both physically and digitally. G Suite is great for kids to log in and do their homework or projects from any device they have at home, but parents are not necessarily included in that loop. Thinking about how to better serve parents will certainly help Apple be more top of mind in family choices.
Classroom Management Tools
In 2016, Apple released Classroom, which allows for automatic connectivity across iPads and helps manage iPads in schools that are not one-to-one by allowing the teacher to log students into the most recent iPad they used. Classroom also allows teachers to launch apps, websites, or books and push that content to students, locking their devices to a specific view.
With the release of iOS 11.3, it also seems that Apple has developed a framework called ClassKit that aims to help developers of educational apps create student evaluation features, like questionnaires that students can fill in and automatically send to their teacher. It also seems that there will be a “kiosk mode,” so that students will not be able to access anything else on the device while they are taking the test. These are features Chromebooks already offer, and I am sure they will be welcomed by educators using iPads.
It appears that, with the addition of ClassKit, Apple will check many boxes in a feature-by-feature showdown with Google.
Aside from not talking about its education offering as much as it should, I fear that Apple comes across more as a DIY solution. While this allows flexibility for every teacher and lets them pick best-of-breed apps for their specific classroom needs, it can feel like quite a daunting task compared to Google. Of course, now that Chromebooks support Android, there is a choice of apps there as well, but the core of what Google brings to schools is nicely wrapped up in G Suite for Education, and most teachers do not even look past that.
I am sure if you ask any school, aside from budget, time is the one other thing they will tell you they do not have enough of. As Chromebooks get more and more established in education, the biggest issue Apple will face is getting schools to consider a change. The advantage Google had here was cost. For Apple to have schools make a switch, convenience and ease of use should be the key. Freeing up teachers’ time from administrative tasks and empowering them to teach is a great selling point.
Talk to most people about servers and their eyes start to glaze over. After all, if you’re not an IT professional, it’s not exactly a great dinner party conversation.
The truth is, in the era of cloud-driven applications in which we now live, servers play an incredibly vital role, functioning as the invisible computing backbone for the services upon which we’ve become so dependent.
Most servers live either in large cloud-hosting sites or within the walls of corporate data centers. The vast majority of them are Intel x86-based computing devices that are built similarly to and essentially function like large, powerful PCs. But that’s about to change.
Given the tremendous growth in the burgeoning world of edge computing—where computing resources are being pushed out towards the edge of the network and closer to us and our devices—we’re on the cusp of some dramatic changes in the world of servers. The variations are likely to come in the size, shape, capabilities, number, and computing architecture of a whole new category of devices that some have started to call gateways or, in more powerful forms, edge servers.
The basic idea driving edge computing is that current centralized cloud computing architectures are simply not efficient enough for, nor capable of, meeting the demands that we will soon be placing on them. Thanks to new types of applications—everything from voice-based personal assistants that use the cloud for translation, to increasingly connected cars that use the cloud for mapping and other autonomous features—as well as the continued growth of existing applications, such as streaming media, there’s an increasing recognition that new types of computing infrastructure are necessary. Distributing more computing intelligence out to the edge can reduce latencies and other delays, improve network efficiencies, reduce costs, enhance privacy, and improve overall capacity and performance for intelligent services and the connected devices which rely on them.
Because this intelligence is going to be needed in so many places, for so many devices, the opportunity for edge servers will be tremendous. In some instances, these edge servers may end up being downsized versions of existing servers, with similar architectures, similar applications, and similar types of nearby connected infrastructure components, such as storage and networking.
In many more cases, however, edge computing applications are likely going to demand a different type of server—at many levels. One likely scenario is best exemplified by hyperconverged server appliances, which essentially provide the equivalent of a complete data center in a single box, offering intelligent software-controlled storage and networking components, in addition to the critical compute pieces. The beauty of hyperconverged devices is that they require significantly less space and power than traditional servers, but their software-based architectures make them just as flexible as large data centers. This will be critical for edge servers because the need to have them be reconfigured on the fly to meet rapidly shifting application demands will be essential.
Another likely scenario is a shift towards other types of computing architectures. While Intel-based x86 dominates the very conservative traditional server market, the fresh approach that edge-based servers and applications are likely to take removes the onus of legacy support. This will free companies to choose the types of architectures best suited to these new applications. A clear potential winner here is Arm, whose power-efficient designs could find a whole new set of opportunities in cracking the server market for edge-based devices. A number of vendors, including HPE, Cavium, and others, are just starting to deploy Arm-based servers, and edge computing applications will likely be a strong new market for these products.
Even within x86, we’ll likely see variations. With AMD’s well-received Epyc line of server chips, there will likely be more acceptance of it in edge server applications. In addition, because many edge computing applications are going to be connected with IoT (Internet of Things) devices, new types of data and new types of analytics applications are going to become increasingly important. A lot of these new applications will also be strong users of machine learning and artificial intelligence. Nvidia has already built a strong business providing GPUs to traditional servers for these kinds of AI and machine learning applications, and they’ll likely see even more use in edge servers.
On top of GPUs, we’ll likely see the introduction of other types of new architectures in these new, edge servers. Because they’re different types of servers, running new types of applications, they’re the perfect place for vendors to integrate other types of chip architectures, such as the AI-specific chips that Intel’s Nervana group is working on, as well as a host of others.
Software integration is also going to be critical for these new edge servers, as some companies will opt to transition existing cloud-based applications to these new edge servers, some will build tools that serve as feeders into cloud-based applications, and some will build new applications entirely, taking advantage of the new chip architectures that many of these new servers will contain. This is where companies like IBM have an opportunity to leverage much of their existing cloud and IoT work into products and services for companies who want to optimize their applications for the edge.
Though most of us may never physically see it, or even notice it, we are entering a phase of major disruption for servers. The degree of impact that edge-focused servers will ultimately have is hard to predict, but the question of whether that impact will be real is now a foregone conclusion.
Over the past year, I’ve read a number of reports, studies, and case studies that overwhelmingly confirm consumers are getting savvy to the techniques used to exploit their online behavior during the Internet’s growth boom. How consumers react, and the backlash any alteration in their behavior creates, is going to be a critical narrative for how the Internet changes over the next few years.
This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing recent developments around digital assistants such as Apple’s Siri, discussing AMD chip flaws, chatting about the upcoming Apple Education event, and talking about Fitbit’s new smartwatch.
Earlier this week, Fitbit introduced a new smartwatch, the Fitbit Versa. Described in the release as “The lightest metal smartwatch in the U.S. market, it offers a comfortable design and a new dashboard that simplifies how you access your health and fitness data. Advanced health and fitness features like 24/7 heart rate tracking, onscreen workouts, and automatic sleep stages tracking meet smart features like quick replies on Android, wallet-free payments on Fitbit Versa Special Edition, and on-device music – all with 4+ days battery life. Versa is available for presale at $199.95, with global retail availability in April 2018.”
It is always hard to know how much to believe reports like this that largely cite ex-employees. However, the issues around management decisions regarding Siri and the shifting of blame described in the article were quite concerning.
On Tuesday, a security research firm called CTS Labs released information regarding 13 security vulnerabilities that impact modern AMD processors in the Ryzen and EPYC families. CTS launched a website, a couple of explanatory videos, and a white paper detailing the collection of security issues, though without details of implementation (which is good).
On the surface, these potential exploits are a serious concern for both AMD and its customers and clients. With the recent tidal wave caused by the Spectre and Meltdown security vulnerabilities at the beginning of the year, which have led to some serious talk of hardware changes and legal fallout like lawsuits against chip giant Intel, these types of claims are taken more seriously than ever before. That isn’t by itself a negative for consumers – putting more emphasis on security and culpability on the technology companies will result in positive changes.
CTS Labs groups the vulnerabilities into four categories, named Ryzenfall, Fallout, Masterkey, and Chimera. The first three affect the processor itself and the secure processor embedded in it, while the last one (Chimera) affects the chipset used on Ryzen motherboards. The heart of the exploit on the processor centers on an ability to overwrite the firmware of the “Secure Processor,” a dedicated Arm Cortex A5 part that runs a separate OS. Its job is to handle security tasks like password management. Being able to take control of this part has serious implications for essentially all areas of the platform, from secure memory access to Windows secure storage locations.
The Chimera vulnerability stems from a years-old exploit in a portion of the ASMedia designed chipset that supports Ryzen processors, allowing for potential man-in-the-middle attacks to access network and storage traffic.
In all of these cases, the exploits require the attacker to have physical access to the system (to flash a BIOS) or elevated, root privileges. While not a difficult scenario to set up, it does put these security issues into a secondary class of risk. If you have a pre-compromised system, there are already a significant number of exploits that all systems are at risk of.
It is interesting to note from a technical standpoint that all of the vulnerabilities center around the integration of the Secure Processor, not the fundamental architecture of the Zen design. It is a nuanced difference, but one that separates this from the Spectre/Meltdown category. If these concerns are valid, it’s possible that AMD could somewhat easily swap out this secure processor design for another, or remove it completely for some product lines, without touching the base architecture of the CPU.
For its part, AMD has been attentive to the new security claims. The company was given less than 24 hours’ notice of the security vulnerabilities, a significant deviation from common security research practice. For Spectre/Meltdown, Intel and the industry were given 30-90 days’ notice, giving them time to do research and develop a plan to address the issues. CTS Labs claims that the quick release of its information was meant to keep the public informed. Without the time to do validation, AMD is still unable to confirm the vulnerabilities, as of this writing.
CTS is holding back details of implementation for the vulnerability from the public, which is common practice until the vendor is able to provide a fix.
There is more to this controversy, unfortunately, than the potential security vulnerabilities themselves. CTS Labs also talked with other select groups prior to its public data release. The research entity pre-briefed some media outlets, which is not entirely uncommon. Secondary security researchers were given access to the POCs (proofs of concept) to validate the vulnerabilities. Again, that is fairly expected.
But CTS also discussed the security issues with a company called Viceroy Research, which has been accused in the past of creating dicey financial situations for companies in order to make a short-term profit. In this case, Viceroy published a paper on the same day as the release of CTS Labs’ own report, calling for AMD to file for bankruptcy and claiming the stock should have a $0.00 value.
To be frank, the opinions contained in the paper are absurd and show a clear lack of understanding of both the technical concerns surrounding security issues and the market conditions for high-tech companies. Calling for a total recall of products over what CTS has detailed about AMD’s Ryzen hardware, without understanding the complexity of the more direct hardware-level concerns of Spectre/Meltdown that have been in the news for three months, leaves me scratching my head.
Because of this secondary paper and the financial implications in play around the news, the entire CTS Labs report and its production are painted in a very bad light. If the security concerns are as grave as the firm claims, and the risk to consumers is real, then CTS did a disservice to the community by clouding the information with the circus that devoured it.
With all that said, AMD should take, and appears to be taking, the security concerns raised in this report with the level of seriousness they demand. AMD is working against a clock that might be unfair and against industry norms, but from my conversations with AMD personnel, the engineering and security teams are working around the clock to get this right. With the raised level of scrutiny around chip security after the Meltdown and Spectre release, no company can take the risk of leaving security behind.
In late February, I started writing a piece entitled “Why Apple should buy Texture.” For months I had been studying Apple’s need to acquire more content and original programming. In my main Techpinions column on Monday, I laid out the challenge they have in this area given the strong investments competitors like Netflix and Amazon are making, especially in video content.
Since digital assistants have entered our homes and settled more or less comfortably among us, there has been a discussion around whether they should be personified or whether we are better off thinking of them as bots.
Brands have adopted different strategies in this area. Amazon clearly bet on personification with Alexa, who not only has a name but a personality too. Apple’s Siri has a somewhat more abstract name and some personality. Samsung’s Bixby is another abstract name without much of a personality. Microsoft’s Cortana seems to me like a cross between the game warrior and Helen Mirren. And of course, Google decided its endeavor in this space was not even worthy of a name, despite having personality.
Back in 2016, I was adamant that humanizing digital assistants was going to help cement the bond between the user and the agent.
“Personifying the assistant might also make it easier for some people to understand what exactly the role is it has in their life….Giving it a name allows for it to change shape and form like a genie in a bottle – one moment being in your home speaker, the next in your phone, the next in your car helping you with different tasks throughout the day. If the digital assistant is very successful, you might even forget who is powering it. Alexa might indeed become bigger than Amazon.
It seems to me Google’s approach wants to make sure that, whatever I do, whatever I use, and whoever I use as a medium, especially on a non-Google product or service, I am very clear Google is the one making it possible…
Yet, while I entrust my life to Google, I am still very aware it is a corporation I am dealing with. Building an emotional connection would be much harder. After the initial Echo set up, my eight-year-old daughter asked Alexa to play a song and, as soon as the song started, she said excitedly, “Oh mom! She is awesome! Can we keep her, please?” I very much doubt “Amazon” would get that level of bonding.”
Two years on, I still think that digital assistants’ personification does help with engagement, but I am also starting to believe this bond might make it difficult for brands to have us users go beyond the voice.
Digital Assistants are a Battle in the AI War
Digital assistants have been the easiest way for brands to show off their smarts. The problem is that AI goes way beyond the voice that replies to you through your phone or speaker. There is intelligence impacting many aspects of the devices and services we use every day, whether it is called out in marketing as “AI-enabled” or we just notice that some things are easier to perform than they used to be.
Our obsession with digital assistants, however, seems to make it harder for brands to just talk about the smarts, and this is true for some more than others.
Samsung struggled to position Bixby as an intelligent interface rather than an assistant: one that involves voice but also AR. Giving it a name made it a personal assistant to some extent, which drove industry watchers and consumers to make direct comparisons with Alexa and Google Assistant, even though what Samsung tried to accomplish, at least to start with, was slightly different.
In my column last week I talked about the latest “Make Google Do It” commercial and how:
“It is all about Google and the relationship you, as a user, have with Google….Interestingly, the commercial also cements the different approach Google is taking to the digital assistant by not personifying it. The assistant is a means to get to Google, a clear separation of voice and brains.”
In a way, Google separating the voice and the brains assures that users give credit to Google across the board. This means that Google can capitalize on AI even when Google Assistant is not involved, think about Google Photos for instance or Google Translate or Google Lens.
For Amazon, the dynamic is quite different. Amazon did not “own” an operating system nor control an ecosystem, so Alexa became all of those things. Alexa started as the point of engagement with the user and quickly developed into an ecosystem enabler, in a similar way to what iOS and Android have been for Apple and Google.
For Microsoft, AI is a much bigger game than Cortana has ever been. But truth be told, Cortana has not been given the attention it deserves by management. Maybe, precisely because of my previous point about Amazon, Microsoft does not see Cortana as an enabler but merely as a feature of an operating system. While Cortana has been criticized for not being competitive with other digital assistants, it seems that most have written her off as a contender in the race. This does not seem to be hindering people’s perception of Microsoft in AI, which of course is good news for Microsoft. One has to wonder, however, if people’s belief that Cortana does not stand a chance is rooted in the assumption that Microsoft does not stand a chance in the consumer market.
The Peculiar Case of iOS and Siri
Apple does not quite fit the mold of any of the companies I mention above. It rarely does, of course. Apple has a healthy and widely adopted operating system, iOS, as well as an ecosystem with highly engaged users.
Siri was born before any other digital assistant we have in our homes today. Siri was born out of Apple’s belief that voice would play a role in the future of interfaces, but not necessarily that voice would be a platform in itself. It was 2011, and if you go back and watch the iPhone 4s launch event when Siri made her debut, you will hear a more robotic voice but very much recognize the Siri of today. And for many, herein lies the issue: Siri has not changed much over the past seven years. While my statement might be more perception than reality, most would agree it feels that way when compared with the fast pace of innovation around Alexa.
Siri’s development pace, however, does not reflect the development we have seen in iOS, especially since Apple has doubled down on machine learning and AI. The platform is getting smarter even though our exchanges with Siri do not seem to. What people do not seem to realize is that the brain that powers those iOS improvements is shared with Siri, but its reach goes way beyond her.
In a world where the digital assistant is not only personified but is also the personification of intelligence, is Apple running the risk of being perceived as behind across the board? Siri has grown to mean more than a voice assistant. Siri is an “intelligent power” that impacts many aspects of our platform and ecosystem interactions.
Plenty of iOS users continue to be happy with their choice of phone or tablet but just decide not to engage with Siri. As Apple starts to talk about how Siri is behind some of the tasks we perform every day – like picking our favorite chill music – is the perception we have of her impacting our appreciation of other services and experiences?
It seems to me that for the industry overall advances in natural speech will take much longer to materialize than other AI-driven improvements around context, search, cameras, home automation and more. Trying to separate voice and brains might be a smart step to take, so brands make sure consumers look for intelligent solutions beyond the voice.
I’m not sure anyone could have predicted the predicament we now have with news, or fake news to be exact. A fascinating report, which quantified something any observant person would have realized, validates that fake news spreads faster and wider than the truth. Here is a key excerpt from the study:
The numbers are staggering. Last year’s Equifax breach, along with more recent additions, have resulted in nearly 150 million Americans—more than half of all those 18 and older—having essential identity data exposed, such as Social Security numbers, addresses, and more. And that’s just in the past year. In 2016, 2.2 billion data records of various types were poached via Internet of Things (IoT) devices—such as smart home products. Just yesterday, a judge ruled that a class action case against Yahoo (now part of Verizon) regarding the data breach of all 3 billion (yes, with a “B”) of its Yahoo mail accounts could proceed. Is it any wonder that according to a survey by the National Cybersecurity Alliance, 68% of Americans don’t trust brands to handle their personal information appropriately?
The situation has become so bad, in fact, that there are some who are now questioning whether the concept of personal privacy has essentially disappeared into the digital ethers. Talk to many young people (Gen Z, Millennials, etc.) and they seem to have already accepted that virtually everything about their lives is going to be public. Of course, many of them don’t exactly help their situation, as they readily share staggering amounts of intimate details about their lives on social media and other types of applications, but that’s a topic for another day.
Even people who try to be cautious about their online presence are starting to realize that there’s a staggering amount of information available about virtually every one of us, if you bother to look. Home address histories, phone numbers, employment histories, group affiliations, personal photos, pet’s names, web browsing history, bank account numbers, and yes, Social Security numbers are all within relatively easy (and often free) reach for an enormous percentage of the US population.
Remember all those privacy tips about shredding your mail or other paper documents to avoid getting your identity stolen? They all seem kind of quaint (and, unfortunately, essentially useless) now, because our digital footprints extend so much farther and deeper than any paper trail could possibly go that I doubt anyone would even bother trying to leverage paper records anymore.
While it may not be popular to say so, part of the problem has to do with the enormous amounts of time that people spend on social media (and with the social media platforms themselves). In fact, according to a survey of cyberstalkers reported by the Identity Theft Resource Center, 82% of them use social media to gather the critical personal information they need to commit identity theft against potential victims.
My perspective on the extent of the problem with social media really hit home a few weeks ago as I was watching, of all things, a travel program on TV. Like many of these shows, the host was discussing interesting places to visit in various cities—in this case, one of them was a museum in Nuremberg, Germany dedicated to the Stasi, the infamous (and now defunct) secret police of former East Germany. A guide from the museum was describing the tactics this nefarious group would use to collect information on its citizens: asking friends and family to share the activities of one another, interceding between people writing to each other, secretly reading letters and other correspondence before they got passed along, and so on.
The analogies to modern social media, as well as website and email tracking, to generate “personalized” ads, were staggering. Of course, the difference is that now we’re all doing this willingly. Plus, today it’s in easily savable, searchable, and archivable digital form, instead of all the paper forms they used to organize into physical folders on everyone. Frankly, the information that many of our modern digital services are creating is something that these secret police-type organizations could have only dreamt about—it’s an Orwellian tragedy of epic proportions.
So, what can we do about it? Well, for one, we all need to pull our collective heads out of the sand and acknowledge that it’s a severe problem. But beyond that, it’s clear that something needs to be done from a legislative or regulatory perspective. I’m certainly not a fan of governmental intervention, but for an issue as pervasive and unlikely to change as this one, there seems little choice. (Remember that companies like Facebook, Google and others are making hundreds of billions of dollars every year leveraging some of this data for advertising and other services, giving them absolutely zero incentive to adjust on their own.)
One interesting idea to start with is the concept of data labelling, a la the food labelling standards now in place. With data labelling, any online service, website, application or other data usage would be required to explain exactly what information they were collecting, what it was used for, who it was sold to, etc., all in plain, simple language in a very obvious location. Of course, there should also be options that disallow the information from being shared. In addition, an interesting twist might be the potential to leverage blockchain technology to let each person control and track where their information went and potentially even financially benefit from its sale.
The problem extends beyond the more obvious types of information to location data as well. In fact, even if all the content of your online activity were blocked, it turns out that a tremendous amount of information can be gathered just by tracking your location on a regular, ongoing basis, as the January story about the tracking of US military personnel through their Strava/Fitbit fitness apps so glaringly illustrated. Even outside military situations, the level of location tracking that can be done through a combination of smartphones, GPS, connected cars, ride sharing applications, WiFi networks, Bluetooth, and more is staggering, and there’s currently no legislation in place to prevent that data from being used without your permission.
All of us can and should be smarter about how we spend our time online, and there are organizations like Staysafeonline.org that offer lots of practical tips on things you can do. However, the issues go way beyond simple tricks to help protect your digital identity. It’s time for Congress and other representatives to take a serious look at things they can do to protect our privacy and identity from the digital world in which we live. Even legislative efforts won’t solve all the data privacy issues we face, but the topic is just too important to ignore.
I shared this tweet with the caption “Amazon’s Wake,” and it ended up being pretty popular. I did get some feedback, as some observers wanted to make the point that not all of these retailers failed because of Amazon. Toys “R” Us was a prime example, since a private equity group seems to be more the cause of its troubles. However, whether or not you can draw a direct cause and effect to Amazon, there is no question these retailers and more are seeing more of their business move online.
One of the most important growth businesses for Apple has been their services division. It brings in about $7.5 billion a quarter now and could be a Fortune 100 company if it were ever spun off on its own.
As I have been thinking about Apple’s services business over the last few weeks, two key conversations I had with Sony Co-Founder Akio Morita and Steve Jobs many years ago came to mind.
Not long after Sony purchased a movie studio, I had the privilege of interviewing Mr. Morita on one of my trips to Japan. Sony was known primarily as a hardware company that made TVs, portable music players, and stereo equipment at that time. I was curious as to why Mr. Morita bought a movie company, and he told me that he saw movies as just “digital bits,” representing important content that could be shown or used on his devices. Keep in mind this was over a decade before the idea of content tied to devices was really in focus, which shows the incredible foresight Mr. Morita had as Sony’s CEO.
It’s sad that, once Mr. Morita retired, Sony’s leadership never again had the forward thinking he brought to the role of CEO; Sony lost their portable music lead to Apple and the iPod. They also missed out when it came to laptops, smartphones, and tablets. They are being challenged again in smart TVs in a big way, and even their game console is coming under greater pressure as more and more gamers move to PC gaming and leave their console systems behind. Sony’s constant restructuring, cost-cutting, and leadership that does not plan for the long range will continue to challenge their market positions.
Steve Jobs was a real fan of Mr. Morita, and he had a similar view of content being digital, especially music content. On numerous occasions when I spoke with Jobs about his focus for Apple’s future, he made it very clear that Apple is first and foremost a software company and that the hardware they create is the vehicle for their software and content to be deployed. It is essential to look at Apple from a holistic approach, since their software drives hardware designs and becomes the way they also deliver content and services.
However, services have become even more critical to their overall business since it is not only a major revenue source, but it is one of the ways they are future proofing their business for the long run. Indeed, Apple’s goal is to use software, hardware, and services to tie people to their overall ecosystem and continue to give them solid reasons to either stay with Apple products or entice users of alternative operating systems to switch to Apple products.
Given that Jobs understood the role content plays in tying software to devices as part of Apple’s ecosystem, it has been surprising how far behind Apple is compared to competitors when it comes to how much they are investing in content beyond their current music offerings.
The chart below shows Apple investing about $1 billion in non-sports video programming in 2017, compared to Netflix, which spent $6.3 billion, and Amazon, which spent $4.5 billion. Netflix is said to be planning about 700 original series in 2018 and could spend up to $8 billion this year on programming alone.
Given Steve Jobs’ strong position on content, and Apple knowing they need more of it to keep people in their ecosystem, this current spending on content and programming seems pretty unaggressive. Looking at what they spend in contrast to competitors, and at how much more aggressive they need to be in obtaining the kind of programming that will keep people coming to or staying in their ecosystem, it leads one to think that perhaps Apple has its eye on some bigger prize in the content space.
Apple could create more original content and also go after some existing shows to add to their video programming. However, it might make sense for Apple to take a page from Sony’s playbook and buy a major movie studio, or at the very least, perhaps acquire some dedicated production companies that already have proven content and the ability to create more shows quickly to help add to Apple’s overall programming for their customers.
However, with Amazon and Netflix also bidding for more content and pushing production companies to create new shows for their services, the competition for Apple to get great programming for themselves will be fierce. That is why buying a major studio with an existing library, and the means to create more original movies and TV shows might be the best way for Apple to gain more control of their content future.
This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell analyzing the latest developments in the proposed Qualcomm-Broadcom merger, discussing this week’s meeting between gaming industry executives and President Trump, and chatting about the latest machine learning software developments from Microsoft, Intel, Arm, Qualcomm and others.
This week, two independent reports pointed to weak pre-order numbers for Samsung’s new flagship models, the Galaxy S9 and S9+.
First, a Korean publication, Yonhap News Agency, reported a Samsung executive commenting that preorders of the Galaxy S9 are similar to or slightly lower than those of the Galaxy S8. He also reportedly blamed the dip in demand on the two-day gap between the unveiling and pre-orders opening, compared to an eight-day gap for the Galaxy S8.
Russia might be winning the cyberwars, but it’s China that is emerging to challenge the United States for global 5G dominance. This issue has crystallized in the days before and after the 5G-themed Mobile World Congress. Huawei continues to be blocked from competing in the U.S. wireless infrastructure market, and the major U.S. operators were pressured not to sell its phones. Earlier this week, the Committee on Foreign Investment in the United States (CFIUS) stepped in to review Broadcom’s purchase of Qualcomm, over concerns about Broadcom’s relationships with foreign entities, and the possibility that it would sell off pieces of Qualcomm to…China.
Much of this revolves around concerns about threats to national security, and it looks like 5G is going to be an important battleground. While Europe led the 3G revolution and the U.S. led 4G LTE development and deployment, China is emerging as a major force in the nascent 5G market. Huawei gained significant global share during the 4G era, mainly due to aggressive pricing that made it difficult for companies such as Ericsson and Nokia to compete in many markets outside the U.S. Now, Huawei is seen as an innovator and offers a 5G kit that is competitive with, and in some respects exceeds, that of its global competitors. It is also doing leading-edge work in nearly every other telco/Internet infrastructure segment you can think of, from IoT to NFV and cloud.
Second, the Chinese government is playing an active role, investing in infrastructure, and promoting the 3.5 GHz spectrum as a global 5G band. In fact, the pressure being exerted on the FCC to allocate more mid-band spectrum is largely the result of what’s happening in China. And while we dither over issues such as small cell siting and can’t find a way to invest in infrastructure projects, the Chinese are running laps around us with initiatives such as ‘One Belt One Road.’ You can bet that all those road and rail projects will pave the way (or lay the track) for lots of telecom infrastructure deals.
Third, the sheer size of China’s market and workforce has become an incontrovertible force. It is the world’s largest wireless market, by far. And the country’s growing wealth is allowing Chinese students to study at leading U.S. universities and take that knowledge back home with them.
There are huge complexities here. China is a huge market for U.S. tech companies. On the other hand, companies such as Facebook and Google are largely blocked from doing business there, which has allowed home-grown firms such as Alibaba to achieve outsized market share in China.
Why the focus on 5G in the U.S.-China economic war and the evolving chilly-if-not-cold war/cyberwar? Well, it’s going to be a multi-trillion-dollar market over the next 15 years. Not just 5G infrastructure but all the devices and billions of connected things that form the business case for 5G. And then there are the adjacent markets, such as connected/driverless cars, that are enabled by 5G and are yet another important U.S. vs. China battleground.
So, what should we do about this? It might take something akin to a national industrial policy, which is anathema to those who promote free market forces. But throw national security concerns into the mix, and at least we might get their attention.
First, a review of the Broadcom-Qualcomm deal is warranted. I’m not saying kill it, but let’s make sure there are conditions that address not only Qualcomm’s interests but U.S. national interests. Despite its occasionally icky practices, Qualcomm is a very important company to U.S. interests from a patent and innovation standpoint. I was concerned when I saw activist investors complaining that Qualcomm spends an outsized 25% of its revenues on R&D. Particularly as the U.S. government seems to be relinquishing its support of science and technology, we need the Qualcomms and the Googles of the world to invest in the frontiers of tech such as 5G and AI.
Second, if past behavior predicts future results, we need to step back and think about the national security issues related to 5G, and not in the ad-hoc way we’ve been dealing with them. We should define the safeguards that need to be undertaken given the risks in infrastructure, in chipsets, and in all those billions of connected devices. It is good practice to define the rules, and the steps and precautions that must be taken, for foreign companies and governments that want to do business here.
Finally, we need to think seriously about the education and talent aspect here. I hear almost daily from tech execs about the lack of suitable talent to fill jobs in emerging areas such as AI. In 5G, there is enormous turnover, and a different skill set is needed, for the jobs that will be involved in building next-generation networks. There is a deficiency of higher-ed programs in these areas. Greater public-private cooperation is warranted. Wealthy foreign nationals are coming here and getting their pick of programs at universities where they’re paying full freight, while the average U.S. college student has to spend zillions and get saddled with debt to get the same education that is nearly free (or even sponsored) elsewhere.
In June 2017, China very publicly announced its plans to become the world leader in AI by 2025. China’s ambitions about 5G are similar, if less overt. It will take a coalition of forces – private, public, and institutional – to counter that.
In this column/analysis, The Semiconductor Golden Era, which I wrote in January, I outlined how the next decade would yield more creative and innovative semiconductor designs than anything we saw in the race-for-performance era of the past few decades. We have enough CPU performance, and we will still see innovation in GPU design, but the overall trend is moving toward efficiency around new use cases, with machine learning at the forefront.
During its Windows Developer Day this week, Microsoft took the covers off its plans to help accelerate and dominate in the world of machine learning. Windows ML is a new API that Microsoft will be including in the RS4 release of Windows 10 this year, enabling a new class of developer to access the power and capability of machine learning for their software.
Microsoft already uses machine learning and AI in Windows 10 and on its Azure cloud infrastructure. This ranges from analyzing live camera feeds to AI for game engines and even indexing for search functionality on your local machine. Cortana is the most explicit and public example of what Microsoft has built today, with the Photos app’s facial recognition and image classification being a close second.
Windows ML allows software developers to utilize pre-trained machine learning models to power new experiences and classes of apps. The API allows for simple integration with existing Microsoft development tools like Visual Studio. Windows ML supports direct importing of ONNX (Open Neural Network Exchange) formatted files that represent deep learning compute models, allowing for easy transferal and sharing between application environments. This format was introduced by Microsoft and Facebook back in September of last year. Frameworks like Caffe2, PyTorch, and the Microsoft Cognitive Toolkit support ONNX export, so models that are trained in them can run inference through any system that integrates ONNX.
To be clear, Windows ML isn’t intended to replace the training activity that you would run on larger, high-performance server clusters. Microsoft still touts its Azure Cloud infrastructure for that, but it does see benefits to pairing that with the Windows ML enabled software ecosystem on edge devices. Software that wants to support updating training models with end-user input can do so with significantly less bandwidth required, as only the much smaller, pre-defined Windows ML result would need to be returned.
With Windows ML, an entire new class of developer will be able to utilize machine learning and AI systems to improve the consumer experience. We will see spikes in AI-driven applications for image recognition, automated text generation, gaming, motion tracking, and so much more. There is a huge potential to be fulfilled by simply getting the power of machine learning into the hands of as many software developers as possible, and no one can offer that better than Microsoft.
Maybe the most exciting part about Windows ML to me is the support for hardware acceleration. The API will be able to run on CPUs, GPUs, and even newer AI-specific add-in hardware like the upcoming Intel Movidius chip. Using DirectX 12 hardware acceleration and the DX12 compute capabilities that were expanded with Windows 10, Microsoft will allow developers to write applications without worrying about code changes for the underlying hardware in a system to ensure compatibility. While performance will obviously scale from processor to processor, as will user experiences based on that, Windows ML aims to create the same kind of API-layer advantages for machine learning as DirectX has done for gaming and graphics.
Microsoft will not only support discrete graphics solutions but also integrated graphics from Intel (and, I assume, AMD). Windows ML will be one of the first major users of Intel’s AVX-512 capabilities (vector extensions added to consumer hardware with Skylake-X) and the Movidius dedicated AI processor. Qualcomm will also support the new API on its upcoming Always Connected PCs using the Snapdragon 835 platform, possibly opening up the first use case for the company’s dedicated on-chip AI Engine.
This new API will be supported with both Windows UWP apps (Windows Store) and Win32 apps (classic desktop apps).
We are still in the early phases of development when it comes to the true AI-driven future of computing. Microsoft has been a player in the consumer market with Cortana integration on Windows, but it has seen limited success compared to the popularity of Google, Amazon, and even Apple systems. By enabling every Windows application developer to take advantage of machine learning with Windows ML, Microsoft will see significant movement in the space, much of it likely using its Azure cloud systems for training and management. And for consumers, the age of artificial intelligence ubiquity looks closer than ever.
Most established businesses have grown up with Microsoft tools when it comes to business productivity. However, a younger generation of users seems to be using Google’s G-Suite offerings almost exclusively when it comes to creating documents, collaboration and many other forms of productivity in school and their early business lives.
My youngest granddaughter is in a public school that uses Chromebooks, and not one of Microsoft’s tools is used when it comes to doing her assignments or homework. All of her tasks are done in G-Suite. As of now, she would not even know how to use Microsoft Word, Excel, etc. While she currently uses Snapchat to collaborate with schoolmates on joint assignments, Google’s newest chat tools will make it easier for her to stay in G-Suite when working with classmates on a project, instead of jumping off to Snapchat to handle that part of the collaboration process.
I recently attended a G-Suite briefing at Google that covered three new updates to Microsoft’s Office alternative; the number one thing users had asked for was for this chat feature to become part of the total G-Suite solution. At this briefing, they highlighted their recent partnership with Salesforce.com and pointed out that Salesforce, along with other major customers in the enterprise and education, drove the demand for chat to be integrated into the G-Suite collaboration tools.
This was the first time I got a chance to hear from and talk to the team behind G-Suite, and I saw how well this product was designed and how closely Google pays attention to their customers’ interests and demands when it comes to adding new features and functions. While I had read about the Google/Salesforce deal, I was not aware of how encompassing it is when it comes to how Salesforce will use Analytics 360 and G-Suite within their overall application.
I had a conversation with a high-ranking exec recently whose daughter also uses a Chromebook in her school, and he pointed out that his daughter recently asked him to look at a doc she was working on because she needed his input. He mostly uses Microsoft Office in his work and expected her to show him a Word document. But she pulled up the doc in G-Suite, and a light went on in his head. At that moment he realized that this younger generation is growing up entirely without traditional Windows apps; by using Chromebooks in their schools, they are being conditioned to use these tools as they grow up and, most likely, will be using G-Suite when they eventually enter the business world.
After I left the Google briefing on new additions to G-Suite, I realized just how serious Google is about going after not only education but also business markets. G-Suite is already a real competitor to Microsoft’s Office and is the primary tool used in education today, especially where Chromebooks are being used. And we are hearing that by this fall, Google will have a massive marketing campaign pushing Chromebooks to business users and consumers.
Along with that push will come many new models from the top three PC makers, who are becoming more bullish on Chromebooks within their education programs. We also see these PC makers willing to be more aggressive in designing new versions of Chromebooks for business users. In fact, don’t be surprised if at least one or two significant vendors become big proponents of Chromebooks and Chrome OS for business over the next few years, as they are seeing more interest in these types of laptops from IT departments who, like Salesforce.com, are starting to see the value of Chromebooks for their workforce.
That is why I also expect Microsoft to become even more aggressive with their Surface laptops and 2-in-1 products. The Surface has always been Microsoft’s way to try and compete with Chromebooks, and their Windows 10S software was initially focused explicitly on education. However, Windows 10S is being morphed into a broader version of the Windows OS and is on track to become the core OS for Microsoft across all PC products in the future.
Given Google’s stronger focus on Chromebooks and advancing G-Suite to meet the demands of consumers and business users, Google has emerged as a compelling alternative to what Microsoft has provided the PC world for decades. While Google has a long way to go to catch up with Microsoft in broad worldwide reach, I no longer think of Google as just another player with an alternative PC OS and an Office competitor. Thanks to serious attention from the major PC makers and Google’s own efforts to make their software applications better for business and education, Google has emerged as a force to be reckoned with.
Microsoft should be worried about Google’s ability to challenge them in markets around the world, and should be more aggressive with their Surface products and with the evolution of Office and the Windows OS to stay competitive. Google is in this for the long run and, at the very least, will keep Microsoft on their toes and push them to innovate. But I also see Google gaining ground on Microsoft and becoming a solid competitor in education, consumer, and business markets going forward.
As I do every year when the Super Bowl is on, I watched the Oscars with an eye out for any exciting new ads from tech brands. This year was the 90th edition of the Academy Awards, and it took place in the midst of the #MeToo and Time’s Up movements. Some spoke out before the event about being tired of all the politics and controversy, calling for the ceremony to just be about movies. Needless to say, some brands might have preferred to take a pass when it came to running an ad during the broadcast of the show on ABC.
Preliminary Nielsen numbers show that the TV audience dropped 16% compared to 2017. This drop would suggest an overall viewership below 32 million, which was the previous low recorded in 2008. Nevertheless, the Oscars are expected to be the most-watched non-sporting event on American TV.
As the ads rolled, I found it very interesting to see how clearly they matched the most significant focus of the brands they represented, mostly transcending any single product to highlight underlying enablers.
This was not the first Academy Awards presence for Samsung. Aside from being the main sponsor, it also ran a new ad featuring the Galaxy S9 as part of the “Do what you can’t” campaign. In the product placements during the red carpet, Samsung chose to highlight the slow-motion video feature of the S9 camera.
The commercial is full of celebrities and influencers and shows the clear target audience Samsung is trying to attract with its new smartphone: Gen Z. Without being political, the “Make It Yours” commercial highlights the work of women who are firsts in their fields. Dee Rees, who directed “Mudbound” and wrote its adapted screenplay, is the first black woman to be nominated for best adapted screenplay, and she directed the commercial. Rachel Morrison, the first woman ever to receive a nomination in cinematography, was the cinematographer for the campaign. Both women are also openly gay, which is quite a forward pick for Samsung, a conservative brand thus far.
This is certainly a departure for Samsung from its traditional tech-focused or competition-focused ads, and I have to say I like it a lot. The feeling is quite similar to the recent “what’s a computer” iPad ad, but it touches on more personal issues, like having regrets, making mistakes, and being passed over, while also focusing on overcoming obstacles.
I always thought Samsung as a brand lacked a clear identity, and I hope this ad is a first step toward finding a different voice for a company that plays a significant role in the lives of millions of people across the world. Especially for millennials and older Gen Z, knowing what a brand stands for, its values, and its social responsibility is important and can make or break a brand.
Google aired a star-studded ad around its digital assistant capability, with the words “Make Google Do It” appearing any time the person in the commercial was trying to do something, from ordering some “dope tape” to turning on the lights in the dark, remembering the alarm code, or making an action list.
The commercial is cute, but I thought the choice of words was very telling. First, the assistant is not mentioned until the very end, when, on a white screen, you see: “Get the Google Assistant and make Google do it.”
It is all about Google and the relationship you, as a user, have with Google. The ad could have said “Let Google do it,” but that would have implied some form of permission you grant Google. Saying “make” implies a position of control for the user over Google. You are not letting Google do something you could do yourself; you are making it do something, as you would a subordinate. I think that is very clever. It aims to shift the perception many have that you are working for Google, a perception rooted in the sense that Google wants to know more and more about consumers in order to better monetize them.
Interestingly, the commercial also cements the different approach Google is taking to the digital assistant by not personifying it. The assistant is merely a means to get to Google, a clear separation of voice and brains.
Microsoft is a Super Bowl sponsor through its Surface and Xbox businesses, and the company has run ads during the TV broadcast of the game. I could not find any evidence, however, that Microsoft had run an ad during the Oscars before this year.
Featuring Common, the ad is an ode to technology and Artificial Intelligence and what they empower. Empowerment is a common thread in Microsoft CEO Satya Nadella’s presentations. He firmly believes that technology should enable humanity to do more, be better, and fulfill its potential.
AI and Mixed Reality are two key areas for Microsoft in business. After missing mobile, it was clear the company did not want to miss out on any technology that will empower the next generation, and it moved in early with HoloLens. Its business-first approach, however, is limiting its exposure to consumers. This is particularly true of AI, which, rightly or wrongly, is often equated with digital assistants. Here, Microsoft’s Cortana is trailing in adoption compared to Alexa, Google Assistant, and Siri. The ad raises awareness among a wide range of people that Microsoft is not only about Windows and PCs. The commercial alone, though, will not help consumers see a role for Microsoft AI in their lives any more than they see one for IBM Watson. That is totally fine, of course, if Microsoft is not interested in the consumer market. But AI will touch every aspect of its portfolio, including Windows, which might be perceived as lagging behind other platforms if consumers simply do not know how AI makes it better.
This was Twitter’s first-ever TV commercial. The ad featured a poem written and performed by Denice Frohman, a New York City-born poet, over black-and-white still pictures of prominent media and marketing executives as well as filmmakers Ava DuVernay and Julie Dash, documentarian Jennifer Brea, and “Insecure” director and actress Issa Rae. The commercial ended with the hashtag #HereWeAre, which first appeared in December when Twitter chief marketing officer Leslie Berland announced that a group of female leaders would appear during Twitter’s event at the CES technology show.
Twitter has long been under fire to do more to monitor and police users who engage in hate speech and sexual harassment. So it is no surprise that the response to the commercial was mixed. Some praised the poem for being powerful and appreciated the effort. Some gave Twitter the benefit of the doubt but said they now want to see the company put its money where its mouth is and do more on the platform. Others outright criticized the choice of investment, pointing out that the money spent on the commercial would have been better spent improving the platform itself, either by hiring more engineers or by considering new AI-driven tech to help with monitoring.
I was in the in-between group. I hope the effort is more than a beautiful ad, and I am sure Jack is well aware that after that commercial the stakes are even higher. There is no question in my mind that abuse can kill engagement.
There were more ads during the Academy Awards from T-Mobile, Walmart, GE, and Nest, but the ones I picked for this article are the ones that I thought best represented where the brand is in its business and brand identity. You can find them all here.
Spotify has been a subject of interest of late, especially among many investors I work with. I try to be relatively optimistic when it comes to the tech industry, but there are times when that becomes harder, when patterns you have seen so many times before start to align. When it comes to Spotify, it seems as though we have seen this movie before, and it doesn’t end well.