NVIDIA – Bringing Supercomputing Power to New Forms of Computing Devices

About five years ago, I was talking with an Intel executive who said that one of the companies he had his eye on was Nvidia. At the time, Intel was creating their own integrated GPUs, but he thought there were occasions when it would have made sense for Intel to have a discrete GPU of their own. Indeed, Intel recently did a deal with AMD to add some of AMD's GPUs to some of Intel's higher-end chipsets.

Unfortunately for Intel, an acquisition of Nvidia was not in the cards. Interestingly, about this same time we started hearing from some investors that they too thought Nvidia would be a good acquisition target, since Nvidia was struggling with their mobile chip strategy and their high-end computing vision had not yet solidified. Intel could probably have bought them at a reasonable price and, who knows, an Intel and Nvidia marriage could have given Intel even more firepower for their computing arsenal today.

Today, Nvidia's market cap is $136 billion, and the company has become a powerhouse in supercomputing, AI and machine learning, and high-end graphics and ray tracing, and is the #1 supplier of technology for autonomous vehicles. I spent the day at Nvidia's GTC user conference this week and sat through CEO Jensen Huang's keynote as he shared the incredible advancements the company has made just in the last year. He introduced the most powerful GPU ever created, showed how their new high-powered processors are driving higher levels of AI and machine learning, and explained how their ecosystem of high-end processors and sensors is making it possible for the auto industry and the world of tech to create autonomous vehicles that, by 2030, will be the preferred vehicles for most people.

I won't share details on the keynote as many articles have already been written on it, but here are some links if you want more on some of the keynote announcements.

Most Powerful Graphics Card

Quadro GV 100

Autonomous Vehicles

An Interview with Nvidia’s CEO on AI and Machine Learning

Every one of the new products and programs Nvidia introduced at GTC 2018 this week is a game changer. They will have a huge impact on the industries that use them, making Nvidia's customers in multiple markets more competitive and helping them differentiate themselves from their competitors.

During the keynote, it became much clearer to me that, in hindsight, I am really glad Intel did not buy Nvidia. While I am sure it would have been a good acquisition for Intel and would have given them a better position in discrete GPUs, they would probably have lost much of Nvidia's major talent, and I am not sure Intel would have integrated Nvidia into Intel's culture properly. Even worse, Intel might have kept Nvidia from continuing the kind of innovative research that has made Nvidia the powerful company it is today.

Over the last five years, Nvidia turned up the heat on their research into supercomputing and the high-end GPUs that are now used in Hollywood, gaming, visualization apps, and high-end medical imaging. They also put serious investment and R&D into the technology that will drive the future of autonomous vehicles, as well as AI and machine learning. By doing this, Nvidia reinvented themselves. Today they are one of the most important companies in semiconductors, with a broad reach of products that will only help them grow even more in the future.

Led by Mr. Huang and some of the most talented teams of engineers in the industry, I expect that Nvidia will continue to innovate and keep Moore's Law alive in their own way.

Is it Time for an Ad-Free, Subscription-Based Version of Facebook?

Facebook is undergoing a great deal of scrutiny these days. Over the last 18 months, they have come under fire for rampantly spreading fake news without any checks and balances and for failing to protect their customers' data from being hijacked. Some have even gone as far as to accuse them of having had an impact on the 2016 presidential election.

Then last week, the news that Cambridge Analytica abused Facebook data in egregious ways put Facebook CEO Mark Zuckerberg in the crosshairs of US and EU officials and brought Facebook's leadership under great pressure to clean up their act, or they will soon come under stiff regulation from US and EU legislative bodies.

In talking to users of Facebook, I am starting to understand their frustrations with the social media service, but I also sense that it still provides real value to them in the way they can connect with family and friends and get legitimate information, and even ads that are actually useful to them. However, the thing I keep hearing in these discussions is that they are losing faith in Facebook and are not sure they can trust the company to protect them from nefarious actors who prey on them in multiple ways.

If you look deeply at Facebook's business model, the culprit, so to speak, that allows this unrestrained flow of information is the fact that Facebook is ad-driven. Everything they do is tied to the ways they can use data collected from their users to serve them targeted ads. This is at the heart of Facebook's overall financial growth and keeps them profitable.

In my conversations with Facebook and some Twitter users, one theme keeps coming up: they want these connections with friends and family, and they understand that ads are what make the service free to them. However, they want these services to protect their privacy and keep their data from being harvested and used beyond the bounds of an ad model they are willing to accept.

In a recent insider post about Apple's purchase of Texture, I pointed out that the acquisition of this magazine service is tied to magazines that are trusted sources. In most cases, the magazines follow highly accepted journalistic practices, and people trust what they read in them. I called Texture a "safe haven" for content and noted that Apple now has a type of service that will be important to their customers and extends the content reach of their ecosystem.

The idea of trusted content or sources is really important in this day of fake news. People are tired of being duped and are willing to pay for a trusted service like the one Texture delivers for $9.99 a month that gives them access to over 200 magazines.

Given what Facebook is going through and has had to deal with, perhaps it is time that they offer what I call a "trusted" or safe haven version of Facebook and do it as a subscription service. I realize this idea has most likely been kicking around inside Facebook as well as among other parties who look at Facebook's future. But given what happened recently with Cambridge Analytica, and the fact that Facebook does not seem to really know how to protect our data under their ad-based business model, I am beginning to think that millions of people may be willing to actually pay for an ad-free version of Facebook that does not use their data for ads and ad targeting.

So how much would Facebook have to charge each user? Chris Wilson, Director of Data Journalism at Time Magazine, answers this question in a recent post, and I share the relevant part of his article below:

“So how much would this cost users? Facebook estimates that it pulled down $20.21 in revenue per active user worldwide last year, for a total of $40.65 billion. That sum amply covered the $20.45 billion the company paid in costs and expenses. After taxes, Facebook posted $15.92 billion in profits. That per-user revenue was considerably higher in the U.S. and Canada, where a more developed and monetized audience netted $84.41 per user. That’s still less than what you might pay for Netflix or HBO, though perhaps more than many of Facebook’s 2 billion monthly users would be willing to shell out.

But even if subscriptions were prorated by market, and even if a privacy-positive network were to grow to Facebook-sized proportions, it would take less than $84.41 a month in the U.S. and Canada to turn a healthy profit. The trouble with relying on a lot of ads is that you need to store a lot of content. These days, that includes lots of video, with its lucrative “pre-roll” and “mid-roll” ads. To keep up with demand, Facebook has aggressively tried to get users and companies to post their content directly on the social network. When you get into the business of hosting large amounts of original content, however, your bill for storing and indexing rises quickly. Facebook doesn’t disclose its exact obligation for running massive warehouses around the world, but it’s safe to assume it’s a fair portion of its $20 billion in expenses. Without the burden of being a major content platform or pouring money into sophisticated algorithms to serve ads based on rigorous analysis of a person’s profile, we can estimate that $75 a year would cover the operating costs and generate a healthy profit.”

If Mr. Wilson's math is correct, a monthly subscription price of $6.25 that served a user no ads, but gave them the valued connections to friends, family, and the information they specifically want, would still net Facebook a healthy profit.

As the Time article points out, Facebook would have to drastically change their business model and cut the costs of hosting video-intensive ads for this to work. But I have to believe that for a significant portion of users, especially in the US, Canada, and much of the EU, a subscription-based Facebook could be acceptable.
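For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. The revenue and per-user figures come from the Time excerpt quoted above; prorating Mr. Wilson's $75-a-year estimate over 12 months is my own assumption about how a monthly price would be derived.

```python
# Back-of-the-envelope check of the subscription math quoted above.
# Revenue figures are from the Time excerpt; the 12-month proration is assumed.

total_revenue = 40.65e9       # Facebook's reported annual revenue (USD)
revenue_per_user = 20.21      # worldwide revenue per active user (USD/year)
us_canada_arpu = 84.41        # US/Canada revenue per user (USD/year)
wilson_yearly_fee = 75.00     # Wilson's estimate of a profitable yearly subscription

implied_users = total_revenue / revenue_per_user
monthly_fee = wilson_yearly_fee / 12

print(f"Implied active users: {implied_users / 1e9:.2f} billion")           # ~2.01 billion
print(f"Monthly subscription: ${monthly_fee:.2f}")                          # $6.25
print(f"US/Canada revenue per user per month: ${us_canada_arpu / 12:.2f}")  # ~$7.03
```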

If I were going to pay a subscription fee for Facebook, here is what I would want in that service:

1-Facebook would not serve me any ads of any type, would not use any of my personal data for that purpose, and would not share that data with advertisers, even for the market research purposes of any advertiser. My data is my data, and it is not for Facebook to use in any way other than to serve me.

2-I would want a secure connection to my friends and family and assurance that I have complete control of what I can send them and what they can see as part of this service. More importantly, Facebook would not profile me or my friends and family; it would simply allow me to communicate with them and share things of interest to me in a secure, safe haven environment.

3-I would want the Facebook messaging service to be encrypted and as secure as Apple’s current messaging service. While Facebook is the medium for making these connections, I do not want them to store that data or have any access to my messages for any purpose.

4-I would be open to them curating the kind of information I want to see based on my preferences. I am willing to let them know I like Golden Retrievers and stories about dogs. I am OK that they know I am a scuba diver and send me stories about scuba diving. But I would set the preferences for what type of information I am interested in, and I would want them, using AI and machine learning, to check articles related to my interests to make sure they are legitimate and not based on fake news.

This is the minimum I would want in a Facebook subscription that I pay for monthly.

Given Facebook's business model today, I doubt that they would create this kind of service, even if millions of us want this type of privacy and control. However, if they don't deliver a truly secure safe haven for existing Facebook users, give us more control of our data, make sure our data is protected, and shield us from fake news, millions upon millions of users will #deletefacebook and opt for some other social media platform like the one I describe above.

Don’t Forget the Baby Boomers When Creating and Marketing New Tech Products

Like many of my tech friends who are over 55 but started in the world of technology in our early 20s or 30s, I find technology to be second nature. Those of us who grew up with tech often forget that the large majority of people in the US and around the world, especially those over 50, have not been as fortunate as we have been. In most cases, they have only embraced a technology if it makes their lives easier or provides new kinds of services such as mobile telephony, instant messaging, and, in much older demographics, a lifeline to emergency services should they need it.

I have been reading a fascinating book entitled "The Longevity Economy: Unlocking the World's Fastest-Growing, Most Misunderstood Market" by Joseph F. Coughlin, who is the founder and director of the MIT AgeLab.

Mr. Coughlin states in his book that business leaders need to take more seriously the need to serve the growing older market, which he defines as a “vast, diverse group of consumers representing every possible level of health and wealth, worth about $8 trillion in the United States alone and climbing.”

The book is a fascinating read that delves into the fact that, with modern medicine, people are living much longer, are in many cases more active, and are becoming more interested in the role technology could play in their lifestyles.

The chart below shows that about 55% of folks over 50 in America have a smartphone, but I think that number is low. I have been inside some senior citizen facilities, and most of the folks there have a cell phone and, in many cases, a smartphone. The chart also shows that about 56% have laptops, and that too may be low, as we have seen laptops gain more acceptance in this age group. Even tablets have become of interest to those over 50, and in many instances the tablet is the personal computer for people in this age bracket.

The other area the over-50 crowd has embraced is social media. 65% of them are on Facebook, 21% on Instagram, and 24% on LinkedIn, while 19% are on Twitter.

As Mr. Coughlin points out in "The Longevity Economy," this demographic also has money to spend. The 50-65 demographic is in all likelihood still working and has more disposable income during these years, as the kids have left the nest and many are nearing the end of 30 years of house payments. The over-65 crowd is starting to move into retirement, and they often have extra income to spend on tech if it meets a particular need.

Even armed with this information and knowing that an older demographic could buy their tech products, most PC makers and tech vendors design and market their products for an 18-45 demographic. While some vendors do place ads in publications like AARP's magazine and other outlets aimed at an older demographic, those ads are an afterthought and not really part of a focused marketing push.

That is a mistake. This older generation needs technology more than ever as part of their aging lifestyle. This is especially true of things like smartwatches and fitness trackers with their added health-tracking features, as well as dedicated devices like the ones from AliveCor, which makes an EKG watchband for the Apple Watch and a mobile EKG device. Other vendors have connected blood pressure readers and various connected tools for monitoring blood sugar for diabetics and managing other conditions.

This market is too large to ignore, and all tech companies need to reassess a segment that could be very lucrative for them.

The Real Importance of Apple’s Acquisition of Texture

In late February, I started writing a piece entitled “Why Apple should buy Texture.” For months I had been studying Apple’s need to acquire more content and original programming. In my main Techpinions column on Monday, I laid out the challenge they have in this area given the strong investments competitors like Netflix and Amazon are making, especially in video content.

I did not add Texture to Monday’s column as I was mainly focusing on video, but last month, when I was researching this topic, I concluded that Apple needed to be more aggressive with their books and magazine services as they had been languishing way behind Amazon and Texture.

So I was not surprised when Apple SVP of Internet Software and Services Eddy Cue, during a session at SXSW on Monday, stated that Apple had bought Texture. In the column I was writing back in February, I reasoned that "Texture already had done the kind of legwork that would give Apple an edge in the publishing business and that this service would make them more competitive in the world of magazine publishing. It would also give them more content to add to their overall services offerings." The reason I did not finish the article at that time was that so many other major tech news stories and issues came up that took precedence, and I held it with the goal of publishing it later this month.

I am a big fan of Texture and have used it since it first came to market. It has close to 200 magazines that I can read for a price of $9.99 a month. While I subscribe to only 20% of what is available, the magazines I do get are of real interest to me. As a self-described foodie, I subscribe to all food magazines and read all of them each month. It has two specialty mags on Diabetes that I also read religiously. I subscribe to a couple of car magazines to keep up with another area of interest and download each month or week most tech and business pubs such as PC Mag, Fast Company, Bloomberg Businessweek, MacWorld, Wired, Fortune. I also subscribe to news and commentary magazines like Time, The New Yorker and The Atlantic, as well as a few sports mags such as ESPN and Golf Magazine.

What is unique about almost all of these magazines is that they adhere to high standards of journalism, and I have much more trust in what I read in them than in items posted on Facebook and Twitter, where so much is "fake news."

From my experience, when I am in Texture I feel like I am in a "safe zone," and while some of the more opinionated magazines could have some fake news in their commentary, for the most part I find the articles to be well researched and representative of the old-school journalism that I have grown up with and admired for decades.

Apple may not fully appreciate this "safe zone" concept that Texture provides and most likely purchased Texture to bolster their desire to be a bigger player in the publishing business. But for many Texture fanatics like myself, this is one of the real reasons I go to the magazines in Texture on a daily basis, besides the fact that it has publications of real interest to me.

Fake news on social media sites is not going away, ever. These sites are now abused in ways nobody imagined until recently. At the same time, people do want to read well-written, well-researched content that they can trust, and the magazines in Texture, for the most part, deliver on that promise. While the strategic goal for Apple will be to bulk up their publishing business and add more content for customers to broaden their ecosystem, I believe that for millions of people Texture delivers one of the best "safe havens" for quality journalism. And if Apple can maintain the quality of their magazine offerings, they will have another hit on their hands.

Apple’s Programming and Content Challenge

One of the most important growth businesses for Apple has been their services division. It brings in about $7.5 billion a quarter now, and it could be a Fortune 100 company if it were ever spun off on its own.
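As a rough illustration of why services alone could crack the Fortune 100, here is a simple annualization sketch. The $7.5 billion per quarter figure is from the text; the Fortune 100 cutoff used here is my own approximation, not an official threshold.

```python
# Annualize Apple's services revenue and compare it to a rough Fortune 100 cutoff.
# The ~$7.5B/quarter figure is from the text; the cutoff is an assumed approximation
# (entry to the Fortune 100 has recently required very roughly $20-25B a year).

quarterly_services_revenue = 7.5e9          # USD per quarter, per the text
annual_services_revenue = quarterly_services_revenue * 4

assumed_fortune_100_cutoff = 22e9           # USD per year, illustrative assumption

print(f"Annualized services revenue: ${annual_services_revenue / 1e9:.0f}B")  # ~$30B
print("Clears the assumed cutoff:", annual_services_revenue > assumed_fortune_100_cutoff)
```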

As I have been thinking about Apple’s services business over the last few weeks, two key conversations I had with Sony Co-Founder Akio Morita and Steve Jobs many years ago came to mind.

Not long after Sony purchased a movie studio, I had the privilege of interviewing Mr. Morita on one of my trips to Japan. Sony was known primarily as a hardware company that made TVs, portable music players, and stereo equipment at that time. I was curious as to why Mr. Morita bought a movie company, and he told me that he saw movies as just "digital bits" that represented important content which could be shown or used on his devices. Keep in mind this was over a decade before the idea of content tied to devices was really in focus, and it showed the incredible foresight Mr. Morita had as Sony's CEO.

It is sad that, once Mr. Morita retired, Sony's leadership never had the forward thinking he brought to the CEO role, and Sony lost their portable music lead to Apple and the iPod. They also missed out when it came to laptops, smartphones, and tablets. They are being challenged again in smart TVs in a big way, and even their game console is coming under greater pressure as more and more gamers move to PC gaming and leave their console game systems behind. Sony's constant restructuring and cost-cutting, and leadership that does not plan for the long range, will continue to put their market positions at risk.

Steve Jobs was a real fan of Mr. Morita, and he had a similar view of content being digital, especially music. On numerous occasions when I spoke with Jobs about his focus for Apple's future, he made it very clear that Apple is first a software company and that the hardware they create exists to be the vehicle through which their software and content are deployed. It is essential to look at Apple holistically, since their software drives hardware designs and also becomes the way they deliver content and services.

However, services have become even more critical to their overall business since it is not only a major revenue source, but it is one of the ways they are future proofing their business for the long run. Indeed, Apple’s goal is to use software, hardware, and services to tie people to their overall ecosystem and continue to give them solid reasons to either stay with Apple products or entice users of alternative operating systems to switch to Apple products.

Given that Jobs understood the role content plays in tying software to devices as part of Apple's ecosystem, it has been surprising how far behind its competitors Apple is in how much it invests in content beyond its current music offerings.

The chart below shows Apple investing about $1.0 billion in non-sports video programming in 2017, compared to Netflix, which spent $6.3 billion, and Amazon, which spent $4.5 billion. Netflix is said to be planning about 700 original series in 2018 and could spend up to $8 billion this year on programming alone.

Given Steve Jobs' strong position on content, and Apple knowing they need more of it to keep people in their ecosystem, this current spending on content and programming seems pretty unaggressive. That said, if you look at what they spend in contrast to competitors, and the fact that they need to be more aggressive in obtaining the kind of programming that will keep people coming to or staying in their ecosystem, it leads one to think that perhaps Apple has its eye on some bigger prize in the content space.

Apple could create more original content and also go after some existing shows to add to their video programming. However, it might make sense for Apple to take a page from Sony’s playbook and buy a major movie studio, or at the very least, perhaps acquire some dedicated production companies that already have proven content and the ability to create more shows quickly to help add to Apple’s overall programming for their customers.

However, with Amazon and Netflix also bidding for more content and pushing production companies to create new shows for their services, the competition for Apple to get great programming for themselves will be fierce. That is why buying a major studio with an existing library, and the means to create more original movies and TV shows might be the best way for Apple to gain more control of their content future.

Why Microsoft Should Fear Google’s Push Into the Enterprise

Most established businesses have grown up with Microsoft tools when it comes to business productivity. However, a younger generation of users seems to be using Google’s G-Suite offerings almost exclusively when it comes to creating documents, collaboration and many other forms of productivity in school and their early business lives.

My youngest granddaughter is in a public school that uses Chromebooks, and not one of Microsoft's tools is used for her assignments or homework. All of her work is done in G-Suite. As of now, she would not even know how to use Microsoft Word, Excel, etc. While she currently uses Snapchat to collaborate with schoolmates on joint assignments, Google's newest chat tools will make it easier for her to stay in G-Suite when working with classmates on a project, instead of jumping off to Snapchat to handle that part of the collaboration process.

I recently attended a G-Suite briefing at Google that covered three new updates to their Microsoft Office alternative; the number one thing users had asked for was the integration of this chat feature into the total G-Suite solution. At this briefing, they highlighted their recent partnership with Salesforce.com and pointed out that Salesforce, along with other major customers in the enterprise and education, drove the demand for chat to be integrated into the G-Suite collaboration tools.

This was the first time I got a chance to hear from and talk to the team behind G-Suite, and I saw how well the product is designed and how closely Google pays attention to their customers' interests and demands when it comes to adding new features and functions. While I had read about the Google/Salesforce deal, I was not aware of how encompassing it is in terms of how Salesforce will use Analytics 360 and G-Suite within their overall application.

I had a conversation with a high-ranking exec recently whose daughter also uses a Chromebook in her school, and he pointed out that she recently asked him to look at a doc she was working on and needed his input. He mostly uses Microsoft Office in his work and expected her to show him a Word document. But she pulled up Google's G-Suite and showed him the doc in that application, and a light went on in his head. At that moment he realized that this younger generation is growing up entirely without traditional Windows apps; by using Chromebooks in their schools, they are being conditioned to use these tools as they grow up and will most likely be using G-Suite when they eventually enter the business world.

After I left the Google briefing on the new additions to G-Suite, I realized just how serious Google is about going after not only education but also business markets. G-Suite is already a real competitor to Microsoft's Office and is the primary tool used in education today, especially where Chromebooks are being used. And we are hearing that by this fall, Google will have a massive marketing campaign pushing Chromebooks to business users and consumers.

Along with that push will come many new models from the top three PC makers, who are becoming more bullish on Chromebooks within their education programs. We also see these PC makers willing to be more aggressive in designing new versions of Chromebooks for business users. In fact, don't be surprised if at least one or two significant vendors become big proponents of Chromebooks and Chrome OS for business over the next few years, as they are seeing more interest in these types of laptops from IT departments who, like Salesforce.com, are starting to see the value of Chromebooks for their workforce.

That is why I also expect Microsoft to become even more aggressive with their Surface laptops and 2-in-1 products. The Surface has always been Microsoft’s way to try and compete with Chromebooks, and their Windows 10S software initially was explicitly focused on education. However, Windows 10S is being morphed into a broader version of Windows OS and is on track to become the core OS for Microsoft across all PC products in the future.

Given Google's stronger focus on Chromebooks and on advancing G-Suite to meet the demands of consumers and business users, Google has emerged as a compelling alternative to what Microsoft has provided the PC world for decades. While Google has a long way to go to catch up with Microsoft in terms of broad worldwide reach, I no longer think of Google as just another player with an alternative PC OS and Office competitor. Thanks to serious attention from the major PC makers and Google's own efforts to make their software applications better for business and education, Google has emerged as a force to be reckoned with.

Microsoft should be worried about Google's ability to challenge them in markets around the world and should be more aggressive with their Surface products and their Office and Windows OS evolutions to stay competitive. Google is in this for the long run and, at the very least, will keep Microsoft on their toes and push them to innovate. But I also see Google gaining ground on Microsoft and becoming a solid competitor in education, consumer, and business markets going forward.

Should Intelligent Agents Teach Us How to Use Them?

Not long after Apple introduced the Newton, their first personal digital assistant (PDA), in the early 1990s, it became pretty clear that this product's life would be a short one. While the concept of the Newton got real attention, its design and functions were weak, and in the end it did not work as Apple claimed. Its most significant problem was its handwriting recognition technology, which was deeply flawed. It did not work for a lot of reasons, the key one being that the mobile processors available at that time were incapable of handling the task with any level of accuracy or precision. And the software Apple used for it was inferior in its execution.

I remember flying to Chicago for the launch of the Newton at the request of then-Apple CEO John Sculley, who drove this project from the beginning. He introduced the concept of the PDA to a very broad audience and tried to make the Newton Apple's next big hit.

But at that event, when it was demoed, the handwriting recognition failed continually, and even though we were told it was an early version of the software, I had a strong feeling that this was a product Apple had overpromised and would unfortunately under-deliver.

The Newton had a short life, but during its early years Jeff Hawkins, whom I had met when he was at GRiD Systems, began working on his own version of a PDA that he called the Palm Pilot. In the early development stages of his PDA, Jeff invited me over to his office to see his mockup, which was a wooden block sculpted to look like what eventually became the first Palm Pilot.

During that visit we talked about Apple's Newton, and I asked him why he thought it had failed. He stated that while he was at GRiD, which introduced the first real pen-computing laptop, the GridPad, in 1988, he learned that when it came to pen input and character recognition, one needed to follow an exact formula and write the characters as stated in the manual.

At the time the GridPad came out, it too had a low-level CPU and was not able to handle accurate character recognition. He said that Apple was overly optimistic about Newton’s ability to manage real character recognition and with so many writing variables, it was doomed to fail.

That is why, when he introduced the Palm Pilot, he also introduced the Graffiti writing system, which taught a user how to write numbers, letters of the alphabet, and specific characters like # and $ in ways the technology in the Palm Pilot could recognize.

I was one of the first to test a Palm Pilot. I found that Graffiti was very intuitive, and within a couple of days I had mastered its characters, as long as I used the prescribed ways to write the letters, numbers, and symbols. By doing that, my input was translated into digital information on the screen in real time. One could call this a form of reverse programming: in this case, the machine was teaching me how to use it in the language it understood.

Fast forward to today, and I believe we have a similar thing going on with digital assistants whether they are delivered on a PC, smartphone, smart speaker, or smart TV. One big difference this time around is that the processing power, along with AI and machine learning, is making these digital assistants much smarter, but not always accurate.

In what I think of as a Graffiti-like move, Amazon sends me weekly emails that include over a dozen new questions Alexa can answer. This too is a reverse programming example, and by teaching me to ask Alexa the proper questions, I am assured of getting highly accurate answers.

Here are some of the new things Alexa can respond to that came in an email from Amazon last week:

* “Alexa, what’s on your mind?”
* “Alexa, what are you doing for Black History Month?”
* “Alexa, give me a presidential speech.”
In honor of President’s Day, listen to 2-minute speeches from past US presidents—search by decade or president and let the inspiration begin.
* “Alexa, what’s another word for ‘happy?'”
* “Alexa, who is hosting the Oscars?”
* “Alexa, play the Long Weekend Indie playlist from Amazon Music.”
* “Alexa, give me a quiz for Black History Month.”
* “Alexa, what can I make with chicken and spinach?”
* “Alexa, tell me a President’s Day fact.”
* “Alexa, call Mom.”
Try a new way of connecting with the people you love. Learn more about Alexa calling and messaging.
* “Alexa, test my spelling skills.”
* “Alexa, wake me up in the morning.”
* “Alexa, how long is the movie Black Panther?”
* “Alexa, speak in iambic pentameter.”
* “Alexa, how many days until Memorial Day?”

These prompts, which I get every week, allow me or any user to begin to understand the proper way to ask Alexa a question, with the bonus of suggesting questions that may be of interest and that will get an accurate answer. Doing this builds up a user's confidence in using, in this case, the Alexa smart assistant.

I have no doubt that as faster processors, machine learning, and AI are applied to digital assistants, they will get smarter. I suspect that more and more companies that create digital assistants will also start using Amazon's model of teaching people how to ask questions in the way their assistants want a query to be stated. It will also give users more ideas of things to ask, so they can get more accurate answers to their questions in the future.

Apple and iPhone Revenues

Why Apple dominates smartphone revenues but may need some lower priced models to keep growing their services business.

Not long after Steve Jobs came back to Apple in 1997, I had a couple of talks with him that ranged from how he was going to save Apple to what some of his guiding principles would be when it came to bringing Apple back from the brink of bankruptcy.
At the time he rejoined Apple, I had been spending time with then-Apple CEO Gil Amelio, looking at ways he could keep Apple afloat and get it out of its downward spiral.

In my first meeting with Jobs, which took place the second day he was in the role of CEO or interim CEO as he liked to say, he told me that one of the guiding principles for making Apple relevant again was to focus on industrial design. At the time I questioned this focus, but as history has shown, industrial design has indeed played a key role in bringing Apple back from the brink of disaster.

A few weeks later, when I bumped into him on the Apple campus, I asked a follow-up question to that first meeting about his view on margins. I had had this discussion with him before he left Apple in 1985, when his goal was to price any product Apple brought to market with margins of 22% and above. Interestingly, that seemed low at the time, as these were the early days of the PC and margins were closer to 35%-40% in those days. If I remember correctly, the Mac's margins were in that same range at the time too.

By the time Jobs returned to Apple in 1997, PC margins had shrunk to under 20%, and today, in some cases, those margins are closer to 5%. Apple's margins on Macs, smartphones, and most hardware products have consistently stayed in the 35-37% range.

I attribute this to two main factors:

1-Jobs' initial goal of having margins of at least 22%, which became ingrained in the leadership rules that guide Apple's current management team. To their credit, Apple's CEO and his team probably look at that 22% as the lowest margin they would ever accept on any product they ship.

2-Apple has always aimed to be the premium provider of any product they bring to market in any category. This is not news to anyone who follows Apple, since it has been in their DNA since the Mac was introduced in 1984. With a premium focus on anything they make comes premium pricing, and Apple makes no apologies for creating and delivering best-of-breed products in everything they ship. The new HomePod is an excellent example of this strategy.

Unlike other home speakers whose focus is on delivering an assistant in a low-end speaker design, Apple built a product true to its DNA, in which the quality of the speaker was critical to their premium design thinking. Yes, they have a lot of work to do to get Siri up to par with some of the voice assistants from competitors, but the AI engine is getting smarter, and with the dedicated specialty microphones tied to Siri, Apple's HomePod will get better over time.

The chart below emphasizes how Apple's focus on premium products and commitment to healthy margins affects their market position in smartphone revenue. Apple had the lion's share of smartphone revenue in the last reported quarter. Samsung was #2, with 15.7% of all smartphone revenue in the same quarter. Even the "other" category, which represents hundreds of millions of smartphones but with much lower ASPs, accounted for only 26.3% of revenue.

Apple's total revenue growth is also impressive. Apple's Q1 (Sept to Dec 2017) broke records again in almost all categories. This reflects Apple's focus on premium products with higher margins.

While I don't believe Apple will deviate much from the premium product and high-margin strategy that has served them well for decades, I have lately been wondering if we could see a bit of a shift in pricing over the next few years. If you look at the services category in the chart above, this segment of their business is growing rapidly. Services brought in $8.5 billion in revenue last quarter and continues to grow.

However, it is growing because it is tied to hundreds of millions of Apple devices, which include iPhones, iPads, iPods, and Macs of all flavors. For services to continue to grow, Apple needs to see hundreds of millions of new devices sold year after year.

Premium pricing serves Apple well today, but I believe there will be a time when Apple will need to rethink pricing for future smartphones, and perhaps even iPads and Macs, with lower ASPs and thus lower margins if they want to keep their services business growing. If and when that will happen is anybody's guess, but Apple and Wall Street understand that the services business is a critical part of Apple's future growth, and for it to keep growing it will need to be connected to more and more devices in the future.

Premium products and premium pricing can always drive the lion's share of profits, but I believe Apple may have to create a range of products with lower ASPs and slightly lower margins if they want to keep seeing their services business expand and remain a big contributor to their bottom line.

The Importance of Smart Speakers

One of the most important markets for the tech industry is the connected home. Connected thermostats, televisions, lights, appliances, security cameras, door locks, and the like have gained strong consumer interest around the world and are at the heart of making homes and even offices smarter.

I have been studying the connected home since 2002 and wrote one of the first reports on this idea in 2004, about having a home with devices connected to the Internet. In that report, I stated that while I saw the potential of connected devices, I believed they would only gain real traction when they had more processing power behind them, better connectivity, and central control.

In 2004, we did not have smartphones, smart speakers, or even any standardized wireless protocols that could deliver on the concept of the smart home, but I proposed that once that capability existed, it would need some hub to serve as a control center. The device I suggested could serve as this hub was the television. We were working with a semiconductor company that specialized in television processors at the time and had a vision for making the TV more intelligent. As I looked at their processor roadmap, I could see a glimpse of where they could go with these chips to deliver a TV connected to the Internet, and putting two and two together, I surmised that a TV, with the right intelligence and connectivity, just might be able to serve as the control hub of a connected home.

Fast forward to today, and TVs have indeed become smarter, with greater levels of connectivity. While they are placed in most homes in a central location that could serve as a smart hub, none of the TV vendors have designed them to be connected smart home control centers. Interestingly, I believe this was the thinking behind Steve Jobs' desire to create a TV. While Jobs told his biographer Walter Isaacson that his vision for the TV was focused on user interfaces and ease of use, I believe his vision was more of a Trojan horse, in that the TV could also become a smart hub for a connected home. Now, this is pure conjecture on my part, but I have spent enough time following Steve Jobs and tracking his motives that I could easily see he had more in mind than just a better UI for a TV.

The idea of a hub that sits at the center of controlling a smart home is more relevant today than ever. While I thought a TV was the most logical device to serve as that hub, it is clear now that the smart speaker has become the right device for that purpose. With a voice interface and connectivity, and placed at the center of a home's activity, such as the kitchen or the den, it is quickly becoming the best way to interact with, and control, the multitude of smart devices people are deploying in their homes these days.

But let’s be clear, we are still at the early stages of making homes smart and the very early stages of making smart speakers in one form or another the primary hub that controls the smart home.

The chart below puts this fact into perspective.

As stated above, this chart only shows the market before Apple entered the smart speaker market. It also shows that there is a lot of room to grow this market in the US.

The chart below shows the pricing of these speakers and shows Apple has some real headwinds when it comes to the HomePod's ability to gain market share in the smart speaker market. However, if Apple can sell a projected 5-6 million units in calendar 2018, they could end the year making the most money and profit in this market segment.
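To put that 5-6 million unit projection in revenue terms, here is a quick sketch. The $349 HomePod launch price and the $49.99 Echo Dot price are the only inputs; the comparison is purely illustrative and ignores margins and the rest of each vendor's lineup.

```python
# Rough revenue sketch for a projected 5-6 million HomePod units in 2018.
# Prices: HomePod at its $349 launch price, Echo Dot at $49.99 (used only as an
# illustrative low-end comparison point, not a forecast of Amazon's mix).

homepod_price = 349.00
echo_dot_price = 49.99

for units in (5_000_000, 6_000_000):
    revenue = units * homepod_price
    dots_to_match = revenue / echo_dot_price
    print(f"{units / 1e6:.0f}M HomePods -> ${revenue / 1e9:.2f}B in revenue; "
          f"Echo Dot would need ~{dots_to_match / 1e6:.0f}M units to match that revenue")
```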

As I have mentioned above, the real purpose of smart speakers, besides giving us information on demand, is really to serve as the control center of a smart home and this will become much more important as a plethora of smart devices flood homes around the world in the next few years.

In my home I have been using Amazon's Echo and Echo Dots, the Google Mini, and Apple's new HomePod for getting information, playing music on demand, ordering things online, and controlling lights and other connected devices around the house. By far the best device I have is Apple's HomePod, since it provides superior sound quality compared to the others I use, and I have found Siri to be surprisingly accurate since I started using it a few weeks back. As a hub, it works flawlessly with made-for-Apple-Home devices too. But so have the Echo and the Mini when it comes to controlling connected devices in their respective ecosystems.

Given what I believe was Steve Jobs' Trojan horse thinking about the TV as a control hub, and my belief that Apple has made the HomePod an extension of Jobs' TV vision, I think Apple also needs to follow what Amazon and Google have done and create mini versions of their larger speaker models. The fact is that a hub of this type needs to reside in other places in the home, not just one central location like the kitchen or the den. In my home, the larger speakers are in the kitchen, but I have the Google Minis and Echo Dots in our bedroom, my study, and even our master bathroom.

For Apple to make the HomePod a whole-home hub, they eventually need to build what I would call a mini HomePod, but with better quality speakers than those in the Echo Dot and the Google Mini. While the Google Mini and Echo Dot are $49.99, I would be willing to pay $99 for a HomePod mini if the speaker quality was at least four times better than what is in the Mini and Dot today.

I believe the battle to control the smart home will go through the smart speaker, since it will serve as the central control system of the connected home. In that sense, it also becomes a real ecosystem battle. But it starts with the quality and functionality of the smart speaker and the accuracy of its intelligence and its control functions. The smart speaker is much more than an intelligent speaker. It is on track to become the central controller of the smart home and serve a much greater purpose than just being a speaker and smart agent.

Is Qualcomm's Connected PC a Threat to Intel?

I have been spending a lot of time lately talking with clients and people in the industry about Qualcomm and Microsoft's push to create an ARM-based platform for Windows laptops. Although these two companies launched a Windows on ARM program four years ago, that initiative failed due to the underpowered ARM processors available at the time and a version of Windows that did not work well on those early ARM-based laptops.

The new program launched by Qualcomm and Microsoft in early December is called the Connected PC initiative and uses Qualcomm’s 835 and 845 Snapdragon processors along with a new version of Windows 10S that is optimized for use on these ARM-based laptops. As I have written recently, while the connected portion of this program is interesting, our research shows that actual demand for connectivity in a notebook was #6 on a list of preferred features when we surveyed people who were interested in buying new laptops.

Number one on this list was battery life. The good news for Qualcomm and Microsoft is that their Connected PC program also stresses long battery life, and Qualcomm expects that laptops using the Snapdragon 835 and 845 will provide at least 22 hours of continuous use. My own belief is that instead of touting "Always Connected, Always On" as their tagline, they should reverse the order to "Always On, Always Connected."

I also see this push towards all-day computing to be a potential game-changer for the PC industry and, if done right, could spur new demand for laptop refresh rates that could last at least three years.

While this push for all-day computing looks to be important to Qualcomm and Microsoft, it should be equally important to Intel if the concept of an all-day laptop has the potential to drive a lot of new laptop sales in the future. But herein lies the big question. Intel's PC processors have never really been optimized for long battery life, because performance has been at the heart of their CPU mantra. Indeed, as semiconductor manufacturing processes have gone from 22 nm to Intel's current 10 nm process, better battery life and more performance do come along. But performance has always topped battery life in their strategy.

The processor Intel would be pushing into this all-day computing genre will most likely be Lakefield and its future iterations. This is a very important mobile chip for Intel in that, while delivering solid performance, its second goal is to deliver longer battery life.

But herein lies the billion-dollar question for Intel and the industry. Can Intel compete on long battery life with Qualcomm's 835 and 845 processors, which Qualcomm states can deliver at least 22 hours of continuous use, and with Qualcomm's belief that, with new chips in the works, it could get to well over 30 hours by mid-2019?

Sources I have talked to who have tested the current Lakefield processor say that, at best, it can get perhaps 18-20 hours of continuous use if the conditions are just right. But when I asked these sources if they believe Intel could ever match the kind of battery life Qualcomm can deliver, they said they doubt it. Of course, Intel would argue that they will still have the edge in performance and might even say that in the real world people plug their laptops in overnight and nobody uses a PC for 22+ hours continuously.

While Intel's argument has merit, their ability to compete in the all-day computing thrust that I believe will jumpstart a new refresh cycle for the PC industry will depend on the answers to the following questions:

1-How much battery life does a user want? Is 18 hours enough, 22 hours, etc.?

2-Will Qualcomm's long-life processors have enough power to meet users' basic computing needs, i.e., video, web browsing, etc.? Or do users need more processing power to handle advanced graphics, extensive numerical calculations, etc.?

3-Will a 22-30+ hour battery life in a laptop be enough to cause people to want to refresh their older models that in most cases get less than 10 hours of battery life today? Is the prospect of heading off for the day and not thinking about even needing to carry a charging cable appealing enough to get people to start replacing their current laptops in large numbers?

4-How will long-life batteries in laptops influence design?

5-Microsoft’s significant endorsement of Qualcomm’s Windows on ARM speaks volumes about what seems to be a processor-neutral position Microsoft is taking when it comes to CPU support. Intel is where it is today in PCs because of Microsoft. Could Qualcomm ride this support from Microsoft and the Always On, Always Connected initiative to become a serious threat to Intel’s PC future?

These and other critical questions about the future of laptops are part of our research, and as we get answers, we will report back. My take is that Intel will be challenged by this Microsoft and Qualcomm initiative and could see their dominant role as the main CPU supplier for PCs contested by Qualcomm before long.

Can the PC Market Ever Grow Again?

One of the big questions we at Creative Strategies get asked by all of the big PC and semiconductor companies with skin in the PC game is whether the PC market could ever grow again. If you look at the Gartner chart below, you see that starting in 2012 the PC industry has been in significant decline. Since 2011, the PC market has shrunk by 32%, and while 2017 numbers are not in yet, we believe it was down another 3-4% last year. That is a huge drop in PC sales that has had a major impact on just about everyone in the PC ecosystem today.
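To translate that 32% cumulative drop into a yearly rate, here is a small sketch. It assumes the decline spans the six years from 2011 to 2017; that span is my reading of the text, not a figure Gartner published in this form.

```python
import math

# Convert the ~32% total decline since 2011 into an implied average annual rate,
# assuming the drop accrued over six years (2011-2017).

total_decline = 0.32
years = 6

annual_rate = math.exp(math.log(1 - total_decline) / years) - 1
print(f"Implied average annual decline: {annual_rate:.1%}")  # about -6.2% per year
```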

For the last seven years, when asked whether the PC market would ever grow again, our answer was always no. But recently, we have seen some significant technology become available that can be applied to PCs and that, at least for a few years, could reverse the decline and cause millions of users to upgrade their PCs even if they are only two years old.

At the heart of this new technology push is Qualcomm's "Connected PC" design that they launched in Maui last December, touting a more powerful Snapdragon processor with a cellular radio built into the overall chip design, making it always connected as well as powerful enough to run Windows 10S. Qualcomm and Microsoft hope to entice people to upgrade their PCs faster using the "Connected PC" idea, promising them a better overall experience since their PC, like a smartphone, would always be connected.

However, as I pointed out in my Techpinions column on Monday, our research shows that what people really want is longer battery life; it was the #1 requested feature in our recent survey on what people want when they buy a new laptop. Interest in adding a cellular modem to their laptop came in at #6 in this survey.

The good news for Qualcomm and Microsoft is that the other part of the "Connected PC" program is Snapdragon's ability to deliver long battery life in these new laptops. At the event, they stated that a "Connected PC" could deliver at least 22 hours of continuous use, and sources tell me that Qualcomm is working on more advanced processors that could give users closer to 30+ hours by early next year.

I see two major things happening that could drive greater demand for laptops by mid to late 2018. That means that perhaps by 2019, we could see new demand for PCs rise as people want laptops that deliver all-day computing. If so, this could be the catalyst for a major three-year refresh cycle for portable PCs and could see the PC market grow again.

The first thing in play is a major battle between Qualcomm and Intel over who delivers the ultra-long battery life people want if they are to upgrade their PC. This is where I see Qualcomm having a major advantage over Intel. From what sources tell me, Intel's most advanced mobile processors will be available mid-year and, at best, can deliver only 18-20 hours of continuous use. Qualcomm has already promised 22 hours with their current 835/845 Snapdragon processors, and I do not doubt that by late 2018 they could deliver up to at least 30 hours of continuous use.

That said, Intel will have an advantage from a performance standpoint, since all Windows operating systems and apps can run natively on their x86 processors. Microsoft is working very closely with Qualcomm to make a version of Windows 10S work very well on Snapdragon with minimal emulation. If Microsoft can deliver on this promise, it could help Qualcomm make significant inroads with PC vendors, who already know that the #1 thing their customers want is longer battery life and need to make that part of their product roadmaps by year's end.

The second thing I see happening is a huge marketing push by Qualcomm, Intel, Microsoft, and all PC vendors to launch and brand a new category of laptops, loosely called "All-Day, Always-On" laptops. As I stated in Monday's Techpinions column, I felt that Qualcomm's emphasis on the connected PC idea was, from a marketing viewpoint, off the mark, and that they should have led with the all-day computing message first and tied always connected to their overall design messaging.

If Qualcomm and Intel, along with Microsoft and the PC vendors, create all-day laptops and market the dickens out of these new systems, I suspect it will resonate well with business and consumer users, who have told us battery life is the most important feature they want in a new laptop. More importantly, all-day computers could be the real motivation for people to start upgrading their PCs faster than normal and get demand for PCs into positive territory at least through 2021-2022.

Why the Connected PC Initiative Misses the Mark

Last December, Qualcomm held a major media event in Maui, HI to launch what they call their Connected PC initiative. Qualcomm is best known for the cellular radios that are in almost all smartphones, and their new Snapdragon 835 and 845 processors are now capable enough to also power a laptop. The key idea is to add a cellular connection to laptops using their Snapdragon processors, thus making them "connected PCs," since those laptops would always have a connection via Wi-Fi or cellular, just as our smartphones do today.

Joining them in this announcement was Microsoft who strongly supported Windows OS on a Qualcomm processor, also known as Windows on ARM. If this sounds familiar, Microsoft launched a similar program with various ARM processor companies in 2014, but it failed since the processors back then were not powerful enough to handle Windows OS and Windows had to be run in an emulation mode which made these ARM-based laptops run sluggishly at best.

This time around, the processor Qualcomm is bringing to the table is fast enough to run Windows 10 even when, in some cases, it has to revert to emulation mode to do so.

As I sat through the major presentation by Qualcomm and Microsoft executives describing their new "Connected PC" program at the Maui event, my first thought was "is this just a new try at Windows on ARM?", remembering what a disaster that was the last time it was tried. But as I checked out the demos and did some one-on-ones with Qualcomm and Microsoft executives about what a more powerful Snapdragon processor and a tailored version of Windows 10 S created for this program could deliver, I saw that this idea had real merit and potential.

While in theory I like the idea of always being connected, anytime and anywhere, I knew from our research that cellular connectivity is not a high priority among the features people want in a laptop. Indeed, cellular modems have been available as laptop options for over ten years, and demand for this feature is very low.

Another good benchmark for measuring demand for cellular connectivity beyond a smartphone is the cellular activation rate of iPads. It turns out that around 50% of all iPads sold are bought with a cellular modem. But our research shows that less than 20% of the iPads with a cellular modem ever activate it.

The key reason for the lack of real demand for a cellular connection in a laptop or a tablet is the additional cost it adds to a person's cell phone bill. When I asked one major cellular carrier how they would price the connection on a connected PC, they said it would be an additional $10 to $12 a month, and data used on the laptop would count against the monthly data allotment the person already pays for.

I can imagine that a younger user who watches a lot of YouTube videos and accesses a lot of content on their laptop could go through the 22-25 GB of high-speed data on their "all-you-can-eat" personal plan in one or two weeks, at which point data speeds on both their smartphone and connected laptop drop to 128 kbps.
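To put rough numbers on that concern, here is a minimal back-of-the-envelope sketch in Python of how quickly video streaming can burn through a high-speed allotment. The stream bitrate, hours per day, and 22 GB cap are illustrative assumptions, not carrier figures.

```python
# Back-of-the-envelope estimate: days until a high-speed data cap is exhausted
# by streaming video over cellular. All inputs are illustrative assumptions.

STREAM_MBPS = 5.0     # assumed average bitrate of an HD video stream (megabits/sec)
HOURS_PER_DAY = 1.5   # assumed hours of streaming on the laptop per day
DATA_CAP_GB = 22.0    # assumed high-speed allotment before throttling to 128 kbps

gb_per_hour = STREAM_MBPS * 3600 / 8 / 1000   # megabits/sec -> gigabytes/hour
gb_per_day = gb_per_hour * HOURS_PER_DAY
days_to_cap = DATA_CAP_GB / gb_per_day

print(f"~{gb_per_hour:.2f} GB per streaming hour")
print(f"~{gb_per_day:.1f} GB per day, cap reached in about {days_to_cap:.0f} days")
```

With those assumptions, the high-speed allotment is gone in roughly a week, which is in line with the one-to-two-week estimate above; heavier viewing exhausts it even faster.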

Our research on demand for cellular in a laptop was done some time back, so early this year we updated the survey by asking people, "What are the three most important features you want in the next notebook or laptop you will buy?" Long battery life, more memory, and larger storage topped the list. Cellular connectivity came in farther down, at just over 20% interest, which pretty much maps to our iPad research mentioned above.

The good news for Qualcomm and Microsoft is that while both touted the "connected PC" initiative at the event, they also emphasized that these new Snapdragon processors could deliver as much as 22 hours of continuous battery life. In talks after the main announcement, their executives hinted that people could probably get even more hours of battery life depending on how OEM partners configure the machines and which OS versions they use from Microsoft.

My fear for both Qualcomm and Microsoft is that leading with the connected PC story, and the subsequent marketing pushes around that focus, will not drive the kind of adoption they hope to get from this program, and we could have another Windows on ARM failure in the works. The research we did a year ago, and again in the last week, shows that the real interest is in longer battery life. That is what would drive significant demand for Windows on ARM with Qualcomm this time around, provided it delivers the kind of performance they stated at the launch event in Maui in early December. They should brand this the "All Day PC" and make it the new battle cry for laptop upgrades going forward.

This is an important moment for the PC industry. While consumers like new designs that are thinner and lighter, as our survey points out, that is not what drives purchases of new laptops. Longer battery life, more memory, and more storage top their buying criteria. If Qualcomm and Microsoft, along with others, want to compete with a feature that may drive a new level of demand for laptops, they need to cater to these interests and position cellular connectivity as a nice-to-have for those willing to pay the connectivity tax their carriers will charge.

Are AR and VR the New VisiCalc of Our Age?

If you know your computer history, you know that the one product that put the PC on the radar of potential business users was VisiCalc. Created by two brilliant people, Dan Bricklin and Bob Frankston, in Boston in the late 1970's, it was the first spreadsheet designed for a personal computer. It was created for the Apple II, and it changed the way people did accounting work. More importantly, while the Apple II was viewed at the time as a hobbyist computer, an Apple II running VisiCalc found its way into the accounting offices of Fortune 500 companies and became an indispensable tool.

It was VisiCalc that caused IBM to kick off their PC project, which resulted in the birth of the IBM PC in 1981. IBM was highly interested in what VisiCalc could do on a desktop computer like the Apple II because IBM's history and DNA were rooted in the computational calculating machines used in the late 30's and most of the 40's to manage things like the US Census and IRS-related tabulations.

The real value of VisiCalc, and of subsequent spreadsheets like Lotus 1-2-3 and Microsoft Excel, is that they introduced the "what if" concept to working with numerical data. As these spreadsheets became more powerful, the "what if" question moved well beyond numerical equations and became central to all types of data-driven projects where asking it is key to gaining insight.

Over the years I have been fascinated with the role computers play in answering the "what if" query, since much of business productivity is focused on asking this vital question about whatever project people are working on. As I thought about the "what if" question in today's age of computing, it became clear to me that AR and VR technology is now being applied in many types of applications to answer it in very new ways.

I recently met with a Prague-based company called VRG (vrgineers.com) who showed me one of the most powerful VR headsets I have seen to date. More importantly, they showed me various VR apps used by auto designers, architects, and individuals who were creating products and wanted to see them in various forms or dimensions before building the final version. The architectural example was great: a person can apply different wall types, colors, or floor designs to a virtual room in a virtual house and work with the architect in advance of building the home to the customer's desires.

What was interesting is that VRG has licensed hand-gesture technology from Magic Leap, and this is by far the best way to interact with the data, as the current joystick-like hand controllers are less precise.

From an AR perspective, you get the same "what if" concept with AR apps like the one from Ikea. With this app, you can point your phone at an empty room and then, using a side menu of furniture, lights, and other items, drop them into the room to see what they would look like should you buy those products from Ikea.

To be fair, we have 2D apps that can do some of this now, but AR adds new forms of visualization to various "what if" scenarios, and with VR you can be inside the virtual house or virtual car being designed and see the final product or solution. In fact, I think of VR in this case as a WYSIWYG "what if" solution that adds more accuracy and new dimensions to the many "what if" scenarios for both businesses and consumers.

Computers are essential for helping with "what if" questions, and PCs were and still are key tools for working through "what if" scenarios. However, I see AR and VR as the next generation of technology, delivering even greater ways of answering new forms of "what if" questions over the next two decades.

How AMD CEO Dr. Lisa Su Has Made AMD Relevant Again

One of the most important leaders in tech today is Dr. Lisa Su, the CEO of AMD. Lisa Su became CEO three years ago, and since then she has turned the company around in big ways, making AMD a formidable competitor to Intel in PC CPUs and a force to be reckoned with in server chips.

At CES I had a chance to sit down and talk with Dr. Su and ask her about her last three years of leadership at AMD and why AMD has now become an even stronger force in PC CPUs, GPUs, and server processors. If you follow AMD, you know that before Lisa took over, AMD had some serious financial challenges. While it stabilized under former CEO Rory Read, AMD's strategy for the future was not clear, and the company was often guilty of overpromising and underdelivering.

Dr. Su told me that when she first took over as CEO, her mantra to the team was “don’t worry about the financials. Just focus on delivering great products.”
At the strategic level, she worked with her teams to create a product roadmap that reflected future trends and coalesced around their core competencies and told her team to “put all of your energy into building those products and concentrate on executing this visionary roadmap.”

She also made an important strategic decision that said: “AMD would concentrate on being a high performance computing company” which meant anything that did not meet this criterion would not pass muster with her.

One detour the company took during Mr. Read's leadership was to pursue what, at the time, seemed like a safe bet: development of a processor for tablets. As you know, that market just did not develop the way some had thought it would. Now, under Dr. Su's guidance, all work is focused on high-performance computing, designs, and platforms that demand more computing power.

Another observation she shared with me is that when she took over as CEO, the industry thinking was that demand for discrete graphics would decline as the graphics functions integrated into core CPUs improved. But as she pointed out, that has not happened, and demand for discrete graphics chips and graphics cards is on the rise. This is driven by higher demand for performance PCs used in gaming and, before long, by PCs and laptops that support 4K and eventually 8K displays.

One factoid from our research: we see more and more millennials who, in the past, were satisfied with gaming consoles moving over to high-performance gaming PCs in droves.

This trend has helped AMD grow their market in gaming PC’s and, given the interest in higher resolution screens, this should continue to help AMD drive higher sales in the gaming and high-performance PC space.

Three of their most recent products have already had a significant impact on AMD's bottom line and, more importantly, have helped them rise in both stature and acceptance in the eyes of their OEMs and business partners. AMD's EPYC server chips are world class in power and functionality and, as Dr. Su told me, "the network guys are jumping in with two feet" to buy them and use them in their server operations.

Their Ryzen CPU’s compete head-on with Intel and every week they get new design wins from most of the top PC makers who are adding Ryzen processors to their PC and laptop product mixes. In fact, I spoke with two of the top PC makers about AMD, and for the first time in many years they were extremely bullish about AMD and pleased with the new CPU’s they are making available to them.

Add to that their new Radeon Graphics chips and cards and AMD now has a trifecta of products that are competitive and industry leading in their scope and reach.

Intel will always be the dominant player in PC CPUs and server chips, but AMD has now become a world-class competitor to them. This type of competition is not only good for Intel in that it keeps them on their toes, it is also excellent for consumers, who now have powerful new alternatives when they shop for PCs.

As an analyst, I have covered AMD since its early days and often dealt with its colorful founder Jerry Sanders while he was CEO. AMD has always had great potential and promise, although the leaders who followed Mr. Sanders took AMD on many bumpy rides. My sense is that Dr. Lisa Su has brought a great deal of vision and discipline to the company and has set it on a path for steady growth. She has also emerged as one of the top spokespeople for our industry, and her leadership role in tech will only grow as she guides AMD in the coming years.

One last thought about AMD-

The fact that AMD has come back strong as a competitor to Intel can't be overstated. Intel is always aggressive with their roadmaps, but serious competition from AMD will only push them to be more competitive themselves. Also, the recent partnership Intel developed with AMD to include AMD's graphics processor in a co-designed chip could be a precursor of a tighter relationship between the two companies in the future. AMD's graphics chips are very powerful, and Intel does not need to reinvent them on their own given the other irons in the fire, in areas such as 5G and autonomous vehicles, that are even more important to their future.

Tech’s Role in Early Warning Systems

By now almost everyone is aware of the false incoming missile attack alert that was sent to people’s cell phones in Hawaii recently. The Filipino side of my family is in Hawaii, and I have worked with two of their governors and many of their business leaders on tech-related projects for 20 years. So the news of this “attack” was very personal for me.

The good news is that within 20 minutes, people were sent an update that the alarm was false and that they were safe. The bad news is that it scared people in ways that anyone who was not there could hardly imagine. I know of one person who just sat in the tub and prayed. In many other cases, I am aware of people who called relatives in Hawaii and around the world to tearfully say goodbye. In my case, I began praying for the safety of my family and all who were in the Hawaiian Islands, since the message that was sent suggested disaster was imminent.

We now know how this happened. A worker pushed the wrong button as part of a test and caused severe havoc among the people of Hawaii. The fact that there were no safeguards, and that a single button press could send this alert, is astonishing.

At a minimum, there should have been a dual-authentication process in place before that button could be pushed. And if the UI had been designed properly, it would have been tied to an authorization process that included confirmation from the military and government bodies chartered with monitoring threats like incoming missiles, or any other type of alert that impacts the people of these islands. This alert appears to have been tied to a system that was more like an Amber Alert than one tied to the government and military officials who would actually have a say in this type of matter.
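To illustrate what a dual-authorization safeguard could look like in software, here is a minimal sketch; the role names and the two-operator rule are my assumptions for illustration and are not a description of Hawaii's actual system.

```python
# Minimal sketch of a two-person authorization gate for sending a live alert.
# Roles and workflow are illustrative assumptions, not the real Hawaii system.

from dataclasses import dataclass, field

AUTHORIZED_ROLES = {"duty_officer", "state_warning_point_supervisor"}

@dataclass
class AlertRequest:
    message: str
    is_drill: bool
    approvals: set = field(default_factory=set)  # roles that have confirmed

    def approve(self, role: str) -> None:
        if role not in AUTHORIZED_ROLES:
            raise PermissionError(f"{role} is not authorized to approve live alerts")
        self.approvals.add(role)

    def can_send(self) -> bool:
        # A live (non-drill) alert requires every authorized role to confirm.
        return not self.is_drill and self.approvals == AUTHORIZED_ROLES


def send_alert(request: AlertRequest) -> str:
    if request.is_drill:
        return "DRILL ONLY: routed to internal test channel, no public broadcast."
    if not request.can_send():
        missing = AUTHORIZED_ROLES - request.approvals
        return f"BLOCKED: still awaiting confirmation from {sorted(missing)}."
    return f"BROADCAST: {request.message}"


# Example: a single operator's click is not enough to reach the public.
req = AlertRequest("Ballistic missile threat inbound. Seek shelter.", is_drill=False)
req.approve("duty_officer")
print(send_alert(req))                        # BLOCKED ...
req.approve("state_warning_point_supervisor")
print(send_alert(req))                        # BROADCAST ...
```

The point of the design is simple: a drill can never reach the public channel, and a live alert cannot go out on the say-so of a single operator.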

I know the current governor of Hawaii David Ige well since I worked with him on tech projects while he was a senator and again since he has become governor. I can see the pain and frustration on his face during the many press conferences he has held to discuss this problem and explain how the State will make sure this does not happen again. Governor Ige is an electrical engineer and understands technology well. I am certain had he known about the particulars around the UI of this warning system he would have had it changed well before this could have happened.

While the mistake was human error, it was preventable had the user interface of this system been designed with more safeguards and stronger IT oversight. This is as much a technology problem as one caused by human error. And Silicon Valley and the tech world need to think harder about how to help state and local governments deal with the magnitude of a serious attack should one ever happen.

The first thing tech needs to address, and perhaps lay out, is a set of guidelines for user interface design for disaster-related alerts. The fact that Hawaii did not have this in place suggests that a best practice in this area either does not exist or, at the very least, is not well known.

The world has become a more dangerous place in the last year, and for the second time in my life, I have had to deal with the threat of a nuclear attack. I was in grade school during the Cold War, and we had to have nuclear bomb drills during our school year. In hindsight, these drills were absurd since all we did was hide under our desks when the siren went off. Also in those days, by the time a missile was headed for us, the technology was not there to even give us the type of alerts that would allow us time to get to a local bomb shelter fast enough if there even was one nearby.

Now, with just about everyone having a cell phone or access to a radio and TV, an alert about something like an incoming missile can be sent in seconds. But an alert like this must only be sent if there is a real threat. The technology exists to make sure that is the only time it is used, but from what I have heard about a few other states' systems for handling these alerts, they are not that far off from what Hawaii had in place. One unfortunate outcome of this fiasco is that from now on the people of Hawaii will always question an alert like this, and for some, it could become a "sky is falling" message.

The second thing tech could help with is preparing people for what to do should a nuclear attack take place. Surviving a direct hit is not possible, but for many under threat and not in the direct line of the attack, moving to a basement, the center of a building or, if one is available, a bomb shelter could save their lives in the short term. Of course, radiation could have long-term effects, but I am told a preventative step like going underground would be the best thing to do if a danger like this is immediate.

At the very least, every state should have, as part of its alert system, clear instructions about what someone should do immediately if a missile or bomb threat occurs. And apps with these types of instructions could be downloaded in advance, giving people clear steps to take for their safety if an attack is imminent.

As I write about this topic, I still have a hard time with the idea that we could be closer to a nuclear confrontation than we have been in decades. I sincerely hope that cooler heads prevail and we avoid any nuclear attack or war at all costs.

However, given the nuclear arsenal that exists in rogue states as well as many other nations who use it as a deterrent, the threat of nuclear war is always a possibility.

Given this current nuclear climate, the federal government, the states and the world of tech need to be more aligned when it comes to creating and implementing the types of alerts that could help people deal with and survive an attack of this nature. They need to work together to educate the public on what to do should a real warning be sent and to make sure people are as prepared as possible should they ever face a genuine nuclear threat in their area.

Envisioning CES in 2024

Last week I was in Las Vegas to attend my 43rd Las Vegas CES. The show had 2.5 million square feet of exhibit space, and over 180,000 people attended to see the latest and greatest in technology. CES is one of the largest trade shows in the world, and most of us in tech have to go for many reasons. In my case, I have multiple meetings with clients and potential clients and, since I do my homework, I pre-select the key products I want to see in person.

The reason I can pre-select the products I really want to see is that, as a member of the analyst and press community, I received over 2,500 meeting requests that started showing up in my email in early September. While I just can't read all of them, I do look at each subject line, and if the product in that message is of interest to me, I briefly look at it and then put it on a watch list.

As I get closer to CES, I whittle down my watch list to the ones I want to see in person and add that to my actual meeting schedule.

Ironically, once I set my daily schedule at the show, it is pretty much taken up with meetings; this year I got only 30 minutes on the LVCC show floor and about 90 minutes at the Sands Convention Center to check out the actual exhibits. However, the show is so expansive now, with exhibits placed in two major venues and six satellite hotels, that even if I had spent 100% of my time on the show floor I would not have seen it all in the four days the show was live.

One other issue with the show floor itself is that, at any given time, at least 75-80% of the 180,000 attendees are in the LVCC or Sands, and the aisles are packed like sardines, so even walking from one exhibit to another is a challenge. As I tried to walk the aisles at the LVCC, the Yogi Berra line "it's so crowded nobody goes there anymore" kept ringing in my ears. CES is no longer just about consumer electronics, and as it has diversified and grown, it has become difficult for most attendees to see the show and especially to check out everything that would be of interest to them.

While CES will always have a place as a trade show, I believe that key technology first shown at CES in 2014, the year the Oculus Rift debuted at the show, will push CES and many others in the direction of what will be called virtual trade shows. The concept of a virtual trade show has been around for some time, but the technology has not been available to create the kind of experience that replicates being at the show and walking through the exhibits as if you were there.

Thanks to virtual reality goggles and platforms like Oculus, Vive and others in the works, I can envision that within the next 6-8 years it will be possible for every vendor to create a VR-based, 360-degree exhibit that anyone could visit on demand to experience the product or service they want to see, as if they were at the trade show itself.

Imagine a virtual CES. Using VR glasses, you would be able to walk through LG's video canyon, as thousands of us did at CES, and marvel at the technology behind it. Or you could stand in front of Sony's new 146-inch 8K TV and explore it as if you were at the show itself.
Indeed, every vendor could use VR tools and 3D cameras, as well as perhaps a dedicated PageMaker-like software solution one might call "TradeShow Maker," that would work with all VR glasses. These virtual VR exhibits could be tied to a virtual show, or a vendor could simply create a special exhibit and update it multiple times during the year as they launch products. I suspect that for more polished VR exhibits, professional-grade tools and services might be employed.

The economics of this are also interesting. If you were at CES and saw the huge exhibits from companies like Sony, Samsung, and LG, you might have wondered how much they cost. While I don't know the exact amount, I do know that just creating and staffing the giant exhibits runs well over $5 million, and I have been told that number might be very conservative. Add the booth space costs, hotels, travel expenses and wear and tear on exhibit staff, and these costs are nothing to laugh about.

On the other hand, creating a virtual exhibit, using VR tools and even professional media services to produce it, would probably cost no more than 20% of the cost of building these large booths, paying for the space and staffing them. And the fundamental designs of these virtual booths could be modified and reused. Even more interesting to the vendor: instead of showing their wares to 180,000 attendees and hoping the media gets the word out about their products, VR booths could be available to millions of people who would like to see an actual demo before they buy.

I became convinced that the idea of a virtual, VR-based trade show is in our future after I was shown how Cirque du Soleil used VR to put a person on stage and perform the show around them. You can watch this virtual show using Samsung's Gear VR or Google's Daydream to understand what it is like to have the acts performed a few feet from your virtual seat. While this represented virtual entertainment, my mind wandered to the idea of virtual product reviews and how that might impact trade shows in the future.

As for the meetings and interaction with people at these booths, video conference systems such as Zoom and Skype video make it simple to provide the interactive video component of a virtual trade show. While viewing the exhibit through VR glasses, a representative from the company could be doing the demo as if they were standing next to you at the show.

Of course, VR-based trade shows could have a huge impact on convention centers and venues such as hotel ballrooms where smaller trade shows are held. If we get the technology right and VR can deliver virtual trade shows of any size, vendors could eventually opt for this means of getting the word out on their products, and trade shows in general could become less important.

Now, this won't happen anytime soon. But after seeing the demo of HTC's Vive Pro and the vivid VR content it can deliver, I am convinced that the time will come when we can create virtual product pitches and use them as if they were part of a trade show.
Also, VR headsets are not close to being prime-time products for consumers. AR will be the next evolution, where mixed reality takes off, but I now believe that a killer app for VR just might be virtual product demos and virtual trade show exhibits that can be viewed on demand.

Each year I leave CES saying that this will be my last. It takes a toll on my body; each day this year I walked close to 10 miles to get to meetings and different venues and to see the exhibits I could. Perhaps my vision of a virtual CES is more personal in that I would prefer this way of seeing products to dealing with 180,000 people and enduring the costs of travel, hotels, meals, etc.

But if you study VR and have a bit of imagination, you can see that the virtual trade show concept I laid out is inevitable. The technology that could deliver this to the mass market may still be 8-10 years away, but I no longer think virtual trade shows are a pipe dream.
I believe VR technology will make virtual trade shows possible, expand many vendors' reach exponentially, and become one of the most important selling tools in their sales arsenal.

Health Related Game Changing Tech at CES

CES has become a zoo. 180,000 people jostling their way around 2.5 million square feet of exhibit space has become unmanageable.
Even harder is to try and find gems or game-changing technology that will have an impact on people in the way they work, play or learn.

So instead of taking a scattershot approach to finding game-changing products this year, I focused on one key area of interest, game-changing health technology, which I suspected I would find at the show if I looked hard enough.

Also, there is one area of health I have been looking at for a while: products that impact people's sight. I grew up with a blind maternal grandmother and understand firsthand the challenges that people who are blind or legally blind deal with every day of their lives. For years I have kept my eyes on technology that could in some way help people with this condition.

So, in advance of the show, I identified two products in this category that I wanted to check out. The first one is called eSight, a set of electronic glasses that let the legally blind see.

I went to their suite at the Palazzo to see how this worked. I was shown a set of electronic glasses that deliver a powerful level of magnification, so powerful that legally blind people can see things all around them and read material more clearly. In the demo I saw, a legally blind woman using these glasses could see the street from the 30th floor of the hotel and make out what people at street level were wearing.

eSight uses breakthrough technology that enhances functional vision for the legally blind. It does this with a high-speed, high-resolution camera that captures what a user is looking at in real time. eSight's algorithms enhance the video and project it onto two OLED screens in front of the user's eyes, delivering full-color video with impressive clarity and practically no delay.
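To make the general idea concrete, here is a minimal sketch of the capture-enhance-display loop that any head-worn electronic magnifier follows, written with OpenCV; the magnification factor and contrast settings are arbitrary illustration values, and this is not eSight's actual processing.

```python
# Illustrative capture -> enhance -> display loop, loosely modeled on how a
# head-worn electronic magnifier works. NOT eSight's actual algorithms.
import cv2

ZOOM = 3.0             # assumed magnification factor
ALPHA, BETA = 1.5, 20  # assumed contrast gain and brightness offset

cap = cv2.VideoCapture(0)  # a default webcam stands in for the headset camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    # Crop the central region and scale it back up to simulate magnification.
    ch, cw = int(h / ZOOM), int(w / ZOOM)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    zoomed = cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)
    # A simple contrast/brightness boost stands in for the real enhancement step.
    enhanced = cv2.convertScaleAbs(zoomed, alpha=ALPHA, beta=BETA)
    cv2.imshow("magnified view", enhanced)  # the headset would drive two OLEDs
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```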

The woman who demoed it for me and who is legally blind told me this had changed her life. It has given her new freedom and even will allow her to work in the future.

They are costly, though. They sell for $10,000, and as of now, insurance doesn't cover them. Some organizations that work with the blind and legally blind provide grants; in the case of the woman who was showing this to me, a grant covered the cost of her eSight glasses. This is an extremely important use of technology, and for those who are legally blind, these glasses are genuine game changers.

The second product of interest comes from OrCam (www.orcam.com). The person behind OrCam is one of the co-founders of Mobileye, the company Intel acquired for $15.3 billion to jumpstart its autonomous driving program.

The OrCam is a small camera that fits on the side of any pair of glasses and can look at any text and read it back to you in real time.
The device that houses it is the size of a medium-sized finger, and it reads printed or digital text. When I tested it, I pointed it at an article in a magazine, indicating with my hand which article I wanted it to read. Within seconds of seeing the article, it started reading it back to me through a tiny speaker at the end of the camera housing, speaking every word in a clear voice.

It can also recognize faces. Point the camera at a person in front of you, and you get real-time face recognition. You can use it to identify products, credit cards, money, etc. What is astonishing is that all of the processing is done on the device itself; all you have to do is point, and with a simple hand gesture it verbally gives you the information you asked for.
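Conceptually, this is an OCR-plus-text-to-speech pipeline. The sketch below shows that general idea using the open-source pytesseract and pyttsx3 libraries (which require the Tesseract engine and a local speech voice to be installed); it is only an analogue of the concept, not OrCam's on-device implementation, and the sample image name is hypothetical.

```python
# Conceptual point-at-text-and-hear-it pipeline: OCR a photo, then speak the result.
# An illustration of the general technique, not OrCam's implementation.
from PIL import Image
import pytesseract   # requires the Tesseract OCR engine installed locally
import pyttsx3       # uses the operating system's text-to-speech voices

def read_aloud(image_path: str) -> str:
    text = pytesseract.image_to_string(Image.open(image_path))
    text = " ".join(text.split())  # collapse whitespace from the OCR output
    if text:
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()
    return text

if __name__ == "__main__":
    print(read_aloud("magazine_page.jpg"))  # hypothetical sample image
```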

This device, which uses AI and OCR technology in a unique way, is ideal for blind and partially sighted people and can meaningfully improve the lives of those with serious vision-related problems.

It costs $2,500, and I also see it as game-changing technology for the blind and legally blind, one that is important in helping them live more independent lives.

My Researched Predictions for 2018

I’ve been writing a tech predictions column for over 30 years now. I study our research and look for trends and information that give me hints of what I believe might be the hot topics, trends or issues that will impact the tech industry in the coming year.
Here are what I believe will be some of the biggest trends and issues in tech for 2018:

Cyber Security Threats Become Even Worse
For the last five years I have led most of my prediction columns with this particular observation. But in the past, the cybercriminals were mostly small groups or even dastardly individuals. In 2017 we found that "state" actors have entered the scene, with what appears to be backing from countries like North Korea, Russia, and China, which have created actual "armies" of hackers trying to steal everything from nuclear secrets to bank codes, hacking into power grids, and stealing both money and personal identities. It is not a stretch to predict that this will get even worse in 2018 now that these hacking armies have learned how to game our systems; they will be even more aggressive in the new year, especially as we head into the mid-term elections next fall.

What makes this worse for us in America specifically is that we just don't have enough security talent available to counter many of these major threats. Without that talent and newer, more powerful cybersecurity tools, our banking systems, power grids, and other infrastructure remain highly vulnerable. I fear this will lead to many new hacking disasters in 2018, and I am not very optimistic that we can thwart all of the security threats that will be leveled at us next year.

The First Folding Smartphone
I have seen some very interesting prototypes during the latter part of 2017 that suggest we could see the first products that have what will loosely be called folding designs both in smartphones and tablets.

I think these types of foldable devices will have a significant impact in 2019. ZTE has a foldable smartphone out now, though it has little distribution in the US, and I consider it a good prototypical design of the idea. But we could see at least one foldable smartphone and one foldable tablet from major players late in the new year, setting in motion a new trend in mobile designs going into 2019.

Augmented Reality Will Not Have a Major Impact in 2018
Although Apple popularized the term and concept of augmented reality in 2017, it is pretty clear that augmented reality is more a work in progress than a product that will touch our lives in big ways anytime soon.

Many had expected that, since Apple introduced ARKit and Google introduced ARCore mid-year, we would see perhaps the first killer apps by the holidays. As of now, I have not seen what I or anyone could call a killer app that makes AR something we can't live without. I do think Apple and Google will drive growth in AR by using their smartphone platforms to get more people interested in the concept, but I am becoming more and more convinced that for AR to really impact our lives it will have to be delivered through smart glasses, and I don't see models that are consumer-priced and consumer-friendly before 2020.

On a related issue, we do see some real activity in VR recently, but outside of its use in gaming, which is consumer-friendly, the real interest in VR is in vertical markets. You should expect to see much more uptake of VR within these markets in 2018 as all types of industries experiment with VR technology to see how it could impact their workflow and potential profitability.

Always-Connected PCs Will Morph Into All-Day Computers
My PCMag colleague Sascha Segan and I were in Hawaii recently to attend Qualcomm's launch event for their Always Connected PC initiative. Qualcomm created a laptop design that uses their Snapdragon 835 mobile processor to power what they dubbed an "Always Connected PC." The premise is that with the Snapdragon 835, which includes the LTE radio in its design, people will be more likely to want a portable computer that is always connected and has at least 20 hours of battery life.

They may be right, but to date, laptops with LTE built in have not been big hits. In fact, in our research on iPads, we found that 50% of all iPads sold include the LTE radio chip but that only 25% of those machines ever activate the LTE radios inside.

I think the bigger story from this event and the one that probably should be their lead story is the incredible battery life one will get with this type of laptop. Imagine heading off for the day and not even having to think about carrying a power cord for the laptop since you know you will get at least 20 hours of real use on this type of portable computer.

I think what Qualcomm has done has broken new ground in PC designs, and while I like the Always-Connected PC concept, I think their push also to create what I call “all-day” computing may be their bigger contribution to the world of portable computing.

Social Media Sites Become Regulated
I know this might be considered a bold prediction, but my contacts in Washington say that legislators from both sides of the aisle are becoming greatly concerned about the negative impact social media has had on America, its election processes and the overall level of divisiveness in our worldview. Although people in Washington had hoped that Facebook, Twitter, and Google would police themselves, I continue to hear from Washington insiders that they don't believe these companies can do that without some government intervention. While full regulation is probably not likely, I would not be surprised to see some legislation put forward to rein in these social media sites and force some restrictions on them, especially when it comes to our mid-term elections and areas where hate groups are free to post anything they want without any serious checks and balances from these social media giants.

Why Always-Connected PCs Will Morph Into an All-Day Battery (and Beyond) Focus

I was in Hawaii recently to attend Qualcomm's launch event for their Always Connected PC initiative. Qualcomm created a laptop design that uses their Snapdragon 835 mobile processor to power what they dubbed an "Always Connected PC." The premise is that with the Snapdragon 835, which includes the LTE radio in its design, people will be more likely to want a portable computer that is always connected and has at least 20 hours of battery life.

They may be right, but to date, laptops with LTE built in have not been significant hits. In fact, in our research on iPads, we found that 50% of all iPads sold include the LTE radio chip but that only 25% of those machines ever activate the LTE radios that are inside.

I think the bigger story from this event and the one that probably should be their lead story is the incredible battery life one will get with this type of laptop. Imagine heading off for the day and not even having to think about carrying a power cord for the notebook since you know you will get at least 20 hours of real use on this type of portable computer.

I think what Qualcomm has done has broken new ground in PC designs, and while I like the Always-Connected PC concept, I think their push also to create what I call “all-day” computing may be their more significant contribution to the world of portable computing.

I believe the concept of an all-day computer could become the next big thing in laptops. While some of our ultra-thin notebooks can squeeze out 12-14 hours of use, if you do any serious video processing or use the laptop's graphics processor often, total battery life is more like 5-7 hours at best. I have at least seven ultrabooks or ultra-thin laptops, and with videos playing and a graphics-based game running in separate windows simultaneously, I am lucky to get 5-6 hours from any one of them.
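The arithmetic behind those numbers is simple: runtime is roughly battery capacity in watt-hours divided by average power draw in watts. A minimal sketch, where the 50 Wh capacity and the per-workload draw figures are illustrative assumptions rather than measurements:

```python
# Rough battery-life arithmetic: hours is roughly watt-hours / average watts drawn.
# Capacity and power-draw figures are illustrative assumptions, not measurements.

BATTERY_WH = 50.0  # assumed battery capacity of a thin-and-light laptop

workloads = {
    "light productivity (web, docs)": 2.5,   # assumed average draw in watts
    "video playback": 4.0,
    "video + GPU-heavy game in parallel": 9.0,
}

for name, watts in workloads.items():
    print(f"{name}: ~{BATTERY_WH / watts:.1f} hours")
```

Under those assumptions the same battery yields roughly 20, 12, and 6 hours respectively, which lines up with the ranges quoted above and explains why heavy graphics use cuts battery life so sharply.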

While 20 hours of battery life will become the new rallying cry for all laptop makers over the next two years, the terms "Always Connected" and "All Day computing" do not roll off the tongue. I believe that Qualcomm, Intel and the industry in general need to come up with a reliable, identifiable name, as they did with the terms netbook and Ultrabook, to define a specific type of new laptop design that can work all day even if you watch videos, play games or do productivity work.

Since Qualcomm and Intel believe connectivity is key to all-day computers, a name that either directly states or hints at connectivity should be part of this portable computing moniker. However, I am still not convinced that people will activate the LTE connection, given that laptops and even iPads that include it today have low activation rates. On the other hand, I am convinced that an all-day, 20+ hour laptop will resonate big time with mobile users, and in that sense this should be the real focus of any device and dedicated naming scheme related to this new type of mobile computing experience.

Apple's Acquisition Strategy Boosts Its Earnings Potential

Very soon, Apple will make its 100th acquisition since it bought NeXT Computer in 1996. That deal was done by then Apple CEO Gil Amelio, and just before he made it, he asked me my thoughts about buying the company from Steve Jobs. At the time, because I was helping Mr. Amelio with Apple's mobile strategy, he and I talked weekly about his goal of reviving Apple. Long-time Apple watchers will remember that Gil Amelio was on Apple's board when the company had lost its way and was over $1 billion in the red. When the board ousted Apple CEO Michael Spindler, Mr. Amelio was asked to become CEO and try to turn the company back to profitability.

When Gil Amelio told me about the idea of buying NeXT, I have to admit that I was pretty skeptical and was not sure it was a good idea. But as he shared with me how he thought Apple could integrate the NeXT OS into Mac OS and make it even more powerful, he began to win me over. But when the deal was announced, the overall market perception was that the deal made no sense and many were concerned, yet intrigued with the idea that if Steve Jobs got anywhere near Apple, he could find a way to gain more influence on Apple’s future.

Well, history has shown that the decision by Mr. Amelio and Apple's board to buy NeXT and bring Steve Jobs back to the company he co-founded was a stroke of genius; thanks to Steve Jobs, Apple was put on a fast track to becoming one of the most profitable companies in the world today. The move to buy NeXT also set the tone for Apple to become very aggressive with their M&A strategy, and given that they have over $240 billion in the bank, they certainly have the money to buy IP and technology companies to bolster their overall hardware, software, and services platform.

CB Insights has charted the path Apple has taken with their M&A activity since 1996, and last week's apparent purchase of Shazam for a reported $400 million is another excellent example of how Apple uses acquisitions of IP and whole companies to help maintain an edge when it comes to innovation.

This does not mean Apple does not innovate from inside the company. Indeed, Apple files hundreds of patents based on internally created IP each year and continues to develop new technologies and intellectual property for all of their products and services on a regular basis.
But their acquisitions are strategic and are used to bolster their overall IP portfolio in one way or another.

Recently, Apple committed $1 billion towards new types of partnerships and allocated $100 million of that to a joint venture with Corning Glass. This investment apparently is to create a new facility in the South that presumably will be used both for R&D and for some manufacturing that has not been detailed yet by either company.
At the same time, we expect Apple to expand their overall M&A activity in the next three years as they seek ways to tie more customers to their ecosystem.

While the need for innovation in hardware and software is always part of their overall strategy, Apple will also probably be aggressive with M&A activity around their services business. One way to look at Apple's overall approach would be to call it One Apple: an environment where hardware, software, and services encompass a total package that Apple's customers buy into to get the best integrated experience within their digital lifestyle.

This is becoming more evident with the continuous growth of their services business. Services account for around $30 billion of total revenue and are growing each quarter. This is extremely important for Apple's future.
Apple sells about 16 million Macs a year, and that number stays pretty constant. And iPhone sales, while still strong and growing, need more and more content and services behind them if Apple wants to attract more switchers and keep current customers in their ecosystem on a continual basis. This in turn drives services revenue and, on the whole, keeps Apple one of the most profitable companies in the world.

While it is hard to predict what type of mergers and acquisitions are next, we know that Apple is highly interested in AI, self-driving cars and LIDAR, enhanced voice and text recognition, music and streaming media, as well as new ways to strengthen their CPUs, GPUs and various radio technologies. I sense that Apple will be particularly aggressive in acquiring AI-based IP, as this technology will be critical for Apple to stay competitive and offer more innovative products and services in the near term. I also believe that Apple will do even more M&A in the next 18 months, as they will be under even more pressure from Google, Microsoft, and Amazon, especially in the areas of AI and media services.

Apple’s pile of cash in the bank gives them an excellent position to accelerate their M&A activity, as long as it is strategic for the business. That is why I expect them to be more active in this area in 2018 and early 2019.

STEM and STEAM Gifts for the Holidays

One of the areas of great interest to me over the last five years has been the movement to help kids gain interest in Science, Technology, Engineering, and Math. Recently, the groups advocating for STEM education have added an "A" to this moniker, arguing that the arts and creativity are also important to round out a tech-focused education; the newer acronym STEAM now describes this educational focus.

In past columns, I have chronicled how the SF 49ers have made STEM education a key part of their new stadium's museum and written about "How STEM skills are the next great equalizer."
I have also profiled how Chevron is helping fund STEM education, and through these columns I have emphasized how important I feel STEM and STEAM are for the educational future of our youth.

My interest in STEM and STEAM has led me over the years to look for STEM gifts during the holidays for my two granddaughters and my nieces and nephews, and to compile a short list of products that would make great gifts for both boys and girls.

Here are a few of the products for this year that I believe will help any kid gain more interest in STEM and STEAM and could provide hours of learning fun and perhaps get them interested in STEM and STEAM careers in the future.

I like this one for the under-five age group, with a high-tech take on learning their ABCs:
Codebabies' ABCs of the Web picture book. Start early with this alphabet picture book — written by a web designer — that aims to introduce the under-fives to "the language of the web." So instead of "A is for Aardvark" you get "A is for Anchor tag."

Smart Gurlz (recently seen on Shark Tank) currently available online only. Teaches girls how to code using self-balancing robots and action dolls via mobile devices. SmartGurlz helps girls 6 and up and is a great way to get them interested in STEM.

StemBox. Subscription service, available in 3-, 6- or 12-month subscriptions and priced between $87 and $300. StemBox is geared to girls and designed to be engaging for ages 8 to 13. It helps them develop an emotional connection to STEM and hopefully encourages them to gain greater interest in the sciences.

Creation Crate. Creation Crate drops the technical lingo and increases in difficulty each month so that users can be fluent in the language of technology by the end of the 24-month curriculum. Projects range from building a mood lamp to a programming-focused memory game to learning how to read input from an ultrasonic distance sensor. Unlike other technology subscription boxes, they use raw electronic components and teach real-world skills. The boxes are designed to be beginner-friendly, with no previous experience needed. Subscriptions start at $30 a month, with 3-, 6-, or 12-month packages to choose from.

Wonder Workshop. Wonder Workshop uses what they call CleverBots to teach early robotics and interactive programming. They are packed with technology that helps kids develop critical problem-solving skills through challenging educational projects designed to make learning to code fun. Most of their bots are for ages 11+.

Thimble. Thimble also uses electronics to teach robotics and programming, with a 1-, 3-, 6- or 12-month subscription service. There are a dozen projects to choose from, and with each project you get the proper components and an online learning platform; they even have a forum where kids can exchange ideas, collaborate and help each other.

KiwiCo – Subscription service for STEAM (it offers art as well).
KiwiCo offers a range of products from infants to high school, with monthly, 3-, 6- and 12-month subscriptions for about $20/month. This is a much broader service for all age groups, and you can pick the projects you want to work on. For kids 24-36 months old, the focus is exploring and learning. For ages 3-4 the projects are about playing and learning. Ages 5-8 have projects aimed at science and art, and ages 9-16 include projects for art, design, science, and engineering.

Circuit Cubes from TenkaLabs. Circuit Cubes teach kids the basics of circuitry while they're engaged in creative STEM play. Kids learn how to complete circuits to light an LED, power a geared motor, and see how serial and parallel circuits create different effects in their projects. They integrate with LEGO-style bricks for endless projects. Ages 8-12.

Barbie STEM kit – Thames & Kosmos/Mattel, ages 4-8. When my granddaughters were younger, they were Barbie fans and would have loved these Barbie STEM kits. There are seven different projects to build with the kit, ranging from a spinning closet rack to a gear-based washing machine and a greenhouse. There are even some specialty kits, including a Barbie Crystal Geology set and a Barbie Fundamental Chemistry set. It is a great example of learning about STEM while playing with a beloved figure.

Code Kit from LittleBits. Since I first heard about LittleBits, I have been a big fan of their STEM kits. One new kit from them geared towards learning about electronics is the Code Kit of snap-together magnetic Arduino modules, or "bits." The idea is to simplify breadboarding and never need to get out the soldering iron. The bits are then connected, via computer, to a block-based graphical coding environment so kids can play around with and program the hardware.

Lego Boost Creative Toolbox building & coding kit. What kid does not like Lego blocks? Lego understands the STEM movement well and has created the Lego Boost Creative Toolbox, a robotics and programming system aimed at kids seven and older. With this toolkit, kids can build and customize a robot and learn how to code its movements and navigation. It has drag-and-drop icons for easy programming and teaches kids the basics of robotics and coding.

Last but not the least is one of my favorites:

STEAM Kids ebook. A year's worth of captivating STEAM (Science, Technology, Engineering, Art & Math) activities that should provide hours of fun. This is a downloadable book with projects in each area designed to engage parents and children in new areas of discovery and skills. Books are sold individually or in bundles, including specific books for holiday-themed projects (i.e., Christmas, Valentine's Day, etc.). For ages 4-12. $14.99. Comes in both eBook and traditional book formats.

Monitoring Heart Health

Long-time readers of my column will know that I suffered a heart attack in 2012 and underwent a triple bypass. As you can imagine, this was a serious operation brought on by long hours, extensive travel, not eating correctly and minimal exercise over a 25+ year period. The good news is that when the heart attack struck, I knew what was happening and got to the hospital in time for them to stabilize me and start preparing me for open-heart surgery within 36 hours of the actual attack.

But from that point on, I was and still am a heart patient. Even though the surgery corrected the main issues with three of my arteries, I am still an at-risk person and have to closely monitor things like blood pressure, cholesterol, heartbeat, etc.
One other thing that could be an issue, but hasn't been so far, is something called AFIB, or atrial fibrillation, an irregular heartbeat that can lead to other serious issues related to my heart and health. AFIB is the leading cause of strokes and is responsible for approximately 130,000 deaths and 750,000 hospitalizations in the US every year.

Until recently, the only way I could get this tested was to go to my doctor's office, which I do twice a year, and have an EKG, which charts my heart rhythm and looks for abnormalities such as AFIB. But earlier this year I was sent a product from AliveCor to test. It is a small mobile device on which I place my fingers or thumbs; it registers my heart rhythm in detail and sends a signal to my iPhone that produces an actual EKG reading.

This device is FDA approved and allows me to take a personal EKG to check for AFIB or any heartbeat irregularities anytime I want.
This mobile solution also has an important option to get an expert to read the EKG should you see something in the chart that looks different or abnormal. The two options are to have a clinician read it and give feedback for $9.99 or get an actual MD to look at it and advise for $19.99.
Thankfully, all of my readings over the year were normal, and I have not had to call for outside analysis.

On Nov 30, AliveCor introduced a new way to do this in the form of a watch band for the Apple Watch. While their KardiaMobile reader works well, it is another thing I have to carry with me if I am going to do this daily, especially while on the road. Called the KardiaBand, it sells for $199 and requires a $99-a-year subscription, but I consider this a small price to pay for the ability to have early warnings of AFIB and to do an EKG easily, anytime I want. I have been testing the KardiaBand for about a week now and, like the KardiaMobile device, it monitors my heart rate and gives me an EKG reading on demand. But since I am wearing the band, it is a bit easier than digging out the KardiaMobile device and using it, which means I can take readings more often to stay on top of my overall heart health.

I realize that this will probably get more attention from an older audience or people with Type 2 diabetes and high blood pressure, as AFIB is a leading cause of strokes, and watching for any changes in EKG readings can and will save the lives of high-risk people. However, I have friends who had a stroke in their 20's and 30's, and if any heart disease runs in your family, the KardiaMobile reader, which costs $99, or the KardiaBand should be considered as part of your overall health monitoring program.

Also on Nov 30, Apple introduced an important heart study they are doing in conjunction with Stanford that uses the Apple Watch to run a similar EKG-like test to check specifically for AFIB. https://www.apple.com/newsroom/2017/11/apple-heart-study-launches-to-identify-irregular-heart-rhythms/

Since this is a study, it does not need FDA approval, but the program does provide direct contact with a physician should the Apple Watch, through this special study monitoring program, detect any abnormalities in your heart readings. At that point, you will be notified that there might be a problem, and they will send a special patch that you wear for seven days to monitor your heart readings 24/7 and get a more precise analysis. If AFIB is detected, they will have you see a doctor or cardiologist as soon as possible.

According to Apple, “To calculate heart rate and rhythm, Apple Watch’s sensor uses green LED lights flashing hundreds of times per second and light-sensitive photodiodes to detect the amount of blood flowing through the wrist. The sensor’s unique optical design gathers signals from four distinct points on the wrist, and when combined with powerful software algorithms, Apple Watch isolates heart rhythms from other noise. The Apple Heart Study app uses this technology to identify an irregular heart rhythm.”
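At its simplest, spotting an irregular rhythm comes down to looking at how variable the time between successive beats is. Here is a heavily simplified sketch of that idea; the threshold and flagging rule are my assumptions for illustration and bear no relation to Apple's or AliveCor's actual algorithms.

```python
# Heavily simplified illustration: flag possible rhythm irregularity from the
# variability of inter-beat intervals. Thresholds are arbitrary assumptions,
# not Apple's or AliveCor's algorithms, and this is not medical advice.
from statistics import mean, pstdev

def irregularity_score(beat_times_s: list[float]) -> float:
    """Coefficient of variation of the intervals between successive beats."""
    intervals = [b - a for a, b in zip(beat_times_s, beat_times_s[1:])]
    if len(intervals) < 2 or mean(intervals) == 0:
        return 0.0
    return pstdev(intervals) / mean(intervals)

def looks_irregular(beat_times_s: list[float], threshold: float = 0.15) -> bool:
    return irregularity_score(beat_times_s) > threshold

# Example: a steady 75 bpm rhythm vs. one with erratic gaps between beats.
steady = [i * 0.8 for i in range(30)]                      # 0.8 s between beats
erratic = [0.0, 0.7, 1.9, 2.3, 3.6, 4.0, 5.4, 5.7, 7.2]    # uneven spacing
print(looks_irregular(steady))   # False
print(looks_irregular(erratic))  # True
```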

Apple’s interest in health stems from Steve Jobs’ own health issues. As he became more in tune with how important it is for people to find proactive ways to monitor and influence their own health, he made this one of the tenets of Apple’s overall vision. As I have often written over the last few years, Apple is serious about helping their customers stay healthy, and this Heart Study is another sign of that commitment.

Will Apple Use Tax Breaks to Create More Jobs?

If you keep an eye on what the financial analysts are saying about Apple these days, you know that almost all have raised their stock price targets closer and closer to the $200 per share range. Almost all are bullish, and some believe Apple’s new fiscal year will break all records and that we could see Apple become the first company ever with a trillion dollar valuation sometime in 2018.

Our research continues to show high demand for Apple’s iPhones and Services, and the upcoming $349 HomePod can’t help but move them closer to that valuation. They have about $250 billion in the bank and are increasing that cash on hand every quarter. But if you think Apple is sitting pretty now, just imagine the position they will be in once the Republican tax cuts take effect and their tax rate drops to about 12%-14%. When this happens, they can repatriate billions in cash held overseas at a highly reduced rate of around 12% instead of the hefty 30+% it would cost to bring any of that money back to the US today.

One of the beliefs of the Republicans who passed this tax cut bill is that if corporate taxes are reduced, and companies like Apple can bring billions of dollars back into the US, they and others will build new factories and create new businesses which in turn will create new jobs.
History suggests that tax cuts like this could work if companies put that cash into building new factories and starting new businesses, but in many cases that is not what happens.
In a lot of instances, they use the surplus cash to buy back more of their stock and then increase the dividends they pay to shareholders. If you are a shareholder, this is good. But if you are an ordinary citizen who does not own stock in these companies, it has minimal impact on you and your financial situation.

I can’t speak to how other big companies will use their new cash-rich windfalls, but in Apple’s case, they have so much money in the bank now that if they wanted to build new factories or hire more people, they could already do that. While I do think Apple may invest in more factory partnerships with suppliers, I highly doubt that many will be in the US. Most of their suppliers are outside the US, and even for the ones based here, most of the manufacturing is done overseas.

As for hiring more people, Apple’s business is growing rapidly, and they are hiring as fast as they can. Also, most of the talent Apple is looking for is in programming, engineering and other highly skilled positions. At the moment, all of their US facilities are jam-packed, and they are planning to build at least two significant new complexes in Silicon Valley; as each goes up, it will fill as fast as possible.

So if Apple already has gobs of money to work with and will now get more via tax breaks and overseas fund repatriation, how will Apple spend this extra cash? Most financial analysts I have talked to see Apple buying back more of their stock and then increasing the dividend to shareholders. Yes, some money will undoubtedly be applied to R&D and perhaps expanded investments through acquisition, real estate and building new properties to house more workers, but as I said, they already had enough money to do that even without the tax break or cash repatriation program.

While those behind the tax cuts would like Apple and other tech companies to hire unskilled labor, the fact is that most of the people Apple and many other tech companies need are trained programmers and engineers. And the idea that US companies will build more US factories is probably also a non-starter; anyone who studies worldwide manufacturing knows it is simply not economical, in most cases, to build the kinds of manufacturing plants those who voted for these tax cuts want in the US.

It will be really interesting to watch how Apple and other companies use these big tax breaks and whether the money will be used to hire more workers. But if history is our guide, that just might not happen the way our legislators hope as they hand American corporations these big tax breaks.

The Mini OS Wars

During most of my time covering personal computers, the OS Wars were basically between Apple’s Mac OS and Microsoft’s Windows OS.
The battle between these two operating systems has been fierce at times with loyal users on each side swearing by their preferred operating system.

Then came the mobile OS wars, and while at first there were at least 4 or 5 mobile operating systems vying for supremacy, today the real battle is between Apple’s iOS and Google’s Android. But we are about to move into a new era of what I call the mini operating system wars, designed to work with a new class of CPUs that run devices at what is called the edge.

Edge computing has become a hot term in tech; it refers to actual devices that sit at the edge of a cloud-based solution. For example, an edge device could be a smart thermostat, smart light, smart refrigerator, or in a smart-city example, smart parking meters, smart light posts, etc. For decades we have had many devices at the edge that used sensors in cars, appliances, etc., but their intelligence was controlled by some other internal CPU, and they ran a tiny OS called a real-time OS (RTOS).

However, Qualcomm, Intel and other makers of core computing silicon are putting more power in the chips used at the edge to deliver a form of distributed computing where some of the actual computing is done on the edge device, and a minimal amount is done in the cloud.
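
As a rough illustration of that split, here is a minimal sketch, my own and not any vendor’s API: the edge device summarizes a burst of raw sensor readings locally and would ship only the compact summary to the cloud. The endpoint URL, function names and threshold are placeholders.

```python
# Hypothetical edge/cloud split: do the heavy lifting on the device,
# send only a small summary upstream. Names and URL are invented.

import json
import statistics
from urllib import request

CLOUD_ENDPOINT = "https://example.com/ingest"  # placeholder endpoint

def summarize_locally(raw_samples: list[float]) -> dict:
    """Reduce a burst of raw sensor readings to a compact summary on-device."""
    mean = statistics.mean(raw_samples)
    peak = max(raw_samples)
    return {
        "count": len(raw_samples),
        "mean": mean,
        "max": peak,
        "anomaly": peak > 3 * mean,  # toy anomaly rule for illustration
    }

def send_to_cloud(summary: dict) -> None:
    """Ship only the summary (a few bytes), not the raw sample stream."""
    req = request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(summary).encode(),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)  # a real device would add retries and error handling

if __name__ == "__main__":
    readings = [0.8, 0.9, 1.1, 4.2, 0.7]  # pretend sensor burst
    print(summarize_locally(readings))
```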

These new CPUs will also find their way into what we call dedicated devices, such as a new category of products hitting the market next year that will be more like digital note-taking devices than full tablets. It is estimated that close to 1 billion people still use pen and paper for note-taking, and various companies see an opportunity to create a new category of devices that simplify and digitize this task.

At the moment there are two main contenders for what I call a mini-OS, and those are Microsoft and Google. In a recent post from Windows Central entitled “Microsoft’s ‘Windows Core’ aims to turn Windows 10 into a modular platform,” the article outlines what is called Windows Core OS (WCOS). It states:

“In short, WCOS is a common denominator for Windows that works cross-platform, on any device type or architecture, which can be enhanced with modular extensions that give devices features and experiences where necessary.”

“In layman’s terms, its ultimate goal is to make Windows 10 much more flexible, allowing it to be installed on a wider variety of devices without being based on specific, pre-existing product variants. As a result of this, Windows itself can become smaller depending on the device, the OS itself can be built faster, and devices won’t be encumbered by components and features they don’t actually need; speeding up overall performance in the process on smaller or less capable devices.”

This would allow Microsoft or their partners to create application-specific devices and use this Core OS to give intelligence to edge devices that have their own low-energy CPUs, turning each edge device into a smart module of a distributed computing architecture.
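
To illustrate the modular idea in the abstract (this is a hypothetical sketch of my own, not how WCOS is actually built), think of a shared core plus optional feature modules composed per device profile; every module name below is invented for the example.

```python
# Hypothetical sketch of "core OS + modular extensions" composition.
# None of these component names come from Microsoft; they exist purely
# to illustrate the concept described above.

BASE_CORE = {"kernel", "networking", "update-service", "security"}

FEATURE_MODULES = {
    "desktop": {"full-shell", "win32-compat"},
    "edge-sensor": {"telemetry-agent", "low-power-scheduler"},
    "note-taker": {"pen-input", "handwriting-recognition"},
}

def build_image(device_profile: str) -> set[str]:
    """Compose a device image: the shared core plus only the modules
    that this profile actually needs."""
    return BASE_CORE | FEATURE_MODULES.get(device_profile, set())

if __name__ == "__main__":
    # An edge device carries far fewer components than a desktop build.
    print(sorted(build_image("edge-sensor")))
    print(sorted(build_image("desktop")))
```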

Google also sees the need to provide an OS for edge devices and has what seems to be a mini-OS in the works called Fuchsia. While details are still sketchy, Richard Windsor, an analyst at Edison Investment Research, states:

“Our take home from this analysis is that Fuchsia looks most suited to be used in embedded systems such as vehicles, white goods, machinery, wearables and so on. Consequently, this could be a single replacement for Android Auto and Android Wear.”

Mr. Windsor goes on to say:

“Fuchsia was first noticed on GitHub in August 2016 and differs from Android in that it is not based on Linux but on a kernel called Magenta which looks more like a kernel that is typically used for embedded systems such as vehicle infotainment units, white goods and so on. Fuchsia is also a real-time operating system (RTOS) which tend to be used for smaller systems which are typically embedded where response time is critical to the user experience.”

This news should not come as much of a surprise to serious industry watchers since the idea of edge devices becoming part of a distributed computing model has been in play for a long time. However, the move by Intel, Qualcomm, and others to give more intelligence to edge devices so that they can handle some of the computing locally does beg for a beefed-up RTOS or, in these cases, perhaps a serious modular extension of each of these companies’ current operating systems.

I disagree with Mr. Windsor’s suggestion that this might end up as a replacement for Android. I suspect this is a new type of mini-OS for edge devices that would allow Google to extend their reach well beyond their computing and mobile worlds of today. And for Microsoft, if the Windows Central story is correct, it would give them a stronger foothold at the edge, bring perhaps billions of devices into the Windows world and allow them to grow their services model exponentially.

To date, I don’t see a similar mini-OS coming from Apple, although it is conceivable that they are working on one, either as a subset of iOS or as a separate mini-OS built from scratch. You can bet Apple wants to extend their reach to the edge too, so even if Apple does not have something we know about publicly, I can’t imagine they don’t see this same trend and aren’t working on something similar.

Apple Switchers and a Secure Ecosystem

At Creative Strategies, we recently did a report that showed about 30% of Android users who do not have an iPhone are thinking of switching over to an iPhone.

This should not be a surprise given that Apple, in their last two earnings calls, has stated that they see a rise in switchers every quarter.
Data from other researchers also confirm this trend.

The latest Consumer Intelligence Research Partners data for the April to June quarter showed that Apple was attracting more Android switchers than at any time in the past 12 months.

The good news for Apple is that, relative to earlier quarters, they attracted a higher percentage of iPhone buyers from Android phones. In the three quarters before the June 2017 quarter, Android owners had represented 14% to 17% of iPhone buyers. With lengthening upgrade cycles and a growing percentage of owners with the most recently released models, continued platform switching will be important to the success of the next iPhones.

Apple even has a campaign to answer the question of why someone should switch to an iPhone over an Android phone.

Head over to the iPhone tab on Apple.com, and you’ll see a new box in the middle of the page. Called “Why Switch,” it declares that “Life is easier on the iPhone,” and offers ten questions potential switchers might be asking:

1. Will it be easy to switch?
2. Is the camera as good as they say?
3. Why is the iPhone so fast?
4. Will iPhone be easy to use?
5. How does iPhone help protect my personal information?
6. What makes Messages so great?
7. Can I get help from a real person?
8. Can I switch at an Apple Store?
9. What about the environment?
10. Will I love my iPhone?

This interest in switching comes at a time when Samsung and Google both have stellar smartphones that are equal to, and in a few ways even better than, Apple’s newest iPhone models. Even so, Apple’s iPhone line is drawing greater interest from the Android crowd these days.

Our research on this suggests that there are four main reasons for Android users to seriously consider the move to the iPhone and Apple’s ecosystem of products and services.

The first reason is an age-old one and focuses on Android’s basic security. Google has so many versions of Android out there and only recently started updating current versions on a more regular basis. In many reports, basic Android versions, even ones using specialized software from dedicated vendors, are shown to be the most vulnerable mobile OS and in many ways insecure. Samsung appears to have done the best job with their extra layer of security software, KNOX, which has helped them gain traction in enterprise and business accounts, although Apple’s iPhone still dominates the smartphone market for business at almost all levels.

The second reason is perhaps the most important one: Apple’s overall ecosystem continues to get better and is becoming a real draw for Android users. The OS fragmentation within Google’s Android ecosystem still makes it difficult to manage all of a user’s content seamlessly across Android devices. And when we look at switchers and ask why they want to switch, security comes up again, because Apple has, at the technical level, the tightest controls over their apps and ecosystem, making them more secure than what is available within the Android environment.

Some consider Apple’s approach a closed system, and for many Android users and non-iPhone users, this is a reason not to move over to Apple products. But we continue to see the public looking at Apple’s closed system as a protected environment that keeps out fake apps, services, and outside intruders, and in that context, Apple’s ecosystem is viewed in a very positive light. This seems to be a highly cherished part of Apple’s world as users become even more afraid of hackers, identity theft, and all sorts of nefarious threats facing them these days. To them, Apple provides a “safe harbor” in which to live out their digital lives, which I believe is why we are seeing such high interest in switching.

The third thing we keep hearing from those looking at switching is that Apple’s continuity system, the feature that keeps all of your Apple passwords, settings, pictures and video in sync across all Apple devices, is of very high interest to them. Yes, Google and Android have similar features, but in my experience, they don’t work as well when it comes to seamless synchronization and integration within Apple’s protected ecosystem.

The fourth thing, and in our estimation the biggest differentiator, is Apple’s stores, Genius Bars and overall customer service and support. I recently had a serious issue with Android on a high-end smartphone, and trying to get answers from Google, or even from vendor support, was like pulling teeth. Ben and I have written in the past about Apple’s stores giving Apple a huge advantage over their smartphone competitors, and this is not going to change.

Microsoft has done a good job with their retail stores, which sell Windows laptops and desktops and also provide classes and customer service for Windows-based devices.

As we go into this holiday season, our research suggests that Apple will continue to draw strong interest from Android users and could accelerate the pace of switchers in the new year. Apple is well aware of the opportunity they have to move more and more people over to Apple products, and as the switcher page on their site shows, they are becoming even more adept and aggressive in trying to lure them over to the Apple ecosystem.