Intel CEO Krzanich Predicts the World Will Run on Intel Silicon

on December 21, 2017
Reading Time: 4 minutes

In an internal memo to employees that surfaced earlier this week, Intel CEO Brian Krzanich told staff that the company’s new, aggressive strategy for 2018 would involve taking more risks and expanding the areas in which Intel can impact technology. Citing change as the “new normal,” the leader often called simply “BK” is telling his engineers, marketers, product managers, and sales teams to prepare for a very different Intel in the coming months. This is a change that many analysts in the tech community have been asking for, and one that Intel finally seems to be publicly committing to.

For the better part of the 2010s, Intel has played an interesting role in the technology field. Though it dominated the PC space, most pundits viewed the corporate direction as somewhat aimless, lacking a drive to push forward in any one particular area. That included the client compute and enterprise groups, both of which enjoyed a stranglehold on market share. The Xeon group was maintaining near-99% saturation in traditional server platforms, and the Core-branded processors were the dominant players in the PC gaming, mobility, and professional computing segments.

During that time, certain sub-groups noticed a lack of innovation. PC gamers and enthusiasts are a small but vocal segment of the PC ecosystem. This group is the canary in the coal mine; when it starts to recognize a lack of change and of performance or feature improvement, it is a tell-tale sign that the rest of Intel’s business units will suffer the same fate as the technology roadmaps progress.

As a result, Intel allowed several key competitors to re-enter valuable PC and server segments, threatening to take over market share and decrease Intel’s margins. AMD was able to offer new products like Ryzen for the consumer desktop, Threadripper for enthusiasts and workstation users, and EPYC for the enterprise and cloud, all of which will reduce the dominance of Intel in those specific fields. Qualcomm is also entering the cloud computing space with its own Centriq 2400 server processors based on the Arm platform, circumventing traditional processor designs with a more power-efficient implementation.

Intel has fallen behind in the current race for dominance in the machine learning and artificial intelligence fields. Once considered the pinnacle for any compute workload, Intel processors have been overshadowed by the designs from NVIDIA (and AMD to some degree) that started as graphics and gaming chips but have migrated and evolved to focus on the massive amounts of data and parallel processing required to develop intelligent systems. As a result, NVIDIA stock has skyrocketed, and CEO Jensen Huang is cited in countless media appearances as a leading mind in the compute field.

Even the crown jewel of Intel’s operation, its silicon fabrication technology, is starting to show signs of strain. Intel has been building processors on the 14nm process node for much longer than originally planned or expected based on roadmap projections from years back. This is the result of either a lack of urgency in pushing forward to the next-generation option (10nm) or an inability to bring that new process node to a state that maintains satisfactory margins and yields. Companies like Samsung, TSMC, and GlobalFoundries continue to push new process node tweaks to customers despite Intel’s stationary position. Samsung has been producing and shipping 10nm chips for customers like Qualcomm for nearly a full year, while Intel will only start doing so sometime in mid-2018.

The memo from BK weighs in on the cultural changes necessary to push Intel over this speed bump with surprising candor. Krzanich says that the future of Intel will be as a “50/50 company,” one that sees half of its revenue from the stalwart PC space and half from “new growth markets.” These will include memory, FPGAs (programmable arrays for faster development of new products), the Internet of Things, artificial intelligence, and even autonomous driving. The memo states that in many of these areas Intel will be the underdog in the fight, and that it will require the use of “new, different muscles” for the company’s products to have the impact he sees coming.

Part of flexing these new muscles will mean taking more risks, something that Intel has been extremely hesitant to do in the last decade. For a blue-chip company that has been as successful as it has for the last 50 years, trying new things will mean an increase in R&D budget along with the expectation and acceptance of failure in some portion of these new ventures. Being able to “determine what works and moving forward” is a mentality that exists naturally at organizations like Google but must be grown and matured manually at Intel.

BK mentions a term called “One Intel,” calling it an “aggressive company.” Part of that shift is being driven by stockholders who demand Intel move forward to address the market opportunity before it. No other company in the world has a portfolio of products and minds like Intel can offer, and though it may make more mistakes than successes at first, there is little doubt what it CAN DO if the management and leadership are truly committed to the changes put forth in this memo.

Intel believes that there is as much as $260 billion worth of addressable market for it to go after in 2018 and beyond. It’s a pot of gold that many other talented and driven organizations are going after. But if you believe Intel’s CEO, “the world will run on Intel silicon.”

Magic Leap, Augmented Reality, Apple, and the Big Picture

on December 21, 2017
Reading Time: 3 minutes

I know my title insinuates I’m going to cover some ground in this article, and I hope to do just that. Yesterday, Magic Leap unveiled their long-awaited product, called the Magic Leap One Creator Edition. Here is what it looks like in case you hadn’t seen it yet.

Dear Santa, All I want for Christmas is….

on December 20, 2017
Reading Time: 5 minutes

It is the season, and as I think I have been pretty good this year, I thought I’d come up with my letter to Santa. There are a few things on the list, but when it comes to tech it comes down to five things: more diversity in leading positions in tech, more tech for good, a true conversational assistant, smart today, and the death of passwords.

More Diversity in Leading Positions in Tech

I might start to sound like a broken record when it comes to diversity in tech. I have talked about women as well as racial diversity. Yet, not much is changing, and I will continue to talk about it as long as it takes to really see significant change.

In 2017, by and large, we have seen more women on stage at tech events. Surely that is progress, I hear you say. Well, not quite! The great majority of women we saw on stage were invited onto it by a more senior, often older, male colleague who gave up his spot for them to do a demo. There have been some exceptions of women owning the stage, of course. The two who come to mind are Angela Ahrendts at the iPhone launch and Julia White at Microsoft Ignite. You only needed to follow those events on Twitter, and the focus on their outfits, to understand how much more work we have to do.

Diversity on stage should reflect diversity at the decision table of these companies, a diversity whose sole purpose is not to make the annual diversity report look better. Corporations should be seeking such diversity to make sure they do not have any blind spots when it comes to shaping our future, a future that should be more inclusive, not elitist.

This awakening is what I want to see in 2018: a sense of purpose in wanting to relate to as many people as possible by hiring and empowering a diverse group of tech leaders.

More Tech for Good

I have seen great tech innovation helping improve life for a variety of people who face challenges, either due to the environment they live in or due to their own health. From 3D-printed prosthetics, to medicines delivered via drones, to wearables that help reduce tremors or apps that help you see the world around you, there are many companies working for social good. Yet, we hear far more often about tech innovation that makes the lives of the wealthy and fortunate even better.

While it might be easy to conclude that this is just a reflection of what tech is focusing on, I am hoping it is more a messaging problem than a greed problem. It is far sexier to talk about the next gadget that lets you take the ultimate selfie than to talk about a smartwatch that helps those suffering from PTSD fall asleep at night.

In 2018, I would love to hear more about what tech companies are doing to solve world problems. As some of these very smart innovations come from startups, they need their message to be amplified as much as possible. Increasing awareness of what is available by dedicating tracks at tech shows, speaking opportunities on podcasts, and of course press coverage could be the first step.

Hearing more about the good that comes out of tech might not only help improve or save more lives but hopefully also inspire more kids to get into tech than just those attracted by the idea of being a millionaire by the time they are 20!

A Conversational Assistant

I have been using pretty much all the assistants that are out there for as long as they have been available. My engagement varies depending on where I am and what I am doing. While overall digital assistants seem to do what they say on the box, there is one thing I detest: the lack of context.

All the big companies will tell you that if you ask their assistant the weather in San Francisco and then follow up with a question about things to do, you do not have to repeat the name of the city to get your answer. Some even understand a bit more context, but they do not go very far. If, when replying to a message from my husband, I tell Siri “tell him, see you soon,” she types that verbatim rather than just typing “see you soon.”

In 2018, I want to stop having to adapt to how assistants understand and process what I say, and have them do the work and talk more like me instead. When they fail to understand what I asked, it would be great if they used AI to guess, based on previous queries and what they know about me, the most obvious word I could have used rather than replying with the most obscure one. I am not expecting to have a 10-minute conversation with any of my assistants. I would simply like to be able to have a conversation that does not make me think I am talking to Pixar’s Dory the fish.

Smart Today

Artificial intelligence is everywhere, or at least that is what we are told. The other day I even heard toymaker Meccano advertise their latest toy as having “artificial intelligence.” Most of what is deemed to be infused with AI, however, is barely showing signs of cerebral activity. While general intelligence might be widespread, context-aware intelligence, cross-device intelligence, and personal intelligence are still in their infancy.

I also feel that we are focusing so much on what tech will be able to deliver in the next five or ten years that we ignore what could be done today. Autonomous cars are probably the best example. We focus so much on the vision of no longer driving that we miss what could be done right now.

In 2018, I would like to hear more about cars as an extension of the connected home. For many who commute daily, the car is an extension of the home; some of us end up spending more hours in our cars than at home! It is only to be expected that my home and my car should be able to exchange information about my day, my music preferences, my likes, and dislikes. I do not expect the car to talk to the fridge to warn of a difficult commute and the need for a glass of wine. But I do expect my car to be able to automatically control my lights and thermostat as I approach home. My assistant, sensing the car is approaching, could ask me to ID myself before saying “welcome home” and opening the garage door, and ask me if I want to transfer the call I am on, or the content I am consuming, to my home.

Another good example would be our homes, which might be connected but are not necessarily smart. Last week I had to change my Wi-Fi SSID, and that meant resetting the connection with all my devices. Why I was not prompted, after setting up the first device, to allow other devices to share the password is unclear to me. It really should not be this hard. If my iPhone can share my Wi-Fi password with my iPad or with a friend who is close to me, the devices in my home should be able to do the same.

The Death of Passwords

If you have ever tried Apple’s Face ID, Windows Hello, or Samsung’s iris recognition, you will appreciate how much more efficient these methods are than typing a password. Face ID, in particular, makes you forget you are being authenticated.

This, coupled with the need to protect more and more information and devices around us, will get us to want a better way than typing a password to get what we want.

In 2018, I want to be able to use my face, iris, or finger in as many situations as possible. How quickly developers have embraced Face ID leaves me optimistic about my desire to kill passwords. I have the feeling, however, that what I am hoping for, like my connected devices reconnecting to a new home Wi-Fi network, might be held back more by turf wars and interoperability issues than by tech limitations.

A lot of wishful thinking, I know, but it is Santa I am writing to, after all 😉

Why Always Connected PCs Will Morph Into an All-Day Battery (and Beyond) Focus

on December 20, 2017
Reading Time: 2 minutes

I was in Hawaii recently to attend Qualcomm’s launch event for their Always Connected PC initiative. Qualcomm created a laptop design that uses their Snapdragon 835 mobile processor to power what they dubbed an “Always Connected PC.” The premise is that with the Snapdragon 835, which includes the LTE radio in its design, people will be more likely to want a portable computer that can always be connected and have at least 20 hours of battery life.

Signs of a Great Product or Feature

on December 19, 2017
Reading Time: 3 minutes

I often get to speak with entrepreneurs as well as a lot of VC/early-stage investors, and one thing I get asked quite a bit is what to look for that signals a product or product experience is great. I want to share a few brief highlights that I consider signs of a great product.

Tech’s Biggest Challenge: Fulfilling the Vision

on December 19, 2017
Reading Time: 3 minutes

The last several years have seen a tremendous expansion in the ideas, concepts, and overall vision of where the world of technology is headed. From mind-bending escapades into virtual and augmented reality, on to thought-provoking voice interactions with AI-powered digital assistants, towards an Internet filled with billions of connected things, into enticing experiments with autonomous vehicles, and up through soaring vistas enabled by drones, the tech world has been far from lacking in big picture perspectives on where things can go.

Vision, however, isn’t the hard part. The really challenging task, which the industry is just starting to face, is actually executing on those grandiose ideas. It’s all fine and good to talk about where things are going to go—and building out grand blueprints for the future is a critical step for setting industry direction—but it’s becoming clear that now is the time for true action.

Excitement around these big picture visions has begun to fade, replaced increasingly by skepticism of their feasibility, particularly when early efforts in many of these areas have failed to meet the kind of mass success that many had predicted. People have heard enough about what we could do, and are eager to see what we can do.

It’s also more than just a simple dip in the infamous Gartner hype cycle, which describes a path that many new technologies face as they enter the market. According to that widely cited predictive tool, initial excitement around a new technology grows, eventually reaching the point where hype overtakes reality. After that, the technology falls into the trough of disillusionment, as people start to question its impact, before finally settling into a more mature, balanced perspective on its long-term value.

What’s happening in the tech industry now is a much bigger change. After years of stunning new ideas and concepts that hinted at a radically different tech future way beyond the relatively simple advances that were being made in our core tech devices, there’s an increasing recognition that it’s a very long road between where we are now, and where we need to be in order for those visions to be realized.

As a result, there’s a major resetting of expectations going on in the industry. It’s not that the ultimate goals have changed—we’re still headed towards truly immersive AR/VR, conversation-ready AI tools, fully autonomous cars, a seamlessly connected Internet of Things, and much more—but timelines are shifting for their full-fledged arrival.

In the meantime, the industry has to dig into the nitty-gritty of developing all the critical technologies and standards necessary to enable those game-changing developments. Unfortunately, much of that work is likely to be slow-going, and, in many instances, won’t necessarily translate into immediately obvious advances. It’s not that technological innovation will cease or even slow down, but I do believe many advances are going to be more subtle and much less obvious than what many have become accustomed to. As a result, some will think that major tech developments have started to slow.

Take, for example, the world of Artificial Intelligence. By all accounts, refinements in AI algorithms continue at a frenetic pace, but how those get translated into real-world uses and practical implementations isn’t at all clear and, therefore, isn’t moving nearly as quickly. Part of the reason is that the difference between, say, today’s digital assistants and future versions that are contextually intelligent is likely to arrive along a long, mildly-sloped line of improvements that will be challenging for many people to notice. The difference between a current assistant that can only respond to a relatively simple query and a future version that will be able to engage in intelligent, multi-part conversations is certainly going to be noticeable, but there will likely be lots of subtle, difficult-to-distinguish changes along the way. Plus, it seems a lot less dramatic than the first few times you spoke to a smart speaker and it actually responded back.

If we take a step back and look at the larger arc of history that the tech industry currently finds itself in, I’d argue we’re in a transitional period. After decades of evolution centered around PCs, smartphones, and simple web browsing, we entered an epoch of intelligent machines, seamless connectivity, and web-based services several years back that allowed the industry to dream big about what it could achieve. Now that we understand those visions, however, the industry needs to get to the hard work of truly bringing them to life.

A New Design Requirement: Delightware

on December 18, 2017
Reading Time: 2 minutes

Having been in the consumer tech product development business for much of my career, I’ve seen how the industry has developed a methodology for turning an idea into a product. It involves using a team of experts in their various engineering specialties: industrial design, mechanical, software, electrical, quality, and manufacturing engineering.

By the nature of their work, one of the challenges is to ensure that these activities are coordinated and that the team is communicating and working together to come up with a clear, coherent product, even while each member is off doing their own tasks.

But, by the nature of this process, it’s rare that there is a focus on the customer experience and its impact on the design of the product. That’s just too hard to do with the team members focusing on their specialized work. While industrial designers and marketing managers come closest to influencing the customer experience, usually by creating a wish list and product goals, it’s hard for them to impact the day to day design decisions being made by the individual engineers, who often worry more about just getting the designs to work.

As products become more complex, the customer experience element is becoming a significant product differentiator, and its importance needs to be elevated to the same level as the other functions.

This area of product development is so important that there really should be a name for it. I call it “Delightware”: that element of the product that makes using it delightful and provides unexpected value and functionality you never even thought about before buying. With the complexity and versatility of today’s products, it can be as important as any other element of the product.

A perfect example is the Apple AirPods. When I first saw them, I mocked them, much like many others: how strange looking, they might fall out, and so expensive!

A few months ago, while visiting an Apple store, I bought a pair to try, just to get familiar with them, thinking I could return them if they were as bad as I had imagined.

What I didn’t anticipate, and what was rarely mentioned in Apple’s promotional material, were the features that delighted and surprised: its Delightware. When you remove an AirPod from the case and put it in your ear, the phone call transfers to it immediately. Open the lid of the AirPod case and the battery level is shown on the phone. Remove it from your ear, and the podcast pauses. While some of these features could be found on older Bluetooth headphones, none worked as seamlessly.

For these features to work so well, someone on the team had to think through the design carefully to be sure it contained the right sensors and processors and was properly implemented by each of the disciplines. Such capabilities are missing from most products, which often are hard to set up and confusing to use.

Another example, also from Apple, is the ease of upgrading from one device to another. To move from one iPad to a second, you simply place the two together, answer a few questions, and the new unit is updated automatically.

Compare the Delightware of these products with that of IoT door locks, cameras, and doorbells that are hard to set up and don’t work as expected. I have two of these products that insist on sending me alerts when they detect me.

I’m convinced this area of Delightware will become the major product differentiator in the future.

Podcast: Net Neutrality, Disney-Fox, Apple-Shazam, Microsoft AI

on December 16, 2017
Reading Time: 1 minute

This week’s Tech.pinions podcast features Carolina Milanesi, Ben Bajarin and Bob O’Donnell discussing the Net Neutrality decision, Disney’s purchase of 20th Century Fox, Apple’s purchase of Shazam, and Microsoft’s new AI-related announcements.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

News You might have missed: Week of December 15th, 2017

on December 15, 2017
Reading Time: 4 minutes

Apple Acquires Shazam

After TechCrunch first broke the story last Friday, Apple confirmed on Monday the acquisition of UK-based Shazam. Apple said:

“We are thrilled that Shazam and its talented team will be joining Apple. Since the launch of the App Store, Shazam has consistently ranked as one of the most popular apps for iOS. Today, it’s used by hundreds of millions of people around the world, across multiple platforms. Apple Music and Shazam are a natural fit, sharing a passion for music discovery and delivering great music experiences to our users. We have exciting plans in store, and we look forward to combining with Shazam upon approval of today’s agreement.”

Apple did not disclose the price but we have several sources that have confirmed to us that the deal is in the region of $400 million.

A Post Net Neutrality World Order

on December 15, 2017
Reading Time: 4 minutes

Yesterday, as expected, the Federal Communications Commission repealed 2015’s Open Internet Rules, also known as ‘net neutrality’. Much ink has been spilled (or keys tapped) on this issue, including my own Techpinions piece two weeks ago, arguing about the paradox of repealing Network Neutrality while at the same time blocking AT&T’s acquisition of Time Warner.

I see merit on both sides of the Network Neutrality argument, and I tend to agree with Jon Leibowitz’s Wall Street Journal op-ed yesterday that the sky didn’t fall when Title II was imposed in 2015, nor will it now that it has been repealed.

So, as this continues to be litigated over the coming months, it might be a good time to think about the best protections from anticompetitive practices, while recognizing the rapid changes occurring in content, digital media, and communications. In this “Post Net Neutrality World Order”, I urge the major actors in the game—service providers, content providers, regulators—to adopt the following Code of Conduct.

  1. Read and adopt the words of AT&T’s Senior Vice President Bob Quinn, who pledged in a November blog post that “We will not block websites, degrade internet traffic based on content, [or] unfairly discriminate in our treatment of Internet traffic”. This, in principle, is now in the remit of the Federal Trade Commission, which in the past has been fair and balanced on this issue.
  2. Positive Practices Are More Permissible Than Negative Practices. It’s not that big a deal if AT&T zero-rates DTV content for its wireless subscribers, or offers some attractive bundles. It’s more concerning if it engages in practices that slow down services that compete with DTV. Similarly, on the B2B side of the equation, there’s a good case for ‘fast lanes’ in some instances. ‘Slow lanes’ will be harder to justify.
  3. Refrain From Practices That Clearly Impinge On the Idea of the Open Internet. Some of the biggest concerns have to do with the potential for service providers to take a “cable” approach to the Internet, such as charging for access to specific sites. It will take only one or two airline-esque practices like this to set us back.
  4. Recognize That Wireless Is Different Than Fixed. I’ve long argued that the FCC should look at wireless through a different lens, compared to fixed broadband. Wireless services will forever be capacity constrained, even in a 5G world. That’s why ‘unlimited’ plans always come with an asterisk. There have been practices, such as throttling and zero rating, where regulators, even in a Title II world, treaded lightly. New services such as LAA and the concept of network slicing will introduce more opportunities to offer tiers of service.
  5. There’s Nothing Wrong With Tiers of Service. It’s 2020, and Nintendo introduces a new, multi-player online virtual reality game requiring faster speeds and lower latencies. So they pay some sort of ‘fast lane’ surcharge to a service provider, some of which gets passed on to the consumer. I don’t see anything wrong with that. Even though more and more households might be able to get 1 Gbps services, they might not necessarily need them, or need them all the time.
  6. DOJ Meet FCC, FCC Meet DOJ. We have to work toward a broader policy framework. As I argued in an earlier column, repealing network neutrality and blocking AT&T-Time Warner don’t seem to be coming from the same thought process (yes, I recognize that these are handled by different agencies). That said, we’ve seen more practices in the content business that have been detrimental to consumers—DISH standoffs with networks, Amazon-Google—than any violations of Open Internet rules.
  7. Be Transparent. There are going to be situations where some of the practices that have been the focus of those in favor of greater regulation make sense, given business realities or the changed landscape. This is where the FTC could step in if the service providers are not more proactive themselves.

More broadly, the tectonic changes occurring in our communications, digital media, and content landscape beg for a broader strategic review of our policy framework. This is one of the reasons there’s been a call for Congress to legislate this, rather than have it be in the hands of the FCC, whose philosophy could change every four years. The 1996 Telecom Act seems increasingly outmoded, as it fails to properly account for, or adjust to, the emergence of wireless broadband (LTE, 5G), smartphones, the rise of OTT and streaming, and consolidation in the content landscape (Comcast-NBC Universal, AT&T-Time Warner, Disney-Fox). It seems like right now, we’re dealing with all of these changes on a deal-by-deal basis: impose NN and then repeal it; allow Comcast-NBC/Universal but block AT&T-Time Warner; allow Internet companies to do things that telecom companies can’t. In this giant Venn diagram of telecom and the Internet, you’ve got AT&T and Verizon owning important content assets, while Google and Facebook provide broadband services and OTT communications and messaging services.

The other thing this all points to is that we need more competition in broadband. Currently, only 50% of households have access to more than one decent broadband provider. A more competitive broadband market would more naturally prevent some of the practices we’re now trying to legislate our way out of. There’s the potential for some change here with the approach of 5G and fixed wireless.  A 2020 Telecom Act might, for example, revisit the rules around network resale, which has led to more robust broadband competition in other countries.

TITAN V launch strengthens machine learning lead for NVIDIA

on December 14, 2017
Reading Time: 3 minutes

Earlier this week, NVIDIA launched the Titan V graphics card at the NIPS (Neural Information Processing Systems) conference in Long Beach, to the surprise of many in the industry. Though it uses the same Volta architecture-based GPU that has been shown, discussed, and utilized in the Tesla V100 product line for servers, this marks the first time anything based on this GPU design has been directly available to consumers.

Which consumer, though, is an interesting distinction. With its $3,000 price tag, NVIDIA positions the Titan V toward developers and engineers working in the machine learning field, along with other compute-heavy areas like ray tracing, artificial intelligence, and oil/gas exploration. With the ability to integrate a single graphics card into a workstation PC, small and growing businesses or entrepreneurs will be able to develop applications utilizing the power of the Volta architecture and then deploy them easily on cloud-based systems from Microsoft, Amazon, and others that offer NVIDIA GPU hardware.

Giving developers this opportunity at a significantly reduced price and barrier to entry helps NVIDIA cement its position as the leading provider of silicon and solutions for machine learning and neural net computing. NVIDIA often takes a top-down approach to new hardware releases, first offering it at the highest cost to the most demanding customers in the enterprise field, then slowly trickling out additional options for those that are more budget conscious.

In previous years, the NVIDIA “Titan” brand has targeted a mixture of high-end enthusiast PC gamers and budget-minded developers and workstation users. The $2,999 MSRP of the new Titan V moves it further into the professional space than the enthusiast one, but there are still some important lessons we can glean from the Titan V about Volta, and about any future GPU architecture from NVIDIA.

I was recently able to get a hold of a Titan V card and run some gaming and compute applications on it to compare to the previous flagship Titan offerings from NVIDIA and the best AMD and its Radeon brand can offer with the Vega 64. The results show amazing performance in nearly all areas, but especially in the double precision workloads that make up the most complex GPU compute work being done today.

It appears that gamers might have a lot to look forward to with the Volta-based consumer GPU that we should see arriving in 2018. The Titan V is running at moderate clock speeds and with unoptimized gaming drivers, but it was still able to offer performance 20% faster than the Titan Xp, the previous king-of-the-hill card from NVIDIA. Even more impressive, the Titan V is often 70-80% faster than the best AMD is putting out, running modern games at 4K resolution much faster than the Vega 64. More impressive still, the GV100 GPU on the card is doing this while using significantly less power.

Obviously at $3000, the Titan V isn’t on the list of cards that even the most extreme gamer should consider, but if it is indicative of what to expect going into next year, NVIDIA will likely have another winner on its hands for the growing PC gaming community.

The Titan V is more impressive when we look at workloads like OpenCL-based compute, financial analysis, and scientific processing. In key benchmarks like N-body simulation and matrix multiplies, the NVIDIA Titan V is 5.5x faster than the AMD Radeon RX Vega 64.
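To give a sense of what that kind of workload looks like, here is a minimal, illustrative sketch in Python of the double-precision matrix-multiply pattern these benchmarks exercise. It runs on the CPU via NumPy purely for illustration (the published benchmarks run on the GPU through CUDA or OpenCL libraries), and the matrix size and FLOPS formula are my own assumptions rather than any vendor's test methodology.

```python
import time
import numpy as np

def matmul_gflops(dtype, n=2048, repeats=3):
    """Time an n x n matrix multiply at the given precision and report GFLOPS."""
    a = np.random.rand(n, n).astype(dtype)
    b = np.random.rand(n, n).astype(dtype)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        np.dot(a, b)
        best = min(best, time.perf_counter() - start)
    # A dense n x n matrix multiply performs roughly 2 * n^3 floating-point operations.
    return (2 * n**3) / best / 1e9

if __name__ == "__main__":
    for dtype in (np.float32, np.float64):
        print(f"{np.dtype(dtype).name}: {matmul_gflops(dtype):.1f} GFLOPS")
```

Comparing the float32 and float64 results on any machine makes the single- versus double-precision gap tangible; it is that FP64 side of the ledger where the Titan V's numbers stand out.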

Common OpenCL based rendering applications use a hybrid of compute capabilities, but the Titan V is often 2x faster than the Vega graphics solutions.

Not every workload utilizes double precision computing, and those that don’t show more modest, but still noticeable, improvements with the Volta GPU. AMD’s architecture is quite capable in these spaces, offering great performance for the cost.

In general, the NVIDIA Titan V proves that the beat goes on for the graphics giant, as it continues to offer solutions and services that every other company is attempting to catch up to. AMD is moving forward with the Instinct brand for enterprise GPU computing, and Intel is getting into the battle with its purchase of Nervana and its hiring of GPU designer Raja Koduri last month. 2018 looks like it should be another banner year for the move into machine learning, and I expect its impact on the computing landscape to continue to expand, with NVIDIA leading for the foreseeable future.

The Future of Work

on December 14, 2017
Reading Time: 3 minutes

Having watched my daughters go through junior high, with one now in high school, I’ve become increasingly convinced the future of the workplace is happening before my eyes. This is a workplace very different from the one many of us experience today. It truly is hard to appreciate how different the world will look from a technology standpoint without seeing how kids today use the digital tools they have at their disposal.

Apple’s Acquisition Strategy Boosts its Earning Potential

on December 13, 2017
Reading Time: 4 minutes

Very soon, Apple will make its 100th acquisition since it acquired NeXT Computer in 1996. That acquisition was done by then-Apple CEO Gil Amelio, and just before he made it, he asked me for my thoughts on buying the company from Steve Jobs. At the time, because I was helping Mr. Amelio with Apple’s mobile strategy, he and I talked weekly about his goal of reviving Apple. Long-time Apple watchers will remember that Gil Amelio was on Apple’s board when the company had lost its way and was over $1 billion in the red. When the board ousted Apple CEO Michael Spindler in early 1996, Mr. Amelio was asked by the board to become CEO and try to turn the company back to profitability.

Diversity and Competitive Advantage

on December 13, 2017
Reading Time: 3 minutes

When I first saw Hidden Figures, it got me thinking about workplace diversity. In the movie, Katherine Goble helped a group of white American men do something they could not do on their own–send a man into space to orbit the earth and eventually help the US become the first country to put a man on the moon. One could argue this group of white American men would have eventually figured it out, but in that scenario how long would it have taken? Would Russia have beaten the US to the moon? We will never know, because Katherine Goble, an African American woman, played a key role in helping the US get there first. You could make a strong case that it was her presence on the team which gave the US space program the competitive advantage in the global space race.

What strikes me about this line of thinking is how workplace diversity is a competitive advantage and should be viewed as such. When we hear about companies trying to become more diverse, they appear to do so mostly to come across as an equal opportunity employer. However, if companies viewed workplace diversity as a competitive advantage, then it is in their, and their shareholders’, best interests to aggressively pursue this course of action.

Interestingly, there is an increasing body of evidence showing that ethnically and gender diverse teams tend to be more creative and solve problems better than ones that are not. Many psychologists have been studying what makes some teams more effective than others, and psychologist Christopher Chabris co-authored this article in the NYTimes titled Why Some Teams are Smarter than Others. While a range of factors contributed to a team’s success, one of the three tentpole findings involved not just diversity but teams with more women than men.

Christopher Chabris has been writing research studies and working with the social science community digging into the broader theme of collective intelligence. As many new research reports have begun to suggest, the collective intelligence of a group gets better when there is diversity (broadly defined). And what is a company but a giant group of collective intelligence with the same sets of goals in mind? If these studies are correct, then diversity will play a role not just in a team’s competitive advantage, but in the entire collective intelligence of any workforce.

This is not a new idea, as I’m not the first to position diversity as an important element of a company’s, or even a nation’s, competitive advantage, but it is a theme worth remembering and cementing into the mindset and culture of an institution. The reality is, however, that just having diversity as a goal is not enough. A company has to have a process to utilize its diversity effectively.

The NASA space program was on the verge of squashing its competitive advantage by not effectively empowering Katherine Goble to do what she did best. Equally important was the willingness of the team members in the NASA program to listen to and accept her ideas and input. Had the program’s boss, Al Harrison, not stepped in and empowered Katherine as a part of the team, there is a good chance the US might have lost the space race to Russia. Having diversity on teams, and having procedures in place that empower that diversity, can lead to a powerful competitive advantage.

If more CEOs and executive teams understood how diversity is a competitive advantage, they would rush to be as diverse as possible, because a competitor who does understand this becomes a much bigger threat.

Another factor to consider here is the vantage point of competitive advantage for a nation. Not only should this point apply to a nation’s leadership structure, but also to the policies a nation puts in place, which could become an inhibitor to creating diverse companies. For example, a concern is that the current political climate in the US could push more top talent from other nations and ethnic backgrounds to leave the US and go to other countries to start and join companies.

Diversity in a company’s workforce and management teams should be understood as a competitive advantage, as much of the research suggests. Hopefully, companies in every industry start understanding this additional angle as another reason to aggressively pursue a diverse culture in their companies.

TV in All Its Forms

on December 12, 2017
Reading Time: 3 minutes

One of the things that stood out in some recent analysis I did of media consumption trends is that the more options consumers have to consume TV content, the more of it they seem to consume. It seems obvious that if consumers could consume their TV content on any device they choose, at any time they choose, they would consume as much of it as they can. Nobody wanted their TV stuck, well, on their TV.

The Dawn of Gigabit Connectivity

on December 12, 2017
Reading Time: 3 minutes

From cars to computers to connectivity, speed is an attractive quality to many people. I mean, who can’t appreciate devices or services that help you get things done more quickly?

While raw semiconductor chip performance has typically been—and still is—a critical enabler of fast tech devices, in many instances, it’s actually the speed of connectivity that determines their overall performance. This is especially true with the ongoing transition to cloud-based services.

The problem is, measuring connectivity speed isn’t a straightforward process. Sure, you can look for connectivity-related specs for your devices, or run online speed tests (like Speedtest.net), but very few really understand the former and, as anyone who has tried the latter knows, the results can vary widely, even throughout the course of a single day.
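To illustrate why those speed-test numbers bounce around so much, here is a deliberately naive throughput-measurement sketch in Python. The test URL is hypothetical and stands in for any large, consistently hosted file; because the sketch uses a single HTTP connection to a single server at a single moment in time, its result will swing with server load, routing, and time of day, which is exactly the variability described above.

```python
import time
import urllib.request

# Hypothetical test file; substitute any large, reliably hosted file.
TEST_URL = "https://example.com/100MB.bin"

def measure_download_mbps(url, chunk_size=1 << 16):
    """Naively measure download throughput over a single HTTP connection."""
    start = time.perf_counter()
    total_bytes = 0
    with urllib.request.urlopen(url) as response:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:
                break
            total_bytes += len(chunk)
    elapsed = time.perf_counter() - start
    return (total_bytes * 8) / elapsed / 1e6  # megabits per second

if __name__ == "__main__":
    print(f"{measure_download_mbps(TEST_URL):.1f} Mbps")
```

Commercial speed-test services try to smooth this out with multiple parallel connections and nearby servers, but the underlying measurement problem is the same.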

The simple truth is, for a lot of people, connectivity is black magic. Sure, most people have heard about different generations of cellular technology, such as 4G or the forthcoming 5G, and many even have some inkling of the different WiFi standards (802.11n, 11ac, 11ad, etc.). Understanding how or why your device feels fast doing a particular online task one day but not the next, however, is still a mystery.

Part of the reason for this confusion is that the underlying technology (and the terminology associated with it) is very complex. Wireless connectivity is a fundamentally difficult task that involves not only complex digital efforts from very sophisticated silicon components, but a layer of analog circuitry that’s tied to antennas and physical waveforms, as well as interactions with objects in the real world. Frankly, it’s amazing that it all works as well as it does.

Ironically, despite its complexity, connectivity is also something that we’ve started to take for granted, particularly in more advanced environments like the US and Western Europe. Instead of being grateful for having the kinds of speedy connections that are available to us, we’re annoyed when fast, reliable connectivity isn’t there.

As a result of all these factors, connectivity has been relegated to second-class status by many, overshadowed by talk of CPUs, GPUs, and other types of new semiconductor chip architectures. Modems, however, were arguably one of the first specialty accelerator chips, and they play a more significant role than many realize. Similarly, WiFi controller chips offer significant connectivity benefits, but are typically seen as basic table stakes—not something upon which critical product distinctions or buying decisions are made.

People are starting to finally figure out how important connectivity is when it comes to their devices, however, and that’s starting to drive a different perspective around communications-focused components. One of the key driving factors for this is the evolution of wireless connectivity to speeds above 1 gigabit per second (1 Gbps). Just as the transition to 1 GHz processors was a key milestone in the evolution of CPUs, so too has the appearance of 1 Gbps wireless connectivity options enabled a new perspective on communications components such as modems and WiFi controllers.

Chipmaker Qualcomm was one of the first to talk about both Gigabit LTE for cellular broadband modems, as well as greater-than-1 Gbps speeds for 802.11ac (in the 5 GHz band) and 802.11ad (in the distance-constrained 60 GHz band). Earlier this year, Qualcomm demonstrated Gigabit LTE in Australia with local carrier Telstra, and just last month, it showed off similar technology here in the US with T-Mobile. In both cases, they were using a combination of Snapdragon 835-equipped phones—such as Samsung’s S8—which feature a Category 16 (Cat16) modem, and upgraded cellular network equipment from telecom equipment providers such as Ericsson. The company also just unveiled its new Snapdragon 845 chip, expected to ship in smartphones in 2018, which offers an even faster Cat18 modem with a maximum download speed of 1.2 Gbps.

In the case of both faster LTE and faster WiFi, communications component vendors like Qualcomm have to deploy a variety of sophisticated technologies, such as MU-MIMO (multi-user, multiple input, multiple output) transmission and antenna technologies, and 256 QAM modulation schemes (which pack more bits into each transmitted symbol), among others.
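To make the "Gigabit" figure a little more concrete, here is a rough back-of-the-envelope calculation in Python of how a Cat16-class LTE link can reach roughly 1 Gbps. The specific assumptions below (3x20 MHz carrier aggregation, ten spatial layers, 256-QAM, and roughly 25% overhead) are typical illustrative values, not any carrier's or Qualcomm's official configuration.

```python
# Back-of-the-envelope peak throughput for a "Gigabit LTE" (Cat16-class) link.
# Assumed configuration: 3 x 20 MHz carrier aggregation, 4x4 MIMO on two
# carriers and 2x2 on the third (10 spatial layers total), 256-QAM.
RES_PER_MS = 100 * 12 * 14   # resource elements per 20 MHz carrier per 1 ms subframe
BITS_PER_RE = 8              # 256-QAM carries 8 bits per resource element
EFFICIENCY = 0.75            # rough allowance for coding and control overhead
LAYERS = 10                  # 4 + 4 + 2 spatial layers across the three carriers

per_layer_mbps = RES_PER_MS * 1000 * BITS_PER_RE * EFFICIENCY / 1e6
peak_mbps = per_layer_mbps * LAYERS
print(f"~{per_layer_mbps:.0f} Mbps per layer, ~{peak_mbps:.0f} Mbps peak")
# -> roughly 100 Mbps per layer and about 1 Gbps aggregate
```

The same multiplication of carriers, spatial layers, and bits per symbol is what pushes Cat18 devices toward the 1.2 Gbps mark, and it is why real-world speeds depend so heavily on how much of that configuration a given network actually deploys.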

The net result is extremely fast connection speeds that can (and likely will) have a dramatic impact on the types of cloud-based services that can be made available, as well as our quality of experience with them. There’s no denying that the technology behind these speedy connections is complicated, but with the dawn of the gigabit connectivity era, it’s time to at least acknowledge the impressive benefits these speedy connections provide.

STEM and STEAM Gifts for the Holidays

on December 11, 2017
Reading Time: 4 minutes

One of the areas of great interest to me over the last five years has been the movement to help kids gain interest in Science, Technology, Engineering, and Math. Recently, the groups advocating for STEM education have added an “A” to this moniker, arguing that the Arts and creativity are also important to round out a tech-focused education, and thus the newer acronym STEAM has been adopted to describe this educational focus.

In past columns, I have chronicled how the SF 49ers have made STEM education a key part of their new stadium’s museum and written about “How STEM skills are the next great equalizer“.
I have also profiled how Chevron is helping fund STEM education and through these columns emphasized how important I feel STEM and STEAM are for the educational future of our youth.

My interest in STEM and STEAM has led me over the years to look for STEM gifts during the holidays for my two granddaughters and nieces and nephews, and to compile a short list of products that would make great gifts for both boys and girls.

Here are a few of the products for this year that I believe will help any kid gain more interest in STEM and STEAM and could provide hours of learning fun and perhaps get them interested in STEM and STEAM careers in the future.

I like this one for the under-five age group, with a high-tech take on learning their ABCs:
Codebabies’ ABCs of the Web picture book. Start early with this alphabet picture book — written by a web designer — that aims to introduce the under-fives to “the language of the web”. So instead of ‘A is for Aardvark’ you get ‘A is for Anchor tag’… Link

SmartGurlz (recently seen on Shark Tank), currently available online only. Teaches girls how to code using self-balancing robots and action dolls via mobile devices. SmartGurlz is aimed at girls 6 and up and is a great way to get them interested in STEM.

StemBox. Subscription service, available in 3, 6, or 12-month subscriptions and priced between $87 and $300. StemBox is also geared to girls and designed to be engaging for ages 8 to 13. It helps them develop an emotional connection to STEM and hopefully encourages them to gain greater interest in the sciences.

Creation Crate. Creation Crate drops the technical lingo and increases in difficulty each month so that users can be fluent in the language of technology by the end of the 24-month curriculum. Projects range from building a mood lamp, to a memory game focused on programming, to learning how to read a distance measurement from an ultrasonic sensor. Unlike other technology subscription boxes, they use raw electronic components and offer users real-world skills. The boxes are designed to be beginner friendly, with no previous experience needed. Subscriptions start at $30 a month with 3, 6, or 12-month subscription packages to choose from.

Wonder Workshop. Wonder Workshop uses what they call CleverBots to teach early robotics and interactive programming. They are packed with technology that helps kids develop critical problem-solving skills through challenging educational projects designed to make learning to code fun. Most of their bots are for ages 11+.

Thimble. Thimble also uses electronics to teach robotics and programming, offered as a 1, 3, 6, or 12-month subscription service. There are a dozen projects to choose from, and in each project you get the proper components and an online learning platform; they even have a forum for kids to exchange ideas, collaborate, and help each other.

KiwiCo – Subscription service for STEAM (offers Art as well).
Offers a range of products for infants through high schoolers, with monthly, 3, 6, and 12-month subscriptions for about $20/month. This is a much broader service for all age groups, and you can pick the projects you want to work on. For 24-36-month-old kids, the focus is exploring and learning. For ages 3-4, the projects are about playing and learning. Ages 5-8 have projects aimed at science and art, and ages 9-16 include projects for art, design, science, and engineering.

Circuit Cubes from TenkaLabs. Circuit Cubes teach kids the basics of circuitry while they’re engaged in creative STEM play. Kids learn how to complete circuits to light an LED and power a geared motor, and how serial and parallel circuits create different effects in their projects. They integrate with LEGO-style bricks for endless projects. Ages 8-12.

Barbie STEM kit – Thames & Kosmos/Mattel, ages 4-8. When my granddaughters were younger, they were Barbie fans and would have loved these Barbie STEM kits. There are seven different projects to build with the kit, ranging from a spinning closet rack to a gear-based washing machine and a greenhouse. They even have some specialty kits, including Barbie Crystal Geology and Barbie Fundamental Chemistry sets. It is one of the great examples of learning about STEM while playing with a beloved figure.

Code Kit from LittleBits. Since I first heard about LittleBits, I have been a big fan of their STEM kits. One new STEM kit from them that is geared toward learning about electronics is this Code Kit of snap-together magnetic Arduino modules, or “bits.” The idea is to simplify breadboarding and never need to get out the soldering iron. The bits are then connected — via computer — with a block-based graphical coding environment so kids can play around with and program the hardware.

Lego Boost Creative Toolbox building & coding kit. What kid does not like Lego blocks? Lego understands the STEM movement well and has created the Lego Boost Creative Toolbox, a robotics and programming system aimed at kids seven and older. With this toolkit, kids can build and customize a robot and learn how to code its movements and navigation. It has drag-and-drop icons for easy programming and teaches kids the basics of robotics and coding.

Last but not least is one of my favorites:

STEAM Kids ebook. A year’s worth of captivating STEAM (Science, Technology, Engineering, Art & Math) activities that should provide hours of fun. This is a downloadable book with projects in each area designed to engage parents and children in new areas of discovery and skills. Books are sold individually or in bundles, including specific books for holiday-themed projects (e.g., Christmas, Valentine’s Day, etc.). For ages 4-12. $14.99. Comes in both eBook and traditional book formats.

The Commercial Opportunity for the Always-Connected PC

on December 8, 2017
Reading Time: 3 minutes

At the Always-Connected PC launch event earlier this week, Microsoft and Qualcomm seemed to focus a great deal of their attention on the consumer opportunity for these new Snapdragon-based Windows computers. While there is certainly a market for this technology among some percentage of consumers, I would argue that the larger near-term opportunity is in the commercial segment, where connectivity and long battery life drive real-world productivity gains and measurable cost benefits.

Connected Consumer?
Carolina Milanesi discussed the launch event in detail earlier this week, including some of the ongoing app issues Microsoft faces as well as the challenges associated with convincing consumers to pay for the carrier service required for an always-connected PC. Beyond these roadblocks, there’s an additional fundamental issue: many consumers, with students being the exception, tend to use their PCs in one place, inside their house. In other words, ultra-long battery life and LTE connectivity are both nice to have, but not critical to a large percentage of consumer PC users.

However, for highly mobile workers, those two features are the holy grail of productivity. I travel extensively for work, and while today’s modern PCs offer substantially more battery life than ever before, I still often find myself working around my PC’s battery limitations. Sometimes it’s a 13-hour trip to Asia, where I do the important work up front, constantly eyeing the battery life indicator as it slides toward zero. Other times it’s running from one presentation to another, invariably forced to plug in before the last meeting, so the PC doesn’t die mid-presentation. The idea of a notebook that runs for 20 hours between charges is a game changer for users like me. The prospect of going days at a time between charges sounds almost too good to be true.

Likewise, there’s the issue of connectivity. Invariably somebody will point out that you can always connect to your phone as a hotspot, and yes that is an option. But it’s a task that takes time and effort to do, which can be problematic in some back-to-back meeting scenarios. And when you’re connecting like this, in an ad hoc way, everything must update at once, which means a flood of emails, etc. And tethering invariably leads to a secondary issue: Running down your smartphone battery. After years of carrying an LTE-enabled iPad, the benefits of an integrated LTE connection are quite clear to me.

Another interesting feature of these new PCs is their instant-on capability. Today’s PCs boot up and resume from sleep much faster than ever before, but they’re still far from instantaneous. The idea of a PC that wakes at the speed of a smartphone has clear productivity benefits.

Cost Savings and Challenges
So it’s clear that a subset of commercial users would embrace the opportunity to use an Always-Connected PC. Convincing their companies these devices are a cost-effective idea is the next challenge. But that’s not difficult when you can articulate the productivity advantages of outfitting high-output mobile employees with these devices. And yes, there is a monthly cost associated with connecting them to the network, but that cost can be rather quickly justified when you consider the ongoing costs many employees accrue while traveling and connecting to fee-based WiFi networks in hotels and other locations. Plus, there are the real-world security issues associated with connecting to random WiFi networks in the wild. And an LTE notebook might also drive cost savings for companies who have full-time remote employees that currently expense their home office broadband connections.

Probably the bigger challenge here is convincing old-school IT departments to try a non-Intel, non-vPro-enabled Windows PC. These folks will also likely balk at the idea of Windows 10 S (the shipping OS on the initial launch devices, which is upgradeable to Windows 10 Pro). Some will also cringe when they hear that 32-bit x86 apps run via emulation (and 64-bit apps aren’t compatible). Finally—and this is the most reasonable pushback—many will need to see real-world benchmarks that prove these systems are competitive with today’s x86-based systems for the use cases in question.

While some of these IT departments will likely pilot some of these new consumer-focused products, others will undoubtedly wait until Microsoft, Qualcomm, and their hardware partners move to ship more commercial-focused products. Others will undoubtedly wait to see how commercial LTE-enabled systems based on Intel’s 8th generation processors compare to Windows on Qualcomm. And that may well be the most exciting result of the news this week. With Qualcomm focused on the Windows PC segment, AMD resurgent in the space, and Intel working hard to sustain its position, all Windows PC users—consumer and commercial—will eventually benefit, and I can’t wait to test the first systems. Likewise, it will be interesting to see the eventual response from competing platforms such as Google’s Chrome OS and Apple’s MacOS.

News You Might Have Missed: Week of Dec 8th, 2017

on December 8, 2017
Reading Time: 2 minutes

Microsoft Whiteboard
At the Surface Pro launch a few months back, Microsoft previewed Whiteboard, a Windows 10 app designed to offer creative and business collaboration across devices. Until this week, the app had been available only in private beta, but starting this past Tuesday, Microsoft kicked off a public beta for anyone with a Windows 10 device.

Monitoring Heart Health

on December 7, 2017
Reading Time: 4 minutes

Long-time readers of my column will know that I suffered a heart attack in 2012 and underwent a triple bypass. As you can imagine, this was a serious operation brought on by long hours, extensive travel, not eating correctly and minimal exercise over a 25+ year period. The good news is that when the heart attack struck, I knew what was happening and got to the hospital in time for them to stabilize me and start preparing me for open-heart surgery within 36 hours of the actual attack.

But from that point on I was and still am a heart patient. Even though the surgery corrected the main issues with three of my arteries, I am still an at-risk person and have to closely monitor things like blood pressure, cholesterol, heartbeat, etc.
One other thing that could be an issue, but hasn’t been so far, is something called AFIB, an irregular heartbeat that could lead to other serious issues related to my heart and health. AFIB is a leading cause of strokes and is responsible for approximately 130,000 deaths and 750,000 hospitalizations in the US every year.

Until recently, the only way I could get this tested was to go to my doctor’s office, which I do twice a year, and have an EKG, which charts my heart rate and looks for any abnormalities such as AFIB. But earlier this year, I was sent a product from AliveCor to test. It is a small mobile device on which I can place my fingers or thumbs; it registers my heart rate in detail and sends a signal to my iPhone that produces an actual EKG reading.

This device is FDA approved and allows me to take a personal EKG to check for AFIB or any heartbeat irregularities anytime I want.
This mobile solution also offers an important option: getting an expert to read the EKG should you see something in the chart that looks different or abnormal. You can have a clinician read it and give feedback for $9.99 or have an actual MD look at it and advise for $19.99.
Thankfully, all of my readings over the year were normal, and I have not had to call for outside analysis.

On Nov 30, AliveCor introduced a new way to do this in the form of a watch band tied to the Apple Watch. While the KardiaMobile reader works well, it is another thing I have to carry with me if I am going to do this daily, especially while on the road. Called the KardiaBand, it sells for $199 and requires a $99-a-year subscription, but I consider this a small price to pay for early warnings of AFIB and the ability to do an EKG easily, anytime I want. I have been testing the KardiaBand for about a week now, and like the KardiaMobile device, it monitors my heart rate and gives me an EKG reading on demand. But since I am wearing the band, it is a bit easier than digging out the KardiaMobile device and using it, which means I can get readings more often to stay on top of my overall heart health.

I realize this will probably get more attention from an older audience or from people with Type 2 diabetes and high blood pressure, since AFIB is a leading cause of strokes, and watching for changes in EKG readings can save the lives of high-risk people. However, I have friends who had strokes in their 20s and 30s, and if heart disease runs in your family, the KardiaMobile reader, which costs $99, or the KardiaBand should be considered as part of your overall health monitoring program.

Also on Nov 30, Apple introduced an important heart study it is conducting in conjunction with Stanford that uses the Apple Watch to perform a similar, EKG-like test to check specifically for AFIB. https://www.apple.com/newsroom/2017/11/apple-heart-study-launches-to-identify-irregular-heart-rhythms/

Since this is a study, it does not need FDA approval, but the program does provide direct contact with a physician should the Apple Watch, through this special study monitoring program, detect any abnormalities in your heart readings. At that point, you will be notified that there might be a problem, and they will send a special patch that you wear for seven days to monitor your heart readings 24/7 and produce a more precise analysis. If AFIB is detected, they will have you see a doctor or cardiologist as soon as possible.

According to Apple, “To calculate heart rate and rhythm, Apple Watch’s sensor uses green LED lights flashing hundreds of times per second and light-sensitive photodiodes to detect the amount of blood flowing through the wrist. The sensor’s unique optical design gathers signals from four distinct points on the wrist, and when combined with powerful software algorithms, Apple Watch isolates heart rhythms from other noise. The Apple Heart Study app uses this technology to identify an irregular heart rhythm.”
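
To make the idea a bit more concrete, here is a tiny, purely illustrative Python sketch of the sort of processing such a pipeline might perform: detecting beats in an optical pulse signal and summarizing the rate and regularity of the beat-to-beat intervals. The threshold, sample rate, and synthetic signal are all assumptions made for the sketch; this is not Apple’s or AliveCor’s actual algorithm.

    # Toy sketch: estimate heart rate and beat-to-beat irregularity from an
    # optical pulse (PPG-style) signal. Illustrative only -- not the real method.
    import math

    def detect_beats(samples, rate_hz, threshold=0.5):
        """Return the times (in seconds) of simple threshold-crossing 'beats'."""
        beats = []
        for i in range(1, len(samples)):
            if samples[i - 1] < threshold <= samples[i]:  # rising edge
                beats.append(i / rate_hz)
        return beats

    def summarize_rhythm(beat_times):
        """Report average heart rate and the spread of beat-to-beat intervals."""
        intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
        if not intervals:
            return None
        mean_rr = sum(intervals) / len(intervals)
        spread = math.sqrt(sum((x - mean_rr) ** 2 for x in intervals) / len(intervals))
        return {"bpm": round(60.0 / mean_rr, 1), "interval_spread_s": round(spread, 3)}

    if __name__ == "__main__":
        rate_hz = 50  # 50 samples per second
        # Synthetic pulse wave at roughly 1.2 beats per second (about 72 bpm).
        signal = [math.sin(2 * math.pi * 1.2 * t / rate_hz) for t in range(10 * rate_hz)]
        print(summarize_rhythm(detect_beats(signal, rate_hz)))  # roughly 72 bpm, tiny spread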

Apple’s interest in health stems from Steve Jobs’ own health issues. As he became more attuned to the importance of finding proactive ways to monitor and improve one’s health, he made this one of the tenets of Apple’s overall vision. As I have often written over the last few years, Apple is serious about helping its customers stay healthy, and this Heart Study is another sign of that commitment.

Misunderstanding Smart Speakers

on December 7, 2017
Reading Time: 4 minutes

I’m not sure if there is a misunderstanding of the smart speaker category or if it is just because of how I have my news, analysis, and Twitter feeds curated. Whatever the case, I’d like to elaborate on how I think about this space now that we have a few years of market intelligence and consumer behavior data on these products in our database.

In retrospect, we should have seen this category coming from a mile away. To better understand why it took off so quickly, all we need to do is look back at the iPod, or perhaps even further to the Walkman. The value of a personal, mobile music collection was a driving force behind some of the most successful consumer electronics products up until the smartphone. To this day, listening to that music collection on smartphones remains one of the most common use cases. Bottom line: music matters a lot to consumers.

Will Apple Use Tax Breaks to Create More Jobs?

on December 6, 2017
Reading Time: 3 minutes

If you keep an eye on what the financial analysts are saying about Apple these days, you know that almost all have raised their stock price targets closer and closer to the $200 per share range. Almost all are bullish, and some believe Apple’s new fiscal year will break all records and that we could see Apple become the first company ever with a trillion dollar valuation sometime in 2018.

The Always Connected PC will need more than Connectivity to be a Hit

on December 6, 2017
Reading Time: 5 minutes

This week, at the Snapdragon Summit, Qualcomm and Microsoft launched their Always Connected PC initiative. This is not the first time we have heard about connected PCs. Cellular connectivity has been available on PCs for years, and thus far penetration among consumers has been relatively low. There are, of course, regional differences in markets like Europe, where WiFi connectivity is both hard to come by and expensive. But overall, consumers seem happy to use their phone hotspot for those times they really need to be connected.

When it comes to tablets, connectivity has mattered more to consumers, especially in those regions where you can either add a tablet to your data plan for as little as $10 or have a separate data plan with no contract obligations. iPads have had a reasonably high cellular attach rate, with our data showing numbers as high as 49% in the US. Yet, of those devices, only 45% have an active data plan associated with them. The most significant driver (52%) for these consumers is the peace of mind of always having connectivity just in case they need it. And peace of mind, as well as convenience, are always big drivers!

With holiday shopping in full swing, PC vendors are looking for ways to entice consumers to spend their holiday budget on a new PC. Intel has been showing how new technologies like 4K gaming and video, as well as VR, will not be available to you unless you invest in a new PC. And soon Microsoft, Qualcomm, and their partners will be busy talking about the joys of the Always Connected PC.

The Always Connected tagline is not limited to cellular connectivity. It also speaks to a PC that has instant-on and long battery life. It promises to deliver a computing experience that will come with you wherever you are and that will free you from looking for a power source, depending on free unsecured Wi-Fi, or jumping through hoops to connect to your phone. While much depends on what kind of offers we will see from carriers, who would not want to be free to work or play anytime, anywhere? This will be particularly true if devices start shipping with a free connection trial so users get hooked on that convenience and peace of mind.

Apps & Services drive the Need for Connectivity

If you have seen the latest iPad commercial and can relate to it, you might have already bought into the promise of an always connected computing experience. It is ironic that Apple is helping sell the vision that Microsoft, Qualcomm and their partners want to deliver. Except, of course, Apple is also telling you that your always connected life does not require a PC.

And here is the heart of the matter. For consumers, the desire to be connected has little to do with being productive and a lot to do with getting “stuff” done whenever we want. That stuff can range from streaming music, to uploading to social media, to playing online games, to shopping online… basically being able to do the same things we do on our smartphones, but with the advantage of a larger screen and a keyboard. Forty-three percent of the consumers we interviewed who have a connected iPad said they do a little bit of everything. This does not mean we will carry our smartphones less or rely on them any less. It simply means we will have the option to choose the best tool for the job without having to compromise on connectivity and battery life.

While connectivity and battery life will no longer be in question, the Always Connected PC must deliver on the variety of apps and services we can access with it. This will require a stronger investment in the Windows App Store than what we have seen so far from Microsoft, especially as they try to position Windows 10 S – which is fully reliant on store apps – as the most modern computing experience.

The Windows App Store was the weakest link for Windows Mobile, and it cannot be the weakest link for the Always Connected PC, or for Windows 10 S, for that matter. It would be a terrible mistake to think that connectivity for productivity alone will be enough of a draw to see consumers flock to stores and buy these devices. I am sure both OEMs and carriers have learned a lot from the netbook experiment, not just in terms of design and marketing but also in terms of the value proposition that consumers must see in a device.

An Opportunity for Phone Manufacturers to Broaden their Scope

Traditional PC manufacturers are continuing to look for new drivers to fuel sales, and Always Connected PCs are just another way to get consumers’ attention. Yet, some might be a little shy about investing too much in this segment, given how the netbook and Windows RT experiments ended. At the Snapdragon Summit, we saw devices from Asus and HP, and Lenovo was mentioned as having a device in time for CES. The challenge for pure PC manufacturers rests on balancing support for all connected PCs, Intel-based as well as Qualcomm-based ones, while helping consumers decide among their full product portfolios.

Given that Always Connected PCs will speak more to highly mobile users, I see a great opportunity for phone manufacturers such as Samsung and Huawei to invest in these devices to widen their reach. They have relationships with the carriers and with Qualcomm, as well as their own semiconductor capabilities. Plus, they do not have to figure out where to place these products within a wider PC offering. Samsung, in particular, with its trusted Galaxy brand, might be seen by many consumers as a natural choice for an Always Connected PC.

Samsung has played in the PC space at a worldwide level with a few devices but has not really put much marketing push behind the effort. This initiative might indeed offer a good opportunity to try a more aggressive approach without having to commit to becoming an all-around PC vendor. Samsung could, of course, consider the enterprise market as well, as its ambitions of delivering Knox as a full-fledged platform strengthen. Yet, this road will require a tighter collaboration with Microsoft than we have seen thus far.

Consumers will not care about who empowers their Connectivity

Connectivity must come with the right design, the right marketing, and, most of all, the right price point. What the right price point will be depends heavily on the value buyers see in the experience delivered to them. What will not matter to consumers is how the connectivity is delivered. We already know that, while the Always Connected PC effort is now driven by Microsoft and Qualcomm, Intel will be jumping on the bandwagon too while trying to position its solution as superior.

What will ultimately matter to consumers when choosing a solution remains to be seen. Will consumers trust Qualcomm, which is responsible for their everyday connectivity on their phones? Or will consumers look for an Intel Inside logo, as they always have when buying a new PC? Hard to say at this point, but two things are clear: Microsoft and Qualcomm must invest in building a differentiated value proposition, and they must help consumers understand what it is they are buying into.

The value proposition of Always Connected PCs might revolve around positioning these devices closer to a smartphone than to a traditional PC. The freedom of a phone experience when it comes to things to do, battery life, and ease of connectivity, coupled with a bigger screen, a modern PC OS, and a highly mobile form factor, is what consumers are looking for. It is a solution that, if implemented right, might even have Windows users questioning what a PC really is as they embrace a modern computing experience.

Rethinking Software

on December 5, 2017
Reading Time: 4 minutes

Virtually everyone who closely watches the tech industry has heard venture capitalist Marc Andreessen’s famous quote about “software eating the world.” The implication, of course, is that software plays the most important role in tech and that the capabilities of software are the only ones that really matter. In addition, there’s the further suggestion that the only way to really make money in tech is with software.

While I won’t disagree with the underlying principles, I am starting to wonder if what we’ve traditionally thought of as software will really continue to exist several years into the future. It’s not that there won’t be code running on hardware devices of all types, but the way it’s packaged, sold, discussed, and even developed is on the cusp of some radical transformations.

In fact, there have already been substantial changes to the traditional types of software that were so dominant in the tech industry for decades: operating systems and applications.

Operating systems (OS’s) used to be considered kings of the software hill. Not only did they sit at the heart of client devices, servers and virtually every intelligent device ever created, they also enabled the all-powerful ecosystems. It was their structure, rules, APIs and other tools that enabled 3rd party companies to create applications, utilities, add-ons, and other software pieces that turned OS’s into platforms.

While those structures remain in place, the world around us has evolved to include multiple important OS options. In addition, though there are certainly important differences between OS choices across different types of devices, most application vendors have had to focus on the commonality across platforms, rather than those unique differences, leading to applications that run across multiple platforms. For this, and many other reasons, platforms and specific operating systems have lost much of their value. Yes, they still serve an important purpose, but they are no longer the sole arbiters of what kinds of applications can be built.

Applications have also seen dramatic transformations. Gone are the days of large, monolithic applications that only run on certain platforms. They’ve been replaced by smaller “apps” that run across a variety of different platforms. From a business model perspective, we’ve gone from standalone applications costing hundreds of dollars to single digit dollar mobile apps to completely free apps that rely on services and subscriptions to make money.

Even in the world of large applications, there’s been a dramatic shift to subscription-driven pricing, with Microsoft’s Office 365 and Adobe’s Creative Cloud services being some of the most popular. Not all end users are excited about this model, but it seems clear that’s where traditional applications are heading.

Service and subscription-driven models have also come to mobile clients, servers and other devices, as companies have realized that the continuous flow of smaller amounts of regular income provided by these models (as opposed to large lump sum purchases) offers much more stable revenues.

Even the structure of software has changed, with large applications being broken down into smaller chunks that can act independently, but work together to provide the functionality of a full application. This notion of containers (or chunks of code that function as independent software objects) is particularly prevalent among cloud-based applications, but it’s not hard to imagine it being applied to device-based applications as well. In addition to their other benefits, containers bring with them platform and physical location independence and portability, two key attributes that will be essential for new types of computing architectures—such as edge computing—which are widely expected to dramatically influence many future tech developments.

Another benefit of containers is reusability, meaning they can be leveraged across multiple applications. While this is certainly interesting, it does start to raise questions around complexity and monetization for containers that don’t yet have easy answers.
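
As a rough, hypothetical illustration of the idea, the Python sketch below splits a tiny “application” into two independent pieces that only talk to each other over HTTP; each piece could be packaged, deployed, scaled, and reused separately, for example one per container. The service names, port, and payload are invented for the sketch and are not drawn from any real product.

    # Minimal sketch of an application decomposed into two independent,
    # container-style components that cooperate over HTTP. Hypothetical example.
    import json
    import threading
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    # Component 1: a "pricing" service that only knows how to quote a price.
    class PricingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps({"price_usd": 9.99}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):  # keep the demo output quiet
            pass

    def run_pricing_service(port=8001):
        HTTPServer(("localhost", port), PricingHandler).serve_forever()

    # Component 2: a "storefront" that composes the pricing service's output.
    def storefront_page(pricing_url="http://localhost:8001/"):
        quote = json.loads(urlopen(pricing_url).read())
        return "Subscription costs ${}/month".format(quote["price_usd"])

    if __name__ == "__main__":
        threading.Thread(target=run_pricing_service, daemon=True).start()
        time.sleep(0.5)            # give the service a moment to start
        print(storefront_page())   # the pieces work together but remain independent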

There are even growing questions about what really constitutes software as we know it. Technically, building voice-based “skills” for an Amazon Echo-based product is software design, but the manner in which people interact with skills is much different from how they’ve interacted with other types of software. As digital assistant models continue to evolve, the nature of how these component-like pieces are integrated into the assistant platform will also likely change. Plus, as with containers, though some new experiments have been started, there are still serious questions about how this type of code can be monetized.
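
For instance, the handler behind a voice skill tends to look less like a traditional application and more like a small function that maps a recognized intent to a spoken response. The Python sketch below is written in the general style of an Alexa custom skill handler, but the intent name is hypothetical and the request/response JSON is abbreviated relative to the real Alexa Skills Kit schema.

    # Skeletal voice-skill handler in the style of an Alexa custom skill.
    # The intent name is hypothetical and the JSON shapes are simplified.
    def handle_request(event, context=None):
        intent = event.get("request", {}).get("intent", {}).get("name", "")
        if intent == "GetCommuteTimeIntent":          # hypothetical intent
            speech = "Your commute should take about 25 minutes today."
        else:
            speech = "Sorry, I didn't catch that."
        return {
            "version": "1.0",
            "response": {
                "outputSpeech": {"type": "PlainText", "text": speech},
                "shouldEndSession": True,
            },
        }

    # Example invocation with a toy request payload:
    print(handle_request({"request": {"type": "IntentRequest",
                                      "intent": {"name": "GetCommuteTimeIntent"}}}))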

Finally, and most importantly, virtually everyone is adding Artificial Intelligence (AI) and machine learning capabilities into their software code. Right now, many of these additions are relatively simple, pattern-recognition-based functions, but the future is likely to be driven by software that, in many ways, can start to rewrite itself as it learns these patterns and adjusts appropriately. This obviously marks a significant shift in the normal software development process, and it remains to be seen how companies will try to package and sell these capabilities.
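
As a deliberately simple, hypothetical example of the kind of pattern recognition already being folded into everyday application code, the sketch below “learns” which folder a new message belongs in by matching it against messages the user filed previously, rather than following hand-written rules.

    # Toy pattern-recognition helper embedded in application code: suggest a
    # folder for a new message based on word overlap with past filings.
    # Illustrative only; real products use far more sophisticated models.
    def tokenize(text):
        return set(text.lower().split())

    def suggest_folder(message, history):
        """history: list of (message_text, folder) pairs the user filed previously."""
        best_folder, best_overlap = None, 0
        words = tokenize(message)
        for past_text, folder in history:
            overlap = len(words & tokenize(past_text))
            if overlap > best_overlap:
                best_folder, best_overlap = folder, overlap
        return best_folder

    if __name__ == "__main__":
        past = [("quarterly sales report attached", "Work"),
                ("dinner reservation friday night", "Personal")]
        print(suggest_folder("updated sales figures for the quarter", past))  # -> Work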

Taken together, the implications of all these software-related developments are profound. In fact, one could argue that software is being “eaten” by services. That’s already occurring in several areas (think Software as a Service, or SaaS), and the future of code-based capabilities will likely all be delivered through some type of monetized service offering. While that may be appealing in some ways, there is a legitimate question about how many services any person, or any company, will be willing to sign up for, particularly when there are costs attached; we need to realistically recognize that this business model can only be taken so far.

Watching the tech industry evolve over the last several decades, it’s fascinating to see how many pendulum shifts occur across many different segments. From computing paradigms to semiconductor architectures to the role and balance between hardware, software and services, it seems that what was once old can quickly become new again. In the case of software—which used to be bundled for free with early computing hardware—we may be coming full circle, with most code soon becoming little more than a means to sell services that leverage its capabilities. It certainly won’t happen overnight, but the end of software as we know it may be sooner than we think.

Machine Learning and the Camera Sensor

on December 4, 2017
Reading Time: 4 minutes

One of the larger themes I’ve been raising in industry conversations with tech leaders and executives is how giving computers the ability to see and hear us will define a new era of computing. All the advancements in machine learning over the past few years have set a new foundation for giving computers the ability to learn on their own. A computer’s ability to learn depends entirely on the information available to it. This data has largely been textual, but we are on the cusp of computers being able to utilize all of their sensors to learn as well. This includes the microphone, camera sensor, GPS, accelerometer, and a host of other optional sensors available to integrate into our computers. I do believe, however, that the camera sensor will be the one that leads to the most obvious customer value of machine learning and AI.