Podcast: NPD Holiday Sales for Tech Products

on January 20, 2018
Reading Time: 1 minute

This week’s Tech.pinions podcast features Stephen Baker of NPD, along with Ben Bajarin and Bob O’Donnell discussing NPD’s holiday time retail sales data for a number of major tech product categories including PCs, tablets, TVs, smart home and more.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

HTC’s Vive Pro Targets Growing Commercial VR Market

on January 19, 2018
Reading Time: 4 minutes

Last year I wrote about the growing interest in virtual reality (VR) from industries such as retail, education, manufacturing, healthcare, and construction. These types of companies–and others–continue to show strong interest in the category, but some of the technical limitations and ergonomic issues with existing high-end VR hardware have been a roadblock for some. At CES, HTC announced a new version of its VR headset called the Vive Pro that addresses many of the issues commercial users have with today’s shipping headsets and positions the company well for accelerated commercial shipment growth in 2018.

Resolution Boost
I had the opportunity to demo the new Vive Pro at CES, and the resolution upgrade in the Pro is a noticeable improvement. The standard Vive offers dual 3.6-inch OLED displays with 1080 by 1200 resolution per eye, while the Pro utilizes new 3.5-inch OLED displays with 1440 by 1600 resolution per eye. I tried several different applications, including a social networking app that took place inside a scene from the upcoming Ready Player One movie, and a medical training app, where another person guided me through a medical procedure. The increased resolution drives a much more immersive experience. It also makes it much easier to identify small details in the environment, as well as read text (a key for many commercial use cases).
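To put that upgrade in perspective, the per-eye pixel counts work out to a substantial jump. A quick illustrative calculation (simple arithmetic on the resolutions cited above, nothing vendor-specific):

```python
# Per-eye pixel counts for the original Vive and the Vive Pro,
# using the resolutions cited above.
vive_pixels = 1080 * 1200      # original Vive: 1,296,000 pixels per eye
vive_pro_pixels = 1440 * 1600  # Vive Pro: 2,304,000 pixels per eye

# Percentage increase in pixels per eye
increase = (vive_pro_pixels / vive_pixels - 1) * 100
print(f"Pixel increase per eye: {increase:.0f}%")  # roughly 78%
```

That roughly 78 percent increase in pixels per eye helps explain why small details and on-screen text become so much easier to resolve.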

Moving to offer improved headset resolution was a key target for hardware vendors across the VR landscape in late 2017 and headed into 2018. All of the shipping Microsoft-based mixed reality headsets from Dell, Lenovo, Acer, and HP have higher resolution, 2.9-inch LCD panels (1440 by 1440), and Samsung’s Odyssey headset utilizes what is likely the same 3.5-inch, 1440 by 1600 resolution OLEDs as the Vive Pro.

HTC hasn’t yet disclosed the minimum PC specifications required to best utilize the improved resolution of the Pro, but company executives did note that driving a higher resolution experience will likely require more PC horsepower. And, of course, the content and apps need to support the increased resolution, too.

Improved Sound, Ergonomics
In addition to the improved displays, HTC also added integrated headphones and an amplifier to the Vive Pro headset. Sound is a crucial element of full immersion in virtual reality, and the lack of an integrated solution in the existing Vive was problematic: every time you take off the headset, you have to remove and manage a separate set of headphones, too. While this is merely irritating to most consumer users, it’s a larger problem for commercial users who need to slip in and out of the headset often. It’s also an issue in B2C scenarios such as retail, where sales associates are moving people in and out of the headset on a regular basis.

In addition to integrating the headphones, HTC also took the opportunity with the Pro to rebalance the entire headset with the goal of making it more comfortable to wear for longer periods of time. I didn’t spend enough time in the Vive Pro to decide how big an improvement this was, but any improvement is a welcome one. HTC also updated the user’s ability to readjust the headset with a new sizing dial, and there is also a setting that lets you adjust the distance of the screens from your eyes. Additional improvements include new dual microphones with noise canceling and dual front-facing cameras.

Wireless Connectivity
Probably the single biggest request from commercial buyers when it comes to VR is the ability to ditch the cables that tether the headset to the PC. While there have been third-party accessories that do this, at CES HTC announced it would ship its own Vive Wireless Adapter later this year. Based upon Intel’s WiGig technology, it utilizes the 60-GHz band. I wasn’t able to test the adapter, but HTC says it offers a high-performance, low latency experience.

Eliminating the cable addresses one of the biggest concerns that businesses have with VR: The danger of somebody tripping over the cable. Whether it is an employee or a customer, today’s tethered headsets represent a messy environment at best, and the move to wireless will help address this. Unfortunately, HTC doesn’t plan to ship the accessory standard with the Vive Pro, instead offering it as a separate upgrade for both the standard and pro versions of the headset when it ships in the third quarter of 2018. The company hasn’t set pricing yet.

One area that HTC hasn’t addressed with the Pro is the continued need for standalone sensors in the room for six-degree-of-freedom tracking. Both Vive headsets and today’s Oculus Rift use two sensors stationed in the room to do what is called outside-in tracking. The Microsoft-based products track movement using inside-out tracking integrated into the headset, which removes the need for these external sensors. The Microsoft-based products I’ve tested do this well, but many in the industry—including HTC—still consider external sensors more accurate. The Vive Pro will initially use the existing Valve-created Steam VR Tracking 1.0 software to drive the same sensors that ship with the standard Vive headset. Later this year, when Valve releases the Steam VR 2.0 Tracking software, HTC will bundle new sensors to support it. The new standard will offer an expanded ten-by-ten-meter coverage area, as well as the ability to use up to four sensors for additional tracking.

In all, with the Vive Pro and wireless accessory, HTC has done a good job of putting together a solid new package that addresses many of the hardware hang-ups that have caused some businesses pause when considering VR deployments. HTC says the Vive Pro will ship in the first quarter of this year, but it hasn’t announced pricing yet. I look forward to seeing how developers and companies utilize this updated technology, and how competitors respond with their own new hardware in the coming months.

News You might have missed: Week of January 19th

on January 19, 2018
Reading Time: 3 minutes

Amazon HQ2 is not coming to the Bay Area

Amazon is looking for a city to host its HQ2. San Francisco, Oakland, Fremont, Richmond, and Concord put in a joint bid offering Amazon locations in each of those cities, while San Jose put in a separate bid.

The San Francisco-Oakland-East Bay coalition’s bid included locations such as the former Concord Naval Weapons Station, a Coliseum City location and sites in downtown Oakland, Fremont’s Warm Springs Innovation District, the Hunter’s Point Shipyard in San Francisco and the Hilltop Mall and Richmond Field Station in Richmond.

Health Tech is Coming Into Its Own

on January 18, 2018
Reading Time: 3 minutes

It has been interesting to watch health-related technology solutions accelerate over the last few years. Much of this has been driven by the smartphone and the bar that it raised in terms of our expectations with many products. The smartphone also helped commoditize many technological components, making them affordable for companies to begin to integrate into their products. In my opinion, we can also attribute some of health tech’s momentum to wearables, and the Apple Watch. Now that the tech industry understands consumers are interested in using technology to monitor and track their health, it has awoken to the market opportunity.

Qualcomm Lays Out Plans for Future without Broadcom

on January 18, 2018
Reading Time: 4 minutes

In early November of last year, Broadcom submitted an unsolicited bid to purchase Qualcomm at a price of $70 per share, or roughly $130B. It would have absorbed Qualcomm, the third largest chip maker in the world behind only Intel and Samsung, and would have sent a shockwave through the semiconductor and technology ecosystem like never before. The largest completed technology sector acquisition to date was Dell’s purchase of EMC for $67B; the Broadcom bid to purchase Qualcomm would have nearly doubled that.

However, Qualcomm’s board rejected the offer while promising shareholders value and direction for the company going forward. Fireworks ensued and Broadcom has now launched a hostile takeover that starts with a Board of Directors replacement, to be voted upon by shareholders. In an attempt to maintain its independence and direction, Qualcomm decided to go on the offensive and detail for the public its outlook and roadmap for the future, confident that the opportunities before it exceed what Broadcom has brought to the table.

There are numerous angles that have been written on how the merger of Broadcom and Qualcomm would have a negative impact on the industry. These range from the slowing of 5G progress, a lack of competing solutions in the networking space, slower introduction of new cellular technologies, a shifting R&D cycle, damaging cost reductions in product development, among others. Today I want to quickly touch on how Qualcomm has presented its future without Broadcom through a 35-minute long video and presentation this week. CEO Steve Mollenkopf and the top executives at Qualcomm have a substantial vision for where this company will be in just a few short years.

To be blunt, the position Qualcomm finds itself in today is unenviable. The licensing division remains in a major dispute with its largest customer (Apple), and regulatory groups in several regions of the globe are investigating Qualcomm’s business models and licensing tactics. With licensing income being withheld by Apple and another licensee, Qualcomm may appear weak and ripe for acquisition. Though the bid from Broadcom was near the median of acquisition offers for the technology industry (based on Bloomberg’s estimation), Qualcomm and its board believe that the next several years warrant a much bigger discussion than Broadcom is willing to engage in.

The key to Qualcomm’s argument against the buy-out stems from the growth it projects in its core mobile market and in adjacent markets. Even without the NXP acquisition completed, Qualcomm sees value in those adjacent markets for FY19 revenues. RF front-end development in tier-1 designs, with the leading configurable front-end optimized for 5G migration and continued 4G technology support, offers a substantial window for revenue, with a $2-3B target. The automotive industry is growing in complexity with the advent of self-driving technology, and Qualcomm sees a $1B opportunity for FY19 with its strength in telematics, Bluetooth, and infotainment systems. Though NVIDIA dominates headlines when it comes to autonomous driving with its high-performance graphics systems, and Qualcomm will need more engineering time to target that segment, the surrounding systems are ripe for the performance, connectivity, and power efficiency that Qualcomm’s chips can provide.

Qualcomm is a leader in the IoT space with computing and connectivity options that others are unable to match. It currently works with more than 500 customers across voice and music, wearables, and even smart city integration. Qualcomm believes this will become a $2B revenue opportunity for it by FY19. The compute segment includes the recently announced Windows 10-based PCs, which Qualcomm expects to grow into another $1B in revenue for FY19. Qualcomm’s advantages in connectivity, 4G/5G, and the always-on, always-connected battery life gains provide standout features that give it the potential to dislodge Intel in key market segments.

Finally, Qualcomm estimates the networking segment it has built for home and enterprise spaces will grow into another $1B revenue segment by FY19. Qualcomm and its partners are already leaders in home and enterprise wireless networks, and the spike in growth for mesh Wi-Fi networking will be a requirement for enabling many carrier solutions and 802.11ax proliferation.
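Taken together, those adjacent-market targets sum to a meaningful number. A quick back-of-the-envelope tally, using the round figures cited above (the article's numbers, not official guidance):

```python
# FY19 adjacent-market revenue targets cited in the presentation,
# as (low, high) ranges in billions of dollars. Illustrative only.
targets_billions = {
    "RF front-end": (2, 3),          # $2-3B range
    "automotive": (1, 1),
    "IoT": (2, 2),
    "compute (Windows PCs)": (1, 1),
    "networking": (1, 1),
}

low = sum(lo for lo, hi in targets_billions.values())
high = sum(hi for lo, hi in targets_billions.values())
print(f"Adjacent-market FY19 target: ${low}-{high}B")  # $7-8B
```

Roughly $7-8B in adjacent revenue would be a significant complement to the core mobile business in the $35-37B FY19 target Qualcomm has laid out.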

Beyond new product types are new product growth regions. China provides as much as $6B in potential product revenue in the mobile space and grew 25% YoY in FY17. That is two times the revenue that Qualcomm receives from Apple today, a statement clearly made to alleviate the concern that Qualcomm’s future is inexorably tied to the outcome of the Apple litigation and future relationship.

Because the Chinese OEMs are gaining market share globally, including the likes of Xiaomi, Vivo, and Oppo, with rapid expansion into India, Europe, and eventually the US, Qualcomm’s existing relationships with these customers will result in increased revenues. The China market is going through a transition, and consumers are migrating to higher tiers of devices with more features and more capabilities. This movement favors the higher performance Snapdragon lineup when compared to competing options from MediaTek and will likely result in higher domestic share on Chinese OEMs’ product lines.

The migration to 5G will play a critical role in the growth of Qualcomm’s technological roadmap, and the company believes it has a 12-24 month lead over its merchant competitors (those that sell to OEMs) in this space. Qualcomm is well known as a driver and creator of new industry standards, with the push from 3G to 4G as a prime example. During that transition period, from FY10 to FY13, Qualcomm’s revenue doubled, and though the company won’t commit to estimates like that for the move from 4G to 5G, with the rollout starting in early 2019, it is the prime opportunity for Qualcomm’s connectivity advantage to be showcased.

From the licensing angle, Qualcomm believes that as much as 75% of the 4G patents it holds will be applicable to the 5G roadmap. This puts the company in a great position to leverage its previous technology R&D for future income.

These are just a handful of the emerging opportunities that Qualcomm sees before it in 2018 and through 2020. I didn’t even touch on the 6% annual growth of the Android ecosystem, which holds an 80% share in smartphones, where Qualcomm is the chip leader. A target of $35-37B in FY2019 revenue is a daunting task and will require execution on multiple fronts for Qualcomm to meet that goal. But with the areas of growth outlined above, the offer from Broadcom stands in stark contrast to the reality of a company poised to cultivate the next generation of connected technologies.

Microsoft Cortana, the Cinderella of CES18

on January 17, 2018
Reading Time: 4 minutes

Everybody was eagerly watching to see whether Alexa maintained the leadership she so clearly established in 2017. This year, of course, she was not the only belle of the ball. Google Assistant had a very strong presence both in advertising and announcements but still trailed behind Alexa. As Ben Bajarin noted, Siri and the Apple ecosystem were much less visible than we have grown accustomed to since the iPhone and iOS hit the market. We can list the reasons why this might be the case: a lack of strong marketing on Apple’s part for HomeKit and how it is linked to Siri, a limbo phase while we wait for the new and improved Siri in HomePod, or maybe the fact that, when it comes to the home, even Apple fans do not only consider Apple. Whatever the reason, I think Apple gets the benefit of the doubt that they will be there when it matters, but Microsoft and Cortana do not get the same treatment.

Cortana’s Support is a Tick Box for Windows Partners

Cortana was not totally absent at CES, as there were some announcements made, like added support for Ecobee, Geeni, Honeywell Lyric, IFTTT, LIFX, TP-Link Kasa, and Honeywell Total Connect Comfort. Yet, when it came to Windows partners and bigger brands, Cortana was always the last assistant mentioned. It seemed that having support for Cortana had more to do with wanting to support Microsoft and Windows than with believing Cortana has a real shot at becoming one of the assistants in our life.

While this tick-in-the-box approach might look good on paper, it is not likely to make a difference in the market. Lack of conviction that Cortana is a differentiator will impact how much it is highlighted as a feature, both in products and in marketing.

The Race is Long: Pacing Yourself is Good, but So is Showing Determination to Win

In a recent interview with GeekWire, Andrew Shuman, corporate VP of Cortana Engineering, stated that “it’s a long journey to make a real assistant that you can communicate with over a longer period of time to really be approachable and interesting and better than the alternative.” I could not agree more with this statement.

The Digital Assistant race is a marathon, not a sprint and we are just at the beginning. Getting it right when it comes to skills such as context-awareness, natural language and empathy will make a huge difference in building engagement and trust. Yet, precisely because the relationship with a digital assistant is not born overnight, engagement must start somewhere.

Today, our exchanges with an assistant might be basic and certainly nowhere close to a conversation. That said, we are starting to learn who does what best, whose personality we like the most and who we trust to get it right. Over time, our engagement will drive loyalty and while we might find a better assistant the idea of having to start again with training her might just put us off too much.

This is Microsoft’s risk, which is not very dissimilar to what they faced in mobile. By the time Windows Mobile was a solid alternative to iOS and Android, consumers were either too vested in one of those two, or inertia simply kept them from trying something new.

Past missteps, like Windows Mobile, are what Microsoft is being judged on here rather than the actual efforts made with Cortana. Microsoft was certainly not late to the voice assistant party: Cortana was announced at Build 2014, a few months before Alexa. But there is a feeling that Microsoft, even more so than Apple, was unable to capitalize on this early move.

Needing vs. having an Ecosystem

A sense of urgency is a great thing for any brand. When it comes to digital assistants I firmly believe that nobody had a stronger sense of urgency than Amazon. Finding a different way to grab consumers after the failed Fire Phone attempt was paramount and they went all in with Echo and Alexa.

The same cannot be said for Microsoft or Apple. Microsoft had a long list of partners around Windows – albeit more on the enterprise side than on the consumer side – and Apple had iOS. This made the assistant a longer-term game in my view and certainly not a short-term necessity.

When it comes to Microsoft, in particular, I do believe they think there is more time, given enterprises are yet to show much interest in digital assistants despite Amazon making Alexa for Business. The question then becomes: is Microsoft interested in having a digital assistant in the consumer market, or at least one that will leave the office and come home with us? If the answer is yes, then their sense of urgency should certainly grow. If smartphones have taught us anything, it is that the flow of technology influence now runs from the consumer to the enterprise, and I do not see that reversing any time soon.

Cortana needs a Bigger Voice

Finally, if the answer to my question above is yes, Microsoft does want Cortana outside of the enterprise, they must talk about her more. Often it feels that Cortana is just another Windows 10 feature rather than, for lack of better words, a “tool” in her own right. While I understand the difficulty of talking about Cortana in a broader context due to her strong presence on PCs, Microsoft must focus on talking about her capabilities, providing more points of contact with users, and broadening her reach in the home in particular. Easier said than done, I know, but it could all start with Microsoft being more open about how they see Cortana developing. Microsoft has been able to share the vision around HoloLens since the device was merely a prototype fresh out of the lab; surely they can articulate how they see Cortana evolving and take us on that journey.

Envisioning CES in 2024

on January 17, 2018
Reading Time: 5 minutes

Last week I was in Las Vegas to attend my 43rd CES. The show had 2.5 million square feet of exhibit space, and over 180,000 people attended to see the latest and greatest in technology. CES is one of the largest trade shows in the world, and for most of us in tech, we have to go for many reasons. In my case, I have multiple meetings with clients and potential clients and, since I do my homework, I pre-select key products I want to see in person.

The Picture is Clear for Virtual Reality

on January 16, 2018
Reading Time: 2 minutes

Each year, virtual reality has become a bigger story. The last few years have brought more questions than answers to the VR category, but I believe the story around virtual reality’s value is starting to become clear.

At CES this year, I saw positive momentum for VR in both technology and use cases. HTC showed off its Vive Pro headset, which includes both wired and wireless solutions as well as a dramatic increase in resolution for VR experiences. We have known for quite some time that gaming was going to be a driver for virtual reality, and that has certainly been the case. Including Gear VR, most estimates peg the installed base of VR headsets (not including cheap solutions like Cardboard) at ~10 million units. Forecasts for 2018 are ~13M units, growing to 27M sold in 2020. Definitely a slow burn; however, in the next few years adoption of VR headsets will be large enough to be taken seriously by developers and applications/solutions providers.
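For context, those unit forecasts imply a fairly brisk compound growth rate. A quick illustrative calculation, using the rounded figures cited above (analyst estimates, not vendor data):

```python
# Implied compound annual growth rate from ~13M headsets sold in 2018
# to ~27M sold in 2020. Illustrative arithmetic only.
units_2018, units_2020 = 13e6, 27e6
years = 2
cagr = (units_2020 / units_2018) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.0%}")  # about 44% per year
```

Sustained growth in the 40-percent-plus range is what makes the category credible to developers, even from a small base.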

The Tech Industry Needs Functional Safety

on January 16, 2018
Reading Time: 3 minutes

The tech industry’s infatuation with the automobile industry has become rather obvious over the last few years. Nearly everyone in tech, it seems, is dying to get involved with automotive, either on a component, high-level partnership, or even on a finished vehicle basis.

The reason, of course, is rather simple—it’s the money. As big and strong as the tech industry may be (counting combined revenues of PCs, smartphones, tablets and wearables), the automobile industry is still several times larger by most counts, with worldwide revenues reaching into the trillions of dollars per year.

In addition to the dollars, many in the tech business believe they can bring new capabilities and perspectives into the auto business. Put another way, there’s some pretty flagrant egotism in the tech business with regard to automotive, and many in tech believe they can help drag the traditional, and (in their minds) rather archaic, auto industry into the modern era.

While there may be some nugget of truth to that argument, the reality is that the auto industry actually has a lot it can teach the tech business, specifically around safety and reliability. The concept of functional safety—famously standardized around the ISO 26262 standard—in particular, is something the tech industry should really spend some time thinking about.

The specific requirements for functional safety are varied, but the concept essentially boils down to redundancy and back-up systems and capabilities. Given the potential impact on human lives, automobile makers and their critical suppliers have, for decades, had to create systems that can fall back on an alternative in the event of a critical failure within a modern car. Though it can be challenging to implement, it’s an extremely impressive idea that, conceptually at least, has potential applications in many areas outside the automotive industry, including essential utilities like the power grid, as well as increasingly essential tech components and tech devices.
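The fallback idea at the heart of functional safety can be sketched in a few lines. This is a minimal illustration of the pattern, not anything from ISO 26262 itself; the sensor names and failure model are hypothetical:

```python
# Sketch of the redundancy/fallback pattern described above: if the
# primary source fails, a backup takes over, and if everything fails,
# the system degrades to a safe default rather than crashing outright.

def read_speed(primary, backup, default=0.0):
    """Return a speed reading, preferring the primary sensor."""
    for sensor in (primary, backup):
        try:
            return sensor()
        except RuntimeError:
            continue  # sensor fault detected: fall back to the next source
    # Last resort: a safe default (in a car, this might trigger limp-home mode)
    return default

def broken_sensor():
    raise RuntimeError("sensor fault")

# Primary has failed, so the redundant backup answers instead.
print(read_speed(broken_sensor, lambda: 42.0))
```

Real automotive systems add far more (independent hardware paths, watchdogs, diagnostic coverage targets), but the principle is the same: no single failure should take the whole system down.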

Thankfully, many in the tech industry have started to catch on. In fact, one of the most impressive demonstrations at the recent CES show was Nvidia’s focus on functional safety in some of their latest components and systems designed for assisted and autonomous driving. The company’s CEO, Jensen Huang, spent a significant amount of time at their CES press conference, highlighting all the work they’d done to get ASIL-D (Automotive Safety Integrity Level D, which is the highest available) certification on their new Nvidia Drive architecture.

While the topic can be complex, Huang did an excellent job explaining the effort required to get their new Nvidia Xavier platform—which integrates BlackBerry’s QNX-64 software platform in conjunction with their latest silicon—to be ISO 26262-compliant and reach ASIL-D compliance. He enthusiastically talked about the specific challenges of making it happen and proudly claimed it to be the first autonomous driving platform to reach that level of functional safety.

As impressive as that development is, it also made me think about the need to apply functional safety-type standards to the tech industry overall. While using tech devices doesn’t typically involve the kinds of life-and-death situations that driving or riding in a car can, it’s no longer an exaggeration to say that tech devices have a profoundly important impact on our lives. Given that importance, doesn’t it make sense to start thinking about the need for tech products that have the same level of reliability and redundancy as cars?

As recent natural disasters of all types have clearly illustrated, our overall dependence on technology has become pervasive. In addition, the recent Meltdown and Spectre chip flaws have shone a rather harsh light on both how deep and how fragile that dependence is. While strong efforts are being made through an impressive collaboration of tech industry vendors to address these flaws, the fact that a technology (speculative execution) that has been a key part of virtually every major processor produced by every major chip manufacturer over the last two decades is only now being exploited clearly highlights how vulnerable our technology dependence has become.

Though there are no easy answers to these big picture challenges, it’s clear that we need to gain a fresh and very different perspective on technology products, our relationship to them, and our reliance on them. It’s also clear that the tech industry could actually learn from some old-school industries—like automotive—and start to apply some of their hard-won lessons into both component and finished product designs. The concept of functional safety may not be a perfect analogy for the tech business, but there’s no question that it’s time to start thinking differently about how tech products are designed, how we use them, and what we should expect from them.

Podcast: CES 2018

on January 13, 2018
Reading Time: 1 minute

This week’s Tech.pinions podcast features Ben Bajarin and Bob O’Donnell discussing the major themes from this year’s CES show, including the growth in robotics, the influence of AI and automation, the declining presence of Apple, the rise in voice-based interfaces, and more.


News You might have missed: Week of January 12th

on January 12, 2018
Reading Time: 4 minutes

Thanks to CES, this week was certainly not short of news, so I thought that rather than picking announcements I would share some thoughts on trends that might not be top of mind but are impacting the industry. Over the next week or so we will continue to share more on what we saw at #CES18 and what it means for the consumer market.

Some Common Sense Approaches to Solving the Smartphone Addiction Problem

on January 12, 2018
Reading Time: 4 minutes

The tech world was atwitter this week, and a slight pall was cast over CES, with reports that an activist investor is calling on Apple to do something about the ‘smartphone addiction problem’. Let’s admit it: most of us are addicted, in some shape or form, to our pocket computers. No one entity is to blame here. It’s partially our fault and it’s partially industry’s fault. And the responsibility lies just as much with Facebook, Twitter, Snap, all sorts of dumb games, and pretty much any app that abuses ‘notifications’. Solving this issue doesn’t require lawsuits, new regulations, apps that tell us how much time we’re spending on these devices, or any particularly fancy technology. It does require some common sense approaches, both on the user side and the industry side. So here’s my prescription.

Step 1 is to admit we have a problem. I’d imagine most of us would admit we both spend too much time on our screens, and engage in some degree of unseemly behavior. We’re distracted. We look at our phones in the middle of conversations, during meals, at movies, in bed, and at our kids’ school performances. Many work tasks take longer to complete because of constant distraction.

Industry must take responsibility too. Yes, Apple, Samsung, and others have made the hardware and capabilities in the OS that foment this addictive behavior. I’m less concerned about ‘substitutive’ use (for example, reading a book on a Kindle or watching a TV show or YouTube video on a tablet or a phone) than I am with the types of apps or capabilities that cause near constant checking of devices and interruptions. The culprits for most of this: messaging (texting, Snap, etc.), and, even worse, notifications. Notifications have gotten out of control — and are a big cause of the constant pinging and checking.
This problem is solvable. It will require concrete action and behavior modification on behalf of users, and some recognition and steps by industry, too. Here are my suggested steps, for both users and industry.

Users (Us)

Here are some suggestions on what users might do to reduce screen time and modify what, for some of us, is addictive behavior.

  1. Put yourself on a diet. It’s the New Year. Like other resolutions or those extra trips to the gym that happen in January, resolve to spend less time looking at the phone screen. This might mean pro-active steps, such as going for an hour or two or completing a work task, or even a leisure activity such as playing a game or watching a TV show…without checking your phone.
  2. Reduce the opportunity. If it’s always with you, always on, and constantly pinging and beeping…you’re gonna check it. This is the gerbil-like gene in all of us. So, reduce the opportunity by banning the phone at mealtimes, at bedtime, and during other important moments, such as family conversations, helping the kids with homework, and so on. And, remember there’s something on the phone called Settings. You can put on Do Not Disturb, Airplane Mode, Silent, etc. You can turn off notifications or manage them more effectively. Workplaces can play a role here too. Companies could set rules for phone use during meetings, and other ‘codes of conduct.’
  3. Set some examples. The focus of news this week was on teenage phone addiction – but behavior in many adults is just as bad. We need to start setting better examples. When we check our phones at meals, while in the middle of a conversation with a friend, or during a lull in the monthly poker game, we’re setting a bad example. Our teenagers will think this is OK, and so will their younger siblings.
  4. Make Some Rules. The low-hanging fruit here is banning phone use (or even presence) during meals, and other important ‘family time’ – conversation, games, homework help, etc. A little tougher is setting rules for your teenager while they’re in their room, behind closed doors, doing Lord knows what. But it can start with ‘phone out of the room while you’re doing homework, practicing your instrument’, or after 11 p.m., etc. Schools have actually been reasonably effective at setting rules. We should be able to do the same at home.

Industry

I’m sure there will be a raft of apps that monitor screen time. But this sort of defies what should be common-sense stuff, like the calorie count in the donut shop: you already know it’s bad for you, and you know when it’s too much. Still, there are some steps the industry can take to help.

  1. Improved ‘do not disturb’ type settings and capabilities. One recommendation here is what I call the ‘homework’ or ‘work’ button. When activated, texts, alerts, notifications, and so on do not get through. And, importantly, those trying to text you know that it’s on, so there’s less ‘attempting’, and you don’t fall victim to the ‘you didn’t respond to my text’ note. This would have to be widely adopted and encouraged, so it’s used and respected. If your teen is the only kid using this feature, you know how that’s going to go.
  2. Smarter Notifications. Notifications and alerts are a major contributor to the overload. I’ll bet you could turn off notifications for half your apps and you wouldn’t be any worse off. Also, app makers need to show some commitment to reducing notifications. And key players such as Facebook and Twitter need to dial it down and provide easier tools for users to reduce or eliminate notifications. This is also an area where AI could play a role.
  3. Better Training/Education. I find that the training and user education related to phone use are woefully inadequate. Here’s where industry could be more proactive, prodded by some of what we’ve seen this week. Apple, Samsung, the OS crowd, and the messaging/social media crowd should make it easier to change settings, turn off notifications, etc. There should be better and more accessible tools and training videos to help users manage this. Maybe users should be required to watch these when purchasing a new device. A $1,000 device that’s always with you and always on comes with some responsibility, too.
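
The ‘homework button’ described in the first suggestion could behave like a simple hold-and-release filter. Here is a minimal sketch in Python of the proposed behavior; the `Phone` class and its method names are hypothetical, not any real messaging API:

```python
# Toy sketch of the proposed "homework mode": incoming notifications are
# held while the mode is on, senders get an automatic status reply, and
# everything held is delivered once the mode is switched off.

class Phone:
    def __init__(self):
        self.homework_mode = False
        self.inbox = []   # delivered notifications
        self.held = []    # deferred while homework mode is on

    def receive(self, sender, message):
        if self.homework_mode:
            self.held.append((sender, message))
            return f"Auto-reply to {sender}: homework mode is on"
        self.inbox.append((sender, message))
        return "delivered"

    def end_homework_mode(self):
        self.homework_mode = False
        self.inbox.extend(self.held)  # deliver everything that was held
        self.held.clear()

phone = Phone()
phone.homework_mode = True
print(phone.receive("Alex", "you up?"))  # auto-reply; nothing delivered yet
phone.end_homework_mode()
print(len(phone.inbox))  # 1 -- the held message arrives afterward
```

The key point of the suggestion is the auto-reply: the sender learns the mode is on, so there is less repeated ‘attempting’ and no hard feelings about an unanswered text.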

I’m hoping that industry and users can be mature enough to admit that although these smartphones and their apps are truly exciting, there are some disturbing trends. I’d rather we be proactive now than have regulators or others needlessly step in and impose the tech equivalent of Mayor Bloomberg’s soda ban.

Apple’s Indirect Presence Fades from CES

on January 11, 2018
Reading Time: 4 minutes

I want to make an observation, which I feel is an important one. However, I don’t want it to be taken the wrong way. For the record, Apple is not doomed. What should be noted is how fast the Amazon ecosystem is growing. The critical point here is that consumer electronics vendors need platform partners. The question at hand is whether that platform partner can, or will, be Apple. As of now, the answer is no.

For many years, articles were published discussing how, even though Apple was not present or participating in CES, the company was still one of the show’s biggest winners. This was during what we may now call “peak Apple ecosystem.” I distinguish this phrase from peak Apple, which I don’t believe we have reached. However, one could ask whether Apple’s ecosystem has reached its peak.

We would go to CES and remark on how Apple’s dominance loomed over the show. Vendors of all shapes and sizes were rushing to be a part of the Apple ecosystem. Apple’s ecosystem was front and center in everything from iOS apps, to accessories galore for iPhone and iPad, and even companies looking to copy Apple in many ways. In the last year or so, things have dramatically changed, and that change is further evident at this year’s CES.

Gone are the days of Apple’s presence, or of Apple observably “winning” CES without attending. It used to be impossible to walk the show floor and not see a vast array of interesting innovations that touched the Apple ecosystem in some way. Now it is almost impossible to walk the floor and find any products that touch the Apple ecosystem at all, beyond an app on the iOS App Store. The Apple ecosystem is no longer the star of CES; instead, Amazon’s Alexa voice platform, and now Google’s Assistant platform, are the clear ecosystem winners of the show.

Many Apple defenders want to dismiss the momentum we are observing with the Amazon ecosystem on display here at CES, given that Amazon, like Apple, is not officially present. I believe it is a mistake to do so.

It is easy to say that because Apple was never present at CES that the show didn’t mean something to them or their ecosystem. It is easy, and correct to say that CES was not, or never was, a measure of the health of Apple’s products. It is, however, incorrect and dangerous to miss that CES had been, for some time, a barometer for the health of Apple’s ecosystem.

As I mentioned, our ability to measure any platform’s ecosystem from what we observe at CES is the main reason so many are paying attention to what is happening with Amazon’s Alexa platform. Google Assistant is certainly more present than it was last year; however, when you look at how third parties are talking about, and marketing, their support of these assistants, they are putting significantly more effort into Alexa than into Google Assistant, which is a telling signal. To reiterate the point: third parties used to market, and spend energy talking about, their integration with iOS or support of iPhone/iPad with the same rigor they now apply to Amazon’s Alexa. This cannot be ignored.

As I outlined in the two scenarios for Amazon’s Alexa, one could take the position that this is short-lived, and that the dust will settle once Apple enters the market with HomePod, at which point more partners and third parties will start talking about HomeKit than anything else. For Apple’s sake, I would love for this to happen, but I don’t see it unless Apple makes some changes that let Siri be integrated outside of Apple’s first-party hardware.

With all of that said, I am noticing a bit more HomeKit support this year than last. And with Apple’s recent pivot on HomeKit requirements, which previously demanded a dedicated security chip from Apple but now allow that security and authentication to be done in software, I expect even more HomeKit support next year.

But this point goes beyond HomeKit support. It speaks to the larger integration story within a platform’s ecosystem, which we know creates momentum and the market perception that one platform is the dominant leader. And as we are so often reminded, perception is often reality.

Of course, there may be a bigger picture point. During the era where the Apple ecosystem was on display at CES, the consumer/personal electronics category was still just coming into its own. This category is now reaching full maturity and has grown significantly since those days. It is possible the industry has simply grown so much that where it used to sit in Apple’s shadow, it has now fully come into its own and grown up.

Whichever theory you land on, the bottom line is that the CE industry looks for platform partners and needs to fit into a mature or maturing ecosystem. We can’t ignore the fact that Apple’s ecosystem, which used to be on display at CES, no longer is, and that competitors’ ecosystems now dominate the show. How this plays out in the market, we aren’t sure, but we need to keep a close eye on these new dynamics.

There will be more to say on this over 2018, but for subscribers today I dove deeper on this observation and how Apple can start to think about their Siri Microphone Strategy. Subscribers to the Think.tank can click here to read my further thoughts.

Health Related Game Changing Tech at CES

on January 10, 2018
Reading Time: 3 minutes

CES has become a zoo. 180,000 people jostling their way around 2.5 million square feet of exhibit space has become unmanageable.
Even harder is trying to find the gems, the game-changing technology that will have an impact on the way people work, play, or learn.

So instead of taking a scattershot approach to finding game-changing products this year, I focused on one key area of interest: game-changing health technology that I suspected I would find at the show if I looked hard enough.

Not Everything that has a Voice is an Assistant

on January 10, 2018
Reading Time: 5 minutes

If what we have seen in the first couple of days of CES is a taste of what is in store for us in 2018, we can expect to be talking a lot. Most of the time, probably more like shouting commands right, left and center, to devices scattered around our house, in the office and the car.

What was an “Alexa takes all” show last year turned into a stage for vendors to show off either their support for Alexa and Google Assistant or their own assistant announcements. But as more vendors jump on the bandwagon, it is essential to distinguish between a voice interface and an assistant. While they might seem to be the same, they are not, and making clear which is which will help secure the success of both.

An Assistant is an Investment

For an assistant to be helpful, you will need to invest time in it. First, at least for now, the user will need to learn how to talk to it. As intuitive as voice is as a user interface, current voice assistants just do not communicate like a fellow human being. Lack of context limits most exchanges to a set of simple questions and answers. Like a real assistant, a digital one needs to get to know you: your preferences, information such as calendar appointments, the apps you use regularly, and the devices you might want it to control for you. It’s a true learning curve that will require time even for the smartest assistant.

Of course, simple tasks that are really commands (turn this on, play that, remind me of this) or search queries require little knowledge of us on the assistant’s part. However, when you want the assistant to be proactive and start doing things for you without being asked, like a true assistant would, that is when knowledge is power. To some extent, the smarter the assistant, the less I should actually need to talk to it.

The Ubiquitous Nature of an Assistant

In the home, my assistant needs to be pretty ubiquitous to be useful. This is why speakers have been such a focus for Amazon, Google, Microsoft, and Apple. Being able to control my devices and ask questions from wherever I am in the home is key to building engagement and, ultimately, dependence.

At CES and even leading up to it, we have heard of other devices such as TVs, refrigerators, light switches and even showers all coming soon with an assistant inside. Soon our homes will be seeded with many devices able to assist us and that could potentially have a voice.

As we are at the very beginning of this journey, I can understand why vendors are trying to cover all corners: it is not yet clear which device will be the Trojan horse into someone’s home. Someone might not buy a smart speaker, for instance, but be happy to have an assistant integrated into the new TV they are purchasing. I would argue, however, that there is a limit to how many devices should have an embedded assistant versus simply being controllable through an assistant. The difference, in my mind, is proportional to the value an embedded assistant will deliver, which must go beyond executing commands. I might want to control my dishwasher with my voice, but I do not need to engage in conversation with it; the same can be said of my washing machine.

My fridge, however, could give my assistant access to a lot of information: how fresh my food is, the best temperature to keep it fresh, recipes that would use up what I have left in it. To convey all that, the fridge needs internet access, a camera, a voice, and of course some smarts. In this case, embedding an assistant, rather than connecting the fridge to an external one, seems a much more effective implementation.

If adoption of voice-enabled devices goes the way vendors are hoping, we will also need a way to manage all of them. This could go two ways: either I call each assistant by a device-specific name, or only the assistant in the relevant device responds based on context. So I would either call my fridge assistant “chef,” or, when I ask “what can I cook tonight?”, only the assistant in the fridge would answer me. Right now, neither scenario is an option. If I own a TV, a speaker, a phone, and a fridge, all assistant-enabled, the likely outcome of asking “what can I cook tonight?” is that my TV shows me a cooking program, my speaker says “sorry, I am not sure how to help with that yet,” my phone says “here is something I found on the internet,” and only my fridge actually gives me a recipe. Not very helpful!

Voice UI has Value in Itself

More devices will benefit from a voice UI than from an assistant. The value a voice-first UI delivers to users could be huge even without a full-fledged assistant in the device. This is why I strongly feel that vendors should steer clear of the term assistant. Roku recently announced its Roku Entertainment Assistant, and the press immediately asked whether it will be better than Alexa or Google Assistant. The comparison is unfair because it is not an assistant; it is a voice-first UI that lets users ask to play content with their voice. If you have a Comcast remote, you can do that today: I can press a button and say “Play Scandal,” and the TV will show me all the ways I can watch it. This is not an assistant; it is a voice UI that saves me a bunch of steps, for which I am very grateful. Should these voice-first UIs even have a name? I would say no. As a user, all I need to know is that I can use my voice.

If you think I am overcomplicating this point, look at how hard it was for Samsung to pitch Bixby. Bixby started out as a voice-first interface, but it was called an assistant, and because of that the reviews were fairly negative. This was mostly because, as an assistant, Bixby did not have access to a deep pool of data, and as soon as users started to use it the way they would use Alexa or Google Assistant, its value was limited.

Differentiating between a voice-first UI and an assistant also brings a series of benefits for what needs to be integrated in the device, which could be helpful from a cost-structure perspective.

An assistant should be much more than a user interface, and I think this is where the market is struggling at the moment, because assistants are not actually that smart yet. I truly believe that the smarter my assistant becomes, the less I will talk to it, because the power of AI will let my assistant do its job: making my life easier by anticipating my needs.

Two Scenarios for Amazon’s Alexa

on January 9, 2018
Reading Time: 3 minutes

Amazon’s Alexa, or more specifically the Amazon ecosystem, is again the star of CES. This felt somewhat predictable, given the coming-out party Alexa had at last year’s CES, but Google is trying to spoil Amazon’s party by creating the illusion that Google Assistant is everywhere, advertising it with a “Hey Google” branding campaign on nearly every available billboard and sign around Vegas. But that is all it is: an illusion.

Will AI Power Too Many Smart Home Devices?

on January 9, 2018
Reading Time: 3 minutes

To the surprise of virtually no one, the overriding theme at this year’s CES appears to be artificial intelligence. At press conference after press conference on the media days before the show’s official start, vendor after vendor extolled the virtues of AI, though each of them offered a little bit of their own twist. At Samsung’s preview event on Sunday night, the company talked about using AI to do video upscaling on some of their newest TVs. Later that night, Nvidia CEO Jen-Hsun Huang spent a good amount of time describing the efforts the chip company had spent on developing accelerator chips optimized for both training and inferencing for deep neural networks being used in AI applications.

Monday morning, LG announced their own AI brand—ThinQ—which will be used to delineate all the new products they have which utilize the technology. Monday afternoon, Qualcomm talked about bringing AI to a variety of new applications and platforms, from hearables and other audio-focused products, to automotive applications and beyond. At the Sony press conference, the upgraded Aibo dog—a name now recognized to be a combination of AI and robot—charmed the crowd with its capabilities. Finally, on Monday evening, Intel CEO Brian Krzanich described a world where AI can be used for everything from space exploration, through content creation, and onto autonomous cars.

In addition to AI, we saw a large number of announcements related to smart home, connected devices, and personal IoT. In most cases, the two concepts were tied together, with the connected home devices being made “smart” by AI technologies, as Samsung displayed at their primary press conference event on Monday.

All told it was an impressive display of both how far AI has come, and how many different ways that the technology could be applied. At the same time, it raised a few potentially disturbing questions.

Most notably, it seems clear that we’re all inevitably going to end up having quite a few AI-enabled devices within our homes. While that’s great on one hand, there’s no clear way to share that intelligence and capability across devices, particularly if they’re made by different companies. The challenge is that just as few ever buy complete home AV stacks from a single vendor for their home theater systems, and few people only buy compute devices from a single vendor running related operating systems, so too is it highly unlikely that we’re going to buy all our AI-enabled smart devices from a single vendor. In other words, we’re likely going to end up having a variety of different products from different vendors, with a high probability that they won’t all seamlessly connect and share information with one another.

In the case of basic connectivity, a number of those issues will likely be overcome, thanks to advancements in connectivity standards, as well as the abundance of gateway products that can bridge across different standards and protocols. What can’t easily be solved, however, is the sharing of AI-enabled personalization across all these smart devices. The result is that several different types of devices will be collecting data about how we interact with them, what our habits and preferences are, etc. Not only does that mean a lot of the efforts will be redundant, but concerns about being personally tracked or monitored feel (and are) a lot worse when multiple companies end up doing it simultaneously within our own homes.

Down the road, there may be an opportunity to create standards for sharing personalization information and other types of AI-generated data from our smart connected devices to avoid some of these issues. In the meantime, however, there are some very legitimate Orwellian-type concerns that need to be considered as companies blindly (and redundantly) follow their own approaches for collecting the kind of information they need to make their products more personal and more effective.

Design Decisions and Smartphone Batteries

on January 8, 2018
Reading Time: 2 minutes

When I wrote “The Unintended Consequences of a Single Design Decision” on Tech.pinions almost a year ago, I pointed out:

Shorter battery life – Making phones and notebooks as thin as possible and then making them even thinner in each subsequent generation resulted in less volume for batteries. But because the one dimension that reduces a battery’s capacity most is its thickness, the battery life of iPhones and MacBooks has suffered. Battery life of iPhones and the latest line of MacBook Pros is well below expectations and is one of the major user complaints. So much so that the battery indicator no longer displays time left. And, since a battery’s lifespan is based on the number of charging cycles, smaller batteries need more recharging cycles, resulting in a shorter life.

Well, now we have a new consequence: the need for Apple to throttle down the processor to prevent inadvertent shutdowns as the battery deteriorates. Clearly, this was a design compromise Apple engineers chose, rather than designing their phones to accommodate deteriorating batteries. Product engineers are very familiar with the behavior of lithium-ion batteries. While Apple says its batteries deteriorate to 80% of capacity after 500 cycles, Samsung’s battery division, one of the world’s largest battery manufacturers, warranties its batteries to retain no less than 70% of capacity after 300 cycles. It’s not clear whether Samsung is being cautious or Apple is being optimistic, but engineers know that a battery will reach close to 50% of its original capacity within 3 years of frequent use. And using smaller batteries than most Android phones means Apple’s customers recharge their phones more frequently, reaching the 300 or 500 cycles more quickly than the competition. All because thinness was paramount to Apple.
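
The cycle counts above translate into calendar time in a straightforward way. Here is a rough back-of-the-envelope sketch, assuming one full charge cycle per day (a hypothetical usage rate, and real degradation is not linear, so this is illustrative only):

```python
# Back-of-the-envelope battery-wear arithmetic using the cycle figures
# discussed above. Assumes roughly one full charge cycle per day.

CYCLES_PER_DAY = 1.0

def months_to_cycles(cycles, cycles_per_day=CYCLES_PER_DAY):
    # How many months of charging it takes to accumulate `cycles`.
    return cycles / cycles_per_day / 30.4  # average days per month

# Apple's figure: ~80% capacity remaining after 500 cycles.
print(round(months_to_cycles(500)))   # 16 months

# Samsung's warranty floor: >=70% capacity after 300 cycles.
print(round(months_to_cycles(300)))   # 10 months

# A smaller battery needing 1.5 cycles/day hits 500 cycles sooner:
print(round(months_to_cycles(500, cycles_per_day=1.5)))  # 11 months
```

Under these assumptions, a phone that needs charging more often crosses the same cycle thresholds months earlier, which is the core of the argument about smaller batteries.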

Essentially, Apple chose to shorten the product’s useful life. While that life could be extended by replacing the battery, this was never a major consideration, because the cost and inconvenience are too high for many customers. And Apple never communicated that changing the battery was a realistic option.

While we can debate whether Apple should have been more communicative, their message was one most of their users don’t want to hear: products are now designed to last much less time than they used to. Design engineers in years past typically considered 5 years the useful life of a consumer electronics product. All decisions were based around this number, including how many times the buttons would work, the device would charge, or the mechanical parts would function.

When engineers and marketing managers set out to define a new product, one of the first things they do is to make assumptions about how long the product is to last. From this latest incident, it seems apparent to me that Apple knowingly discarded the 5-year life rule and decided that 2 years was more appropriate.

While companies can do what they choose, this decision may have helped Apple’s bottom line in the short term, but few customers want to knowingly spend close to $1,000 on a product that will need to be replaced in 2 years. And while iPhones have held their value well enough that trade-ins made good sense, the value of used iPhones may have just suffered with this latest news.

So the news behind the news is that an iPhone’s useful life is shorter than we all expected, and shorter than what had been the standard for the industry. You can see how, given the iPhone’s popularity and customer loyalty, shortening the life directly translates into more sales. While it may make sense on a financial basis, it doesn’t seem like the right thing to do.

What Magic Leap One Tells Us About the Near-Future of AR Hardware

on January 5, 2018
Reading Time: 4 minutes

Mega-funded startup Magic Leap recently unveiled its planned hardware developer kit, dubbed the Magic Leap One. The googly-eyed headset drew some unkind remarks for its looks, and some of the company’s comments surrounding it were vague at best and frustrating at worst. But the One does represent a well-financed company’s vision of where augmented reality hardware is heading in 2018, so a brief dissection of its design is illustrative of some of AR’s design challenges.

Three-Piece Kit
The Magic Leap One consists of the head-mounted goggles (Lightwear), a tethered computer (Lightpack), and a controller (Control). The look of the Lightwear goggles might not win any fashion awards, but it’s where we’ve long expected Magic Leap to bring its AR special sauce to the market. Specifically, the company says its lightfield photonics “generate light at different depths and blend seamlessly with natural light to produce lifelike digital objects that coexist in the real world.” To do all of this, Lightwear must not only believably create those images, but it must anchor those objects in the real world. A digital representation of a flower vase is only believable if it stays locked to the real-world table upon which you place it. To do this, the headset has numerous cameras and sensors that point outward, capturing what is happening around the wearer. In addition to capturing the environment, these sensors also play a role in capturing what the user is doing, with their head, with their hands, and with their voice. Capturing this information is hard; processing it in real time is a very heavy computing lift.

Which is why I’m very happy to see that, at least for now, Magic Leap moved at least some of that processing off the headset and into the cable-tethered Lightpack. The company says the puck-sized device has processing and graphics power that’s comparable to a notebook computer. As I’ve been studying the AR market the last several years, I’ve become increasingly convinced that the best AR experiences for the foreseeable future will require head-mounted displays that utilize computing power located off the headset. That processing may come from a purpose-built unit, as is the case here, or from a more general-purpose computing device, such as a smartphone. There are numerous reasons why this off-the-head computing is necessary, but the key ones include removing the battery weight from the headset, relocating the heat-producing CPUs and GPUs away from the user’s face, and repositioning the various necessary radios such as LTE and WiFi away from the head. It will be some time before the industry can address these technical issues in a form factor that’s suitable for wearing on your face. In the meantime, it is best to move them elsewhere. (Incidentally, I think this is how Apple’s predicted AR glasses would work, utilizing the processing power of the iPhone in your pocket.)

Finally, there’s the Control navigation device. The fact that Magic Leap plans to ship its first developer kit with a handheld controller is an acknowledgment that, at least for now, hand tracking isn’t sufficient for the experience the company is trying to create with its platform. Magic Leap says Control includes six degrees of freedom (comparable to the best tethered VR setups today), as well as haptic feedback. Adding the controller to the mix increases the amount of user-interface data the setup can capture, so while holding a controller may seem initially counterintuitive to an immersive experience, it may well bring substantial benefits to the table. The one rub is in commercial use cases where the employee needs both hands to work, but that’s likely not the use case Magic Leap is targeting out of the gate with this product.

By utilizing three different devices, spread around the body, Magic Leap can disperse design challenges such as weight and heat while maximizing the ability to include all the necessary sensors, processors, and batteries. One hopes that the result is a singular cohesive experience.

Artisanal Spatial Computing
Magic Leap unveiled the One on its website and through an in-depth article by Brian Crecente on Rolling Stone’s gaming site Glixel that’s well worth the read. The company hasn’t announced a ship date or price but claims it will ship sometime in 2018. Pushed on price, Magic Leap’s founder Rony Abovitz said, “I would say we are more of a premium computing system. We are more of a premium artisanal computer.” I’m not sure what that means, but I’m guessing it’s not cheap.

In a follow-up piece, Abovitz had this to say about what to call Magic Leap’s technology: “Personally I don’t like the terms AR, VR or MR for us, so we’re calling our computers a spatial computer and our visualization a digital light field because that’s probably the most accurate description of what we do. The other terms are totally corrupted.” While I can appreciate Abovitz’s comments here, I find this line of reasoning problematic for the same reason that I continue to find Microsoft’s use of Mixed Reality instead of Augmented Reality frustrating. As an industry, at some point, we must agree on what to call things. Otherwise, it is very hard to measure a market, drive growth, and facilitate standards.

Pricing, ship dates, and naming discussions aside, what Magic Leap introduced with the One certainly looks promising. And, as noted, this is a developer kit, designed to get programmers working on content for the company’s forthcoming platform. I eagerly await the opportunity to try out the hardware and look forward to seeing how this reveal impacts the decisions of other AR-focused companies.

News You might have missed: Week of January 5th

on January 5, 2018
Reading Time: 3 minutes

These hardware bugs allow programs to steal data which is currently processed on the computer. While programs are typically not permitted to read data from other programs, a malicious program can exploit Meltdown and Spectre to get hold of secrets stored in the memory of other running programs. This might include your passwords stored in a password manager or browser, your personal photos, emails, instant messages, and even business-critical documents. Initially thought to affect only Intel chipsets, these bugs in different forms are in fact impacting AMD and ARM solutions as well. This means that most PCs and phones are affected. Cloud services running Intel-powered servers are also affected.
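
At a very high level, the leak works because a secret-dependent memory access leaves a footprint in the CPU cache that an attacker can later detect by timing accesses. The following toy Python simulation illustrates that side-channel idea, with the cache modeled as a set and "timing" as a membership test; it is an illustration of the concept, not a real exploit:

```python
# Toy simulation of a cache-timing side channel, the core idea behind
# Meltdown/Spectre-style attacks. The "cache" is a Python set and a
# "fast access" is simply membership in it. NOT real exploit code.

SECRET = 42  # a byte the victim processes but never returns directly

cache = set()

def victim_access():
    # The victim touches one cache line whose index depends on the secret.
    cache.add(SECRET)

def probe(i):
    # The attacker "times" an access: cached lines respond fast.
    return i in cache  # True == fast == that line was touched

victim_access()
recovered = next(i for i in range(256) if probe(i))
print(recovered)  # 42 -- the secret, inferred without ever reading it
```

In the real attacks, the victim's access happens speculatively and is rolled back architecturally, but the cache footprint survives, which is why the fixes require microcode and OS-level changes rather than a simple software patch.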

Intel Gemini Lake hopes to hold off Windows on Snapdragon push

on January 4, 2018
Reading Time: 3 minutes

After last week’s pronouncement from Intel CEO Brian Krzanich that “the world will run on Intel silicon,” the company has a lot of ground to cover to make that happen. One recent area of attack from the outside comes from Qualcomm and its push into the world of Windows PCs through the “Windows on Snapdragon” initiative. Using chips initially designed to target smartphones and tablets, Qualcomm is leveraging lower power consumption, a true connected-standby capability, and connectivity improvements with an LTE modem to address one of the many markets that Intel has previously dominated.

Even with little experience in the world of Windows-based PCs and with silicon designs that are well understood to be built for smartphones first (at least in the initial implementations), Qualcomm’s Snapdragon is able to make a run at the lower tiers of notebook and convertible PC markets in large part due to Intel’s ambivalence. Intel has put seemingly little emphasis on the low power processor space, instead putting weight behind the “Core” family of products that provide the compute capability for higher-end notebooks, desktops, and enterprise servers.

Intel has tried various tactics in the low-power space. It has tried to revive and modify the Pentium architecture, and it has also attempted to bring the “big” cores used in its higher-performance processors down to a lower power rung. But doing so is difficult and puts great strain on the design engineers and production facilities, which must deliver transistors that perform optimally at both the high and low ends of the performance spectrum. The result has been a family of products, over several generations, that has shown little improvement in performance or efficiency, and has drawn little interest from Intel itself.

It is because of this lack of iterative performance improvement that Qualcomm can offer Snapdragon as a competitive alternative. Years of ignoring the space left an opening that competitors could develop into.

The mid-December announcement of the Pentium Silver and Celeron family of parts was met with very little fanfare, from either the press or Intel PR. Only after the release of architectural information, in the form of software development documents, do we find hope that Goldmont Plus, the architecture behind the Gemini Lake cores in the new processors, may offer enough of a performance improvement to make an impact. As the follow-on to the Apollo Lake parts, Gemini Lake was expected to be just another refresh, but early performance metrics indicate that we may see as much as a 25% increase in IPC (instructions per clock) along with slightly increased clock rates. For multi-threaded workloads, on a chip that can integrate as many as four cores, benchmarks show a 45% increase over the previous generation.
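
Those two levers combine multiplicatively, since single-thread performance is roughly IPC times clock rate. A quick sketch using the ~25% IPC figure above and a hypothetical 4% clock bump (the baseline clock here is also hypothetical; the relative result is what matters):

```python
# Rough arithmetic behind the claimed generational gain: single-thread
# performance scales with IPC x clock rate. The clock figures below are
# hypothetical placeholders, used only to show how the gains compound.

old_ipc, old_clock_ghz = 1.00, 2.5    # normalized Apollo Lake baseline
new_ipc = old_ipc * 1.25              # ~25% IPC improvement
new_clock_ghz = old_clock_ghz * 1.04  # "slightly increased clock rates"

speedup = (new_ipc * new_clock_ghz) / (old_ipc * old_clock_ghz)
print(f"{(speedup - 1) * 100:.0f}%")  # 30% single-thread gain under these assumptions
```

That kind of compounding, plus up to four cores on a die, is consistent with the reported 45% multi-threaded gain without requiring any exotic changes.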

Processors based on this design will sport TDPs starting at 6 watts, which is higher than where we expect the Snapdragon 835 in the first generation of Windows devices to operate. Intel does claim that the SDP, or Scenario Design Power, of a part like the Pentium Silver N5000 will be around 4.8 watts, indicating the power level at which Intel expects normal processing to occur. Close may not cut it though, as the importance of power consumption in standby states is going to be critical to the success of platforms in this class.

Maybe most interesting is the addition of CNVi, an integrated connectivity portion of the architecture. This is Intel’s attempt to simplify the integration and complexity of an RF chip, and it should allow OEMs and partners to more easily integrate high-speed WiFi and LTE/cellular connections. If this works out, it shows Intel clearly sees the connectivity advantage Qualcomm holds with its integrated X16 Gigabit LTE modem as a danger to its market leadership.

We will hopefully see the first wave of Gemini Lake-powered notebooks and convertibles at CES next month, though availability looks closer to the February or March timeframe. Even with announcements from key OEMs, usability testing will be needed to see how much performance the Goldmont Plus architecture truly delivers and whether it can offer similar always-on, always-connected capability and battery life. No one expected Intel to sit back and let Qualcomm or other ARM processor vendors simply take over a sizeable portion of any market, but we will wait and see whether Intel’s first attempt at a product response holds water.

Personalized Machine Learning

on January 4, 2018
Reading Time: 4 minutes

In several recent notes, I have mentioned in passing the term personalized machine learning. It is impossible to talk about artificial intelligence in a broad sense without talking about, and understanding, machine learning.