Making Sense of the GeForce RTX Launch

on September 20, 2018
Reading Time: 3 minutes

This week marks the release of the new series of GeForce RTX graphics cards that bring the NVIDIA Turing architecture to gamers around the globe. I spent some time a few weeks back going over the technological innovations that the Turing GPU offered and how it might change the direction of gaming, and that is worth summarizing again.

At its heart, Turing and GeForce RTX include upgrades to the core functional units of the GPU. Based on a very similar structure to previous generations, Turing will improve performance in traditional and current gaming titles with core tweaks, memory adjustments, and more. Expect something on the order of 1.5x or so. We’ll have more details on that later in September.

The biggest news is the inclusion of dedicated processing units for ray tracing and artificial intelligence. Much like the Volta GPUs being utilized in the data center for deep learning applications, Turing includes Tensor Cores that accelerate the matrix math necessary for deep learning models. New RT Cores, a first for NVIDIA in any market, accelerate the traversal of ray structures, allowing real-time ray tracing an order of magnitude faster than current cards.

Reviews of the new GeForce RTX 2080 and GeForce RTX 2080 Ti hit yesterday, and the excitement about them is a bit more tepid than we might have expected after a two-year hiatus from flagship gaming card launches. I'd encourage you to check out the write-ups from PC Perspective, Gamers Nexus, and Digital Foundry.

The RTX 2080 is indeed in a tough spot, with performance matching that of a GTX 1080 Ti but with a higher price tag. NVIDIA leaned heavily on the benefit of Turing over Pascal when it comes to HDR performance in games (using those data points in its own external comparison graphics), but the number of consumers that have or will adopt HDR displays in the next 12 months is low.

The RTX 2080 Ti is clearly the new leader in graphics and gaming performance, but it comes with a rising price tag as well. At $1199 for the NVIDIA-built Founders Edition of the card (third-party vendors will still be selling their own designs), the RTX 2080 Ti now sells for the same amount as the Titan Xp and $400 more than the GTX 1080 Ti launched at. The cost of high-end gaming is going up, that much is clear.

I do believe that the promise of RTX features like ray tracing and DLSS (deep learning super sampling) will be a shift in the gaming market. Developers and creative designers have been asking for ray tracing for decades, and I have little doubt that they are eager to implement it. And AI is taking over anything and everything in the technology field, and gaming will be no different: DLSS is just the first instance of AI integration for games. It is likely we will find uses for AI in games for rendering, animation, non-player character interactions, and more.

But whether or not that “future of gaming” pans out in the next 12-18 months is a question I can’t really answer. NVIDIA is saying, and hoping, that it will, as it gives the GPU giant a huge uplift in performance on RTX-series cards and a competitive advantage over anything in the Radeon line from AMD. But even with a substantial “upcoming support” games list that includes popular titles like Battlefield V, Shadow of the Tomb Raider, and Final Fantasy XV, those of us on the outside looking in can’t be sure and are being asked to bet with our wallets. NVIDIA will need to do more, and push its partners to do more, to prove to us that the RTX 20-series will see a benefit from this new technology sooner rather than later.

When will AMD and Radeon step up to put pressure and add balance back into the market? Early 2019 may be our best bet but the roadmaps from the graphics division there have been sparse since the departure of Raja Koduri. We know AMD is planning to release a 7nm Vega derivative for the AI and enterprise compute markets later this year, but nothing has been solidified for the gaming segment just yet.

In truth, this launch is a result of years of investment in new graphics technologies from NVIDIA. Not just in new features and capabilities but in leadership performance. The GeForce line has been dominating the high end of the gaming market for at least a couple of generations, and the price changes you see here are possible because of that competitive landscape. NVIDIA CAN charge more because its cards are BETTER. How much better and how much that's worth is a debate the community will have for a long time. Much as the consumer market feigns concern over rising ASPs on smartphones like the Apple iPhone and Samsung Galaxy yet still continues to buy in record numbers, NVIDIA is betting that the same is true for flagship-level PC gaming enthusiasts.

Apple’s Ecosystem Advantage

on September 19, 2018
Reading Time: 3 minutes

As I step back and look at Apple's fall launch event, the big story in my mind is the incredible strength of the Apple ecosystem. The word ecosystem gets used quite a bit, is often overused, and is sometimes attributed to things that are not truly ecosystem components. Everything Apple has built, from hardware, software, and services to retail, customer support, and more, has the Apple ecosystem as a central component. In fact, Apple's management has often talked about, and demonstrated, the many ways Apple's products work together seamlessly. It seems entirely logical that a company that makes many different products should have them work together, but often this is not the case. I'd argue that Apple not only has the strongest ecosystem but that their ecosystem compounds (gets better with more devices) better than any of their competition's.

Apple’s Neural Engine = Pocket Machine Learning Platform

on September 19, 2018
Reading Time: 4 minutes

I had a hunch going into today's Apple event that the stars of the show would be Apple's silicon engineering team. The incredible amount of custom silicon engineering that went into making yesterday's products is worthy of a whole post at some point. For now, I want to focus on the component that may have the most significant impact on future software design: the neural engine.

Big Leap Year Over Year
It's helpful to first look at some specific year-over-year changes Apple made with the neural engine. In the A11 Bionic, the neural engine not only took up a much smaller part of the overall SoC, but it was also integrated with some other components. It was a dual-core design capable of 600 billion operations per second.

The neural engine in the A12 Bionic now has its own dedicated block in the SoC, has jumped from two cores to eight, and is capable of 5 trillion operations per second. While these cores are designed with machine learning in mind, they also play an exciting role in helping to manage how the CPU and the GPU are used for machine learning functions. Apple referred to this as the smart compute system. Essentially, a machine learning task has three engines that work together to complete it: the neural engine, the CPU, and the GPU. Each plays a role, and the neural engine manages how the work is divided.

As impressive as the engineering in the A12 Bionic is, where it all comes together is in the software that allows developers to take advantage of all this horsepower. That is why Apple now letting developers tap into this silicon through Core ML, to make apps we have never experienced before, is such a big deal.
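To make that more concrete, here is a minimal sketch (my own illustration, not code from Apple) of what on-device inference looks like for a developer using Core ML with the Vision framework; the model and image here are hypothetical placeholders:

```swift
import CoreGraphics
import CoreML
import Vision

// A minimal sketch of on-device inference with Core ML and the Vision framework.
// The MLModel would come from an .mlmodel file compiled into the app; the image
// and the classifier itself are hypothetical placeholders.
func classify(_ image: CGImage, with model: MLModel) throws {
    let visionModel = try VNCoreMLModel(for: model)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // For a classification model, results come back as ranked labels.
        if let top = (request.results as? [VNClassificationObservation])?.first {
            print("\(top.identifier): \(top.confidence)")
        }
    }

    // Vision and Core ML decide whether the CPU, GPU, or Neural Engine runs the work.
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
}
```

A handful of lines like these gives an app access to the same silicon Apple uses for its own features, and the framework, not the developer, decides how the work is split across the CPU, GPU, and neural engine.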

The Machine Learning Platform
Apple is getting dangerously close to bringing a great deal of science fiction into reality, and their efforts with machine learning are at the center of it. In particular, something geeks in the semiconductor industry like to call computer vision.

At the heart of a great deal of science fiction, and the subject of many analyses I have done, is the question of what happens when we can give computers eyes. This is front and center in the automotive industry, since cars need to be able to see, detect, and react accordingly to all kinds of objects on the road and around them. Google Lens has shown off some interesting examples of this as well: you point your phone at an object, and the software recognizes it and gives you information. This is a new frontier of software development, and up to this point, it has been relegated to highly controlled experiences.

What is exciting is to think about all the new apps developers can now create using the unprecedented power of the A12 Bionic in a smartphone and rich APIs to integrate machine learning into their apps.

A fantastic demonstration of this technology took place on stage; if you have not seen it, I encourage you to watch this bit of Apple's keynote. It was an app called HomeCourt that did real-time video analysis of a basketball player and analyzed everything from how many shots he made or missed, to where on the court he made and missed them as a percentage of his shots, and it could even analyze his form down to the legs and wrist in order to look for patterns. It was an incredible demonstration with real-world value, yet it only scratches the surface of what developers can do in a new era of iPhone software with machine learning at its core.

Machine Learning and AI as the New Software Architecture
When it comes to this paradigm change in software, it is important to understand that machine learning and AI are not just features developers will add but a fundamentally new architecture that will touch every bit of modern-day software. Think of AI/ML being added to software the same way multi-touch became a foundation of the UI for the modern smartphone. AI/ML is a new foundational architecture enabling a new era of modern software.

I can't overstate how important semiconductor innovation is to this effort. We have seen it in cloud computing, as many Fortune 500 companies are now deploying cloud-based machine learning software thanks to innovations from AMD and NVIDIA. However, client-side processing for machine learning has been well behind the capabilities of the cloud until now. Apple has built a true machine learning powerhouse, put it in the pockets of its customer base, and opened it up to the largest and most creative developer community of any platform.

We are just scratching the surface of what is possible now and the next 5-7 years of software innovation may be more exciting than the last decade.

Competing With Apple’s Silicon Lead
If you have followed many of the posts I've written about the challenges facing the broader semiconductor industry, you know that competing with Apple's silicon team is becoming increasingly difficult. Not just because it is becoming harder for traditional semiconductor companies to spend the kind of R&D budget they need to meaningfully advance their designs, but also because most companies don't have the luxury of designing a chip that only needs to satisfy the needs of a single company's products. Apple's semiconductor engineering team has the luxury of developing, tuning, and innovating specialized chips that exist solely to bring new experiences to iPhone customers. This is exceptionally difficult to compete with.

However, one area where companies can try to compete is cloud software. Good cloud computing companies, like Google, can conceivably keep some pace with Apple as they move more of their processing off the device and into the cloud. No company will be able to keep up with Apple in client/device-side computing, but they can compete if they can utilize the monster computing power of the cloud. This, to me, is one of the more interesting battles of the next decade: Apple's client-side computing prowess vs. the cloud computing software prowess of those looking to compete.

Apple Watch Series 4: A Heart Patient’s Perspective

on September 18, 2018
Reading Time: 3 minutes

When an ordinary healthy consumer looks at Apple's new Watch Series 4, with its updated health-related sensors and its new ability to do a real-time electrocardiogram, they most likely see a more modern and better model of this watch, but the heart health features are not relevant to them. One of the comments I heard from some of the younger journalists at Apple's launch event was, "this new watch is for older people."

AI Application Usage Evolving Rapidly

on September 18, 2018
Reading Time: 4 minutes

Given the torrid pace of developments in the world of artificial intelligence, or AI, it’s probably not surprising to hear that applications of the technology in the business world are evolving quickly as well. What may catch people off guard, however, is that much of the early work in real-world use is happening in more mundane, back-office style applications like data security and network security, instead of the flashier options like voice UI, as many might expect.

As part of the AI in the Enterprise study recently fielded by TECHnalysis Research (see a previous column called "Survey: Real World AI Deployments Still Limited" for more background and additional information on the survey), over 500 US-based businesses that were actively involved in either developing, piloting, or running AI applications in full production were asked about the kinds of applications they use in their organizations. Respondents were asked to pick from a list of 15 application types, ranging from image recognition, to spam filtering, to IoT analytics, and more, as well as the maturity level of each of their application efforts, from development to pilot to full production.

As Figure 1 shows below, the top two choices amongst the respondent group were Data Security and Network Security, with roughly 70% of all respondents saying they had some kind of effort in these areas.


Fig. 1

While these are clearly critical tasks for most every organization, it’s interesting to see them at the top of the list, because they’re not the type of applications that are typically seen—or discussed—as being cutting edge AI applications. What the survey data clearly shows, however, is that these core company infrastructure applications are the ones that are first benefitting from AI. Though they may not be as sexy as computer vision and image recognition, ensuring that an organization’s data and its networks are secure from attacks are great ways to leverage the practical intelligence that machine learning and AI can bring to organizations.

As important as the top-level rankings of these applications may be, when you look at the application usage data by maturity level of the implementation, even more dramatic trends appear. Figure 2 lists the top AI applications in full production and, as you can see, virtually all of the highest-ranking applications can be classified more as back-office or infrastructure type programs.


Fig. 2

Spam Filtering applications made it to number two on this list and Device Security rose to number four overall. Again, both of these applications can leverage AI-based learning to provide a strong benefit to organizations that deploy them, but neither of them have the association with human intelligence-type capabilities that so many people expect (and fear) from AI.

When you look at the top applications in pilots, a dramatically different group rises to the top, as you can see in Figure 3. Here’s where we start to see more of the AI applications that I think many people might have thought would have appeared higher on the overall list, such as Business Intelligence, Voice UI/Natural Language Processing, as well as Image Recognition. What the data shows, however, is that many of these more “sci-fi” like applications are simply in much earlier stages of development.


Fig. 3

Following the same kind of trends, the top AI applications still in development, illustrated in Figure 4 below, are focused on an even more distant view of the future, with Robotics at the top of the list followed by Manufacturing Efficiency/Predictive Maintenance and then Call Center/Chatbots. Companies are clearly driving efforts to get these kinds of applications going in their organizations, but the real-world implementations are still a bit further down the road.

Fig. 4

Taking a step back from all the data, it’s interesting to note that there are unique groups of applications at the various maturity levels. Many of those that are high-ranking at one level are much lower-ranked at the next maturity level, suggesting very distinct phases of development across different types of AI applications. It’s particularly interesting to see that the realities of AI usage in the enterprise are fairly different than what much of the AI press coverage has suggested.

Understanding what companies are actually doing in this exciting area can help set more realistic expectations for how (and when) various aspects of AI will start to make their impact in the business world.

(Look for more data from this study and a link to summary results in future columns.)

Bringing Back Manufacturing Jobs

on September 17, 2018
Reading Time: 3 minutes

If this country wants to bring back high-tech manufacturing jobs, it needs to do a lot more than tax iPhones made in China. President Trump's tweet to that effect is far from his worst, but it's about as ignorant as many we've seen. But it's also an opinion that's been expressed by others, often with good intentions to bring manufacturing jobs back to this country. And like a broken clock that's right twice a day, that sentiment is not necessarily wrong. We have lost many high-paying manufacturing jobs, and we should look at what it would take to bring them back. Too many of our citizens are underemployed in service jobs, struggling to make minimum wage. Underemployment is a serious issue.

Having designed and built scores of consumer tech products in this country, beginning in the seventies and running all the way into the nineties, I've seen and participated in the movement of more and more products to Asia, and continue to do so. I was instrumental in shifting the building of products for Polaroid and Apple from this country to Asia, specifically to Japan, Taiwan, and China.

Our politicians seem to show about as much understanding of this issue as they do of other technologies. They simplify the cause and solution to a few tweets. If they really do want to bring back manufacturing jobs, tariffs are not the solution.

What is the answer? Here’s what I’d tell the politicians to do:

Understand why products are being made in Asia. Spend some time learning why China is such an attractive place to design and build them. Read this classic and timeless article by James Fallows from The Atlantic Monthly, China Makes, the World Takes. You’ll learn that U.S. companies build products there because of talent, speed, infrastructure, and cost. While cost is an important consideration, it’s no longer the primary reason.

The fact that China has become the manufacturer to the world didn’t happen without an immense commitment and foresight. Both national and local governments provided incentives and billions of dollars in investments to create the infrastructure that enabled it to happen. They built industrial parks, highways, bullet trains, libraries, high-speed networks, colleges, hospitals, and airports. They cleared the trees, tilled the fields, planted the seeds, and nurtured the growth that allowed thousands of factories to blossom, skills to be developed and millions of jobs to be created.

During the decades that it was occurring, our government stood by and did nothing. We failed, and continue to fail, to develop our infrastructure, encourage new development centers, and invest in new technologies. Just one tiny example: the U.S. ranks 28th in the world in mobile internet speeds, behind Greece. When there is an initiative, it's usually boneheaded, such as bringing back coal mines.

And we continue to do nothing. While being the manufacturing center for the electronics industry may have passed us by, we still can do with green technologies what China has done with computers and cell phones. While our governing party denies climate change and even questions science, China is fast becoming the world's center of solar technology and electric cars. By mandating the move to clean energy to address the environment, its government has incentivized the building of factories and the manufacturing jobs needed to make electric cars, solar panels, windmills, batteries, and products yet to be invented. Right before our eyes, they're repeating what they've done with electronics manufacturing and creating new centers of manufacturing for the world, this time for green technologies. Like decades ago, they're able to see the future and are investing to dominate.

So I'd tell our government: if they're serious about bringing manufacturing jobs back to this country, it's not going to happen with tariffs or coal mines. But it could happen by looking ahead and seeing where the jobs will be created. Stop denying science; embrace it, support it, and invest in the future. That's the most effective way to bring manufacturing jobs back to the United States.

Series 4 Watch Could Grow Apple’s TAM and its ASP

on September 14, 2018
Reading Time: 4 minutes

Apple’s launch of the Series 4 Watch this week will likely have a very positive impact on the company’s total available market (TAM) for wearables as well as its average selling price (ASP). The watch’s new, bigger display and faster custom silicon should help drive a refresh among current Apple Watch owners. More importantly, the addition of new health-focused features—including two FDA-cleared apps—should drive a dramatic increase in interest from patients, doctors, insurers, and many others. And Apple took the opportunity with this strong hardware update to also announce an increase to the starting price of the new watch to $399, while keeping the Series 3 in the market at a new lower price ($279) that’s still a $30 premium over the value-focused Series 1 it was shipping prior to the announcement.

In other words, Apple is now likely to sell more watches to more people–at a higher average selling price–than in the recent past. That’s quite a move, especially when you consider that it wasn’t long ago that many skeptics were still calling the Apple Watch a flop.

Bigger Display, Faster Chip
The Series 4 represents the first time the company has made a big change to the size of the Apple Watch display, increasing its two sizes from 38 and 42 mm to 40 and 44 mm. In addition to the larger size, Apple has narrowed the bezels, resulting in a roughly 30% increase in viewable area, which is very noticeable on the wrist. Apple leverages this new display with reworked watch faces, including many with notably more complications, which drives up the information density on the display dramatically.

To my eye, Apple does this so well the display doesn’t look overcrowded or cluttered. And one of the advantages to having all these complications on the screen is that the wearer has easier access to the underlying apps, which has always been one of the biggest interface challenges for wearables.

In addition to the larger screen, Apple also made the Series 4 ever-so-slightly thinner, added new haptic feedback to the crown, and maintained backward compatibility with existing watchbands. As with previous years, some of the biggest changes happened inside the device. Apple says the new S4 chip is twice as fast as the previous generation chip while maintaining a comparable level of battery life. The first-generation Apple Watch was painfully slow, but each iteration has seen dramatic performance gains, and with the Series 3 I've found performance to be more than adequate for the vast majority of tasks. It will be interesting to see how Apple—and potentially third-party developers—will leverage the improved performance of the Series 4 going forward.

Heart-Rate Apps
The new display and faster internals are great for current Apple Watch owners who like their current device and are looking for a reason to upgrade. But the addition of some very specific health-related features is what may well drive a sizeable increase in Apple's total available market. First, the company improved the gyroscope and the accelerometer so the watch can detect if the wearer has suffered a fall. Next, Apple added new heart-rate technologies that can detect a low heart rate and screen for Atrial Fibrillation (AFib). Also, the company added sensors in the back of the watch and in the crown that let the wearer run an electrocardiogram (ECG).

In addition, the Watch will store the results of these tests in the Health app for sharing with your doctor via PDF. It's very notable that the FDA cleared the ECG and AFib apps (the FDA "clears" Class II devices such as the Apple Watch, while it "approves" higher-risk Class III devices such as pacemakers). This FDA clearance could make the Apple Watch much more attractive to a wider range of potential buyers and could drive meaningful volumes as doctors, insurance companies, and caregivers think about buying these devices for people who would never buy one for themselves. Apple says the ECG app will appear later this year, after the product launch later this month.

Driving iPhone Stickiness
Even before the Series 4 announcements, I’d been thinking about the positive benefits Apple Watch has been driving for the company. Obviously, there are the revenues of the product itself, which Apple claims is now the number one selling watch in the world, surpassing all other wearable vendors AND traditional watch vendors. Perhaps just as important, however, is the stickiness that the Watch drives for iPhone. I know many Apple Watch users that are arguably more attached to their Apple Watch than to their iPhone. The symbiotic relationship between the phone and watch means few of these users will leave the iPhone, even if they wanted to. And then consider the potential new Apple Watch customers who might currently be using an Android phone or even a feature phone. To fully utilize the watch, they too will need to buy an iPhone. That’s a position most smartphone vendors would love to find themselves in.

And all of this is before we consider the real possibility that the Series 4's higher price (alongside the retention of the Series 3 at a higher entry-level price than before) will drive a higher average selling price for Apple Watch in the all-important fourth quarter of the calendar year. So in the space of just over three years, Apple launched the product, later lowered prices to drive adoption, and now feels its market position and new hardware are strong enough to warrant a higher starting price. It's unlikely Apple will change its policy of not reporting Apple Watch shipments and ASPs in its earnings calls. But if this new hardware does what I expect it to do, Tim Cook and his team may decide they do, in fact, want to share a few more specifics around Apple Watch's market performance over the holidays.

News That Caught My Eye: Week of Sept 13, 2018

on September 14, 2018
Reading Time: 3 minutes

Google Leaked Video on Elections

This week a Google employee leaked an internal video in which Sergey Brin and Sundar Pichai comment on the outcome of the 2016 US elections, calling the results offensive and inviting employees to express their sadness and concerns.

An iPhone for Everyone

on September 13, 2018
Reading Time: 4 minutes

Two product families were on stage at the Steve Jobs Theater. A lot had been revealed in spoilers over the weeks preceding the event, but while we might have known names and sizes, we knew very little about what really makes up the essence of these new devices. By the time Phil Schiller finished introducing the new iPhone Xr, it was clear that the silicon inside these devices is what makes the iPhone X family unique and what Apple is betting on to continue to differentiate going forward. Of course, hardware improvements will continue, but the A12 Bionic and the Neural Engine genuinely are the cornerstones of these products.

Reading into the iPhone Xr

The iPhone Xr is Apple’s third attempt to deliver a more affordable iPhone, and their approach is entirely different from the past. With the iPhone 5c and the iPhone SE, Apple produced a new shell around an existing iPhone “gut” for lack of a better word. The final result was somewhat more affordable but also felt like a buyer was compromising on his or her experience to hit the price point that was right for them. With the iPhone Xr, it feels like buyers will make little to no compromise because Apple redesigned some of the features to contain costs but left the experience intact.

Nobody knows what the r in iPhone Xr stands for, but I like to think that it just signifies that it is a close experience to the iPhone Xs. Apple made sure that the engine powering the iPhone Xr was the same as in the other two X models: the A12 Bionic chip. Apple also included the TrueDepth camera at the front, which enables FaceID, Animojis, and AR. Basically, Apple made sure that buyers would make no compromise on security, a more modern, cleaner, yet richer UI, and a new class of apps designed around AR. For other features that would have driven up cost, like a dual camera system at the back and the OLED screen, Apple delivered something different. For the camera, Apple is using a single camera and takes advantage of the new Neural Engine in the A12 Bionic to enable portrait mode, the most significant driver of dual-camera adoption. There is a small compromise here, as portrait mode is supposed to work on humans only, at least for now. For the screen, Apple improved its LCD to deliver a Liquid Retina Display at 326 pixels per inch, which is the same as on the iPhone 8.

The size of the iPhone Xr is also interesting, as it sits in between the iPhone Xs and the iPhone Xs Max. This might be a big jump for users upgrading from an iPhone 5c or SE, but if this is their primary computing device, I think they would quickly find that the larger display is well worth getting past the initial awkwardness of holding a bigger phone. This also allows Apple to deliver two products supporting dual SIM in China, where both SIMs have to be physical.

Lastly, the colors. The iPhone 5c was not very successful, but the colors were certainly something that people appreciated. Maybe the mistake Apple made back then was the material, which made the phone look cheap. This is not the case with the iPhone Xr and its glass back that merges fun, modern color with elegance. Apple refrains from naming specific target audiences, but I would think Gen Z and younger Millennials are a pretty good audience for the iPhone Xr.

A Clean Portfolio

Some people after the keynote seemed concerned that the iPhone portfolio was now confusing. I actually think sizes and pricing will guide buyers quite easily.

With the launch of the iPhone Xs, production of the iPhone X was halted, which means that buyers might still be able to find a few around until units in the channel run out. After that, however, they will have to turn to the iPhone Xs or the larger Xs Max. Eliminating the iPhone X helped Apple keep the same price point for the iPhone Xs and limit the increase for the iPhone Xs Max to $100. Leaving the iPhone X in the lineup, as in previous years, would have complicated the pricing of both the iPhone Xs and the Xr, which would have likely led to confusion as well as cannibalization.

To get some perspective on what this portfolio means, it is worth knowing that roughly 40% of the iPhone user base in the US is on an iPhone model older than the iPhone 7. Some people might be disappointed that the iPhone SE and the iPhone 6S are no longer available, but even though iOS 12 has been designed to be kind to older iPhone models, I think this is the right time to get users to step up. Apple is prepping the installed base to benefit from as many services as possible, and to do that, users must upgrade.

With an iPhone 7 running iOS 12 priced at $449, it is hard to say that Apple does not have a good option for users who are more financially constrained. As a comparison, today on Amazon.com you can find an unlocked Samsung Galaxy S7 for around $290 to $450. More likely than not, those devices are running Android 6.0 and are upgradable, with some speed degradation, to Android 8.0. There are brand-new flagship phones from up-and-coming names like OnePlus that are priced around $500, but more pragmatic buyers tend to rely on brands they are familiar with, meaning that we might see churn from Android to iOS pick up a little.

Apple Watch Has Grown Up

There is so much to say about Apple Watch that I will be writing a separate column. Here I want to point out that I firmly believe the combination of the new Series 4, with its redesign and focus on health, and the reduced price of the Series 3 will cement Apple's leadership in the smartwatch market. Series 4 will drive the first big replacement cycle, while Series 3 might attract users who are still a little hesitant and therefore might want to limit their investment.

In many ways, Apple Watch was even more exciting than the iPhone for me, because, like the iPhone X last year, we see the promise of the future of the segment materialize.

You Cannot Dismiss the Power of Silicon

Of course, some said that what we saw on Wednesday was incremental and not exciting. After all, this is "an s year." True, the iPhone Xs looks the same as its predecessor. But this year it is not just about being faster. Maybe this year the "s" in the iPhone name should stand for silicon and the neural engine that makes these devices not just faster but much smarter. It is hard to explain to a layperson what processing 5 trillion operations per second means. But as new apps that take advantage of the improved machine learning platform start to come out, and as the quality of the pictures these phones take is compared to that of other models, dismissing this year's models as just "an s year" will seem highly inaccurate.

Why Cheating on Smartphone Benchmarks Matters to You

on September 12, 2018
Reading Time: 3 minutes

Earlier this month, a story posted on the popular tech review site AnandTech revealed some interesting data about the performance of flagship Huawei smartphones. As it turns out, benchmark scores in some popular graphics tests, including UL Benchmarks' 3DMark and the long-time mobile graphics test GFXBench, were being artificially inflated to gain an advantage over competing phones and application processors.

These weren't small changes. Performance in a particular subset of the GFXBench test (T-Rex offscreen) jumped from 66.54 FPS to 127.36 FPS, an improvement of nearly 2x. The lower score is what the testing showed when "benchmark detection mode" was turned off – in other words, when the operating system and device were under the assumption that this was a normal game. The higher score is generated when the operating system (customized by Huawei) detects a popular benchmark application and raises power consumption on the chip beyond levels that would actually be allowed in a shipping phone. This is done so that reviews that utilize these common tests paint the Huawei devices in a more favorable light.
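Purely to illustrate the mechanism being described, and only that, a whitelist-and-boost scheme amounts to a special case inside the device's power management. The sketch below is invented for the example; none of the identifiers or numbers come from Huawei.

```swift
// Hypothetical illustration of the "benchmark detection" logic described above.
// Huawei's firmware is not public, so the package names and power figures here
// are invented; only the shape of the logic matters.
let knownBenchmarkApps: Set<String> = [
    "com.example.gfxbench",   // placeholder identifiers, not real package names
    "com.example.3dmark"
]

struct PowerLimits {
    let sustainedWatts: Double       // long-run power the SoC is allowed to draw
    let throttleTempCelsius: Double  // temperature at which clocks are pulled back
}

func powerLimits(forForegroundApp bundleID: String) -> PowerLimits {
    if knownBenchmarkApps.contains(bundleID) {
        // "Benchmark mode": limits no phone chassis could sustain for long,
        // inflating the scores that reviewers end up publishing.
        return PowerLimits(sustainedWatts: 8.0, throttleTempCelsius: 95)
    }
    // Normal mode: what ordinary games and apps actually get.
    return PowerLimits(sustainedWatts: 4.5, throttleTempCelsius: 85)
}
```

Because ordinary apps never match the whitelist, the inflated numbers show up only when reviewers run the well-known tests, which is exactly why the practice misleads buyers.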

The team behind the Geekbench benchmark found similar results, and I posted about them on ShroutResearch.com recently. Those results showed multi-core performance deltas as high as 31% in favor of the "cheating" mode.

While higher scores are better, of course, there are significant problems with the actions Huawei undertook to mislead the editorial audience and consumers.

First, and maybe most important for Huawei going forward, this testing and revelation paints the newly announced Kirin 980 SoC (developed in-house by HiSilicon) in a totally different light. While the launch press conference looked to show a new mobile chipset that could run screaming past Qualcomm's Snapdragon 845 platform, we now have to view the benchmarks Huawei presented as dubious at best. Will the Kirin 980 actually live up to the claims that the company put forward?

The most obvious group affected by Huawei's decision to misrepresent current shipping devices is consumers. Buyers of flagship devices often depend on reviews, and on the benchmarks that lead up to an author's recommendation, to aid in the buying process. And customers particularly interested in the gaming and high-end application performance of their smartphones pay even more direct attention to benchmark results, some of which were falsely presented.

Other players in the smartphone market that are not taking part in the act of cheating on benchmarks also suffer due to Huawei's actions, which is obviously the point. Competing handset vendors like Samsung, Oppo, Vivo, and perhaps even Apple are handicapped by the performance claims Huawei has made, showing their devices in an artificially negative light. In the Chinese market, where benchmarks and performance marketing are even more important than in the US, Huawei's attempt to stem the tide of competition has the most effect.

To a lesser degree, this hurts Qualcomm and Samsung's Exynos products too, making their application processor solutions look like they are falling behind when in fact they may actually be the leaders. Most of the high-end smartphones in China and the rest of the world are built around the Snapdragon line, and pressure on Qualcomm from its own customers was growing after they saw Huawei supposedly taking performance leadership.

This impacts the developers of tools like 3DMark, Geekbench, and GFXBench as well. To some on the outside, this will invalidate the work behind these tests and taint the impact of other, non-cheating results. Consumers will start to fear that other scores are artificially inflated and not a representation of the performance they should expect to see in their devices. Other silicon and device vendors might back out of support for the tools, reducing the development resources these companies have to improve and innovate on benchmark methodology.

Huawei's answer, that "it's just some AI" purposefully shifting the benchmark scores, has the potential to damage the entire AI movement. If consumers begin to associate AI-enabled devices and software with misrepresentation, or come to believe that everything that integrates AI is actually a scam, we could roll back the significant momentum the market has built and risk cutting it off completely.

Measuring performance on smartphones is already a complicated and tenuous task. The benchmarks we have today are imperfect and arguably need to undergo some changes to more accurately represent the real-world experiences that consumers get with different devices and more capable processors. But acts like cheating make it harder for the community at large to work together and address the performance questions as a whole.

Do we need better mobile device testing for performance and features and cameras and experiences? Yes. But cheating isn’t the way to change things and, when caught, can do significant damage to a company’s reputation.

Apple's Vertical Strategy is Key to Their Success

on September 12, 2018
Reading Time: 3 minutes

One of the things I learned very early on in my limited relationship with Steve Jobs was that he was a control freak. That was both good and bad. Bad in the sense that it was a factor in him getting fired in 1985, when he tried to take control of everything related to the Mac and the way he tried to manage Apple. It was also a good trait when channeled correctly. That started when he came back to Apple in 1997, having learned a great deal at NeXT about sharing responsibility with others under him and not being so anal that he had to be in control of everything.

Qualcomm, Android Wear, and Competition in Miniaturization

on September 11, 2018
Reading Time: 3 minutes

Yesterday Qualcomm unveiled their newest Snapdragon creation, which has been custom designed for the smartwatch/wearable category. On the heels of this announcement, there are a few critical observations to make when we think about the future of wearables.

The Many Paths and Parts to 5G

on September 11, 2018
Reading Time: 4 minutes

The road to 5G is certainly an interesting one, and it's increasingly clear that there are going to be multiple paths to get there. Different countries around the world are taking different routes, and within countries, different carriers are also following unique strategies. The net result—realistically—is probably going to be even more confusion around what is already a complicated topic.

Thankfully, there are a number of important commonalities that will link all the stories together. First, it’s important to note that 5G is the first major new network that builds completely on its predecessor. Unlike the move to 4G from 3G or even 3G from 2G, 5G can fully utilize the very strong 4G LTE networks that have been—and are still being—built all around the world. In each of the previous network transitions, entirely new networks had to be put in place before the benefits of the new standard were really felt.

The practical impact of this existing technology base is that LTE is going to stick around a lot longer than previously superseded networks did. In addition, it's likely that the full transition to 5G could take longer than previous transitions. At the same time, a number of major technical advances used to enhance LTE, including MIMO (multiple-input, multiple-output) and OFDM (orthogonal frequency division multiplexing), will be beneficial for both 4G and 5G users and their devices.

In fact, it’s even conceivable that some advanced versions of Gigabit LTE could be faster than early 5G network deployments because of all the refinements that have been made to 4G LTE over the years. To be clear, as 5G evolves, it will be faster than 4G, but many of the initial benefits for 5G will be focused more on delivering consistently high speeds in many different environments—think stadiums, trade shows, surrounded by skyscrapers in a dense city—as opposed to just the bursty high speeds we can occasionally now get from 4G. In part this is because early 5G mobile network deployments are almost entirely focused on high-frequency millimeter wave spectrum signals, which require many more small cell towers and can transmit over much shorter distances than the lower frequencies used for 4G LTE. Transition to full 5G support in the lower frequency bands currently used for 4G is still several years away.

Many of these key advancements are being driven by important innovations from leading telecom industry players, such as Qualcomm, Ericsson, Nokia, Samsung, Intel, and Huawei, all of whom contribute their efforts to organizations like the 3GPP, which helps create and promulgate critical worldwide telecom industry standards. Of course, enormous amounts of R&D dollars and effort go into creating these standards, so the companies involved all charge royalties to recoup and justify their efforts. While some of these practices have been viewed as controversial, the fact is, companies should be able to benefit from the intellectual property they've created—even when it becomes part of industry standards. The laws behind these principles can get complex quickly, but the bottom line is that there is a long and rich US business tradition of being rewarded for key technology innovations. In fact, it's one of the key reasons companies have been willing to make the investments necessary to push critical standards forward in many different industries.

How these technologies get deployed by chip makers, device makers, network equipment makers, and carriers is, of course, the real trick to differentiation and strategy, in particular as 5G starts to be rolled out. From a carrier perspective, one of the biggest early differentiators will be which markets they choose to focus the technology on. In the US, for example, Verizon has talked about first using 5G for fixed wireless deployments, providing a wireless alternative to the broadband services currently offered by cable companies and the various technologies offered by carriers. AT&T, for their part, have said they plan to focus first on mobile 5G applications. Of course, all the telcos will eventually provide a wide range of services—particularly for mobile networks—but the manner in which they offer those services will vary.

As the technology base evolves, we're also going to see a much wider range of services available with 5G than we've seen with previous network generations. While it's easy to simply call this hype, there are a number of important reasons why 5G really is going to be a big deal. First, much of the network infrastructure and services associated with 5G are arriving at a pivotal time for other related technologies as well. Software-defined networking, or SDN, marks a particularly important shift in technologies for networking equipment. While SDN has been around in private networks for several years, its real impact in terms of flexibility and the range of services available won't be felt until 5G is more widely deployed.

Similarly, the influence of edge computing models is emerging just as 5G is becoming an important factor. With edge computing, the idea is that instead of focusing on a centralized cloud computing architecture, it's going to be more important to spread those computing resources across a wider range of devices that are distributed across the ends of the network. Ironically, some have argued that the rise of distributed edge computing could actually lessen the importance of, and dependence on, a network connection because the compute and storage resources are more readily accessible. But when you think about the issue from an overall computer systems perspective, you realize that in order to most efficiently take advantage of those resources, you need to dramatically increase the throughput to and from these edge devices. Otherwise they could sit there starving for data—a classic design flaw. By providing speedy access to data, along with a flexible, software-driven network architecture, 5G can fully enable the potential of edge computing—hence its direct tie to, and dependence upon, the 5G technology shift. In fact, as AT&T hinted at during its Spark developer event in San Francisco yesterday, the company is intensely interested in bringing more compute power directly into the telecom network, as well as developing more sophisticated software that can balance the compute load in an intelligent way across the network, as it transitions to 5G.

To be clear, the 5G hype is very real and early deployments in late 2018/2019 could prove to be disappointing. However, when you analyze the key technological developments behind the 5G transition and put them in context with other key tech industry megatrends happening around them, it’s clear that the eventual impact of 5G will be enormous. At this point, it’s just about figuring out which parts and which paths will be used to get there.

AI and Deep Fake Videos

on September 10, 2018
Reading Time: 3 minutes

The first time Adobe showed me Photoshop, I was fascinated by its potential. The ability to adjust a picture to make it better has become an essential tool for professionals, especially in graphics, entertainment, advertising, and many other types of applications.

When I had some professional photos taken for use in my bio and the brochures sent out when I am asked to speak, the photographer used Photoshop to remove a slight bit of under-the-chin fat and make my face more proportionally pleasing. In this case, I was happy for Photoshop.

However, when they showed me Photoshop before it was released, I pointed out that it could also be used to doctor photos and create false images out of real ones. Of course, that is exactly what has happened over the years. Now a new type of tool, in a similar vein to Photoshop, will soon come to market that can do the same thing for videos.

Luke Dormehl of Digital Trends wrote a few weeks back about a presentation at SIGGRAPH 2018 in Vancouver, BC, covering new research from Germany's Max Planck Institute for Informatics on what are being called "deep fake" videos:

“They have created a deep-learning A.I. system which can edit the facial expression of actors to match dubbed voices accurately. Also, it can tweak gaze and head poses in videos, and even animate a person’s eyes and eyebrows to match up with their mouths — representing a step forward from previous work in this area.

“It works by using model-based 3-D face performance capture to record the detailed movements of the eyebrows, mouth, nose, and the head position of the dubbing actor in a video,” Hyeongwoo Kim, one of the researchers from the Max Planck Institute for Informatics, said in a statement. “It then transposes these movements onto the ‘target’ actor in the film to accurately sync the lips and facial movements with the new audio.”

The researchers suggest that one possible real-world application for this technology could be in the movie industry, where it could carry out tasks like making it easy and affordable to manipulate footage to match a dubbed foreign vocal track. This would have the effect of making movies play more seamlessly around the world, compared with today where dubbing frequently results in a (sometimes comedic) mismatch between an actor’s lips and the dubbed voice.

Still, it’s difficult to look at this research and not see the potential for the technology being misused. Along with other A.I. technologies that make it possible to synthesize words spoken in, say, the voice of Barack Obama, the opportunity for this to make the current fake news epidemic look like paltry in comparison is unfortunately present. Let’s hope that proper precautions are somehow put in place for regulating the use of these tools.”

While I understand its value to, as Mr. Dormehl points out, the movie industry, its use to create fake videos and fake news could be staggering. In my work, I get quoted many times a week by the media on news stories. Over the years I have been lucky that most reporters quote me as stated, and only a few times have I been misquoted in print. I also do a lot of commentary on national and local TV shows around tech topics, and these usually are taped. While a few of my comments have been taken out of context, the actual video of what I shared has never been altered. I would hate to have someone put words in my mouth that I did not say, but that's child's play compared to how this could be used for nefarious reasons.

Imagine someone using this technology to post a video of a major world leader falsely declaring war on an enemy. Alternatively, someone could take a video of a person and interject their own words to push a political agenda, or even threaten someone in a way that ends up impacting that person's life.

Although I was not aware of this particular research from the Max Planck Institute until recently, I saw this kind of technology years ago when I visited a tech lab in the Bay Area that was working on something similar focused on a type of military application. At that time I got a glimpse of how this could work and observed to my hosts that it could be used for both good and evil.

This video-adjusting technology that uses AI will come to market because there are legitimate applications for it, especially in the world of movie making. However, I sure hope that with it come some form of checks and balances that will keep it out of the hands of non-professionals.

At this point, it appears that this is a technology demo and not yet a product coming from a specific company. I suspect we will find out relatively soon what type of company may license this and how it will be used for commercial purposes.

This type of technology is scary given the plethora of fake news and images that already get posted through all kinds of mediums today. Imagine how fake videos could be used in the future and the potential this technology has for creating counterfeit videos for evil purposes.

Podcast: Tech Congressional Hearings, Apple Event Preview, CEDIA, Sony

on September 8, 2018
Reading Time: 1 minute

This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell discussing the Congressional hearings with major tech players Facebook and Twitter, previewing what they’d like to see Apple introduce at their event next week, and describing some of Sony’s announcements at the CEDIA trade show as well as new core technology developments they’ve recently introduced.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Despite rumors, 7nm is Not Slowing Down for Qualcomm

on September 7, 2018
Reading Time: 3 minutes

Earlier this week, a story ran on Digitimes indicating there might be some problems and a slowdown with the rollout of 7nm chip technologies for Qualcomm and MediaTek. Digitimes is a Taiwan-based media outlet that has been tracking the chip supply chain for decades but is known to have a rocky reliability record when it comes to some of its stories and sources.

The author asserts that, "Instead of developing the industry's first 7nm SoC chip," both of the fabless semiconductor companies mentioned have "moved to enhance their upper mid-range offerings by rolling out respective new 14/12nm solutions."

But Qualcomm has already built its first 7nm SoC and we are likely to see it this year at its annual Snapdragon Tech Summit being held in Maui, Hawaii this December. The company has already sent out “save the date” invites to media and analysts and last year’s event was where it launched the Snapdragon 845, so it makes sense it would continue that cadence.

If that isn't enough to satisfy doubters, Qualcomm went as far as to publish a press release stating that the "upcoming flagship" mobile processor would be built on 7nm and that it had begun sampling this chip to multiple OEMs building the next generation of mobile devices. The press release quotes QTI President Cristiano Amon as saying "smartphones using our next-generation mobile platform [will launch] in the first half of 2019."

Digitimes' claim that both Qualcomm and MediaTek have "postponed" launches from 2018 to 2019 runs counter to all the information we have received over the previous six months. As far as we can tell, the development of the next Snapdragon product and TSMC's 7nm node is on track and operating as expected.

12nm/14nm refinements are coming

The assertion that Qualcomm is enhancing upper- and mid-range platforms around the existing 14nm and 12nm process nodes is likely true. It is common for the leading-edge foundry technologies to be limited to the high performance and/or high efficiency products that both require the added capability and can provide higher margins to absorb the added cost of the newer, more expensive foundry lines.

There could be truth to the idea of chip companies like Qualcomm putting more weight behind these upper-mid-grade SoCs due to their alignment with the 5G rollout across various regions of the globe. But this doesn't indicate that development has slowed in any way for the flagship platforms.

7nm important for pushing boundaries

Despite these questions and stories, the reality is that the 7nm process is indeed necessary for the advancement of the technology that will push consumer and commercial products to new highs as we move into the next decade. Building the upcoming Snapdragon platform on 7nm means Qualcomm can provide a smaller, denser die to its customers while also targeting higher clock speeds and additional compute nodes. This means more cores, new AI processing engines, better graphics, and integrated wireless connectivity faster than nearly any wired connection.

This does not benefit only Qualcomm though; there is a reason Apple’s upcoming A12 processor is using 7nm for performance and cost efficiency advantages. AMD is driving full speed into 7nm to help give it the edge over Intel in the desktop, notebook, and enterprise CPU space for the first time in more than a decade. AMD will even have a 7nm enterprise graphics chip sampling this year!

Those who don't clearly see the advantage 7nm will give to TSMC's customers haven't been watching the struggles Intel is having with its product roadmap. Without an on-schedule 10nm node, it is being forced to readjust launches and product portfolios to a degree I have never seen. The world's largest silicon provider will survive the hurdle, but to assume that its competitors aren't driving home their advantage with early integration of 7nm designs would be naive.

News You Might Have Missed: Week of Sept 7th, 2018

on September 7, 2018
Reading Time: 4 minutes

Evernote’s Troubles

In the past month, Evernote lost its Chief Technology Officer, Anirban Kundu; its Chief Financial Officer, Vincent Toolan; and its head of HR, Michelle Wagner. As it gets ready to raise more money, it has slashed its premium subscription from $70 to $42 a year.

The ‘Post-PC Era’ Never Really Happened…and Likely Won’t

on September 6, 2018
Reading Time: 3 minutes

As we head toward Apple’s annual device announcement-palooza, it’s an interesting exercise to consider where we are in Steve Jobs’ vaunted, much-quoted ‘Post-PC Era’. The fact of the matter is that era never fully arrived, and it doesn’t look like it will in the near- to medium-term future.

Much was made last year of the iPhone X, celebrated as Apple’s 10th anniversary iPhone model. But in just 18 months, we’ll be commemorating the 10th anniversary of the launch of the iPad. Though initially met with skepticism by many analysts and tech reviewers, the iPad found quick out-of-the-gate success, which led to Jobs’ famous ‘post-PC era’ quote a mere two months later.

Tablets have had a good run, but sales have tailed off of late. I’d say they’ve had a greater influence on the evolution of the smartphone and the PC than on changing the mix of devices most of us carry around today. My Techpinions colleague Ben Bajarin says that Creative Strategies surveys indicate that only about 10% of tablet users have ‘replaced their PC’ — a number that has held steady for several years. And that 10% is concentrated in a handful of industries, such as real estate and construction. PC sales aren’t exactly surging, but they’re steady. Your average white-collar professional today still carries around a smartphone and a laptop, with the tablet being an ancillary device, used primarily for media/content consumption.

Tablets have had a significant influence on the design of smartphones and PCs. They ushered in an era of smartphone screen upsizing, led primarily by Samsung and now reinforced by the iPhone X and the expected announcement next week of a 6.5-inch iPhone model. For those who don’t want to carry both a smartphone and a tablet, we have ‘phablets’, best exemplified by the successful Galaxy Note series, and alternative-to-keyboard input devices such as the S Pen and the Apple Pencil. We’ve also seen the development of some hybrid tablet/PC devices, the most innovative and successful of which is Microsoft’s Surface line. But that product is competing more in the tablet category than in the PC category, with the exception of a few market segments. And the growing number of portable PCs that feature touch screens and other tablet-like capabilities are eating a bit into tablet sales, particularly among the student set. The other embodiment of some aspect of the ‘post-PC’ era, I suppose, is the successful Chromebook line, which is more a reflection of the cloud and the near-pervasiveness of broadband connectivity.

It even appears that Apple doesn’t believe in the ‘post-PC’ mantra in the same way, given the steadily narrowing delta between the largest iPhone and the smallest iPad. Mainly, this is an effort to convince more users to have both an iPhone and an iPad, since I doubt that most users who have both would have a big phone and a small tablet.

So, the question is, what will change in 3 to 5 years? There will be tons of innovation, of course, but I’m not expecting the average consumer or business professional to be carrying a dramatically different mix of device types, or number of devices, in the medium term. Even with pens that recognize and convert handwriting better and continual improvements in voice input, there’s still nothing that really beats the good ol’ keyboard for productivity. And we’re still very locked into the Big Three of word processing, spreadsheets, and presentation software. The main difference has been the move to the cloud, improved collaboration, and competitive products from Google.

There’s a lot of excitement around foldable screens, but that’s initially likely to be more about coolness of form factor and the admission that the largest phones/phablets are becoming unwieldy. There are also steady improvements in mirroring type capability, where the idea is that your portable device upsizes to a big screen when at home or work. But it still requires a fair bit of effort, plus ancillary devices (and their associated cables and chargers) to make it all really work. And among many business professionals, there’s still too much time spent in locations other than home or the office where PC-type functionality is required.

It is likelier that innovation in each category will continue to influence the other categories, just as there’s more touch capability on PCs, and more input options on tablets. But looking out to the early 2020s, I don’t see any dramatic shift in what the average person will be carrying with them on a day-to-day basis. A bunch more of us will have smartwatches or some other wearable. And if anything, the tablet segment might fall off somewhat, squeezed by bigger and more functional phones on one end, and by more versatile laptops on the other end. But among the market share leaders in each category (and there’s a fair bit of overlap), none are planning for any form of product obsolescence anytime soon. When we celebrate the 10th anniversary iPad in April 2020, we’ll be marveling at the significant improvements in speed, display, wireless connectivity, and so on. But PCs will continue to be the workhorse for most of us.

Amazon’s Power Position

on September 6, 2018
Reading Time: 4 minutes

There are several interesting observations surrounding Amazon as it briefly became the second company to reach a trillion-dollar valuation. While Amazon’s market cap has dipped back below that mark, it will inevitably get back to a trillion dollars and beyond.

What Current iPhone X Users Tell Us About the Opportunity for the New Models

on September 5, 2018
Reading Time: 4 minutes

We are exactly a week away from the Apple September event, which calls us to gather in Cupertino to see what is expected to be the next iPhone generation.

As we get closer to the event, rumors and leaks multiply. A consistent expectation seems to be that Apple will remove the home button, and with it Touch ID, from across the new iPhone models. There has been quite a bit of debate on whether or not Apple would be wise to do so, as people love Touch ID and there are concerns about the learning curve of the new UI centered around Face ID.

I was a big fan of Touch ID, and I must admit there was some trepidation on my part when the change was announced. At the end of the day, accepting change is a little harder when things work than when change comes to replace something you hated. I can tell you that, having used an iPhone X as my main smartphone since its launch, I have not regretted the move from Touch ID to Face ID. Except for a few occasions when, due to poor light conditions or my inability to open my eyes properly (mostly first thing in the morning), it failed to recognize me, Face ID just works. From picking up to swiping up, it is all one fluid movement that makes you feel like your phone was not even locked.

Given that a sample of one does not make for good research, we at Creative Strategies reached out back in March 2018 to 955 consumers in the US, 680 of whom were iPhone X owners.

Face ID vs. Touch ID

We asked how satisfied users were with some key features of the iPhone X. Satisfaction runs pretty high across features, with the exception of Siri, where users were much more ambivalent. Looking at average satisfaction, the Super Retina 5.8” display ranked highest at 1.87 (on a -2 to 2 scale), followed by the speed of the iPhone X (1.85) and the looks of the iPhone X (1.84). The new swipe-based gesture interface scored 1.70 and Face ID scored 1.53.
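To make the scoring concrete, here is a minimal sketch (in Python) of how an average like the 1.87 display figure can be derived: each five-point satisfaction answer is mapped onto the -2 to 2 scale and the result is a weighted mean. The response shares below are hypothetical placeholders, not the actual survey distribution.

```python
# Minimal sketch: a weighted mean of five-point satisfaction responses on a -2 to 2 scale.
# The response shares below are hypothetical and NOT the actual survey distribution.

SCALE = {
    "very dissatisfied": -2,
    "somewhat dissatisfied": -1,
    "neither satisfied nor dissatisfied": 0,
    "somewhat satisfied": 1,
    "very satisfied": 2,
}

def average_satisfaction(shares):
    """Return the weighted mean score, given each answer's share of respondents."""
    total = sum(shares.values())
    return sum(SCALE[answer] * share for answer, share in shares.items()) / total

# Hypothetical distribution for a single feature (shares sum to 1.0).
example_shares = {
    "very dissatisfied": 0.01,
    "somewhat dissatisfied": 0.02,
    "neither satisfied nor dissatisfied": 0.05,
    "somewhat satisfied": 0.12,
    "very satisfied": 0.80,
}

print(round(average_satisfaction(example_shares), 2))  # prints 1.68
```

The headline averages in this section are simple weighted means of this sort, which is why a score near 2.0 implies that the vast majority of respondents picked “very satisfied.”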

When looking at the features individually, 79% of the sample was very satisfied with the swipe-based gesture UI and 65% was very satisfied with Face ID. The ambivalence on Siri comes across clearly when looking at how the sample was distributed: 33% was somewhat satisfied, 27% was neither satisfied nor dissatisfied, and another 21% was somewhat dissatisfied. While this is not great for Apple, it is not all bad news either. Users clearly don’t see Siri as a purchase driver or positive differentiator, but they also do not see it as detrimental to their overall experience.

Satisfaction is a very good indicator, but we really wanted to get to how users felt about some of the changes implemented on the iPhone X, so we asked if they agreed or disagreed with some very specific statements.

70% of the panel strongly disagreed with the statement that they “miss the home button from previous iPhone models.” Another 14% said they somewhat disagreed.

Because some people think differently about the Home Button and Touch ID, we wanted to make sure we asked about both. Some users think of Touch ID as an enabler for Apple Pay and authentication across the board, while they think of the Home Button as the “control center” for navigating the iPhone. We asked whether they agreed or disagreed with the statement “I miss having Touch ID on my iPhone.” Here, while the overall sentiment remained positive, it was a little more muted, with 50% saying they strongly disagreed and 21% saying they somewhat disagreed.

When asked if “adapting to the new swipe-based gesture interface has been easy,” 78% strongly agreed with the statement and another 17% somewhat agreed. While this panel was skewed towards early adopters, the results bode well for a more mainstream user base, as it would be safe to assume that the learning curve of the new UI would be moderate at worst for them.

Price

Another highly debated point when it comes to the iPhone X has been the price. Apple’s last two earnings calls have made it clear that price has not been a hindering factor in adoption, as the iPhone X has been the best-selling model for two quarters in a row. I have been arguing for a while that users are prepared to pay for something that gives them a high return, and considering the time users spend on smartphones and the wide range of tasks they perform, the investment is certainly worthwhile. When we asked whether or not our panel of iPhone X users agreed with the statement “The iPhone X is worth the price I paid for it,” 52% said they strongly agreed and another 33% said they somewhat agreed. Being expensive and being worth the price are not mutually exclusive, and I think the attention is usually only on the cost rather than the return on the investment.

But the question has now really moved to “how much higher can Apple go?” With the rumors of a bigger iPhone X-like model, chances are that the screen will not be the only thing getting bigger. When we asked our panel of iPhone X users about paying more for an iPhone X with a larger screen, the stated intention might not help us predict the future as much as we would like. When asked if they agreed with the statement “I would have paid even more for an iPhone X with a larger screen size had it been available,” only 5% strongly agreed and another 14% said they somewhat agreed. The main reason I say this data point might be somewhat inconclusive is that users’ reluctance to agree with the statement might have had more to do with their satisfaction with the current screen size, which as we saw was extremely high, than with their willingness to pay more. In other words, they do not put a higher value on a larger screen.

Overall, this dataset shows that the opportunity for the new iPhone models is big, and that the combined portfolio Apple will have by the end of the September 12 event will be its strongest yet.

Notch Wars

on September 4, 2018
Reading Time: 4 minutes

Despite no longer being a hyper-growth market, smartphones are still a fascinating category to study, not only because of the unprecedented impact they have in enabling humans of all shapes and sizes, races, and economic circumstances to engage in personal computing, but also because of the global competitive strategies at play.

Tech Content Needs Regulation

on September 4, 2018
Reading Time: 4 minutes

It may not be a popular perspective, but I’m increasingly convinced it’s a necessary one. The new publishers of the modern age—including Facebook, Twitter, and Google—should be subject to some type of external oversight that’s driven by public interest-focused government regulation.

On the eve of government hearings with the leaders of these tech giants, and in an increasingly harsh environment for the tech industry in general, frankly, it’s fairly likely that some type of government intervention is going to happen anyway. The only real questions at this point are what, how, and when.

Of course, at this particular time in history, the challenges and risks that come with trying to draft any kind of legislation or regulation that wouldn’t do more harm than good are extremely high. First, given the toxic political climate that the US finds itself in, there are significant (and legitimate) concerns that party-influenced biases could kick in—from either side of the political spectrum. To be clear, however, I’m convinced that the issues facing new forms of digital content go well beyond ideological differences. Plus, as someone who has long-term faith in the ability of the democratic principles behind our great nation to eventually get us through the morass in which we currently find ourselves, I strongly believe the issues that need to be addressed have very long-term impacts that will still be critically important even in less politically challenged times.

Another major concern is that the current set of elected officials aren’t the most digitally-savvy bunch, as was evidenced by some of the questions posed during the Facebook-Cambridge Analytica hearings. While there is little doubt that this is a legitimate concern, I’m at least somewhat heartened to know that there were quite a few intelligent issues raised during those hearings. Additionally, given all the other developments around potential election influencing, it seems clear that many in Congress have been compelled to become more intelligent about tech industry-related issues, and I’m certain those efforts to be more tech savvy will continue.

From the tech industry perspective, there are, of course, a large number of concerns as well. Obviously, no industry is eager to be faced with any type of regulations or other laws that could be perceived as limiting their business decisions or other courses of action. In addition, these tech companies have been particularly vocal about saying that they aren’t publishers and therefore shouldn’t be subject to the many laws and regulations already in place for large traditional print and broadcast organizations.

Clearly, companies like Facebook, Twitter and Google aren’t really publishers in the traditional sense of the word. The problem is, it’s clear now that what needs to change is the definition of publishing. If you consider that the end goal of publishing is to deliver information to a mass audience and do so in a way that can influence public opinion—these companies aren’t just publishers, they are literally the largest and most powerful publishing businesses in the history of the world. Period, end of story.

Even in their wildest dreams, publishing and broadcasting magnates of yore like William Randolph Hearst and William S. Paley couldn’t have imagined the reach and impact that these tech companies have built in a matter of just a decade or so. In fact, the level of influence that Facebook, Twitter, and Google now have, not only on American society but on the entire world, is truly staggering. Toss in the fact that they also have access to staggering amounts of personal information on virtually every single one of us, and the impact is truly mind-blowing.

In terms of practical impact, the influence of these publishing platforms on elections is of serious concern in the near term, but their impact reaches far wider and crosses into nearly all aspects of our lives. For example, the return of childhood measles—a disease that was nearly eradicated from the US—is almost entirely due to scientifically invalid anti-vaccine rhetoric being spread across social media and other sites. Like election tampering, that’s a serious impact on the safety and health of our society.

It’s no wonder, then, that these large companies are facing the level of scrutiny that they are now enduring. Like it or not, they should be. We can no longer accept the naïve thought that technology is an inherently neutral topic that’s free of any bias. As we’ve started to learn from AI-based algorithms, any technology built by humans will include some level of “perspective” from the people who create it. In this way, these tech companies are also similar to traditional publishers, because there is no such thing as a truly neutral set of published or broadcast content. Nor should there be. Like these tech giants, most publishing companies generally try to provide a balanced viewpoint and incorporate mechanisms and fail safes to try and do so, but part of their unique charm is, in fact, the perspective (or bias) that they bring to certain types of information. In the same way, I think it’s time to recognize that there is going to be some level of bias inherent in any technology and that it’s OK to have it.

Regardless of any bias, however, the fundamental issue is still one of influence and the need to somehow moderate and standardize the means by which that influence is delivered. It’s clear that, like most other industries, large tech companies aren’t particularly good at moderating themselves. After all, as hugely important parts of a capitalist society, they’re fundamentally driven by return-based decisions, and up until now, the choices they have made and the paths they have pursued have been enormously profitable.

But that’s all the more reason to step back and take a look at how and whether this can continue or if there’s a way to, for example, make companies responsible for the content that’s published on their platforms, or to limit the amount of personal information that can be used to funnel specific content to certain groups of people. Admittedly, there are no easy answers on how to fix the concerns, nor is there any guarantee that legislative or regulatory attempts to address them won’t make matters worse. Nevertheless, it’s becoming increasingly clear to a wider and wider group of people that the current path isn’t sustainable long-term and the backlash against the tech industry is going to keep growing if something isn’t done.

While it’s easy to fall prey to the recent politically motivated calls for certain types of changes and restrictions, I believe it’s essential to think about how to address these challenges longer term and independent of any current political controversies. Only then can we hope to get the kind of efforts and solutions that will allow us to leverage the tremendous benefits that these new publishing platforms enable, while preventing them from usurping their position in our society.

Podcast: VMWorld 2018, Google Assistant, IFA Announcements

on September 1, 2018
Reading Time: 1 minute

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing VMWare’s VMWorld conference, chatting about new multi-language additions to Google Assistant, and analyzing a variety of product announcements from the IFA show in Europe, including those from Lenovo, Dell, Intel, Sony, Samsung and others.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast