Top Takeaways From Studying iPhone X Owners

on April 20, 2018
Reading Time: 4 minutes


Last month, we conducted a study of iPhone X owners. Most of the respondents in our survey were from the US, but we did have pockets of respondents from many parts of Europe. Our study intentionally focused on the early adopter part of the market, since that cohort makes up the bulk of iPhone X owners today. We knew focusing on this cohort would yield the highest volume of owners, and we were right. That being said, we did capture enough non-early adopters to generate some insights on mainstream views of iPhone X, but for this article, I will focus on early adopters.

Customer Satisfaction
It's tempting to believe that when the results of a study lean heavily on one particular profile, they are too skewed to be useful. That thinking is entirely false. For many years I've been extensively studying every type of consumer profile, and the real insights come when you examine these different groups separately, under a microscope. There is some value in looking at the topline representative results of a study, but there is more value in breaking those results up into consumer profiles and seeing what different types of consumers have to say on the subject you are studying. Very few people in research know and understand this nuanced point.

Interestingly, when it comes to customer satisfaction with a product, we have not seen much variance between how early adopters and mainstream consumers rank products they like. If anything, early adopters tend to be more critical and less satisfied overall than mainstream consumers. That is why, when we see customer satisfaction from the early adopter profile come in quite high, we know the product in question is a quality one.

When it came to overall customer satisfaction, iPhone X owners in our study gave the product an overall customer satisfaction score of 97%. While that number is impressive, what really stands out when you do customer satisfaction studies is the percentage who say they are very satisfied with the product. Because the total customer satisfaction number is the sum of the very satisfied and satisfied responses, a product can have a high number of satisfied responses, a lower number of very satisfied responses, and still achieve a high overall score. The higher the very satisfied responses, the better a product truly is. In our study, 85% of iPhone X owners said they were very satisfied with the product.
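
To make the arithmetic concrete, here is a minimal sketch (in Python, with made-up response counts rather than our actual survey data) of how a topline satisfaction number is assembled from its components:

```python
# Illustrative only: hypothetical response counts, not the actual survey data.
responses = {
    "very satisfied": 850,
    "satisfied": 120,
    "dissatisfied": 20,
    "very dissatisfied": 10,
}

total = sum(responses.values())
topline = (responses["very satisfied"] + responses["satisfied"]) / total
very_satisfied_share = responses["very satisfied"] / total

print(f"Topline satisfaction: {topline:.0%}")               # 97%
print(f"Very satisfied share: {very_satisfied_share:.0%}")  # 85%
```

Two products can share the same 97% topline while one has far more "very satisfied" responses, which is why the split matters.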

That number is amongst the highest I've seen in all the customer satisfaction studies we have conducted across a range of technology products. Just to contrast it with the original Apple Watch research with Wristly I was involved in: 66% of Apple Watch owners indicated they were very satisfied with Apple Watch, a product which also earned a 97% overall customer satisfaction number in the first Apple Watch study we did.

On Apple's last earnings call, Tim Cook reported a 99% customer satisfaction number for iPhone X. An observant person may have caught this and wondered why we have different numbers. There are some possible explanations, like different panel makeups, but I think the big one is that we had a significantly higher number of iPhone X owners in our study than 451 Research did in theirs. The higher number of responses led to a slightly more balanced number, but with a margin of error of +/- 2% on either survey, I'm confident the number holds up.
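
For readers who want to sanity-check that +/- 2% claim, the standard margin-of-error formula for a sample proportion is easy to compute. The panel sizes below are assumptions for illustration, since neither survey publishes its exact count of iPhone X owners:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Rough 95% margin of error for a sample proportion (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical panel sizes -- the actual counts of iPhone X owners were not published.
for n in (300, 800, 1500):
    print(f"n={n}: 97% +/- {margin_of_error(0.97, n):.1%}")
```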

Where things got interesting in our survey was when we looked at customer satisfaction of iPhone X by specific features. I have created the following chart to help with the visual.

Looking across the board at satisfaction with the main features of iPhone X, it appears Apple has nailed the benchmark features. It was encouraging to see the two major behavior-changing features of iPhone X, the new home button-less UI and FaceID itself, each rank above 90% in customer satisfaction. Toward the latter end of the curve were portrait photos and portrait selfies (note: some of the newer features, like portrait lighting, are still in beta). Both are areas where Apple still has some headroom to improve, but both still ranked a solid satisfaction number. Then there was Siri.

I could do a whole post on early adopters' opinions of Siri, but since it's on the chart, I just want to make a few points. Firstly, you may think it's odd for us to include Siri since it's not a feature unique to iPhone X. While this is true, we included it because it is designed to be a core feature of the iPhone, and because of the unique optimizations for on-device performance and machine learning that exist with Siri on iPhone X thanks to the new processor design. The main point, however, is a reflection of an insight I mentioned earlier: early adopters are more critical of technology than mainstream consumers. This is reflected in the chart, but it also highlights something important. Even though this demographic is tech-leading, and in some cases fanatical about Apple, Siri ranking low with this cohort shows that they are also quite pragmatic and ready to criticize when necessary.

Overall, the data we collected around iPhone X show that if Apple is truly using this product as the baseline for innovation for the next decade, then it is off to a strong start and has built a solid foundation. The big exception is still Siri, but I'm optimistic Apple is changing its priorities around Siri, and I'm hopeful we will see progress here in the next few years. If Apple can bring Siri back to a leadership position and, in combination, continue to build on the hardware and software around the iPhone X foundation, then it will remain well positioned for the next decade.

News You might have missed: Week of April 20, 2018

on April 20, 2018
Reading Time: 4 minutes

ZTE’s Very Bad Week

The U.S. Commerce Department on Monday banned U.S. companies from providing components, software, and other technology to ZTE for seven years, as punishment for violating the agreement ZTE reached with the department after it illegally sold phones and equipment to Iran and North Korea. After admitting to violating the sanctions in 2017 and being fined US$1.2 billion, ZTE agreed to take action against employees but failed to do so. The US ban could affect the company's ability to build smartphones and other equipment because it relies on American processors and Google's apps.

New AMD Ryzen Chips Put Pressure on Intel, Again

on April 19, 2018
Reading Time: 3 minutes

Today marks an important day for AMD. With the launch of the Ryzen 2000-series of processors for consumer DIY enthusiasts, gamers, and OEM partners, AMD is showing not only that it has gotten back into the race with Intel, but that it is also confident enough in its capability and roadmap to start on the journey of an annual cadence of releases.

The Ryzen 2000-series is not the revolutionary step forward that we saw with the first release of Ryzen. Before last year, AMD was seemingly miles behind the technology that Intel provided to the gaming market, and the sales results showed it. Not since the release of Athlon had AMD proved it could be competitive with the blue-chip giant that built dominating technology under the Core-family brands.

While the first release of Ryzen saw huge IPC improvements (instructions per clock, one of the key measurements of peak CPU performance) of roughly 50% over the previous architectural design, Ryzen 2000 offers a more modest 3-4% uplift in IPC. That's obviously not going to light the world on fire, but it is comparable to the generation-to-generation jumps we have seen from Intel over the last several years.
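
As a rough illustration of why a small IPC gain still matters when paired with a clock bump, single-threaded performance can be approximated as IPC times frequency. The baseline clock below is an assumption, and the uplift figures are the article's rounded numbers rather than measured results:

```python
# Rough single-threaded performance model: performance ~ IPC x clock frequency.
# Numbers are illustrative; the 4.0 GHz baseline is an assumption, not a spec.
base_ipc, base_clock_ghz = 1.00, 4.0      # normalized Ryzen 1000-series baseline
new_ipc = base_ipc * 1.035                # ~3-4% IPC uplift
new_clock_ghz = base_clock_ghz + 0.3      # ~+300 MHz from the 12nm process

uplift = (new_ipc * new_clock_ghz) / (base_ipc * base_clock_ghz) - 1
print(f"Estimated generational uplift: {uplift:.1%}")   # roughly 11%
```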

AMD does have us looking forward to the “Zen 2” designs that will ship (presumably) in this period next year. With it comes a much more heavily revised design that could close remaining gaps with Intel’s consumer CPU division.

The Ryzen 2000-series of parts do have some interesting changes that stand out from the first release. These are built on a more advanced 12nm process technology from GlobalFoundries, down from the 14nm tech used on the 1000-series. This allows the processors to hit higher frequencies (+300 MHz) without drastic jumps in power consumption. Fabs like GF are proving that they can keep up with Intel in the manufacturing field, and that gives AMD more capability than we might have previously predicted.

AMD tweaked the memory and cache systems considerably in this chip revision, with claims of dropping cache latencies by 13-34% depending on the level. Even primary DRAM latency drops by 11% based on the company's measurements. Latency was a sticking point for AMD's first Ryzen release, as its unique architecture meant that one segment of cores could only talk to the other segment over an inter-chip bus called Infinity Fabric. This slowed data transfer and communication between those cores and impacted specific workloads, like lower resolution gaming. Improvements in cache latency should alleviate this to some degree.

The company took the lessons learned in the first generation with a feature called Precision Boost and improved it for the 2000-series. Meant to give additional clock speed to cores when the workload is only utilizing a subset of available resources, the first iteration used a very rigid design, improving performance in only a few scenarios. The new implementation creates a gradual curve of clock speed headroom versus core utilization, meaning that more applications that don't fully utilize the CPU will be able to run at higher clocks based on the available thermal and electrical capabilities of the chip.
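
A toy model helps show the difference between the two approaches. This is not AMD's actual boost algorithm, and the clock values are invented; it simply contrasts a rigid two-step boost with a gradual curve across active-core counts:

```python
# Toy model only -- not AMD's boost algorithm, and the clock values are made up.
MAX_BOOST, ALL_CORE = 4.3, 4.0   # GHz, illustrative
TOTAL_CORES = 8

def rigid_boost(active_cores):
    # First-generation style: full boost for 1-2 active cores, all-core clock otherwise.
    return MAX_BOOST if active_cores <= 2 else ALL_CORE

def gradual_boost(active_cores):
    # Second-generation style: clock scales smoothly with how many cores are loaded,
    # within the chip's thermal and electrical headroom.
    frac = (active_cores - 1) / (TOTAL_CORES - 1)
    return MAX_BOOST - frac * (MAX_BOOST - ALL_CORE)

for cores in (1, 3, 5, 8):
    print(cores, rigid_boost(cores), round(gradual_boost(cores), 2))
```

The gradual curve is what lets a workload loading, say, three or five cores still pick up extra clock speed instead of immediately falling back to the all-core frequency.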

There are other changes with this launch as well that give it an edge over the previous release. AMD is including a high-quality CPU air cooler in the box with all of its retail processors, something that Intel hasn't done in a few generations. This saves consumers money and lessens the chances of not having a compatible cooler when getting all the new hardware home. StoreMI is a unique storage solution that uses tier-caching to combine the performance of an SSD with the capacity of a hard drive, essentially getting the best of both worlds. It supports a much larger SSD for caching than Intel's consumer offerings and claims to be high-performance and low-effort to set up and operate.
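
The tier-caching idea behind StoreMI is simple to sketch, even if AMD's real implementation is far more sophisticated. In this toy version (placeholder capacities and block IDs, not StoreMI's logic), the most frequently read blocks are promoted to the fast SSD tier while everything else stays on the hard drive:

```python
# Toy illustration of tier caching, not AMD's StoreMI implementation.
from collections import Counter

SSD_CAPACITY_BLOCKS = 3          # tiny fast tier, just for the example
access_counts = Counter()
fast_tier = set()                # blocks currently promoted to the SSD

def read_block(block_id):
    access_counts[block_id] += 1
    tier = "SSD" if block_id in fast_tier else "HDD"
    # Keep the hottest blocks in the fast tier, evicting the coldest when full.
    hottest = {b for b, _ in access_counts.most_common(SSD_CAPACITY_BLOCKS)}
    fast_tier.clear()
    fast_tier.update(hottest)
    return tier

for block in [1, 2, 1, 3, 1, 2, 4, 1, 2]:
    print(block, read_block(block))
```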

AMD saw significant success with the Ryzen processor launch last year and was able to grab a sizeable jump in global market share because of it. In some retailers and online sales outlets in 2017 AMD had as much as 50% share for PC builders, gamers, and enthusiasts. AMD will need many consecutive iterations of successful product launches to put a long-term dent in the Intel lead in the space, but the Ryzen 2000-series shows that AMD is capable of keeping pace.

Chromebooks, iPads, and the Desire for New Computing Platforms

on April 19, 2018
Reading Time: 4 minutes

I recently got my hands on Google's Pixel 2 Chromebook. I have been wanting to use the Pixel 2 for some time and test it in my everyday computing workflows. There is so much to like about the Chromebook platform. It's fast, fresh, and feels extremely modern, much more modern than Windows or OS X. But it is really the speed, lack of clutter, and overall fresh feeling of the OS that I like best. After a few weeks with the device, I can see how you could make a strong case that an operating system like this has more legs for the future of notebooks, and maybe desktops, than Windows or OS X. With the exception of apps.

AI is no Knight in Shining Armor fighting to save Humanity

on April 18, 2018
Reading Time: 4 minutes

Last week, during Mark Zuckerberg's congressional hearing, we heard Artificial Intelligence (AI) mentioned time and time again as the one-size-fits-all solution to Facebook's problems of hate speech, harassment, and fake news. Sadly, though, many agree with me that we are a long way from AI being able to eradicate all that is bad on the internet.

Abusive language and behavior are very hard to detect, monitor, and predict. As Zuckerberg himself pointed out, many different factors make this particular job hard: language, culture, and context all play a role in determining whether what we hear, read, or see should be deemed offensive.

The problem that we have today with most platforms, not just Facebook, is that humans are determining what is offensive. They might be using a set of parameters to do so, but they ultimately use their judgment. Hence consistency is an issue. Employing humans also makes it much harder to scale. Zuckerberg’s 20,000 people number sure is impressive, but when you think about the content that 2 billion active users can post in an hour, you can see how futile even that effort seems.

I don't want to get into a discussion of how Zuckerberg might have used the promise of AI as a red herring to get some pressure off his back. But I do want to look at why, while AI can solve scalability, its consistency and accuracy in detecting hate speech in the first place are highly questionable today.

The "Feed It Enough Data" Argument

Before we can talk about AI and its potential benefits, we need to talk about Machine Learning (ML). For machines to be able to reason like a human, or hopefully better, they need to be able to learn. We teach the machines by using algorithms that discover patterns and generate insights from the massive amounts of data they are exposed to, so that they can make decisions on their own in the future. If we input enough pictures and descriptions of dogs, and hand-code the software with what could look like a dog or be described as a dog, the machine will eventually be able to recognize the next engineered "doodle" as a dog.
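
For a concrete sense of what "teaching the machine with examples" looks like, here is a minimal sketch using scikit-learn on toy data. It is nothing like a production system, but it shows the pattern: labeled examples in, a model that can label unseen inputs out:

```python
# Toy example of learning from labeled data -- not a production model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "golden retriever playing fetch in the park",
    "small terrier barking at the mailman",
    "tabby cat sleeping on the couch",
    "kitten chasing a laser pointer",
]
labels = ["dog", "dog", "cat", "cat"]

# Turn words into counts, then fit a simple classifier on the labeled examples.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# With enough examples, the model can label descriptions it has never seen.
print(model.predict(["labradoodle puppy playing fetch"]))  # most likely ['dog']
```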

So one would think that if you feed a machine enough swear words and racial, religious, or sexual slurs, it would be able not only to detect but also to predict toxic content going forward. The problem is that there is a lot of hate speech out there that uses very polite words, just as there is harmless content that is loaded with swear words. Innocuous words such as "animals" or "parasites" can be charged with hate when directed at a specific group of people. Users engaging in hate speech might also misspell words or use symbols instead of letters, all aimed at preventing keyword-based filters from catching them.
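
The weakness of a pure keyword approach is easy to demonstrate. The block list and comments below are placeholders, but the pattern is the point: exact matches get caught, while a trivial misspelling or politely worded hate slips straight through:

```python
# A deliberately naive keyword filter; the block list and examples are placeholders.
BLOCKED = {"idiot", "trash"}

def keyword_flag(comment):
    words = comment.lower().split()
    return any(word.strip(".,!?") in BLOCKED for word in words)

print(keyword_flag("you are an idiot"))          # True  -- exact match is caught
print(keyword_flag("you are an id1ot"))          # False -- a simple misspelling slips through
print(keyword_flag("those people are animals"))  # False -- polite wording, hateful intent missed
```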

Furthermore, training the machine is still a process that involves humans, and consistency on what is offensive is hard to achieve. According to a study published by Kwok and Wang in 2013, there is a mere 33% agreement between coders from different races when tasked with identifying racist tweets.

In 2017, Jigsaw, a company operated by Alphabet, released an API called Perspective that uses machine learning to spot abuse and harassment online and is available to developers. Perspective assigns a "toxicity score" to a comment based on how similar it is to comments people have labeled toxic, and then uses that score to flag new content. The results were not very encouraging. According to New Scientist:

“you’re pretty smart for a girl” was deemed 18% similar to comments people had deemed toxic, whereas “I love Fuhrer” was 2% similar.

The "Feed It the Right Data" Argument

So, it seems that it is not about the amount of data but rather about the right kind of data. How do we get it? Haji Mohammad Saleem and his team at McGill University in Montreal tried a different approach.

They focused on content on Reddit, which they defined as "a major online home for both hateful speech communities and supporters for their target groups." Access to a large amount of data from groups that are now banned on Reddit allowed the McGill team to analyze the linguistic practices that hate groups share, avoiding the need to compile word lists while providing a large amount of data to train and test the classifiers. Their method resulted in fewer false positives, but it is still not perfect.

Some researchers believe that AI will never be able to be totally effective in catching toxic language as this is subjective and requires human judgment.

Minimizing Human Bias

Whether humans will be involved in coding or will remain mostly responsible for policing hate speech, it is really human bias that I am concerned about. This is different from talking about consistency of approach across cultural, language, and context nuances. This is about humans' personal beliefs creeping into their decisions when they are coding the machines or monitoring content. Try searching for "bad hair" and see how many images of beautifully crafted hair designs for Black women show up in your results. That, right there, is human bias creeping into an algorithm.

This is precisely why I have been very vocal about the importance of representation across tech overall, but in particular when talking about AI. If we have fair representation of gender, race, religious and political beliefs, and sexual orientation among the people trusted to teach the machines we will entrust with different kinds of tasks, we will have a better chance of minimizing bias.

Even when we eliminate bias to the best of our ability, we would be deluded to believe Zuckerberg's rosy picture of the future. Hate speech, fake news, and toxic behavior change all the time, making the job of training machines a never-ending one. Ultimately, accountability rests with platform owners and with us as users. Humanity needs to save itself, not wait for AI.

The Unseen Opportunities of AR and VR

on April 17, 2018
Reading Time: 4 minutes

Some of the most exciting and revolutionary innovations to appear on the scene over the last few years are augmented reality (AR) and virtual reality (VR). At their core, these eye-opening technologies—and the many variations that lie between them—are fundamentally new types of displays that let us see content in entirely different ways. From completely immersing us within the digital worlds of virtual reality, to enhancing our views of the “real world” with augmented reality, the products leveraging these capabilities offer experiences that delight the minds and open the imaginations of most people who try them.

And yet, sales of AR and VR products, and adoption of AR and VR apps to date have been relatively modest, and certainly lower than many had predicted. So, what’s the problem?

To better understand the opportunities and challenges facing the AR/VR market, TECHnalysis Research recently completed an online survey of 1,000 US consumers ages 18-74 who own at least one AR or VR headset. Questions covered a wide range of topics, all intended to learn more about why people bought certain AR/VR devices, how they use them, what they like and don’t like about them, and much more.

The responses revealed a range of different insights—some expected and some surprising—and made it clear that consumers who know about, and have had the opportunity to try, AR and VR products are generally very enthusiastic about them. In fact, the overall tone of the comments made by owners of these devices was surprisingly positive.

A few key facts from the survey results provide a good overview of the AR/VR market. First, most people who have tried AR or VR headsets have used one that leverages a smartphone, with more than twice as many people making that choice over a PC or game console-based option. Standalone headset usage was even lower, at about one-quarter of the number who had tried smartphone-driven solutions. Overall, 76% of respondents had only tried one type of system, while the remaining 24% had used two or more.

The most popular choices among survey respondents were the Samsung Gear VR, Sony PlayStation VR, and “other” smartphone-based headsets, such as the many generic options that were available last holiday season. Interestingly, the Sony PlayStation VR and Samsung Gear VR also had the highest satisfaction levels among owners (see Fig. 1), suggesting both that the products providing the best experience for the money were the most widely purchased, and that the aggressive marketing pursued by these companies has been effective. Around 81% of respondents own one device, but the remaining 19% own an average of 3, highlighting a group of dedicated, and curious, AR and VR enthusiasts (and pushing the overall average number of devices per person to 1.4).
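
A quick arithmetic check using the shares above shows how the overall average lands near 1.4:

```python
# Quick check of the reported average using the shares cited above.
share_single, share_multi = 0.81, 0.19
avg_among_multi_owners = 3

overall_avg = share_single * 1 + share_multi * avg_among_multi_owners
print(round(overall_avg, 2))  # ~1.38, consistent with the reported 1.4
```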

Fig. 1

One of the surprising findings from the study is that the frequency of headset use was modest, with just 18% saying they used their headsets daily, 38% reporting weekly usage, and the largest group, at 41%, saying they only used them once or twice a month. The average session length across devices was a respectable 38 minutes, but the limited overall usage suggests concerns about limited available content, and an overall sense that, while owners like the devices, they don't like them enough to warrant more frequent usage. This, in turn, raises questions about pricing, because if the products aren't used that frequently, it's harder to justify (and harder for consumers to accept) higher prices. In fact, the products that did have the highest percentage of daily users were the HTC Vive, Windows 10 Mixed Reality headsets (from a variety of vendors), and Oculus Rift, all of which are priced higher than most other options on the market.

When respondents were asked to quantify the frequency with which they felt ill or queasy using an AR or VR headset, the numbers point out that this problem still exists. Thankfully, 56% of respondents said they never or only rarely have an issue, but one-third said it happens sometimes (defined as between 10 and 49% of the time they used a device), and 11% of owners said that queasy feelings occur frequently (50-100% of the time). Technology improvements around display refresh rates, reduced latencies and other advancements should reduce these numbers, but it’s clearly still a factor in preventing wider adoption.

Most of the study focused on AR and VR headsets, but the survey also included questions about smartphone-based AR app usage without attached headsets. Given all the hype around the launch of Apple's ARKit for iOS and Google's ARCore for Android, there have been high expectations for these new apps, but the survey results confirmed what others have reported: real-world usage is just so-so. About half of respondents use these kinds of apps at least once a month, but the other half either never used these apps or have tried several of them and essentially given up. Given that respondents to this survey are generally enthusiastic about AR and VR and know and care enough about the technology to have purchased an AR/VR headset, the smartphone AR app numbers are definitely disappointing.

One reason for the modest numbers could be related to the most surprising finding of the study. Overall, respondents said they preferred VR over AR by a 3:1 ratio. Given all the industry discussion about how AR is expected to win the long-term technology battle, this certainly flies in the face of conventional thinking. Admittedly, more consumers have likely had exposure to VR than AR, but it was clear from many different types of questions throughout the study that the completely immersive experience offered by VR was one of the most appealing aspects of the technology. It was also surprising to see that the preference was consistent across the different age groups that took the survey (see Fig. 2).

Fig. 2

More than just about any other technology now available, current AR and VR products highlight the potential of what they will be able to do in the future even more than what they can do today. The ability to see both the real world and entirely new worlds in completely different ways is unquestionably a compelling experience. As the technology and market evolve, the enthusiasm of today's consumers will only grow. The opportunities may be a bit slow in coming, and the technology is unquestionably in its early days, but there's little doubt that both will likely surpass our current expectations.

(A free copy of highlights from the TECHnalysis Research AR/VR study is available here.)

Podcast: Facebook Hearings, PC Shipments, GoPro

on April 14, 2018
Reading Time: 1 minute

This week's Tech.pinions podcast features Carolina Milanesi and Bob O'Donnell analyzing the Facebook hearings before Congress, discussing recent PC shipment numbers and new HP gaming PCs, and chatting about the potential sale of GoPro to China's Xiaomi.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

As Consumer Virtual Reality Lags, Commercial Interest Grows

on April 13, 2018
Reading Time: 3 minutes

The Virtual Reality headset market has taken its fair share of lumps in the last 18 months, as the industry has struggled to find the right combination of hardware, software, and pricing to drive consumer demand. But while consumers are proving a hard sell, many in the industry have found an increasing number of companies willing and eager to try out virtual reality for a growing list of commercial use cases. A recent IDC survey helps shed some light on this important trend.

Early Testing, and Key Verticals
IDC surveyed 500 U.S. IT decision makers between December 18, 2017, and January 19, 2018. One of the key takeaways from the survey: about 58% of IT buyers said their company was actively testing VR for use in the workplace. That number breaks down like this: more than 30% said they were in early testing stages with VR. Almost 18% said they were in the pilot stage. Close to 7% said they were in the early stages of deployment. And about 3% said they had moved into late-stage deployment of the technology.

Early testing represents the lion's share of that figure, and obviously, that means different things to different people. In some companies it means a full-fledged testing scenario with different types of hardware, software, and services. In others, it may well mean somebody in IT bought an HTC Vive and is playing with it. But the fact that so many companies are in early testing is important to this nascent industry. It also means more than one-quarter of respondents said they were in a pilot stage or later with VR inside their company.

I have written at length in a previous column about the various industries where we see VR playing a role. In this survey, respondents from the following verticals had the highest response rates around VR: Education/Government, Transportation/Utilities/Construction/Resource, and Manufacturing. When we asked respondents who in their company was driving the move to embrace VR, IT managers were the most prominent (nearly 32%) followed by executives (28%) and line of business (nearly 17%).

Key Use Cases, and a Demand for Commercial Grade
Understanding how companies expect to use VR is another key element of the industry moving to support this trend. We asked respondents about their current primary use cases for VR, and the top three were product design, customer service, and employee training. The most interesting of those three to me is employee training. We've long expected VR to drive training opportunities related to high-risk jobs and high-dollar equipment. Think training firefighters and doctors, as well as people who operate million-dollar machines. But VR is quickly moving beyond these types of training to include a much broader subset of employees. VR can speed knowledge transfer for everyone from retail salespeople to auto body repair specialists to new teachers. Many companies quickly realize that VR not only speeds onboarding of new employees but can also be a big cost saver, as it cuts down on training-related travel and other expenses.

One of the key challenges for commercial VR rollouts to date is the simple fact that almost all of the hardware available today is distinctly consumer grade. I wrote earlier this year about HTC's move to offer a more commercial-friendly product with the Vive Pro, which includes higher resolution and better ergonomics, and will soon offer a wireless accessory to cut the tether. Beyond these types of updates, however, one of the things commercial buyers want from their hardware is something more robust, something that can stand up to the rough usage that occurs in a workplace. So when we asked respondents if they would be willing to pay more for commercial-grade hardware, a whopping 80% said yes.

Biggest VR Roadblocks
While the survey data points to an interest in VR among many companies, clear use cases, and a willingness to pay, bringing the technology into the workplace will still face numerous roadblocks. When we asked respondents about the biggest roadblocks to VR adoption within their company, the top answers included a lack of clear use cases, hardware cost, software cost, and services cost. So while many companies can see the obvious use cases for VR, many IT decision makers are clearly having some difficulty articulating those use cases within their company. Also, while many express a strong interest in paying more for commercial-grade hardware, the cost of hardware, software, and services is still a major blocker within many companies.

It's early days for VR in the commercial market, and many of these roadblocks will disappear as the technology improves, use cases crystallize, pricing comes down, and the clear return on investment of using VR comes into view. In the meantime, the industry needs to move to embrace the growing demand for commercial VR, making it easier for companies ready to take the next step.

News You might have missed: Week of April 13, 2018

on April 13, 2018
Reading Time: 4 minutes

Apple cuts HomePod Orders

Bloomberg reported this week that, according to some Apple store workers, HomePod inventory is piling up. It also mentioned that by late March, Apple had cut some orders with Inventec Corp, one of the manufacturers that builds the HomePod.

Intel pushes FPGAs for mainstream enterprise acceleration

on April 12, 2018
Reading Time: 4 minutes

Though NVIDIA gets most of the attention for accelerating the move to more advanced processing technologies with its massive push of GPU hardware into servers for all kinds of general compute purposes, and rightfully so, Intel has a couple of irons in the fire as well. While we are still waiting to see what Raja Koduri and the graphics team can do on the GPU side itself, Intel has another angle to improve efficiency and performance in the data center.

Intel's re-entry into the world of accelerators comes on the heels of a failed attempt at bridging the gap with a twist on its x86 architecture design, initially called Larrabee. Intel first announced and showed this technology, which combined dozens of small x86 cores on a single chip, at an IDF under the pretense of it being a discrete graphics solution. That well dried up quickly as the engineers realized it couldn't keep up with the likes of NVIDIA and AMD in graphics rendering. Larrabee eventually became a discrete co-processor called Knights Landing, shipping in 2015 but killed off in 2017 due to lack of customer demand.

Also in 2015, Intel purchased Altera, one of the largest makers of FPGAs (field-programmable gate arrays), for just over $16 billion. These chips are unique in that they can be reprogrammed and adjusted as workloads and algorithms shift, allowing enterprises to have the equivalent of custom-architecture processors on hand as they need them. Xilinx is the other major player in this field, and now that Intel has gobbled up Altera, it must face down the blue-chip giant in a new battle.

Intel's purchase decision made a lot of sense, even at the time, but it is showing the fruits of that labor now. As NVIDIA has proven, more and more workloads are shifting from general compute processors like the Xeon family to efficient and powerful secondary compute models. The GPU is the most obvious solution today, but FPGAs are another, and one that is growing substantially with the move to machine learning and artificial intelligence.

Though initially shipping as a combination of Xeon processor and FPGA die on a single package, Intel is now offering customers Programmable Acceleration Cards (PACs) that feature the Intel Arria 10 GX FPGA as an add-in option for servers. These are half-height, half-length PCI Express add-in cards with a PCIe 3.0 x8 interface, 8GB of DDR4 memory, and 128MB of flash for storage. They operate inside a 60-watt envelope, well below the Xeon CPUs and NVIDIA GPUs they are looking to supplant.

Intel has spent a lot of time and money developing the necessary software stack for this platform as well, called the Acceleration Stack for Intel Xeon Scalable processors with FPGAs. It provides acceleration libraries, frameworks, SDKs, and the Open Programmable Acceleration Engine (OPAE), all of which attempt to lower the barrier to entry for developers to bring work to the FPGA field. One of Intel's biggest strengths over the last 30 years has been its focus on developers and enabling them to code and produce on its hardware effectively – I have little doubt Intel will be class-leading for its Altera line.

Adoption of the accelerators should pick up with the news that Dell EMC and Fujitsu are selling servers that integrate the FPGAs for the mainstream market. Gaining traction with top-tier OEMs like Dell EMC means awareness of the technology will increase quickly, and adoption, if the Intel software tools do their job, should spike. The Dell PowerEdge R740 and R740XD will be able to support up to four FPGAs, while the R640 will support a single add-in card.

Though performance claims are light, mainly due to the bespoke nature of each FPGA implementation and the customer using and coding for it, Intel has stated that tests with the Arria 10 GX FPGA can see a 2x improvement in options trading performance, 3x better storage compression, and 20x faster real-time data analytics. One software partner, Levyx, which provides high-performance data processing software for big data, built an FPGA-powered system that achieved "an eight-fold improvement in algorithm execution and twice the speed in options calculation compared to traditional Spark implementations."

These are incredible numbers, though Intel has a long way to go before adoption of this and future FPGA technologies can rival what NVIDIA has done for the data center. There is a large opportunity in the areas of AI, genomics, security, and more. Intel hasn't demonstrated a sterling record with new market infiltration in recent years, but thanks to the experience and expertise the Altera team brought with that 2015 acquisition, Intel appears to be on the right track to give Xilinx a run for its money.

The Consumer Right to Privacy vs. Our Right to Not be Tracked

on April 12, 2018
Reading Time: 5 minutes

I have officially watched more C-SPAN 3 over the last few days than at any other point in my life combined. The subject: Mark Zuckerberg taking questions from the United States Senate. In the vast majority of industry or executive presentations I give, I always start with a point about the times we live in being unprecedented in technology industry history. The examples I have to support this point are many, and the current situation Facebook finds itself in is no exception. Facebook, every nation, and every consumer now face a philosophical fork in the road. The question is less about a consumer's right to privacy; no one seems to disagree that it is a right. The question is: do we have the right not to be tracked?

Are Self-driving Cars Targets for Advertisers?

on April 11, 2018
Reading Time: 3 minutes

One of the more inevitable technologies that will impact our lives in the next 5-10 years is the autonomous vehicle. And one of the ultimate virtues of these cars will be that you, as a passenger, will not have to actually drive the vehicle but could instead lean back and read, watch a video, etc.

But as this chart below points out, any person in a self-driving car will probably become a major target for location-based advertising.
The key question this survey asked was “Imagine that your car could suggest things to you as you are driving around town, based on the places you are passing along your route. How useful would you find the following?”

US Consumers Want More Transparency from Facebook

on April 11, 2018
Reading Time: 5 minutes

Mark Zuckerberg went on record to say that, thus far, the #DeleteFacebook meme has not had much impact. We, at Creative Strategies, wanted to see if that was the case and, more importantly, wanted to understand how the general public felt after the Cambridge Analytica incident. We ran a study across 1,000 Americans representative of the US population in gender and age.

It would seem impossible for people to have missed the Cambridge Analytica incident, given the extensive press coverage. But we wanted to make sure people outside the tech bubble were aware of it, so we asked: 39% said they were very aware, and another 37% said they were somewhat aware of what happened. Awareness among men was higher, with 48% saying they were very aware, compared to 29% of women.

There is no Trust without Transparency

Once we established awareness, we wanted to understand what it would take to gain users' trust back if their trust was indeed impacted. What we found was quite interesting. First, 28% of the people we interviewed never trusted Facebook to begin with. This number grows to 35% among men. When it comes to gaining trust back, the answer seems to rest on understanding and power. More precisely: gaining a better understanding of what data is shared (41%) and exercising the power to decide whether or not we are okay with sharing such data (40%). One of the answer options we gave was about making it easier to manage the amount of personal information shared, but this was not as much of an ask for the panel, with only 33% selecting it. It seems to me that what users are asking for is more transparency rather than more tools to manage their settings, which makes a lot of sense.

How can I manage my information if I don't even understand what is collected and how it is used? This was a point several senators made during Mark Zuckerberg's hearing, highlighting how long the Facebook terms of service document is. Zuckerberg's response was that things are not as complicated as they seem: Facebook users know that what they share can be used by Facebook. Unfortunately, it is not that simple. The ramifications of how the data users share is used are quite complicated, and even if you understand the Facebook business model, you would be hard-pressed to know how far your data goes.

Better management of toxic content is also an action that would help with trust: 39%. Not surprisingly, this is a hot button for women (49%) more than it is for men (31%). I say not surprisingly not because I have experienced it first-hand but because, over the years, several studies have shown a higher number of harassment cases for women online. The Rad Campaign Online Harassment Survey in 2014 found that women are more likely to use social media than men. Sixty-two percent of people who reported harassment experienced it on Facebook, 24% on Twitter, 20% via email, and 18% on YouTube. The Halt Abuse cumulative statistics for 2000-2011 analyzed 11 years of online harassment and found that women made up 72% of victims and men 47.5% of perpetrators.

Only 15% of our panelists said there is nothing Facebook can do to regain their trust as they are just ready to move on to something else. Of course, if this sentiment were similar across other countries, 15% of 2 billion users is a sizable chunk of the installed base that would disappear. What is interesting is that the number grows to 18% among people who said they were very aware of the Cambridge Analytica incident. Our study ran before the new details on the number of people impacted by the Cambridge Analytica data breach were released, and before the AggregateIQ and CubeYou breaches were revealed. It would be fair to assume that this initial negative sentiment might indeed grow.

Lower Engagement is the real Risk for Facebook

Privacy matters to our panelists. Thirty-six percent said they are very concerned about it, and another 41% said they are somewhat concerned.

Their behavior on Facebook has somewhat changed due to their privacy concerns. Seventeen percent deleted the Facebook app from their phone, 11% deleted it from other devices, and 9% deleted their account altogether. These numbers might not worry Facebook too much, but there are less drastic steps users are taking that should be worrying, as they directly impact Facebook's business model.

The largest group of panelists (39%) say they are more careful not just with what they post but also with what they like and how they react to brands' and friends' posts. Thirty-five percent said they are using Facebook less than they used to, and another 31% changed their settings. Twenty-one percent said they are planning to use Facebook much less in the future. Others, in the free-format comments, pointed out that they will take a more voyeuristic stance, going on Facebook to look at what people post but not engaging. This should be the real concern for Facebook, as unengaged users will prove less valuable to the brands who are paying for Facebook's services.

Connecting People

After reading through the data on privacy concerns and plans to lower engagement, one wonders why people are on Facebook in the first place. Here is where Zuckerberg's explanation of what he created rings true to users: connecting people. Fifty-three percent of our panelists are on Facebook to keep in touch with friends and loved ones who don't live in the area. Forty-eight percent said they are on Facebook to keep up with friends they had lost touch with. Messenger and Groups are the other two drivers to the platform, attracting 19% and 16% of the panelists respectively. For those panelists who are very concerned about privacy, the opportunity to keep in touch with people is an even stronger driver and seems to be enough to make using Facebook worthwhile.

Twenty percent of the panel said they are on Facebook because they are bored. This data point deserves, in my view, a whole separate discussion on the role social media plays as the digital gossip magazine or the real-life soap opera channel.

Facebook was built to connect people, Zuckerberg kept repeating to senators in Washington, and 40% of our panelists who have been on the platform for more than seven years wish Facebook could go back to being how it was. Alas, I doubt that is an option, though Zuckerberg did say a paid version of Facebook might be one. When we asked our panelists if they would be interested in paying for a version of Facebook without advertising and with stricter guarantees of privacy protection, 59% said no.

Implementing changes to the platform so that privacy can be better protected is not trivial when it impacts the core business model. Some of the discussion in Washington pointed to the monopoly Facebook has, which could be the biggest factor in determining how forgiving users will be. What is clear, however, is that the size Facebook has reached makes this a global issue, not just a US issue.

The New Security Reality

on April 10, 2018
Reading Time: 3 minutes

On the eve of next week’s RSA conference, it’s worthwhile to take a step back to reconsider what security means in today’s tech world. While the show has traditionally concentrated on cybersecurity threats, in an age of YouTube campus shootings, autonomous automobile-related deaths, and nation-state-driven, influence-peddling social media campaigns, the conversations at this year’s show are likely to be much more wide-ranging.

Admittedly, it’s not realistic to think that all the major issues driving these new kinds of threats can be addressed in a single conference. Nevertheless, the reality of these threats dramatically highlights both the depth of influence that technology now has regarding all forms of security, and just how far the tech world has reached into more traditional physical and political notions of security. Tech-related security issues now affect everything and everyone in some way or other.

In light of this new perspective, it’s also important to rethink how tech-related security challenges get addressed. While individual company efforts will clearly continue to be important, it’s also clear now that the only way to effectively tackle these kinds of big issues is through cooperation among many players.

In the past, many companies have been reluctant to share the security issues impacting them for fear of being seen as naïve or unprepared. With large scale brand trust concerns at stake, as well as the egos and reputations of many proud security professionals, perhaps it wasn’t surprising to see these kinds of reactions.

Today, however, the simple fact is that every company of any size is getting digitally attacked on a daily basis in some form or another, and a huge percentage of companies have had at least some type of security compromise impact them—whether they’ve admitted it or not.

Given this troubling, but realistic, landscape, it's time for companies to more aggressively seek to partner with others to address the enormous tech-related security challenges we all face. In some cases, that might be via sharing critical, or even potentially sensitive, data to ensure that others can learn from the challenges that have already occurred. For example, companies involved in testing autonomous cars ought to be sharing their results with others in the industry, instead of hoarding them and treating those results as a proprietary resource. In other situations, cooperation might take the form of a more open, willing, and proactive attitude toward sharing experiences and learning best practices from one another.

Regardless of the approach, it’s going to take some strength of corporate character and some new ways of thinking to effectively address these issues.

Interestingly, one of the better and more recent examples of this proactivity that I've witnessed is the effort Intel made to contact and engage with some of its key competitors in the semiconductor space—AMD and ARM—when it learned about the Spectre and Meltdown bugs that plagued many modern CPUs.

In case you need a quick refresher, the Spectre and Meltdown issues essentially involve manipulating a characteristic of modern CPU design called speculative execution that has been common in processors from these and many other companies for roughly two decades. As the story played out, Intel took the vast majority of the heat, despite the fact that many other large companies, including Apple and Google, had to deal with most of the same issues.

Part of the focus was (and still is) undoubtedly due to the fact that Intel is the largest semiconductor manufacturer in the world and traditionally known as the major CPU provider to many computing devices. But another reason is that Intel took the lead in publicizing the challenges and continually provided updates on remedies for them. In fact, the company helped coordinate one of the more impressive briefings I’ve been on in nearly 20 years as an analyst by pulling together Intel, AMD and ARM people on the same call to explain the news shortly before it was made public.

At the time, it was a bit shocking to have these competitors come together to discuss the issue, but in retrospect, I realize it was exactly the kind of effort that the tech industry is going to need moving forward to address the kind of big security issues we all will likely continue facing.

Instead of benefitting from taking a more proactive approach to these issues, Intel took a great deal of criticism in both the tech and general press, much of it unfairly from my perspective. The company has followed up with a series of commitments to security—including, notably, a very public "security first" pledge from CEO Brian Krzanich—and is using the challenges that the exploits created as a catalyst for building a full complement of better security solutions moving forward.

The process clearly isn’t an easy one, but given the harsh new security realities that we’re facing, it’s the kind of effort we’re going to need other tech companies to make as well.

Should Apple Create a Social Network?

on April 9, 2018
Reading Time: 3 minutes

Recently, Tim Cook gave multiple interviews on Apple's commitment to protecting its customers' privacy. This is part of the DNA that Steve Jobs instilled in Apple's leadership after he came back in 1997.

Here are two key things Cook stated in his interviews on this subject:

On Apple’s recent emphasis on customer privacy

“We do think that people want us to help them keep their lives private. We see that privacy is a fundamental human right that people have. We are going to do everything that we can to help maintain that trust. …

Our view on this comes from a values point of view, not from a commercial interest point of view. Our values are that we do think that people have a right to privacy. And that our customers are not our products. We don’t collect a lot of your data and understand every detail of your life. That’s just not the business that we are in.”

On how customer purchasing history is used
“Let me be clear. If you buy something from the App Store, we do know what you bought from the App Store, obviously. We think customers are fine with that. Many customers want us to recommend an app. But what they don’t want to do, they don’t want your email to be read, and then to pick up on keywords in your email and then to use that information to then market you things on a different application that you’re using. …

If you’re in our News app, and you’re reading something, we don’t think that in the News app that we should know what you did with us on the Music app — not to trade information from app to app to app to app.”

As someone who has covered Apple for decades, I can attest that this privacy issue has been front and center for Steve Jobs and Apple since Jobs returned in 1997.

Because of Apple’s business model, which focuses on products like the Mac, iPad, iPhone and Apple Watch, they are not reliant on ads to grow their business. This allows them to deliver a highly secure and private experience to those who buy and use their products and services.

Given the problems that Facebook is having, and how it, Twitter, and Google can only make money through ads, perhaps it is time for Apple to create its own secure, private social network. Apple already has the back-end infrastructure in place to support this and could charge a nominal fee of perhaps $1.99 to $2.99 a month to cover the additional back-end network infrastructure needed to support hundreds of millions of social network users on its service.
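
A back-of-envelope calculation shows why even modest uptake could fund that infrastructure. The adoption rate below is purely a hypothetical assumption; only the fee range here and the 800 million credit-card-on-file figure cited below come from this piece:

```python
# Back-of-envelope only; the adoption rate is a made-up assumption.
accounts_with_cards = 800_000_000
assumed_adoption = 0.10                      # hypothetical: 10% sign up
monthly_fee_low, monthly_fee_high = 1.99, 2.99

subscribers = accounts_with_cards * assumed_adoption
annual_low = subscribers * monthly_fee_low * 12
annual_high = subscribers * monthly_fee_high * 12
print(f"~${annual_low/1e9:.1f}B to ${annual_high/1e9:.1f}B per year")
```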

I see Apple and potential users benefiting from an Apple-hosted secure private network in many ways:

  • First, a secure private social network from Apple would give anyone who uses it an ad-free, highly private social network that would allow them to interact with their friends unfettered by ads of any type. Without ads, Apple is not scraping their data, and people would be free to share things with each other without fear of Apple or anyone else ever seeing what they post, other than the people a user allows to view their profile through a friend confirmation.
  • Second, Apple already has over 1.2 billion customers around the world, and of those, 800 million have given Apple their credit card to use to pay for additional services. That is a very large base to tap into if Apple should decide to do a secure, ad-free social network of their own.
  • Third, this could entice many folks outside of the Apple ecosystem to join Apple's social network for the privacy alone and ditch Facebook completely.
    When asked what he would do if he were currently faced with the problems confronting Facebook CEO Mark Zuckerberg, Cook said: “I wouldn’t be in this situation.”

Tim Cook believes that an ad-free social network is the only way one could provide a truly secure social network. Of course, Mark Zuckerberg disagreed with Cook and called his comments "glib". Zuckerberg thinks he can create a secure social network that he can keep free, supported by ads. The jury is still out on that one, but Apple's main business model is selling products. It could offer a very low-cost, secure, private social network rather easily should it see this as a real opportunity to keep people in the Apple ecosystem and entice others who are not in Apple's services and product network to join.

Will Apple do this? I would not bet against it. I have to believe that, at the very least, Apple's execs have been discussing it. They are the only ones who could deliver a private, secure social network that does not need ads to support it. And Tim Cook's comments show he understands the value of a social network that does not rely on ads.

Podcast: Facebook, Intel CPUs, YouTube

on April 7, 2018
Reading Time: 1 minute

This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell discussing the latest travails for Facebook, chatting about CPU battles for PCs and how the new Intel CPU announcements are affecting PC makers, and analyzing what the longer-term impact on Silicon Valley might be from the YouTube shooting incident.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast

Silicon Valley Giants Should Hold A Summit on Data Security and Privacy

on April 6, 2018
Reading Time: 3 minutes

Next week, Mark Zuckerberg will be trotted in front of Senate and House committees for what will surely be a high-profile admonishment of his company’s behavior. There’ll be a lot of bloviating by our elected representatives. Zuckerberg will be both prepared and contrite. But this issue is bigger than Facebook. Facebook might be the whipping boy du jour, but most of the Silicon Valley giants have exhibited some form of less than laudatory behavior at some recent point, between the unseemly uses of user data, security breaches, and lack of transparency.  That’s why Tim Cook’s vilification of Mark Zuckerberg earlier this week was actually a bit off-putting.

Notwithstanding next week’s expected spectacle, we should recognize that this is an important moment in the 20-year history of the broadband Internet. It is time for the Silicon Valley giants – who control a vastly outsized share of how users connect, communicate, and consume content – to come together and establish some rules of the road going forward. They need to show some leadership, and head off regulators from getting overly involved. Perhaps they should do this collectively, in some way. Why not a summit on data security and privacy?

Let’s first recognize that much as these companies compete, they are very much intertwined. Facebook, Apple, and Google, especially, would each be far less without each other. Rather than each of them taking this moment in time to address the issue of consumer data and privacy separately, it might make sense to do something collectively. There are three steps involved in this, in my view.

Step One is to establish some minimum level of what is appropriate to do with a customer's data. We can call it a 'code of conduct', although the term is a bit vague and overused. In some cases, we have been criticizing these companies for crossing a line when the line itself has not been particularly well defined. Consumers know that there's sort of an unofficial bargain here. Facebook, Google, and Twitter might be free, but these companies profit from customers' data and what they do online. Some subscribers might be OK with some of their data being made available to third parties – especially if they benefit from it in some way. But more direct communication and transparency is crucial here. We also need to recognize that as technology evolves and future opportunities arise, there will need to be an ongoing dialogue.

Step Two involves doing a much better job of communicating to customers exactly what is done with their data, and what sorts of controls they have over that. As an example, perhaps all Facebook users should be required to take an online tutorial that shows both how their data is or could be used, and what tools and settings are available to help them manage that. It is also incumbent on us to become better educated about what is being done with our data, and what we can do to exert some control over it. Perhaps the leading companies should get together and develop the data equivalent of 'driver's ed' as a prerequisite for using these [free] services.

Step Three is developing better consistency across platforms in how consumers manage privacy and what is done with their data. Right now, the experience of managing app settings, from privacy to notifications, is quite different between iOS and Android devices, and fragmented still further within the Android ecosystem. It's a whole other ballgame on PCs, and across the different OSs, browsers, and the like. Within leading apps, such as Facebook, Twitter, Gmail, Outlook, and so on, it would be great if there were some easy-to-find, easy-to-use 'hub' that functioned both as a source of information and as a consistent way to manage settings. Too often, this stuff is in obscure places and the configuration tools are somewhat obtuse.

At a minimum, this 'summit' would include Apple, Google, Facebook, Twitter, Microsoft, and possibly Amazon. It might also make sense to have AT&T, Verizon, Comcast, and possibly Netflix and Intel at the table. This group collectively represents an astounding 70% of the PC/mobile OS, digital advertising, online commerce, mobile/broadband, and pay TV markets.

We've loved thinking of these first 20 years of the broadband Internet as sort of a 'Wild West'. But now that much of the land has been grabbed, it's time to bring some order to these parts. I think it's a cop-out for Messrs. Cook and Zuckerberg to say that maybe there should be some 'regulation'. I'm not sure that regulators know enough about this stuff to get it right, and let's face it, our Congressional leaders aren't exactly high on the customer trust/satisfaction list themselves these days. I'd love to see the extremely smart and capable leaders of Silicon Valley have a collective deep think about how they can rebuild a relationship with their customers that has become a tad fractured of late.

NVIDIA GTC proves investment in developers can pay dividends

on April 5, 2018
Reading Time: 4 minutes

Last week NVIDIA hosted its annual GPU Technology Conference in San Jose and I attended the event to learn about what technologies and innovations the company was planning for 2018 and beyond. NVIDIA outlined its advancements in many of its growing markets including artificial intelligence, machine learning, and autonomous driving. We even saw new announcements around NVIDIA-powered robotics platforms and development capabilities. Though we were missing information on new GPU architectures or products aimed at the gaming community, there was a lot of news to take in from the show.

I came away from the week impressed with the execution of NVIDIA and its engineering teams, as well as the executive leadership that I got to speak with. CEO Jensen Huang was as energetic and lively as I have ever seen him on stage, and he maintained that energy during analyst and media briefings, mixing humor, excitement, and pride in equal doses. It is the kind of impact a show like GTC can have that doesn't make it into headlines or press releases, but for the audience the show caters to, it's critical.

The NVIDIA GTC site definitely states its goal upfront.

GTC remains one of the last standing developer-focused conferences from a major technology hardware company. Though NVIDIA will tell you (as it did me) that it considers itself as much a software company as a chip company, the fact is that NVIDIA is able to leverage its software expertise because of the technological advantages its current hardware lineup provides. While events like Facebook F8, Cisco DevNet, and Microsoft BUILD continue to be showcases for those organizations, hardware developer conferences have dwindled. Intel no longer holds its Intel Developer Forum, AMD has had no developer-focused show for several years, and giants like Qualcomm and Broadcom are lacking as well.

GTC has grown into a significant force of change for NVIDIA. Over the 10+ years of its existence, attendance has increased by more than 10x from initial numbers. The 2018 iteration drew more than 8,500 attendees, including developers, researchers, startups, high-level executives from numerous companies, and a healthy dose of media.

NVIDIA utilizes GTC to reach the audience of people that are truly developing the future. Software developers are a crucial piece, and the ability to equip them with information about tool sets, SDKs, and best practices turns into better applications and more usage models applied to GPU technology. The educational segment is impressive to see in person, even after many years of attendance. I find myself wandering through the rows and rows of poster boards describing projects that include everything from medical diagnosis advancements to better utilization of memory for ray tracing, all of course built on GPU processing. It's a reminder that there are real problems to solve and that much of the work is still done by these small groups of students, not by billion-dollar companies.

Of course, there is a benefit to NVIDIA. The more familiar these developers and researchers are with the technology and tools it provides, both in hardware and software, the better the long-term future for NVIDIA in the space. Technology leaders know that leading in technology itself is only part of the equation. You need to convince the right people that your better product is indeed better and provide the proof to back it up. Getting traction with development groups and fostering them with guides and information during the early stages of technological shifts is what helped create CUDA and cement it as the GPU compute language of choice for the better part of a decade.

NVIDIA wants the same to occur for machine learning and AI.

The GPU Technology Conference is the public-facing outreach program that NVIDIA spends a tremendous amount of money hosting. The beginnings of the show were modest and split equally between gaming and compute, but its growth and redirection into a professional development event show that the investment has paid dividends for the company. Just look at the dominance that NVIDIA has in the AI and ML spaces that it was previously a non-contender in; that is owed at least in part to the emphasis and money pumped into an event that produces great PR and great ideas.

As for other developer events, the cupboard is getting bare for hardware companies. Intel cancelled the Intel Developer Forum a couple of years back. In hindsight, this looks like an act of hubris: Intel believed it was big and important enough that it no longer needed to court developers and convince them to use its tech.

Now that Intel is attempting to regain a leadership position in the growing markets that companies like NVIDIA and Google have staked ground in, such as autonomous driving, artificial intelligence, and 5G, the company would absolutely benefit from a return of IDF. Whether the leadership at Intel recognizes the value that the event holds for developers (and media/analyst groups) remains to be seen. And more importantly, does that leadership understand the value it can and should provide to Intel's growing product groups?

There are times when companies spend money on events and marketing for frivolous and unnecessary reasons. But proving to the market (both developers and Wall Street) that you are serious about a technology space is not one of them. NVIDIA GTC proves that you can accomplish a lot of good this way, and I think the success the company has seen in machine learning proves its value. What started out as an event that many thought NVIDIA created out of hubris has turned into one of the best outward signs of being able to predict and create the future.

My Five Simple Rules to Survive Social Media

on April 4, 2018
Reading Time: 5 minutes

So much has happened over the past couple of weeks to make many reassess their engagement on social media. The Facebook and Cambridge Analytica debacle might have been the straw that broke the camel's back, but social media has been under scrutiny for quite some time. From Twitter's problem with hate speech, to Snapchat's dubious advertising, to Facebook, we have been reminded daily that social media is not a heavenly world.

Despite all that has been going on, however, I am not ready to pack everything up and walk away just yet. I have, though, reassessed how I use social media and realized that the following five simple rules might help me remain sane.

Invest time to figure out how things work

I am a strong advocate of taking matters into my own hands. When it comes to social media, you should not expect the platform you are using to be explicit about what data it collects and who it shares it with.

I believe that often in real life, and too often in our digital life, we give away personal information without thinking about who will eventually get their hands on it. If you are a parent and you have taken your kid to one of those jump/climb/play birthday parties, you know that the liability form they ask you to fill in is just a way to get your address for advertising. I have learned from a friend to say no to any additional information other than my signature. Why are we not that picky when it comes to social networks?

Of course, most sites don't make it easy for you to find the information, and when you do, understanding all the ramifications requires a law degree and the patience to read through it all. Many people don't realize how Facebook works. They might know it is funded by advertising, and they might know it uses some of their personal information, but they do not know how deep the link between the two is.

After the Cambridge Analytica news, I went into my Facebook settings and revoked access for all those apps I no longer use or could not even remember granting access to in the first place. I stopped logging in with Facebook or Google to any new service, app, or website, and instead I create an account. Yes, it is painful, but in most cases it only takes a couple of minutes more. I am particularly careful about allowing access to my friends' information. I might be OK with sharing, but I cannot make that decision for my friends.

Whether or not this is all a lost cause because there is plenty of data about us out in the wild does not really matter. The point is that some tools are available to us and while we should demand more from the platform owners, we should also start using what is already there.

Do Your Due Diligence on News

A big part of social media has to do with news. I know I use Twitter to get breaking news and analysis from my favorite tech reporters and commentators. The immediacy of social media is such that you feel like you have your hand on the pulse. Yet that immediacy does not leave enough time to vet the information being shared. Even relying on official sources might not be enough to avoid mistakes. I am not talking about fake news; I am just talking about news hot off the press that is being reported as it develops. If you are the kind of person who wants to follow news as it unfolds, make sure you validate the information, especially before sharing it. Yesterday's shooting at YouTube HQ was a sad example of how people also try to take advantage of the situation to spread hate and misinformation, as the Twitter account of a person who was caught up in the incident was hacked.

When it comes to news, you also need to be aware of your bubble. It is inevitable that you follow or befriend people you like and share interests with. This might leave you with a very one-sided set of information, even more so than when you bought your favorite newspaper or watched your favorite evening news show. It is on you to balance your sources.

Be Considerate

Be considerate to the people you interact with, and to yourself. I never say something on social media that I would not say in person, and this applies whether I am talking about people or brands. I do clean up my language more than I do in person, only because I don't know who is on the other side of my tweet or post. This is no different from not swearing in a room full of children, or when you don't know how people feel about it.

My digital me is also kinder than my real me. When you blast out an opinion or thought, you don't know every person you are going to reach. The number of people who will engage with you is probably only a fraction of the people you reach, and everyone you reach might be affected by what you say.

I might be kinder and have a cleaner vocabulary, but I am always myself. All right, maybe on Facebook, I am also the happier side of myself! The point is that I stay true to what I care about and what I believe. This is why at times I would veer from my tech coverage on Twitter to talk about women, diversity, and education.

Also, remember to be kind to yourself. Mute, unfollow, or report people who create a toxic environment. Don't think for a second that just because they do not say it straight to your face, it won't eventually affect you. In the same way you would stop talking to someone in real life, or stop going to a place where everyone is obnoxious, you should walk away from social media.

The Pros must outweigh the Cons

For me today, the pros of being active on social media still outweigh the cons.

From a personal perspective, it gives me a way to keep in touch with friends I still have in the UK and Italy whom I don't see often. I could call or write, but the reality is that time is limited, and as much as you care about people it is hard to share the details of your everyday life over email or even a call.

From a professional standpoint, social media does make things easier. Think about changing jobs before social media and how much harder it was for people with a public profile to keep their contacts. Sure, people could find you, but it was certainly not as easy. Now people can find me on Facebook, Messenger, Twitter, even on LinkedIn. If they cannot find you, it really is because they don't want to!

Fasting is Good for the Soul

I am always surprised how much time I can spend scrolling through Facebook and Twitter. Twitter, in particular, is a window to a ton of information if you follow the right people.

When I am less present on social media, I do feel I am less aware of the latest news, but at the same time there is less anxiety. I might get the news at the end of the day or the following day, but those stories are more developed, without having gone through that fast pace that only increases my apprehension. This has been especially true since the American election and Brexit, as pretty much everybody in my feed started talking more openly about politics.

As you read this article, I will be enjoying a few days of exploring Washington DC with my daughter. I will have my Apple Watch set up to show breaking news, and I will check in probably at the start and the end of my day. For the rest of the time, though, I will try my best to be in the moment, to make memories with my kid knowing that I will not be missed much on social media and that everybody and everything will still be there when I get back.


Arm Macs Cometh. How Mac Differentiation Will Deepen

on April 3, 2018
Reading Time: 3 minutes

A Bloomberg report that came out yesterday speculated that Arm-based Macs could come as early as 2020. Given both Apple's silicon roadmap and its hardware and software roadmap, that timing seems plausible. The bigger question is why Apple would make a change and, more importantly, pour so much R&D into developing a custom chip for Mac hardware when it still only sells roughly 20 million Macs a year. I think a few key reasons make the most sense.

Making AI Real

on April 3, 2018
Reading Time: 4 minutes

Back in the 1980s and ‘90s, General Electric (GE) ran a very successful ad campaign with the tagline “We Bring Good Things to Life.” Fast forward to today, and there’s a whole new range of companies, many with roots in semiconductors, whose range of technologies is now bringing several good tech ideas—including AI—to life.

Chief among them is Nvidia, a company that began life creating graphics chips for PCs but has evolved into a “systems” company that offers technology solutions across an increasingly broad range of industries. At the company’s GPU Technology Conference (GTC) last week, they demonstrated how GPUs are now powering efforts in autonomous cars, medical imaging, robotics, and most importantly, a subsegment of Artificial Intelligence called Deep Learning.

Of course, it seems like everybody in the tech industry is now talking about AI, but to Nvidia's credit, they're starting to make some of these applications real. Part of the reason is that the company has been at it for a long time. As luck would have it, some of the early, and obvious, applications for AI and deep learning centered around computer vision and other graphically-intensive applications which happened to be a good fit for Nvidia's GPUs.

But it's taken a lot more than luck to evolve the company's efforts into the data center, cloud computing, big data analytics, edge computing, and the other applications they're enabling today. A focused long-term vision from CEO Jensen Huang, solid execution of that strategy, extensive R&D investments, and a big focus on software have all allowed Nvidia to reach a point where they are now driving the agenda for real-world AI applications in many different fields.

Those advancements were on full display at GTC, including some that, ironically, have applications in the company’s heritage of computer graphics. In fact, some of these developments finally brought to life a concept for which computer graphics geeks have been pining for decades: real-time ray tracing. The computationally-intensive technology behind ray tracing essentially traces rays of light that bounce off objects in a scene, enabling hyper-realistic computer-generated graphics, complete with detailed reflections and other visual cues that make an image look “real”. The company’s new RTX technology leverages a combination of their most advanced Volta GPUs, a new high-speed NVLink interconnect between GPUs, and an AI-powered software technology called OptiX that “denoises” images and allows very detailed ray-traced graphics to be created in real-time on high-powered workstations.
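
To make the idea a bit more concrete, here is a minimal sketch, in plain Python rather than anything from Nvidia's RTX or OptiX stack, of the core step the paragraph describes: cast a ray into a scene, find where it hits an object (a single sphere here), and shade that point based on how directly it faces a light. The scene, function names, and numbers are all illustrative; a real ray tracer repeats this per pixel and adds bounces, shadows, and denoising on top.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance t along the ray to the sphere, or None if it misses."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # quadratic discriminant; direction is assumed to be unit length
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def shade(normal, point, light_pos):
    """Simple Lambertian shading: brightness depends on the angle to the light."""
    to_light = [l - p for l, p in zip(light_pos, point)]
    length = math.sqrt(sum(x * x for x in to_light))
    to_light = [x / length for x in to_light]
    return max(0.0, sum(n * l for n, l in zip(normal, to_light)))

# Trace one ray from a camera at the origin straight down the -z axis, toward a
# unit sphere centered at (0, 0, -3), lit from the upper left.
origin, direction = (0.0, 0.0, 0.0), (0.0, 0.0, -1.0)
center, radius, light = (0.0, 0.0, -3.0), 1.0, (-2.0, 2.0, 0.0)

t = intersect_sphere(origin, direction, center, radius)
if t is not None:
    hit = [o + t * d for o, d in zip(origin, direction)]
    normal = [(h - c) / radius for h, c in zip(hit, center)]
    print("hit at", hit, "brightness", round(shade(normal, hit, light), 3))
else:
    print("ray missed the sphere")
```

Even this toy version hints at why the workload is so demanding: a full frame requires millions of such ray tests, which is exactly the kind of highly parallel arithmetic GPUs are built for.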

On top of this, Nvidia announced a number of partnerships with companies, applications, and open standards that have a strong presence in the datacenter for AI inferencing applications, including Google’s TensorFlow, Docker, Kubernetes and others. For several years, Nvidia has offered tools and capabilities that were well-suited to the initial training portion of building neural networks and other tools used in AI applications. At this year’s GTC, however, the company focused on the inferencing half of the equation, with announcements that ranged from a new version (4.0) of a software tool called TensorRT, to optimizations for the Kaldi speech recognition framework, to new partnerships with Microsoft for WindowsML, a machine learning platform for running pre-trained models designed to do inferencing in the latest version of Windows 10.

The TensorRT advancements are particularly important because that tool is intended to optimize the ability of data centers to run inferencing workloads, such as speech recognition for smart speakers and object recognition in real-time video streams, on GPU-equipped servers. These are the kinds of capabilities that real-world AI-powered devices have begun to offer, so improving their efficiency should have a big influence on their effectiveness for everyday consumers. Data center-driven inferencing is a very competitive market right now, however, because Intel and others have had some success here (such as Intel's recent efforts with Microsoft to use FPGA chips to enable more contextual and intelligent Bing searches). Nevertheless, it's a big enough market that there are likely to be strong opportunities for Nvidia, Intel and other upcoming competitors.
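
For readers less familiar with the training/inferencing split, here is a minimal sketch of the inferencing side, written in PyTorch purely for illustration; it is not TensorRT or any Nvidia-specific tooling, and the model, weights file name, and sizes are made up. The pattern is simply: take a model whose weights were already trained, switch it to evaluation mode, and run new inputs through it with gradients disabled.

```python
import torch
import torch.nn as nn

# A toy classifier standing in for a network whose weights were already trained
# elsewhere (the "training half" of the equation).
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 4),  # four made-up output classes
)

# In a real deployment the trained weights would be loaded from disk, e.g.
# model.load_state_dict(torch.load("classifier_weights.pt"))  # hypothetical file name

model.eval()  # switch to inference behavior (matters for dropout, batch norm, etc.)

# Serve a batch of incoming requests; gradients are not needed at inference time.
batch = torch.randn(8, 16)  # eight feature vectors standing in for real inputs
with torch.no_grad():
    logits = model(batch)
    predictions = logits.argmax(dim=1)

print(predictions.tolist())

# On a GPU-equipped server the model and inputs would be moved onto the GPU,
# which is the setting tools like TensorRT are built to optimize:
# model = model.to("cuda"); batch = batch.to("cuda")
```

Tools like TensorRT sit at this stage of the pipeline, taking an already-trained network and optimizing how it executes on GPU-equipped servers.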

For automotive, Nvidia launched its Drive Constellation virtual reality-based driving simulation package, which uses AI to both create realistic driving scenarios and then react to them on a separate machine running the company’s autonomous driving software. This “hardware-in-the-loop” based methodology is an important step for testing purposes. It allows these systems to both log significantly more miles in a safe, simulated fashion and to test more corner case or dangerous situations, which would be significantly more challenging or even impossible to test with real-world cars. Given the recent Uber and Tesla autonomous vehicle-related accidents, this simulated test scenario is likely to take on even more importance (and urgency).
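
A rough way to picture this "hardware-in-the-loop" arrangement is as two programs passing messages in a loop: a simulator that generates scenarios (including rare, dangerous corner cases) and emits synthetic sensor frames, and a driving stack on separate hardware that consumes each frame and returns control commands for the simulator to apply. The sketch below is purely conceptual Python with made-up class and method names; it is not Drive Constellation's actual interface.

```python
import random
from dataclasses import dataclass

@dataclass
class SensorFrame:
    """A synthetic camera/radar snapshot produced by the simulator."""
    obstacle_distance_m: float

@dataclass
class ControlCommand:
    """Braking decision returned by the driving stack (0.0 = none, 1.0 = full)."""
    brake: float

class Simulator:
    """Generates scenarios, including rare corner cases, and applies commands."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.distance = 100.0

    def next_frame(self) -> SensorFrame:
        # Occasionally spawn a dangerously close obstacle to stress-test the stack.
        if self.rng.random() < 0.1:
            self.distance = self.rng.uniform(2.0, 10.0)
        return SensorFrame(obstacle_distance_m=self.distance)

    def apply(self, cmd: ControlCommand):
        # Braking opens the gap to the obstacle; otherwise the vehicle closes in.
        self.distance += 5.0 * cmd.brake - 1.0

class DrivingStack:
    """Stand-in for the autonomous driving software running on separate hardware."""
    def decide(self, frame: SensorFrame) -> ControlCommand:
        return ControlCommand(brake=1.0 if frame.obstacle_distance_m < 15.0 else 0.0)

sim, stack = Simulator(), DrivingStack()
for step in range(1000):                 # each loop iteration is one simulated step
    frame = sim.next_frame()
    command = stack.decide(frame)
    sim.apply(command)
    if frame.obstacle_distance_m < 1.0:  # log any collision-grade failure for review
        print(f"step {step}: failure at {frame.obstacle_distance_m:.1f} m")
print("simulation complete")
```

The appeal is visible even in the toy version: the simulator can be told to produce as many dangerous scenarios as you like, and every decision the driving stack makes can be logged and replayed without a single real car on the road.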

Nvidia also announced an arrangement with Arm to license its Nvidia Deep Learning Accelerator (NVDLA) architecture into Arm's AI-specific Trillium platform for machine learning. This allows Nvidia's inferencing capabilities to be integrated into what are expected to be billions of Arm core-based devices built into IoT (Internet of Things) products that live on the edge of computing networks. In effect, it extends AI inferencing to even more devices.

Finally, one of the more impressive new applications of AI that Nvidia showed at GTC actually ties it back with GE. Several months back, the healthcare division of GE announced a partnership with Nvidia to expand the use of AI in its medical devices business. While some of the details of that relationship remain unknown, at GTC, Nvidia did demonstrate how its Project Clara medical imaging supercomputer could use AI not only on newer, more capable medical imaging devices, but even with images made from older devices to improve the legibility, and therefore the medical value, of things like MRIs, ultrasounds, and much more. Though no specifics were announced between the two companies, it's not hard to imagine that Nvidia will soon be helping GE to, once again, bring good things to life.

The promise of artificial intelligence, machine learning, and deep learning goes back decades, but it's only in the last few years and even, really, the last few months that we're starting to see it come to life. There's still a tremendous amount of work to be done by companies like Nvidia and many others, but events like GTC help to demonstrate that the promise of AI is finally starting to become real.

Smart Home Competition Fuels Innovation and Creativity

on April 2, 2018
Reading Time: 3 minutes

Although still in its infancy, investment and engagement in artificial intelligence (AI) research continues to grow. A recent Consumer Technology Association (CTA) report citing International Data Corporation (IDC) estimates found global spending on AI was nearly 60 percent higher in 2017 than in 2016 and is projected to grow to $57 billion by 2021. And almost half of large U.S. companies plan to hire a chief AI officer in the next year to help incorporate AI solutions into operations.

As exciting as these changes are, however, one of the most exciting examples of AI right now hits a little closer to home – in fact for many of us, it’s in our living rooms.

Digital assistants are one of the hottest trends in AI, in large part thanks to the vast array of functions they offer consumers. These helpful, voice-activated devices can answer questions, stream music and manage your calendar. What's more, they can turn off the lights, lock the doors and start your appliances when connected to compatible home systems. Budding support for digital assistants across the smart home ecosystem shifts the entire landscape of control from a web of apps to the simplicity of the human voice.

At CES® 2018, we saw many different digital assistants in action, from well-known players such as Google Assistant, Apple Siri and Amazon Alexa to other disruptive options such as Samsung’s Bixby, Microsoft’s Cortana and Baidu’s Raven H. Competition has spurred creativity and boosted innovation, as more and more products that connect with these virtual helpers emerge on the scene.

Competition in the smart speaker category, for example, has prompted greater differentiation among these devices as brands deploy unique features to attract consumers. The strategy is expected to pay off. CTA research projects U.S. smart speaker sales will increase by 60 percent in 2018 to more than 43.6 million units. Almost overnight, smart speakers powered by digital assistants have become the go-to smart home hub, a key component of the Internet of Things (IoT) and the catalyst driving smart home technology revenue growth of 34 percent to a predicted $4.5 billion this year.

The smart speaker category is also boosting other categories of smart home innovations. The rise of smart home technology – expected to reach 40.8 million units in the U.S. in 2018, according to CTA research – creates a new space for digital innovators to connect more devices, systems and appliances in more useful ways. This, in turn, is redefining the boundaries of the tech industry. Competition has fueled creativity, and creativity has expanded convenience – and Americans love it.

Fifteen years ago, we didn’t necessarily think of kitchen and bath manufacturers such as Kohler or Whirlpool as tech companies. Today, these companies are finding ways to integrate their products into the IoT, such as Whirlpool’s “Scan-to-Cook” oven and Kohler’s Verdera smart mirror. And Eureka Park™ – the area of the CES show floor dedicated to startups – hosted dozens of smart home innovators from around the world in January, launching their products for the first time to a global audience. Part of what’s so amazing about these technologies is they work together across platforms to create more efficient, more economical, more livable homes.

For example, South Carolina-based Heatworks developed a non-electric system for heating water, along with an app that lets system users control water temperature and shower length from their phones. New York-based Solo Technology Holdings has created the world’s first smart safe that sends you mobile alerts when it opens. Lancey Energy Storage, out of Grenoble, France, introduced the first smart electric heater, which saves more money and energy than traditional space heaters. And Israeli startup Lishtot showcased a keychain-sized device that tests drinking water for impurities and shares that data wirelessly via Bluetooth. These are just a few of the innovations made possible by IoT.

The IoT revolution has leveraged what I like to call the four C’s: connectivity, convenience, control and choice. Just as we experience the physical world with our five senses, we experience the digital world through the four C’s – they’ve become organic to our modern daily life, yet they are subtle enough that we often take them for granted. Consumers expect the four C’s to be ubiquitous. They are the default settings that anchor our digital experiences, which now increasingly includes our homes and our appliances.

The smart home phenomenon at CES represents what the tech industry does so well: companies big and small leading the IoT charge, crafting unique innovations that can be implemented across ecosystems. And everyone – from the largest multinational companies to the smallest, most streamlined startups – has an opportunity to redefine what it means to be at home.

It’s a redefinition that consumers embrace. Over the course of this year, I have no doubt that we’ll see the efficiencies and improvements technology delivers expanding beyond the home, into our workplaces and our schools. This remarkable evolution – driven by visionary innovation and fierce competition – is proof that technology is improving our lives for the better, saving us time and money, solving problems large and small and raising the standard of living for all.

Podcast: Apple Education, NVidia Tech Conference, Microsoft Reorg, Facebook Memo

on March 31, 2018
Reading Time: 1 minute

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing the news from Apple’s Education event, analyzing the NVidia GPU Technology Conference, chatting about the recent Microsoft reorganization, and debating the impact of the recent Facebook memo release.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed to our podcast is: techpinions.com/feed/podcast