Podcast: Tech Congressional Hearings, Apple Event Preview, CEDIA, Sony

This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell discussing the Congressional hearings with major tech players Facebook and Twitter, previewing what they’d like to see Apple introduce at their event next week, and describing some of Sony’s announcements at the CEDIA trade show as well as new core technology developments they’ve recently introduced.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Despite Rumors, 7nm Is Not Slowing Down for Qualcomm

Earlier this week, a story ran on Digitimes indicating there might be some problems and a slowdown with the rollout of 7nm chip technologies for Qualcomm and MediaTek. Digitimes is a Taiwan-based media outlet that has been tracking the chip supply chain for decades but is known to have a rocky reliability record when it comes to some of its stories and sources.

The author asserts that “Instead of developing the industry’s first 7nm SoC chip” that both of the fabless semiconductor companies mentioned have “moved to enhance their upper mid-range offerings by rolling out respective new 14/12nm solutions.”

But Qualcomm has already built its first 7nm SoC and we are likely to see it this year at its annual Snapdragon Tech Summit being held in Maui, Hawaii this December. The company has already sent out “save the date” invites to media and analysts and last year’s event was where it launched the Snapdragon 845, so it makes sense it would continue that cadence.

If that isn’t enough to satisfy doubters, Qualcomm went as far as to publish a press release that the “upcoming flagship” mobile processor would be built on 7nm and that it had begun sampling this chip to multiple OEMs building the next generation of mobile devices. The press release quotes QTI President Cristiano Amon as saying “smartphones using our next-generation mobile platform [will launch] in the first half of 2019.”

Digitimes’ claim that both Qualcomm and MediaTek have “postponed” launches from 2018 to 2019 runs counter to all the information we have received over the previous six months. As far as we can tell, the development of the next Snapdragon product and TSMC’s 7nm node is on track and operating as expected.

12nm/14nm refinements are coming

The assertion that Qualcomm is enhancing upper- and mid-range platforms around the existing 14nm and 12nm process nodes is likely true. It is common for the leading-edge foundry technologies to be limited to the high performance and/or high efficiency products that both require the added capability and can provide higher margins to absorb the added cost of the newer, more expensive foundry lines.

There could be truth to the idea of chip companies like Qualcomm putting more weight behind these upper mid-range SoCs due to their alignment with the 5G rollout across various regions of the globe. But this doesn’t indicate that development has slowed in any way for the flagship platforms.

7nm important for pushing boundaries

Despite these questions and stories, the reality is that the 7nm process is indeed necessary for the advancement of the technology that will push consumer and commercial products to new heights as we move into the next decade. Building the upcoming Snapdragon platform on 7nm means Qualcomm can provide a smaller, denser die to its customers while also targeting higher clock speeds and additional compute units. This means more cores, new AI processing engines, better graphics, and integrated wireless connectivity faster than nearly any wired connection.
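To make the density point concrete, here is a naive, back-of-envelope sketch of why a full node shrink matters. Process node names are marketing labels rather than literal transistor dimensions, so treat these numbers as rough intuition, not foundry data.

```python
# Naive scaling intuition only: node names are marketing labels, not
# literal feature sizes, so these figures are illustrative, not real.

def naive_area_ratio(old_nm: float, new_nm: float) -> float:
    """Idealized die-area ratio if every feature shrank linearly."""
    return (new_nm / old_nm) ** 2

# Moving the same design from a 10nm-class to a 7nm-class process:
print(f"~{naive_area_ratio(10, 7):.2f}x the die area")     # ~0.49x
print(f"~{1 / naive_area_ratio(10, 7):.1f}x the density")  # ~2.0x
```

Even this crude math shows why a node jump gives chip designers room for more cores and new engines in the same (or smaller) die.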

This does not benefit only Qualcomm though; there is a reason Apple’s upcoming A12 processor is using 7nm for performance and cost efficiency advantages. AMD is driving full speed into 7nm to help give it the edge over Intel in the desktop, notebook, and enterprise CPU space for the first time in more than a decade. AMD will even have a 7nm enterprise graphics chip sampling this year!

Those who don’t clearly see the advantage 7nm will give to TSMC’s customers haven’t been watching the struggles Intel is having with its product roadmap. Without an on-schedule 10nm node, it is being forced to readjust launches and product portfolios to a degree I have never seen. The world’s largest silicon provider will survive the hurdle, but to assume that its competitors aren’t driving home their advantage with early integration of 7nm designs would be naive.

News You Might Have Missed: Week of September 7, 2018

Evernote’s Troubles

In the past month, Evernote lost its Chief Technology Officer, Anirban Kundu; its Chief Financial Officer, Vincent Toolan; and its head of HR, Michelle Wagner. As it gets ready to raise more money, it has slashed its premium subscription from $70 to $42 a year.

Via TechCrunch 

  • Evernote was the first of its kind to let people write short notes, save them in the cloud, and access them from any device. This was great at a time when people were just dipping their toes in the cloud.
  • Unfortunately for Evernote, the big platforms got better: Microsoft, Google, and Apple all developed tools that, to one degree or another, integrate what Evernote does into a more natural workflow rather than a standalone app.
  • So for people who are invested in a platform with hardware and services, OneNote, Notes, and Google Keep make much more sense, especially when they come free.
  • “Platformication” of apps, for lack of a better term, is a big risk developers face all the time. Some, like Slack, are smart about keeping ahead in innovation and differentiation, such as integrating other apps into their own, which creates multiple ties for the user.
  • Unfortunately, Evernote did not keep ahead of the competition and also made it difficult for people to continue loving the app by starting to charge for features that used to be free and by killing off others.
  • Hard to imagine this round of funding going well for the company.

Google Oct 9 Event

On Thursday Google sent out invites for an event in New York City where the Made by Google team is expected to launch the Pixel 3 and Pixel 3 XL.

Via The Verge 

  • Some of the details on the new devices have already been leaked courtesy of someone forgetting a Pixel 3 prototype in a Lyft car.
  • What I am more interested in hearing from Google is its distribution strategy. After a few initial glitches, the Pixel 2 established itself as a good product, but its impact on the overall market was very limited due to the distribution strategy Google adopted.
  • Deals with carriers, an upgrade program, and a stronger in-store presence are all necessary to grow sales.
  • Last year we saw a very strong marketing campaign, which I would expect to see repeated this year. But what Google really needs is sales reps pushing the products, in-store props highlighting the camera features, and, in general, anything that helps consumers see there is an alternative to Apple and Samsung.
  • We should also see some updates to the Google Home lineup and possibly to the Pixel Buds. Such updates might not be leaps and bounds in innovation, but they would help grow consumers’ confidence that Made by Google is here to stay.

Samsung’s Foldable Phone and Mid-Tier Portfolio

DJ Koh stated that it is time for Samsung to take care of the mid-tier and in order to do that he plans to deliver new technology to the mid-tier before flagship products get it. As part of the interview, he also said we might be seeing a foldable device in November.

Via ZDNet

  • Samsung is holding its developer conference (SDC) in early November, so it is feasible we could see a product launch there, especially if the foldable is a true foldable display, which would call for apps to be redesigned to take advantage of the new screen.
  • I do not think it is feasible however to introduce the device at SDC in November and have it ship this year.
  • If the phone came out under the Note brand, it might also help Samsung move away from the August launch schedule, making it easier for the Note to get back on top of the tech cycle compared to the Galaxy S line launching in February/March.
  • As far as introducing new tech in the mid-tier before a flagship product, I am sure DJ Koh did not mean foldable displays coming to the mid-tier first, as I really struggle to see how Samsung could hit the right price point and still deliver a profitable product.
  • That said, Samsung needs something new for the mid-tier. For a long time, price-conscious users in the US and Europe turned to buying one- or two-year-old high-end products and were satisfied with the price point and the tech they were getting for it. As prices continue to increase, a year-old product is still too expensive to please a buyer looking for a mid-tier product.
  • In developing markets, aside from price point, there is also the issue that a lot of the tech included in flagship products might not be as relevant, making users feel they are paying for something they will not use. A more tailored approach to buyers in China and India would help Samsung quite a bit in the fight for market share against the Chinese brands.

Senate Hearing

This week Sheryl Sandberg and Jack Dorsey went to Washington to discuss social media, foreign hacks on elections, and the platforms’ alleged anticompetitive behavior when it comes to moderating discourse and banning users.

Via NPR

  • The link takes you to the most entertaining part of the hearing, when a pro-Trump protester interrupted the proceedings and Mr. Long from Missouri drowned her out by breaking into an auctioneer’s chant over her phone!
  • As for the hearing itself, it seems that senators and lawmakers were much better informed than those who attended Zuckerberg’s hearing a few months ago.
  • It was also clear that they all played to their political agendas, with the midterms as their highest concern.
  • I thought that Sandberg came across well: genuinely interested in being there and listening, and willing to explain things using jargon many could relate to, including the term “alternative facts”!
  • Jack, on the other hand, came across quite like Zuckerberg: uncomfortable, apologetic, not on top of his own business, and giving little confidence that he knows how to make things better.
  • Google, of course, was not there, which resulted in it being pretty much found guilty by default.

The ‘Post-PC Era’ Never Really Happened…and Likely Won’t

As we head toward Apple’s annual device announcement-palooza, it’s an interesting exercise to consider where we are in Steve Jobs’ vaunted, much-quoted ‘Post-PC Era’. The fact of the matter is that that era never fully arrived, and it doesn’t look like it will in the near- to medium-term future.

Much was made last year of the iPhone X, celebrated as Apple’s 10th anniversary iPhone model. But in just 18 months, we’ll be commemorating the 10th anniversary of the launch of the iPad. Initially met with skepticism by many analysts and tech reviewers, the iPad’s quick out-of-the gate success led to Jobs’ famous ‘post-PC era’ quote a mere two months later.

Tablets have had a good run, but sales have tailed off of late. I’d say they’ve had a greater influence on the evolution of the smartphone and the PC than on significantly changing the mix of devices most of us carry around today. My Techpinions colleague Ben Bajarin says that Creative Strategies surveys indicate that only about 10% of tablet users have ‘replaced their PC’, a number that has held steady for several years. And that 10% is concentrated in a handful of industries, such as real estate and construction. PC sales aren’t exactly surging, but they’re steady. Your average white collar professional today still carries around a smartphone and a laptop, with the tablet being an ancillary device, used primarily for media/content consumption.

Tablets have had a significant influence on the design of smartphones and PCs. They ushered in an era of smartphone screen upsizing, led primarily by Samsung, and now reinforced by the iPhone X and the expected announcement next week of a 6.5-inch iPhone model. For those who don’t want to swing both a smartphone and tablet, we have ‘Phablets’, most personified in the successful Galaxy Note series, and alternative-to-keyboard input devices such as the S Pen and the Apple Pencil. We’ve also seen the development of some hybrid tablet/PC devices, the most innovative and successful of which is Microsoft’s Surface line. But that product is competing more in the tablet category than in the PC category, with the exception of a few market segments. And the growing number of portable PCs that feature touch screens and other tablet-like capabilities are eating a bit into tablet sales, particularly among the student set. The other personification of some aspect of the ‘post-PC’ era, I suppose, is the successful Chromebook line, which is more a reflection of the Cloud and the near-pervasiveness of broadband connectivity.

It even appears that Apple doesn’t believe in the ‘post-PC’ mantra in the same way, given the steadily narrowing delta between the largest iPhone and the smallest iPad. Mainly, this is an effort to convince more users to have both an iPhone and an iPad, since I doubt that most users who have both would have a big phone and a small tablet.

So, the question is, what will change in 3 to 5 years? There will be tons of innovation of course, but I’m not expecting the average consumer or business professional to be carrying a dramatically different mix or number of device types in the medium term. Even with pens that recognize and convert handwriting better and continual improvements in voice input, there’s still nothing that really beats the good ol’ keyboard for productivity. And we’re still very locked into the Big Three of word processing, spreadsheets, and presentation software. The main difference has been the move to the cloud, improved collaboration, and competitive products from Google.

There’s a lot of excitement around foldable screens, but that’s initially likely to be more about coolness of form factor and the admission that the largest phones/phablets are becoming unwieldy. There are also steady improvements in mirroring type capability, where the idea is that your portable device upsizes to a big screen when at home or work. But it still requires a fair bit of effort, plus ancillary devices (and their associated cables and chargers) to make it all really work. And among many business professionals, there’s still too much time spent in locations other than home or the office where PC-type functionality is required.

It is likelier that innovation in each category will continue to influence the other categories, just as there’s more touch capability on PCs, and more input options on tablets. But looking out to the early 2020s, I don’t see any dramatic shift in what the average person will be carrying with them on a day-to-day basis. A bunch more of us will have smartwatches or some other wearable. And if anything, the tablet segment might fall off somewhat, squeezed by bigger and more functional phones on one end, and by more versatile laptops on the other end. But among the market share leaders in each category (and there’s a fair bit of overlap), none are planning for any form of product obsolescence anytime soon. When we celebrate the 10th anniversary iPad in April 2020, we’ll be marveling at the significant improvements in speed, display, wireless connectivity, and so on. But PCs will continue to be the workhorse for most of us.

Tech Content Needs Regulation

It may not be a popular perspective, but I’m increasingly convinced it’s a necessary one. The new publishers of the modern age—including Facebook, Twitter, and Google—should be subject to some type of external oversight that’s driven by public interest-focused government regulation.

On the eve of government hearings with the leaders of these tech giants, and in an increasingly harsh environment for the tech industry in general, frankly, it’s fairly likely that some type of government intervention is going to happen anyway. The only real questions at this point are what, how, and when.

Of course, at this particular time in history, the challenges and risks that come with trying to draft any kind of legislation or regulation that wouldn’t do more harm than good are extremely high. First, given the toxic political climate that the US finds itself in, there are significant (and legitimate) concerns that party-influenced biases could kick in—from either side of the political spectrum. To be clear, however, I’m convinced that the issues facing new forms of digital content go well beyond ideological differences. Plus, as someone who has long-term faith in the ability of the democratic principles behind our great nation to eventually get us through the morass in which we currently find ourselves, I strongly believe the issues that need to be addressed have very long-term impacts that will still be critically important even in less politically challenged times.

Another major concern is that the current set of elected officials aren’t the most digitally-savvy bunch, as was evidenced by some of the questions posed during the Facebook-Cambridge Analytica hearings. While there is little doubt that this is a legitimate concern, I’m at least somewhat heartened to know that there were quite a few intelligent issues raised during those hearings. Additionally, given all the other developments around potential election influencing, it seems clear that many in Congress have been compelled to become more intelligent about tech industry-related issues, and I’m certain those efforts to be more tech savvy will continue.

From the tech industry perspective, there are, of course, a large number of concerns as well. Obviously, no industry is eager to be faced with any type of regulations or other laws that could be perceived as limiting their business decisions or other courses of action. In addition, these tech companies have been particularly vocal about saying that they aren’t publishers and therefore shouldn’t be subject to the many laws and regulations already in place for large traditional print and broadcast organizations.

Clearly, companies like Facebook, Twitter and Google aren’t really publishers in the traditional sense of the word. The problem is, it’s clear now that what needs to change is the definition of publishing. If you consider that the end goal of publishing is to deliver information to a mass audience and do so in a way that can influence public opinion—these companies aren’t just publishers, they are literally the largest and most powerful publishing businesses in the history of the world. Period, end of story.

Publishing and broadcasting magnates of yore like William Randolph Hearst and William S. Paley couldn’t have imagined, even in their wildest dreams, the reach and impact that these tech companies have built in a matter of just a decade or so. In fact, the level of influence that Facebook, Twitter, and Google now have, not only on American society but on the entire world, is truly staggering. Toss in the fact that they also have access to enormous amounts of personal information on virtually every single one of us, and the impact is truly mind blowing.

In terms of practical impact, the influence of these publishing platforms on elections is of serious concern in the near term, but their impact reaches far wider and crosses into nearly all aspects of our lives. For example, the return of childhood measles—a disease that was nearly eradicated from the US—is almost entirely due to the spread of scientifically invalid anti-vaccine rhetoric being spread across social media and other sites. Like election tampering, that’s a serious impact to the safety and health of our society.

It’s no wonder, then, that these large companies are facing the level of scrutiny that they are now enduring. Like it or not, they should be. We can no longer accept the naïve thought that technology is an inherently neutral topic that’s free of any bias. As we’ve started to learn from AI-based algorithms, any technology built by humans will include some level of “perspective” from the people who create it. In this way, these tech companies are also similar to traditional publishers, because there is no such thing as a truly neutral set of published or broadcast content. Nor should there be. Like these tech giants, most publishing companies generally try to provide a balanced viewpoint and incorporate mechanisms and fail safes to try and do so, but part of their unique charm is, in fact, the perspective (or bias) that they bring to certain types of information. In the same way, I think it’s time to recognize that there is going to be some level of bias inherent in any technology and that it’s OK to have it.

Regardless of any bias, however, the fundamental issue is still one of influence and the need to somehow moderate and standardize the means by which that influence is delivered. It’s clear that, like most other industries, large tech companies aren’t particularly good at moderating themselves. After all, as hugely important parts of a capitalist society, they’re fundamentally driven by return-based decisions, and up until now, the choices they have made and the paths they have pursued have been enormously profitable.

But that’s all the more reason to step back and take a look at how and whether this can continue or if there’s a way to, for example, make companies responsible for the content that’s published on their platforms, or to limit the amount of personal information that can be used to funnel specific content to certain groups of people. Admittedly, there are no easy answers on how to fix the concerns, nor is there any guarantee that legislative or regulatory attempts to address them won’t make matters worse. Nevertheless, it’s becoming increasingly clear to a wider and wider group of people that the current path isn’t sustainable long-term and the backlash against the tech industry is going to keep growing if something isn’t done.

While it’s easy to fall prey to the recent politically motivated calls for certain types of changes and restrictions, I believe it’s essential to think about how to address these challenges longer term and independent of any current political controversies. Only then can we hope to get the kind of efforts and solutions that will allow us to leverage the tremendous benefits that these new publishing platforms enable, while preventing them from abusing their position in our society.

Podcast: VMWorld 2018, Google Assistant, IFA Announcements

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell discussing VMWare’s VMWorld conference, chatting about new multi-language additions to Google Assistant, and analyzing a variety of product announcements from the IFA show in Europe, including those from Lenovo, Dell, Intel, Sony, Samsung and others.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Windows on ARM: Good Today, Better Tomorrow

I’ve spent the last few weeks using Lenovo’s Miix 630 detachable, which pairs Qualcomm’s Snapdragon 835 processor with Windows 10 Pro (upgraded from Windows 10 S). It hasn’t been an entirely smooth experience, and there is still work to be done on this new platform (especially regarding a few key apps). But this combination of Windows and ARM is undeniably powerful for a frequent business traveler such as me. Early challenges aside, it’s hard not to see Qualcomm, and eventually the broader ARM ecosystem, playing a key role in the PC market down the road.

The Good
As I type this, I’m finishing up a New York City trip where I attended ten meetings in two days. I needed access to, and the ability to quickly manipulate, Web-based data during these meetings, a task that I’ve never been able to accomplish well on my LTE-enabled iPad Pro. So I typically bring my PC and a mobile hotspot so I can stay connected in Manhattan throughout the day. I carry my computer bag, too, because I invariably need to plug in my notebook at some point or risk running out of power before the end of the day. This time out, I left the mobile hotspot, power cord, and computer bag behind, carrying just the Miix. I used it throughout the day, both during meetings and in the times in between. The LTE connection was strong throughout, and I didn’t experience any performance issues. When I returned to the hotel room after 6 pm, after close to 11 hours of pretty much constant use, I checked the battery: 52%.

That’s a game changer, folks. It’s actually a bit hard to describe just how freeing it is to spend the day using a PC without worrying about connectivity or battery life. With battery-saver mode enabled, I could well have accomplished two days of meetings without needing a charge. Does everybody care about these things? Obviously not. Would I swap this device for my standard PC where I perform heavier workloads? No, not today.
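For what it’s worth, here is a quick back-of-envelope extrapolation of that day, assuming (unrealistically) perfectly linear battery drain:

```python
# Rough extrapolation of the NYC day described above, assuming linear drain.
hours_used = 11        # ~7 am to 6 pm of near-constant use
remaining = 0.52       # battery level at the end of the day

drain_per_hour = (1.0 - remaining) / hours_used    # ~4.4% per hour
implied_runtime = 1.0 / drain_per_hour             # ~22.9 hours on a charge
print(f"~{implied_runtime:.0f} hours of total runtime")
```

Real drain is nowhere near linear, but even a crude estimate like this shows why the experience feels so different from a typical notebook.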

But I’m beginning to think that day may be closer than many expect.

The Bad
I’ve come to realize that my most-preferred form factor for work-related tasks is a notebook (which is why I’m excited to see Lenovo has already announced plans for the Snapdragon-powered Yoga C630). That said, the Miix 630 is a solid detachable with a good display, somewhat oversized bezels, and a reasonably good keyboard. However, at $899 list, it is quite expensive for a device that most people would use as a secondary computer. And it doesn’t help that Qualcomm announced the follow-on 850 chip before Lenovo had even begun shipping this product to customers.

And at present, this product—and other Windows on Snapdragon products—must remain a secondary product because some limitations prevent it from being a primary PC for many users. Performance is one, although honestly I didn’t find the performance to be that limiting on this machine when using it for my described tasks (Lenovo seems to have done a good job of tuning the system). The main reason these products will have to serve as secondary devices is that there are still some deal-breaking app challenges. For me, the primary one was the fact that I couldn’t install and use Skype for Business, which is the primary way I communicate with my work colleagues and how my company conducts meetings. I was able to work around the meeting problem by joining meetings via the Web-based version of Skype for Business, but there’s no way to do that for instant-messaging communication. I had a similar problem with Microsoft Teams, but there’s also a Web-based workaround for that program.

I understand the challenges Microsoft faces with making its ever-broadening portfolio of apps work on this new version of Windows, but the fact that I couldn’t use this important first-party app is pretty frustrating.

The Future
Microsoft still has some work to do in terms of app compatibility, but I’m hopeful the company will sort much of this out in the coming months. In the meantime, we now know that not only does Qualcomm have strong plans for future PC-centric chips, but ARM itself has now announced a roadmap that it promises will usher in next-generation chips from other licensees that should offer desktop-caliber performance with smartphone-level power requirements.

Of course, there are still plenty of other hurdles to address. Many IT organizations will push back on the idea of ARM-based PCs, with Intel understandably helping to lead that charge. There’s the ongoing issue of cost and complexity when it comes to carrier plans. Finally, there’s a great deal of education that will need to happen inside the industry itself around the benefits of this platform.

In the end, I’m confident that Windows on Snapdragon (and Windows on ARM more broadly) is going to eventually coalesce into an important part of the PC market, especially as 5G becomes pervasive in the next few years. I fully expect many long-time PC users to question its necessity, but I also expect a small but growing percentage of users to have the same types of “aha” moments that I did when testing out these systems. And, perhaps most importantly, I believe future iterations of these devices are going to appeal a great deal to the next generation of users, who expect their PCs to act more like the smartphones and tablets they grew up using.

News You Might Have Missed: Week of August 31, 2018

Google Assistant is Now Bilingual

As of Thursday this week, Google Assistant is bilingual. Users can jump between two different languages across queries without having to go back to their language settings. Once users select two of the supported languages (English, Spanish, French, German, Italian, and Japanese), they can speak to the Assistant in either language and the Assistant will respond in kind. Previously, users had to choose a single language setting for the Assistant, changing their settings each time they wanted to use another language; now, it’s a simple, hands-free experience for multilingual households. Getting this to work, however, was not simple, said Google. In fact, it was a multi-year effort that involved solving problems in three areas: identifying multiple languages, understanding multiple languages, and optimizing multilingual recognition for Google Assistant users. Google says it is working to teach the Google Assistant to process more than two languages simultaneously.

Via GoogleBlog 

  • I updated the setting on my Google Assistant as soon as the news was out and played around with my Google Home for a bit and was delighted with the experience.
  • Setup took just a few clicks. Under the Google Assistant’s settings, I went to the language section and clicked on the + to add Italian.
  • I had used the translation feature in the past from English into Italian, and that worked quite well, so I was pretty confident the answers to my requests would be good, but I was curious how Google Assistant would understand my Italian.
  • Google’s blog explains the complexity of delivering this new feature, and from my limited testing, it does seem quite a bit went into it, as I was able to do some interesting things, including mixing the two languages in the same sentence.
  • I asked Google Assistant in Italian “how many inches equal 5 centimeters,” but as I could not remember the word for inches in Italian, I used the English word. Google Assistant not only understood my question but replied using the correct Italian word for inches which, if you are curious, is “pollici”.
  • This would suggest that the two languages are used together to understand what the user is saying; I suppose that must be the case in order to be ready to reply in either language (a minimal sketch of one such approach follows this list).
  • I switched back and forth between Italian and English several times and there was no delay in the answer I was getting.
  • It was also interesting that when I asked in Italian what the weather was, the answer I received listed temperatures in Centigrade rather than Fahrenheit, even though my preference for Google Assistant is set to the latter. The answer mirrored the measure used in Italy.
  • Lastly, it was also interesting to note that Google Assistant had no problem answering my weather question with my location, “Campbell,” but when I asked for the weather in Campbell, it struggled to understand until I tried saying the city name with an Italian accent similar to the one used by the Assistant.
  • Like Google said in the blog, there are many multi-language households and adding bilingual support is an important competitive advantage.
  • I also wonder how much it might play a role in closing the digital divide as many multilingual families have parents whose first language is not English and adopting a voice first technology in their own language might feel much less intimidating. We all remember the Italian grandma trying to speak to Google Assistant https://www.youtube.com/watch?v=e2R0NSKtVA0
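To make the point about the two languages being used together concrete, here is a minimal, hypothetical sketch of one way a bilingual assistant could be structured: score the utterance against both enrolled languages and keep the more confident transcript. Google’s blog describes a far more sophisticated multi-stage pipeline; the recognize() function and its confidence numbers below are invented purely for illustration.

```python
# Hypothetical sketch: run a recognizer for each enrolled language in
# parallel and keep the transcript with the highest confidence. This is
# NOT Google's actual pipeline; recognize() is simulated for illustration.
from concurrent.futures import ThreadPoolExecutor

def recognize(audio: bytes, language: str) -> tuple[str, float]:
    """Stand-in for a real speech-to-text call: (transcript, confidence)."""
    simulated = {
        "en-US": ("how many inches is 5 centimeters", 0.62),
        "it-IT": ("quanti pollici sono 5 centimetri", 0.91),
    }
    return simulated[language]

def transcribe_bilingual(audio: bytes, languages=("en-US", "it-IT")):
    # Score the utterance against both enrolled languages concurrently.
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda lang: recognize(audio, lang), languages)
    # Keep whichever language model was most confident.
    return max(results, key=lambda result: result[1])

print(transcribe_bilingual(b""))  # -> ('quanti pollici sono 5 centimetri', 0.91)
```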

Microsoft Requires Paid Parental Leave for Subcontractors

Over the next 12 months, Microsoft will work with its U.S. suppliers to implement this new paid parental leave policy. It will require that suppliers offer their employees a minimum of 12 weeks paid parental leave, up to $1,000 per week. This change applies to all parents employed by Microsoft suppliers who take time off for the birth or adoption of a child. The new policy applies to suppliers with more than 50 employees and covers supplier employees who perform substantial work for Microsoft. This minimum threshold applies to all of Microsoft’s suppliers across the U.S. and is not intended to supplant a state law that is more generous. Microsoft notes that many of its suppliers already offer strong benefits packages to their employees, and that suppliers are of course welcome to offer more expansive leave benefits.

Via Microsoft 

  • The importance of this move is clear when you read some of the numbers Microsoft cites in the blog. Only 13% of private sector workers in the U.S. have access to paid parental leave.
  • To understand the impact this move from Microsoft can have, one needs to know that the company currently has around 1,000 suppliers in the US.
  • This is not the first time Microsoft has pushed its suppliers to do better when it comes to benefits. In 2015, it started demanding that its suppliers offer at least 15 days of paid leave.
  • This week’s requirement comes after Washington state changed its parental leave rules through legislation that will mandate paid parental leave starting in 2020. Microsoft looked at that and decided to ask its suppliers outside the state to align with it.
  • With the number of workers in the gig economy growing, the topic of benefits is a hot one, as adding benefits for these workers has tax implications and might carry wage requirements.
  • In 2017, Google was said to have more contractors than regular employees. Most of those contractors did not have access to the same benefits, from stock awards to free food to health coverage.
  • Competition for talent is always high in tech, and companies often use benefits to attract the best. A few years ago Apple, Google, and Facebook started to include egg freezing as a perk.
  • Considering that, according to the U.S. Census Bureau, millennials will comprise more than one in three adults in America by 2020 and 75% of the workforce by 2025, it is easy to see why benefits around children, whether having them or looking after them, should be a priority for organizations.

Lenovo Yoga C630 WOS Laptop First with Snapdragon 850

More than a full year into the life of Windows on Snapdragon products, the jury is still out on how well received the first generation of notebooks was. Qualcomm launched machines with three critical OEM partners: HP, ASUS, and Lenovo. All three systems offered a different spin on a Windows laptop powered by a mobile-first processor. HP had a sleek and sexy detachable, the ASUS design was a convertible with the most “standard” notebook capabilities, and the Lenovo Miix design was a detachable with function over form.

Reviews indicated that while the community loved the extremely long battery life the Snapdragon platform provided, the performance and compatibility concerns were significant enough to sway buying decisions. Prices were also a bit steep, at least when compared on raw performance against Intel-based solutions.

Maybe the best, but least understood, advantage of the Snapdragon-based Windows notebooks was the always connected capability provided by the integrated Gigabit LTE modem. It took only a few trips away from the office for me to grasp the convenience and power of not having to worry about connectivity or hunting for a location with open Wi-Fi service in order to send some emails or submit a news story. Using your notebook like your smartphone might not be immediately intuitive, but now that I have tasted that reality, I need it back.

As a part of a long-term strategy to take market share in the Windows notebook market, Qualcomm announced the Snapdragon 850 processor in June during Computex in Taipei. A slightly faster version of the Snapdragon 845 utilized in today’s top-level Android smartphones, the SD 850 is supposed to be 30% faster than the SD 835 (powering the first generation of Always On, Always Connected PCs) while delivering 20% better battery life and 20% higher peak LTE speeds.

Those are significant claims for just a single generational jump. The 20% added battery life alone is enough to raise eyebrows, as the current crop of Snapdragon devices already provides the best battery life we have ever tested on a Windows notebook. The potential for 30% better performance is critical as well, considering the complaints about system performance and user experience that the first generation received. We don’t yet know where that 30% will manifest: in single-threaded capability or only in multi-threaded workloads. It will be important to determine that as the first devices make their way to market.
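As a rough sketch, here is what those claimed multipliers imply when applied to an assumed first-generation baseline (the baseline figures below are illustrative guesses, not measured results):

```python
# Qualcomm's stated SD835 -> SD850 generational claims, applied to an
# assumed first-generation baseline (illustrative numbers, not benchmarks).
baseline = {"perf_index": 100, "battery_hours": 20, "lte_mbps": 1000}
claims   = {"perf_index": 1.30, "battery_hours": 1.20, "lte_mbps": 1.20}

projected = {key: baseline[key] * claims[key] for key in baseline}
print(projected)  # {'perf_index': 130.0, 'battery_hours': 24.0, 'lte_mbps': 1200.0}
```

Notably, a 20% gain over a roughly 20-hour first-generation figure lands close to the 25-hour rating Lenovo quotes for the Yoga C630 discussed below.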

Which leads us to today’s announcement about the Lenovo Yoga C630 WOS, the first notebook to ship with the Snapdragon 850 processor. The design of the machine is superb and comes in at just 2.6 pounds. It will come with either 4GB or 8GB of LPDDR4X memory and 128GB or 256GB UFS 2.1 storage, depending on your configuration. The display is 13.3 inches with a resolution of 1920×1080 and will have excellent color and viewing angles with IPS technology. It has two USB Type-C ports (supporting power, USB 3.0, and DisplayPort) along with an audio jack and fingerprint sensor.

When Lenovo claims the Yoga C630 WOS will have all-day battery life, it means it. Lenovo rates the machine at 25 hours, which is well beyond anything similarly sized notebooks with Intel processors have been able to claim. Obviously, we will wait for a test unit before handing out trophies, but nothing I have read or heard leads me to believe this machine won’t be the leader in the clubhouse when it comes to battery life this fall.

Maybe more important for Qualcomm (and Arm) with this release is how Lenovo is positioning the device. No longer relegated to a lower-tier brand in the notebook family, the Snapdragon 850 iteration is part of the flagship consumer Yoga brand. The design is sleek and in line with high-end offerings built around Intel processors. All signs indicate that Lenovo is taking the platform more seriously for this launch, and that mentality should continue with future generations of Snapdragon processors.

I don’t want to make too much of this announcement and product launch without information from other OEMs and their plans for new Snapdragon-based systems, but the initial outlook is that momentum is continuing to build in favor of the Windows-on-Arm initiative. The start was rocky, but in reality, we expected that to be the case after getting hands-on with the earliest units last year. Qualcomm was at risk that partners would back away from the projects because of it, or that Intel might put pressure (marketing or product-based) on them to revert.

For now, that doesn’t appear to be the case. I am eager to see how the Lenovo Yoga C630 WOS can close the gap for Windows-on-Snapdragon and continue this transformative move to a more mobile, more connected computing ecosystem.

Survey: Real World AI Deployments Still Limited

You’d be hard pressed to find a topic that’s received more attention, been more closely scrutinized, or been talked about at greater length recently than Artificial Intelligence, or AI. Alternately hailed as both the next big thing in technology—despite a multi-decade gestation period—and the biggest threat that the tech industry has ever created, AI and the related field of machine learning are unquestionably now woven into the fabric of modern life and are likely to remain there for some time to come.

Despite all the interest in the topic, however, there’s surprisingly little insight into how it’s actually being used in real-world applications, particularly in business environments. To help address that information gap, TECHnalysis Research recently engaged in an online survey of IT and other tech professionals in medium (100-999 employees) and large (1,000+ employees) US businesses to help determine how AI is being deployed in new applications created by these organizations.

After starting with a sample of over 3,700, the survey respondents were whittled down to a group of just over 500 who provided information on everything from the applications they were creating to the chip architectures they leveraged for inferencing and training, the cloud platforms they utilized, the AI frameworks they used to build their applications, where they were deploying the applications now, where they planned to deploy them in the future, and much more. The full analysis of all the detailed data is still being completed, but even with some early topline results, there’s an important story to tell.

First, it’s interesting to note that just under 18% of the total original sample claimed to be either pilot testing or doing full deployments of applications that integrate AI technology. In other words, nearly 1 in 5 US companies with at least 100 employees has started some type of AI effort. Of that group, 56% are actively deploying these types of applications and 44% are still in the development phase. Among companies in the sample group who are self-proclaimed early adopters of technology, an impressive 72% said they are using AI apps in full production environments. For medium-sized companies in the qualifying group, slightly more than 50% said they were in full production, but the number rises to just under 61% for large companies.
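For reference, here is a quick reconstruction of those headline counts from the quoted percentages (all figures approximate; the detailed analysis draws on the just over 500 respondents who completed the full questionnaire):

```python
# Approximate counts implied by the survey percentages quoted above.
total_sample = 3700
qualifying = total_sample * 0.18   # "just under 18%" piloting or deploying

deploying  = qualifying * 0.56     # actively deploying AI applications
developing = qualifying * 0.44     # still in the development phase
print(round(qualifying), round(deploying), round(developing))  # 666 373 293
```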

Equally interesting were the reasons that the remaining 82% of the total sample group are not creating AI-enhanced applications. Not surprisingly, cost was a big factor among those who were even considering the technology. In fact, 51% of that group cited the cost of creating and/or running AI applications as the key factor in why they weren’t using the technology. The second largest response, at almost 35%, came from those who were intrigued by the technology, but just weren’t ready to deploy it yet.

The third largest response, at nearly 32% (note that respondents were allowed to select multiple factors, so the totals add up to over 100%), related to a real-world concern that many companies have voiced: they don’t have the in-house expertise to build AI apps. This isn’t terribly surprising given the widely reported skills gap and demand for AI programmers. Nevertheless, it highlights both a big opportunity for developers and a huge challenge for organizations that do want to move into creating AI-enabled applications. The next most common response from this group, at 29%, was that they didn’t know how AI would be applicable to their organization, and another 26% cited not knowing enough about the subject.

Both of these last two issues highlight another real-world concern around AI: the general lack of understanding that exists around the topic. Despite all the press coverage and heated online discussions about AI, the truth is, a lot of people don’t really know what AI is, nor what it can do. Of course, it doesn’t help that there are many different definitions of artificial intelligence and a great deal of debate about what really “counts” as AI. Still, it’s clear that the tech industry overall needs to invest a great deal more time and money in explaining what AI and machine learning are, what they can (and cannot) do, and how to create applications that leverage the technologies if they hope to have more than just a limited group of companies participate in the AI revolution.

From an industry perspective, it’s probably not surprising, but still interesting, to observe that almost 27% of respondents who were piloting or deploying AI apps came from the Tech industry. Given that tech workers make up less than 5% of the total workforce, this data point shows how much more the Tech industry is focused on AI technology than other types of businesses. The next largest industry at 13.3% was Manufacturing followed by Professional, Scientific and Technical Services at just under 10% of the respondents.

There’s a great deal more information to be culled from the survey results. In future columns I plan to share additional details, but even from the top-line findings, it’s clear that while the excitement around AI in the business world is real, there’s still a long way to go before it hits the mainstream.

Why Consumer VR Headsets Have Potential but Need a Killer App to Survive

From the first time I used a VR headset, I was skeptical that it could ever become a consumer hit. The industrial-strength models, such as the original Oculus Rift or the HTC Vive, were expensive and had to be tethered to a PC to work. While what they delivered in the way of VR functionality was exciting, they mostly garnered interest from gamers and for use in some verticals.

Samsung jumped in with its own Gear VR headset, into which you put a Samsung phone that powers the experience. Early models were interesting and got some consumer uptake but never really took off.

Today we have some new VR headsets, most notably the Oculus Go and Lenovo’s Mirage Solo with Daydream, in the $199-$399 price range, in which the headset itself has the CPU and internal memory and delivers a stand-alone VR experience. These models are aimed squarely at consumers, and the companies behind them hope they could finally cause the low end of the VR headset market to take off.

For the past two weeks, I have been using both the Lenovo Mirage Solo with Daydream and the Oculus Go and have enjoyed the experience. In my case, I watch Netflix and Hulu shows on them, since they deliver a huge-screen viewing experience that is fun to watch. I am also an armchair traveler these days, and the various shows that highlight different countries and points of interest are cool too.

Not much of this content is true VR. Hulu and Netflix have their apps on these devices so you can view their content on a big screen. Some of the travel apps have 3D, 360-degree viewing features. On the other hand, the Disney VR snippets and some of the other apps deliver actual VR experiences that put you in the center of the action, and these apps show the real potential a consumer VR headset can provide.

However, these dedicated VR apps are minimal today on stand-alone consumer VR headsets, which brings me to the real problem that needs to be solved if these are to take off. While many apps and travel sites deliver 360-degree views, and in some cases do it in 3D, it’s the actual VR experience that could bring these headsets to more consumers.

For example, Disney has a few VR examples in its Oculus Go app that bring you right into the movie scene. In the dining scene from Beauty and the Beast, you are sitting at the head of the table while the plates, dishes, and the candlestick dance in front of you and around you. In the Coco movie preview, you are on the stage with one of the lead characters as he sings and dances. Disney seems very committed to VR and over time plans to convert more of its movies to VR and even create dedicated VR videos too.

There are also some specialty video sites that have created 3D VR-style videos in which they use a 3D camera and place you into a specific scene. Then there are the VR games that plop you into the action, and roller-coaster-type apps in which you feel like you are sitting in a roller coaster as it travels its track, complete with the visual sensations of the real thing. (These are the apps that cause dizziness and nausea, a problem that needs to be addressed for any VR headset to gain broad acceptance.)

I admit that I am enamored with these low-end stand-alone VR headsets and can waste many hours playing games and watching videos. Even though most of the content is still 2D and the VR apps are not plentiful, the experience, at least for a techie like me, is always fun. However, what exists in the way of 2D, 3D, and VR content today makes it hard for a mainstream consumer to justify the cost at this point. Also, from using these for a while, I have not seen what I would call a “killer app” for low-end VR headsets.

The higher-end VR headsets that deliver high-quality gaming experiences are the killer app for that set of people. Also, in vertical markets, the VR apps people need to do their jobs more effectively are the killer app for them. However, after viewing over 100 apps and videos on these low-cost stand-alone VR headsets, I cannot say that any one of them would drive me to buy one of these devices if I were a mainstream consumer.

There are some categories of apps that could be attractive to some audiences. Seniors might enjoy the travel apps and documentaries. Gen Z and some millennials might enjoy the gaming apps. There are some useful educational apps and even ones that are great for meditating. Moreover, as I said above, I like watching Netflix and Hulu since I get the giant-movie-screen viewing experience with these services on a VR headset.

However, we need a killer app of some kind that is transformative and can get the interest of a broad consumer audience for these headsets to ever go mainstream. Until that happens, I am afraid the demand for these low cost, and self-contained VR headsets will remain tepid at best.

NVIDIA Turing Brings Higher Performance, Higher Pricing

During Gamescom, the international games industry show in Cologne, Germany this week, NVIDIA CEO Jensen Huang took the covers off the company’s newest GPU architecture aimed at enthusiast PC gamers. Codenamed Turing and taking the GeForce RTX brand, the shift represents quite a bit more than just an upgrade in performance or better power efficiency. This generation, NVIDIA is attempting to change the story with fundamentally changed rendering techniques, capabilities, and, yes, prices.

At its heart, Turing and GeForce RTX include upgrades to the core functional units of the GPU. Based on a structure very similar to previous generations, Turing will improve performance in traditional and current gaming titles with core tweaks, memory adjustments, and more; expect something on the order of 1.5x or so. We’ll have more details on that later in September.

The biggest news is the inclusion of dedicated processing units for ray tracing and artificial intelligence. Much like the Volta GPUs being utilized in the data center for deep learning applications, Turing includes Tensor Cores that accelerate the matrix math functions necessary for deep learning models. New RT Cores, a first for NVIDIA in any market, are responsible for improving the performance of traversing ray structures, allowing real-time ray tracing an order of magnitude faster than current cards.
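For readers wondering what a Tensor Core actually computes: the underlying operation is a small fused matrix multiply-accumulate, D = A x B + C, which NVIDIA’s published Volta material describes as a 4x4 tile with FP16 inputs accumulated in FP32. The sketch below shows only the math, not the hardware:

```python
import numpy as np

# The fused multiply-accumulate at the heart of a Tensor Core: D = A @ B + C,
# with half-precision inputs and single-precision accumulation.
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.random.rand(4, 4).astype(np.float32)

D = A.astype(np.float32) @ B.astype(np.float32) + C  # accumulate in FP32
print(D.shape)  # (4, 4) -- many such tiles execute in parallel across the chip
```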

Both of these new features will require developer integration to really take advantage of them, but NVIDIA has momentum building with key games and applications already on the docket. Both Battlefield V and Shadow of the Tomb Raider were demoed during Jensen’s Gamescom keynote. Ray tracing augments standard rasterization rendering in both games to create amazing new levels of detail in reflections, shadows, and lighting.

AI integration, for now, is limited to a new feature called DLSS that uses AI inference locally on the GeForce RTX Tensor Cores to improve a game’s image quality in real time. The capability is trained by NVIDIA (on its deep learning supercomputers) using the best-quality reference images from the game itself, a service NVIDIA provides to its game partners that directly benefits the gamer.

There are significant opportunities for AI integration in gaming that could be addressed by NVIDIA or other technology companies. Obvious examples include computer-controlled character action and decision making, material creation, and even animation generation. We are in the nascent stages of how AI will improve nearly every aspect of computing, and gaming is no different.

Pricing for the new GeForce RTX cards definitely raised some eyebrows in the community. NVIDIA is launching this new family at a higher starting price point than the GTX 10-series launched at just over two years ago. The flagship model (RTX 2080 Ti) will start at $999 while the lowest-priced model announced this week (RTX 2070) comes in at $499. This represents an increase of $400 at the high end of the lineup and $200 at the bottom.

From its view, NVIDIA believes the combination of performance and new features that RTX offers gamers in the space is worth the price being asked. As the leader in the PC gaming and graphics space, the company has a pedigree that is unmatched by primary competitor AMD, and thus far, NVIDIA’s pricing strategy has worked for them.

In the end, the market will determine if NVIDIA is correct. Though there are always initial complaints from consumers when the latest iteration of their favorite technology is released with a higher price tag than last year’s model, the truth will be seen in the sales. Are the cards selling out? Is there inventory sitting on physical and virtual shelves? It will take some months for this to settle out as the initial wave of buyers and excitement comes down from its peak.

NVIDIA is taking a page from Apple in this play. Apple has bucked the trend that says every new chip or device released needs to be cheaper than the model that preceded it, instead increasing prices on last year’s iPhone X and finding that the ASP (average sales price) jumped by $124 in its most recent quarter. NVIDIA sees its products in the same light: providing the best features with the best performance, and thus, worthy of the elevated price.

The new GeForce RTX family of graphics cards is going to be a big moment for the world of PC gaming and likely other segments of the market. If NVIDIA is successful with its feature integration, partnerships, and consumer acceptance, it sets the stage for others to come into the market with a similar mindset on pricing. The technology itself is impressive in person and proves the company’s leadership in graphics technology, despite the extreme attention that it gets for AI and data center products. Adoption, sales, and excitement in the coming weeks will start to tell us if NVIDIA is able to pull it all off.

News You Might Have Missed: Week of August 24, 2018

Apple removes Facebook’s VPN app Onavo

Apple officials told Facebook last week that Onavo violated the company’s rules on data collection by developers, and suggested last Thursday that Facebook voluntarily remove the app. Facebook said in a statement that it’s transparent with Onavo users: “We’ve always been clear when people download Onavo about the information that is collected and how it is used,” the company said. “As a developer on Apple’s platform, we follow the rules they’ve put in place.”

Via CNBC

  • Onavo is a free VPN app, so I am sure most users never read the fine print explaining that Onavo shares information with Facebook about how users use their phones beyond the Facebook app. They thought they were using an app to protect themselves when in fact they were exposing themselves!
  • Onavo has been in the App Store for a long time, and some have criticized how long it took Apple to take the app down. What is different, however, is the change Apple made to app data collection rules in June, when developers were asked to add the ability for users to grant or deny permission for such data harvesting.
  • Clearly this is another blow for Facebook at two different levels:
    • The lack of transparency about data collection
    • The fact that they were gathering data on how users used apps and websites to gain a competitive advantage
  • Similar to the Cambridge Analytica case, users are frustrated with Facebook bending and breaking rules for business gain. Had the app been gathering information to improve the experience or the service, sentiment would have been different. While it would, of course, still infringe Apple’s new code of conduct and would still have ended up out of the store, users would not have had as much reason to be upset with Facebook.
  • In very different ways, both Facebook and Twitter are letting users down because of what seems to be a weak set of corporate values: values chosen to further the business at the expense of its customers, and ones that more and more users are finding questionable.
  • Some of the problems Facebook is facing, like hacks and fake news, are to some extent out of its control. What is up to Facebook is, of course, finding a way to improve things. But in cases like Cambridge Analytica and Onavo, someone somewhere made the business decision that it was OK to use users’ data.
  • Let me be clear: you might find it a subtle difference, but I think there is actually a big difference between using my data to better target ads and using my data to stay ahead of the competition by coming up with a new product or feature based on spying on what I do with my phone.
  • Technology, AI, and more dedicated staff can all, eventually, solve fake news and harassment, but only stronger ethics will prevent another Onavo or Cambridge Analytica.

5G Licensing Prices

This week, Nokia set its 5G licensing rate at a flat €3 per device. This contrasts with Ericsson, which has a sliding scale of $2.50 to $5 per 5G device, depending on the handset’s price. And it’s considerably lower than Qualcomm, which plans to license its 5G patents at 2.275 percent of a single-mode 5G handset’s wholesale price, or 3.25 percent of a multi-mode 5G handset’s price, capped at a $400 price base. That would be $13 in 5G licensing fees for Qualcomm alone, bringing the total royalty fees for just these three companies to over $21 per device — that’s before paying royalties to other essential 5G patent holders such as Samsung and Huawei. By publicly disclosing its 5G patent fees — and keeping them low — Nokia is dramatically reducing the chances of future licensing turbulence. Huawei remains the only big player yet to set a price.

Via Venturebeat 

  • Nokia said the €3 rate is limited to phones, and it retains the right to charge differently for other categories of devices. Clearly, with all the big talk of IoT and 5G, there is a much bigger opportunity outside of phones. From smart cities to cars, everything could have a 5G modem, but the cost of the devices using such modems will vary greatly, which is why Nokia is not taking a totally blanket approach.
  • I would think Huawei might want to keep its licensing competitive to possibly open up some opportunity. This might be particularly helpful in the US market.
  • Linked to these prices, there was some speculation that cost was the main reason Apple would not have 5G in its upcoming set of devices. And of course, the current Qualcomm litigation would have played a role in the decision.
  • However, rather than the cost itself, it would be more appropriate to look at the return Apple would get from integrating 5G technology this early in its life cycle.
  • Apple is often criticized for not adopting technology early in its life cycle, but this is not quite correct. Decisions such as the removal of the audio jack and the adoption of USB-C and Face ID are all examples of Apple making an early move.
  • What Apple does not do is deploy a technology or a feature that is either not stable enough, and therefore likely to hurt usability, or that offers consumers minimal return.
  • 5G falls squarely into both categories. In 2019, 5G network availability will be very limited, and consumers have yet to understand what the value of 5G will actually be. Hence, adding $20 or so to the bill of materials is unjustified (see the royalty sketch after this list).
  • Of course, if you are in the network business as well as the smartphone business, having early 5G devices helps your case for network deployments, which is why we will see Huawei and Samsung move early.
  • Also, brands that are still heavily dependent on carriers for their distribution and marketing budget might launch early 5G devices in collaboration with the carriers, more to support them than to boost sales. The recent Motorola 5G Mod with Verizon is a good example of that.
  • If the 3G rollout has anything to teach us, it is that Apple is unlikely to suffer from this move. You might remember that when the first iPhone launched on EDGE, Nokia, the market leader at the time, already had WCDMA phones on the market.
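To make the royalty arithmetic above concrete, here is a minimal sketch of how the per-device fees stack up under the published rates. The $600 wholesale price and the euro-to-dollar conversion are illustrative assumptions, not figures from any of the companies.

```python
# Rough per-device 5G royalty math under the published rates (illustrative only).
QUALCOMM_MULTI_MODE_RATE = 0.0325   # 3.25% of wholesale price, multi-mode handset
QUALCOMM_PRICE_CAP = 400.0          # Qualcomm caps the royalty base at $400
NOKIA_FLAT_FEE = 3.50               # flat EUR 3, ~USD 3.50 at an assumed exchange rate
ERICSSON_FEE_RANGE = (2.50, 5.00)   # Ericsson's sliding scale per device

def qualcomm_royalty(wholesale_price: float) -> float:
    """Royalty owed to Qualcomm on a multi-mode 5G handset."""
    return min(wholesale_price, QUALCOMM_PRICE_CAP) * QUALCOMM_MULTI_MODE_RATE

wholesale = 600.0                 # hypothetical flagship wholesale price
qc = qualcomm_royalty(wholesale)  # the cap applies: $400 * 3.25% = $13.00
low = qc + NOKIA_FLAT_FEE + ERICSSON_FEE_RANGE[0]
high = qc + NOKIA_FLAT_FEE + ERICSSON_FEE_RANGE[1]
print(f"Qualcomm: ${qc:.2f}; three-company total: ${low:.2f}-${high:.2f} per device")
```

At the top of Ericsson’s scale, that lands just over the $21 figure cited above, before any other essential patent holders are paid.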

The Great Tech Questioning

The past year has been a challenging one for tech, what with #metoo moments, security and privacy breaches, unseemly use of power, and certainly some missteps in the ‘fake news’/Russia meddling arena. And despite the seeming incongruity between these incidents/actions/behaviors and tech company earnings and sky-high valuations, there has started to be a reckoning, of sorts.

But I think there’s a bigger issue in play, one with greater potential long-term consequences. I call it the “Great Tech Questioning”. For the past 10-15 years, going back perhaps to the advent of the smartphone circa 2005, the talk has been about industries that have been ‘disrupted’. At first it was more about substitution, such as cellular replacing landlines, broadband smartphones replacing cameras and GPS units, digital media replacing physical media, and so on. Then it became more about entire industries being disrupted: photography, newspapers and magazines, retail, and so on. But more recently, the types of changes we’re seeing as a result of some of the most successful and fastest-growing companies in history are starting to have far broader business and societal consequences. And we’ve been caught largely flat-footed in terms of the longer-term ramifications and how to deal with them.

Let’s take four companies as examples. First, Uber. It plunged into a space that was ripe for disruption and rife with corruption. And though most of us love the service, Uber and its ilk grew so fast and so unchecked that we failed to assess the consequences: the significant increase in congestion in some cities, which hampers one of ride-sharing’s key selling points of making it easier (and cheaper) to get from A to B. Another incongruity: while Uber was initially hailed as a more favorable model for drivers, we underestimated the bottom falling out of taxi medallion prices, which has affected hundreds of thousands of hard-working individuals.

Second, AirBnB. Still a great thing in many respects, but its rapid and largely unregulated growth resulted in its straying from its mission – and not really from any corporate wrongdoing. My wife and I were the initial ‘target’ AirBnB hosts. Sitting right between Boston College and Boston University, we’d rent out a room on our top floor for $100 per night, which was a godsend to parents visiting their kids in under-hoteled and over-priced Boston. This was the problem AirBnB was trying to solve. But then developers, speculators, and opportunists swept in, killing rental inventory and disrupting the housing market in already tight and expensive cities.

Third, Apple — as the poster child for the smartphone and its ‘ecosystem’. This wireless broadband pocket computer is indeed a modern marvel. Its high level of usefulness was hugely evident on a recent vacation: helping us navigate our way, record beautiful places, stay in touch with work, friends, and family, and enjoy media of many sorts. But this has also been a year of serious questions about the effects of ‘screen addiction’. Many people have a really hard time applying the ‘everything in moderation’ mantra to their phones.

Finally, Facebook. Similar to the three examples cited above, it’s valuable and useful to hundreds of millions of people worldwide. But its unchecked growth, pursuit of profit, and poor corporate judgement have led to abuses of its platform, by the company itself and by myriad third party actors.

As a visceral reaction to this, we’ve seen a lot of questions being asked in 2018, and a giant ‘hey, slow down’ come from numerous directions: Europe’s fining of Google and implementation of GDPR; the Zuckerberg hearings in Washington; the caps being placed on ride-sharing licenses in New York, and the various skirmishes being waged daily in locations worldwide; the backlash against ‘over-tourism’ and the attempt by some cities to impose regulations on AirBnB; the stunning letter in January from two of Apple’s largest investors, reflecting concerns about the effects of technology and social media; and questions about IP theft, figuring into Qualcomm/Broadcom, Huawei, ZTE, and so on.

The acceleration of big data and AI, combined with a turn toward the autocratic and authoritarian in some countries, is amplifying some of these concerns. This stuff can go from merely creepy to downright Orwellian in a hurry. In our heated conversation about immigration, for example, how long will it be before ICE snoops on individuals’ location data and messaging content?

I’m hoping that all this is the catalyst for some important conversations about the long-term effects of tech acceleration on the future of how we live, work, and get around. Some 27 million Americans are employed in the ‘gig economy’, according to a report I recently read. What happens to these people’s livelihoods, health care, and retirement, long term? Can ride-sharing services become more of a conversation about the future of transportation than just ‘cheaper than a cab and better than a bus’? Will the disruption being caused by AirBnB catalyze a conversation about the future of housing in the many cities facing a severe housing crunch? And can we adopt an ‘everything in moderation’ mantra on smartphones, and re-learn (or learn for the first time) some of the people-navigation and long-form attention skills that were so essential before the crutch of our phones and e-everything?

There are no easy answers to these questions. But we might look back on 2018 as the year that some of these important conversations started in earnest.

Apple Must Reinvent the Genius Bar

Last week @mgsiegler wrote a post about his customer experience at an Apple Store. While the issue that brought him to a store is somewhat unique, his account of long lines and wait times despite having an appointment was not that different from the complaints I have heard from friends who are iPhone and iPad users, and from what I have experienced myself on a couple of occasions.

There was a lot in the post, but I want to focus on one point I agree with: Apple has reached a scale that makes the current customer service model unsustainable.

Big Retail Stores are not the Model

Apple knew it had a scale issue back in 2012, when it hired John Browett, chief executive of Dixons Retail, a large chain of electronics stores in the UK. Browett replaced Ron Johnson, who had left Apple to become the CEO of J.C. Penney. Before Dixons, Browett had spent eight years at Tesco, a leading UK supermarket.

Clearly, Browett brought an understanding of large retail companies to Apple. However, as I commented at the time, the high-quality customer care Apple’s customers were used to seemed at odds with the poor customer service Dixons was renowned for.

So it was no surprise when, less than a year after he joined, Browett was let go. During his time at Apple, he was said to have focused on reducing headcount in an attempt to cut payroll costs, as well as general spending on the upkeep of the physical stores. In short, Browett was focused on growing profitability by teaching Apple stores to “run lean,” as he was reportedly quoted saying. But Apple stores are not about profits!

Tim Cook took over from Browett until he hired Angela Ahrendts in December 2014. Ahrendts, who was the first woman to join Apple’s executive team in almost a decade, was given responsibility for both physical and online retail. In her previous role at Burberry, Ahrendts had turned the brand around, making it relevant to the mainstream while retaining its luxury status. The challenge at Apple was not that different: the stores had to be able to handle more customers while continuing to make each one feel like the only one who mattered.

It’s not about Selling

When you read Ahrendts’ bio on Apple’s website and think of some of the stores launched under her leadership, from Chicago’s Michigan Avenue to Milan’s Piazza Liberty, it is easy to see she is delivering on the promise of what Apple retail is supposed to be:

“Since joining Apple in 2014, Angela has integrated Apple’s physical and digital retail businesses to create a seamless customer experience for over a billion visitors per year with the goal of educating, inspiring, entertaining and enriching communities. Apple employees set the standard for customer service in stores and online, delivering support from highly trained Geniuses and expert advice from Creative Pros to help customers get the most out of their Apple products.”

In a recent interview at Cannes Lions, Ahrendts reiterated much of the same, pointing out that shopping is moving online but that buyers will still go into a physical store to finalize their purchase. Because of this, retail has to evolve.

Although the revenues generated by the stores are growing, I have always argued that Apple stores are much more a marketing machine for Apple than a revenue one. Getting people in to fully immerse themselves in what being in the Apple world feels like is not new, though. Ahrendts added more of a community focus at a time when, more often than not, tech companies are seen as damaging the community rather than enhancing it.

Evolving Customer Care

While all this is great for Apple’s overall branding, it seems to get in the way of current customers going to the stores to get support. Existing customers, especially long-term ones, have grown accustomed to turning up at the store with whatever problem they had and having it resolved without even needing an appointment. This excellent customer service is a big part of why people bought Apple.

As Apple’s customer base grew, so did the need for support, a need that can no longer be fulfilled in the way it has been over the years. As Ahrendts points out, retail must evolve, and I would add that customer care must evolve too.

The Genius Bar, which for the longest time has been the pride and joy of Apple, can no longer be the first option for customer support. Apple’s website encourages customers to get support via phone, chat, email, even Twitter, and of course there are authorized service providers. But come on: if I buy Apple, I want Apple to take care of me, right? I want to get to a store and feel I receive the attention, love, and care I feel I pay for as a “special customer.”

It seems to me that Apple should come up with something that is as caring and personal an experience as it was back in the day, when I went into a store and met with my Genius Bar guru who knew everything about me and my device.

Today, through technology, Apple can deliver the same “boutique feel” thanks to a device that knows me and knows itself. Machine learning and artificial intelligence could help with self-diagnosis, and an app or even Siri could walk a user through some basic testing to assess whether I can fix the problem myself, need to go into a store, or should mail my device in. The Genius would move from the Bar to my device. Setting the right expectations from the start, avoiding wasted time, and eliminating friction while creating rapport with the brand was exactly what people liked about Apple’s customer service: the feeling of buying products from a company that put its customers first, and that “Cheers – where everybody knows your name” factor that made Apple’s customer service second to none. Apple can do it again, this time putting its technology first rather than its store staff.

While I realize my vision is not going to be delivered overnight, I believe that, if done well, this “Genius on device” would add even more value to Apple’s products and position Apple’s customer care as the industry benchmark once again.

Nvidia RTX Announcement Highlights AI Influence on Computer Graphics

Sometimes it takes more than brute horsepower to achieve the most challenging computing tasks. At the Gamescom 2018 press event hosted by Nvidia yesterday, the company’s CEO Jensen Huang hammered this point home with the release of the new line of RTX 2070 and RTX 2080 graphics cards. Based on the company’s freshly announced Turing architecture, these cards are the first consumer-priced products to offer real-time ray tracing, a long-sought goal in the world of computer graphics and visualization. To achieve that goal, however, it took advancements in graphics technologies as well as in deep learning and AI.

Ray tracing essentially involves the realistic creation of digital images by following, or tracing, the path that light rays would take as they hit and bounce off objects in a scene, taking into consideration the material aspects of those objects, such as reflectivity, light absorption, color, and much more. It’s a very computationally intensive task that previously could only be done offline, not in real time.
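To make the concept concrete, here is a deliberately minimal sketch of the core step: cast a ray, find the nearest sphere it hits, and shade the hit point by how directly it faces the light. The tiny scene and the simple diffuse shading are illustrative assumptions only; a real renderer runs this logic for millions of rays per frame and recurses for reflections and shadows, which is why the workload is so heavy.

```python
import math

# A tiny scene: (center, radius, base_color) for each sphere.
SPHERES = [((0.0, 0.0, -5.0), 1.0, (1.0, 0.2, 0.2)),
           ((1.5, 0.5, -6.0), 1.0, (0.2, 0.2, 1.0))]
LIGHT_DIR = (0.577, 0.577, 0.577)  # unit vector pointing toward the light

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def hit_sphere(origin, direction, center, radius):
    """Distance t along the ray to the sphere, or None on a miss
    (direction is assumed to be a unit vector)."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def trace(origin, direction):
    """Follow one ray: find the nearest hit, then apply diffuse shading."""
    nearest = None
    for center, radius, color in SPHERES:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and (nearest is None or t < nearest[0]):
            nearest = (t, center, radius, color)
    if nearest is None:
        return (0.0, 0.0, 0.0)  # ray escaped the scene: background
    t, center, radius, color = nearest
    point = tuple(o + t * d for o, d in zip(origin, direction))
    normal = tuple((p - c) / radius for p, c in zip(point, center))
    brightness = max(0.0, dot(normal, LIGHT_DIR))  # Lambertian term
    return tuple(ch * brightness for ch in color)

# One primary ray per pixel; a full renderer loops this over the image plane.
print(trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0)))
```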

What was particularly interesting about the announcement was how Nvidia ended up solving the real-time ray tracing problem—a challenge that they claimed to have worked on and developed over a 10-year period. As part of their RTX work, the company created some new graphical compute subsystems inside their GPUs called RT Cores that are dedicated to accelerating the ray tracing process. While different in function, these are conceptually similar to programmable shaders and other more traditional graphics rendering elements that Nvidia, AMD, and others have created in the past, because they focus purely on the raw graphics aspect of the task.

Rather than simply using these new ray tracing elements, however, the company realized that they could leverage other work they had done for deep learning and artificial intelligence applications. Specifically, they incorporated several of the Tensor cores they had originally created for neural network workloads into the new RTX boards to help speed the process. The basic concept is that certain aspects of the ray tracing image rendering process can be sped up by applying algorithms developed through deep learning.

In other words, rather than having to use the brute force method of rendering every pixel in an image through ray tracing, other AI-inspired techniques like denoising are used to speed up the ray tracing process. Not only is this a clever implementation of machine learning, but I believe it’s likely a great example of how AI is going to influence technological developments in other areas as well. While AI and machine learning are often thought of as delivering capabilities and benefits in and of themselves, they’re more likely to provide enhancements and advancements to other existing technology categories by accelerating certain key aspects of those technologies, just as they have to computer graphics in this particular application.
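As a toy illustration of that trade (and only that; NVIDIA’s denoiser is a trained neural network, not the box filter used here), the sketch below fakes a low-sample ray-traced image whose noise shrinks with sample count, then reconstructs it with one cheap neighborhood-averaging pass instead of tracing many more rays. NumPy is assumed.

```python
import numpy as np

def noisy_render(height, width, samples_per_pixel=4, seed=0):
    """Stand-in for a low-sample ray-traced image: a clean gradient 'scene'
    plus Monte Carlo noise that shrinks as the sample count grows."""
    rng = np.random.default_rng(seed)
    signal = np.tile(np.linspace(0.0, 1.0, height)[:, None], (1, width))
    noise = rng.normal(0.0, 1.0 / np.sqrt(samples_per_pixel), (height, width))
    return np.clip(signal + noise, 0.0, 1.0)

def box_denoise(image, radius=2):
    """Average each pixel with its neighbors, a crude stand-in for a
    learned denoiser reconstructing a clean frame from few samples."""
    padded = np.pad(image, radius, mode="edge")
    h, w = image.shape
    k = 2 * radius + 1
    out = np.zeros_like(image)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

noisy = noisy_render(64, 64, samples_per_pixel=4)  # cheap but grainy
clean = box_denoise(noisy)                         # one reconstruction pass
```

The economics are the point: a handful of samples plus one reconstruction pass can stand in for the many additional rays per pixel that brute-force convergence would demand.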

It’s also important to remember that ray tracing is not the only type of image creation technique used on the new family of RTX cards, which will range in price from $499 to $1,199. Like all other major graphics cards, the RTX line will also support more traditional shader-based image rasterization technologies, allowing products based on the architecture to work with existing games and other applications. In fact, to leverage the new capabilities, games will have to be specifically designed to tap into the ray tracing features—they won’t simply show up on their own. Thankfully, it appears that Nvidia has already lined up some big-name titles and game publishers to support its efforts. PC gamers will also have to think specifically about the types of systems that can support these new cards, as they are very power hungry, demanding up to 250W on their own (and a minimum 650W power supply for the full desktop system).

For Nvidia, the RTX line is important for several reasons. First, achieving real-time ray tracing is a significant milestone for a company that’s been highly focused on computer graphics for 25 years. More importantly, though, it allows the company to combine what some industry observers had started to see as two distinct business focus areas—graphics and AI/deep learning/machine learning—into a single coherent story. Finally, the fact that this is its first major gaming-focused GPU upgrade in some time can’t be overlooked either.

For the tech industry as a whole, the announcement likely represents one of the first of what will be many examples of companies leveraging AI/machine learning technologies to enhance their existing products rather than creating completely new ones.

Should Facebook Create Vertical Channels to Survive?

When Facebook started out, its mission was to be a social network that would connect college classmates and friends, and it soon also caught on as a social medium for communicating with family and loved ones all over the world. While I was not a college student by any means when I joined Facebook a year after it launched, my reason for joining was to keep up with family, friends, and business associates.

For the first five years of Facebook’s existence, this was its primary audience, and it catered to these people’s needs by adding contextual content and contextual ads. By 2010, the audience had hit close to 220 million users, and the types of people using the platform started to expand exponentially. Businesses, media outlets, brands, and other organizations began to discover that Facebook was an excellent way to reach their customers, and they started taking Facebook from its social media roots toward something more like a publishing platform for content that allowed people to interact directly with any Facebook member.

However, I believe that it was the Arab Spring in January 2011 when Facebook moved from being a social media platform to becoming a full-fledged publishing platform. Since then it has expanded partnerships with all types of media publications, businesses, and brands, and it gets most of its revenue from ads. Because of the role it played in the Arab Spring uprising, Facebook morphed even further into the world of politics and allowed all types of players to make political comments, place political ads, and spread fake news, much of it political in nature.

The Arab Spring ended up serving two purposes for those with political agendas. First, it provided a rallying cry for those supporting political action and, in the case of the Arab Spring uprising, a call to arms that toppled the leadership in Egypt.
However, it also gave those with opposing positions a new vehicle to spread their agenda, and they took to Facebook to promote their views using any means possible, including propaganda and false news tailored to support their position.

I believe that under Facebook’s current rules and regulations, the role it plays in influencing political agendas cannot be addressed, and it needs to adjust its rules toward a more publishing-focused business model to continue to grow. If it operated more like a publishing platform, applying the kind of journalistic standards used by the top newspaper and magazine publishers today, it could get control of what type of material reaches its customers.

I realize this is very controversial, but I no longer believe Facebook can thrive under its current policies. For example, can you imagine Alex Jones ever being allowed to publish his content in the New York Times or the Wall Street Journal? He would never be allowed to do this because these publishers have a code of ethics and rules that govern what can and cannot be published on their pages. That is why I believe Facebook has to come to grips with its role as a publishing medium and put stricter controls in place to keep false news and propaganda off its site, the same way mainstream publishers control their content today.

Another way Facebook could keep growing, even if it adds stricter rules and controls around its main site, is to develop what I call vertical channels that become spin-outs from Facebook itself. If you look at Instagram, one of its properties, you could consider it a vertical channel now: its focus is just on sharing pictures. To a degree, the Oculus program, with its dedicated apps and services, can be viewed as a vertical channel too, although it will eventually play a key role inside Facebook’s VR rooms in the future.

If you broadly scan Facebook today, you see posts from people showing off DIY projects, food and recipes, and all types of hobbies and interests. At the moment these are not organized or even grouped into dedicated like-minded communities. However, what if they were? What if Facebook had a channel just for those who love Italian food and recipes and brought together people on Facebook to participate? It would attract ads from companies touting Italian food supplies or travel to Italy. As a diver, I would like to find like-minded diver friends with whom to share our interests and see what’s new in dive gear and related products and services, such as dive trip locations and diving holiday packages.

I realize there are already dozens of these vertical sites for food, diving, and more. However, imagine if Facebook could tap into the special interests of its 2.5 billion users and bring millions of them together around a dedicated hobby or interest. It could drive even more targeted revenue and allow Facebook to diversify beyond its current social media focus which, as I stated above, needs to be recognized as a publishing platform to give the company more control over what content can and cannot be posted on its site.

Facebook has gone well beyond its social network focus and is much more than that for all types of people and groups. However, without stricter rules and regulations guiding its future, I do not think it can continue to grow. In my view, putting in place more controls that mirror the way publishers deal with the content allowed on their sites would be the first step to stem the tide of people leaving the platform or becoming less engaged. Adding vertical channels, meanwhile, could be the ticket that keeps Facebook growing while still serving, and not angering, its current users.

Podcast: NVidia Turing, ARM CPUs, AMD Threadripper, Intel AI

This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell discussing major developments in the semiconductor industry, including the announcement of NVidia’s Turing GPU architecture and the company’s quarterly earnings, the debut of ARM’s CPU roadmap for PCs, the impact of AMD’s new Threadripper CPU and their datacenter plans, and Intel’s new AI developments.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Cortana and Alexa: The Next Step Forward for Voice

This week Amazon and Microsoft announced the rollout of Alexa and Cortana integration. First discussed publicly one year ago, the collaboration represents an important step forward for smart assistants today and voice as an interface in the future. I’ve been using Alexa to connect to Cortana, and Cortana to connect to Alexa, and while it’s clearly still in the earliest stages of development, it generally works pretty well. The fact that these two companies are working together—and other notables in the category are not—could offer crucial clues about the ways this all plays out over time.

Cortana, Meet Alexa
Enabling the two assistants to talk to each other is straightforward, assuming you’re already using both individually. You enable the Cortana skill in the Alexa app and sign into your Microsoft account. Next, you enable Alexa on Cortana and sign into your Amazon account. To engage the “visiting” assistant, you ask the resident one to open the other. So you ask Alexa to “open Cortana” and Cortana to “open Alexa.” In my limited time using the two, I found that accessing Cortana via Alexa on my Echo speaker seemed to work better than accessing Alexa via Cortana on my notebook. Your mileage may vary.

One of the biggest issues right now is that it gets quite cumbersome asking one assistant to open the other so that you can then ask that assistant to do something for you. One of the reasons Alexa has gained such a strong following—and is the dominant smart assistant in our home (four Dots, two Echos, and two Fire tablets and counting)—is that it typically just works. The reason it just works is that Amazon has done a fantastic job of training us Echo users to engage Alexa the right way. It’s done this by sending out weekly emails that detail updates to existing skills as well as introduce new ones. Alexa hasn’t so much learned how we humans want to interact with her; instead, we’ve adapted to the way she needs us to interact with her.

The issue with accessing Alexa through Cortana is that we lose that simplicity. I found myself trying to remember how I needed to engage Alexa while talking to the microphone on my notebook (Cortana). The muscle memory I’ve built around using Alexa kept getting short-circuited when I tried to access it through Cortana. I suspect this will self-correct with increased usage, but it’s obviously an issue today.

That said, even at this early stage, the potential around this collaboration is clear and powerful.

Blurring of Work and Home
We all know that the lines between our work lives and home lives are less clear than ever before. Most of us use a combination of personal and work devices throughout the day, accessing both commercial and consumer apps and services. But when it comes to smart assistants, the lines between home and work have remained largely unblurred. As a result, today Amazon has a strong grip on the things I do at home, from setting timers to listening to music to accessing smart-home devices such as connected lightbulbs, thermostats, and security systems. But Alexa knows very little about my work life. Here, I’d argue, Microsoft rules, as my company uses Office 365, and Cortana can tap into my Outlook email and calendar, Skype, and LinkedIn, among other things.

During my testing, I did things such as ask Alexa to open Cortana and check my most recent Outlook emails, or to access my calendar and read off the meetings scheduled for the next day. Conversely, I asked Cortana to open Alexa and check the setting of my Ecobee smart thermostat and to turn on my Philips Hue lights.

Probably the biggest challenge around this collaboration, once we get past the speed bump of asking one assistant to open another, is the need to discern individual users and then address their privacy and security requirements when working across assistants. Now that I’ve personally linked Alexa and Cortana, anyone in my house can ask Alexa to open Cortana and read off the work emails that previously were accessible only through Cortana (on a password-secured notebook). That’s a security hole they need to fill, and soon. The most obvious way to do this is for each of these assistants to recognize when I am asking for something versus when other members of my household (or visitors) are doing it.

Will Apple, Google, and Samsung Follow?
It makes abundant sense for Amazon and Microsoft to be first into the pool on this level of collaboration. While the two companies obviously compete in many markets, Cortana and Alexa represent an area where I’d argue both sides win by working together. I look forward to seeing where the two take this integration over the next few years.

But what about the other big players? Among the other three serving primarily English-speaking markets, I could imagine Samsung seeing a strong reason to cooperate with others. Its Bixby trails the others in terms of capabilities, but the company’s hardware installed base is substantial. At present, however, it seems less likely that either Apple with Siri or Google with Google Assistant would be interested in joining forces with others. With a strong position on the devices most people have with them day and night (smartphones), both undoubtedly see little reason to extend an olive branch to the competition. Near term, this might be the right decision from a business perspective. But longer term, I’m concerned it will slow progress in the space and lead to high levels of frustration among users who would like to see all of these smart assistants working together.

News You Might Have Missed: Week of August 17, 2018

Google to Open Retail Store in Chicago

Google is planning a two-level store in Chicago’s Fulton Market district, its first known location for a retail flagship. The technology giant is close to finalizing a lease for almost 14,000 square feet on the first and second floors of several connected, two-story brick buildings between 845 and 853 W. Randolph St., according to sources.

Via Chicago Tribune 

  • As Google gets more into the hardware business, having a retail presence is not a bad idea. Apple, of course, is the benchmark when it comes to turning a move into retail into a strong success, both in marketing and in revenue.
  • That said, I would not look at this move by Google as an attempt to mirror Apple but more an attempt to mirror Amazon and its Amazon Books retail spaces that we have seen pop up over the past year or so. If you have ever been to any of those stores, you know that the whole front part is dedicated to Alexa and the home.
  • I say that because I really believe that this move for Google has way more to do with showing off a smart home governed by Google than selling a few phones.
  • Experiencing a smart home is still very hard today. Spaces in Best Buy, Home Depot, and Lowe’s help somewhat but are a far cry from what a consumer needs today to understand and buy into the promise of a connected home.
  • At the last product launch in San Francisco, Google set up the space like a home showing off Google Home, Chromecast connected to a TV, Nest products and more. I would expect a similar experience to take up much of this new retail space.
  • One swallow doesn’t make a summer, and one store does not make a retail presence. It will be interesting to see what the next move is for Google. Will Google follow a traditional rollout with a prime-real-estate presence in the big cities, or will it open the kind of pop-up stores we have seen Amazon open in shopping malls? The latter might be a much easier and cheaper way to have a presence in several cities.

#BreakingMyTwitter

In a company email shared today, Twitter cited “technical and business constraints” that it can no longer ignore as the reason behind the APIs’ shutdown. It said the clients relied on “legacy technology” that was still in a “beta state” after more than 9 years and had to be killed “out of operational necessity.” The company’s email also says it hopes to eventually learn “why people hire 3rd party clients over our own apps.”

Via TechCrunch 

  • This has not been a good week for Twitter. First defending Alex Jones’ presence on Twitter and now this.
  • Developers have been left high and dry by Twitter before, mostly with the reasoning that Twitter has its own app, which makes third-party apps unnecessary.
  • Unfortunately, however, when it comes to power users, Twitter’s own app does not cut it. Managing lists, syncing across devices, and simply having a linear timeline are hard to achieve on the official Twitter app.
  • Tweetbot, one of the most popular third-party clients, sent out an update listing all the features that would have to be turned off because of this change. The list included the Watch app, push notifications for likes, retweets, follows, and quotes, timeline streaming, and push notifications for mentions and direct messages now being delayed by a few minutes.
  • While I understand why Twitter wants users to engage with its own app – advertising – it is disappointing that the solution is not to make that app superior to anything else but rather to cripple third-party apps.
  • Twitter, which bought TweetDeck years ago, let that app basically die, as no enhancements were ever made, and the main Twitter app remains pretty basic even today. There is not even a Twitter app for macOS.
  • Users who are prepared to pay money for an app are clearly engaged with the platform, which means this move is hurting the most profitable users Twitter has.
  • As John Gruber pointed out, people willing to pay for an app like Tweetbot would either put up with ads or even pay a fee to use the apps. Both are opportunities for Twitter.
  • Pushing consumers to use the main Twitter app, at a time when many already feel the toxicity of the platform is taking away too much of its pleasure and productivity, might just be the straw that breaks the camel’s back.
  • The lack of a sense of responsibility Twitter has shown towards developers reveals a deep misunderstanding of the role these apps play in user engagement.
  • The statement in the letter about wanting to understand why people use third-party apps shows even less understanding of how diverse the user base is and how a one-size-fits-all approach will only hurt. Ironically, the executive who shared the internal email did so using a third-party Twitter app.

New Threadripper Puts AMD in Driver Seat for Workstations

AMD kicked off a CPU core-count race when it released the first-generation Ryzen processor back in 2017, pushing out a product with 8 cores and 16 threads, double that of the equivalent platform from Intel. It followed that same year with Ryzen Threadripper, an aggressive name for an aggressive product aimed at the high-end enthusiast market and the growing pro-sumer space of users looking to do both work and play on personal machines. Threadripper went up to 16 cores and 32 threads, well above the 10-core designs that Intel offered in the same market space.

AMD was able to do this quickly and cost effectively by double dipping on the development cost of the EPYC server processor. It shared the same socket and processor package design with only a handful of modest modifications to make it usable by end-users and partners. It was putting the pressure on Intel once again, this time in a market that Intel was previously the dominant leader in AND that it had created to begin with. Thus continued the “year of AMD.”

Intel did respond, offering a revision to the Core X-series of processors that reached up to 18 cores and 36 threads, one-upping the AMD hardware in core count and performance. But it did so at a much higher cost; it seemed that Intel was not willing to undercut its own Xeon workstation line in order to return the pressure on AMD. But the battle had started: the war of processor performance and core count had begun.

This month, just a year after the release of the first Threadripper processor, AMD is launching the second-generation Threadripper. It utilizes the updated 12nm “Zen+” core design with better clock scaling, improved thermal and boost technologies, and lower memory latencies. This is the same core found in the Ryzen 2000-series of processors, but with two or four dies at work rather than a single die.

But this time, AMD has divided Threadripper into two sub-categories, the X-series and the WX-series. The X-series peaks with the 2950X and targets the same users and workloads as the first-generation platform including enthusiasts, pro-sumer grade content creators, and even gamers. The core counts reach 16, again the same as the previous generation, but the addition of the “Zen+” design makes this noticeably faster in nearly every facet, with a lower starting price point.

The WX line is more distinctive. It goes directly after workstation users, as the “W” would imply, with as many as 32 cores and 64 threads on a single processor. Applications that can really utilize that much parallel horsepower are limited to extremely high-end content creation tools, CAD design, CPU-based rendering, and heavy multi-tasking. The WX-series is essentially an EPYC processor with half the memory channels, paired with consumer-class motherboards.

Performance of the 2990WX flagship part is getting a lot of attention, mostly positive but with some questions. It obviously cuts through any multi-threaded application that properly utilizes and distributes its workload, but it also does well in single-threaded tasks thanks to AMD’s Precision Boost 2 capability. There are some instances, though, where applications, even those traditionally known as multi-threaded tests, show performance regressions.

In software where threads may bounce around from core to core, and from NUMA node to NUMA node, results are sometimes lower on the 2990WX than on the 2950X, even though the WX model has twice the available processing cores. Gaming is one such example – it isn’t heavy enough on the processor to saturate the cores, so threads move between the four dies and two memory controllers, occasionally causing a performance hit. AMD offers a software-enabled “game mode” for the 2990WX (and the 2950X) that disables one-half or three-quarters of the cores on the part, which alleviates the performance penalty but adds an extra step of hassle to the process.
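For a sense of what “game mode” is working around, here is a minimal sketch of the same idea applied by hand: pinning a process to one die’s cores so the scheduler cannot bounce its threads across NUMA nodes. It assumes Linux (os.sched_setaffinity is Linux-only) and a hypothetical topology in which cores 0-15 share a die; on a real system the core numbering should be read from the machine itself.

```python
import os

# Hypothetical layout: cores 0-15 sit on one die/NUMA node. On a real system,
# read the topology from /sys/devices/system/node or a tool like lstopo.
LOCAL_NODE_CORES = set(range(16))

def pin_to_local_node(pid: int = 0) -> None:
    """Restrict a process (0 = the calling process) to one node's cores,
    keeping its threads close to a single memory controller."""
    os.sched_setaffinity(pid, LOCAL_NODE_CORES)

pin_to_local_node()
print("Now running on cores:", sorted(os.sched_getaffinity(0)))
```

Game mode achieves a similar effect by disabling cores outright, without asking the user to reason about the topology.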

Despite the imperfection, the second-generation Threadripper processor has put Intel in a real bind.

If Intel executives were angry last year when the first Threadripper parts were released, taking away the performance crown even if only for a modest amount of time, they are going to be exceptionally mad this time around. AMD now offers content creators and OEMs a 32-core processor on a platform where Intel only provides an 18-core solution, and in applications where that horsepower is utilized, AMD has a 60%+ performance advantage.

Intel is probably planning a release of its Xeon Scalable-class parts for this same market, with a peak 28-core solution to address Threadripper, but this means another expensive branding exercise, new motherboards, a new socket, and more hassle. Intel demonstrated a 28-core processor on stage at Computex but received tremendous blowback for running it in an overclocked state and apparently omitting that information during the showcase.

While there might be a legitimate argument to be made about the usefulness of this many processor cores for a wide range of consumers, there is no doubt that AMD is pushing the market and the technology landscape forward with both this and the previous generation Threadripper launches. Intel is being forced to respond, sometimes quickly and without a lot of tact, but in the end, it means more options and more performance at a lower price than was previously available in the high-end computing space.

It’s good to have competition back once again.

The Value of Smartphones

Over the past few weeks, I have been asked a lot whether the prices of smartphones will continue to increase and whether such an increase is justified. The success of the iPhone X took by surprise those who said people would never pay $1,000 for a phone. The iPhone X also gave smartphone vendors hope that, while unit sales might be plateauing, there is an opportunity to grow average selling prices and possibly profits. Yet the success of the iPhone X must be considered with some caution. Not everybody is prepared to pay that kind of money for a phone and, even more importantly, not every brand can charge as much.

The Bill of Materials is growing

It should not come as a surprise that the cost of making phones is rising. Smartphones have come to offer as much as a PC does, sometimes even more: storage, screen quality, more sensors, bigger batteries, premium materials, cameras, and a lot more software. While some of these technologies are well established, so their cost has come down, others are cutting edge and add a fair chunk to the cost. Think, for instance, of the different biometric solutions, from iris scanning to fingerprint readers.

Let’s look at the two trend setters in the market to see what has been happening over the past year. According to the teardown analysis conducted by IHS Markit, Apple’s total cost to make the iPhone 8 Plus rose to $295.44, $17.78 higher than that of the iPhone 7 Plus. IHS Markit also estimated that the iPhone 8 bill of materials is $247.51, or $9.57 higher than the iPhone 7 at the time of release. The Samsung Galaxy S9+ (64GB) carries a bill of materials (BOM) cost of $375.80, much higher than previous versions of the company’s smartphones; the preliminary estimated total is $43.00 higher than the cost of the Galaxy S8+. It is too early for a teardown of the Samsung Galaxy Note 9, and we know nothing about the iPhone X successor, but we do know the 64GB iPhone X model carries a BOM of $370.25, and betting on a higher BOM wouldn’t be a bad idea.
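A quick sanity check on those IHS Markit figures, deriving each prior-generation BOM from the quoted delta and expressing the increase as a percentage:

```python
# (new BOM, year-over-year increase) from the IHS Markit estimates quoted above.
boms = {
    "iPhone 8 Plus": (295.44, 17.78),  # vs. iPhone 7 Plus
    "iPhone 8":      (247.51, 9.57),   # vs. iPhone 7
    "Galaxy S9+":    (375.80, 43.00),  # vs. Galaxy S8+
}
for model, (new, delta) in boms.items():
    old = new - delta
    print(f"{model}: ${old:.2f} -> ${new:.2f} (+{delta / old:.1%})")
# iPhone 8 Plus: ~+6.4%; iPhone 8: ~+4.0%; Galaxy S9+: ~+12.9%
```

The Galaxy S9+’s double-digit jump is the clearest sign of how quickly flagship BOMs are climbing.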

The Return on Investment is High

We understand now why the cost of phones at the high end of the spectrum is going up. But why are consumers prepared to pay those rising prices? The answer lies in the return users see in the phone they buy.

Smartphones have become a must-have for most. They have replaced other consumer electronics such as MP3 players, digital cameras, video cameras, and portable navigation devices, as well as some other things like watches, alarm clocks, and wallets. We use them throughout the day, every day, whether we are at home, commuting, at school, or at the office. Our dependence has grown so much that we have started to talk about addiction. Whether you are addicted or not, there is no question that a lot of value is placed on this thing we carry in our pockets.

Adding to the practical side of what the phone does for us, there is a more irrational value we see in these devices. The pictures we store, and for some even the music, offer a deep emotional connection to the piece of hardware. While your computer can store the same things, the phone has the huge advantage of being the thing in your pocket you always pull out, much like you used to do with those pictures in your wallet. Plus, as much as your phone is the same as everybody else’s, you feel you have made it yours through your pictures, your apps… You even start to feel that your phone knows you!

No Sticker Shock

The smartphone market is not that different from the car market where the price spectrum is more and more polarized. The higher end is getting more expensive while the lower end is getting cheaper and more reliable.

Moving from contracts to installment plans helped consumers appreciate that not all phones cost $199. But even now, consumers do not have to face the full price of a phone in one go. The biggest increase they see is in the initial payment, which includes tax, but even on a $100 price increase that amount is negligible, as the sketch below shows.
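As a back-of-the-envelope illustration (the 24-month term and 8% sales tax are hypothetical figures, not any carrier’s actual plan):

```python
# Hypothetical installment-plan math: spread a $100 price increase over 24 months.
TERM_MONTHS = 24   # assumed installment term
TAX_RATE = 0.08    # assumed sales tax, typically due up front

for price in (899.0, 999.0):
    monthly = price / TERM_MONTHS
    print(f"${price:,.0f} phone: ${monthly:.2f}/month, up-front tax ${price * TAX_RATE:.2f}")
# The $100 jump costs about $4.17 more a month and $8 more in up-front tax.
```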

Buyers could object to the price more on principle: it is really the idea of spending $1,000 that seems ridiculous to some. This is precisely where the weight one puts on the usefulness of the device, together with the emotional connection I mentioned, will determine whether you are a “you must be kidding me” or a “where do I sign” buyer.

The Power of a Brand

There is also a final component that plays a big role at the high end, and that is brand. The brand is what turns the device into a status symbol, something some consumers are prepared to pay more for. And I am not talking about the technology, the design, or the quality that goes into the devices made by these brands. I literally mean the name, the logo.

In the smartphone world, this is true for Apple and Samsung, and possibly Google. Consumers see these brands as leaders and are willing to pay more for their products. Other brands, like Huawei or Xiaomi, while getting recognition for their technology advancements or design, have not quite earned the right to grow their price tags as much.

How far prices can continue to grow is hard to say, but I do not see this formula of rational plus irrational value, plus brand, changing much over the next five years.


The Shifting Nature of Technology at Work

In the business world, technology products and solutions have played an important role in some companies for several decades. In today’s era, however, it’s safe to say that technology plays a critical part in almost every company, regardless of its size. From key infrastructure systems that serve as the backbone of modern commerce to the enormous range of smart devices through which many of us perform our labors, technology’s role has become pervasive and deeply influential.

As a result of their evolution, commercial technology products provide everything from older “legacy” solutions (several of which still play surprisingly important roles in many organizations), to new platforms and solutions that are digitally transforming businesses of all types.

Of course, along with the growing influence and importance of technology in business has come a nearly crushing reliance on it. Obviously, it’s easy to see some major concerns that can stem from this near addiction, but the commercial dependence on technology has also led to an explosion of new ideas, new technologies, new companies, and new products all designed to ensure better, faster, easier, and more reliable access to the tools we need to get our jobs done.

From cloud computing capabilities offered by companies such as Amazon, Microsoft, Google, IBM, SAP, Oracle, and others, to ruggedized computing devices like Panasonic’s Toughbooks and Dell’s rugged PCs, there’s an amazing range of products and services designed to ensure that we can do computing however, whenever, and wherever we need to. In fact, there’s even a surprisingly diverse set of “services made up of services” offerings from managed service providers like Rackspace or system integrators like Atos or DXC to help companies that don’t have the expertise, or don’t want to deal with the hassle, of setting up things like hybrid cloud environments or building the custom applications necessary to keep their organizations competitive.

Part of the challenge for many organizations is figuring out how to deal with the enormous range of devices, platforms, applications, and services that companies of all sizes are now faced with. Gone are the days of limited choices, as device and platform heterogeneity now rule the day in most organizations. This creates challenges not only to manage and maintain the diverse set of devices that people now use for work, but also to provide a consistent set of applications and services that allow people to work together within a company, with partners, or with other related organizations.

The challenge is not just about the devices. The range of different infrastructure types has also grown dramatically. Internal corporate data centers are still an important part of many organizations, but numerous flavors of cloud computing, co-location services, and other interesting alternatives have created an equally varied set of centralized computing resources.

To bridge these worlds, companies are starting to look for solutions that can deliver a consistent set of data and applications to a wide variety of devices from an equally wide set of infrastructure options. Companies like Citrix and VMware are tackling this by offering “workspace” services that tie together a suite of applications—regardless of whether they’re simple Windows applications, cloud-based SaaS (Software as a Service) apps, HTML5-driven browser-based apps, or even Android or iOS platform-specific solutions.

These new integrated offerings allow organizations to deploy these environments across a wide range of devices and infrastructure architectures. Essentially, it’s the homogenization of very heterogeneous environments. While that may not sound like much, it’s both incredibly difficult to do and incredibly valuable to leverage in the diverse IT environments that even today’s small and medium businesses find themselves in.

The newly released Citrix Workspace, in particular, offers a unified way to deliver applications and data to all employees in an organization, regardless of the unique device and platform combinations they happen to use, as well as the infrastructure environments they have in place. In practical terms, that means those who use everything from Windows PCs, MacBooks, and Chromebooks to Android and iOS-based devices can get access to the applications and data they need to get their jobs done. Long-time Citrix users may recognize this as an advancement of the original Citrix Receiver offering, but there are significant security enhancements in Workspace, particularly around the integrated browser for SaaS and browser-based apps, that make it a more practical solution for today’s security-challenged environments.

The idea of bringing any application and any data to any device has been a dream of IT departments and other technology-focused individuals in businesses around the world for some time. The problem is, actually reaching that dream has been significantly harder and has taken significantly longer than most people (and companies) expected. Finally, however, we are at the point that both legacy systems and modern systems are starting to come together in a way that lets employees get access to whatever applications and data they need to get their jobs done on whatever environment(s) their company has chosen to deploy. It’s been a long time coming, but the practical, real-world benefits of a fully integrated computing environment should finally start to be felt very soon.

The Challenges Facing Online Advertising

I’m fortunate that Tim and Ben provide the opportunity to write about anything technology on this forum. In this column, I want to address two different subjects.

Online Ads
We’ve been convinced that online advertising provides some of the most effective means to sell products, and based on the success of Google, Facebook, and others, we’ve all accepted that premise. We see ads hundreds of times a day as we read the news and visit websites, each tailored to our specific profile and interests based on our browsing habits and other online activities. However, based on my own experience and that of many people I’ve talked with, whatever algorithms or rules are being used to decide which ads to show us are seriously flawed and could be much better.

Two years ago, I visited Harry’s website to read about their razors and blades being sold online. I purchased a starter set and a few months later subscribed. I rarely ever returned other than to check on a shipment. Ever since – for two years running – I have seen Harry’s ads on my computer, phone, and tablet, morning, noon, and night. Dozens of times every day. I’ve frequently clicked the corner of their Google ad where you can report or complain about the ad, and I consistently select “stop seeing this ad” and then the option “seen this ad multiple times,” and get the message from Google “we’ll try not to show this ad again.” However, it seems I only see it more often. I spoke with Harry’s, and their solution was to clear my caches and browsing history, but that hasn’t worked. I’ve also added ad blockers, but these ads are so pernicious, almost like weeds, that they still manage to show up.

If it were only Harry’s, I’d chalk it up to some anomaly. However, it’s happened with a few other items as well. The reward I got for buying a pair of Allbirds shoes a year ago is ads for every kind of shoe almost everywhere I go on the web (in between the Harry’s ads). What’s strange is that I don’t see many other ads repeating, as if Google has typed me as someone who shaves and walks.

One of the significant issues in serving up interest-based ads is that Google doesn’t know when our interest has been satisfied, either with a purchase or with a decision to move on. The exception, of course, is Amazon, which knows the difference between looking and buying and tailors its ads with that knowledge in mind.

I suppose numbers don’t lie, and Google can prove that their ads tailored for each of us are more effective, but they are also much more annoying than random ads. In fact, they’re creepy at times, messaging us that they know where we were online or what we were thinking about. That can be jarring and interrupts us from reading an article or doing other work online. They grab more attention than random ads, just as they are supposed to do. More effective and more annoying.

A Facebook Solution
Having followed and written about Facebook and the mess it has created, it’s very discouraging to see how little Zuckerberg is doing to protect us from Russian interference in our upcoming elections. The latest act of contempt is not replacing Alex Stamos, the company’s respected chief of security, who apparently tangled with Zuckerberg and Sandberg about being more transparent and more aggressive in dealing with these problems. Instead, Facebook said, it is spreading that expertise into individual groups. Anyone who understands organizational behavior knows that this is a way of diffusing responsibility and making it more difficult to hold executives accountable.

Even if Facebook did take these threats more seriously, it might just be that its basic model can never be adjusted to let in the well-intended advertisers while keeping out the bad players. That’s a terrible thought as we approach November. Perhaps there’s only one solution to prevent a tainted election, based on the long tradition of banning campaigning around polling locations on election day.

We should consider having Facebook suspend operations 30 days before the mid-term election. Yes, it may sound outlandish: how could a private company be prevented from operating when no one has the authority to make that happen? But how important is preserving our elections and our democracy? And does anyone have a better idea that would be equally effective?

Lastly, if there’s a common thread between this column’s takes on online ads and on Facebook, it’s that it may be time for an Internet, and a Facebook, supported by paid subscriptions rather than advertising.