Apple’s Changing Relationship With Personal Data
This article is exclusively for subscribers to the Think.Tank.
The new bestseller about Theranos, Bad Blood by John Carreyrou, is a must-read for anyone in the tech world, particularly those in Silicon Valley. Not only is it disturbing in its own right, it is a reflection on Silicon Valley, and not a positive one. How could its founder, Elizabeth Holmes, get away with so much in the middle of the most technology-aware community in the world?
Holmes convinced nearly everyone she came in contact with that she had invented and perfected a revolutionary blood tester that would render the competition obsolete. She did it without ever having to validate her technology. When she finally did ship her product, it was nothing more than a competitor’s tester modified without the required FDA approvals, a product that never worked. Her fraud was so effective that she raised more than a billion dollars and convinced Walgreens to put her testers in its stores, exposing customers to faulty test results and putting lives in danger.
What’s given less attention is how Silicon Valley failed to detect the fraud and gave Holmes her legitimacy. You might excuse some of her board members with no tech or governing experience, but you can’t excuse the professional investors and venture capitalists. You can’t excuse the well-known attorney David Boies and his law firm, who are described as behaving like thugs in their attacks and threats on those trying to reveal the truth. And you can’t excuse the Stanford professors who are supposed to discern truth from fiction.
The people she convinced are a who’s who of cabinet secretaries, professors, investors, politicians, and business people. None of them ever insisted on doing a blood test to compare with a standard test before participating. None of them ever insisted on seeing an FDA approval. None of them ever insisted on an engineering assessment. None of them ever insisted on anything to confirm her claims. They believed it was true because she said so and because it was Silicon Valley.
Because it was Silicon Valley, major publications put her on their front covers and elevated her to the level of Steve Jobs. Because it was Silicon Valley, politicians flocked to meet her and take selfies. They all believed it was true because it was Silicon Valley.
Yes, Silicon Valley is sometimes known for faking it until making it, getting the promise out in front of the solution. But that’s normally an interim step done to raise money, and the outcome is either the promised breakthrough or a failed product that never gets released. It’s unheard of for a company to ship a product that doesn’t work and that puts lives in danger.
Some employees and some in the tech community were skeptical about Holmes’ claims, especially when Walgreens made an investment but was never allowed to see the product in use. But the company’s protective bubble of lawyers, PR firms, promoters and VCs drowned them out for much too long. After all of this and the recent criminal indictments, a few Silicon Valley VCs still think she was wronged and blame her downfall on Carreyrou.
For those that take pride in Silicon Valley’s contribution to the world, the Theranos story is a black mark on the community and hopefully an aberration.
Ben Bajarin is joined by fellow Tech.pinions columnist Mark Lowenstein to chat about AT&T–Time Warner and why we are watching for more industry consolidation through mergers and acquisitions.
This week I attended PTC’s LiveWorx18 conference in Boston, where the company demonstrated some of the ways its customers are leveraging AR technology today. PTC is an interesting company because it has a wide range of solutions beyond AR, and it has done a good job of telling a story that shows how industry verticals can utilize its Internet of Things (IoT) technology as well as its Computer Aided Design (CAD) products to drive next-generation AR experiences.
Back in 2015, PTC purchased the Vuforia business from Qualcomm. Vuforia is a mobile vision platform that uses a device’s camera to give apps the ability to see the real world. It was among the first software developer kits (SDKs) to enable augmented reality on a wide range of mobile devices, long before Apple launched ARKit or Google launched ARCore (today Vuforia works with both of those platforms). Today developers can use it to create AR apps for Android, iOS, and UWP. As a result, there are tens of thousands of Vuforia-based apps in the real world.
In addition to the Vuforia Engine, PTC also has software called Vuforia Studio (formerly ThingWorx Studio) that lets users create AR experiences such as training instructions from existing CAD assets via a simple drag-and-drop interface (I’ve watched PTC executives create new AR experiences on stage during events using this software). Vuforia View (formerly ThingWorx View) is a universal browser that lets users consume that Studio-created content. And Vuforia Chalk is the company’s purpose-built remote assistance app that enables an expert to communicate with an on-site technician and annotate their shared view through an AR interface. Most companies today are utilizing PTC-based technology through mobile devices such as tablets and smartphones already present in the enterprise. But a growing number are testing on headsets from partners including Microsoft, RealWear, and Vuzix.
In addition to these shipping products, the company recently acquired new technology, to be delivered in future products, that enables a person wearing an AR headset to create step-by-step AR experiences (Waypoint) and to later edit those experiences for consumption (Reality Editor). Training is one of the key use cases for AR across a wide range of industry verticals, and this type of software will make it much easier for companies to streamline knowledge transfer between experienced workers and new hires.
IoT Plus AR
I’ve long suggested that one of the powerful things about AR is that it has the potential to let us humans see into the Internet of Things. PTC demonstrated this ability during its keynote. It also showed a very cool example of moving a digitally created control switch from an AR interface to a physical world control panel (in this case, the notebook screen of an IoT-connected machine). The company also created a real, working manufacturing line on the expo floor that demonstrated the integration of IoT, AR, and robots.
There are plenty of companies doing good work in AR today, but one of the things that makes PTC stand out is that its software is straightforward to use, it helps companies leverage many of the digital assets they already have, and it promises to help them make sense of data generated by the IoT.
I attended several of the working sessions during the show, including one on connecting AR to business value. PTC isn’t just talking the talk: During that session, the presenter gave real-world advice to IT decision makers trying to utilize AR in areas such as service, sales, and manufacturing.
The Future Requires Partners
One of the things I like about PTC and its CEO Jim Heppelmann is that the company is confident in its product line but humble enough to know that partnerships are key to building out new technologies such as IoT and AR. In the weeks leading up to the show, and on the keynote stage, the company announced strategic partnerships with companies including Rockwell Automation, ANSYS, and Elysium. And earlier this year it announced a key partnership with Microsoft (PTC even had Alex Kipman, Microsoft Technical Fellow, present the day-two keynote).
As a software company, PTC depends upon hardware partners to bring the next generation of hardware to market. It knows that AR on mobile devices is powerful, but AR on a headset is game-changing for workers who need their hands free to get work done. Like me, executives at PTC are eager, and a bit impatient, to see new hardware from companies such as Microsoft, Magic Leap, and others ship into the market. This hardware is going to be key to moving AR forward in the enterprise. I look forward to seeing what PTC and its partners can do with it once it finally arrives.
On Wednesday, at an event in San Francisco, Instagram announced a standalone app to watch long-form vertical videos. The app is called IGTV and it will also have a dedicated button inside the Instagram app. IGTV will launch on iOS and Android and will allow content creators and the public to post videos that can be up to 10 minutes long to start, growing to a full hour over time.
The market for current generation VR technology is in an interesting place. Many in the field (including analysts like myself) looked at the state of VR in 2015/2016 and thought that the rise and advance of sales, adoption, software support, and vendor integration would be significantly higher than what we have actually witnessed. Though the HTC Vive and Oculus Rift on the PC, as well as Gear VR from Samsung and various VR platforms from Qualcomm do provide excellent experiences in price ranges from $200 to $2000, the curve of adoption just hasn’t been as steep as many had predicted.
That said, most that follow the innovation developments in VR and AR (augmented reality) clearly see that the technology still has an important future for consumer, commercial, and enterprise applications. Let’s be real: VR isn’t going away and we are not going to see a regression of the tech that plagued previous virtual reality market attempts. Growth might be slower, and AR could be the inflection point that truly drives adoption, but everyone should be prepared to consume content and interact through this medium.
There is no shortage of players in the VR/AR market, all attempting to leave their mark on the community. From hardware designs to software to distribution platforms and even tools development, there are a lot of avenues for companies looking to invest in VR to do so. But one company that potentially could have a more significant impact on VR, should it choose to make the investment of budget and time, is Dell. It may not be the obvious leader for a market space like this, but there is an opportunity for Dell to leverage its capabilities and experience to get in on the ground level of disruptive VR technology. There is more Dell can do than simply re-brand and resell whatever Microsoft has determined its direction is for VR.
Here are my reasons why that is the case:
There is no clear answer or path to the future of virtual or augmented reality. It is what makes the segment simultaneously so exciting and frightening for those of us watching it all unfold, and for the companies that invest in it. There are and will remain many players in the field, and everyone from Facebook to Qualcomm will have some say in what the future of interactive computing looks like. The question is, will Dell be a part of that story too?
News broke this morning that Intel’s CEO Brian Krzanich was forced out after the board discovered that he violated the company’s non-fraternization policy with another employee. While this seems like an odd way for him to go, it is a welcome move for many employees, investors, and Intel watchers. Over the past year, I’ve turned extremely bearish on Intel.
One of the more fascinating stories to come out of the recent US and North Korean summit was the fact that N. Korean leader Kim Jong Un carried his toilet with him to the Singapore meeting with President Trump.
According to multiple accounts, Mr. Kim did this so that no person could get access to his stools to be able to test them for DNA and learn anything about his health. This may sound crazy but given modern DNA testing technology and the fact that we can learn a great deal about a person’s history as well as future health issues from these tests, this move by Mr. Kim does make some sense. He has to be very paranoid based on the damage he has done to N. Korean people and their economy. Keep in mind, he is a dictator and has to be in total control of everything, and this appears to trickle down to his toilet habits as well.
Over the past week, I have been playing with the Mirage Solo with Daydream, the standalone VR headset recently released by Lenovo. The headset uses the Daydream VR platform that has been available to the Daydream View headset since 2017. The key difference with the Mirage Solo, as the name gives away, is that you no longer require a phone to experience VR. The Mirage Solo also does not need a PC like the HTC Vive or Oculus Rift. It is, in fact, a direct competitor to the Oculus Go but uses a new technology called WorldSense that allows it to track the world around you, or at least a good square meter or so of it.
Overall I felt that the Mirage Solo delivers a decent experience and I very much appreciate not having to worry about the phone overheating or running out of battery. I also felt the freedom from cables was a welcome improvement to my Oculus experience even though it did not take much moving around before WorldSense would ask to re-center the device. The peace of mind from walking around without worrying about tripping and the instant-on of wearing the headset and starting to enjoy content right away was a good start for me.
Content is where the Mirage Solo shows its weakness. The good news is that out of the box the Mirage Solo has access to all the Daydream apps that are available in the Google Play Store and the YouTube content. The bad news is that the Daydream apps are all there is.
The content is not bad, but it is limited. Some of it really does a disservice to the Mirage Solo, as it lacks the quality someone investing $400 in the device would like to see. And this is the issue. Creating good-quality content for VR is not cheap, and developers might understandably be reluctant to invest while the addressable market is limited. Good-quality content comes at a price, with apps that cost as much as $19.99. As users might first try free or cheaper content, the lack of quality might put them off spending more. I find this to be a problem for the Play Store in particular, as consumers there have historically spent less money and relied on free apps more than in the iOS App Store. Delivering ad-funded apps in VR might also be more complex if you want to stay true to the content, or extremely annoying if you do not!
Lenovo smartly launched the Mirage Camera with Daydream so that users can create their own immersive content by shooting videos they can then enjoy with the Mirage Solo. That $300 price tag, however, might mostly appeal to early adopters.
While AR has similar issues with lack of compelling apps, users are not investing extra money in a device to try AR in the same way VR users do. It seems that the interim step of screen-less viewers is coming to an end and the industry wants to move towards standalone headsets for the mass market which makes content availability even more critical.
As I was trying different apps, I was also left wanting a different in-store purchase experience. With traditional apps, looking at the screenshots and reading the reviews is usually enough to get a sense of how good an app will be. I found that with VR there are way more variables at play.
The target audience age is the first thing you see when looking at purchasing an app, which is pretty straightforward. After that, you are given a sense of how much motion you will experience, which should be an indication of how sick you might feel if you suffer from motion sickness. I do, and I found that the guidance was a bit hit and miss. Aside from those couple of points, you really do not get a sense of how immersive the app will be, from both a realism perspective and an engagement one.
It seems to me that free trials are a must in a VR app store. Apple introduced the ability for developers to offer free trials for subscription apps in 2017, after resisting the idea for quite some time. This would work best for entertainment apps but not necessarily for all VR apps. The shift in spending from new apps to in-app purchases that we have seen over the past couple of years in traditional app stores comes from many developers offering a free app and then opening up levels or features at a price. I am not sure this technique would necessarily work with VR, where a time-based approach might be preferable: you get ten minutes of the full experience before you are asked to pay for the app. Of course, developers can still open up levels and sell cheats, but a watered-down free version of the app might just not be compelling enough to get consumers to want more.
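The ten-minute idea above amounts to a simple time-based gate on the full experience. A minimal sketch, assuming a hypothetical TrialGate class and a 600-second window (the names and the window length are illustrative, not any real store API):

```python
import time


class TrialGate:
    """Hypothetical time-limited free-trial gate: full access for a
    fixed window, then the app asks the user to purchase."""

    def __init__(self, trial_seconds=600, clock=time.monotonic):
        self.trial_seconds = trial_seconds  # e.g. ten minutes of full access
        self.clock = clock                  # injectable clock, useful for testing
        self.started_at = None
        self.purchased = False

    def start_trial(self):
        # Begin the trial window the first time the full experience launches.
        if self.started_at is None:
            self.started_at = self.clock()

    def has_full_access(self):
        # Full access if purchased, or while the trial window is still open.
        if self.purchased:
            return True
        if self.started_at is None:
            return False
        return (self.clock() - self.started_at) < self.trial_seconds
```

A real store implementation would also need server-side timestamps so the window survives reinstalls, but the gating logic itself is this simple.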
I also wonder if subscription services, similar to Xbox Live Gold, might be a good idea for power users, especially at this stage of market adoption when you want users to experience as much as possible and start evangelizing. Of course, big titles will build on the success of their traditional apps and might not need further help to reach success. Yet, I am hoping VR will open up the market to new titles and different experiences.
Overall I see the addressable market for VR coming from a blend of traditional gaming and mobile content consumption, spanning games, video, and educational and productivity apps. The more opportunities mainstream users have to try good-quality content, the more rapid adoption will be; with VR, trying is indeed believing.
I’ve written extensively about the growing trend of unbundling happening to the cable TV bundle. Voices in tech keep highlighting the cyclical nature of this trend where everything that was once bundled becomes unbundled only to be bundled again. The important observation we cannot escape is the inherent value in bundles. Bundles work for a variety of reasons but mostly because once a company has a billing relationship with a customer, it is effortless for them to layer value. So while we are currently in a partial phase of unbundling TV content, the reality is it will all become bundled again quite quickly. But the interesting new wrinkle I see coming is the rise of what I call the super bundle.
I recently wrote about my frustrations with my MacBook keyboard due, in my opinion, to Apple’s obsession with thinness. I found my MacBook keyboard to be just too difficult to use and unreliable, as well. Even after a replacement, random keys continue to become mushy and don’t reliably register. In speaking with friends using recent Macs I hear much the same issue.
For the first time in twenty years, it got me to consider moving to a Windows 10 notebook. I never expected that to happen, because I think the MacOS is elegant, easy to use and visually appealing. It also works well with the iPhone I use. The tipping point came with my spending 2 to 3 hours a day at the keyboard working on a new book. But when I casually looked at what alternatives were available, I was surprised by the latest crop of Windows notebooks.
Costco and the local Microsoft Store had computers from Lenovo, Dell, Microsoft and HP that were beautiful, lightweight, with none of the compromises found on the MacBooks. I had been under the impression that thin and light meant limited ports and a shorter battery life, but that’s not what I discovered.
I eventually picked a Lenovo Carbon X1 with its best-quality 14-inch, 2560 x 1440 non-touch glossy screen. It’s spectacular: almost OLED-like in its sharpness, and intensely bright. The X1 also has a full complement of ports, a memory card slot, and that terrific keyboard.
My biggest reservation in switching notebooks was moving from the MacOS to the Windows 10 operating system. It’s taken me almost two weeks to become comfortable doing most things under Windows, including a visit to the local Microsoft Store for a short class. Clearly, Microsoft is remiss in not offering the migration tools that Google and Samsung do to help iPhone users move to Android.
Switching means abandoning some of the apps that I’ve grown accustomed to on the Mac, such as Mail, Fantastical, Grab, and Contacts. I tried using Outlook for Windows, but in spite of watching YouTube videos from third parties and calls to Microsoft, I’ve not gotten it to work reliably.
I was able to access my Apple iCloud web client and its online apps, but they’re not very robust for frequent use. Fortunately, Apple offers a Windows app to access my iCloud drive, so my documents and photos were readily available. Office for Windows seems slightly better than the Mac version. I decided to use Google’s online calendar, contacts, and email clients. They’ve all improved over time, particularly the new email interface. But you’re still limited to Gmail accounts and I wasn’t able to add my Apple email account.
I found Windows 10 to be much improved compared to the last time I tried it using Windows 8. There are still vestiges of the old version with the large tiles that seem unnecessary and redundant, and there are hidden settings that take some searching to find, such as the Control Panel. But Windows OS also has much-improved aesthetics with a clean, clear interface with many intuitive features. The large Cortana search window provides a powerful search for help on the computer and the web.
I still prefer MacOS, which I’d rate a 90 vs an 80 for Windows, using my arbitrary wine rating scale. The Windows computer hardware, however, beats Apple by a larger margin, 95 vs 70. If I were an Apple MacOS software engineer, I’d be unhappy that my fellow hardware engineers are shortchanging the software by offering products that are well behind the competition. There’s no doubt in my mind that Apple has lost its edge with its latest line of notebook computers and is way behind the Windows offerings. I’m likely not telling them anything they don’t already know. Last time I was at the Apple Store to repair my keyboard, they suggested I’d be better off with a MacBook Air.
One of the challenges of life, regardless of who you are, is the quest to remain healthy. I admit that in my youth, this was not on the top of my list of things to be concerned with. Even into my thirties, I pretty much lived a life of excess and worked way too many hours and traveled for work without any restrictions on my schedule.
At the age of 35, during an annual physical, I showed signs of high blood pressure and minor heart arrhythmia and was told I needed to change my lifestyle. I was also put on a mild BP drug. As I left that doctor visit I was a bit shocked at this news. I was young and felt invincible. But as I aged, and admittedly, I did not change my lifestyle that much given the warnings I received at age 35, my blood pressure issues got worse, my heart problems accelerated, and by age 48, I was diagnosed with Type 2 diabetes. At age 62, I had a heart attack and underwent a triple bypass.
From a genetics standpoint, both my mother and father had blood pressure and heart problems and were pre-diabetic in their later years. However, as we now know, genetics only plays a portion of our health destiny while what we eat, our lifestyles and environmental issues have a real impact on our actual health outcomes at any stage of our lives.
While I was growing up, we had very few tools that could help us monitor our health outside of simple things like scales, blood pressure cuffs we could use at home, and simple thermometers to read our temperatures. But these days we have home blood testing kits to check for various maladies. We have services that analyze our DNA and return all types of data about potential health problems that may lie ahead. We also have smartwatches and fitness bands that monitor our steps, heart rate, and other activities that are then sent to apps like Apple’s Health app, which gives us daily readings on various health data points. I even use the Dexcom G6 Continuous Blood Glucose monitoring system, which gives me my blood sugar readings 24 hours a day, which I can see at a glance on my Apple Watch.
One of the things these new tech tools for health monitoring have done is give people of all ages many ways to self-check and monitor their overall health. I am encouraged that even young people in their teens are using these health monitoring apps early on to try to stay healthy. I am even seeing senior citizens using things like the Apple Watch and fitness bands, although we need to see more of them using these tools in the future, as this generation is still a bit tech-challenged.
There are many companies in tech creating all types of products to keep us healthy and monitor our overall health conditions. However, Apple has taken a major leadership role with its aggressive approach of using the iPhone and Apple Watch to monitor and collect health data. More importantly, it has created a set of tools that anonymously send that data to various health researchers, so they can use it to create better treatments and medications to combat conditions such as multiple sclerosis, heart disease, concussions, melanoma, postpartum depression, and sleep health, for starters.
These tools are HealthKit and ResearchKit.
These tools have three objectives:
Making medical research easier, so understanding disease is simpler.
Getting more participants into studies, so researchers gather more data, which leads to more meaningful results.
Taking research out of the lab and into the real world.
Apple also has another important tool called CareKit, a software framework that allows developers to build medically focused apps that track and manage medical care.
As a professional market researcher, I understand how important data is to understanding various aspects of the tech market I cover. But the kind of data I look for does not deal with life-and-death issues in a human sense. On the other hand, medical researchers desperately need as much data and information as possible about the disease they are researching in order to better understand it and look for ways to treat it and ultimately defeat it altogether.
When Apple introduced the heart study last year, I was one of the first to sign up. As a heart patient for life, I clearly want to have the best solutions for dealing with this disease and if my heart data can help deliver better treatment for all, then I am all in. The data I send to Apple is anonymous and private. Consequently, I did not hesitate to participate in this study. In my discussions with others who have diseases that are tracked via Apple products and HealthKit and ResearchKit, they also seem to be very willing to send that data to researchers via Apple, as they too want to see better ways to treat and possibly cure their particular diseases.
Apple’s role in helping people track their health and then get that data to researchers can’t be overestimated. This is a big deal for Apple and, more importantly, for the health researchers and professionals who need as much help as possible as they tackle the various health issues and diseases they study. I see this as one of Apple’s greatest callings. In last September’s keynote, Apple CEO Tim Cook stated that “healthcare is big for Apple’s future.”
I had a meeting with the retired CEO of a major health organization a few years back and well before Apple declared their strong commitments to health apps and products. In the meeting, he told me that he had been in talks with Apple about their ways of thinking about future health apps and services. Before he left my office, he made a prediction to me. He said, “Apple will emerge as the major company who will change the face of healthcare.” Given the timing of this meeting, which took place not long after Steve Jobs died, his prediction seems prophetic.
We are still in the early stages of this data impacting current research studies on the various diseases I mentioned above. Because these tools can be applied to all types of health conditions, I expect to see more studies taking advantage of Apple’s various health research tools and apps.
We should all be rooting for Apple to succeed with their health initiatives. Of course, it would be good for their business if they are successful, but it would be a bigger win for mankind if they succeed.
Yesterday, Judge Leon ruled that AT&T can acquire Time Warner. In this column, I’d like to discuss the broad implications of the deal, and more specifically what it means for the telecom and mobile landscape.
First off, congratulations to AT&T. They stuck to their guns and didn’t agree to any of the initial, and unreasonable, DOJ terms to sell off pieces of Time Warner to get the deal through. Hopefully, AT&T will be more successful with Time Warner than AOL, which, ironically, now sits in the hands of arch-rival Verizon’s unfortunately named Oath.
Some of the benefits of the deal will be apparent to consumers within a few months. Expect some additional bennies and content bundles for AT&T wireless subscribers. HBO for free, a la T-Mobile’s Netflix offer? In the medium term, marrying the huge Time Warner ad inventory with insights on AT&T-DTV’s customers will create value. It will be a longer-term project to build a more effective ad targeting platform, pulling together the content, ad inventory, and customer data in an effective, and responsible, manner.
AT&T will have to tread carefully. With the tech industry reeling from myriad episodes of inappropriate exposure/use of customer data, the $200 billion AT&T-Time Warner behemoth, which will still be under greater regulatory scrutiny than its Silicon Valley brethren, will have to be both careful and transparent with regard to how that customer data is leveraged. It will also have to abide by the near promises it made during the trial to not discriminate in the provision of Time Warner content to DTV rivals. That said, the TV and rights fees landscape is in turmoil and under pressure, so needles will have to be threaded here.
Against this backdrop, and with uncanny timing, net neutrality was officially repealed this week, smoothing the way for all of the above to be implemented.
The clarity of the ruling and its lack of conditions will help to unleash a wave of M&A activity in the media and content landscape. Most immediately, the bid for 21st Century Fox assets will heat up, with Comcast entering the fray.
I believe this will also ease the path for the T-Mobile/Sprint deal. Just as the TV market has changed hugely with OTT, streaming, and the impact of Netflix, Amazon, Apple, YouTube and so on, so too has the telecom business. Landline is all but dead, broadband is a near monopoly in 50% of the country, and demand for wireless data (driven by video) and the capex to support it remains near insatiable. It is hard to imagine T-Mobile and Sprint competing successfully, independently, and profitably with AT&T and Verizon, long-term. Especially with DISH’s spectrum, Comcast/Charter MVNOs, and possible entry of some Internet/Web giant into the space, as part of the mix.
I think T-Mobile and Sprint can successfully make the argument that the industry landscape has changed significantly since a deal was first broached a few years ago. The biggest benefit of 5G is capacity – in the form of spectrum breadth and depth, and cell site density. T-Mobile and Sprint will be able to do more together than they would do independently (1+1=3, as it were).
5G will be another beneficiary of this evolving telecom/media landscape. Verizon, AT&T, T-Mobile (Layer 3), and Comcast all have important content and video assets, which in addition to driving traffic growth, will also unleash innovation in apps, games, and so on that will form some of the business cases for 5G, such as in AR and VR. This thinking was on display last week at the AT&T Shape conference, which was held in Los Angeles at – wait for it – the Time Warner Studios lot (see my column on that here).
I also think that Verizon, Comcast, and AT&T getting more deeply into content and media will incent some of the major internet players, namely Google, Facebook, Amazon, Apple, and Netflix to be more masters of their own domain with regard to telecom and mobile. At the very least, it will drive the development of edge networking (and hence small cells/data centers) and 5G. One could also envision a deal for DISH’s spectrum, their participation in future spectrum auctions, leveraging Wi-Fi/unlicensed/3.5 GHz spectrum, or some level of MVNO relationship — or some hybrid of all of the above.
The telecom landscape will look less homogeneous going forward. Mobile-centric AT&T looks more like broadband-centric Comcast than it does Verizon. Verizon, with its leadership in 5G, emphasis on 5G FWA, and appointment of former Ericsson CEO Hans Vestberg as its next CEO, has taken a turn toward re-emphasizing the network. It is still in the early stages of truly leveraging its Oath asset, though if it is going to be a serious player in media/content/advertising, there’s more dealing to be done. T-Mobile and Sprint together look the most like a wireless pure play, though I could certainly see how Sprint’s 2.5 GHz spectrum could be leveraged as a potential competitor to broadband in some markets. And as part of the likely M&A acceleration in the telecom/media arena over the next year, one can’t imagine DISH’s spectrum lying fallow for much longer.
This week Microsoft announced that it will introduce a series of changes to Office.com and Office 365. The changes are built on extensive user feedback and aim to focus on simplicity and context.
The initial set of updates includes three changes:
Simplified ribbon – An updated version of the ribbon designed to help users focus on their work and collaborate. People who prefer to dedicate more screen space to the commands will still be able to expand the ribbon to the classic three-line view.
There is a bigger-picture observation to be made in the wake of the AT&T and Time Warner merger/acquisition. It is an observation a long time coming, as we have already seen a number of large mergers and acquisitions go down in the semiconductor industry, with even more on the way. I have continually been predicting the consolidation of the semiconductor industry, and others have been making similar predictions about the media industry. It is worth looking at why this is happening, why it will continue to happen, and what that may mean going forward for startups.
There are a number of interesting trends emerging around video games. On the heels of the gaming industry’s biggest show of the year, E3, I thought it would be a good time to outline the broader trends I see happening that are worth watching.
Gen Z PC Gaming Growth
This is one of the bigger sleeper trends I’m watching. While I’m not ready to completely and boldly state that Gen Z is dumping consoles for PC gaming, it is certainly trending that way. I caught wind of this trend a few summers ago when, all of a sudden, more than a dozen friends and family members from around the country asked my opinion on an affordable gaming notebook for their high-school boy, who wanted a notebook for school but also to play PC games. This piqued my interest, and upon further questioning, I found the gaming desire was driven by many of said teens’ friends starting to play more PC games; they wanted to start playing PC games online with their friends.
I chatted with over a dozen parents, and it was the same story every time: the kid wanted a notebook for school, the kid’s friends were all starting to play more PC games online, so they wanted a gaming notebook for school and to play online with friends. I went on to ask all the parents I talked to about their gaming console. Nearly everyone had an Xbox or PlayStation in the home, and everyone said their kid and their kid’s friends were playing it less and less, instead playing PC games online. In fact, in several instances, the parent (who is around my age, 40, and was a big console gamer like I am/was) chuckled when telling me this anecdote: “my son and his friends think console gaming is for old people.”
It is relevant to this trend to understand a game called PUBG (PlayerUnknown’s Battlegrounds). This game was single-handedly the reason teen males were flocking to PC games and leaving their consoles. Yes, PUBG eventually came to Xbox, but that was not the case at the time. This game enlightened Gen Z about the faster pace of innovation in the PC gaming sector in both hardware and software. Every year your games can get richer and more immersive if you are willing to spend money on a new GPU, and similarly, new games are released and updated with new features faster than on consoles. All of this together makes for a compelling experience for this particular generation.
What I was seeing, with a single game and a social dynamic driving adoption of a gaming platform, was like watching a movie I’d seen before with the original Xbox. I had the privilege of doing some work with the original Xbox group, and the Halo phenomenon was remarkable at the time. For my demographic, playing Halo online with friends in large battlegrounds, able to battle one another as well as other teams, was a brand new experience, and one that was responsible for the first Xbox’s rise to fame.
The similarities between the motivations driving Gen Z to PC gaming and those that drove Gen Y/X to the original Xbox and console gaming are eerie.
Massive Multiplayer Games Going Mainstream
Another interesting trend is how a game like Fortnite may be leading the charge in bringing massive multiplayer online gaming to the masses. Fortnite is a more consumer-friendly take on the PUBG formula, quickly rose to an amazing 2 million concurrent players, and boasts millions of players monthly. While PUBG has similar numbers, much of Fortnite’s growth since its move to mobile has been people playing it on their smartphones and tablets.
I have a hunch the success of Fortnite, which proves consumers are comfortable playing large multiplayer games on their smartphones, may open the floodgates for this type of gaming, specifically in Western markets. What many may not realize is that this is already common behavior in China, where hundreds of millions of people play online games on their smartphones, often in large groups. The genres driving this behavior vary, but I think we may have reached a tipping point where mobile gaming starts to become a driver of global MMO gaming.
This is exciting because we could see new innovation in games and gameplay. PUBG introduced a new game style, called battle royale, with a twist: it starts you off in a massive world but then forces the play area to shrink, bringing players closer together and leading to inevitable battles. It was a fascinating new dynamic that is now being adopted by other games and game types. Fortnite may have opened Pandora’s box on the global mobile gaming opportunity and could lead a wave of new mobile game innovation in both genre and game dynamics.
I know I just covered two completely different ends of the gaming spectrum with both hardcore PC gaming and more approachable mobile gaming, but in some ways they are related, given the battle royale genre is at the center of driving both trends. Ultimately, we may be seeing a new movement toward truly massive global multiplayer games that are playable on all platforms. Imagine a game that every person in the world can play together in massive worlds no matter what device they have: a high-end gaming PC, smartphone, tablet, basic notebook, console, streaming TV box, etc., all enabling a truly global gaming environment. This would be truly remarkable, but entirely possible, and whoever can crack it first would be sitting on a gold mine.
Before the launch of its Zen-architecture processors, AMD had fallen to basically zero percent market share in the server and data center space. At its peak, AMD held 25% of the market with the Opteron family, but limited improvement in performance and features slowly dragged the brand down and Intel took over the segment, providing valuable margin and revenue.
As I have written many times, the new EPYC family of chips has the capability to take back market share from Intel in the server space with its combination of performance and price-aggressive sales. AMD internally has been targeting a 5% share goal of this segment, worth at least $1B of the total $20B market size.
However, it appears that AMD might be underselling its own potential, and Intel’s CEO agrees.
In a new update from analyst firm Instinet, the group met and spoke directly with Intel CEO Brian Krzanich and found that Intel sees the future being brighter for AMD in the data center. Krzanich bluntly stated that Intel would lose server share to AMD in 2018, which is an easy statement to back up. Going from near-zero share to any measurable sales will mean fewer parts sold by Intel.
In the discussion, Krzanich stated that “it was Intel’s job to not let AMD capture 15-20% market share.” If Intel is preparing for a market where AMD is able to jump to that level of sales and server deployment, then the future for both companies could see drastic shifts. If AMD is able to capture 15% of data center processor sales, that would equate to $3B in revenue migrating from the incumbent to the challenger. By no measurement is this merely a footnote.
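The back-of-the-envelope arithmetic behind these figures is straightforward; here is a minimal sketch using only the roughly $20B total market size and the share percentages quoted above (these numbers are the article’s estimates, not official market data):

```python
# Illustrative share-to-revenue arithmetic, assuming the ~$20B total
# data center CPU market size cited in this article.
TOTAL_MARKET_B = 20.0  # total market size, in billions of dollars

def revenue_at_share(share: float, market_b: float = TOTAL_MARKET_B) -> float:
    """Revenue (in $B) implied by capturing a given fraction of the market."""
    return share * market_b

# AMD's internal 5% target implies roughly $1B of revenue;
# the 15% share Krzanich mentioned implies roughly $3B shifting to AMD.
print(revenue_at_share(0.05))  # 1.0
print(revenue_at_share(0.15))  # 3.0
```

This is why even a modest-sounding share gain is material: each additional point of share is worth roughly $200M in annual revenue under this market-size assumption.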
For months I have been writing that AMD products and roadmaps, along with the impressive execution the teams have provided, would turn into long-term advantages for the company. AMD knows that it cannot compete in every portion of the data center market with the EPYC chip family as it exists today, but where it does offer performance advantages or equivalency, AMD was smart enough to be aggressive with pricing and marketing, essentially forcing major customers, from Microsoft to Tencent, to test and deploy hardware.
Apparently Intel feels similarly.
Other details in the commentary from Instinet show the amount of strain Intel’s slowing process roadmap is putting on product development. Intel recently announced during an earnings call that its 10nm process technology, which would allow it to produce smaller, faster, more power-efficient chips, has been delayed until 2019.
Krzanich claims that customers do not care about the specifics of how the chips are made, only that performance and features improve year to year. Intel plans updates to its current process technology for additional tweaking of designs, but the longer Intel takes to improve its manufacturing by a significant amount, the more time rivals AMD and NVIDIA will have to utilize third-party manufacturing advantages to improve their market positions.
This week I spent a couple of days at the Women in Technology Summit hosted by WITI. I was invited to moderate two panels, and rather than just going in for those, I decided to invest some time listening to what other speakers had to say, attending a workshop on how to better communicate with men and build allies, and networking. Over the years, I have attended a few women-in-tech luncheons and breakfasts at broader industry events, but I usually shy away from networking events marketed explicitly at women. This is mostly because I prefer to fight my way into events where the majority of attendees are men, as this is, after all, what best reflects my day-to-day in tech. That said, I think there is power in conversations that happen in an environment where you feel it is safe to be open, and this is precisely what the WITI Summit offered. There is power in sharing stories and opinions, in openly talking about the challenges we face without being concerned about being judged, and with the reassurance that, more often than not, the person you are talking to can relate to what you are saying.
You often hear men complain about a shortage of women in tech: not enough women to keynote at CES, not enough women in tech to follow on Twitter, not enough women in tech to invite as guests on their podcast. Time and time again I see women compiling extensive lists of the talent that is out there if you are willing to look. And by look I mean a quick glance: these women are not hiding under a rock; they are openly visible, doing their thing and demonstrating their awesomeness.
In case you are tempted to believe this shortage nonsense, let me tell you that at the WITI Summit there were over 100, yes one hundred, speakers, panelists, and coaches, and guess what: they were all women. I can bet the organizers did not have to send out search parties to hunt them down, either. What struck me was the quality of the women on stage. They knew what they were talking about, many had science and engineering backgrounds, they were engaged with the audience, they were generous with their knowledge and time, and they genuinely wanted to make a difference.
Something that really struck me in listening to the speakers was that the vast majority of them did not just tell a story or speak hypothetically about a topic, whether that topic was a new technology like AI or the issue of diversity and inclusion in tech. They were prepared, talked with purpose, and always left the audience with something to reflect on. All while rarely mentioning their own personal achievements, other than to make a point.
As I was looking at the crowd in the sessions, I started to notice that the mix looked a little different from what I see at other tech events. Coming fresh from the round of developer events over the past couple of months, and being used to seeing young white women make up a significant proportion of the female mix, I was stunned to find a considerable lack of millennial white women at the summit. There were many millennial minority women in the audience, but it was hard to spot young white women.
I am aware that millennials are the group in which minorities are becoming the numerical majority, but I think there might be something else going on. I do wonder if young white women share my feeling that we should find our place at industry events rather than at events focused only on women. Maybe young white women are, in general, more comfortable with their place in tech, thanks partly to the efforts of those who came before them.
I hate to think that young white women do not want to be part of the conversation about diversity and inclusion. As a matter of fact, I find it hard to believe that is the case. I do wonder, however, whether they might not think there is something to be learned from women who were the first in their company to become CEO, a lead engineer, or a product manager. Of course, the bigger point is that whether or not they think they can learn, or benefit further, from being part of the conversation is somewhat irrelevant. What I do hope is that young white women understand that they, like me, have a responsibility to help and support other women, including women from ethnic minorities.
As speakers shared their stories and coaches shared their knowledge, I was listening to find little nuggets of wisdom, and that is precisely what I found:
Ahalya Kethees, founder of Lead with Brilliance, said: you cannot be truly curious about someone if you are judgmental. I never thought about it this way, but it is true that if you are judging someone, it is hard to keep an open mind and want to know more about what they are talking about or who they are.
Minette Norman, VP of Engineering at Autodesk, said: “stay true to yourself, don’t try and be one of the boys.” I can really relate to this. I tried to fit in by being one of the boys, but it just was not for me, because it was not me. Over the years I found that being me, with my faults and quirks, was the most effective approach to building relationships with clients as well as colleagues.
Several of the speakers urged the audience to go and get a career coach. Apparently, according to a survey IDC ran across WITI members, a male coach will help you get a higher salary more than a female coach will! Not a surprise when you consider that women generally are not good at negotiating their contracts and assessing their worth.
Barbara Nelson, GM & VP at Western Digital, said: “Fight your own battle.” Yes, we need sponsors, advocates, and allies, but we need to be prepared to speak up, ask the hard questions, and fight our own battles.
Lastly, I leave you with my action point: amplify women’s voices. Highlight when one of your female colleagues says or does something smart, retweet and follow other women in tech, and stop a male colleague when he interrupts a woman in a meeting so she gets to finish talking. Let’s not fight among ourselves to get a seat at the table; let’s bring in a chair for someone else when we get there!
When most people think about software for business, they tend to think of things like Microsoft Office. After all, Office is the application suite that many of us spend a great deal of time in during our work days.
In reality, however, productivity suites like Office only represent a small portion of the overall market for software used in businesses and other large enterprises. Some of the biggest categories are things like Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), and Business Intelligence (BI) and analytics. In addition, there are millions of custom applications (many of which are built with these types of tools as a foundation) that play an extremely important role in the operation of today’s businesses.
While Microsoft is an important player in many of these categories, it’s companies like IBM, SAP, Oracle, and Salesforce that are the leaders in many of these lesser-known segments that are commonly referred to as “back office” operations (a historical phrase that stems from many business organizations having the operational teams doing this work physically located in the rear section of office buildings). In fact, companies like SAP have built large businesses creating the tools and platforms that sit at the central operational point for many organizations in areas ranging from supply-chain management to human resources and other personnel systems.
At last week’s SAPPHIRE NOW, SAP’s annual customer conference, the company announced a major entry into the “front office” CRM market with C/4 HANA. The new offering ties together the technology from a number of different acquisitions it has made to create a suite of applications and cloud services that allows sales and marketing people (who typically sat in the “front” part of office buildings) to organize all the critical information about their customers in a single place. C/4 HANA builds on the company’s existing in-memory HANA database architecture, which stores all data and applications in server memory (versus in storage) to speed overall performance.
What’s interesting about the release is the position it holds in the overall evolution of the enterprise software market. For several decades, companies like SAP were strongly associated with old legacy software that ran only in the physical servers within a company’s data center—or “on premise,” as many like to say. The applications were large, monolithic chunks of code that were so complicated, they almost always required external help from large consulting firms and system integrators, or SIs (such as Accenture, CapGemini, the services division of IBM, etc.), to properly install and deploy.
Over the last decade or so, however, we’ve seen companies like SAP and IBM evolve their software architectures and approaches, in large part because of the dramatic rise of cloud-based software companies such as Salesforce.com. The efficiencies, flexibility, and cost-savings enabled by these internet-based business software companies and the new business models they offered—such as Software as a Service (SaaS), Platform as a Service (PaaS), etc.—forced some dramatic changes from the traditional enterprise software vendors. In particular, we saw a dramatic increase in the use of public cloud platforms, such as Amazon Web Services (AWS), Microsoft Azure and Google Cloud, to host and run applications that traditionally only ran in corporate data centers. In addition, we’ve witnessed the dramatic increase of enterprise mobile applications that provide a means to run or interact with business software on our smartphones and other mobile devices.
The new C/4 HANA release is an intriguing example of these many developments because it is a cloud-first set of tools that companies can now run in the public cloud across any of these major cloud platforms, in their own private cloud within their data center, or in a combined “hybrid” cloud model. Architecturally, the suite incorporates a large number of microservices—a dramatically different and more modular structure than older monolithic applications—that offers much more flexibility in terms of how the software can be leveraged, updated, and enhanced. In particular, the ability to plug in new enhancements such as AI and machine learning via SAP’s Leonardo suite of new technologies is indicative of the new approach the company is taking with its software offerings.
At this year’s SAPPHIRE NOW, SAP also announced an SDK (software development kit) that will allow native access to all their services from Google’s Android platform for mobile access. This builds on the work that the company had previously done for iOS and Apple devices.
Even with all these enhancements and long-term evolutionary progress, there’s still no question that the bulk of enterprise software offerings can still be extremely complex and difficult to completely decipher. However, it is also clear that tremendous progress is being made and that, in turn, is helping companies who use these tools improve their efficiencies and enhance the digital readiness of their organizations. As the business environment continues to advance, it’s good to see the toolmakers who’ve supported these companies taking the steps necessary to make these digital transformations possible.
At the end of the last century, the tech world was in a flutter over an unannounced product that appeared to be revolutionary. Dean Kamen, one of the smartest and brightest inventors of the last 50 years, was reportedly working on a new device cloaked in secrecy. It had received great attention in 1999 from VC John Doerr, who speculated that it would be bigger than the Internet, and from Steve Jobs, who originally said it would be bigger than the PC, although he retracted that statement after the Segway came out and was critical of it once it shipped.
This week’s Tech.pinions podcast features Carolina Milanesi, Ben Bajarin and Bob O’Donnell analyzing Apple’s WWDC event in great detail, including new announcements around iOS12, WatchOS5 and MacOS Mojave.
If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast
At this week’s Computex show in Taiwan, Qualcomm announced the next generation of silicon for the Windows on Snapdragon platform. The new chip is called the Snapdragon 850, and rather than simply repurposing an existing high-end smartphone processor, the company has cooked up a modified chip specifically for the Windows PC market. Qualcomm says the new chip will provide a 30 percent system-wide performance boost over the previous generation. I’m pleased to see Qualcomm pushing forward here, as this area will eventually evolve into a crucial piece of the PC market. However, announcing it now, with an eye toward new products appearing by year’s end, puts its existing hardware partners in a very tough spot.
Tough Reviews, And a Short Runway
Qualcomm and Microsoft officially launched the Windows 10 PCs powered by the Snapdragon Mobile PC Platform in December 2017. The promise: By using the Snapdragon 835 processor and related radios, Windows notebook and detachable products would offer instant startup, extremely long battery life, and a constant connection via LTE. Initial PC partners included HP, Lenovo, and ASUS.
Reviews of the three initial products have been mixed at best, with many reviewers complaining about slow performance, driver challenges, and app compatibility. But most also acknowledge the benefits of smartphone-like instant on, the luxury of connectivity beyond WiFi, and battery runtimes measured in days versus hours. I’d argue that the technical issues of rolling out a new platform like this were unavoidable. However, the larger self-inflicted wound here was that nobody did a great job of articulating who these products would best serve. This fundamental issue led to some head-scratching price points and confused marketing. I talked about the missed opportunity around commercial users back in December.
There was also the issue of product availability. While the vendors announced their products back in December, shipments didn’t start until 2018. In fact, while HP’s $1,000 Envy X2 started shipping in March, neither Lenovo’s $900 Miix 630 nor ASUS’s $700 NovaGo TP370QL is widely available even today. Amazon recently launched a landing page dedicated to the Always-Connected Windows 10 PC with a bundled option for free data from Sprint for the rest of 2018. The ASUS product moved from pre-order to available on June 7; Lenovo’s product still has a pre-order button that says it will launch June 27th.
That landing page appears to have gone live just days before Qualcomm announced the 850 in Taiwan, promising new hardware from partners, including Samsung, by the end of the year. Now, if I’m one of the vendors who threw support behind Windows on Snapdragon early, only to have Qualcomm Osborne my product before I’ve even started shipping it, I’m not a happy camper.
Might as Well Wait
As a frequent business traveler, the Windows on Snapdragon concept is very appealing to me. I realize that performance won’t come close to what even lower-end X86 processors from Intel and AMD offer, but I’m willing to make that trade for the benefits. As a result, I expect that for the first few years these types of PCs will be better as companion/travel devices rather than outright replacements for a traditional PC. In my case, I could see one competing for space in my bag with the LTE-enabled iPad Pro I carry today. Except when I carry the Pro, I still must carry my PC because there are some tasks I can’t do well on iOS.
Both the Lenovo and HP products are detachable tablets, whereas the ASUS is a convertible clamshell, which is the form factor I’m most eager to test. I was close to pulling the trigger on the ASUS through Amazon when the Qualcomm 850 news hit. Buying one now seems wasteful, with new, improved product inbound by the holidays. And that’s not the kind of news vendors want to hear.
Now many will say that this is the nature of technology, that something new is always coming next. And while that’s essentially true, this move seems particularly egregious at a time when Qualcomm and Microsoft are trying to get skeptical PC vendors to support this new platform. Plus, we’re not talking about a speed bump to a well-established platform; this is a highly visible initiative with an awful lot of skeptics within the industry. Qualcomm might have decided that the poor initial reviews warranted a fast follow-up; one hopes its existing partners were in on that decision.
Bottom line: I continue to find the prospects of Windows on Snapdragon interesting, and I expect the new products based on the 850 chip will perform noticeably better than the ones running on the 835. But if Qualcomm and Microsoft expect their partners to continue to support them in this endeavor, they’ve got to do a better job of supporting them in return.
The Sonos Beam Provides Options to Broaden Appeal
This week, Sonos launched Sonos Beam, a $399 soundbar that will be available on July 17. Out of the box, Beam comes with Amazon Alexa in the US, UK, Germany, Canada, Australia, New Zealand and soon, in France. Beam will support additional voice assistants as they become available on Sonos around the world and won’t lock owners into specific streaming boxes or services. AirPlay 2 will be available on Sonos in July via a free software update. With AirPlay 2, customers can play music and podcasts from their iOS devices directly on their Sonos speakers, including the new Sonos Beam, Sonos One, Playbase, and the second generation Play:5. And, with a single supported speaker, AirPlay content can be streamed to other Sonos speakers in the system. Customers enjoying AirPlay 2 on Sonos will also gain a new voice experience with the addition of Siri. Ask Siri to play any track, album, or playlist on Apple Music by using an iOS device to start playing on Sonos.
It seems not long ago that 2- and 4-core processors held a seemingly immovable position in the consumer CPU market. Both Intel and AMD had become satisfied with four cores being the pinnacle of our computing environments, at least when it came to mainstream PCs. And in the notebook space, that line was weighted lower, with the majority of thin-and-light machines shipping from OEMs with dual-core configurations, leaving only the flagship gaming devices with H-series quad-core options.
Intel first launched 6-core processors in its HEDT (high end desktop) line back in 2010, when it came up with the idea to migrate its Xeon workstation product to a high-end, high-margin enthusiast market. But core count increases were slow to be adopted, both due to software limitations and because the competition from AMD was minimal, at best.
But when AMD launched Ryzen last year, it started a war that continues to this day. By releasing an 8-core, 16-thread processor at mainstream prices, well under where Intel had placed its HEDT line, AMD was able to accomplish something that we had predicted would start years earlier: a core count race.
Obviously AMD didn’t create an 8-core part and price it aggressively against Intel’s options out of the goodness of its heart. AMD knew that it would fall behind the Intel CPU lineup in many single-threaded, single-core tasks like gaming and productivity. To differentiate, and to be able to claim performance benefits in other, more content-creation-heavy tasks, AMD was willing to spend additional silicon. It provided an 8-core design priced against Intel’s 4-core CPUs.
The response from Intel was slower than many would have liked, but respond it did. It launched 6-core mainstream Coffee Lake processors that closed the gap but required new motherboards and appeared to put Intel out of its expected cadence of release schedules.
Then AMD brought out Threadripper, a competitor to the Intel X-series platforms that it had never had previously. It doubled the core count to 16, with 32 threads available. As a result, Intel moved up its schedule for Skylake-X and released parts with up to 18 cores, though at very high prices by comparison.
Internally, Intel executives were livid that AMD had beaten them to the punch and had been able to quickly release a 16-core offering to steal mindshare in a market that Intel had created and led throughout its existence.
And thus, the current many-core CPU races began.
At Computex this week, both Intel and AMD are beating this drum. The many-core race is showing all its glory, and all of its problems.
Intel’s press conference was first, and the company had heard rumblings that AMD might be planning a reveal of its 2nd-generation Threadripper processors with higher core counts. So it devised an impressive demonstration of a 28-core processor running at an unheard-of 5 GHz on all cores; it’s hard to overstate how impressive that amount of performance is. It produced a benchmark score in a common rendering test that was 2.2x faster than anything we had seen previously in a single-socket, stock configuration.
This demo used a socket previously unused on a consumer platform, LGA3647, built for the current generation of Xeon Scalable processors. The chip is also a single, monolithic die, which does present some architectural benefits over AMD’s multi-chip designs if you can get past the manufacturing difficulties.
However, there has been a lot of fallout from this demo. Rather than anything resembling a standard consumer cooling configuration, Intel used a 1 HP (horsepower) water chiller, with A/C refrigerant and insulated tubing, to get the CPU down to 4 degrees Celsius. This was nothing like a consumer product demo; it was a technology and capability demo. We will not see a product at these performance levels available to buy this year, and that knowledge has put some media, initially impressed by the demo, in a foul mood.
The AMD press conference was quite different. AMD SVP Jim Anderson showed a 32-core Threadripper processor using the same socket as the previous-generation solutions. AMD is doubling the core count for its high-end consumer product line again in just a single year, bringing Threadripper up to the same core and thread count as its EPYC server CPU family.
AMD’s demo didn’t focus on specific performance numbers, though it did compare a 24-core version of Threadripper to an 18-core member of Intel’s currently shipping HEDT family. AMD went out of its way to mention that both the 24-core and 32-core demos were running on air-cooled systems, not requiring any exotic cooling solutions.
It is likely AMD was planning to show specific benchmark numbers at its event, but because Intel had gone the “insane” route and put forward some unfathomably impressive scores, AMD decided to back off. Even though media and analysts who pay attention to the circumstances around these demos would understand that a direct comparison was meaningless, the comparison would have been drawn anyway, and AMD would have come out behind.
As it stands, AMD was showing us what we will have access to later in Q3 of 2018 while Intel was showing us something we may never get to utilize.
The takeaway from both events and product demos is that the many-core future is here, even if the competitors took very different approaches to showcase it.
There are legitimate questions about the usefulness of this many-core race, as the pool of software that can utilize this many threads on a PC is expanding slowly, but creating powerful hardware that offers flexibility to the developer is always a positive move. We can’t build the future if we don’t have the hardware to do it.
As I articulated earlier in the week, Apple’s focus on features that help us be more productive and efficient may not have been the most exciting in terms of brand-new things; however, Apple did give us some signals worth pondering about what the future may hold.