Does iOS 11 Help Fulfill Steve Jobs’ Vision of Making the iPad the Next Mobile Computer?

When Steve Jobs introduced the iPad in 2010, he made a rather bold statement: the iPad, he said, would become the mobile computer of tomorrow. I talked to him right after the iPad was introduced, and he told me that over time the iPad had the potential to replace one’s laptop. He was really excited about this, as you can imagine, and while the iPhone was his biggest hit, he seemed sure of the iPad’s potential to become more than just a tablet over time.

I remember at the time thinking that, if that was true, it could cannibalize their laptop business. But after I began using it, I could see some early glimpses of how that vision could someday become a reality.

Even though the iPad has been on the market for almost 8 years, for most people it has not really replaced their laptop. I know there are exceptions. Many times when I travel, I now take only my iPad, and one of our analysts, Carolina Milanesi, does the same. We both are partial to the 9.7-inch iPad. My son Ben swears by the 12.9-inch iPad, and when traveling it is his go-to machine now. And I know from talking to friends that we are not alone. Many feel the iPad can meet most of their needs in most mobile settings.

In my case, I only use a Mac or PC when I am writing complex documents or working on large spreadsheets. The iPad handles most of my productivity needs when traveling, just not all of them, which is why I have not given up my laptop; it still plays a major role in my work and personal digital life. I travel with a 12-inch MacBook as well as a 9.7-inch iPad, and they are interchangeable depending on what I am doing, although I use the iPad about 70% of the time while on the road.

But three features Apple has added to iOS 11 could change things for many of us who like the small size and extreme portability of a tablet and, in many ways, have wanted the iPad to be more like a Mac.

The first thing in iOS 11 that makes the iPad more powerful is drag and drop. As Jobs told me back in 2010, this starts moving the iPad into the arena of being the dominant mobile computing tool everyone can use. Drag and drop is one of the major features of any PC operating system and something many people have been asking for in iOS for years. Apple showed a good demo of how drag and drop will work at WWDC and, from the demo, it looks and acts much as it does in macOS.

The second thing they introduced is the same file format used on the Mac, making a file you create on iOS compatible with any macOS device. The folder structure is also the same, so the look and feel are very familiar to anyone who already uses the Mac. This is really important since it makes the same files available on both macOS and iOS and, in that sense, interchangeable.

Again, with iOS 11 the iPad is becoming more and more like a Mac.

The third thing Apple did is expand the Dock so you can add more apps to it, which makes working between apps easier. Again, this is very similar to the Dock in macOS and, from the demo they showed, works in a similar way to let you move between apps. This is where drag and drop and the new file format really shine.

This does not mean Apple will eventually kill the Mac or MacBooks. From what I have seen in High Sierra, they continue to make macOS even more powerful, which, in turn, makes their desktops and laptops richer tools for those who need a lot of power for things like graphics and video editing. But you are seeing Apple move the Mac more and more to the upper end of the productivity scale while making the iPad a more powerful tool that can be used successfully in business and for personal use.

Steve Jobs’ vision of making the iPad an ultra-mobile device that you can carry with you all the time, yet is powerful enough to do almost anything you need in terms of personal or business computing, is finally becoming a reality with the added features of iOS 11.

When I talked with Tim Cook at a private event at WWDC, he told me that iOS 11 has the potential to move more iPads into both business and personal use cases and believes it will help drive new iPad sales in a more positive direction. We will find out soon if that is true, since the iOS 11 public beta will be released at the end of June. But if iOS 11 makes these features work as shown at WWDC, I know in my case it would move my use of an iPad up to about 90% and it would become my primary computing tool, a scenario Jobs predicted when he launched the iPad in 2010.

What I Want from Apple’s HomePod

I got a chance to really see and listen to Apple’s HomePod during WWDC, and the quality of the speaker in this device is really amazing. It was demoed against a comparable Sonos speaker and an Amazon Echo, and the HomePod beat them both hands down in overall sound quality. I could not believe the audio quality I heard out of this small cylindrical speaker.

Although these demo units were prototypes and the final version of the HomePod will not ship until December, Apple is still working out the final specs and what it will ultimately be capable of doing before locking in the design and sending it to manufacturing for the official launch.

While it is a quality speaker that also has Siri, for me Siri is less of an issue since I already have Siri in my pocket via an iPhone and on my Apple Watch; in my case, Siri on a speaker is redundant. That is not to say I would not use it if it were handy, but I already get instant access to Siri when I need it via the iPhone or Apple Watch.

I know I may be the odd one out on this, as my colleague Bob O’Donnell suggests it will be Siri that makes or breaks the HomePod’s success.

But Apple made it clear to me that the HomePod’s reason to exist is as a music speaker; it represents a corporate culture that believes music is a central tenet of our digital experience. While Siri is important and is used to manage the music on the HomePod, Siri’s other capabilities are an extended feature, since Siri can connect to HomeKit devices. But as I said, I already have that feature via my iPhone and Apple Watch and, in most cases, they are more convenient to use for this purpose.

Given its price, it is actually very competitive with home speakers such as the ones from Bose and Sonos. But I would want one other major feature added to the HomePod: the ability to connect to my Apple TV, or even directly to my TV via the optical audio port found on all new televisions.

When I asked Apple officials about this, they said that, at this time, tying the HomePod to the Apple TV is difficult since current Apple TVs were not designed to connect to external speakers via cable or even wirelessly. However, they are seriously looking at the idea.

One of the big problems with most new HDTVs today is that, because of their thin designs, the speakers and audio systems on most are not that great. That is why high-quality external TV speakers and sound bars are in high demand. While prices for these external TV speakers can be as low as $149, the sweet spot for relatively good speakers is in the $350 to $500 range, although some larger audio speakers for the TV can cost as much as $2,000.

One way Apple could make the new HomePod connect to the TV would be to include a port that supports an optical audio cable and let it work directly with the TV. That would allow the HomePod to work as a high-quality speaker for the TV too and give people even more of an incentive to buy it. Keep in mind that the HomePod is a stationary device. Yes, it is small enough to move around, but I suspect most people would place it in one location and leave it there. In that case, it would be close to the TV and, while using an optical cable is not as optimal as a wireless solution, it would work.

But I see the HomePod as eventually being much more than a powerful speaker and home assistant. Apple said this speaker uses their A8 chip. When Apple announced the A8 back in 2014, they positioned it as a desktop PC-class processor.

If this were just a speaker and home assistant, that processor would be overkill. I sense Apple has a much greater vision for this device and could do a lot of innovative things with it, such as turning it into an actual set-top box or a computer-like extension of the Mac or iOS ecosystem. The HomePod is capable of hosting its own operating system, just as the Apple TV and the iPhone itself did even before they evolved from devices into platforms. Think of this as a Trojan horse for Apple to create an even more powerful platform and endpoint that connects to their ever-expanding ecosystem of products and services.

This is a product to really keep your eye on. While most see it as a direct competitor to Google Home and Amazon’s Echo, it is much more than these two products are positioned to do today and in the future, given their current designs. With Apple making the core of the HomePod a serious computing device, it will be interesting to see how Apple evolves this product over the next few years.

My Takeaways from AWE (Augmented World Expo)

Two weeks ago, I spent two days at Augmented World Expo (AWE), held at the Santa Clara Convention Center just across from Levi’s Stadium. As I sat through many sessions and walked the show floor, I observed a lot of hype along with some really cool demos.

The VR and AR crowd at this show is very bullish on AR and VR, as they should be. These technologies represent the way we will eventually interact and immerse ourselves in data, information, and entertainment, and how we will learn in the future. Here is a link to AWE CEO Ori Inbar’s keynote and his optimistic view of VR and AR.

My own view is that the AR/VR crowd is a bit too optimistic about the adoption cycles of this technology at all levels. While VR and even AR goggles and glasses are making good progress in terms of function and size, I just don’t see people outside of enterprise vertical markets adopting them in the next five years, as this crowd thinks will happen.

My first takeaway is that with VR, the only real market for this technology outside of gaming and very high-end theatrical entertainment will be the enterprise. I talked with many individuals at the show who were there to take a serious look at VR solutions for use in specialized training programs and other enterprise applications, and they very much want to find VR products that meet their current needs. I am not sure if they found their solutions at this show, but major enterprise buyers showed up to see what was available, and you can tell they are in their due diligence phase now.

Although many products at the show had either a dedicated VR or AR focus, the big message of this year’s AWE is that mixed reality products and solutions will more likely power these technologies and drive them to a broader audience.

Microsoft’s booth, which demoed HoloLens, was always packed, with some people standing in line for as long as 90 minutes to get a chance to experience it. Lorraine Bardeen, GM of Windows and HoloLens Experiences at Microsoft, gave the opening keynote and showed a great example of mixed reality. However, her focus was really on an enterprise example, suggesting Microsoft sees HoloLens being adopted by the enterprise well before consumers jump in to use the technology. Here is the video she showed in her keynote.

My second takeaway is that for VR and AR to gain any ground in the future, they need a lot of tools and creativity-based solutions in both hardware and software. The good news is that at AWE there were a lot of these types of tools, along with specialized 3D and creative SDKs, that will become important building blocks for the future of VR and AR.

As for goggles and glasses that show promise, Kopin showed a new, much smaller, brighter video screen for glasses and goggles that could bring AR to new head-mounted displays in the future.

And ODG’s new AR glasses are ones to watch, even though they are very expensive now. Their design may be the closest to something that might be acceptable to enterprise users now and consumers eventually.

Also, Meta’s glasses are notable, as they won the best of show award, and while I am not sure how consumer-friendly their design is, their approach to a mixed reality solution is one to follow.

I also see potential in Intel’s Project Alloy, a product I got to test a few weeks back; they too have a solid mixed reality solution in the works.

However, I still maintain that goggles- or glasses-based VR, and even mixed reality as it is designed today, will mostly be interesting to the enterprise and vertical markets for the foreseeable future. But I do understand this crowd’s bias. This group understands the groundbreaking immersive experience one gets from AR and VR viewed through goggles or glasses and wants to try to get that experience to a broad audience as soon as possible. But cost, complexity of setup, and ease of use are why the greatest interest will initially be in vertical markets. I have not seen anything in this area that could even remotely be taken to a broad consumer audience, given current pricing and the lack of a killer app for consumers.

What surprised me a bit is that nobody in this crowd discussed the idea of delivering AR via a smartphone. Even with the success of Pokemon Go, this crowd almost exclusively favors AR and VR solutions that deliver apps and programs via some sort of goggles or glasses.

They avoided the elephant in the room: the potential of Apple doing an AR product around the iPhone 8, or whatever they will call the new iPhone, and its impact on mixed reality. I sincerely doubt Apple has any goggles- or glasses-based products ready for prime time anytime soon; if they enter the AR field, they will instead do it on a smartphone platform first.

What is important here is that, while AWE and the goggles-and-glasses crowd have such a hard focus on goggles-based AR and VR, Apple could actually use the iPhone to bring AR to the masses, cement its role as a leader in AR, and influence the direction all AR and VR goes in the future. I do believe Apple will eventually have goggles or glasses of its own, but this would come as an evolution of an AR platform that grows into more of a mixed reality platform over time, as the technology needed to advance it becomes more available.

I find AWE to be an important show since each year I go I see how progress in AR and VR has evolved and get a sense of where we are in the adoption cycle for this new technology. But like last year, I still came away with the belief that enterprise, gaming, and high-end entertainment will be the first audiences for goggles- and glasses-based AR and VR, and that consumer versions are still well into the future. I am just not convinced we are anywhere close to having the technology to create the types of AR or VR glasses that could pass muster with consumers anytime soon, glasses that would not make them look like geeks and that are really usable in day-to-day activities unless they are focused on specialized interests. That is why AR and VR goggles will stay in niche markets for at least five years and perhaps even more.

Why Apple Introduced AR Now

On March 1, 2017, I wrote a piece in Techpinions titled “How Apple Might Deliver AR on the iPhone.”

In that article I stated:

If Apple decides to bring AR to iPhones soon, I believe Apple’s initial move into AR will be at the platform level and delivered on some next generation iPhones. This is just speculation on my part but it is highly plausible Apple tackles the AR opportunity by creating a special AR SDK kit for iOS that takes full advantage of the two cameras in the iPhone 7 Plus and, most likely, will be in some new iPhone models they release in the fall. There are additional rumors Apple has a special 3D camera coming in some high-end models. If that is true, this camera may also play a key role for user-created AR content on this special AR platform.

By creating an iPhone that supports a special AR SDK, Apple could be well positioned to expand the idea of AR-based apps and features to millions of users almost overnight. Like other SDKs of the past, first generation AR apps could be pretty straightforward and, like Pokemon Go, allow a person to just place virtual objects or specialized information on top of a live image. Imagine going into a museum and pointing the iPhone at a woolly mammoth and seeing information about this animal on your screen. Or, if you are in NYC and have the Empire State Building in your view, you point the camera at it and see data about its dimensions or info on its history.

It could utilize the cameras in innovative ways for anyone to create specialized AR content of their own. Over time, and with a powerful AR SDK kit to work with, developers could innovate on this special platform and create AR content we can’t even imagine at the moment.

Knowing Apple well, it was not much of a stretch to predict how Apple could get into AR. They usually start at a platform level and then create an SDK that allows developers to go crazy creating innovative apps that help define the focus of Apple’s platform strategy.

Although my comments were pretty accurate about how Apple could get into AR, I had assumed Apple might wait and introduce it on the new iPhone in the fall, taking special advantage of the cameras and maybe even a 3D-like camera, either integrated or as an accessory.

But at the same time, I told my staff that if Apple were smart, they would introduce it at WWDC and help their developers get a head start on helping Apple define AR for mobile by the time a new iPhone comes out in the fall.

I had the chance to spend some time with Tim Cook after the event and I asked him why Apple introduced their ARKit SDK now and not in the fall.

He stated that, given their platform approach, Apple believed their AR program should run on most iPhones and iPads and not be tied to a specific new design if they wanted to move the AR market forward fast.

It is clear from talking to Tim Cook that Apple has a really well-baked AR strategy in place and, more likely, an even grander vision for AR in the future.

When you think about it, this strategic move will most likely allow Apple to become the major leader in mobile AR almost overnight, as Cook said in his keynote and as I wrote in the March 1 column on this subject. Although Google has its Tango AR project and SDK out, the fragmentation of Android and the need to create dedicated Tango phones to run Tango AR will make it hard for the Android crowd to compete at this level with Apple.

It was also important to use the iPad as the vehicle to introduce the AR SDK program and give people in multiple markets a look at how it would work on both small and large screens. Cook told me he believes there will be serious interest from the enterprise and verticals in AR apps, and that the iPad would most likely be the device optimized for the business audience. You can especially see how interior decorators, home furnishing stores, real estate firms, and other similar companies could show off how a piece of furniture might look in an actual home they are selling or about to decorate.

AR will be a huge differentiator for Apple on both the iPhone and iPad and will be another one of Apple’s sticky programs that help Apple expand their base of users and tie them to their ecosystem. Because Apple announced so many things during the keynote, I am not sure the media or the public really understand how big a deal the ARKit SDK program will be for Apple. I imagine it will eventually become a key building block for them to extend their UI prowess to other devices, such as goggles or glasses, when and if the technology is available to make them acceptable to a mass audience.

Apple’s move into AR will be the thing that brings AR to the masses much sooner than I had anticipated, and it places them in a powerful position to lead the AR charge to a very broad market fast.

How the Maker Movement Makes Creative Dreams Come True

For the fourth year in a row, I took the train from San Jose to the San Mateo, CA Event Center in mid-May, where the granddaddy of Maker Faires took place. Over 125,000 people trekked to the show to check out all types of products, maker ideas, and related services and to attend various sessions designed to help kids and adults alike become makers.

The Maker Movement started out small some 12 years ago and had more of a tech focus, driven by kids’ interest in things like robotics, electrical circuitry, and making their own electronic gadgets, such as a PC, or creating things like motors to drive all types of devices: miniature cars, small trains, robots, etc.

But over time, and thanks to Make: magazine and Maker Faires around the world, the Maker Movement has gained great steam, and its emphasis on getting people to make things spans everything from beekeeping, quilting, and hydroponics to full-blown make-it-yourself electronic kits and tools, 3D printers, wood and metal etching and shaping tools, robots, drones, mechanical engines, and much more.

Maker Faires have drawn major attention from many companies, such as Intel, Microsoft, Google, Avnet, Cognizant, Kickstarter, IBM, Oracle, and dozens of others. They understand that many of their future employees may come from the ranks of kids coming to Maker Faires today who catch the bug and could eventually become tomorrow’s scientists, electrical engineers, coders, and people who can make things and get things done.

I first wrote about Maker Faires in 2014, where I explained why the Maker Movement is important to America’s future.

After attending last year’s Maker Faire, I wrote about why Maker Faires are so important for our kids and stated that “The Maker Faires’ true importance lies in its focus on getting kids interested in making things. Over the last few years, I have written multiple pieces on STEM focusing on how companies around the world are backing STEM-based programs. All of them see how important these disciplines will be in the future. Still more germane to them is the real concern that if we cannot get kids trained in the sciences, we will not have the engineers and scientists to run our companies in the future.”
http://time.com/4344680/maker-faires/

Although the Maker Faire delights attendees of all ages, the greatest enthusiasm and joy can really be seen on the faces of the kids at the show. As they go from booth to booth and area to area to see all of the exhibits, models, and electronic tools and kits they can use to make their own creative dreams come true, their smiling faces and excitement are contagious.

While the show attracts a lot of boys, I also saw many girls at this year’s show, and their attendance is rising year over year as the show strives to be highly inclusive and attract people of all ages, genders, and ethnicities.

At this year’s show, crowds went to see swimming drones, a bunny robot, a robotic giraffe, and an all-electric Porsche 911, and the Microsoft coding booth was packed with kids checking out new ways to learn to code. This year’s show also had a VR slant: Microsoft’s booth had a demo of HoloLens, and HTC had a small tent where everyone could see how HTC’s Vive VR goggles worked. Also, Google’s soldering booth, where kids can learn to solder electronic connections, is always a big hit at the Faire. Another of the hottest areas was the drone races.

What is really interesting about the Maker Faire is that technology is not presented as math and science per se but is shown in highly entertaining ways that channel the underlying role science and technology play in the creation of all types of products, devices, and related services.

While the Maker Faire itself is fun and educational, its reason to exist is extremely important. For many of these kids, attending the Maker Faire introduces them to Science, Technology, Engineering, and Math (STEM) and, in many cases, the Arts (STEAM). These skills and disciplines are important to America’s growth, as technology will have a dramatic impact on all types of industries and jobs in our future. Millions of today’s youth will need many of these STEM and STEM-related skills in order to get work, become leaders in our corporations over time, and be the next inventors and innovators of the future. For many of them, the Maker Movement and Maker Faires could be the catalyst that sparks their interest in these skills and steers them toward an educational path that prepares them for many of the jobs of the future.

The next flagship Faire is World Maker Faire in New York, September 23 & 24, 2017, at the New York Hall of Science.
For a list of other Maker Faires around the US and the world, check it out here.

Tech Adoption and Senior Citizens

I was looking at a recent Pew Research Center report about tech adoption by seniors. It stated that 40% of American adults ages 65+ now own a smartphone. At the same time, more than two-thirds of seniors use the internet, a 55-percentage-point increase from 2000. And as the chart below shows, 51% now have some form of broadband in their homes.

The report also states:

  • Roughly one-third of older internet users say they have little to no confidence in their ability to use electronic devices to perform online tasks.
  • About half of seniors say they usually need someone else to set up a new electronic device for them or show them how to use it.

Keep in mind, the population of seniors is on the rise in the U.S. The Census Bureau projects that people 65+ will rise from 15% to 22% of the population by 2050.

The report also suggests seniors still report feeling disconnected from the internet and digital culture. In essence, digital technology moves so fast that it is hard for them to keep up with new advancements, especially when something is not of any real interest to them.

But, as more aspects of daily life become dependent on technology, particularly health care, seniors’ adoption of new technologies will become increasingly important.

The chart above also shows about 32% of seniors have tablets and 34% use social media. As a professional researcher, I have no doubt the Pew survey results are solid and these numbers are pretty accurate.

But, in one sense, it surprises me that some of these technologies do not have even greater uptake among seniors, since internet connections and the devices needed to access the internet have become as fundamental to daily activities today as the telephone was to them in their earlier lives.

There is one chart in this survey that does show a much higher adoption rate: among seniors who are more affluent and younger.

Many of us in our 60s actually grew up in the PC age and, although we now carry the senior designation, we were part of the PC revolution. When the internet came on the scene, some of us were in our 40s. Most of us had to use a PC during our younger years, and using a PC, tablet, or smartphone is second nature.

However, to the current generation that drives tech and the creation of tech products, seniors are so far out of their demographic that they hardly acknowledge we exist. I have very seldom sat in a marketing meeting at any of the major tech companies, or even some smaller startups, and heard them say, “We also need to make sure these products appeal to seniors.”

The irony is many seniors have some of the largest pocketbooks and are willing to buy new tech if it really meets their needs. They are also big spenders on their children and grandchildren. While we look at current products and services with an eye toward the younger members of our families, we just don’t see many products designed with seniors in mind, other than dedicated devices such as the very easy-to-use Jitterbug smartphone created by Arlene Harris, the wife of Martin Cooper, who invented the cell phone.

Advancements in things like AR and VR could become very relevant to seniors. Look at VR for travel, for example. This demographic could be its largest audience because, as we age and travel becomes more of a challenge, VR travel could whisk us away to far-off lands without our ever leaving home. And AR could deliver a plethora of related information layered on top of real-world objects, which could improve our ability to see and understand the world around us as our physical and cognitive skills change with age.

I admit I am a bit more sensitive to what I call the Senior Tech Challenge since I hit my mid-60s recently. While I am in relatively good health and as mentally strong as ever, I know full well I can’t stop the aging process and, as a techie, I understand tech-related products could be very helpful to me, even as a senior. I hope the younger generation creating the next big things in tech will include us seniors in their product designs and services and, where possible, even create new things just for us to help make our later years more fun and productive.

Interest in VR is Minimal

Although I am a big fan of VR and, more specifically, its potential for both business and consumers, my belief is that VR as an immersive experience is in its very early stages. I don’t expect VR to be significant outside of key verticals and gaming for at least five to seven years. While I do see VR-based entertainment experiences happening sooner, since Sony and other Hollywood studios hope to create VR theaters and deliver VR movies by 2018, the cost of these headsets and their need for a powerful PC of some type to drive them will keep them out of the hands of mainstream consumers for the foreseeable future.

The two charts below, which took the temperature of the consumer market in Q1 of this year, pretty much confirm that interest in VR today is minimal, due mostly to either lack of interest or the prices of current VR solutions.

In a recent Tech.pinions column for subscribers, I also pointed out some major concerns I have about VR, including the fact that a person could get so caught up in virtual worlds that they have trouble dealing with the real world, as well as issues with eye strain, motion sickness, and antisocial behavior.

This does not mean companies working on VR will not push hard to try to get it into the mainstream much sooner.

Google has its Daydream project, Samsung has Gear VR, and both are pursuing VR by using mobile devices as their delivery vehicle. In that sense, mobile as a VR platform has the potential to deliver what I call “introductory VR” to a broad audience much sooner. But the problem with a mobile smartphone approach, even with headsets like Gear VR, is that this VR experience is nowhere near as immersive as the one you get from PC or game console-based versions like Oculus, HTC Vive, or Sony PlayStation VR. And even with Microsoft’s HoloLens moving to a goggles-only solution, cost will still be a factor in any rapid uptake of this platform.

There is one caveat, though. I have recently seen a VR system based on a PC platform that shows real promise. It is already in use in education, but it has real potential to bring VR to more consumers in the near future. Full disclosure: we are working with this company, zSpace.com, so I will refrain from commenting on it, but check out the link to their site, which will give you an idea of how VR could gain a stronger foothold in the PC space much sooner than other VR programs that use dedicated VR goggles.

What I am seeing happen in the VR, AR, and mixed reality space is a shift to an AR focus for multiple reasons. The big one is that AR can be delivered on a mobile smartphone platform and be used to extend the user interface and deliver virtual objects on top of real-world objects. Pokemon Go introduced consumers to the idea of AR on a smartphone and laid important groundwork for consumer expectations of AR apps.

Because of how Pokemon Go used the smartphone as a platform to deliver AR, all of the major smartphone platforms are moving fast to make AR one of their offerings, giving their customers a richer mobile computing experience.

At Google’s recent developer conference, Google showed an updated version of its Tango AR platform for mobile and introduced Google Lens, which adds AR-like functionality to photos and other content.

There are many signs pointing to the possibility Apple will design the next iPhone to take advantage of their own AR strategy going forward. Given Apple’s history of being a leader in any new area they concentrate on, Apple could end up being a very significant company that brings AR to the mainstream relatively fast.

The other reason AR could gain traction with consumers well before VR is that, with the right SDK and platform, software developers could crank out thousands of AR apps for smartphones relatively quickly. In Apple’s case, I could see them seeding five or six developers by the time the new iPhone is launched in the fall to show off Apple’s AR approach, followed quickly by an SDK that would allow developers to create innovative AR apps to help drive demand and sales of future iPhones.

Google did this recently with their updated SDK for Daydream and added Google Lens as a feature that will play well into Google’s own approach to advancing AR on Android phones.

While I am a big fan of VR, I see AR dominating this new area of merging virtual content with real-world content as the first big step in next-generation user interfaces and, in essence, serving as a precursor to the day when VR and true mixed-reality apps, via goggles or even a dedicated PC, make their way to a broader mass-market audience.

How to Think about Windows 10 S

A few weeks ago, I went to NYC to be at the Windows 10 S launch. Leading up to the event, there had been many rumors floating around about a potential new OS from Microsoft on the horizon aimed at education that would take on Google’s Chrome OS. Various rumors suggested it would be called Windows Cloud or be a “skinny” version of Windows.

Now that Microsoft has unveiled this new OS, we know its official name is Windows 10 S and it is indeed aimed at education markets. However, Microsoft is also seeing interest from some enterprise accounts who like its tighter security. It could be used in deployments where a full version of Windows might be overkill for some workers.

Although this new OS is seen as a lighter version of Windows and could be especially attractive to schools, I believe Windows 10 S is much more important to Microsoft’s future. In fact, Windows 10 S may eventually become the version of Windows used by most people in the coming years. Many people who use Windows 10 now do so because most of the PC apps they use are Windows-based and they need the newest version of Windows available.

But if you talk to many IT users, and especially consumers, Windows 10 is viewed as a very rich OS, but they acknowledge that, in most cases, they probably use less than 30% of its actual power. On the other hand, there are many professionals in graphics, engineering, finance, government, etc. who are power users and, to them, a full-blown version of Windows 10 is important to the work they do day in and day out.

With Windows 10 S, Microsoft introduces a new metaphor for its apps. It has a new type of store that sandboxes apps, and only apps that are vetted can be purchased or downloaded on Windows 10 S. This adds a powerful new level of security compared to the ad hoc access and delivery of the millions of Windows apps on the market today. Only apps in this store can run on Windows 10 S, along with web apps that run in the Edge browser.

Sound familiar? It’s what Apple does with both the Mac and iOS app stores. All of those apps are vetted and sandboxed to increase security and give Apple more control of its ecosystem. The important idea is that, by creating a sandboxed store with only vetted apps, Microsoft can take more control of its app ecosystem and deliver a more secure environment to those who use Windows 10 S. This is one reason Windows 10 S could be more attractive to consumers and many IT customers, as well as education.

When I think of Windows 10 S, it is easier for me to understand this new OS’ goal by thinking of it in terms of macOS and iOS. Today, macOS is really targeted at Apple’s power users while iOS is targeted at the masses.

Of course, Apple is still bullish on the Mac and macOS continues to get richer in both features and functions. However, iOS-based devices dwarf Mac sales by a scale of 13X per quarter. In that sense, Apple has already transitioned their core market to iOS and, with the new iPad Pro with keyboard, they are giving users a better option for using iPads for productivity.

Of course, Windows-based PCs are still selling well, though ASPs on Windows hardware have continued to decrease over the last five years. But even with lower ASPs, the PC market continues to shrink, and we estimate that in calendar 2017 vendors will sell only about 275 million PCs, compared to Apple selling at least 300+ million iOS devices this year.

To be fair, most of Apple’s iOS device sales come from the iPhone; iPads represent only about 20% of overall iOS sales. However, Apple sees iOS as its most important OS and is banking on it to drive future sales of its smartphones and tablets in every market it competes in around the world.

One key reason for iOS’ existence is that, while it is a powerful OS in its own right, it was designed for a much smaller form factor than a PC. By starting with a small form factor and then introducing larger hardware with the iPad, Apple could scale this OS up and make it even more powerful for use in larger iPads and perhaps even a small laptop of their own in the future. It also shares much of its core code with macOS.

With Windows 10 S, Microsoft is doing something I recommended to the Windows Mobile group 12 years ago when I worked on that project for them. At the time, there were two distinct OS camps: one focused on just Windows and the other given the charter to create a new mobile OS from scratch. I felt they needed to scale down Windows and use it as the core of Windows Mobile, but that is not what happened. Besides some political squabbles that kept the two groups apart, I was told the OS core of Windows back then could not be scaled down for use as a mobile OS.

But over the last 10 years, I am told, the Windows base OS has become much more portable, making it possible to create a leaner version of Windows that is still powerful enough for the many users who never need the full power of a robust Windows 10.

While the comparison of Windows 10 S to Apple’s iOS is not exactly accurate, it still helps me think about Windows 10 S in terms of its market promise. This is an OS that, with its sandboxed apps, greater security, and stripped-down Windows capabilities, gives Microsoft the type of OS that will be very attractive to education, IT users and, most likely, a larger mass-market audience. (BTW, I wonder if Microsoft realizes the name “10 S” is very close to “iOS.”) And, because it appears to be a more scalable version of Windows, it could be used not only for less expensive PCs but also for new types of Windows hardware, including 2-in-1s, tablets, and maybe even smartphones.

Microsoft also makes it possible for a Windows 10 S owner to upgrade to a full version of Windows 10 for about $50. This is not something Apple could do with iOS, and it will force them to keep macOS and iOS as separate operating systems. But should a Windows 10 S user eventually need more power to handle their workloads, this upgrade path will be very attractive, especially if it is used on a laptop or 2-in-1.

While Windows 10 will dominate the PC landscape for at least the next two to three years, I see Microsoft eventually transitioning its market to Windows 10 S to give users a more secure OS platform and, more importantly to Microsoft, to allow it to take more control of its ecosystem. In the end, this could benefit both Microsoft and its customers in many ways.

I see Windows 10 S as one of Microsoft’s most important new operating systems for the end of this decade and central to their future well into the next.

Should Facebook come under FCC Regulatory Rules in the US for Live Broadcasting?

Last week, Facebook announced they would hire up to 3,000 people to monitor and scrutinize Facebook Live content and other posts that use Facebook to share or broadcast heinous crimes such as the recent live streaming of a murder and a couple of suicides. Facebook has also been a place where people have posted taped incidents of crimes committed and these live editors would be tasked with catching them and making sure they never see the light of day on Facebook.

Facebook has close to 1.8 billion users around the world and we estimate that, at any given time, at least 3-5 million people are live broadcasting some type of event or situation on Facebook Live.

Today, Facebook has about 1,500 live editors. Adding 3,000 more would surely increase the number of eyes and ears keeping watch over live broadcasts and looking for other posts that may contain images or information not allowed under Facebook’s rules and/or not in line with the spirit of Facebook. After all, it was designed as a sharing site for communicating with friends and family. While Facebook understood that what was being shared could be both good and bad, I am not sure they ever anticipated the site being used to share murders, suicides, and posts about hatred and bigotry.

Until they added live video sharing, their algorithms and live editors were looking for key words like “murder,” “hate,” and “kill,” terms that could be literal or figurative. For example, a person might post, “This picture is hilarious and it kills me,” which, even though it uses the word “kills,” is harmless in context.

However, if a post says something like “I just killed a person” or makes a threat like “I am going to kill you,” the AI behind these algorithms and the rules used by live editors should catch it, keep it from public view and, if it is considered a real threat, report it to authorities. Facebook’s AI software is even smart enough to catch images, such as the ISIS beheadings that were posted on Facebook, and keep them from public viewing.
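The distinction between figurative and literal uses of a watch-word can be sketched in a few lines of code. This is a purely hypothetical illustration of the idea, not Facebook’s actual moderation system; the phrase patterns are invented for the example.

```python
import re

# Hypothetical illustration only -- not Facebook's actual moderation logic.
# A post is flagged when a watch-word appears in a first-person or threat
# pattern, while common figurative idioms ("it kills me") pass through.
THREAT_PATTERNS = [
    re.compile(r"\bI (just )?(killed|murdered)\b", re.IGNORECASE),
    re.compile(r"\bI('m| am) going to (kill|hurt)\b", re.IGNORECASE),
]
FIGURATIVE_PATTERNS = [
    re.compile(r"\b(kills me|killing it|dressed to kill)\b", re.IGNORECASE),
]

def flag_post(text: str) -> bool:
    """Return True if the post should be held for human review."""
    if any(p.search(text) for p in FIGURATIVE_PATTERNS):
        return False  # known harmless idiom
    return any(p.search(text) for p in THREAT_PATTERNS)

print(flag_post("This picture is hilarious and it kills me"))  # figurative use
print(flag_post("I am going to kill you"))                     # literal threat
```

Real systems use machine-learned classifiers rather than hand-written patterns, but the core problem is the same: the word alone is not enough, so context must decide.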

But when Facebook introduced live streaming, it created a new type of medium for sharing and a new set of problems and challenges for its AI software and live editors. The virtue of live streaming of events, parties, concerts, etc., is that it is live. At a concert, you want to share that experience live. In a group setting or at a sports game, you also want it to be a shared real-time experience.

At the moment, Facebook is not regulated by any form of government body, even though that type of government intervention has been suggested in countries where freedom of speech is not a right. And in the US, the idea of Facebook having to come under any regulatory agency would be onerous to all. However, I have to believe the FCC, at the very least, is looking at Facebook’s live broadcasting program and trying to determine whether these types of broadcasts need to come under scrutiny and whether its seven-second delay rules should apply to this part of Facebook’s program in the US.

As I understand it, these FCC rules apply to live broadcasts of any programs or content that go over any form of live video distribution using sanctioned bands. However, I am told that, even when live video is distributed through cable networks, under many circumstances the seven-second delay can apply too.

According to Wikipedia, this rule was established in 1952.

“A short delay is often used to prevent profanity, bloopers, violence, or other undesirable material from making it to air, including more mundane problems such as technical malfunctions (i.e. an anchor’s lapel microphone goes dead) or coughing. In this instance, it is often referred to as a seven-second delay or profanity delay.”
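The mechanism behind a profanity delay is simple: content sits in a fixed-length buffer before airing, giving an editor a window in which to dump flagged material. Here is a toy sketch of that idea; real broadcast delay systems work on audio/video frames in dedicated hardware, not strings, so this is illustrative only.

```python
from collections import deque

# Toy sketch of a broadcast "profanity delay": frames sit in a fixed-length
# buffer before airing, giving an editor a window to dump flagged content.
class BroadcastDelay:
    def __init__(self, delay_frames: int):
        self.buffer = deque()
        self.delay_frames = delay_frames

    def ingest(self, frame: str):
        """Accept a live frame; return the frame now old enough to air, if any."""
        self.buffer.append(frame)
        if len(self.buffer) > self.delay_frames:
            return self.buffer.popleft()
        return None

    def dump(self):
        """Editor hits the dump button: everything not yet aired is discarded."""
        self.buffer.clear()

delay = BroadcastDelay(delay_frames=3)
aired = [f for f in (delay.ingest(x) for x in "ABCDE") if f]
# After 5 frames with a 3-frame delay, only the 2 oldest frames have aired;
# the remaining 3 are still in the editor's window and could be dumped.
```

The challenge for a service like Facebook Live is doing this at the scale of millions of simultaneous streams, which is exactly why the live-versus-delayed trade-off matters.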

I have reached out to my FCC contacts for comment but have not heard back from them on this issue yet. I will update this piece if and when they respond. However, as I stated earlier, I do know the FCC and other government agencies in the US are looking very closely at how Facebook and other social networks that communicate content via live streaming will try to keep things like suicides and other violent content out of the public’s view.

More importantly, if Facebook, Twitter, and others cannot solve this problem in real time, I would not be at all surprised if, at some point, a seven-second delay is forced on them to make sure this type of content never sees the light of day on these social networks, at least in the US.

The Windows 10 S Dilemma for PC OEMs

I flew out to New York City for the Microsoft education event earlier this week, as I was extremely interested in the just-introduced Windows 10 S. This new OS is a lighter version of Windows 10 optimized for education and is Microsoft’s answer to Google’s Chrome OS. Microsoft has sandboxed its app store so that only vetted store apps, along with web apps, run on Windows 10 S. You have the option to upgrade to a full version of Windows 10 for $50 but, for education markets, Windows 10 S should work fine.

They also used the event to introduce their first Surface Laptop. This is their first foray into a true laptop, and it has interesting implications for Microsoft and its partners. We’ll get to this product shortly but, first, I want to discuss the dilemma Windows 10 S creates for Microsoft’s OEM partners.

Some vendors of low-cost laptops, like Asus and Acer, which have pushed Chromebooks in their lines for some time, will most likely sell Windows 10 S laptops to both education and consumers and be OK with the slim margins. But for their major partners like Dell, HP, and Lenovo, Windows 10 S is a challenge. Although these three do sell Chromebooks as part of their education offerings, Chromebooks are mostly loss leaders for them. The real money in education is in selling full-priced laptops to teachers and administrators; while they move some Chromebooks in volume, the low margins make it hard to make any real money on them, so they really just use them to get their higher-priced laptops into the schools.

But Windows 10 S, even with its sandboxed apps, is a real Windows machine. It would not surprise me if, when these vendors go calling on educators and teachers, those buyers abandon their interest in pricier Windows 10 models and just opt for a lower-cost Windows 10 S laptop instead. If a Windows 10 S laptop from HP or Lenovo is priced at $189 or even $299, a simple $50 upgrade to a full version of Windows is easy to justify compared to buying the $599-$999 models they would otherwise have purchased.

With Chromebooks, this is not even a consideration. Chromebooks are for the kids. Teachers and administrators use Windows PCs for their lesson plans, running the administration, and managing the kids’ Chromebooks. This dilemma for OEMs is a real one and it will be interesting to see how much they are willing to back Windows 10 S, given both the margin challenge and the potential threat it brings to their education PC business.

They also introduced the Surface Laptop, which looks very much like a MacBook Air and is priced about the same at entry level, around $999. Until now, Microsoft’s hardware business was in the Surface tablet 2-in-1s; even the Surface Book is a detachable form of laptop. But the Surface Laptop takes direct aim at products like Dell’s XPS 13, Lenovo’s ThinkPad Yoga, and a couple of HP thin-and-light models sold to businesses and higher-ed students. I talked with multiple OEMs and, although they had heard mumblings about Microsoft doing a laptop, until this week they really did not know Microsoft was going to be a direct competitor.

To say the OEMs were not happy with this move would be an understatement. Microsoft will not do a low-cost Windows 10 S notebook and expects the OEMs to do its hardware bidding in this competitive, thin-margin low end of the market, while Microsoft takes direct aim at its partners’ laptop cash cows.

Although many OEMs were at the Windows 10 S launch, it will be interesting to see how many of them really get behind the low end while now having to do battle with Microsoft at the higher end as well.

To be fair, Microsoft had to do Windows 10 S to try and reclaim some of the market they lost to Chromebooks. But the fact Microsoft is not doing their own low-cost hardware for education to support this new lighter version of Windows 10 is telling.

The next few months will be interesting to watch since Microsoft has already lost the sell-in to education for the next school year, as these products are bought in April and May. On the other hand, the Surface Laptop will come to market by late August or early September, and Microsoft will then be directly competing with its partners for higher-ed back-to-school sales. I would bet this new Microsoft laptop will be hot in this market in the fall.

Microsoft’s hardware partners now have a stronger love/hate relationship with Microsoft because of these moves. They need Microsoft, yet they struggle with its growing hardware prowess and its new expectations of them. I sense Microsoft’s relationship with its OEM partners will be even more strained in the future, and how they all deal with each other will be even more challenging to watch.

Silicon Valley’s Misguided Motto – Creating Products “Because We Can”

In the late 1990s, I had the privilege of serving as an advisory board member to Xerox PARC’s venture arm. Our charter at the time was to go into Xerox PARC, look at what their many scientists were creating, and see if any of it had potential for commercial applications. This was in the early days of the internet, and Xerox PARC had been developing both new software and hardware technologies the parent company wanted to either license or sell to other companies.

If you know Silicon Valley’s history, you know Xerox PARC either dreamed up or created some of the most important technologies of our time, such as Ethernet, the original vision of the laptop and tablet, and the mouse, along with other core PC technologies like graphical user interfaces and WYSIWYG desktop publishing systems. However, in most cases, Xerox never reaped a serious financial reward from these inventions. The most famous example came from a visit Steve Jobs made to PARC in 1979, where he saw a prototype of a graphical user interface. Jobs went to school on this and created his own GUI for the first Mac and, as they say, the rest is history. However, Xerox never received licensing fees or any other compensation for Apple’s GUI.

During my four years working with their venture arm, I came to understand why many of the technologies created during those days never got to a commercial market. I don’t have time to go into detail, but there was a lot of politics, infighting, licensing model disagreements, patent sharing issues, etc. that pretty much kept Xerox from ever receiving the kind of compensation it should have for the technologies it created. Most of what did achieve commercial success came about because, as in the case of Ethernet, its creator inside Xerox PARC, Bob Metcalfe, left to start a separate company. He founded 3Com and used Ethernet as the core technology of a powerful networking communications company.

But there was another interesting lesson I learned from my time inside Xerox PARC. Often, when an engineer showed me a product he hoped to commercialize, I encountered something that may have been interesting but for which I could not see an actual need. One great example was a two-handed mouse that was difficult to use and that, on the surface, I could not see anyone adopting. When I asked the person who created it why he did, his basic answer was: because he could.

Over the years, I have often seen products behind the scenes that may have been interesting but had no real market potential. When I asked their creators why they built them, they often gave very weak answers that, in a lot of cases, came down to, “because I could.”

Last week, the media ran many stories about a company called Juicero. Juicero created a $399 (initially priced at $700) device that squeezes juice from pouches, doing away with the hassle of cleaning a juicer and making it simple to have fresh juice drinks on demand. But when some of their investors got their machines, they discovered that if they hand-squeezed the pouches, the juice would come out on its own. In essence, the Juicero machine was unnecessary for squeezing the juice from the pouch. The problem is that Juicero’s reason to exist was engineered around the device itself; the cost of the device was a major focus of their funding. Yes, recurring revenue from selling juice pouches was also part of the business plan, but the capital expense of the device was a key part of the business model.

I don’t know how the Juicero story will end or even if it still has potential, given the device itself is not really needed to get the juicing results desired.

Juicero, too, follows the “because we can” model: the total solution offered has major flaws and, if you can squeeze the juice out by hand, one has to ask why they built a device in the first place.

The consumer products and consumer food industries do serious research and extensive studies on products before they ever bring them to market. They never create a product “because they can.” They not only crunch the total-available-market numbers but also do serious analysis to understand the numbers and the trends behind them. Some big tech companies like Google, Amazon, and Facebook do similar research, but smaller companies skip this step for many reasons, mostly related to cost and a lack of understanding of the role research needs to play in creating a marketable product.

The “because we can” motto I often see in Silicon Valley needs to be retired permanently. The tech market is too competitive these days; unless a product breaks new ground or offers significantly different value than products already in the market, and unless research shows it adds new value or solves a specific problem, it will most likely fail. Otherwise, things like Juicero just won’t make it in the long run.

AR, not Voice, is the Next Major Platform for Innovation

I have had a chance to work on speech and voice projects since I first interacted with Kai-Fu Lee at Apple, who, in the early 1990s, was brought in to research voice and speech recognition for what would have been used in Apple’s Newton. Not long after it became clear the Newton did not have any real legs, Microsoft lured him away from Apple to head up Microsoft’s first serious work on voice and speech recognition.

In the 25 years or so since that time, voice and speech recognition have evolved a great deal and are now used in all types of applications. With artificial intelligence applied to voice, Google, Apple, Microsoft, Amazon, and others are now pushing their voice solutions as a platform and a new user interface that helps them interact with customers and provide new types of apps and services.

Recently, Amazon opened up the Alexa voice interface to hardware and software vendors so they can add a voice UI with direct links to Amazon’s apps and services. Apple’s Siri, Google Now, and Microsoft’s Cortana are also used as voice UIs that work with third-party products and are tied back to each company’s services or dedicated applications. In this sense, voice has become an important new platform for companies to innovate on, and AI-driven voice is a viable platform for building new apps and services.

Although AI and voice as a platform will continue to be important, I sense a real shift: AR will become the most significant new platform for innovation relatively soon.

Pokemon Go introduced AR to a broad consumer audience and the tech world took note. Once companies started to put their strategic thinking caps on, they immediately realized that integrating virtual images, video, and information on top of real-world settings has a lot of potential.

To date, most AR is in games like Pokemon Go and apps like Snapchat. But the idea of AR becoming an actual platform within an OS, one that could drive a host of innovative apps and services, is just around the corner.
AR will most likely develop on smartphones first and eventually extend to some type of glasses or goggles as an extension of the smartphone’s user interface. But for the next few years, AR will be introduced and integrated into the smartphone experience, making it possible to blend virtual worlds into the real world.

Google already has an Android platform for AR called Tango, and Lenovo has brought the first Tango phone to market. However, the Tango platform is half-baked, and it is not clear to me how serious Google is about AR, given the first generation of Tango smartphones on the market today. Google still seems to be pushing harder into VR with Daydream, and Tango seems to be more of an experiment. But that might change later this year if Apple comes out with its own AR platform, something a lot of people believe Apple has up its sleeve with the next-generation iPhone. We should get an AR update from Google at their I/O developer conference next month.

Given the way Apple attacks markets with new software and uses it to sell new hardware, it makes me think Apple could actually be one of the companies that brings AR to the mainstream market.

Here is the scenario I believe could evolve for Apple to make AR a household name.

First, I would expect Apple to add specific new hardware features to a next-generation iPhone. These could include extra cameras, a 360-degree capture feature, new types of proximity sensors, a more sensitive touchscreen for toggling between virtual and real worlds and, perhaps, new audio features, such as some type of surround sound that could make a virtual scene come alive.

Second, they would create a dedicated AR software layer that sits on top of iOS and serves as an extended platform tied specifically to any new hardware features. That would be followed by a special SDK for developers to create new and innovative AR apps for a new iPhone.

If Apple does add AR to new iPhones, I suspect they would pre-seed five or six key developers with the AR SDK during the summer so that, when they launch the new iPhone in September, they can show off those apps along with homegrown ones Apple would create itself. This is pretty much the roadmap Apple follows when introducing any major new device or significant new feature for the iPad or iPhone, and it is very likely they would follow it should they use the new iPhone to introduce AR this year.

Given the secrecy of Apple, I doubt we will hear anything about AR at Apple’s Worldwide Developers conference in San Jose in early June.

But what is most important, should Apple enter the AR market, is that they would provide a powerful new AR platform developers can innovate around, one that would serve as a vehicle to bring AR to the mainstream.
This would throw down a major challenge to Google, Samsung, Microsoft, and Amazon to create their own AR platforms, and it would set off the next major platform gold rush, driving new tech growth over the next three to four years.

The other company that could bring AR to the masses quickly is Facebook. At its F8 conference this week, Facebook showed off a new camera feature that will be at the heart of a new AR platform, one that can be used to add virtual objects to its app.

Here is how VR Focus describes the role of the camera in Facebook’s AR platform:

“Facebook is going to use the camera part of the Facebook app to build a new platform for augmented reality by implementing camera effects. Standard effects already used on other apps such as face masks, style transfers etc. will be available from the start. Users will be able to create their own since it will be an open platform. The new AR platform will be launched as open Beta today.

Facebook hopes to take further advantage of developing technologies such as Simultaneous Localisation and Mapping (SLAM) which allows the camera to plot out where an object is in the real world so AR can seem to be placed accurately in the ‘real world’. Additionally, Facebook is working on technology that allows the conversion of 2D still images into 3D representations that can be modified with AR. The object recognition that will be introduced to the app means that the camera can ‘recognise’ the size, depth and location of the object so the object can be manipulated within the AR space.”

What is important to understand is the commonality between Facebook and Apple: the development of an AR platform, an SDK, and the role software developers will play in creating innovative AR apps. Although voice as a platform will continue to grow and be important, it is my sense that AR is really the next major platform, the one we will see the most innovation from in the near future.

Apple’s Secret Project to Monitor Blood Sugar for Diabetics

(Editor’s Note: This story has been updated with further information)

Recently, CNBC broke a story that Apple is supposedly working on some type of blood glucose monitoring system, which I assume would be connected to the iPhone and Apple Watch. It would allow diabetic patients to monitor their blood sugar readings in real time.

The story has gotten a bit of attention since any tool that helps people monitor their blood sugar electronically could be a big plus in managing the condition and keeping them from developing serious complications from this disease.

CNBC’s report says, “The efforts have been going on for at least five years, the people said. Jobs envisioned wearable devices, like smartwatches, being used to monitor important vitals, such as oxygen levels, heart rate and blood glucose. In 2010, Apple quietly acquired a company called Cor, after then-CEO Bob Messerschmidt reportedly sent Jobs a cold email on the topic of sensor technologies for health and wellness. Messerschmidt later joined the Apple Watch team.”

This story has great personal interest for me since I have been a type 2 diabetic for over 25 years. I have worked hard to keep my A1C numbers in check, a measurement that reflects average blood sugar over a three-month period. Non-diabetics have A1C numbers well under 5.0 and, as a diabetic, my safe numbers must be kept in the 6.0-7.0 range. Since the project is secret, the details in the CNBC story only suggest it centers around some type of sensor that could monitor blood sugar.
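For readers unfamiliar with A1C, the percentage maps to an estimated average glucose level via a widely published linear formula from the ADAG study (eAG in mg/dL = 28.7 × A1C − 46.7). This is a standard clinical approximation, not anything specific to Apple’s or Dexcom’s products:

```python
def estimated_average_glucose(a1c_percent: float) -> float:
    """Convert an A1C percentage to estimated average glucose (mg/dL),
    using the ADAG study's linear formula: eAG = 28.7 * A1C - 46.7."""
    return 28.7 * a1c_percent - 46.7

# An A1C of 7.0 corresponds to an estimated average of ~154 mg/dL,
# which is why keeping A1C in the 6.0-7.0 range matters so much.
print(round(estimated_average_glucose(7.0)))
```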

If Apple did something like this, it would not be the first to deliver this type of solution. Dexcom has been one of the pioneers in this field (called Continuous Glucose Monitoring, or CGM) and a leader in it for over three years.

I have used the Dexcom CGM for over a year now and it has changed the management of my diabetes for the better. I wear a sensor on my stomach with two small, hair-like wires that penetrate just below the skin and take blood sugar readings from the interstitial fluid every five minutes. When I first started using it, the readings were within 5-15% of actual glucose levels. But, in the last year, Dexcom has tweaked the software and my readings are now within four or five points of what I would get from a pinprick reading with an external blood testing kit. Sometimes my readings are even identical to the pinprick numbers, showing that Dexcom has made major strides in delivering more accurate readings through their system.
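Dexcom's actual calibration algorithm is proprietary, but the kind of accuracy improvement described above can be illustrated with a toy sketch: fit a simple least-squares linear correction that maps raw interstitial sensor values onto paired fingerstick (blood) references. All numbers below are hypothetical.

```python
from statistics import mean

def fit_linear_calibration(raw, reference):
    """Least-squares slope and offset mapping raw sensor values to mg/dL.

    Illustrative toy model only -- not Dexcom's real algorithm.
    """
    rx, ry = mean(raw), mean(reference)
    slope = (sum((x - rx) * (y - ry) for x, y in zip(raw, reference))
             / sum((x - rx) ** 2 for x in raw))
    offset = ry - slope * rx
    return slope, offset

# Hypothetical paired samples: raw sensor values vs. fingerstick mg/dL.
raw = [110, 150, 200, 240]
fingerstick = [105, 148, 198, 245]

slope, offset = fit_linear_calibration(raw, fingerstick)
calibrated = [slope * x + offset for x in raw]
```

With a fit like this in hand, each subsequent raw reading (arriving every five minutes) would be mapped through `slope * raw + offset` before being displayed, which is one plausible way a vendor could shrink a 5-15% error band through software alone.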

A wireless Bluetooth transmitter sits on top of the sensor, and the sensor reading is sent to the Dexcom app on my iPhone. Dexcom also has an Apple Watch app; to get my current blood sugar reading, all I have to do is glance at my watch. This product has had a major impact on both type 1 and type 2 diabetics, who can use these continuous blood sugar readings to adjust insulin, medicines, or food intake and more accurately keep their blood sugars in a safe range.

Dexcom’s approach is called an invasive CGM because it does have these tiny needles going into the stomach to get blood sugar readings. In fact, many of the other CGM devices now under FDA review are still invasive. However, much work is being done to try to get these blood sugar readings without any invasive technology, instead using light pulses or sensors in, perhaps, a wristband or watch.

However, this is very hard to do and, as of this date, I have seen no versions of this even close to being in a place for the FDA to approve. Various reports have suggested Apple’s version might be one focused on this light sensor approach to getting blood sugar readings but that is highly speculative.

The only downside of this is cost. If a person were to pay for this solution out of pocket, the sensors cost $300 a month and a transmitter that lasts three months costs $250. Thankfully, in my case, my insurance covers 50% of this cost, but even then it is still pricey for me and for many others who have proper medical insurance.

CGMs have become such an important tool in the treatment of diabetes that endocrinologists, the doctors who treat diabetics, now have to pass a special section on the subject in their five-year board exams.

For me and other diabetics, an Apple product that could do this would be great. But the real breakthrough would be if Apple could price it well under what is available now and make this solution affordable to millions of diabetics around the world.

If this story is true, this product would represent an important part of Apple’s commitment to health and wellness connected to its products and would be seen as fulfilling a dream of Steve Jobs: using Apple technology to make people’s lives and health better.

One Big Concern for VR

I recently read an interesting piece in VR Focus about Niantic CEO John Hanke’s concerns about VR. Niantic is the company that gave us Pokemon Go. Speaking at the Mixed Reality Summit in London, Mr. Hanke said he preferred to concentrate on the benefits of AR and was concerned about the seductive nature of virtual reality (VR) as a form of escape from the real world.

As part of his speech, he said: “My thing about VR is I’m afraid it can be too good, in the sense of being an experience that people want to spend a huge amount of time in. I mean I already have concerns about my kids playing too much Minecraft, and that’s a wonderful game.

“We’re human beings and there’s a lot of research out there that shows we’re actually a lot happier when we get exercise, when we go outside – and outside in nature in particular. I think it’s a problem for us as a society if we forgo that and spend all of time in a Ready Player One-style VR universe.”

Given Mr. Hanke’s own research and commitment to AR, his comments could sound like sour grapes. But I think he is on to something very important. If you have played with the Oculus Rift, HTC’s Vive, or PlayStation VR, you know how immersive the VR experience can be. Once inside a virtual world, you become captivated by the experience and, for many people, staying in virtual reality is as much an escape from the real world as it is a gaming experience.

I am in no way against VR and its potential. In fact, its use in vertical markets like medicine, construction, the military, pilot simulation, and even gaming will be a solid growth area for years to come. But I see two general concerns related to VR that people need to be aware of, and self-management will be important in keeping VR’s impact on their lives a healthy one.

The first is the one Mr. Hanke stated in his comments, and it is important. Getting caught up in virtual worlds that keep a person from socializing, mingling, networking, and exercising is antithetical to the human experience. Spending massive amounts of time under the hood of a VR world could warp personalities and push people to be more internally focused, and that can’t be healthy.

The other concern I see with too much VR is medical. One issue is its impact on a person’s eyes. We already know how staring at a PC screen for hours on end can affect one’s vision and eventually force people to use glasses or take other measures to relieve eye strain. But having VR goggles so close to the eyes, with moving visuals that affect not only vision but also motion stability, has its own downsides. To date, I have not seen any technology that could correct the motion sickness associated with some VR apps.

I believe many of these concerns are why Apple is most likely going to push hard into AR in the near future. If they ever support VR in a mixed reality setting, Apple would do it in a way that still makes AR the center of their mixed reality solution. The same goes for Microsoft. Even though they use goggles, their focus is on AR and mixed reality. While VR apps can be created for it, Microsoft is really pushing their developers hard to focus on AR apps, something I expect to be very clear at their Build Conference coming up in mid-May.

I am a fan of VR and see its potential. But the more I play with and test VR products, the more I, like Mr. Hanke of Niantic, also see its pitfalls. This is something the folks in the VR space need to watch closely and, if possible, they should architect their products to take these issues into consideration.

Why the Tech World Needs to be Concerned about North Korea’s Nukes

Like many in the tech industry, I have traveled dozens of times to Japan, Hong Kong, China, Taipei, Singapore, and S. Korea as part of my job. This region of the world has been important to our tech market as it has become our key manufacturing arm, making it possible for most US companies to deliver products at lower prices and thus to grow over the years.

This region has also given us major tech competitors such as Sony, Panasonic, Toshiba, Samsung, and LG, and has helped keep prices down on all products as part of the competitive circle of life.

If you travel to these parts of the world, you are perhaps more aware of the political climate in many of these countries and know that China believes Taiwan belongs to it. And now that Hong Kong is a region of China, it is increasingly subject to China’s political rules and regulations.
But at the moment, the major tinderbox in this region is S. Korea and N. Korea, and it is this area of the world that our tech leaders need to be watching more closely than ever.

S. Korea is vitally important to the tech world for many reasons. Samsung makes not only smartphones but also appliances, computers, hard drives, TVs and, perhaps more importantly, semiconductors and flash memory, providing chips to Apple and other big tech players. Then you have Hyundai, LG, and POSCO, one of the largest steel manufacturers in the world. All told, S. Korea has over a hundred major companies providing products and services all over the world.

Some years back, on a trip to Asia and especially S. Korea, I asked a top tech official what concerned him the most. He told me that one of his greatest fears was a collapse of N. Korea, with millions of N. Koreans rushing over the border and paralyzing S. Korea’s economy. He felt that this rush over the border would destabilize S. Korea for a time, and it could take years to return to a level of normalcy where the tech companies, which run smoothly now, could get back to doing what they do best.

High on the list of disruptions is the fact that many people in N. Korea have relatives in S. Korea and would seek refuge with them, many of whom work in the companies and factories that turn out the products we use. This type of personal, political, and economic disruption could have a major impact on S. Korean companies and their ability to deliver in a timely fashion during the period it would take S. Korean officials to stabilize the region.

But this would be much bigger than causing shipping delays and business disruptions. The human toll could be devastating for the country and we could have a major humanitarian crisis if S. Korea has trouble dealing with this onslaught of N. Koreans flooding their country and needing assistance to stay alive.

Because of this perspective, I have been watching N. Korea’s recent moves to advance its long-range nuclear reach, and I fear this is more than saber rattling given the instability of N. Korea’s leadership. Its leader, Kim Jong Un, may do anything he can to remain in power.

This week, President Trump meets with China’s President Xi Jinping and is reportedly going to tell him that if China won’t help the US solve the N. Korean problem, the US is willing to deal with this nuclear threat on its own.

Now, I don’t profess in the slightest to know what it means to “go it alone” but, as Secretary of State Rex Tillerson has said, “all options are on the table” when it comes to dealing with North Korea.
Given that our current administration is unpredictable and has little experience in dealing with a crisis like the one in N. Korea, anything is possible, including some type of surgical strike to try and take out the nuclear sites. An outside attack like this could cause major panic and start a border rush that would be hard for S. Korea to control.

Some US companies share this concern and are already working on contingency plans should their own businesses be disrupted by what could happen in S. Korea. For small companies seeking alternate sources for components, this is probably a manageable problem given that other major countries in Asia can supply many of the things they now rely on S. Korea for. But for big companies like Apple, who buy chips, screens, and more from Samsung and other firms that source millions of parts specifically from S. Korean companies, this could be a big problem. How they deal with it, should it come up, will be a big test of their sourcing teams’ ability to minimize the impact on product delivery and will determine how well they can manage a crisis of this magnitude.

My hope is that China agrees to work to keep N. Korea from advancing its nuclear program and helps stabilize this ticking time bomb in the Hermit Kingdom. Trump’s meeting with China’s president will be the most important one he has had since coming to office. But should China not agree to help, or in the end be unable to keep N. Korea from becoming more aggressive with its nuclear program, Trump and his team seem determined to find their own way to solve this problem and, as Tillerson has said, “all options are on the table,” so anything could happen.

A good friend of mine, who travels to this area of the world 10-12 times a year and really understands the political side of these countries, says that “the only way to normalize North Korea, which may sound counterintuitive, is to help them find a way to feel more secure. North Korea will start focusing on its prosperity and potentially surrender its nuclear deterrent only when it feels safe and a part of the northeast Asian economy. More sanctions, or more disastrously any military action, will not end well.”

I believe this is a wise observation and I would hope that our current administration has someone inside that understands this option. Although those in the political arena are watching this N. Korean problem closely, our tech companies need to as well since whatever happens over there could eventually impact their companies too.

Why Samsung’s DeX could be a Mobile Game Changer

Back in 1989, after being fed up with lugging around 10-pound portable computers on flights around the world, I began to fantasize about a day when a personal computer could be truly portable. However, I did not imagine a lighter, slimmer clamshell. Instead, my idea was to create what I called a portable brick or oblong device that would house a CPU and have various I/Os to support a connection to a screen, keyboard, printer etc.

In my wildest travel dreams, I had this “brick” being plugged into the back of an airplane seat in front of me where it also housed a screen and the table tray flipped over and gave me a keyboard. At a hotel, I would connect the brick to a TV and a hotel would have a cheap keyboard in the room for me to use these external devices with my brick. As I pondered this idea, I even imagined a day when this brick could be placed into a laptop shell or, at the very least, be tethered to it and serve as its CPU. The screen and keyboard would just be part of the design.

Of course, back in those days, the technology was not even close to being able to deliver this, and the fantasy, especially the one for the airplane, faded from sight. In my own work with PC makers, I pushed for lighter and thinner laptops to meet my goal of carrying a smaller, lighter computer when I traveled. However, I have always kept in mind the idea of some type of small device housing a CPU, an OS, and a UI that could be used with a connected screen and keyboard, as I’ve felt it had potential as an alternative way to deliver a truly portable computing experience.

Over the last few years, companies like Motorola, Asus, and others have brought out prototypes that used a smartphone, tethered to a laptop shell, as the core CPU, OS, and UI. Most recently, an Apple patent emerged that actually shows an iPhone being placed in a MacBook-looking shell, serving as the system’s CPU as well as its trackpad.

But a new product introduced by Samsung this week is one of the best solutions I have seen to date. It lets the Samsung Galaxy S8 and S8+ serve as your PC’s CPU and, using the phone’s OS, UI, and your data, connect to a big screen and keyboard to deliver a personal computing experience.

Called the DeX, it is a round dock with multiple I/Os, including an HDMI port to link it to a TV or monitor, a USB port for printers and various other USB-supported devices, and Bluetooth to connect it to Bluetooth keyboards.

In this scenario, you just pop your S8 or S8+ into the DeX cradle and you have a PC experience ready to go. The OS is Android and the UI is based on Samsung’s modified Android UI, so it is very intuitive and acts just like the smartphone, but now on a big screen and with a keyboard, like a desktop computer. This version of DeX only works with these new Samsung phones, and it does not appear Samsung can make it work with previous versions of its smartphones.

One of the things that has made this idea possible is that mobile processors have become extremely powerful in the last few years. They allow us to use our smartphones and tablets as a serious alternative to a PC in many instances. Of course, a desktop or even a laptop processor can still deliver a better computing experience since it has more real estate to work with. But smartphone and tablet processors now deliver performance that allows us to do most of the tasks we do on a computer, except ones that require “heavy lifting,” like enhanced graphics, image work, and more involved productivity.

But the one thing a laptop or desktop PC has that a smartphone does not is the real estate to deliver a bigger screen and a full keyboard experience.

DeX was created to address this issue, which is especially important if the phone is to be used for “real” productivity. This is where DeX could fit in, especially for mobile users who already do most of their productivity on a smartphone.

Since I have been studying this idea for many years, I actually like the idea behind DeX and believe it has some interesting potential. In fact, I think it might strike a nerve with many mobile workers whose smartphone is at the center of their business and personal lives today.

I know this concept is a bit radical and, for many, a smartphone may not have enough power to deliver a real desktop-like experience that meets their needs. But, when it comes to extending the role of a smartphone in the lives of mobile business users, DeX gives them an important new way to use their phones for productivity. For many, it could be a game changer in how they can add a desktop-like experience through their smartphones.

Liability: the Biggest Problem Facing Autonomous Driving

I have been looking into the issue of liability within the realm of autonomous automobiles and ran across a recent post by one of our contributing writers on his own site. He too has been studying this subject and does an excellent job detailing the key issues to consider when it comes to liability, laying out important questions that need to be answered before self-driving cars can ever get on the streets legally. He also sketches a much later timeline than many involved with autonomous vehicles have suggested and believes this liability issue could be at the heart of the delay.

Here is what Richard Windsor has to say on this subject.

• There are signs that the regulatory environment for autonomous driving is coming together but the thorny issue of liability still needs to be solved before these vehicles make it onto the roads.
• The California Department of Motor Vehicles (DMV) has updated its guidelines for autonomous vehicles, removing a regulation proposed 15 months ago that would have eliminated half of the use case for self-driving cars.
• The California DMV originally proposed a law that required a person licenced to drive the vehicle to be present at all times while the vehicle is in motion.
• If this had become law it would have completely destroyed the promise of freedom for those that can’t drive, the promise of releasing parents who become taxi services for teenage children and any form of automated delivery service.
• On Friday, the California DMV altered this proposal, removing its requirement for a licenced driver to be present and also the requirement for conventional controls to be present.
• I think that this is significant as the California DMV serves as a yardstick for the rest of the US, which is likely to be one of the first places to deploy these vehicles.
• However, while regulators appear to have seen the light on autonomous driving, the issue of liability remains.
• I think that liability is the biggest problem that faces autonomous driving as sending an algorithm to prison is not a practical option.
• When an autonomous vehicle crashes (and they will), the question arises as to who is responsible for the crash.
• There are many potential answers to this question including:
◦ The driver: If the driver was asleep at the time of the incident, can he really be to blame?
◦ The current stance is to solve this problem by pushing all liability onto the driver.
◦ The problem with this is that it completely destroys the use case of a self-driving vehicle.
◦ Any driver who will be held liable for a death that results from software glitches in his vehicle is unlikely to take his hands off the wheel.
◦ The auto maker: This would instantly make the automotive industry one of the riskiest industries on the planet.
◦ Furthermore, many automakers will not create the entire system themselves.
◦ Cameras, silicon chips, software, servo motors and so on will come from third parties and if they fail, they have the potential to cause a crash.
◦ For most automakers writing software means creating hugely detailed specifications against which suppliers bid with the lowest winning.
◦ If part of the AI is written on the cheap and causes the car to crash, whose fault is it?
◦ The supplier: If the liability is to fall upon the supplier, then it is almost certain to claim that the auto maker didn’t install the software or component properly or otherwise made modifications that caused it to fail.
◦ One of the biggest problems when systems get complex is that there is a combinatorial explosion of possible outcomes in any one scenario.
◦ It is clear that in any one fatal incident, the blame game has the potential to go on for years, and there are likely to be fatal incidents on a daily basis (35,092 people died in road vehicle crashes in the US in 2015).

• Liability is the main reason why I continue to think that the technology for autonomous autos will be ready long before the market is ready to receive it.
• Many automakers have set a deadline of 2020 by when they expect to have a commercial offering in the market but I think that it is doubtful that these vehicles will leave the factories at that time.
• This is good news for the automotive industry which is notoriously slow to adapt to and implement new technology as it will have more time to defend its position against the new entrants.
• With things taking much longer than expected to come to fruition, I can see lots of ventures struggling to keep the lights on and being acquired by the larger, slower moving companies.
• I am sticking to my 2030 target for this becoming a real commercial reality.

Why Russia’s Hacking of Yahoo Matters

Over the last week, I have had to respond to many media requests to discuss my take on the story about the two Russian spies and two other Russia-linked hackers who stole 500 million records from Yahoo in 2014. The media questions were mainly focused on the impact on Yahoo and its customers. This is, of course, a major part of the story and, as I stated to the reporters I spoke with, consumers have to be much more aware these days that many nefarious people want their digital IDs, credit cards, etc., and they need to take as many security precautions as they can to protect themselves.

But the bigger story is the direct connection to Russian spies. It implies the Russian government was involved in this crime. This fact cannot be downplayed and, in fact, needs to be trumpeted loud and clear to our government officials, who need to be more aware of, and diligent about, the fact that Russia is very much behind this and other hacking ventures aimed at American citizens.

I have some experience with Russian spies; more specifically, the KGB and I know how ruthless they can be and how they will do anything to achieve their goals. In 1973, I was part of a group that went to Russia to protest the lack of religious freedom in that country. Back then, Leonid Brezhnev was in charge and he ruled Russia with an iron fist. Our group went in as student tourists and we kept the fact we planned to do a protest in Red Square during the May Day Parade very secret.

Somehow, the Russian government found out what we were doing and planted a “spy” in our group whose goal was to track and feed the KGB our plans. By that time, our Russian visas had been granted and, since our group had 12 countries represented, we were allowed to go into Russia via Finland to avoid a potential international incident. But we now know they tracked us every step of the way and made sure we never got to Moscow. We got as far as Kalinin when we were stopped and put under house arrest and then kicked out of the country. During the day and a half it took for us to drive back out of the country, we had to deal with two high ranking KGB officials who were hopping mad at us and made life quite miserable during that time.

That experience taught me a lot about Russian spies, the KGB, and their ultimate goal to get what they want no matter what it takes. In fact, even after we got to Finland, we were tailed by two Russian spies all the way back to Stockholm before we finally shook them off as they were trying to find out who put us up to this “adventure.”

What the Russian spies did to Yahoo I fear is the tip of the iceberg when it comes to Russian-sponsored hacking of tech companies at many levels and, possibly, other government agencies and officials as well. I understand the need for diplomacy and why our government has to tread lightly but the evidence we now have of Russia’s direct involvement in the Yahoo case should give us enough pause to be even more suspicious of the fact that, at the very least, Russia has our tech companies in their sights. Because they can and will protect their spies involved in this case from any form of US prosecution, they will be emboldened to do more of the same in the future.

The American States and the Autonomous Auto Conundrum

I have the privilege of working with a US state (which shall remain nameless) on various issues related to tech in general and in education, broadband policy and, most recently, the need to develop its own autonomous driving laws to govern its roads in the future. All of the states will come under federal laws and policies for autonomous vehicles, but these laws will mostly apply to interstates and other federal roads the US government controls within any state’s borders.

At the moment, US Secretary of Transportation Elaine Chao has the task of crafting the rules that will govern federal roads. With a large team helping her, she is working diligently to get an initial draft of sample rules and regulations in place as soon as possible. They need this draft to gather feedback from the companies and citizens who will want and need to weigh in, so the federal government can create and fine-tune its rules and regulations for autonomous vehicles by the end of the decade.

All of the major auto makers are lobbying both the federal and state governments to be diligent in creating these laws as soon as possible, as these laws will impact how they design cars and what types of extra guidance systems, cameras and sensors they will need to install in order to get on the road legally.

At least two major car companies want to have a fleet of self-driving cars on the streets in major cities by 2020, with a goal of actually selling autonomous vehicles to individual buyers in the 2022-2024 time frame. Uber and Google, if they have their way, would like to start rolling out their versions of these cars even sooner.

During the recent Governors Meeting with President Trump in Washington D.C., all of the governors who attended the dinner with the president also had a side meeting with the Secretary of Transportation, who outlined portions of the government’s thinking on its autonomous vehicle strategy. She said the federal rules should be a starting point that every state can look to adopt when working on its own, more localized rules, which will also need to be in place to complement the federal ones.

But, in talking with various state officials crafting their own laws for autonomous vehicles, I learned that a real conundrum lies in how any final federal rules that eventually become law can be mirrored in state laws while still leaving states the flexibility to create their own rules, even ones that compete with Washington’s. As they pointed out, given things like terrain, weather, and road conditions, as well as rural roads where even their own state laws are murky at times, crafting state regulations that accommodate these quirks while adhering to forthcoming federal guidelines makes their job of developing state laws governing autonomous vehicles even more difficult.

These discussions with the officials I work with, as well as with officials from two other states, point out the incredible amount of work that must take place before we have the proper rules and regulations to govern self-driving vehicles. In fact, when I suggest to these state officials that they will need their own laws in place by 2020, a mere three years from now, they pretty much scoff at the suggestion. They are not even sure the federal rules and regulations can be in place by 2020 to meet the aggressive schedule of automakers who want fleets of self-driving cars in major cities by then.

When I speak to officials at the city level, where cameras and sensors need to be added to things like traffic lights and street signs to communicate with self-driving vehicles approaching intersections as part of an even more localized approach to crash avoidance, the first thing they say is, “Nice idea, but who will pay for that?” Beyond adding new levels of technology and IT infrastructure to city roads and intersections, the burden of making their streets smart enough for self-driving vehicles makes their heads spin.

I think we in tech get too caught up in the big picture of autonomous vehicles, their value and promise, and do not really understand the enormous complexity this adds for those who have to write the rules and regulations governing the technology at the federal, state, and local levels. It is going to take a great deal of planning and foresight from the people making this technology. They will have to understand what is needed and work very hard, as soon as possible, with regulators at all levels if they want their technology in place even in the next five years.

To that end, I really believe Google, Uber, the automakers, and others involved with creating self-driving cars need to come together and create a serious road map, usable at the federal, state, and city levels, that educates officials on how self-driving cars work and what they can and can’t do. They should also lay out what rules of the road are needed to launch even the first generation of self-driving vehicles. I know they are lobbying government officials now and providing some education but, from what I can tell, each of them is doing so only with its own agenda in mind. It would be in their best interests to pool their efforts and come up with a set of first-generation rules that can be applied to “fleet use,” and then more precise guidelines that can be in place by the time these companies want to start selling these cars to private citizens. If they want the help of cities in delivering even more sophisticated crash avoidance cameras and sensors, they need to lay out at least a basic blueprint for how smart cities should develop the proper smart signs, traffic lights, etc., and not make them figure this out for themselves.

While I can see how the technology for self-driving vehicles is advancing rapidly and those behind it would like to get these vehicles on the roads as early as possible, I am convinced it will be the regulatory issues at the federal, state and local levels that will slow this advancement down. If the folks behind this technology do not come together to help these government officials navigate these completely new waters when it comes to autonomous vehicles, it will be the lack of consistent and concise regulations for the rules of the road that will keep self-driving cars from reaching their potential anytime soon.

How Apple might Deliver AR on the iPhone

If you follow the world of tech, you know that two of its new big things are AR and VR. VR got a major push with the introduction of the Oculus Rift, and Oculus became a household name once Facebook bought the company. Since then, HTC’s Vive and Sony’s PlayStation VR have delivered VR headsets and Microsoft is working hard to deliver a mixed reality solution around HoloLens. Samsung and Google are both trying to deliver a VR experience through smartphones aided by low-cost VR headsets. At the moment, VR is mainly aimed at gaming but it has had some buy-in from vertical markets such as travel, entertainment, sports, and even advertising. However, VR is in its very early stages, requires a head-mounted display, and will take many years to reach the broader consumer market.

Last year, a large consumer audience was introduced to AR via Pokemon Go, which allows for characters to be superimposed on real life settings as part of the game. This game gave consumers a small taste of what AR is about and has left them wanting more of this technology on their smartphones.

Google has realized the smartphone is an important vehicle for delivering AR and has created the Tango AR platform that is currently deployed in Lenovo’s Phab 2 Tango phone and will soon be in other smartphones as well. In this case, the AR experience is just delivered on the smartphone and the Tango platform is designed to help develop AR apps for use on Tango-supported phones. The Tango AR platform is still in its early stages and few Tango apps are even available to take advantage of this platform. But it is an important AR program for the Android crowd and needs to be watched closely to see how Google and their partners use this platform to bring AR to more Android phones in the future.

There is a school of thought that says the best way to deliver AR is through some type of glasses or goggles. In the end, a mixed reality set of eyewear will be the best way to make VR and AR deliver on the promise of bringing this technology to the masses. The problem is this eyewear is expensive now and, especially in the case of VR, it has to be powered by a PC with a graphics card to get the full effect.

Tim Cook has repeatedly stated Apple sees AR as the more interesting product at the moment and, while not discounting VR, he seems to suggest that, if Apple does get into this new area of VR and AR, AR will be the technology they will drive first to their platforms.

There have been rumors Apple is working on a set of glasses that could be part of their AR solution but, even if they have this in the works, I just don’t see that coming this year, or even next year, given the costs and lack of AR-based apps to support them.

If Apple decides to bring AR to iPhones soon, I believe Apple’s initial move into AR will be at the platform level and delivered on some next-generation iPhones. This is just speculation on my part but it is highly plausible Apple tackles the AR opportunity by creating a special AR SDK for iOS that takes full advantage of the two cameras in the iPhone 7 Plus and, most likely, of the cameras in some new iPhone models they release in the fall. There are additional rumors Apple has a special 3D camera coming in some high-end models. If that is true, this camera may also play a key role for user-created AR content on this special AR platform.

By creating an iPhone that supports a special AR SDK, Apple could be well positioned to expand the idea of AR-based apps and features to millions of users almost overnight. Like other SDKs of the past, first generation AR apps could be pretty straightforward and, like Pokemon Go, allow a person to just place virtual objects or specialized information on top of a live image. Imagine going into a museum and pointing the iPhone at a woolly mammoth and seeing information about this animal on your screen. Or, if you are in NYC and have the Empire State Building in your view, you point the camera at it and see data about its dimensions or info on its history.

Apple could also utilize the cameras in innovative ways, letting anyone create specialized AR content of their own. Over time, and with a powerful AR SDK to work with, developers could innovate on this special platform and create AR content we can’t even imagine at the moment.

Although I have no clue if Apple will actually do an AR SDK optimized for new iPhones, Tim Cook’s fascination with AR at least suggests AR is very much in their crosshairs. If they do, I expect it to follow Apple’s proven playbook in which they develop innovative new hardware, tie it to an enhanced OS, then create a special SDK for developers to allow them to create innovative AR apps and make the iPhone a window to the world of AR-based functions and applications.

Side note: If Apple does deliver an iPhone optimized for AR, this could start a new super cycle for iPhone replacement and drive huge numbers of iPhone sales for another three years. Many financial analysts believe Apple kickstarted this super cycle of replacement growth with the iPhone 7. But I suspect an AR-based iPhone would pretty much kick this super cycle into high gear and last well into 2020.

Is Mark Zuckerberg becoming the Most Powerful Man in the World?

From what I have read by people who really understand what happened during the last election, almost all suggest Facebook played a role in the final outcome. From last June through the election, my Facebook feeds were heavily populated with political commentary and personal agendas from people I know and follow on both sides of the political coin.

With 1.8 billion users, Facebook has become one of the most powerful communications mediums we have ever seen and it grows every month. While it can deliver real news, it is also a vehicle for fake news and allows people to spout off on their personal beliefs ad nauseam. Consequently, Facebook and its leadership are faced with new levels of managing content I am sure was not in the original business plan.

While they want the site to maintain its freedom of speech profile, Facebook must now combat fake news, hateful speech, and other unethical content that runs rampant on the platform. This means a new level of responsibility has been thrust upon Zuckerberg and his highest officials and, since Mark appears to have the last word on things, it kind of makes him the most powerful person at Facebook and, in some ways, the most powerful person in the world.

Don’t get me wrong. There are many “powerful people” in the world and millions could be impacted by their good or bad actions. However, Facebook/Zuckerberg is the only site/person with 1.8+ billion followers that crosses all geographical, ethnic and political lines. As a communications medium, he and his site have emerged as a means of influencing people in ways we have never seen in our history. Facebook as an influencing platform is where it potentially gets into some very tricky waters that demand more hands-on navigation from Zuckerberg and team.

Last week, my Tech.pinions colleague Jan Dawson laid out these same issues when he commented on Zuckerberg’s 6,000-word manifesto. In his article, he gives commentary on the role Facebook should play in the democratic process. He includes a quote from Zuckerberg:

“Nowhere is this more striking than when he (Zuckerberg) starts talking about participation in the democratic process:

“The second is establishing a new process for citizens worldwide to participate in collective decision-making. Our world is more connected than ever, and we face global problems that span national boundaries. As the largest global community, Facebook can explore examples of how community governance might work at scale.”

That sounds like Zuckerberg envisions a world where Facebook itself becomes the medium through which communities (i.e. cities, states, countries) would govern themselves. Given existing concerns about Facebook’s power to shape media consumption, the idea it would take a direct role in governance (rather than merely allowing people to vote or connect with their elected representatives as it has done in the past) should be terrifying.

It’s arguable that even Facebook’s “Get Out the Vote” efforts have potential to distort the democratic process, given that usage skews younger than the overall population. But at least it doesn’t give Facebook a direct role in the democratic process itself. If I were a local government, I’d be extremely wary of allowing Facebook a deeper role in any of these processes – I think it’s time for both individuals and organizations to push back against Facebook’s enormous power rather than embracing an expansion of it.

But this concern should go beyond just the democratic process and institutions – we should all be thinking about how much power we want Facebook to have over our lives.

On the face of it, this seems great – Facebook would be helping to identify those who would hurt others while they’re still in the planning stages. But it refers to terrorists using private channels, which implies Facebook looking into the contents of private messages shared between users on Facebook’s various platforms. This is yet another area where Facebook’s power is already considerable – not only does it control much of our media consumption but it also hosts and carries much of our communication via four huge platforms: Facebook itself, Messenger, WhatsApp, and Instagram.

Facebook’s instincts here are understandable but also worrying. It finally recognizes its power and the ways in which that power has caused problems in the world but its instinct is to wield that power even more, rather than back off. Given Facebook seems unlikely to police itself, it’s up to its users and other organizations to start to exert pressure for it to do so.”

Jan’s perspective is important but I believe Facebook has to also police itself and be more aggressive in doing so. Putting this much power into the hands of Zuckerberg and a few key members of his team, given its influence, needs to come with extra checks and balances.

They need outside teams of independent ethicists, educators, constitutional scholars, and others with special skills who truly understand democracy as well as fairness to serve as strategic advisors as they craft an ongoing policy to “do no evil” and make sure Facebook’s power and authority are always in check.

I am uncomfortable putting that much worldwide power into the hands of just a few, even if their intentions are good. Facebook needs more outside help to make sure they are advancing the role of democracy and in no way derailing it by allowing things like fake news and hateful speech to influence anyone’s thinking on making the world a better place.

Has Apple delivered Steve Jobs’ Vision of Disrupting TV?

One of the last real public mysteries surrounding Steve Jobs comes from a comment he made to his biographer, Walter Isaacson, telling him about his vision for TV. Here is the passage that caused quite a stir in the tech world when the book was released and is still a topic today:

“He very much wanted to do for television sets what he had done for computers, music players, and phones: make them simple and elegant,” Isaacson wrote. 

Isaacson continued: “‘I’d like to create an integrated television set that is completely easy to use,’ he told me. ‘It would be seamlessly synced with all of your devices and with iCloud.’ No longer would users have to fiddle with complex remotes for DVD players and cable channels. ‘It will have the simplest user interface you could imagine. I finally cracked it.’”

The tech media took Jobs’ comment at face value and started saying Apple was going to make a TV. To be fair, Jobs set this speculation up by using “TV” in the physical sense instead of what I believe was meant to be a metaphorical idea. While there have been some reports that, at some point, Apple looked at doing a TV, my sources say that idea never really got any serious support within the company. A physical TV, to Apple, is just another screen and doing one with their logo on it made no sense at all.

Six years later, I think we can look back at the comment and get a better picture of what Steve Jobs was actually saying and how Apple is delivering on his full vision now.

As one who has followed Apple since 1981, I have become adept at understanding what some call “Apple Speak”. This loosely means I try and look past what Apple actually says and to what is either behind the comment or what Apple really means from what is always a measured public statement.

To understand what Jobs was likely saying and how it has shaped Apple’s overall TV strategy, one has to realize that, ultimately, Apple is a software and UI company first and a hardware company second. Don’t get me wrong, hardware is critical to Apple but, inside the company, it is seen as just a vehicle for delivering their software, UI, and services. When the iPhone was introduced, Apple SVP of Marketing Phil Schiller showed me the original iPhone before the launch. He put it on the table, turned off, and asked me what I saw. I said I saw a block of metal with a glass screen. He then told me it was a blank piece of glass for Apple to deliver its exciting new software.

I remember that conversation as if it was yesterday since it has helped me understand Apple much better over the years and has shaped my thinking and comments about Apple since 2007. Schiller’s emphasis on the idea the iPhone was a blank screen or canvas for Apple to paint on is at the heart of Apple’s real reason to exist. Jobs understood that from the time he introduced the Mac and carried it over to every product Apple has introduced since then.

The second thing to understand is all of Apple’s software innovations are built around a platform of an OS, a UI, and a set of services then delivered on “blank screens” such as a PC, tablet, phone, or even a TV. This concept of platform is what drives Apple and all of their innovation stems from this core value proposition.

A little side note about how Jobs came to develop this way of thinking. 

A few years before Sony’s founder Akio Morita passed away, I had a chance to interview him about his decision to buy a movie studio. He told me, “Movies, TV, and music are just content”. Sony wanted to own content to use on their devices. Morita made it clear to me and others these properties were just “content” for him to exploit on their devices. Jobs had met Mr. Morita and was a great admirer of his. I believe this helped Jobs formulate his view of the world and, ultimately, influenced his decision to create the iPod and eventually the iPhone. 

We all know how Apple disrupted the PC market with the Mac through its GUI and mouse. We also know how Jobs disrupted the music industry with the iPod and the communications world with the iPhone. And the iPad was the first fresh new design of a mobile computer we have had since the early days of laptops and it disrupted the PC market in many ways. They all had one thing in common — they all had a powerful platform that used an OS, UI and services delivered on some type of hardware with a screen.

Let’s look at Apple TV. When it was introduced, it was called a hobby. But since then, it has sold in the tens of millions and, for many Apple users, this is an important vehicle for streaming movies, TV, and even music to their TV. But it was an important piece of technology for Apple for another reason. It allowed them to develop a TV OS platform in real time that would allow them to create their approach to disrupting TV. While Jobs probably had an actual TV in mind when he made the statement to his biographer, his real emphasis was not on a physical box but instead, as he said, “It would seamlessly sync with all of your devices and with iCloud.” No longer would users have to fiddle with complex remotes for DVD players and cable channels — “It will have the simplest user interface you could imagine.” This was a software OS, UI and platform vision — the TV was just another “blank screen” to Jobs.

This vision of Apple disrupting TV was made clear at the recent Recode Media conference when Apple SVP Eddy Cue said, “The Apple TV platform is what was disruptive.” The disruptive nature was in creating a TV platform that delivers video on every screen in Apple’s hardware arsenal, one that has an easy UI (Siri) and uses iCloud to keep all of that content in sync and deliverable on demand. It is also a platform where Apple and their developers can create innovative apps and services to bolster this vision.

As my friend Benedict Evans of Andreessen Horowitz recently tweeted:

“Apple failed at TV makes me laugh. They’ve sold 1.5 billion TVs” just with the iPhone and iPad. Add the TV experience delivered on Macs and through Apple TVs and that number is closer to 2 billion “TVs.”

Apple’s TV platform allows them to innovate well beyond the OS, UI, and devices. Apple has two original content shows that will debut soon. One is focused on “Carpool Karaoke” and the other is about app developers, patterned somewhat on the concept of “Shark Tank”. Apple creating original content is just following Akio Morita’s playbook that Jobs borrowed to create a disruptive vision for their TV experience. 

Although other video distributors like Comcast, Amazon, etc. have adopted the idea of allowing a user to play back their video on TVs, tablets, and smartphones, Apple’s approach is based more on a platform play that they and their developers can innovate on to go beyond just video. Eventually, they will integrate many more features and add-on content and interactions through the Apple TV OS, something that cannot be done by pure video content distributors. 

This phase of making Apple TV even more disruptive is still in its early stages but it is clear, at least to me, that Jobs’ vision of creating a richer TV environment will change the overall TV experience in time and is on track to deliver the last big vision Steve Jobs created for his customers.

Is Mobile Innovation Dead?

I have a very long history when it comes to being involved with laptop designs. In 1984, as part of my consulting work with IBM, I was asked to be part of the team that worked on their first laptop. The core design team was based in Austin in those days but for months this team and another in Atlanta would go to IBM’s Boca Raton facility where they would work on the original design of this forerunner of their ThinkPad line. At least twice a month, I would fly to Boca Raton to meet with them, review the work and give input from a mobile researcher and user standpoint.

Since our research had a strong mobile user focus, I also got involved with laptop research design work with many of the PC clone companies back then and got to see up close how portable computers evolved from sewing machine-styled designs to the clamshell form factors we still have today. However, I have been surprised that, given clamshells came into the market in 1985 via the original Panasonic laptop, they are still the dominant mobile computing form factor.

The good news is that, over time, the screens have gotten better and now are touch sensitive, their battery life is longer, there are more storage options and they are lighter and thinner. But they are still clamshells. Not that this is bad. This form factor has proven to work well but I believe there at least needs to be more innovation applied to this design if it is to remain the dominant one for mobile productivity.

Of course, Microsoft could argue their Surface has broken the mold of the clamshell and represents an important innovation in portable computing. But it is really just a large tablet with a keyboard and, together, still clamshell-like in its overall design. A few weeks back, I wrote a piece that talked about how Lenovo’s Yoga Book is an example of a radical design with two screens that provides a greater level of versatility in portable computing. I have also been talking to some OEMs who have suggested they are working hard to go beyond clamshell designs and make their mobile devices more powerful and more versatile as well.

While I am excited that OEMs are trying to break the clamshell mold and create some new portable computing designs for the future, I am pretty sure the clamshell is not going away soon. If true, then we need more innovation in this form factor.

One of the more practical innovations I have seen recently comes from HP in a new laptop that has a virtual privacy screen. If you are working on a laptop on a plane or in areas where people are sitting right next to you, it is good to have a privacy screen so that only you can see what is on your display. Privacy screens you can place over a display have been around for decades but HP has a unique hardware and software solution that, at the touch of a button, gives the display a privacy filter. This is a brilliant idea and I wish all laptop makers would add something like it to their laptops.

Another important innovation comes through next-generation cameras such as Intel’s RealSense camera appearing in some high-end laptops today. This adds 360-degree images to the mobile experience and, over time, could become an important vehicle for user-created content for AR and VR.

I also see innovation coming through Microsoft’s Windows Hello and its link to advanced biometrics. Biometrics can help deliver another level of security to a laptop design and add multiple levels of authentication through iris scanning and next-gen fingerprint readers. In this day and age, when local information on a laptop needs even greater protection, biometrics is an important advance in keeping our laptops secure.

There is also some interesting work going on in eye tracking and, with the addition of Cortana on Windows and Siri on Macs, voice adds another dimension to the navigational functions in laptops.

One last innovation to mention is waterproof keyboards. Dumping coffee or soda on a keyboard happens more often than one would think and unprotected keyboards get fried when this happens. Lenovo and others have introduced various models with waterproof keyboards and I think this should be a default feature on all laptops.

These new technologies are a good way to innovate on the clamshell platform but I would like to see even greater innovation, including the addition of VR and AR within these PC operating systems. While I hope someone comes up with a really radical laptop design, clamshell portable PCs will be here for at least another decade, and I hope the vendors who make these mobile computers never stop innovating.

The Unintended Consequences of H1B Visa Cutbacks

As someone who has lived in Silicon Valley all of his life and worked in the tech sector for 35 years, I have come to appreciate the role H-1B visas have played in the growth of tech, here and around the US. In fact, some of my very good friends came out of top US universities and, thanks to this special visa program, were allowed to stay and work in the US and contribute to our tech economy.

President Trump has said he will study the H-1B visa program and how it impacts both the US and his immigration goals but it does appear he is on track to do something to this program that could affect how many of these visas will be allowed. Any decrease would impact our tech companies — about 70% of these visas are used by them.

Given the confusion on this at the moment, other countries are seizing the opportunity to lure potential H-1B visa recipients to their countries to bolster their tech scenes and be more competitive with the US. France, China, and Canada have been the most outspoken on this subject but I am hearing from my friends in the UK, Switzerland, Scandinavia, and Germany that similar programs and invitations for US-bound immigrants are in the works in those countries too.

I recently saw this note from DesiOpt, a site that connects students to employers. It highlights France’s and China’s objective to invite any US-bound students who would be affected by a lack of H-1B visas to their countries instead:

• Axelle Lemaire, France’s Minister of State for Digital Affairs, said earlier this month that the new program goes further than an earlier program known as the French Tech Ticket. “If you’re a foreigner coming from the rest of the world, you can apply and you might get fast-track processing. Your family is also eligible, and there’s no quota as far as I know,” she said, TechCrunch reports.

• As the U.S. looks to revamp and possibly curb the H-1B program that invites foreign tech talent to work in America, France is taking the opposite tack, opening up its visa program in hopes it might attract some top tech workers to that country. And France is not alone. Last month, tech companies based in China said they are hoping President-elect Donald Trump’s proposed crackdown on immigration and hiring of non-American tech workers will mean they can attract and retain more tech talent for themselves.

• Though it’s not clear yet if or how the president-elect will revise the H-1B program, it’s clear there will need to be a delicate balance between preventing Americans from being replaced by foreign workers and shutting the doors to foreign talent that could help America stay competitive when it comes to tech innovation.

• The H-1B visa program brings 85,000 foreign skilled workers into the U.S. each year. A large portion of those employees work in the tech industry.

I suspect this is the tip of the iceberg of the unintended consequences if Trump goes through with any serious curbs on this visa program. Many of these countries have wanted to siphon tech talent away from the US and have been trying to find ways to do so for a long time. Any reduction in H-1B visas gives them more incentive to recruit skilled workers to their countries and bolster their own tech programs.

This is not bad in a broad sense since innovation can come from anywhere in the world where the right type of people and skilled workers have an environment that fosters opportunity and growth. But the US has been a beacon for providing the type of working conditions that attract students from the best colleges in the US and allow them to stay in the US to work and contribute to tech and our overall economy.

This is why tech execs are going out of their way to school President Trump on the value of H-1B visas in hopes that whatever he does to this program favors their position. Whether this will be successful is the big question. But, if they can’t get through to the president and he severely limits the number of H-1B visas available each year, many other countries are ready to take these students in and have them help grow their tech programs at our expense.

Has Apple Missed the Voice-Controlled Hub Revolution?

By now, most have either heard of Amazon’s Echo or Google’s Home hub that uses voice as the means to get answers to questions, set alarms, play music or be used as a control center for controlling IoT devices in the home.

Amazon’s Echo is the most popular version with an estimated 11 million sold in its first year. It has also become the talk of the tech world as it has made popular the idea of voice interfaces and AI all the while becoming an entrenched concept in the minds of techies and nontechies alike.

Looking at what Amazon has done with the Echo, and given its success in selling millions of them, one wonders how Amazon could drive this type of innovation while Apple seems to have missed the boat altogether. Well, it comes down to a technical as well as a philosophical approach to delivering this concept of a home hub with voice controls.

Apple has invested heavily in AI and Siri and, to them, that is the best way to deliver the idea of a voice-controlled assistant. More importantly, it is primarily delivered via a smartphone with a screen that fits in your pocket. Apple’s position is that the best personal assistant is the one you have with you at all times. They recently broadened this from a mobile-only delivery platform and have extended it to Apple TV and the Mac, which makes it possible to have that voice assistant at your disposal anytime and anywhere you happen to be.

Amazon was just as enamored with voice-controlled assistants and AI but they had a problem. Unlike Apple, which had a smartphone to deliver this type of voice assistant, Amazon had killed its Fire Phone and did not have anything with a screen on it under an Amazon-made label. Consequently, their only way to deliver a voice assistant was through a dedicated screenless device like the Echo. It gave Amazon a great voice assistant for the home and allowed them to deliver a sort of Trojan Horse to get people to use more of their Prime services.

We could argue all day about which approach is better. In my case, I have Siri at my disposal on multiple devices as well as an Echo in my kitchen and Echo Dots in my study and at my bedside. I like both and use them depending on my situation at the time.

There is also the issue of both platforms being used to connect to IoT devices in the home. Apple’s HomeKit has solid backing as does Amazon with their own Echo-connected ecosystem of devices. Google has both smartphones and Google Home, also with solid backing for its IoT connected devices. So, from a consumer’s standpoint, they have more options to use voice and AI than ever before.

But Apple’s decision to not create a home hub competitor is predicated on their belief that Siri at your fingertips has a much greater potential. Their recent introduction of the AirPods bolsters their thinking and position on this. Interestingly, over the years I have written multiple articles talking about wearing an earbud that would be connected to some type of pocket device to give you information on demand and allow you to interact with data. I did my first piece on this idea for a UK-based publication called Microscope in 1988.

Although it has taken decades to get here, Apple’s AirPods, tied to an iPhone, is pretty much what I envisioned when I wrote about this over the years. While I admit AirPods make one look geeky, they serve the purpose of an always on, always connected personal assistant that can act on command anytime and anywhere. This is why Apple has opted to bet on this idea rather than create a dedicated home hub like the Echo or Google Home.

As Ben pointed out in yesterday’s Insider, Tim Cook made it clear Siri and HomeKit are critical and Apple is very bullish on them.

Here are Cook’s comments on this from the analyst call:

“The number of HomeKit compatible accessories continues to grow rapidly with many exciting solutions announced just this month including video cameras, motion detectors and sensors for doors, windows and even a water leak. Perhaps even more importantly, we are unmatched when it comes to securing your home with HomeKit enabled door locks, garage doors and alarm systems.

I’m personally using HomeKit accessories and the Home App to integrate iOS into my home routine. Now when I say good morning to Siri my house lights come on and my coffee starts brewing. When I go to the living room to relax in the evening I use Siri to adjust the lighting and turn on the fireplace. And when I leave the house, a simple tap on my iPhone turns the lights off, adjusts the thermostat down and locks the door. When I return to my house in the evening as I near my home, the house prepares itself for my arrival automatically by using a simple geofence. This level of home automation was unimaginable just a few years ago and it’s here today with iOS and HomeKit.”

We recently had a stellar demo of how HomeKit with Siri works in a home fully equipped with HomeKit-connected devices throughout the house. It was impressive and underscores Apple’s belief that the best voice AI assistant is the one that is with you all of the time instead of being a single device planted in the kitchen or den.
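The geofence Cook describes is conceptually simple: the phone compares its location against a radius around home and fires automations only when that boundary is crossed. Here is a toy sketch of that idea in Python (purely illustrative; the coordinates, radius, and function names are hypothetical and say nothing about Apple’s actual implementation):

```python
import math

HOME = (37.3349, -122.0090)   # hypothetical home coordinates (lat, lon)
RADIUS_M = 150                # hypothetical trigger radius in meters

def distance_m(a, b):
    """Great-circle (haversine) distance between two (lat, lon) points, in meters."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))

def on_location_update(prev, curr):
    """Fire an event only when the geofence boundary is crossed, not on every update."""
    was_inside = distance_m(prev, HOME) <= RADIUS_M
    is_inside = distance_m(curr, HOME) <= RADIUS_M
    if not was_inside and is_inside:
        return "arriving"   # e.g., lights on, thermostat up, door unlocked
    if was_inside and not is_inside:
        return "leaving"    # e.g., lights off, thermostat down, door locked
    return None             # no boundary crossing, do nothing
```

The key design point, which any real geofencing system shares, is that actions trigger on the transition across the boundary rather than on raw position, so sitting at home does not repeatedly re-fire the “arriving” scene.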

Here are links to the many HomeKit devices that work with iOS and Siri:
http://www.apple.com/ios/home/
https://support.apple.com/en-us/HT204903

Apple actually has a lead in this area by nature of the hundreds of millions of iPhones in use around the world. In the end, I believe their approach will be the biggest winner in the home hub race. But Apple needs to be more active in promoting the idea that the iPhone and iPad, with Siri, give you a digital assistant anytime, anywhere. They need to show this is the best solution and raise its profile in the marketplace.