Apple’s Neural Engine = Pocket Machine Learning Platform

on September 19, 2018

I had a hunch going into Apple’s event that the stars of the show would be its silicon engineering team. The incredible amount of custom silicon engineering that went into these products is worthy of a whole post at some point. For now, I want to focus on the component that may have the most significant impact on future software design: the neural engine.

Big Leap Year Over Year
It’s helpful to first look at the specific year-over-year changes Apple made to the neural engine. In the A11 Bionic, the neural engine occupied a much smaller part of the overall SoC and was integrated alongside other components. It was a dual-core design capable of 600 billion operations per second.

The neural engine in the A12 Bionic now has its own dedicated block in the SoC, has jumped from two cores to eight, and is capable of 5 trillion operations per second. While these cores are designed with machine learning in mind, they also play an exciting role in managing how the CPU and the GPU are used for machine learning work. Apple referred to this as the smart compute system. Essentially, a machine learning task is handled by three components working together: the neural engine, the CPU, and the GPU. Each plays a role, and the work is managed by the neural engine.
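In developer terms, this dispatch surfaces as Core ML’s compute-unit setting, which tells the framework which engines it may schedule work on. A minimal configuration sketch, assuming a hypothetical compiled model bundle named `Classifier.mlmodelc`:

```swift
import Foundation
import CoreML

// .all lets Core ML split work across CPU, GPU, and Neural Engine as it
// sees fit; .cpuOnly and .cpuAndGPU restrict the dispatch (useful when
// debugging or measuring energy use).
let config = MLModelConfiguration()
config.computeUnits = .all

// "Classifier.mlmodelc" is a placeholder for any compiled Core ML model.
let modelURL = URL(fileURLWithPath: "Classifier.mlmodelc")
let model = try? MLModel(contentsOf: modelURL, configuration: config)
```

The developer states intent; the system decides, per operation, which engine actually runs it.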

As impressive as the engineering of the whole A12 Bionic is, where it all comes together is in the software that allows developers to take advantage of all this horsepower. That is why Apple now letting developers tap the neural engine through Core ML, to make apps we have never experienced before, is a big deal.
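To give a sense of what that looks like in practice, here is a hedged Swift sketch of image classification built on Vision and Core ML; the `model` parameter stands in for any developer-supplied Core ML model, and `topLabel` is a hypothetical helper of mine, not an Apple API:

```swift
import Vision
import CoreML

// Run a Core ML classifier over a single image via the Vision framework
// and hand back the most confident label.
func classify(cgImage: CGImage, model: VNCoreMLModel,
              completion: @escaping (String?) -> Void) {
    let request = VNCoreMLRequest(model: model) { request, _ in
        let results = request.results as? [VNClassificationObservation] ?? []
        completion(topLabel(results.map { ($0.identifier, $0.confidence) }))
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}

// Pure helper: pick the highest-confidence label from (label, confidence) pairs.
func topLabel(_ observations: [(String, Float)]) -> String? {
    observations.max(by: { $0.1 < $1.1 })?.0
}
```

The same request can be re-run per frame of a video feed, which is where the A12’s per-second throughput starts to matter.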

The Machine Learning Platform
Apple is getting dangerously close to bringing a great deal of science fiction into reality, and its efforts in machine learning are at the center. In particular, something geeks in the semiconductor industry like to call computer vision.

At the heart of a great deal of science fiction, and the subject of many analyses I have done, is the question of what happens when we can give computers eyes. This is front and center in the automotive industry, since cars need to be able to see, detect, and react accordingly to all kinds of objects on and around the road. Google Lens has shown off some interesting examples here as well: you point your phone at an object, and the software recognizes it and gives you information. This is a new frontier of software development, and up to this point it has been confined to highly controlled experiences.

What is exciting is to think about all the new apps developers can now create with the unprecedented power of the A12 Bionic in a smartphone and rich APIs for integrating machine learning into their software.

A fantastic demonstration of this technology took place on stage; if you have not seen it, I encourage you to watch that part of Apple’s keynote. An app called HomeCourt did real-time video analysis of a basketball player, tracking everything from how many shots he made or missed, to where on the court he made and missed them as a percentage of his shots; it could even analyze his form, down to the legs and wrist, to look for patterns. It was an incredible demonstration with real-world value, yet it only scratches the surface of what developers can do in a new era of iPhone software with machine learning at its core.
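Once the vision pipeline has classified each shot, the per-zone statistics layered on top are plain bookkeeping. A hypothetical sketch of that layer in Swift (the `ShotTracker` type is my invention, not HomeCourt’s actual code):

```swift
// Tally makes and attempts per court zone, the kind of statistic an app
// like HomeCourt derives after computer vision has judged each shot.
struct ShotTracker {
    private var made: [String: Int] = [:]
    private var attempts: [String: Int] = [:]

    // Record one shot attempt in a zone, and whether it went in.
    mutating func record(zone: String, made didScore: Bool) {
        attempts[zone, default: 0] += 1
        if didScore { made[zone, default: 0] += 1 }
    }

    // Shooting percentage for a zone, or nil if no attempts were taken there.
    func percentage(zone: String) -> Double? {
        guard let taken = attempts[zone], taken > 0 else { return nil }
        return Double(made[zone, default: 0]) / Double(taken) * 100
    }
}
```

The hard part is everything upstream: turning raw frames into “shot taken, from here, made or missed” in real time.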

Machine Learning and AI as the New Software Architecture
When it comes to this paradigm change in software, it is important to understand that machine learning and AI are not just features developers will add but a fundamentally new architecture that will touch every bit of modern-day software. Think of AI/ML in software the same way multi-touch became the foundation of the modern smartphone UI. AI/ML is a new foundational architecture enabling a new era of modern software.

I can’t overstate how important semiconductor innovation is to this effort. We have seen it in cloud computing, where many Fortune 500 companies are now deploying cloud-based machine learning software thanks to innovations from AMD and NVIDIA. However, client-side processing for machine learning has lagged well behind the capabilities of the cloud until now. Apple has brought a true machine learning powerhouse to the pockets of its customer base and opened it up to the largest and most creative developer community of any platform.

We are just scratching the surface of what is possible, and the next 5-7 years of software innovation may be more exciting than the last decade.

Competing With Apple’s Silicon Lead
If you have followed many of the posts I’ve written about the challenges facing the broader semiconductor industry, you know that competing with Apple’s silicon team is becoming increasingly difficult. Not just because it is becoming harder for traditional semiconductor companies to spend the kind of R&D budget they need to meaningfully advance their designs, but also because most companies don’t have the luxury of designing a chip that only needs to satisfy a single company’s products. Apple, as a semiconductor engineering team, has the luxury of developing, tuning, and innovating specialized chips that exist solely to bring new experiences to iPhone customers. That is exceptionally difficult to compete with.

However, one area where companies can compete is cloud software. Good cloud computing companies, like Google, can conceivably keep some pace with Apple by moving more of their processing to the cloud and off the device. No company will be able to keep up with Apple in client/device-side computing, but they can compete if they utilize the monster computing power of the cloud. This, to me, is one of the more interesting battles of the next decade: Apple’s client-side computing prowess vs. the cloud computing software prowess of those looking to compete.