Why Apple’s A Series Processors are Central to Their Future

Anyone who has had a chance to use the iPad Pro realizes very quickly that this is one really fast tablet. Apple’s A9X processor is what they call a “desktop class processor” and that sure seems to be the case.

The folks at Ars Technica did an excellent piece using data from Geekbench and GFXBench to show speed comparisons of Apple processors used in past iPads and iPhones. They showed just how far Apple has come with this chip. I highly recommend you check the piece out to get a deeper look at these speed comparisons.

Apple’s move to create faster and more powerful processors of their own is a really big deal. In fact, we have been warning our PC and semiconductor customers for years that Apple clearly has a plan to make their A Series processor central to their future and use it to create a whole host of new products that would help them gain even more control over their destiny. At the moment, Apple is highly reliant on Intel for the processors they use in their Mac line. I don’t see that changing anytime soon. I see the Mac continuing to be Apple’s flagship PC product, favored by a traditional PC crowd for years to come. However, I also think that, over time, the Mac crowd will shrink and Apple will be OK with this as it transitions over to more iOS devices that could supplant the Mac in many use cases.

Over the years, Apple has hired hundreds of semiconductor engineers to work on their chip designs and, under the late Steve Jobs’ guidance, crafted what one could call at least a 15-year plan to make the A Series chips equal to or more powerful than the Intel processors they use today. I don’t think Apple has any plans to put their A Series chips in a Mac soon. I’m not sure that, in the broad scope of their long-term strategy, this makes sense. The reason I think a Mac could keep using an Intel chip for some time is I believe Apple has no interest in merging the two operating systems. In fact, just last week, Tim Cook told the Irish Independent newspaper that Apple would not make a “converged” Mac and iPad:

“We feel strongly that customers are not really looking for a converged Mac and iPad,” said Cook. “Because what that would wind up doing, or what we’re worried would happen, is that neither experience would be as good as the customer wants. So we want to make the best tablet in the world and the best Mac in the world. And putting those two together would not achieve either. You’d begin to compromise in different ways.”

It is important to note here that I believe this also represents a demarcation line between user interfaces. I sense Apple wholeheartedly believes the Mac’s UI should be tied to a keyboard and trackpad and will not put a touch screen in a Mac. On the other hand, iOS was built specifically for touch as the primary UI and, as with the iPad Pro, makes a keyboard just an accessory, albeit an important one for when the iPad Pro is needed for more keyboard-intensive tasks such as writing documents, answering email or creating very long text messages.

This push to make their A Series processors, designed specifically to work with iOS, is highly strategic and aimed at a younger generation of users who have been using iOS on their smartphones and tablets since they were young and consider it their OS of choice. Apple knows full well these younger users are the future workers in business and is working hard to create the kind of tools that can follow them into the business world when they enter it over the next 3-10 years. The iPad Pro is the first, with others to follow (including, I believe, an iOS-based clamshell). This would give those workers another option for use in a business setting sometime in the near future.

But advancing the power and features of their A9 architecture and future processors goes well beyond their current device offerings. Rumors that Apple is working on a car are intriguing and, if true, it would need a powerful set of processors to handle the entertainment and navigation systems as well as any crash-avoidance functions a smart car needs. I also see Apple eventually creating products for VR, AR, more advanced home automation, smarter TVs and perhaps even specialized gaming systems that would work with a future Apple TV system.

Although they could use chips from other vendors, advancing their own semiconductor architecture gives them more control of their designs and ecosystems. While we don’t think of Apple as a semiconductor company like Intel or Qualcomm, this division inside Apple has the same goals and objectives a mainstream semiconductor company has. However, these current and future processors are designed for use in custom Apple products and, by using their own processors, they can really control their destiny.

Apple’s semiconductor division needs to be seen as one of Apple’s greatest assets, with a charter to give Apple the processor power it needs to continue to be the innovative and tech-trendsetting company it is today. Indeed, the chip designers and engineers are central to Apple’s future and will only become more important to Apple products over time.

Published by

Tim Bajarin

Tim Bajarin is the President of Creative Strategies, Inc. He is recognized as one of the leading industry consultants, analysts and futurists covering the field of personal computers and consumer technology. Mr. Bajarin has been with Creative Strategies since 1981 and has served as a consultant to most of the leading hardware and software vendors in the industry including IBM, Apple, Xerox, Compaq, Dell, AT&T, Microsoft, Polaroid, Lotus, Epson, Toshiba and numerous others.

42 thoughts on “Why Apple’s A Series Processors are Central to Their Future”

  1. Should we give Apple’s “no converged device” pledge more or less credence than their “designed for your hand” campaign? That one had a one-year shelf life ^^ CEOs are sales reps, they’ll say whatever boosts quarterly sales.

    As for SoCs, yep, SoCs are fairly central to IT products, and Apple’s are good, and look especially good after a double-fumble from the other main SoC player (slow to 64 bits, and bad first chip). In the great scheme of things, I’m not sure making your own is a lot more important for SoCs than for screens, cameras, batteries, or RAM:
    – Apple aren’t making them, just designing them
    – those designs integrate a lot of 3rd-party IP blocks: graphics, radios…
    – ARM SoCs are a lot more flexible than x86 CPUs, Apple could easily source custom variants of 3rd-party parts (no big.LITTLE, more GPU, more cache, focus on single-core+thread performance seems to be their recipe)
    – most importantly, Macs are using off-the-shelf CPUs, and are about as successful on the PC side as iPhones on the smartphone side. To me, that says very loudly that internals are not where Apple get their success from.

    1. “In the great scheme of things, I’m not sure making your own is a lot more important for SoCs than for screens, cameras, batteries, or RAM”

      I think it’s all a matter of scale economies. I have no doubt that the other smart phone manufacturers, in their wildest dreams, would love to have their own proprietary chips if their production scale only made it commercially feasible.

      1. Samsung and Huawei do have their own chips, Moto have a companion chip. The rest (HTC, Xiaomi, Sony, Lenovo) are rumored more or less strongly to be working on custom designs. For some reason, Samsung are using a lot of third-party chips though. Huawei/Honor are totally in-house, AFAIK.
        Barriers to entry can be fairly low, ARM IP and ready-to-burn designs make it a lot like Lego, OEMs can use standard blocks in a custom arrangement/proportion, and/or re-design individual blocks.

        1. There is a difference in licensing the chip vs. licensing the instruction set and designing your own chip. Samsung just launched their first of the latter and Huawei will at some point as well.

          1. I think the way ARM’s licensing deals are set up, there’s a whole continuum of customization.

            The SoCs are structured into several relatively independent building blocks (CPU core, GPU, VPU for video, DSP for image processing, RAM/storage interfaces, sound, networking/radios, cryptography, trustzone, system management, cache…). Each and every one of those blocks can be used as-is in their vanilla ARM version or swapped out in favor of something else, in-house or 3rd-party (Apple for example use PowerVR GPUs, not ARM’s).
            And all those blocks are linked over some kind of bus, again ARM have their own, but not all OEMs use it (I think Samsung will use their own).

            I’m not sure which parts of the whole mess each OEM is currently customizing and will customize in the next gen. The CPU core gets the most PR, but aside from that I’m fairly sure Samsung’s and Kirin’s SoCs haven’t been vanilla ARM SoCs for a while.
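The mix-and-match block model described above can be made concrete with a toy sketch. This is purely illustrative: the block roles and vendor names below are my own stand-ins, not ARM product names, and real SoC integration is of course vastly more involved.

```python
from dataclasses import dataclass, field

# Illustrative "Lego" model of an SoC: a design is a bag of independent
# blocks, each filled by vanilla ARM IP, in-house IP, or a 3rd party.
@dataclass
class Block:
    role: str     # e.g. "cpu", "gpu", "isp" (hypothetical role names)
    vendor: str   # who supplied the IP for this block

@dataclass
class SoCDesign:
    name: str
    blocks: list = field(default_factory=list)

    def swap(self, role: str, vendor: str) -> None:
        """Replace whichever block fills `role` with another vendor's IP."""
        self.blocks = [b for b in self.blocks if b.role != role]
        self.blocks.append(Block(role, vendor))

    def vendors(self) -> dict:
        """Map each role to the vendor currently supplying it."""
        return {b.role: b.vendor for b in self.blocks}

# Start from a vanilla reference design, then customize two blocks —
# roughly the pattern the comment describes (e.g. a PowerVR GPU and a
# custom CPU core behind the same instruction set).
soc = SoCDesign("example", [Block("cpu", "ARM"),
                            Block("gpu", "ARM"),
                            Block("isp", "ARM")])
soc.swap("gpu", "Imagination")
soc.swap("cpu", "in-house")
print(soc.vendors())
```

The point of the sketch is that customization is per-block: swapping the GPU vendor touches nothing else in the design.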

  2. Not disputing A Series performance, “in their class”. Pretty sure it’s very good, if not the best.

    What gets me going are claims like “desktop class performance”. Until they put one in a desktop, and, this is important, with a full desktop OS, I can’t see that claim as anything other than hyperbole.

    The extra services and subsystems of a desktop OS, over and above iOS, not to mention support of a true filesystem, are the added burden they must bear, while maintaining application performance, to support that claim.
    Let’s not even get into the range of applications on the desktop versus a walled in environment.

    1. “The extra services and subsystems of a desktop alone, over iOS, not to mention support of a true filesystem, are the added burden,”
      I was a file system advocate till I found that Spotlight search was better and faster than me remembering where I “filed” a doc I was in need of. I think the future of the file system is in “the search”, not folders per se.

        1. I’m with you on that one: I don’t want to search for files, I want to know where they are. Same as for socks.
          Are we dinosaurs?

    2. All that is meant by the claim of “desktop class performance” is that the CPU is as fast as a fairly recent desktop/laptop CPU. It’s a claim about benchmarks, and more practically, about the amount of computational power the chip can deliver to an app. It means that the kind of heavy lifting computations that used to be marked “for PCs running on mains power only” can now be done in a fanless battery powered device. It means that developers can build heavy duty number crunching tablet apps that just a couple of years ago they would never have dreamed of even trying to implement for a mobile device.

      I am sure that for you, “performance” connotes certain things which are antithetical to a mobile device with curated apps and without an accessible file system. Sadly, those are your personal connotations around the word. They are not in any way denotations of the word as used by Apple or Intel when advertising a new chip design.
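To illustrate what such benchmark claims actually measure, here is a toy timing harness. It is only a sketch of the shape of the thing: real suites like Geekbench run carefully chosen native kernels, not a Python loop, and the kernel and scoring here are made up for illustration.

```python
import time

def score_kernel(kernel, *args, repeats=3):
    """Time a compute kernel over several runs and return runs/sec,
    keeping the best run — the rough shape of a single-core benchmark
    score (illustrative only)."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        kernel(*args)
        best = min(best, time.perf_counter() - t0)
    return 1.0 / best

def fma_kernel(n):
    # A trivial floating-point multiply-add loop standing in for the
    # "heavy lifting" workloads the comment mentions.
    acc = 0.0
    for i in range(n):
        acc += i * 1.0000001
    return acc

score = score_kernel(fma_kernel, 100_000)
print(f"toy single-core score: {score:.1f} runs/sec")
```

The key observation is that such a score says nothing about OS services, filesystems, or app ecosystems; it measures raw sustained compute, which is all the “desktop class” claim is about.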

      1. As krabbie said above, a full OS imposes a “full” burden. MS-DOS was faster than Windows 3.1, on the same hardware, for the same reasons.

        How is it when running two, three, four applications (not apps) at once? Is there pre-emptive multitasking, ports and interrupts to take care of? The aforementioned filesystem? When (if) the A-Series can do that competitively, then it will be desktop class, not wherever Apple chooses to place the goalposts.

        1. The filesystem is there, it’s just not user-accessible, and would be mostly pointless because each app can only see its own files, except for a few folders that are either shared between apps or users. It’s very much like Android, or any Linux/Unix really; except locked down.

          As for the rest, I’m not sure mobile OSes are that much lighter than desktop ones. I’m sure a bit of generic and backwards-compatibility stuff has been cleaned out, but mobile OSes still support interrupts, multi-tasking/threading, memory protection, garbage collection… On jailbroken iPhones, you can apparently run the usual clients and servers/services (ftp, ssh, samba, VNC…). I’m sure it’s all optimized for a comparatively lighter load, but that’s most probably adjustable.

          I think a huge difference is that iOS is tweaked to keep the UI smooth over everything else, Desktop OSes don’t do that to the same extent. That must carry an overall performance penalty (more switching between UI and app logic), but that’s probably adjustable too.
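The locked-down filesystem described above boils down to a path-containment rule: an app may touch its own container plus a few explicitly shared folders. Below is a generic sketch of that rule; the container paths are made up for illustration, and this is in no way Apple’s actual sandbox code.

```python
import os.path

def can_access(app_container, shared_dirs, path):
    """Return True if `path` falls inside the app's own container or one
    of the explicitly shared directories (sketch of per-app sandboxing)."""
    path = os.path.normpath(path)  # collapses "../" traversal attempts
    allowed = [app_container] + list(shared_dirs)
    return any(
        os.path.commonpath([os.path.normpath(root), path]) == os.path.normpath(root)
        for root in allowed
    )

# Hypothetical container layout, loosely iOS-flavored:
container = "/var/mobile/Containers/AppA"
shared = ["/var/mobile/Shared/Photos"]

print(can_access(container, shared, "/var/mobile/Containers/AppA/Documents/x.txt"))
print(can_access(container, shared, "/var/mobile/Containers/AppB/secret.db"))
print(can_access(container, shared, "/var/mobile/Shared/Photos/img.jpg"))
```

Note that nothing here removes the filesystem; it only scopes what each app is allowed to see, which matches the “it’s there, just not user-accessible” point.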

            iOS limits multitasking to a certain number of processes and only to certain apps.

            By contrasting with MS-DOS, I wasn’t implying that iOS was built on top of OSX, rather that it’s a watered down version. That must mean less burden on the CPU, otherwise why build iOS at all? (Other than to control things)

          2. “Actually, I think the two are so similar I really don’t grok why Apple is making such a fuss about converging them.”

            The primary problem is how to make the UI work for both scenarios. The file system is tricky but, because of the way that iOS finds files, could be managed (I think). But “touch” vs. “kbd/pointer” is a big shift. This is a really hard problem, as Microsoft’s efforts have demonstrated.

          3. Frankly, I’m daily using Android with a mouse + keyboard, and it’s not only nice enough, but the few niggles are oversights that could be solved at the drop of a hat (mainly: enable mousewheel scrolling+zooming OS-wide).
            I’m sure there are border cases where a mouse would be unwieldy, but in hundreds of hours I haven’t come across one yet, and I don’t even have a touch mouse, just a plain old 2-button + wheel.

            Maybe it’s a mindset: I’m not expecting a mindgasm every time I diddle my mouse, just a handy way to do the job. That, it is: I miss it when I don’t have it, and I’ve cursed several times when I forgot to connect it (to be fair, I’ve also tried swiping on my desktop monitor, after a very long day ^^).
            To me, Apple’s position is “we don’t have it, so it’s bad”. Not the other way around. Like for phablets 2 yrs ago.

    3. “Desktop class” really just refers to the ability to run applications that traditionally ran on a desktop computer. Replicating the full desktop experience is neither the goal nor the intent.

      1. I understand what you’re saying. It’s also marketing, how shall I say it politely… “mumbo jumbo”.

        When we are speaking of a leading ARM chip “today” offering desktop level performance, it should be compared to a leading desktop chip “today”. Some reports put it at i5-level performance. Some i5s are merely adequate, when it comes to performance. This is why I think they are remarkable “for their class”. You’re not going to see an ARM performance desktop any time soon. You’ll see multi-ARM servers before that.

  3. The importance of the A series is threefold, I think:
    1) secrecy — without informing competitors in advance, Apple can develop custom features (e.g. 64bit, secure enclave) that give it a temporary monopoly (i.e. the old fast follower strategy just does not work for SOCs).
    2) features — designing their own CPU/SOC allows Apple to focus on their priorities (cutting edge mobile CPU/GPU performance), without being held back by diverging priorities of Intel (plugged-in computing) or Qualcomm (good enough computing).
    3) tax — it denies Intel (and Qualcomm) the opportunity to impose the Intel-tax and extract profits out of the mobile industry (i.e. you should not assume that cutting edge SOCs would stay cheap without Apple as a competitor).

    1. I’m not convinced:
      1- innovations-wise, Apple’s SoCs follow ARM’s, with excellent execution/implementation but no groundbreaking feature I can think of: 64-bit, Secure Enclave, all originated at ARM.
      2- features/designs could be negotiated because the ARM world does not work like x86. Intel won’t make a custom x86 chip for Apple, but I’m sure at least one of the numerous large ARM licensees (Qualcomm, Samsung, Mediatek, Rockchip, Intel, AMD, nVidia…) would be happy to craft a custom ARM SoC.
      3- If Apple are doing it, they must be finding it worthwhile on a cost/benefit basis. Since they don’t sell any SoCs to third parties, I’m sure the competition between Qualcomm, Mediatek and Rockchip + the in-house solutions from Samsung and Huawei + the low-ish barriers to entry have a lot more impact on market prices for the rest of the OEMs.

      1. Designing good silicon is really hard, it benefits from economies of scale and if you have a near monopoly on high-end customers you can claim the bulk of the industry’s profits. All of this still applies to Intel in the plugged-in computing space, but Apple appears to have run away with the crown in the mobile space.

        It is true that there are many runners-up, but they experience life in the same way as AMD has for the longest time (i.e. a day late and a dollar short). Qualcomm’s struggles to get a high-performing 64bit SOC out seem to indicate that fast following is not something that really works for silicon (i.e. you’re skating to where the puck used to be).

        IBM relied on Intel to provide the CPU for the PC, but they never locked in the intellectual property and thereby gave Intel a very profitable franchise (30 years and counting). It would be silly for Apple to make the same mistake and give away such a valuable franchise to a SOC parts supplier, who would also enable its competitors (much better to keep a few strategic components in house).

        Nothing that Intel or Apple does to SOCs/CPUs gives them a permanent monopoly, instead they operate a series of rolling temporary monopolies that are just as effective.

        The single core performance of the Ax series has quadrupled over the past three generations. While this will not go on forever, it takes maybe one more doubling for x86 to be overtaken. I’m sure this is something that keeps people at Intel awake at night.
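Taking the comment’s figures at face value, the arithmetic works out as follows: quadrupling over three generations is roughly a 1.59x gain per generation, and at that rate one more doubling takes about a generation and a half.

```python
import math

# The comment's premise: 4x single-core gain over three A-series
# generations (taken at face value, not independently verified).
per_gen = 4 ** (1 / 3)                        # per-generation multiplier
gens_to_double = math.log(2) / math.log(per_gen)

print(f"{per_gen:.3f}x per generation")
print(f"~{gens_to_double:.1f} generations for one more doubling")
```

The second figure is exact, since log(2) / log(4^(1/3)) = 3/2; whether the growth rate itself holds is of course the speculative part.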

        1. It’s funny: I’m disagreeing wholesale with all of your points and your reasoning:
          – I always thought economies of scale referred to volume, not margins? i.e., that would be a strong reason for OEMing SoCs?
          – I’m not seeing any “following”, fast or slow: ARM publishes the playbook, then Apple, Qualcomm et al go about implementing it. Sure, one got there before the others, but I don’t think that’s what “fast following” means; it means one rips off the other, which I see no sign of. Apple’s signature feature is few but fast cores, yet everyone but them is still on big.LITTLE, which is the polar opposite.
          – I don’t get the IBM parallel. IBM used a free-market CPU, a free-market OS, and free-market apps and ports/peripherals. You need the whole stack for the ecosystem to be free/commoditized. Apple could use any ARM SoC, competitors still wouldn’t be able to sell iPhone clones (ie phones that run iOS, run AppStore apps, use MFI peripherals). And the iPhone itself would be essentially the same (maybe be 1 extra mm thick ^^)
          – How do you get to monopolies? On what, besides their own chips? The x86/x64 instruction set is tightly licensed (to lame ducks AMD and VIA), the ARM feature+instruction set and reference implementation are widely licensed… Apple will probably lose the performance lead in the future, at least temporarily: they have already lost it in the past; even Intel, on a very much more uneven playing field, managed to lose it a few years back.
          – The single core performance of pretty much all ARM SoCs has progressed identically, driven by ARM’s designs. I’d argue that for Desktop, multi-core is more important, and non-Apple has progressed faster there. I’d say Intel is worried by ARM and Mobile in general, where it has no intrinsic advantage and is having to buy its way in.

          1. Your analysis is predicated on a homogeneous smartphone processor used throughout the industry, and that doesn’t exist.

            When Apple designs software elements into its ARM processors, that increases iPhone performance and its ARM processors are no longer standard, off-the-shelf chips like those used in Android phones. Moreover, not all Android phone makers use the same processors and that creates fragmentation. Many Android processors are so slow that users switch off encryption, so the availability of encryption is not the operating reality.

            Those are among the problems that are now causing Google to consider designing a standard Android processor. It was a big news (rumor) item last week.

          2. Mmmmm… I’m not predicating a “homogeneous” SOC industry-wide, whatever that means. Where did you get that from? There’s a homogeneous instruction set, an almost-homogeneous feature set, and that’s it.

            I’m unclear about what you mean by “designing software elements into its procs”.

            And by “standard off-the-shelf parts”: To you, is any SoC available to 3rd parties “standard off-the-shelf”, and any SoC that isn’t, mmmm magic ? Samsung and Huawei’s SoCs are for their own internal use, does that make them off the shelf or magic ?

            Not all iDevices use the same SoCs either, so that makes them fragmented too ? Is “fragmentation” code for “bad” or for “nicely diverse and tailored to different needs” ?

            Encryption speed is a matter of having an encryption engine, not general performance; not all old ARM SoCs have that, so those who care should check. It’s an operating reality if you care about it, it isn’t if you don’t. Whatever “operating reality” means. If you mean “ecosystem-wide baseline”, no, it isn’t, since it doesn’t impact compatibility.

            There’s absolutely 0 rumor about “Google designing a standard Android processor”. There’s a rumor about Google wanting a custom SOC for their own use and maybe as a reference implementation available to all, specced by Google, designed by someone else. And alongside, not instead of, the usual smorgasbord of other ARM SoCs.

          3. “Is “fragmentation” code for “bad” or for “nicely diverse and tailored to different needs” ?”

            I think the great fragmentation in Android’s hardware and software leads directly to widespread buggy experience that is very difficult for Google, OEMs, and telecoms to fix. All Android devices, even Google’s, are fixable theoretically, but not in reality at scale.

            (There are more than 5000 models of cars. Most cars break rarely, but all can be fixed at scale. Our choice in cars could be said to be ‘fragmented’ in your very good sense of the word.)

      2. Apple actually does have some unique stuff in their processors. Imagination Tech, which makes the best GPU cores for mobile, have an exclusivity deal (for 2 years) with Apple.

  4. I noticed that Tim Cook did not rule out a dual-boot system like Boot Camp, but this time with both iOS and Mac OS. The Mac OS would still use the Intel CPU. I think that could work out in two different ways. In the case of the iPad Pro, the screen would be the monitor when using the Mac OS and you would just have a Bluetooth mouse and keyboard. In the case of the iPad Air 2 size, or even the iPad mini if you go far enough into the future to shrink microprocessor components and increase storage capacity, they would be like a Mac mini and connect somehow to a desktop monitor (USB-C port/s) and Bluetooth mouse and keyboard. The trick would be allowing both OSes to share the same storage, a dynamic partition rather than the static partition of Boot Camp. This product would be a different kind of convergence. Apple might not be able to pull it off today because of various tradeoffs and constraints, but give it a few more years and I bet we see this product.

    1. Isn’t it more jarring to have to switch OS than to have to replicate touches with… touches (on a trackpad instead of a screen) or mouse movements (especially with the fancy mice that integrate a touch surface too)?
      As confusing and undiscoverable as mobile UIs have become, I’m really not grokking what’s so undoable about melding touch and non-touch UIs. Also, everybody but Apple is doing it: Windows and Android support mice and touchpads, and I use both daily w/o having had to grow an extra brain.

      1. I suppose two OS’s could be jarring if the same user had to switch back and forth between environments on one device, in order to complete one task or set of tasks. Thinking of Metro here, where you could suddenly find yourself in full Windows mode if you went a layer or two deep in a menu.

        If Apple went this route and combined two OSs in one hybrid device, it wouldn’t be particularly jarring if you picked up a device to use in a certain way (say in your hands or balanced on knees) and it worked in touch mode; then set it on a desk and it worked in desktop mode. (My devices are already extremely quick at waking up, relaunching stuff, etc., so that shouldn’t be much of an issue.)

        You’re right, users can cope with a lot of stuff. I can cope with a Touch UI one second, and turn to a desktop the next. What I don’t want to do is reach across my trackpad and keyboard to my screen and faff about with a touch UI on certain elements at certain times and not other times, or vice versa. I am happier with one or the other in a given situation or use case, even if the OS is really good at predicting my intentions through touch.

        Meanwhile, continuity between devices enables great team collaboration on a variety of devices and interfaces, or a single user to use multiple devices all at one time.

        What’s jarring is UI elements trying to be all things to all inputs at the same time, and not doing anything particularly well or consistently. Kind of like a website that is not optimised for phones. What’s jarring is not knowing where the capabilities of one interface may end and the next may be required. What’s jarring is not having smooth continuity between the work done in different environments or on different devices.

        “to replicate touches with… touches (on a trackpad instead of a screen) or mouse movements (especially with the fancy mice that integrate a touch surface too).”

        Yes, you “touch” your mouse or your trackpad, and you “touch” your screen. After that, there is a world of difference. Trackpads and mice are about an abstraction, plus an economy of movement and precision. I can accurately access the whole of a huge screen by moving my finger a few mm on a trackpad. Fingers are a whole different type of movement and interaction, as well as being more immediate or direct. There is little use in convergence of the interfaces right now, unless developers are willing to think about both.

        Also, If you have an awesome trackpad on a desktop device, and an awesome touch UI on a touch device, then you don’t particularly miss either one on the other device. I’m loving both right now. But, yeah, I can see how you think of it as replication if the experience on both is mediocre and neither stands out as the unique experience they ought to be.

    2. I agree – it would be no surprise to see dual-boot on an iPad Pro. Of course, you’d need a keyboard and mouse/trackpad for use on the OS X side.

      More importantly, there’s no real reason that Apple couldn’t put a newer-generation A processor in MacBooks or some MacBook Pros. The OS and Apple apps would be fat-binary (ARM + x86) of course, and there’d be a new version of Rosetta to handle the x86-only binaries. Apple doesn’t own Rosetta: it was licensed, and the current owner of that technology is IBM. I am sure a suitable deal can be arranged. Otherwise, Apple has, finally, the ability to build its own.

      As you correctly intuit, the devil is in how to get both sides to work with some set of common files, and have people understand how that all works. *That* is the hard part; the hardware and OS are easier, although by no means “easy”.

  5. To understand Apple’s ambitions in silicon, I think it is important to go back in history about 20 years, and to recall all the problems that Apple has had with not owning their own CPUs. In particular, the embarrassing situation where G5 PowerPCs never came to PowerBooks due to excessive power consumption, which forced Apple to switch to Intel (which was an excellent decision in hindsight, regardless of the G5 issues).

    Apple has been burnt badly by CPU vendors not sharing their priorities. Motorola was said to be more focused on the embedded market rather than PC chips, and IBM was more focused on servers. Neither Motorola nor IBM shared Apple’s wish for a CPU optimised for “performance per watt”, as Steve Jobs put it at the time.

    Ten years after that debacle, it is now apparent that Apple’s priorities were not necessarily aligned with Qualcomm’s. Apple prioritised single-core performance while the other ARM licensees pursue designs with many cores. Apple prioritised 64-bit at a time when Qualcomm was still thinking it was a few years out. Apple strongly emphasised full encryption when the others still did not.

    Apple cannot make the products that it wants to unless the chipmakers collaborate, and Qualcomm was not sharing Apple’s vision. Even if they were, I’m sure Apple would have had doubts for the future; they’ve suffered hard before.

    Obviously, CPUs are central to computing. Jony Ive couldn’t design the MacBooks that he wanted without going Intel. He would never have remained as iconic if he was stuck designing fat PowerPC Powerbooks. Tim Cook would never have been able to stick to his pledge on privacy without Apple’s own silicon (Lollipop still hasn’t been able to make full-disk encryption mandatory due to performance issues, for example).


    Of course there are competitive advantages to designing your own chips as well. However, in Apple’s case, so much of their innovation in smartphones rests on having cutting-edge silicon that shares their priorities, and I think competition is secondary.

    1. “Jony Ive couldn’t design the MacBooks that he wanted without going Intel. He would never have remained as iconic if he was stuck designing fat PowerPC Powerbooks.”

      Actually, Apple’s notebooks back then were much thinner than the competition, and the main thing making notebooks fat wasn’t the CPU, but the presence of the optical drive, the removable battery, the thickness of the CFL lights behind the screens, and so on. The TDP of the CPU in the first MacBook Air isn’t much less than the TDP of CPUs in regular, thicker notebooks of the same vintage.

      The entire CPU industry followed Intel on a lengthy detour into the weeds with the P4 generation of CPUs (and their competitors in AMD and PowerPC land), designed to have high clock speeds without regard for power usage. It was only when they realized that they could not simply keep ramping up clock speeds forever that performance per watt (which was Intel’s talking point for selling the Core CPUs; Apple did not invent it) and performance per clock cycle became a mantra.

      1. Two points. You are correct, in the US, up till the G4. The problem was with fitting the G5 in a laptop. It just wasn’t feasible and so we were stuck with G4s. That was what Jobs was embarrassed about. If Apple hadn’t switched to Intel and Ive had been forced to make G5 PowerBooks, they would have been chubby indeed.

        Regarding G4 PowerBooks being less thick than the huge Dells of that time, well, that would probably be true if you lived in the US. But that doesn’t change the fact that there were much slimmer Wintel laptops in Japan, and that Apple clearly foresaw the ultrabook category becoming mainstream in the future, and wanted to build them. It’s just that Dell and HP sucked.

        1. The G5 could be seen as Apple’s Tejas processor, except they actually put it into production, whereas Intel cancelled Tejas and had to do the radical re-think that led to the Core architecture.

          Either that, or the G5 was Apple’s Prescott. Either way, the G5 was a dead end, and the writing was on the wall that a major restructuring of the architecture was necessary the minute engineering samples of the G5 rolled off the assembly line.

          Apple’s problem was, as you said, that the other members of the PowerPC alliance weren’t interested in the same goals that Apple had for the architecture, so Apple had to jump ship and go with a different CPU architecture.

          Apple has always been an American company (IIRC, only in the iPhone era did they start selling more overseas than in the US), so it’s proper to compare their offerings to other North American designs and not to Japanese or Taiwanese designs.

          1. Apple wanted to push the very limits of portability, an initiative that bore fruit in the form of the MacBook Air, just 2 years after Apple went to Intel. If they had stayed with the PowerPC, this would not have been possible without severely compromising performance.

            Dell and HP did not push the limits of portability. They made PCs that their Taiwanese contractors would build at low cost. They were instead pushing the limits of price.

            Japanese PC companies also have been pushing the limits of portability. This is because we carry around laptops in our bags during our hour-long train commute (standing 2 hours both ways) and we demand mobility.

            I made the comparison to the Japanese companies because, like Apple, they were pushing the limits. I think this is more relevant than comparing simply by geographic proximity.

        2. One thing, though: the mere possibility of DIY, and the existence of several suppliers, make a repeat of the issue Apple had with PowerPC (and mostly still has with x86, though the danger is remote because Intel is solid, its goals are aligned with Apple’s, and AMD/VIA mitigate it) a practical impossibility.
          Apple doesn’t *have* to make their own. If long-term safety is their goal, they could leave their license unused just in case, pick a SoC from a large handful of OEMs, or have a slightly smaller handful of ODMs custom-design one for them… Obviously the route they took requires more investment, but it has short-term technical and monetary rewards.

    2. A few clarifications:
      1- full-disk encryption is always available on Android like on iOS (and became so around the same time: iOS 4 6/2010 vs Android 3 2/2011);
      2- it is optional on both;
      3- it has a noticeable performance impact on SoCs that don’t support ARM’s accelerated encryption instructions. Those used to be optional and are now baseline in ARMv8 (i.e. 64-bit ARM). Users who care about it are expected to either get a 64-bit phone or check that their SoC supports hardware encryption (all current Qualcomm SoCs do).
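      (For the curious: on Linux-based systems such as Android, the ARMv8 crypto extensions show up as flags like `aes` on the `Features` line of `/proc/cpuinfo`. A minimal sketch of checking for them in Python — `has_hw_aes` is a hypothetical helper, not any real Android API:)

      ```python
      # Detect ARMv8 hardware AES support from a /proc/cpuinfo dump.
      # Hypothetical helper for illustration; parses the text rather than
      # reading the live file so the example is self-contained.
      def has_hw_aes(cpuinfo_text: str) -> bool:
          for line in cpuinfo_text.splitlines():
              if line.lower().startswith("features"):
                  flags = line.split(":", 1)[1].split()
                  return "aes" in flags
          return False

      # Example Features lines: a 64-bit core with crypto, a 32-bit core without.
      armv8 = "Features\t: fp asimd evtstrm aes pmull sha1 sha2 crc32"
      armv7 = "Features\t: swp half thumb fastmult vfp edsp neon vfpv3"
      print(has_hw_aes(armv8))  # True
      print(has_hw_aes(armv7))  # False
      ```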

      Apple’s Secure Enclave is a rebranding and custom implementation of TrustZone/CryptoCell ( http://www.arm.com/products/processors/technologies/trustzone/ ; https://www.quora.com/What-is-Apple%E2%80%99s-new-Secure-Enclave-and-why-is-it-important ). I haven’t dug deep, but high-end Qualcomm and Exynos SoCs are supposed to have the same.

      The security differences are not so much technical as financial: some (most?) consumers would rather save $10 and get a barebones SoC. I’m sure Apple could have procured a third-party SoC to their liking on that score. Their difference is branding, not technical.

      1. “1- full-disk encryption is always available on Android like on iOS (and became so around the same time: iOS 4 6/2010 vs Android 3 2/2011); ”

        FD encryption is ‘available’, but /unused/ on almost all Android devices except recent premium and mid-market phones with SoCs that can actually run the encryption without killing the battery or the UX. In a few years, as SoCs develop, encryption will be used across all Android devices except the weakest/cheapest models.
