Apple Switches To ARM
#1
https://techreport.com/featured/3471264/...cial-2020/
Quote:Apple says it's putting a “relentless focus on performance per watt,” noting that iPhone performance has increased more than 100-fold since the original model launched, and that iPad GPU performance is up 1,000-fold over the same span. The company plans to release a whole family of SoCs (systems-on-a-chip), dubbed Mac SoCs, to support the change, including GPUs, SSD controllers, and more. The change will encompass MacBooks and desktop systems alike. The first ARM-based Mac is coming this year; Apple expects to complete the transition to ARM within two years.

Apple’s upcoming update, macOS Big Sur, will ship with Apple's own applications already updated to support the upcoming chips, including Final Cut Pro. The company is also taking measures to ensure it doesn't leave developers and longtime users out in the cold. Apple will offer fast, transparent emulation of legacy apps through its Rosetta 2 emulation platform, of which Apple says “most apps will just work.” To show off Rosetta 2, Apple gave us a look at Shadow of the Tomb Raider running on its ARM chip at 1080p in real time.

Apple is also creating a Developer Transition Kit, for which registered developers can apply through its developer website. The system is a Mac mini enclosure housing Apple's A12Z SoC, 16 GB of RAM, a 512 GB SSD, and a pre-release version of macOS Big Sur. This could also hint at the first ARM Mac Apple plans to release, though Apple was quiet about the specifics of its 2020 ARM system beyond this potential candidate.

https://www.extremetech.com/mobile/31207...based-macs
Quote:The big question on everyone’s mind since Apple’s unveiling of its upcoming ARM shift is what kind of performance we can expect the new chips to offer. It’s not an easy question to answer right now, and there’s some misinformation about the differences between modern x86 and ARM CPUs in the first place.
...
What people are actually arguing, when they argue about CISC versus RISC, is whether the decoder block x86 CPUs use to translate CISC instructions into internal RISC-like micro-ops burns enough power to be considered a categorical disadvantage against ARM chips.

When I’ve raised this point with AMD and Intel in the past, they’ve always said it isn’t true. Decoder power consumption, I’ve been told, is in the 3-5 percent range. That’s backed up by independent evaluation: a comparison of decoder power consumption in the Haswell era suggested an impact of 3 percent when the L2 / L3 caches are stressed, and no more than 10 percent if the decoder is itself the primary bottleneck. The CPU cores’ static power consumption was nearly half the total. The authors of the comparison note that 10 percent represents an artificially inflated figure based on their test characteristics.
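
To put those percentages in concrete terms, here's a quick back-of-the-envelope sketch in Python. The 15 W package figure is my own assumption for an ultrabook-class chip; the decoder shares are the ones from the evaluation above.

Code:
# Rough decoder-power estimate for an assumed 15 W ultrabook-class CPU.
# The 3-10% decoder shares come from the Haswell-era study quoted above.
package_power_w = 15.0

for decoder_share in (0.03, 0.05, 0.10):
    decoder_w = package_power_w * decoder_share
    print(f"{decoder_share:.0%} decoder share -> {decoder_w:.2f} W of {package_power_w:.0f} W")

# Even the artificially inflated 10% case is only ~1.5 W, small next to the
# static core power the study put at nearly half the total.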

A 2014 paper on ISA efficiency also backs up the argument that ISA efficiency is essentially equal above the microcontroller level. In short, whether ARM is faster than x86 has been consistently argued to be based on fundamentals of CPU design, not ISA. No major work on the topic appears to have been conducted since these comparisons were written. One thesis defense I found claimed somewhat different results, but it was based entirely on theoretical modeling rather than real-world hardware evaluation.

CPU power consumption is governed by factors like the efficiency of your execution units, the power consumption of your caches, your interconnect subsystem, your fetch and decode units (when present), and so on. ISA may impact the design parameters of some of those functional blocks, but ISA itself has not been found to play a major role in modern microprocessor performance.

PC Mag’s benchmarks paint a mixed picture. In tests like Geekbench 5 and GFXBench 5 Metal, Apple's Intel-based laptops are outpaced by the iPad Pro (and sometimes by the iPhone 11).
...
This implies a few different things are true. First, we need better benchmarks performed under something more like equal conditions, which obviously won’t happen until macOS devices with Apple ARM chips are available to be compared against macOS on Intel. Geekbench is not the final word in CPU performance (there’ve been questions before about how effective it is as a cross-platform CPU test), and we need to see some real-world application comparisons.

Factors working in Apple’s favor include the company’s excellent year-on-year improvements to its CPU architecture and the fact that it’s willing to take this leap in the first place. If Apple didn’t believe it could deliver at least competitive performance, there’d be no reason to change. The fact that it believes it can create a permanent advantage for itself in doing so says something about how confident Apple is about its own products.

At the same time, however, Apple isn’t shifting to ARM in a year, the way it did with x86 chips. Instead, Apple hopes to be done within two years. One way to read this decision is to see it as a reflection of Apple’s long-term focus on mobile. Scaling a 3.9W iPhone chip into a 15-25W laptop form factor is much easier than scaling it into a 250W TDP desktop CPU socket with all the attendant chipset development required to support things like PCIe 4.0 and standard DDR4 / DDR5 (depending on launch window).

It’s possible that Apple will launch a superior laptop chip compared with Intel’s x86 products, but desktop CPUs, with their larger core counts and higher TDPs, will remain an x86 strength for several years yet. I don’t think it’s an exaggeration to say this will be the most closely watched CPU launch since AMD’s Ryzen back in 2017.

Apple’s historic price and market strategy make it unlikely that the company would attack the mass market. But mainstream PC OEMs aren’t going to want to see a rival switch architectures and be decisively rewarded for it while they’re stuck with suddenly second-rate AMD and Intel CPUs. Alternately, of course, it’s possible that Apple will demonstrate weaker-than-expected gains, or only be able to show decisive impacts in contrived scenarios. I’m genuinely curious to see how this shapes up.

https://www.techpowerup.com/269024/bad-i...ntel-split
Quote:According to a sensational PC Gamer report citing former Intel principal engineer François Piednoël, Apple's dissatisfaction with Intel dates back to some of its first 14 nm chips, based on the "Skylake" microarchitecture. "The quality assurance of Skylake was more than a problem," says Piednoël. "It was abnormally bad. We were getting way too much citing for little things inside Skylake. Basically our buddies at Apple became the number one filer of problems in the architecture. And that went really, really bad. When your customer starts finding almost as much bugs as you found yourself, you're not leading into the right place," he adds.

It was around that time that decisions were taken at the highest levels in Apple to execute a machine architecture switch away from Intel and x86, the second of its kind following Apple's mid-2000s switch from PowerPC to Intel x86. "For me this is the inflection point," says Piednoël. "This is where the Apple guys who were always contemplating to switch, they went and looked at it and said: 'Well, we've probably got to do it.' Basically the bad quality assurance of Skylake is responsible for them to actually go away from the platform." Apple's decision to dump Intel may have been further precipitated by the string of cybersecurity flaws affecting Intel microarchitectures that came to light in 2019. The PC Gamer report cautions that Piednoël's comments should be taken with a pinch of salt, as he has been among the more outspoken engineers at Intel.
#2
https://www.extremetech.com/computing/31...nce-vs-x86
Quote:Ever since Apple announced the A12Z and its shift away from x86, there’ve been questions about exactly how these ARM chips will perform and what we can expect from them. The first benchmark results are starting to appear from Apple dev kits, and as long as you take them with a mountain of salt, they’re pretty interesting.
...
One thing to keep in mind is that emulation performance can vary drastically depending on the application. Some programs might run with relatively small penalties, while others crater and die. Rosetta 2 is specifically designed to avoid those outcomes, but historically there's a nasty corner case or two lurking somewhere in any emulator, and some applications are simply harder to emulate than others. The upshot is that we don't really know if the 1.44x lead the 13-inch MacBook Pro has over the dev kit is the product of emulator handicapping or a pretty good look at the CPU's real performance. Data from the iPad Pro suggests it might be the former.

If we assume that the A12X in the iPad Pro is a pretty good stand-in for the A12Z, we can check ARM-native Geekbench performance, albeit in iOS, not macOS. Here, we’re looking at 1120 single-core, 4650 multi-core, with a scaling factor of 4.16x. The MacBook Pro 13-inch is only about 8 percent faster than the iPad Pro in single-thread, and 10 percent slower in multi-thread.

Frankly, that should send a frisson of fear through Intel and AMD. The implication of these results is that the gap between the 13-inch Mac and the A12Z is largely the result of emulation. That’s not a guarantee, because OS differences matter in situations like this, but it certainly looks as though most of the penalty the A12Z is carrying is related to emulating x86 code.
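
If you chain the quoted figures together, you can back-solve roughly how much of the dev kit's deficit is Rosetta 2 overhead. Here's a quick sketch of my own inference, leaning on the article's assumption that the A12X is a fair stand-in for the A12Z:

Code:
# Back-solving the implied Rosetta 2 penalty from the quoted Geekbench 5
# single-core figures. Assumes the A12X (iPad Pro) is a stand-in for the
# A12Z in the dev kit, as the article does.
ipad_native_st  = 1120                    # A12X running native ARM code
macbook_st      = ipad_native_st * 1.08   # MacBook Pro 13" is ~8% faster
dtk_emulated_st = macbook_st / 1.44       # MacBook leads the dev kit by 1.44x

penalty = ipad_native_st / dtk_emulated_st
print(f"Implied dev kit score under emulation: {dtk_emulated_st:.0f}")
print(f"Implied Rosetta 2 slowdown: {penalty:.2f}x (~{1/penalty:.0%} of native)")

By this rough math Rosetta 2 retains about three-quarters of native speed, which squares with the article's read that most of the A12Z's deficit is emulation rather than silicon.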

Apple’s year-on-year record of delivering new performance improvements is considerably better than Intel’s right now. AMD can make a much stronger argument for its own recent improvement, thanks to Ryzen, but the enormous 1.52x IPC improvement from Excavator to Ryzen tilts the comparison a bit. To put it bluntly, AMD’s improvements the last three years would be a little less impressive if Bulldozer hadn’t been such an awful chip to start with.

We’re in a weird situation at the moment. Intel has long been Apple’s chief CPU supplier, but AMD sells the more performant mobile x86 CPUs today, making it the more obvious point of comparison. The Ryzen 9 4900HS appears to score 1116 single-core and 7013 multi-threaded. x86 MT is, at least, in no immediate danger in absolute terms. Keep in mind that the 4900HS also draws far more power than either the Intel or Apple chips.
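
Running the quoted scores through the same kind of quick ratio check makes the single-thread / multi-thread split plain:

Code:
# Ratios from the Geekbench 5 scores quoted above.
ryzen_4900hs = {"st": 1116, "mt": 7013}   # x86, much higher power draw
ipad_a12x    = {"st": 1120, "mt": 4650}   # ARM, tablet power envelope

print(f"Single-thread: {ryzen_4900hs['st'] / ipad_a12x['st']:.2f}x")  # ~1.00x, parity
print(f"Multi-thread:  {ryzen_4900hs['mt'] / ipad_a12x['mt']:.2f}x")  # ~1.51x, x86 lead

Single-thread is effectively a dead heat; the 4900HS's multi-threaded cushion is what "no immediate danger" rests on, and it comes with a far bigger power budget.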

What we see here isn’t proof that Apple will launch a MacBook ARM chip that rivals the best Intel and AMD can offer. But it certainly puts a floor under expected performance, barring unusual emulator quirks that Apple will spend the next few months quashing. The x86 companies may want to ask their mobile CPU designers to put an extra pot of coffee on.
#3
Apple will still have Thunderbolt 4 on its ARM CPUs: https://www.techpowerup.com/269664/apple...nderbolt-4
#4
https://www.tomshardware.com/news/report...n-for-macs
Quote:Just last month, Apple finally announced that it would be transitioning away from Intel to its own custom ARM silicon for Macs. To accomplish this it needs a production partner, and who better than TSMC? A new report from DigiTimes states that this will be the case, and that by the first half of 2021 Apple will be contracting a small portion of TSMC's capacity.

Of course, this comes as a surprise to exactly no one. Apple was already one of TSMC's biggest customers for mobile chips, and with TSMC's progress on advanced process nodes, it is only logical for Apple to turn to its trusted partner. Earlier rumors also pointed to Apple partnering with TSMC for ARM Macs.

But although Apple might be starting small with TSMC, the chipmaker has a potentially big client on its hands. Analysts predict that the first Macs with Apple Silicon will be the MacBook Air and the 13-inch MacBook Pro, with the remainder of the lineup following in due time. Apple has also explained that it will transition to ARM in phases spanning a period of two years.

