Apple Switches To ARM
#1
https://techreport.com/featured/3471264/...cial-2020/
Quote:Apple says it’s putting “relentless focus on performance per watt,” noting that iPhone performance has increased more than 100 times since the original model launched, and that iPad GPU performance is 1,000 times that of the same original phone. The company plans to release a whole family of SoCs (Systems-on-a-Chip) called Mac SoCs to support the change, including GPUs, SSD controllers, and more. The change will encompass MacBooks and desktop systems alike. The first ARM-based Mac is coming this year; Apple expects to complete the transition to ARM within two years.

Apple’s upcoming update, macOS Big Sur, will feature Apple applications already updated to support the upcoming chips, including Final Cut Pro. The company is also taking measures to ensure it doesn’t leave developers and longtime users out in the cold. Apple will offer fast, transparent emulation of legacy apps through its Rosetta 2 emulation platform, through which Apple says “most apps will just work.” To show off Rosetta 2, Apple gave us a look at Shadow of the Tomb Raider running on its ARM chip at 1080p in real time.

Apple is also creating an Apple Developers’ Transition Kit, for which registered developers can apply through its developer website. The system is a Mac Mini enclosure that includes Apple’s A12Z SoC, 16 GB of RAM, a 512 GB SSD, and a pre-release version of macOS Big Sur. This could be the first ARM Mac Apple plans to release, too, though Apple was quiet about the specifics of its 2020 ARM system aside from this potential candidate.

https://www.extremetech.com/mobile/31207...based-macs
Quote:The big question on everyone’s mind since Apple’s unveiling of its upcoming ARM shift is what kind of performance we can expect the new chips to offer. It’s not an easy question to answer right now, and there’s some misinformation about what the differences are between modern x86 versus ARM CPUs in the first place.
...
What people are actually arguing, when they argue about CISC versus RISC, is whether the decoder block x86 CPUs use to convert CISC instructions into RISC-like micro-ops burns enough power to put x86 chips at a categorical disadvantage against ARM.

When I’ve raised this point with AMD and Intel in the past, they’ve always said it isn’t true. Decoder power consumption, I’ve been told, is in the 3-5 percent range. That’s backed up by independent evaluation. A comparison of decoder power consumption in the Haswell era suggested an impact of 3 percent when L2 / L3 cache are stressed and no more than 10 percent if the decoder is, itself, the primary bottleneck. The CPU cores’ static power consumption was nearly half the total. The authors of the comparison note that 10 percent represents an artificially inflated figure based on their test characteristics.
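
As a rough back-of-envelope check of what those percentages imply: even if the decoder could be removed entirely, with everything else unchanged, the perf-per-watt upside is capped at a few percent. (The shares below are just the decoder figures quoted above; nothing else is assumed.)
Code:
# Upper bound on the perf-per-watt gain from eliminating the x86 decode stage,
# using the decoder power shares quoted above. Assumes performance stays the
# same and only the decoder's share of power disappears (a best case).
for decoder_share in (0.03, 0.05, 0.10):
    gain = 1 / (1 - decoder_share) - 1
    print(f"decoder at {decoder_share:.0%} of power -> perf/W gain capped at {gain:.1%}")
# decoder at 3% of power -> perf/W gain capped at 3.1%
# decoder at 5% of power -> perf/W gain capped at 5.3%
# decoder at 10% of power -> perf/W gain capped at 11.1%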

A 2014 paper on ISA efficiency also backs up the argument that ISA efficiency is essentially equal above the microcontroller level. In short, whether ARM is faster than x86 has been consistently argued to be based on fundamentals of CPU design, not ISA. No major work on the topic appears to have been conducted since these comparisons were written. One thesis defense I found claimed somewhat different results, but it was based entirely on theoretical modeling rather than real-world hardware evaluation.

CPU power consumption is governed by factors like the efficiency of your execution units, the power consumption of your caches, your interconnect subsystem, your fetch and decode units (when present), and so on. ISA may impact the design parameters of some of those functional blocks, but ISA itself has not been found to play a major role in modern microprocessor performance.

PC Mag’s benchmarks paint a mixed picture. In tests like Geekbench 5 and GFXBench 5 Metal, Apple’s Intel-based laptops are outpaced by Apple’s iPad Pro (and sometimes by the iPhone 11).
...
This implies a few different things are true. First, we need better benchmarks performed under something more like equal conditions, which obviously won’t happen until macOS devices with Apple ARM chips are available to be compared against macOS on Intel. GeekBench is not the final word in CPU performance — there’ve been questions before about how effective it is as a cross-platform CPU test — and we need to see some real-world application comparisons.

Factors working in Apple’s favor include the company’s excellent year-on-year improvements to its CPU architecture and the fact that it’s willing to take this leap in the first place. If Apple didn’t believe it could deliver at least competitive performance, there’d be no reason to change. The fact that it believes it can create a permanent advantage for itself in doing so says something about how confident Apple is about its own products.

At the same time, however, Apple isn’t shifting to ARM in a year, the way it did with x86 chips. Instead, Apple hopes to be done within two years. One way to read this decision is to see it as a reflection of Apple’s long-term focus on mobile. Scaling a 3.9W iPhone chip into a 15-25W laptop form factor is much easier than scaling it into a 250W TDP desktop CPU socket with all the attendant chipset development required to support things like PCIe 4.0 and standard DDR4 / DDR5 (depending on launch window).

It’s possible that Apple may be able to launch a superior laptop chip compared with Intel’s x86 products, but that larger desktop CPUs, with their higher core counts and TDPs, will remain an x86 strength for several years yet. I don’t think it’s an exaggeration to say this will be the most closely watched CPU launch since AMD’s Ryzen back in 2017.

Apple’s historic price and market strategy make it unlikely that the company would attack the mass market. But mainstream PC OEMs aren’t going to want to see a rival switch architectures and be decisively rewarded for it while they’re stuck with suddenly second-rate AMD and Intel CPUs. Alternately, of course, it’s possible that Apple will demonstrate weaker-than-expected gains, or only be able to show decisive impacts in contrived scenarios. I’m genuinely curious to see how this shapes up.

https://www.techpowerup.com/269024/bad-i...ntel-split
Quote:According to a sensational PC Gamer report citing former Intel principal engineer François Piednoël, Apple's dissatisfaction with Intel dates back to some of its first 14 nm chips, based on the "Skylake" microarchitecture. "The quality assurance of Skylake was more than a problem," says Piednoël. "It was abnormally bad. We were getting way too much citing for little things inside Skylake. Basically our buddies at Apple became the number one filer of problems in the architecture. And that went really, really bad. When your customer starts finding almost as much bugs as you found yourself, you're not leading into the right place," he adds.

It was around that time that decisions were taken at the highest levels in Apple to execute a machine architecture switch away from Intel and x86, the second of its kind following Apple's mid-2000s switch from PowerPC to Intel x86. "For me this is the inflection point," says Piednoël. "This is where the Apple guys who were always contemplating to switch, they went and looked at it and said: 'Well, we've probably got to do it.' Basically the bad quality assurance of Skylake is responsible for them to actually go away from the platform." Apple's decision to dump Intel may only have been hastened by the string of cybersecurity flaws affecting Intel microarchitectures throughout 2019. The PC Gamer report cautions that Piednoël's comments should be taken with a pinch of salt, as he has been among the more outspoken engineers at Intel.
Reply
#2
https://www.extremetech.com/computing/31...nce-vs-x86
Quote:Ever since Apple announced the A12Z and its shift away from x86, there’ve been questions about exactly how these ARM chips will perform and what we can expect from them. The first benchmark results are starting to appear from Apple dev kits, and as long as you take them with a mountain of salt, they’re pretty interesting.
...
One thing to keep in mind is that emulation performance can vary drastically depending on the application. Some programs might run with relatively small penalties, while others crater and die. Rosetta 2 is specifically designed to avoid those outcomes, but historically, there’s a nasty corner case or two lurking somewhere in any emulator. Some applications are harder to emulate than others. But the upshot of this effect is that we don’t really know if that 1.44x lead the 13-inch MacBook has is the product of emulator handicapping or if it’s a pretty good look at the CPU’s performance. Data from the iPad Pro suggests it might be the former.

If we assume that the A12X in the iPad Pro is a pretty good stand-in for the A12Z, we can check ARM-native Geekbench performance, albeit in iOS, not macOS. Here, we’re looking at 1120 single-core, 4650 multi-core, with a scaling factor of 4.16x. The MacBook Pro 13-inch is only about 8 percent faster than the iPad Pro in single-thread, and 10 percent slower in multi-thread.
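
Backing the implied MacBook Pro numbers out of those percentages (the MacBook scores below are derived from the quoted 8 percent and 10 percent gaps, not independently measured):
Code:
# Implied 13-inch MacBook Pro Geekbench 5 scores, derived from the iPad Pro
# (A12X) figures and the percentage gaps quoted above.
ipad_single, ipad_multi = 1120, 4650
macbook_single = ipad_single * 1.08   # "about 8 percent faster" single-thread
macbook_multi = ipad_multi * 0.90     # "10 percent slower" multi-thread
print(f"implied MacBook Pro 13-inch: ~{macbook_single:.0f} ST / ~{macbook_multi:.0f} MT")
# implied MacBook Pro 13-inch: ~1210 ST / ~4185 MT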

Frankly, that should send a frisson of fear through Intel and AMD. The implication of these results is that the gap between the 13-inch Mac and the A12Z is largely the result of emulation. That’s not a guarantee, because OS differences matter in situations like this, but it certainly looks as though most of the penalty the A12Z is carrying is related to emulating x86 code.

Apple’s year-on-year record of delivering new performance improvements is considerably better than Intel’s right now. AMD can make a much stronger argument for its own recent improvement, thanks to Ryzen, but the enormous 1.52x IPC improvement from Excavator to Ryzen tilts the comparison a bit. To put it bluntly, AMD’s improvements the last three years would be a little less impressive if Bulldozer hadn’t been such an awful chip to start with.

We’re in a weird situation at the moment. Intel has always been Apple’s chief supplier, but AMD is selling more performant mobile CPUs today, making them the more obvious point of comparison. The 4900HS appears to score 1116 single-core and 7013 multi-core. x86 MT performance is, at least, in no immediate danger in absolute terms. Keep in mind that the 4900HS also draws far more power than either the Intel or Apple chips.

What we see here isn’t proof that Apple will launch a MacBook ARM chip that rivals the best Intel and AMD can offer. But it certainly puts a floor under expected performance, barring unusual emulator quirks that Apple will spend the next few months quashing. The x86 companies may want to ask their mobile CPU designers to put an extra pot of coffee on.
Reply
#3
Apple will still have Thunderbolt 4 on its ARM CPUs: https://www.techpowerup.com/269664/apple...nderbolt-4
Reply
#4
https://www.tomshardware.com/news/report...n-for-macs
Quote:Just last month Apple finally announced that it would be transitioning away from Intel to its own custom ARM silicon for Macs. To accomplish this, it needs a production partner, and who better than TSMC? A new report from Digitimes states that this will be the case, and that by the first half of 2021 Apple will be contracting a small portion of TSMC's capacity.

Of course, this comes as a surprise to exactly no one. Apple was already one of TSMC's big customers for mobile chips, and with TSMC's progress on advanced node sizes, it is only logical for Apple to turn to its trusted partner. Earlier rumors also pointed to Apple partnering up with TSMC for ARM-based Macs.

But although Apple might be starting small with TSMC, the chipmaker has a potentially big client on its hands. Analysts expect the first Macs with Apple Silicon to be the MacBook Air and 13-inch MacBook Pro, with the remainder of the lineup following in due time. Apple has also explained that it will transition to ARM in phases spanning a period of two years.
Reply
#5
https://www.techpowerup.com/271098/apple...e-i9-9880h
Quote:The Apple A14X Bionic is an upcoming processor from Apple expected to feature in the next iPad Pro models and should be manufactured on TSMC's 5 nm node. Tech YouTuber Luke Miani has recently provided a performance graph for the A14X chip based on "leaked/suspected A14 info + average performance gains from previous X chips". In these graphs, the Apple A14X can be seen matching the Intel Core i9-9880H in Geekbench 5 with a score of 7480. The Intel Core i9-9880H is a 45 W eight-core mobile CPU found in high-end notebooks such as the 2019 16-inch MacBook Pro, and it requires significant cooling to keep thermals under control.

If these performance estimates are correct, or even close, then Apple will have a serious productivity device on its hands, and the A14X will serve as a strong basis for Apple's transition to custom CPUs for its MacBooks in 2021. According to Luke Miani, Apple may use a custom version of the A14X with slightly higher clocks in its upcoming ARM MacBooks. These results are estimations at best, so take them with a pinch of salt until Apple officially unveils the chip.
Reply
#6
https://www.extremetech.com/computing/31...or-2h-2021
Quote:According to the new report, Apple’s new GPU — codenamed Lifuka — is progressing smoothly and intended for 5nm deployment in next year’s iMac. The one question nobody seems to be able to answer (that we’d expect an answer to at this point, anyway), is whether this is a discrete GPU, integrated silicon, or if Apple will launch both, similar to Intel.

It’s hard to imagine Apple seriously attempting to replace AMD without a discrete GPU to offer. It’s possible that Apple still intends to partner with AMD to offer creatives an accelerated GPU at the high-end of the market, but will ship its own integrated silicon first before tackling discrete. The company could also be planning to launch its own discrete and integrated solutions simultaneously, though whether it would actually ship silicon independently of system sales is its own question. Previous rumors have indicated Apple won’t ship its upcoming Macs with AMD GPU support, implying the company plans to have a top-to-bottom solution in place when it finally ships hardware.
...
One way Apple could solve this problem — though let me be clear, there’s no reason to think the company has gone this route — would be to build an ARM core, GPU core, and integrated HBM memory on a single slice of silicon, like Kaby Lake-G. The biggest reason to think Apple wouldn’t go this route are issues of cost and difficulty. HBM has yet to make a home for itself anywhere but high-end silicon, and it seems unlikely that Apple would field such an esoteric part as its first move out of the gate. There’s no sign of HBM in the dev kits Apple has shipped, and no one has mentioned it as a solution. I realize I’m the one writing the article here, but I think my own idea is unlikely.

It would, however, be cool as hell to see Apple go this route. The argument goes like this: Apple has always wanted to distinguish itself from every other PC/CPU manufacturer. Apple’s marketing team may not have had much luck selling the G4 and G5 systems of the late ’90s and early 2000s as materially faster than their x86 counterparts, but “Think Different” was a slogan Apple literally expressed in its hardware choices. An HBM-based solution would give Apple an on-silicon GPU with enough memory bandwidth to match at least a midrange GPU, and up to 8GB of RAM in a single stack. The company claims its ARM cores are more efficient than the x86 chips Intel or AMD build, and that additional TDP budget could be spent on other components integrated into the same “socket” (consider the term loosely applied, since Apple might not use socketed parts).
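
For a sense of scale on the bandwidth claim: a single HBM2 stack uses a 1,024-bit interface, so at a typical 2 Gb/s per pin one stack delivers roughly 256 GB/s, in the neighborhood of a midrange discrete GPU's memory subsystem. (The per-pin rate below is a typical HBM2 figure, not anything Apple has announced.)
Code:
# Rough bandwidth of a single HBM2 stack at a typical per-pin data rate.
bus_width_bits = 1024      # one HBM2 stack
pin_rate_gbps = 2.0        # typical HBM2 per-pin rate (assumed, not Apple-specific)
bandwidth_gbs = bus_width_bits * pin_rate_gbps / 8
print(f"one HBM2 stack: ~{bandwidth_gbs:.0f} GB/s")
# one HBM2 stack: ~256 GB/s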

I don’t think it’s likely, for multiple reasons. But it’d be a heck of a way for the company to try and put its own stamp on the hardware market.
Reply
#7
(09-02-2020, 07:33 AM)SteelCrysis Wrote: https://www.extremetech.com/computing/31...or-2h-2021


This will fail. ARM will not replace x86 architecture.
Reply
#8
https://www.extremetech.com/computing/31...res-report
Quote:Once the M1 hit a few weeks back, it was clear that the diminutive processor was but a sign of things to come. Reports suggest that Apple will be upping the competitive ante in short order. The company plans to launch M1 follow-ups with up to 16 high-performance cores in 2021, targeting the MacBook Pro and iMac market. In 2022, it’ll launch machines with 32 high-performance cores in systems like the Mac Pro. While the 2021 CPU might still tap the FireStorm CPU core, it’s a good bet that the 2022 CPU will be at least one generation more advanced.

All of these rumors come from Bloomberg, which has a good record when it comes to Apple CPU coverage. The same report notes that Apple wants to bring laptop chips to market with 16-core and 32-core GPUs, and that the company is eyeing chips with 64 or 128 dedicated GPU cores. Each GPU manufacturer defines a “core” somewhat differently, so the fact that Apple is talking about a “128 core GPU” compared with, say, a 4096-core GPU from AMD or Nvidia, isn’t meaningful, but the rapid plan to scale up to higher levels of GPU performance is an effort to replace AMD GPU hardware the same way Intel will be pushed out of the CPU stack.
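
A quick illustration of why the raw counts don't line up: Apple counts multi-ALU cores, while AMD and Nvidia count individual shader ALUs as "cores." The ALUs-per-core figure below is an assumption made for the sake of the arithmetic, not a confirmed spec for any rumored part.
Code:
# Why "128 GPU cores" vs. "4096 GPU cores" is not a like-for-like comparison.
# ALU counts per "core" below are illustrative assumptions.
apple_cores, alus_per_apple_core = 128, 128   # Apple-style count: multi-ALU cores
amd_nv_shader_cores = 4096                    # AMD/Nvidia-style count: one ALU per "core"
print(f"Apple-style count: {apple_cores} cores -> {apple_cores * alus_per_apple_core} ALUs")
print(f"AMD/Nvidia-style count: {amd_nv_shader_cores} cores -> {amd_nv_shader_cores} ALUs")
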
...
I expect that by 2022, both AMD and Intel will have their own new technology deployments explicitly intended to draw down x86 power consumption, improve efficiency, and boost performance per watt. The M1’s appearance will have lit a fire under such efforts at both companies. Timelines at the top of the market are also longer — if Apple launches a highly-competitive-to-superior 32-core part in 2022, it’d probably be 2023 or 2024 before said chip began bleeding off market share — but the stakes are also higher.

Intel’s entire justification for foundry self-ownership rests on the sales of high-end Xeon and Core i7 / i9 CPUs. The loss of these markets would be disastrous for the firm’s financials. AMD has much more experience operating on low margins and absolutely no interest in returning to the days where the question wasn’t “I wonder how much profit AMD made this quarter,” but “I wonder if AMD managed to lose less than $500M?”

Both companies will fight tooth and nail for their own market share. Both enjoy benefits like long-term guaranteed back-compatibility, familiarity, and customer loyalty. The sheer size of the x86 ecosystem is its own bulwark, and the final outcome of the x86 versus ARM fight, long-term, is uncertain.

But here’s one thing I am certain of: The M1 will be considered an inflection point in the history of CPUs, if only because it’s the first real challenge to x86 hegemony in decades. AMD and Intel will have to improve their own designs to meet that challenge, and even if they do so successfully, the products they build afterward will continue on a different evolutionary path than they might have taken otherwise.

https://www.extremetech.com/computing/31...erformance
Quote:The problem here is that x86 CPUs are designed to run optimally in 2T1C (two threads per core) configurations, as a recent AnandTech deep dive into the performance advantages and disadvantages of enabling SMT indicates, while the M1 is designed to run optimally in a 1T1C (one thread per core) configuration.

This may well be an ongoing problem for x86. Remember that per-thread scaling is far from perfect and gets worse with every thread you add. Historically, the CPU that delivers the best per-core performance in the smallest die area and with the highest performance per watt is the CPU that wins whatever “round” of the CPU wars one cares to consider. The fact that x86 requires two threads to do what Apple can do with one is not a strength. Whether loading an x86 CPU with only one thread constitutes a penalty will depend on what kind of comparison you want to make, but the difference in optimal thread counts and distribution needs to be acknowledged.
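
A toy model of the 2T1C-versus-1T1C point (the SMT yield used here is a hypothetical ~25 percent, roughly the range usually cited for x86 designs, not a measured figure for any specific CPU):
Code:
# Toy model: an x86 core needs its second SMT thread to reach full throughput,
# while a 1T1C design delivers full throughput on a single thread.
SMT_YIELD = 0.25  # hypothetical extra throughput from a second thread on one core

def core_throughput(threads_per_core):
    return 1.0 if threads_per_core == 1 else 1.0 * (1 + SMT_YIELD)

print(f"x86-style core, 1 thread: {core_throughput(1):.2f}")
print(f"x86-style core, 2 threads: {core_throughput(2):.2f}")
# 1 thread: 1.00 | 2 threads: 1.25 -> single-thread comparisons leave ~20% of
# the x86 core's peak throughput on the table.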

The big takeaways of the M1 remain unchanged. In many tests, the CPU shows consistently higher results than x86 CPUs when measured in terms of performance per watt. When it is outperformed by x86 CPUs, it is typically by chips that consume far more power than it does. The M1 appears to take a 20-30 percent performance hit when running applications built for Intel Macs, and it may consume more power in this mode. Apple’s emulation ecosystem and third-party support are still in early days and may not meet the needs of every user, depending on the degree to which you are plugged into the overall Apple ecosystem. None of these is a direct reflection on the M1’s silicon, however, which still looks like one of the most interesting advances in silicon in the past few decades — and a harbinger of problems to come for Intel and AMD.
Reply
#9
https://www.extremetech.com/computing/31...benchmarks
Quote:Now that Microsoft’s Windows on ARM emulation layer can also run 64-bit x86 code, inevitably there will be questions as to how it compares with the Apple M1. Qualcomm’s 8cx platform and its minimal refresh earlier this fall have never been known for speed — these chips have been sold on the basis of excellent battery life, not pure performance.

Even with that said, the M1 eats Windows on ARM-powered laptops for breakfast. That’s the conclusion PCWorld reached after comparing the Snapdragon 8cx against the M1, with a Core i5 (4C/8T, 1GHz base, 3.6GHz boost) machine tossed in for good measure. Moving to 64-bit apps on Windows doesn’t seem to do anything for the 8cx’s overall performance.

It must be said that this isn’t particularly surprising, but it highlights the critical weakness in the nascent Windows on ARM ecosystem. Qualcomm essentially punted on updating its SoC this year, and while there technically was a refresh for devices like the Surface Pro X, the performance is practically identical. There’s been no architectural update.
...
In short, Microsoft needs an ARM hardware developer that’s willing to invest in improving its platform on a regular cadence with real performance advances. Besides Qualcomm, the top three potential players would be Nvidia, AMD, and Samsung. Samsung shut down its homegrown CPU effort, so it seems out of the running. AMD has built ARM chips relatively recently, but it’s shown no interest in pulling focus away from Ryzen and competing against its own x86 CPUs. That leaves Qualcomm and Nvidia. Qualcomm could launch a true successor to the 8cx this coming year, and probably boost single-thread performance by 1.4x or more, since the CPU would be at least two generations more advanced.
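
The "1.4x or more" estimate is roughly what compounding typical generational gains gets you; a sketch assuming a hypothetical ~20 percent single-thread uplift per generation (not a published Qualcomm figure):
Code:
# Compound an assumed per-generation single-thread gain over two generations.
per_gen_gain = 0.20   # hypothetical, for illustration only
generations = 2
print(f"{generations} generations at +{per_gen_gain:.0%} each -> {(1 + per_gen_gain) ** generations:.2f}x")
# 2 generations at +20% each -> 1.44x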

Right now, Qualcomm seems like the best option for a competitive product, but I’m not convinced we should count out the idea of a future Windows on ARM system powered by Nvidia hardware. I’ve got absolutely zero inside information backing that up, but Nvidia is a competitive company that had some genuine plans for itself on the desktop, once upon a time. The company’s name literally derives from the Roman goddess of envy, Invidia. If ARM CPUs take the lead in a way Intel and AMD are unable to answer — and yes, that’s a very big “if” — I can see Nvidia throwing its hat into the metaphorical ring.

Either way, PCWorld’s testing makes clear that we’re a long way from any Windows on ARM laptop that can compete with the Apple M1.
Reply
#10
https://www.techpowerup.com/277760/apple...-subsystem
Quote:Apple has today patented a new approach to how it uses memory in the System-on-Chip (SoC) subsystem. With the announcement of the M1 processor, Apple has switched away from the traditional Intel-supplied chips and transitioned to a fully custom SoC design called Apple Silicon. The new designs have to integrate every component, such as the Arm CPU and a custom GPU. Both of these processors need good memory access, and Apple has figured out a solution to the problem of having both the CPU and the GPU access the same pool of memory. The so-called UMA (unified memory access) represents a bottleneck because both processors share the bandwidth and the total memory capacity, which would leave one processor starving in some scenarios.

Apple has patented a design that aims to solve this problem by combining high-bandwidth cache DRAM with high-capacity main DRAM. "With two types of DRAM forming the memory system, one of which may be optimized for bandwidth and the other of which may be optimized for capacity, the goals of bandwidth increase and capacity increase may both be realized, in some embodiments," says the patent, "to implement energy efficiency improvements, which may provide a highly energy-efficient memory solution that is also high performance and high bandwidth." The patent was filed back in 2016, which means we could start seeing this technology in future Apple Silicon designs following the M1 chip.
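
One way to think about the two-tier arrangement the patent describes: effective bandwidth is a weighted blend of the two tiers, so the more of the working set that lands in the high-bandwidth cache DRAM, the closer the system gets to the fast tier's numbers. The bandwidth figures below are made up for illustration and are not from the patent.
Code:
# Toy model of a two-tier memory system: a high-bandwidth cache DRAM in front
# of a high-capacity main DRAM. All bandwidth numbers are hypothetical.
cache_dram_gbs = 400   # assumed bandwidth of the cache DRAM tier
main_dram_gbs = 100    # assumed bandwidth of the main DRAM tier

def effective_bandwidth(hit_rate):
    # time per byte is a mix of fast-tier and slow-tier accesses
    return 1 / (hit_rate / cache_dram_gbs + (1 - hit_rate) / main_dram_gbs)

for hit in (0.5, 0.8, 0.95):
    print(f"cache-DRAM hit rate {hit:.0%} -> ~{effective_bandwidth(hit):.0f} GB/s effective")
# 50% -> ~160 GB/s, 80% -> ~250 GB/s, 95% -> ~348 GB/s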

Update 21:14 UTC: Mr. Kerry Creeron, an attorney with the firm of Banner & Witcoff, reached out to us with additional insights about the patent. Mr. Creeron provided his personal commentary on it, and you can find his quote below.
Reply
#11
https://www.extremetech.com/computing/31...-intel-cpu
Quote:Ever since Apple launched the M1, it’s been clear that the CPU was going to be trouble for Intel and AMD. Apple has now published its own power consumption figures for the M1-based Mac mini as compared with the 2018 Intel Mac mini refresh, and the Intel systems don’t compare very well.

Apple’s published figures actually look conservative against data published by independent reviewers. The claimed 39W peak power consumption is higher than what reviewers measured, as is the idle figure. Apple, in other words, claims higher (worse) numbers than were measured independently, which strengthens the likelihood that the evaluation was fairly done.

The 2018 Mac mini refresh draws 19.9W idle, according to Apple, and 122W at maximum. The Apple M1-powered system is drawing less than a third of the power of the equivalent Intel rig. That’s not a great look for Intel, and it illustrates the problem M1 poses for both x86 manufacturers. Keep in mind that this is a comparison against a 14nm Intel CPU — Coffee Lake vintage — not Ice Lake or Tiger Lake. We don’t know how a six-core ICL or TGL CPU would compare against the M1, but it would likely be somewhat better on both idle and max power.
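
The "less than a third" figure follows directly from the two maximum-power numbers quoted:
Code:
# Ratio of the quoted maximum power draws for the two Mac mini models.
intel_2018_max_w = 122   # Apple's figure for the 2018 Intel Mac mini
m1_max_w = 39            # Apple's figure for the M1 Mac mini
print(f"M1 max draw: {m1_max_w / intel_2018_max_w:.0%} of the Intel system "
      f"({intel_2018_max_w / m1_max_w:.1f}x lower)")
# M1 max draw: 32% of the Intel system (3.1x lower)
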
...
The M1’s greatest strength is not its performance. While it absolutely can outperform x86 CPUs, the M1’s performance varies depending on whether a workload is emulated or native. Comparisons with Tiger Lake as opposed to Ice Lake slice into the lead it claims in certain tests. The M1 is a threat, not a one-shot knockout.

The problem with the M1, from Intel and AMD’s perspective, is that even when it loses to x86 it draws a fraction of the power doing it. And low-power, highly efficient CPUs are typically the ones that have the most room to grow. Part of the reason for the M1’s lauded efficiency is that the CPU is only running at 3.2GHz. Higher clock speeds are inefficient, and each additional MHz costs more power the higher you clock a chip.
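
The relationship behind that claim is the standard CMOS dynamic-power approximation, P ≈ C·V²·f, with voltage typically rising alongside clock speed. The frequency/voltage pairs below are hypothetical, purely to show how quickly the last GHz gets expensive:
Code:
# Standard dynamic-power approximation: P ~ C * V^2 * f (times switching activity).
# The (GHz, volts) pairs are hypothetical illustrations, not M1 or x86 data.
pairs = [(3.2, 0.90), (4.0, 1.05), (4.8, 1.25)]
base_f, base_v = pairs[0]
for f, v in pairs:
    rel_power = (v / base_v) ** 2 * (f / base_f)
    print(f"{f:.1f} GHz @ {v:.2f} V -> ~{rel_power:.2f}x the {base_f} GHz power")
# 3.2 GHz -> 1.00x, 4.0 GHz -> ~1.70x, 4.8 GHz -> ~2.89x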

We don’t know how well the M series scales above 3.2GHz, but if Apple can scale this design or has further significant IPC improvements ready and waiting, it’s going to get harder for x86 to compete. Data centers are very interested in lowering CPU power consumption, and while Apple probably doesn’t have plans to start selling servers again, Qualcomm just bought Nuvia, a company focused on building ARM server solutions.

For now, software ecosystem issues, user preferences, and Apple’s business decision to only focus on certain parts of the PC market will limit Intel and AMD’s competitive risk. Neither company is saying much about the M1 yet, but both of them are going to have to contend with it in the future. Laptop OEMs such as Dell, HP, and Lenovo operate on the assumption that the chips they buy are the fastest processors in the world. If it turns out that ARM CPUs are faster than x86 CPUs in a way AMD and Intel can’t match over time, somebody is going to fund the development of a competitive ARM chip to sell to companies that aren’t Apple.

Intel and AMD aren’t talking about the M1 much right now. Benchmark performance between the two ISAs, while interesting and indicative of the overall comparison, isn’t the real threat. The real threat is that Apple has more than enough room in its power consumption budget to either add CPU cores, boost clock, or both. The x86 manufacturers are in no near-term danger, but they have no time to waste, either. Both companies have affirmed that they are taking the M1 seriously. We’ll have to wait and see what that means a year or two from now.
Reply
#12
https://www.tomshardware.com/news/apple-...chitecture
Quote:A couple of weeks ago, we reported that a startup called Corellium had managed to run Linux on an Apple M1-based computer. Back then, the operating system ran, but it did not support many things, essentially making the PC unusable to a large degree. Recently, the company finally managed to make most things (including Wi-Fi) work, which means that Linux can now be used on the latest Macs. But the whole project of running a non-Apple OS on these computers has an interesting side effect, as it reveals how different Apple’s SoCs are compared to other Arm-based architectures.

It's no secret that Apple has focused on building its own Arm-based microarchitectures to offer unbeatable performance with its iPhones and iPads for quite a while now. Unlike its rivals, the company did not throw in more cores, instead improving its cores' single-core/single-thread performance. In addition to custom cores, Apple apparently uses a highly custom system architecture too, according to Corellium.

When virtually all 64-bit Arm-based systems boot up, they call into firmware through an interface called PSCI, but in the case of the M1, the CPU cores start at an address specified by an MMIO register and then start running the kernel. Furthermore, Apple systems also use a proprietary Apple Interrupt Controller (AIC) that is not compatible with Arm’s standards. Meanwhile, the timer interrupts are connected to the FIQ, an obscure architectural feature primarily used on 32-bit Arm systems and not supported by Linux.
...
Using a proprietary system architecture is not something new for Apple, but it will make it much harder to port other operating systems to its platforms as well as running those OSes in virtualization mode. Recently a developer managed to make Microsoft’s upcoming Windows 10X run on an Apple M1-based system using QEMU virtualization, but this OS is not yet final, and it is unclear how stable it is. Furthermore, Windows 10X does not run Win32 apps, making it less valuable for some users.

Running Windows 10 or Linux on an Apple Mac may not be crucially important for most Mac owners. But a complicated system architecture featuring multiple proprietary technologies will likely make it harder to develop certain kinds of software and hardware for Arm-based Macs.
Reply
#13
https://www.extremetech.com/computing/32...hey-should
Quote:Concerning reports from Apple M1 Mac users have surfaced in the past few days, as different folks compare notes on how often their systems are writing to disk. Some comparisons display eye-popping levels of drive writes, especially given how long some of these systems have been in use. Evidence of a truly systemic problem, however, is limited — and I’m not sure how much we can trust some of the counters people are using to report data.
...
A system losing 3 percent of its functional NAND every two months would see 90 percent of its NAND exhausted within 5 years, assuming a linear rate of progression. Many SSDs can run well past their rated lifetimes, but the risk of hitting the manufacturer-specified limit is going to make a lot of people antsy, no matter what. The fact that these SSDs are soldered down and effectively impossible to replace also concerns some folks. Others have chimed in, claiming this issue affects both x86 and ARM, that it began after Catalina (as opposed to Big Sur), or that it affects x86, but to a lesser degree. At least one user has claimed his power-on hours are incorrect. If that value isn’t accurate, it would wreck any basis for comparison on these systems.
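
The five-year figure is a straight linear extrapolation of the reported wear rate:
Code:
# Linear extrapolation of the reported NAND wear rate described above.
wear_per_period = 0.03    # 3 percent of rated endurance consumed...
period_months = 2         # ...every two months
months = 5 * 12
total_wear = (months / period_months) * wear_per_period
print(f"after {months} months: {total_wear:.0%} of rated endurance consumed")
# after 60 months: 90% of rated endurance consumed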

Right now, there doesn’t seem to be a consistent explanation for what’s going on here, and some changes Apple made to the M1 storage system may also be concerning. An M1 Mac, unlike an x86 Mac, cannot be booted from external storage if the internal storage completely fails. This, to our eye, is a problem that needs more attention than it’s gotten. SSDs fail for reasons other than hitting their write limits.

It is possible that some tools are reporting incorrect values for certain fields. It’s also possible that Apple has a low-level storage bug that’s drastically inflating drive writes. There’s no sign of this being a near-term problem, since even the most aggressive usage we’ve charted would support more than five years of use. But people keep laptops for longer than they used to, and Apple has aggressively traded on the idea of the M1 as a step up from Intel in every respect. For now, we’re still hunting for a cause, and trying to understand whether Apple considers this standard operating procedure and, if so, what the implications for long-term drive longevity are.
Reply

