For practical purposes, the M1 Ultra acts like a single, impossibly large slice of silicon that does it all. Apple’s most powerful chip to date packs 114 billion transistors into more than a hundred processing cores dedicated to logic, graphics, and artificial intelligence, all connected to 128 gigabytes of shared memory. But the M1 Ultra is really a Frankenstein’s monster: two identical M1 Max chips bolted together with a silicon bridge, an interconnect Apple calls UltraFusion. This clever design makes the conjoined chips behave, as far as software is concerned, like one larger whole.
As it becomes more difficult to shrink transistors, and impractical to make individual chips much bigger, chipmakers are beginning to stitch components together to boost processing power. This Lego-like approach, assembling processors from smaller pieces known as chiplets, is a key way the computer industry hopes to keep advancing. And Apple’s M1 Ultra shows that such techniques can produce big leaps in performance.
“This technology showed up at just the right time,” says Tim Millet, vice president of hardware technologies at Apple. “In a sense, it is about Moore’s law,” he adds, referring to the decades-old axiom, named after Intel cofounder Gordon Moore, that the number of transistors that can fit on a chip doubles roughly every two years.
It is no secret that Moore’s law, which has driven progress in the computer industry and the economy for decades, no longer holds true. Extremely complex and costly engineering tricks promise to shrink the components etched into silicon chips further, but those components already have features measured in billionths of a meter, and engineers are reaching the physical limits of how small they can practically be. Even if Moore’s law is outdated, computer chips are more important, and more ubiquitous, than ever. Cutting-edge silicon is crucial to technologies such as AI and 5G, and supply chain disruptions triggered by the pandemic have highlighted how vital semiconductors now are to industries such as automaking.
As each new generation of silicon takes a smaller step forward, a growing number of companies have turned to designing their own chips in search of performance gains. Apple has used custom silicon in its iPhones and iPads since 2010; in 2020, it announced that it would design its own chips for Macs too, moving away from Intel’s processors. Apple leveraged its work on smartphone chips to develop its desktop ones, which use the same architecture, licensed from the British company ARM. By crafting its own silicon, and by integrating functions that would normally be handled by separate chips into a single system-on-a-chip, Apple controls the entirety of a product and can tune software and hardware together. That level of control is key.
“I realized the whole [chipmaking] world was upside down,” says Millet, a chip industry veteran who joined Apple from Brocade, a US networking company, in 2005. Millet explains that, in contrast to, say, Intel, which designs and makes chips that it then sells to computer makers, Apple can develop a chip for a product at the same time as that product’s software, hardware, and industrial design.