A brief overview of IBM’s new 7 nm Telum mainframe CPU


Each Telum package consists of two 7nm, eight-core / sixteen-thread processors running at a base clock speed above 5GHz. A typical system will have sixteen of these chips in total, arranged in four-socket "drawers."

From the perspective of a traditional x86 computing enthusiast (or professional), mainframes are strange, archaic beasts. They're physically enormous, power-hungry, and expensive compared to more traditional data-center gear, generally offering less compute per rack at a higher cost.

This raises the question, "Why keep using mainframes, then?" If you hand-wave away the cynical answers that boil down to "because that's how we've always done it," the practical answers largely come down to reliability and consistency. As AnandTech's Ian Cutress points out in a speculative piece focused on the Telum's redesigned cache, "downtime of these [IBM Z] systems is measured in milliseconds per year." (If true, that's at least seven nines.)
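For a sense of scale, here is a quick back-of-the-envelope calculation (my arithmetic, not a figure from IBM or AnandTech) showing how much annual downtime each count of "nines" of availability allows; downtime measured in milliseconds per year lands somewhere past eight nines.

```python
# Back-of-the-envelope arithmetic: annual downtime allowed by N "nines" of availability.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

for nines in range(5, 10):
    downtime_fraction = 10 ** -nines       # e.g. seven nines -> 1e-7 of the year spent down
    downtime_ms = SECONDS_PER_YEAR * downtime_fraction * 1000
    print(f"{nines} nines: ~{downtime_ms:,.0f} ms of downtime per year")
```

Seven nines works out to roughly three seconds of downtime per year; getting down to tens of milliseconds requires around nine nines.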

IBM's own announcement of the Telum hints at just how different mainframe and commodity computing's priorities are. It casually describes Telum's memory interface as "capable of tolerating complete channel or DIMM failures, and designed to transparently recover data without impact to response time."

When you pull a DIMM from a live, running x86 server, that server doesn't "transparently recover data"; it simply crashes.

IBM Z-series architecture

Telum is designed to be something of a one-chip-to-rule-them-all for mainframes, replacing a much more heterogeneous setup in earlier IBM mainframes.

The 14nm IBM z15 CPU that Telum is replacing features five total processors: two pairs of 12-core Compute Processors and one System Controller. Each Compute Processor hosts 256MiB of L3 cache shared between its 12 cores, while the System Controller hosts a whopping 960MiB of L4 cache shared between the four Compute Processors.

Five of these z15 processors (four Compute Processors and one System Controller) constitute a "drawer." Four drawers come together in a single z15-powered mainframe.
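For comparison with Telum's numbers below, here is a quick tally of those z15 figures per cluster of four Compute Processors and one System Controller. It is simple multiplication of the quoted specs, not additional IBM data:

```python
# Quick tally of the quoted z15 figures, per cluster of four
# Compute Processors (CPs) plus one System Controller (SC).
compute_processors = 4
cores_per_cp = 12
l3_per_cp_mib = 256       # L3 shared among each CP's 12 cores
l4_on_sc_mib = 960        # L4 on the SC, shared among the four CPs

print(f"cores per cluster:     {compute_processors * cores_per_cp}")        # 48
print(f"total L3 per cluster:  {compute_processors * l3_per_cp_mib} MiB")   # 1024 MiB
print(f"shared L4 per cluster: {l4_on_sc_mib} MiB")                         # 960 MiB
```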

Although the concept of multiple processors to a drawer and multiple drawers to a system remains, the architecture inside Telum itself is radically different and considerably simplified.

Telum architecture

Telum is considerably simpler at first glance than z15 was: it's an eight-core processor built on Samsung's 7nm process, with two processors combined on each package (similar to AMD's chiplet approach for Ryzen). There is no separate System Controller processor; all of Telum's processors are identical.

From here, four Telum CPU packages combine to make one four-socket "drawer," and four of those drawers go into a single mainframe system. This provides 256 total cores on 32 CPUs. Each core runs at a base clock rate over 5GHz, providing more predictable and consistent latency for real-time transactions than a lower base with a higher turbo rate would.
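Those totals fall straight out of the topology just described; here is the multiplication, with each per-level count taken from the paragraph above:

```python
# Telum system topology as described: two 8-core chips per package,
# four packages (sockets) per drawer, four drawers per system.
cores_per_chip = 8
chips_per_package = 2
packages_per_drawer = 4
drawers_per_system = 4

total_chips = chips_per_package * packages_per_drawer * drawers_per_system
total_cores = total_chips * cores_per_chip
print(f"{total_chips} chips, {total_cores} cores per fully populated system")  # 32 chips, 256 cores
```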

Pockets full of cache

Eliminating the central System Controller on each package meant redesigning Telum's cache as well. The massive 960MiB L4 cache is gone, as is the per-die shared L3 cache. In Telum, each individual core has a private 32MiB L2 cache, and that's it. There is no hardware L3 or L4 cache at all.

This is where things get deeply weird: while each Telum core's 32MiB L2 cache is technically private, it's really only virtually private. When a line from one core's L2 cache is evicted, the processor looks for empty space in the other cores' L2 caches. If it finds some, the evicted L2 cache line from core x is tagged as an L3 cache line and saved in core y's L2.

OK, so we have a virtual, shared, up-to-256MiB L3 cache on each Telum processor, composed of the 32MiB "private" L2 caches on each of its eight cores. From here, things go one step further: that 256MiB of shared "virtual L3" on each processor can, in turn, be used as shared "virtual L4" among all processors in a system.

Telum's "virtual L4" works largely the same way its "virtual L3" did in the first place: evicted L3 cache lines from one processor look for a home on a different processor. If another processor in the same Telum system has spare room, the evicted L3 cache line gets retagged as L4 and lives in the virtual L3 on the other processor (which is made up of the "private" L2s of its eight cores) instead.
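To make the retagging idea concrete, here is a deliberately toy Python sketch of the decision flow described above. Everything in it (the class names, the tiny capacities, the placement function) is invented for illustration; the real hardware works with tags, directories, and coherence state, not Python lists.

```python
from dataclasses import dataclass, field

L2_CAPACITY = 4  # toy capacity in "lines" (the real private L2 is 32MiB)

@dataclass
class Core:
    name: str
    lines: list = field(default_factory=list)  # lines this core's L2 currently holds

    def has_room(self) -> bool:
        return len(self.lines) < L2_CAPACITY

@dataclass
class Chip:
    cores: list

def place_evicted_l2_line(system, chip, owner, line):
    """Decide where an evicted L2 line lands: another core's L2 on the same
    chip (virtual L3), a core on another chip (virtual L4), or main memory."""
    # 1. Look for spare room in the other cores' L2s on the same chip -> virtual L3.
    for core in chip.cores:
        if core is not owner and core.has_room():
            core.lines.append(line)
            return f"{line} retagged as virtual L3 in {core.name}"
    # 2. Look for spare room on any other chip in the system -> virtual L4.
    for other in system:
        if other is chip:
            continue
        for core in other.cores:
            if core.has_room():
                core.lines.append(line)
                return f"{line} retagged as virtual L4 in {core.name}"
    # 3. No spare capacity anywhere: the line goes back to main memory.
    return f"{line} written back to main memory"

chip0 = Chip([Core("chip0/core0"), Core("chip0/core1")])
chip1 = Chip([Core("chip1/core0"), Core("chip1/core1")])
system = [chip0, chip1]

# Fill chip0's other core so the next eviction has to travel off-chip.
chip0.cores[1].lines = ["w", "x", "y", "z"]
print(place_evicted_l2_line(system, chip0, chip0.cores[0], "line A"))  # -> virtual L4 on chip1
```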

AnandTech's Ian Cutress goes into more detail on Telum's cache mechanisms. He eventually sums them up by answering "How is this possible?" with a simple "magic."

AI inference acceleration

IBM's Christian Jacobi briefly outlines Telum's AI acceleration in this two-minute clip.

Telum also introduces a 6TFLOPS on-die inference accelerator. It's meant to be used for, among other things, real-time fraud detection during financial transactions (as opposed to shortly after the transaction).

In the quest for maximum performance and minimal latency, IBM threads several needles. The new inference accelerator is located on-die, which allows for lower-latency interconnects between the accelerator and CPU cores, but it's not built into the cores themselves, a la Intel's AVX-512 instruction set.

The problem with in-core inference acceleration like Intel's is that it typically limits the AI processing power available to any single core. A Xeon core running an AVX-512 instruction only has the hardware inside its own core available to it, meaning larger inference jobs must be split among multiple Xeon cores to extract the full performance available.

Telum's accelerator is on-die but off-core. This allows a single core to run inference workloads with the might of the entire on-die accelerator, not just the portion built into itself.

Listing image by IBM


