This is Intel making a 24-core Neoverse N2 server on TSMC: not their ISA, not their core design, and not their fab.
This isn't really a server. This is a NIC with some small cores to help handle management functions. The server you plug it into will have hundreds of x86 cores.
The Arm cores are absolutely the least interesting part of this thing; does it matter much if they're outsourced?
Barefoot was always on TSMC, so why change now?
Yep, it's only recently they've even properly started cranking out 10nm themselves. Pretty embarrassing. I wonder what future we have if everyone is just sat on top of TSMC. Not great.
You must be using odd definitions for "properly" and "recently". Intel started volume shipments of 10nm-family parts for laptops in 2019, servers in 2021, and desktops in 2022. They've since moved most of their products off of the 10nm family and onto EUV-based processes: two generations of laptop parts, one generation of desktop parts, and the CPU chiplets of last year's server parts (which still use "Intel 7" for the IO chiplets).
Additionally, the second and third round of desktop parts released on 10nm (aka "Intel 7") are now known to have pushed clocks and voltages somewhat beyond the limits of the process, leading to embarrassing reliability problems and microcode updates that hurt performance. Intel has squeezed everything they can out of their 10nm and have mostly put it behind them, so talking about it like they only recently ramped production is totally wrong about where they are in the lifecycle.
What? Intel has been doing large-scale production runs of their 10nm node for years now. If you're talking about the Raptor Lake failures, that was one generation of products on that node. There has also never been any indication, AFAIK, that e.g. Emerald Rapids suffered the same oxidation/voltage failures the consumer line did, despite being on the same process node. They're already moving on from all this, really.
These are some quite outdated/interesting hot takes.
Missteps happen but I have a feeling Intel's fab is going to be forced to be near the leading edge one way or another. The US government has plenty of levers to pull to manipulate the global semiconductor market.
Hah, I was not imagining it: https://www.cpubenchmark.net/cpu.php?cpu=Intel+Pentium+E2200... Same name as an old CPU.
I had one of these
Color me confused
Is the name based on the Australian Mount Morgan that was once the largest gold mine in the world? One of the owners of it invested everything earned from the mine into Persian oil exploration and created what eventually became BP.
https://en.wikipedia.org/wiki/William_Knox_D%27Arcy
Intel putting CPUs on an expansion card? https://en.wikipedia.org/wiki/Intel_Inboard_386
The ability to connect to 4 hosts makes it seem like MR-IOV all over again! Still, it does look like a fun device from the 'big Arm chip with lots of connectivity' side.
It's quite interesting. Basically Nitro on a stick. For the "repatriation" crowd this seems appealing. But would you invest in the software necessary to exploit this, knowing that Intel could lose interest or just go bankrupt with little warning?
Presumably all hyperscalers who aren't Amazon could be a customer for this? One of them might be enough to keep it viable. See sibling comment about Google being a customer for presumably the previous generation.
I wouldn't be surprised if Google buys the IP since they're the only customer.
How, though? Does the TPU team (literally or logically) map to owning IPU h/w successfully?
(I miss having these kinds of convos on twitter as networkservice ;)
There's a lot more silicon at Google aside from the TPU team, including their own previous NICs.
Not that my memory is ironclad, but I don’t recall any custom IP or even FPGA attempts at Google re: host networking or NICs. Any good search terms I should try to enlighten myself? thanks!
https://news.ycombinator.com/item?id=30757889
https://web.archive.org/web/20230711042824/https://www.wired...
https://static.googleusercontent.com/media/research.google.c...
I believe they have other custom silicon beyond TPUs so it wouldn't be crazy to take this in house if Intel really cans it.
I think at this point, it's clear that the US government will not let Intel go bankrupt without a serious effort to put the company in healthy financial standing first.
Whether or not that's a good thing, well, people have their opinions, but they're considered a national security necessity.
That begs the question: how would one go about utilising this thing in their own deployment?
The primary customer for this would be infrastructure providers that want to give the host full control of the hardware (bare metal, no hypervisor) while still maintaining control of the IO (network attached storage and network isolation).
Conventionally this is done in software with a hypervisor which emulates network devices for the VMs (virtio/vmxnet3, etc.) and does some sort of network encapsulation (VLAN, VXLAN, etc.). Similar things are done for virtual block storage (virtio-blk, NVMe, etc.) to attach to remote drives.
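Roughly, the encapsulation side just means wrapping each tenant frame in an extra header before it hits the physical network. A minimal sketch of the VXLAN flavor, in Python; the VNI and frame here are placeholder values, not anything specific to this card:

    import struct

    def vxlan_header(vni: int) -> bytes:
        # RFC 7348 VXLAN header: 8 bytes. Byte 0 carries the flags
        # (0x08 = "VNI is valid"), bytes 4-6 carry the 24-bit VNI,
        # the remaining bytes are reserved.
        assert 0 <= vni < 1 << 24
        return struct.pack("!B3xI", 0x08, vni << 8)

    inner_frame = bytes(64)  # stand-in for the tenant VM's Ethernet frame
    encapsulated = vxlan_header(vni=4096) + inner_frame
    # The outer Ethernet/IP/UDP headers (UDP dst port 4789) are added by
    # whatever does the encapsulation: a hypervisor vswitch in the
    # conventional setup, or the IPU itself in the bare-metal case.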
If the IaaS clients are high-bandwidth or running their own virtualization stack, the infrastructure provider has nowhere to put this software. You can do the infrastructure's network and storage isolation on the network switches with extra work, but then the termination of the networking and storage has to be done in cooperation with the clients (and you can't trust them to do it right).
Here, the host just sees PCI-attached network interfaces and directly attached NVMe devices which pop up as defined by the infrastructure. These cards are the compromise where you let everyone have bare metal but keep your software-defined network and storage. In advanced cases you could even dynamically shape traffic to prioritize bandwidth between networking and storage.
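To make the "host just sees PCI devices" point concrete, here's a minimal sketch (assuming a Linux host; it only walks sysfs the way lspci does, nothing IPU-specific) of what the bare-metal tenant would enumerate:

    from pathlib import Path

    # From the tenant's side, the functions the card exposes are ordinary
    # PCI devices. This prints everything whose PCI class says "NVMe
    # controller" (0x0108) or "Ethernet controller" (0x0200).
    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        pci_class = int((dev / "class").read_text(), 16) >> 8  # drop prog-if byte
        if pci_class in (0x0108, 0x0200):
            vendor = (dev / "vendor").read_text().strip()
            device = (dev / "device").read_text().strip()
            print(f"{dev.name}  class=0x{pci_class:04x}  {vendor}:{device}")

Whether those functions are backed by local disks or terminate on the card's network side is invisible from this vantage point, which is the whole point.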
Here are some examples: https://ipdk.io/documentation/Recipes/ (keep in mind IPU = E2200 when you read this)
Presumably first hire a few developers to program it.
I hope their Linux code isn’t as outdated and buggy as their IPMI system.