This is still far from being viable for actually useful models, like bigger MoE ones with much larger context windows. The technology is very promising, just like Cerebras, but we need to see whether they can keep this up as models evolve over the next few years. Extremely interesting nevertheless.
Keep in mind, though, that if you can run a model at 100-1000x the speed, then even if the model is less capable, the sheer speed may let you do more interesting things (like deep search explorations with LLM-guided heuristics).
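To make that concrete, here is a minimal sketch (all names are my own invention) of a best-first beam search where the branch-ranking heuristic is an LLM judgment; the `llm_score` stub stands in for a hypothetical fast model that rates a partial solution:

```python
import heapq

def llm_score(state):
    # Stand-in for a fast LLM call that rates a partial solution
    # (higher = better). At 100-1000x current speeds, thousands of
    # such calls per second become affordable inside the search loop.
    return 0.0  # placeholder heuristic for this sketch

def guided_search(start, expand, is_goal, score_fn=llm_score,
                  beam_width=8, max_steps=100):
    """Best-first beam search where the heuristic is an LLM judgment."""
    frontier = [(-score_fn(start), start)]      # min-heap on negated score
    for _ in range(max_steps):
        if not frontier:
            return None
        candidates = []
        for _neg_score, state in frontier:
            if is_goal(state):
                return state
            for child in expand(state):
                heapq.heappush(candidates, (-score_fn(child), child))
        frontier = heapq.nsmallest(beam_width, candidates)  # keep best beam
    return None
```

The interesting part is that the search loop calls the model once per candidate branch, which is exactly the usage pattern that only becomes practical at very high tokens-per-second.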
Is this a paid ad placement? I'm seeing a load of breathless "commentary" on Taalas and next to no serious discussion about whether their approach is even remotely scalable. A one-off tech demo using a comparatively ancient open source model is hardly going to be giving Jensen Huang sleepless nights.
Probably being astroturfed by people with a financial interest in it working. The critical commentary in this thread is what to watch for.
Hmm, isn't manufacturing the elephant in the room here? What am I missing? The HC1 is built on TSMC’s N6 process with an 815 mm² die. TSMC’s capacity is already heavily allocated to major customers such as NVIDIA, AMD, Apple, and Qualcomm.
A startup cannot easily secure large wafer volumes because foundry allocation is typically driven by long-term revenue commitments. And the supply side cannot scale quickly: building new foundry capacity takes many years. TSMC’s Arizona fab has been under development since 2021 and is still not producing at scale. Samsung’s Texas fab and Intel’s Ohio project face similarly long timelines. Expanding semiconductor production requires massive construction, EUV equipment from ASML, yield tuning, and specialized workforce training.
Even if demand for hardwired AI chips surged, the manufacturing ecosystem would take close to a decade to respond.
If the hardwired chips are magnitudes faster couldn't they be manufactured on an older process and still be competitive?
The foundation models themselves will be cheap to deploy, but we’ll still need general-purpose inferencing hardware to work alongside them, converting latent intermediate layers to useful, application-specific concerns. This may level off the demand for “gpu/tpu” hardware, though, by letting the biggest and most expensive layers move to silicon.
How specifically would that work? I’ve seen no framework for that happening.
What prevents digital holography on DVD writables from performing such computations optically, even if less efficiently?
imagine each layer in the computation consisting of a DVD + a number of (embedding dimension) light sensors and light sources (or perhaps OPA / external cavity laser setups);
instead of N light sources it could be one light source and a ferroelectric FLCOS display, like the cheap 320 x 240 monochrome high-refresh-rate displays in the toy projectors of the past
https://github.com/ElectronAsh/FLCOS-Mini-Projector-ESP32
it doesn't sound too crazy, and it could permit a low entry cost for a bulky and probably less energy-efficient setup; with updated models you could just burn a new hologram onto a fresh DVD
and people wouldn't be tied to advanced semiconductor manufacture.
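For what it's worth, the core idea of a fixed, burned-in weight matrix applied optically can be toy-modeled in a few lines. This is only a numerical sketch under the assumption that signed weights are handled by a differential pair of detector banks; real holographic setups often encode sign in phase instead:

```python
import numpy as np

def optical_layer(W, x):
    """Toy model of a fixed-hologram matrix-vector product.

    Detected light intensities are nonnegative, so a signed weight
    matrix W is split into positive and negative parts, each measured
    on its own detector bank and subtracted electronically (one common
    encoding trick; phase encoding is another option).
    """
    x = np.clip(x, 0, None)          # source intensities can't be negative
    W_pos = np.clip(W, 0, None)      # "hologram" for the positive weights
    W_neg = np.clip(-W, 0, None)     # "hologram" for the negative weights
    return W_pos @ x - W_neg @ x     # differential detector readout
```

The matrix-vector product is the part the optics would do in constant time per layer; everything nonlinear (activations, normalization) would still need electronics between layers.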
I speculate that they are hitting the reticle limit for models not much bigger than this. Judging by the size of the chip in their demonstrator for an 8B model, I'm sure they know this already.
To scale this up means splitting large models across multiple chips (layer or tensor parallelism). And that gets quite complicated quite quickly, and you'll need really high-bandwidth, low-latency interconnects.
Still a REALLY interesting approach with a ton of potential despite the unstated challenges.
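A rough numerical sketch of what the column-wise (tensor-parallel) split looks like, with the final sum standing in for the all-reduce over the interconnect; this is pure simulation, no real multi-chip API assumed:

```python
import numpy as np

def tensor_parallel_matmul(W, x, n_chips=2):
    """Column-parallel split of y = W @ x across n_chips.

    Each "chip" holds a slice of W's columns and the matching slice
    of x, and computes a partial product locally; the partials are
    then summed (an all-reduce). That sum is where the bandwidth and
    latency cost shows up, at every layer boundary.
    """
    col_slices = np.array_split(np.arange(W.shape[1]), n_chips)
    partials = [W[:, idx] @ x[idx] for idx in col_slices]  # per-chip work
    return np.sum(partials, axis=0)                        # the all-reduce
```

For hardwired chips the interconnect problem is arguably worse than for GPUs, since the partition is frozen into silicon and can't be rebalanced in software.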
It's crazy. In a few years we will be able to buy Qwen on a chip, doing 10K tokens per second.
Yeah, it might well just come on your new laptop.
Or your phone.
Hopefully not vendor locked with pay-per-token licensing.
I always thought that once we have the models figured out, getting the meat of it into an FPGA was probably the logical next step. They seem to have skipped that and are directly writing the program as an ASIC (ROM). Pretty wild.
Yes, FPGAs are not sufficiently dense. Because of their programmability they sacrifice a lot of capacity. The factor is something like 5x-10x.
Give me a 120B dense model on one of these and yeah my API use will probably drop.