Ollama started out as the clean "just run a model locally" story, but now it’s branching into paid web search, which muddies the open-source appeal. Windows ML goes the other direction: deep OS integration, but at the cost of tying your stack tightly to the Windows ecosystem, very reminiscent of DirectX.
Now, the real question is whether vLLM/ONNX or just running straight on CUDA/ROCm are the only alternatives, or whether we are all trading one vendor lock-in for another.
It's only a matter of time before LLMs start having paid product placement in their results. Even open-source models: the money needed to train and operate these models is just too enormous.
Could there be a distributed training system run by contributors, like the various @Home projects? Yeah, decent chance of that working, especially with the widespread availability of fiber connections. But then to query the model you still need a large low-latency system (i.e. not distributed) to host it, and that's expensive.
Ollama is for LLMs; this is not the same thing. You'll note that one of the examples given is Topaz Photo by Topaz Labs, which upscales images.
I have extensive experience building hardware-accelerated AI inference pipelines on Windows, including on the now-retired DirectML (not a great loss). One thing I learned is that the hardware vendor support promised in press releases is often an outright lie, and that reporting any kind of bug or missing functionality will only result in an infinite blame-game loop between Microsoft developer support and the hardware vendors. It appears that Windows is no longer a platform that anyone feels responsible for or takes pride in maintaining, so your best hope is to build a web app on top of Chromium, so that at least you will have Google on your side when something inevitably breaks.
A system-provided ONNX runtime might be quite nice for Windows applications, provided the backends are actually reliable on most systems. AMD, for example, currently has three options (ROCm, MIGraphX, and Vitis), and I've never gotten any of them to work. (Although MIGraphX no longer appears to be marked as experimental, so maybe I should give it another try.)
Vitis AI is just riddled with bugs and undocumented incompatibilities. I also failed to get it to work for anything beyond the demo models.
How does Windows ML compare to just using something like Ollama plus an LLM that you download to your device (which seems like it would be much simpler)? What are the privacy implications of using Windows ML with respect to how much of your data it is sending back to Microsoft?
Windows ML is an abstraction for running local ML models across the CPU, GPU, or NPU, making the code independent of the actual hardware.
It is the evolution of DirectX for ML, previously known as DirectML.
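For the unfamiliar, the shape of that abstraction is easiest to see in plain ONNX Runtime, which Windows ML builds on. A minimal Python sketch (the model path, provider choice, and input shape are all placeholders, not anything from the announcement):

```python
import numpy as np
import onnxruntime as ort

# See which execution providers this machine's ORT build actually offers,
# e.g. DirectML on a Windows GPU box, QNN on a Snapdragon NPU, CPU everywhere.
print(ort.get_available_providers())

# Providers are tried in the order given; unsupported graphs fall back down
# the list, so the same code runs regardless of the actual hardware.
session = ort.InferenceSession(
    "model.onnx",  # placeholder: any ONNX model
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example image-shaped input
outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)
```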
It’s kind of a bummer because this is the exact same playbook as DirectX, which ended up being a giant headache for the games industry, and now everyone is falling for it again.
I would be curious to see whether it's a common opinion that DirectX was a bad thing for the games industry. It was preceded by a patchwork of messy graphics/audio/input APIs, many of them proprietary, and when it started to gain prominence, Linux gaming was mostly a mirage.
A lot of people still choose to build games on Direct3D 11 or even 9 for convenience, and now thanks to Proton games built that way run fine on Linux and Steam Deck. Plus technologies like shadercross and mojoshader mean that those HLSL shaders are fairly portable, though that comes at the cost of a pile of weird hacks.
One good thing is that one of the console vendors now supports Vulkan, so building your game around Vulkan gives you a head start on console and means your game will run on Windows, Linux, and Mac (though the last one requires some effort via something like MoltenVK) - but this is a relatively new thing. It's great to see either way, since in the past the consoles all used bespoke graphics APIs (except Xbox, which used customized DirectX).
An OpenGL-based renderer would historically have been even more of an albatross when porting to consoles than DX, since (aside from some short-lived, semi-broken support on the PS3) native high-performance OpenGL has never been a feature of anything other than Linux and Mac. In comparison, DirectX has been native on Xbox since the beginning, and that was a boon in the Xbox 360 era, when it was the dominant console.
IMO, picking a graphics API has historically always been about tradeoffs, and the realities favored DirectX until at least the end of the Xbox 360 era, if not longer.
While Switch supports Vulkan, if you really want to take advantage of Switch hardware, NVN is the way to go, or make use of the Nintendo Vulkan extensions that are only available on the Switch.
So it isn't as portable as people think.
Anyone who thinks DirectX was bad for the games industry needs to go back and review the history of graphics APIs.
Usually it is an opinion held by folks without a background in the industry.
Back in my "want to do games phase", and also during Demoscene days, going to Gamedev.net, Flipcode, IGDA forums, or attending GDCE, this was never something fellow coders complained about.
Rather how to do some cool stuff with specific hardware, or gameplay ideas, and mastering various systems was also seen as a skill.
DirectX carried the games industry forward because there weren't alternatives. OpenGL was lagging, and Vulkan didn't exist yet. I hope everyone moves to Vulkan, but DX was ultimately a net positive.
There were others; there is this urban myth that game consoles used OpenGL and only Windows was the outlier.
Even Mac OS only adopted OpenGL after the OS X reboot; before that it was QuickDraw 3D, and the Amiga used Warp3D during its last days.
In the last 25 years I have never gotten this vibe from devs; DirectX likely enabled a ton of games that would never have seen the light of day without it.
It is FOSS folks complaining about proprietary APIs, because there is this dissonance between the communities.
Game developers care about IP, how to take it beyond games, getting a publisher deal, and gameplay; the proprietary APIs are just a set of plugins on a middleware engine, in-house or external, and that's done.
Also, there is a whole set of companies whose main business is porting games, which is where several studios got their foot in the door before coming up with their own ideas, as a means to gain experience and recognition in the industry; they are thankful each platform is something else.
Finally, anyone claiming Khronos APIs are portable has never had the pleasure of using extensions or dealing with driver and shader-compiler bugs.
It is only a headache for FOSS folks; the games industry embraces proprietary APIs. It isn't the elephant-in-the-room problem FOSS culture makes it out to be, as anyone who has ever attended game development conferences or demoscene parties can tell you.
Yeah, DirectX ended up being a giant headache, but there were times in its history when it was the easiest API to use and very high performance. DirectX came about because the alternatives at the time were, frankly, awful.
OpenGL (the main competition to DirectX) really wasn't that bad in the fixed-function days. Everything fell apart when nVidia / AMD came up with their own standards for GPU programming.
DirectX was nice in that the documentation and example/sample code were excellent.
The fixed-function version of OpenGL was not thread safe and relied on global state, which made for some super fun bugs when different libraries set different flags and then assumed they knew which state the OpenGL runtime was in the next time they tried to render something.
What's stopping you from using ONNX models on other platforms? A hardware-agnostic abstraction that makes it easier for consumers to actually use their inference-capable hardware seems like a good idea, and exactly the kind of thing I think an operating system should provide.
> Call the Windows ML APIs to initialize EPs [Execution Providers], and then load any ONNX model and start inferencing in just a few lines of code.
I exclusively use ONNX models across platforms for CPU inference; it's usually the fastest option on CPU. Hacking on ONNX graphs is super easy, too... I make my own uint8-output ONNX embedding models.
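As a taste of that graph hacking, here is a minimal sketch with the onnx Python package; the file names are hypothetical, and a real uint8-embedding pipeline would use QuantizeLinear with a proper scale rather than the bare Cast shown here:

```python
import onnx
from onnx import TensorProto, helper

model = onnx.load("embedder.onnx")  # hypothetical float32 embedding model
graph = model.graph

# Every node is plain protobuf: trivially inspectable and rewritable.
for node in list(graph.node)[:5]:
    print(node.op_type, list(node.input), list(node.output))

# Example edit: append a Cast so the model also emits a uint8 embedding.
# (A production pipeline would insert QuantizeLinear with a learned scale.)
old_output = graph.output[0].name
graph.node.append(helper.make_node(
    "Cast", inputs=[old_output], outputs=["embedding_u8"], to=TensorProto.UINT8))
graph.output.append(helper.make_tensor_value_info(
    "embedding_u8", TensorProto.UINT8, None))  # shape left dynamic

onnx.checker.check_model(model)
onnx.save(model, "embedder_u8.onnx")
```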
Exactly; any work you do on top of this is held hostage to Windows.
People mainly think of Direct3D when referring to DirectX.
The other components were very good as well: DirectInput etc.
I think this is very neat. So many possibilities.
It is not LLM-specific. A large swathe of it isn't that Microsoft-specific either.
And it is a developer feature hidden from end users. E.g., in your Ollama example, does the developer ask end users to install Ollama? Does the dev redistribute Ollama and keep it updated?
The ONNX format is pretty much a boring de facto standard for ML model exchange. It is under the Linux Foundation.
The ONNX Runtime is a Microsoft thing, but it is an MIT-licensed runtime for cross-language use and cross-OS/hardware deployment of ML models in the ONNX format.
That bit needs to support everything because Microsoft itself ships software on everything (Mac/Linux/iOS/Android/Windows).
ORT — https://onnxruntime.ai
Here is the Windows ML part of this — https://learn.microsoft.com/en-us/windows/ai/new-windows-ml/...
The primary value claims for Windows ML (for a developer using it) are that it eliminates the need to:
• Bundle execution providers for specific hardware vendors
• Create separate app builds for different execution providers
• Handle execution provider updates manually
Since ‘EP’ is ultra-super-techno-jargon:
Here is what GPT-5 provides:
Intensional (what an EP is)
In ONNX Runtime, an Execution Provider (EP) is a pluggable backend that advertises which ops/kernels it can run and supplies the optimized implementations, memory allocators, and (optionally) graph rewrites for a specific target (CPU, CUDA/TensorRT, Core ML, OpenVINO, etc.). ONNX Runtime then partitions your model graph and assigns each partition to the highest-priority EP that claims it; anything unsupported falls back (by default) to the CPU EP.
Extensional (how you use them)
• You pick/priority-order EPs per session; ORT maps graph pieces accordingly and falls back as needed.
• Each EP has its own options (e.g., TensorRT workspace size, OpenVINO device string, QNN context cache).
• Common EPs: CPU, CUDA, TensorRT (NVIDIA), DirectML (Windows), Core ML (Apple), NNAPI (Android), OpenVINO (Intel), ROCm (AMD), QNN (Qualcomm).
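To make those bullets concrete, a sketch of per-session priority ordering and per-EP options in ONNX Runtime's Python API (assumes a GPU-enabled ORT build; the option values are illustrative):

```python
import onnxruntime as ort

# Providers are tried in order; each entry is a name or a (name, options) pair.
providers = [
    ("TensorrtExecutionProvider", {"trt_max_workspace_size": 2 * 1024**3}),
    ("CUDAExecutionProvider", {"device_id": 0}),
    "CPUExecutionProvider",  # default fallback for unsupported subgraphs
]

session = ort.InferenceSession("model.onnx", providers=providers)

# Which EPs this session actually ended up with:
print(session.get_providers())
```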
It's an optimized backend for running LLMs, much like CoreML on macOS, which has been received very positively due to the acceleration it enables (ollama/llama.cpp use it).
Since this uses ONNX you probably won't be able to use ollama directly with it, but conceptually you could use an app like it to run your models in a more optimized way.
Ollama doesn't support NPUs.
Correct me if I'm wrong, but if the LF AI & Data Foundation (Linux Foundation) ONNX working groups support advanced quantization (down to 4-bit grouped schemes, like GGUF's Q4/Q5 formats), standardize flash attention and similar fused ops, and allow efficient memory-mapped weights through the spec and into ONNX Runtime, then Windows ML and Apple Core ML could become a credible replacement for GGUF in local-LLM land.
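For reference, ONNX Runtime's long-standing quantization tooling covers 8-bit today (with newer 4-bit weight-only MatMul tooling appearing); a sketch of the dynamic-quantization path, which is still well short of GGUF's grouped Q4/Q5 schemes (file names hypothetical):

```python
from onnxruntime.quantization import QuantType, quantize_dynamic

# Dynamic (weight-only) int8 quantization of an existing float32 model.
# GGUF-style 4-bit grouped schemes need the newer MatMul-4bits tooling or
# spec-level support, which is exactly the gap described above.
quantize_dynamic(
    model_input="llm_block.onnx",
    model_output="llm_block_int8.onnx",
    weight_type=QuantType.QInt8,
)
```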
Funny how the only mention of privacy in the post is this -
> This ability to run models locally enables developers to build AI experiences that are more responsive, private and cost-effective, reaching users across the broadest range of Windows hardware.
> Windows ML is the built-in AI inferencing runtime optimized for on-device model inference...lets both new and experienced developers build AI-powered apps
This sounds equivalent to Apple's announcement last week about opening up access for any developer to tap into the on-device large language model at the core of Apple Intelligence[1]
No matter the device, this is a win-win: developers get to build privacy-focused apps, and consumers get to use them.
[1] https://www.apple.com/newsroom/2025/09/new-apple-intelligenc...
This is the evolution of DirectML, taking into account the issue that it was too focused on C++, like anything DirectX.
Thus C#, C++ and Python support as WinRT projections on top of the new API.
Which should also make it pretty easy to drop Java JNI on top of it.
Panama would be better.
I'm not seeing the equivalence. Isn't the announcement here to let you run any model?
This is good news; hopefully it comes with much better integration with the Nvidia and non-Nvidia AI/ML ecosystems, including the crucial drivers, firmware, and toolkits, as discussed in this very recent HN posting [1].
[1] Docker Was Too Slow, So We Replaced It: Nix in Production [video]
https://news.ycombinator.com/item?id=45398468
Can anyone decipher the marketing blurb? It might as well have told me it was going to "harness synergy".
How is this going to support custom layers like variations of (flash) attention that every company seems to introduce? Would it mean one won't be able to run a specific model (or only have its bastardized version) until MS implements it in the runtime?
ONNX already includes an escape hatch for this: there are official, Microsoft-maintained Execution Providers (https://onnxruntime.ai/docs/execution-providers/), but vendors like Arm/Rockchip/Huawei can also write their own (https://onnxruntime.ai/docs/execution-providers/community-ma...).
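There is also the custom-operator route for truly bespoke layers: compile the kernel into a shared library and register it on the session. A rough sketch (the library, op, and file names are hypothetical):

```python
import onnxruntime as ort

so = ort.SessionOptions()
# Shared library exporting the ORT custom-op registration entry point,
# built against onnxruntime's custom-operator C/C++ API.
so.register_custom_ops_library("./libmy_flash_attention.so")

# The model's graph can reference the custom op (in its own domain)
# without waiting for the runtime to support the layer natively.
session = ort.InferenceSession(
    "model_with_custom_attention.onnx",
    sess_options=so,
    providers=["CPUExecutionProvider"],
)
```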
Amazed they didn’t just name it “Windows AI”
Give them a few months and it will be called Microsoft Copilot AI ML Inferencing Engine for Microsoft Windows and will come in 5 editions.
> Windows ML is generally available
Is this the new Windows 12 ?
Exactly why should anyone pay attention to anything Microsoft says when it comes to AI, when the flagship product they're shoehorning into all their software could best be described as a high-latency, badly degraded ChatGPT wrapper?
Great news then! Per this announcement, Windows ML can enable someone to build a better, local version of Rewind.