I have a lot of respect for the .NET team. They often publish great in-depth articles, and their pursuit of performance is relentless (e.g. see the evolution of Kestrel and Entity Framework).
And ASP.NET is one of the few large projects that managed to survive large breaking changes, almost at Python 2->3 level. You had to completely change how your web app behaved if you relied on their magic session state, which worked hard to keep state synced between back end and front end.
Feels good to have a three-trillion-dollar company interested in improving the stack you use, and actually caring.
I am not sure what the stats are but I am certain they left behind a huge number of projects. Probably a population that won't be very vocal on HN but that doesn't mean they don't exist.
Even for new projects there are problems I run into that force me to use 4.8. For example, if you build an Excel formula, the mandatory async/await approach just doesn't work: it isn't an asynchronous operation, and it runs in a UI context where you deadlock if you block waiting. They also broke a lot of the Windows integration; in a corporate environment, the same network calls with the same syntax authenticate successfully on 4.8 but fail on Core.
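The deadlock described is the classic sync-over-async trap. A minimal sketch, assuming a host with a UI SynchronizationContext (the names `GetPrice`/`FetchPriceAsync` and the URL are illustrative, not from any real add-in):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class PriceFormulas
{
    // Called synchronously by the host (e.g. an Excel formula). Blocking on
    // .Result in a UI SynchronizationContext deadlocks: the await's
    // continuation needs the UI thread, which is busy blocking right here.
    public static string GetPrice(string ticker)
        => FetchPriceAsync(ticker).Result; // DANGER on a UI thread

    private static async Task<string> FetchPriceAsync(string ticker)
    {
        using var http = new HttpClient();
        // Without ConfigureAwait(false), the continuation is posted back to
        // the captured UI context, hence the deadlock above.
        return await http.GetStringAsync("https://example.invalid/price/" + ticker);
    }
}
```

`ConfigureAwait(false)` inside the library call avoids this particular deadlock, but only if applied consistently all the way down, which is the commenter's point about the approach not fitting a synchronous formula API.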
And because they broke backward compatibility in so many libraries, it's a non-trivial effort to convert a complex code base to Core.
It's great that they focus on performance, but .NET Framework fossilised in terms of functionality. It took something like 15 years before they added a JSON serializer to the standard library, and don't even think about support for any new major image format (WebP, HEIC). Everything has become complicated; there isn't a day where I don't see a crash message in Visual Studio. I used to be a big .NET fan, but I miss Anders' leadership.
Last time I tried Entity Framework it was slow, so I replaced it with Dapper and a simple custom migration system. That took database validation and seeding at startup from 10 seconds to less than 2 seconds on low-powered hardware with SQLite. The queries Entity generated had a pointless cascade of join statements.
I have been reaching for Go for simple tooling and HTTP back ends; .NET is useful for other solutions.
I have had too many issues with their frameworks, like WPF, needing to implement Win32 hacks. For example, .NET 9 was the first version that properly returns all network interfaces; older runtimes only expose enabled NICs. I still have to maintain Windows 7 support for some solutions.
I just went the other way and tore all the Dapper + SQL and repositories out of a big project and switched it all to EF Core 10. I haven't noticed any change in performance, but I dumped thousands of lines of unnecessary code. I like the tighter code, but you definitely do need to keep an eye on the "magic" of EF to avoid it doing anything goofy that isn't easily visible.
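One cheap way to keep an eye on that "magic" is EF Core's `LogTo` hook, which surfaces the SQL it actually generates. A sketch (the `AppDbContext` class and connection string are illustrative):

```csharp
using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Logging;

public class AppDbContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options
            .UseSqlite("Data Source=app.db")
            .LogTo(Console.WriteLine, LogLevel.Information) // print generated SQL
            .EnableSensitiveDataLogging(); // include parameter values (dev only!)
}
```

Watching the emitted SQL during development makes cartesian explosions and accidental N+1 query patterns visible before they reach production.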
We use the newer Entity Framework (Core) in a few large projects with zero issues.
We even have Dapper and are moving away from it, as it brings nothing to the table but strings and a bunch of raw SQL.
We are also running into more and more performance issues with EF. There are ways to tune it, but I am not sure whether it's worth learning that for EF or whether it's not better to just go for straight SQL. MS seems to have a tendency to create abstractions that then don't work 100%. I see this with .NET too: often you have to go down to Win32 for apps that are tightly coupled with Windows and hardware.
EF Core is great for simple queries and for modifying your data via the change tracker.
You can use AsNoTracking and projections to get query performance similar to Dapper.
With command-query separation you can also use Dapper for queries and EF Core for commands.
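A sketch of that split, assuming EF Core and illustrative types (`AppDbContext`, `Order`, `OrderDto` are not from the thread):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public record OrderDto(int Id, decimal Total);

public static class OrderQueries
{
    // Read side: a no-tracking projection, close to a hand-written Dapper query.
    public static Task<List<OrderDto>> RecentOrdersAsync(AppDbContext db) =>
        db.Orders
          .AsNoTracking()                           // skip change-tracker overhead
          .Where(o => o.CreatedAt > DateTime.UtcNow.AddDays(-7))
          .Select(o => new OrderDto(o.Id, o.Total)) // fetch only the needed columns
          .ToListAsync();

    // Write side: load a tracked entity and let the change tracker emit the UPDATE.
    public static async Task MarkShippedAsync(AppDbContext db, int id)
    {
        var order = await db.Orders.FindAsync(id);
        order!.Shipped = true;
        await db.SaveChangesAsync();
    }
}
```

The projection means EF Core selects only two columns and materializes plain DTOs, which is where most of the "EF is slow" overhead usually disappears.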
one thing that struck me was that the foundation for this effort was the linux distro build system. in other words, the work they put into making .net open-source and cross-platform eventually made everyone's lives easier.
I think .NET is way beyond the situation that they question their open source move. The amount of pull requests and positive outcomes for them in the last 10 years (yeah that long already) is mind blowing.
yeah, I didn't mean that it was evidence the open source move was successful or valuable, more that it showed that the engineering effort that went into the open sourcing also yielded dividends for the project in general
Modern .NET is awesome. In a small side hustle project, I develop a REST API Backend in C# on my macOS using VSCode, and deploy it to Linux for the past 3 years without any issues. I use SQLite, EFCore, Minimal APIs and it is a delight compared to the frontend part - which is NextJS/React/MaterialUI with 50+ (dev-)packages in npm.
Oh, wow, I didn't expect that the best thing I'd read about software engineering, like, this year would come out of Microsoft! Don't get me wrong: I like .NET, especially its recent incarnation, but until just now, I would have expected its robustness to be an against-all-odds under-the-radar lucky escape from the general enshittification that seems to be the norm for the industry.
Reading something like this, which outlines a coordinated effort (diagrams and even a realistic use case for agentic LLM usage and all!) to actually and effectively make things better was a breath of fresh air, even if towards the end it notes that the remarkable investment in quality will not be in full force in the future.
Even if you don't care about .NET and/or Microsoft, this is worth reading, doubly so if you're in charge of re-engineering just about anything -- this is how it's done!
I can see that high-level overviews of complex systems are useful for gaining insight, but in the same way I have the feeling that this mentality of high-level, abstract organization is the root of the problem. If you have a complex system and simplify the components into abstractions, you will repeatedly run into difficulties, because you've actively ignored the dirty bits. It's a top-down approach that tries to tackle all issues, whereas a bottom-up approach could eradicate myriads of issues outright.
No, read the article. It needs to build some "sub" SDKs to build the final 'full' SDK packages. That's the whole point; they want to get to a state where they can do that.
.NET was a solid choice for backend builds before Node became so popular (and .NET is generally more performant than Node).
I hope this churn in .NET builds is temporary because a lot of people might be looking to go back to something stable especially after the recent supply chain attacks on the Node ecosystem.
> I hope this churn in .NET builds is temporary because a lot of people might be looking to go back to something stable especially after the recent supply chain attacks on the Node ecosystem.
Can you elaborate a bit? This article talks about internal machinery of building .net releases. What does that have to do with "this churn", whatever that is?
My guess is that if you build with .NET Framework you can just run your builds forever, but if your source code is based on newer .NET you have to update to a new version each year and deal with all the work of upgrading your entire project. That also means everyone on your team is upgrading their dev environment, and now you have new things in the language and the runtime to deal with, deprecations and all that. Plus, lots of packages don't update as fast when version changes occur, so chances are you will take on more work and use as few dependencies as possible, if any, which can itself cause a lot of work. Instead it's best, if you do need to depend on something, for it to be a very big Swiss-Army-knife kind of thing.
I think Node is just more flexible, and unless .NET Framework-like forever releases or much longer-term support make a comeback, there's no good trade-off over Node, since you don't even get more stability.
> if your source code is based on newer .NET you have to update to a new version each year
.NET has a really refreshingly sane release life cycle, similar to nodejs:
- There's a new major release every year (in November)
- Even numbers are LTS releases, and get 3 years of support/patches
- Odd numbers get 18 months of support/patches
This means that if you target LTS, you have 2 years of support before the next LTS, and a full year of overlap where both are supported. If you upgrade every release, you have at least 6 months of overlap.
There are very few breaking changes between releases anyway, and they're often in infrastructure stuff (config, startup, project structure) as opposed to actual application code.
I think it's important to remember that .NET projects can use code built for older releases, to an almost absurd degree; as long as you don't cross the .NET Framework divide, you largely don't even need to change anything to move projects to newer frameworks. They largely just work.
The .Net platform is honestly the most stable it has ever been.
Recent experience report: I updated four of my team's five owned microservices to .net 10 over the past two weeks. All were previously on .net 8 or 9. The update was smooth: for the .net 9 services, I only had to update our base container images and the csproj target frameworks. For the .net 8 services, I also had to update the Mvc.Testing reference in their integration tests.
It's hard for me to imagine a version increment being much easier than this.
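For a bump like the one described, the csproj side is typically a one-line edit (version numbers illustrative):

```xml
<PropertyGroup>
  <!-- was: <TargetFramework>net9.0</TargetFramework> -->
  <TargetFramework>net10.0</TargetFramework>
</PropertyGroup>
```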
Back when .NET Framework was not in the frozen state it is in now, every release came with a list of breaking changes. Modern .NET breaking changes are not worth talking about. Keeping up with the state of the art, however, is more involved. But that is needed to be a solution for today and to stay relevant.
What do you mean? The .Net ecosystem has been generalized chaos for the past 10 years.
A few years ago even most people actively working in .NET development couldn't tell what the hell was going on. It's better now. I distinctly recall when .NET Framework 4.8 had been released, and a few months later .NET Core 3.0 came out, and they announced that .NET Standard 2.1 was going to be the last version of that. Nobody had any idea what anything was.
.Net 5 helped a lot. Even then, MS has been releasing new versions of .Net at a breakneck pace. We're on .Net 10, and .Net Core 1.0 was 9 years ago. There's literally been a major version release every year for almost a decade. This is for a standard software framework! v10 is an LTS version of a software framework with all of 3 years of support. Yeah, it's only supported until 2028, and that's the LTS version.
The only chaos occurred in the transition from .NET Framework to .NET (Core). Upgrading .NET versions is mostly painless now because the breaking changes tend to only affect very specific cases. Should take a few minutes to upgrade for most people.
Except it is a bummer when one happens to have such specific cases.
It never takes a few minutes in big corp, everything has to be validated, the CI/CD pipelines updated, and now with .NET 10, IT has to clear permission to install VS 2026.
If you can't get permission to update/change IDE, the company processes aren't working at all tbh. Same if cicd is in another department that doesn't give a shit.
That is pretty standard in most Fortune 500, whose main business is not selling software, and most development is done via consulting agencies.
In many cases you get assigned virtual computers via Citrix/RDP/VNC, and there is a whole infra team responsible for handling tickets of the various contractors.
Similar story at my prior job. Heck, we still had one package that was only built using 32-bit .Net Framework 1.1. We were only just starting to see out-of-memory errors due to exhausting the 2 GB address space in ~2018.
I love the new features of .Net, but in my experience a lot of software written in .Net has very large code bases with a lot of customer specific modifications that must be supported. Those companies explicitly do not want their software framework moving major supported versions as quickly as .Net does right now, because they can't just say "oh, the new version should work just fine." They'd have to double or triple the team size just to handle all the re-validation.
Once again, I feel like I am begging HN to recognize not everyone is at a 25 person microservice startup.
Not sure about the past tense here. .NET is still excellent and getting even better with every release. What instability are you talking about? There was the leap to .NET Core which was majorly breaking, but that was almost 10 years ago now.
If they’re in a team similar to some I’ve worked with, engineers are barely getting comfortable with the shift away from .NET Framework (!)
There are legions of developers for whom Visual Studio on Windows is the only place they have ever been comfortable. And upgrading between versions of .NET is a point-click exercise between the various UIs (Visual Studio Installer, “Get New Components or Features”, and the NuGet package manager)
The advent of .NET Core happened to coincide with initiatives to adapt:
* toward the cloud and away from IIS and Windows Server
* toward Git and away from TFS
* toward remote CI/CD and away from “drag my files into inetpub”
* toward SPAs and away from ASP.NET XAML programming (Blazor notwithstanding)
* toward a broader toolkit where the familiarity with OSS and open specs is advantageous, and away from Visual Studio as the center of the universe (though it still arguably reigns supreme in its class of IDEs)
Coming from the Linux/Docker world before going deep in .NET, I was both resented and leaned on heavily for these teams’ transitions. Most of my teammates had never read the contents of their .csproj or .sln files, or run a build command from a terminal and read its log output. They were annoyed by my requests to do so when helping them troubleshoot; some just rejected the idea outright (“there’s no need to look at VS internals here”, “we shouldn’t need to run DOS commands in today’s world, VS should handle this!”)
I can definitely sympathize with developers who were sold on what seemed like a promise that deep VS/IIS/etc knowledge would be the rock-solid foundation for business software for the rest of their careers. During the uprooting process, other promises like “netstandard2.0 will be forever for your core libraries and all future .NET runtimes!” end up with asterisks the following year.
I am 100% in agreement that .NET dev team is doing an amazing job, but it’s precisely because of their continued shakeups when they see major opportunities to improve it from the ground up, and probably the same reason that others feel wary of it
Anyways, I work with both .NET Framework and .NET. Being a developer is a joy when you can learn new tricks daily and satisfy your curiosity.
So to me this reads so alien that people fail to learn new tricks within .NET world. For me it's like a stream of amazement: Ohh, this is so much better with new language features. Ohh, this code is so much more clean. Woa, logging is so simple with ILogger and I can plug whatever underlying log engine I want! Configuration via JSON files, niice. I can override with Env variables out of the box? Amazing. This all follows particular rules and patterns. Ohh, I can customize the way I read configuration any way I want if I need it so. Ohh, I can build and run from CLI using simple commands! Ohh, I can package in docker and run on Linux. Oh, wow, .csproj is so much more clean, gotta use SDK-style project for my .NET Framework projects too!
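A sketch of the configuration layering and pluggable logging described above, assuming the Microsoft.Extensions.Configuration and Microsoft.Extensions.Logging packages (file and category names illustrative):

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;

var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true) // JSON file first...
    .AddEnvironmentVariables()                       // ...env vars override it,
    .Build();                                        // e.g. Logging__LogLevel__Default=Debug

using var loggerFactory = LoggerFactory.Create(builder => builder
    .AddConfiguration(config.GetSection("Logging")) // levels come from config
    .AddConsole());                                  // swap in any provider here

var logger = loggerFactory.CreateLogger("Demo");
logger.LogInformation("Hello from {Source}", "ILogger");
```

The `__` separator is how environment variables map onto nested JSON keys, which is what makes the out-of-the-box override work.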
You're totally right on all those changes, and I think all of those things were the bane of .NET development. Getting rid of all the cruft in one swift operation was life-changing. Finally being able to junk IIS and deploy web apps to Linux running their own tight little web server (Kestrel) is fantastic.
I feel the pain, as polyglot consultant, I would like to see more RFPs asking for .NET skills, unfortunely it seems it is all about nodejs, some Java, and plenty of low code tools (iPaaS).
At least exactly due to performance issues, I get some excuses to push for C++ addons in some cases.
I love working with dotnet, but lately I’ve been writing more backend applications in Python. The code is simpler, testing is simpler since method privacy doesn’t really exist, and code is quicker to deploy because you do not have to compile it.
This could also change but in my experience AI is better at generating Python code versus dotnet.
Problem is, though, Python is slow at runtime. It may not matter for many use cases, but I've worked with a lot of startups that suffered terrible reliability problems because they chose Python (or Rails, or Node to some extent) and the service couldn't handle peak-time load without a lot of refactoring and additional app servers.
Depending on your framework, Python is at best ~3x slower (FastAPI) and at worst ~20x slower (Django) than ASP.NET on the TechEmpower benchmarks, which maps pretty well to my real-world experience.
Can confirm. Just finished load testing a FastAPI service. The biggest mitigating point is that a lot of real backends never experience the level of load where this actually matters.
I don't spend a lot of time building services, but for the last few I've done, I actually went straight to Rust. The downside is that it's quite slow to develop -- I probably don't have the knowledge that others do, but it seems the frameworks could really use some work. That said, I love that I can find and fix most of my problems during development. Building a service in Python means I'm constantly fixing issues in production.
.NET is certainly better than Python, but I'm not very happy with the type system and the code organization versus my Rust projects.
Not saying that it’s necessarily the right choice, but it opens up contributions to code to a broader user base and making those rapid iterations that tools like fastapi allow can be pretty important when proving out a concept early on.
Horses for courses… also, a Horizontal Pod Autoscaler and Load Balancer setup is pretty cheap.
I'm moving from Python to Java because of how much easier it is to actually use all CPU cores in Java and strict typing prevents so many bugs and it is much faster. I don't think it is actually that much more complicated than Python in 2025.
Agreed. It's sort of crazy how little people understand about multicore software design, given nearly everyone is using machines with >8 CPU cores these days (even a cheap Android phone tends to have 8 cores).
In Python and Node it is _so_ painful to use multiple cores, whereas .NET has had parallel for loops and Task.WhenAll for over a decade. Java is similar, in the sense that you don't have to do anything special to use multiple cores and can just run multiple tasks without having to worry about passing state etc. between 'workers'.
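The two idioms mentioned, sketched with just the BCL (no extra packages):

```csharp
using System;
using System.Threading.Tasks;

// CPU-bound work: spread a loop across all available cores.
long[] squares = new long[1000];
Parallel.For(0, squares.Length, i => squares[i] = (long)i * i);

// Async fan-out: start several tasks and await them together.
int[] results = await Task.WhenAll(WorkAsync(1), WorkAsync(2), WorkAsync(3));
Console.WriteLine(string.Join(",", results)); // Task.WhenAll preserves input order: 2,4,6

static async Task<int> WorkAsync(int n)
{
    await Task.Delay(10);
    return n * 2;
}
```

`Parallel.For` handles partitioning and scheduling across cores itself; there is no worker-process plumbing or state marshalling as in Python's multiprocessing.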
This actually becomes a really big problem for web performance, something I'm deeply passionate about. Not everything is just IO driven holdups, sometimes you do need to use a fair bit of CPU to solve a problem, and when you can't do it in parallel easily it ends up causing a lot of UX issues.
I am doing backend in Kotlin, but I must admit that Java has been catching up quickly, and it seems like Kotlin has been shifting its focus to Kotlin Multiplatform. Modern Java is a good, pleasant language and a safer bet.
Gradle with Kotlin DSL is nice, what's annoying is Gradle's constant API reshuffling for the sake of it that breaks plugins. Some plugins also introduce pointless breaking changes just to have a fancier DSL.
The IDE support is not an issue in practice, in my opinion, because IDEA is the best IDE for both Java and Kotlin. The official Kotlin LSP was released 6 months ago, but I haven't tried it.
I'm dabbling and like it but there is just SO MUCH JAVA code. There are 1000 Java examples for every 1 Kotlin. Maybe LLMs make this less of an issue now though.
Has too much sugar, and without JetBrains IDE you're stuck with a plain text editor. Not sure if it's generalizable to normal Kotlin or not, but learning Gradle Kotlin DSL made me want to rip my hair out when trying to understand what happens under the hood.
.NET has a far better developer experience than Node and is nearly as fast as Rust if written for performance, certainly much faster than Node or Python
Node's numbers are inflated not by choice but by force: Node is not a choice but a consequence of frontend-heavy work. And JavaScript was made good via TypeScript, by the same person who created C#.
Same goes with Python with its data science and ML/AI background.
And the general malus is Microsoft as a company.
In summary: it is not the tech. It is the landscape.
I also use C# to earn my money. Sadly, the new custom of hyperinflation in language sugar and frameworks makes following new things quite hard. Even today, starting a new project, I choose .NET Framework 3.5 and its syntax. I know this sounds extreme, but 3.5 has everything I need to build great software, and it offers a very well-tested environment. Setting up the software stack is a very easy process. Programs written against the v2 runtime also run on the v4 runtime, so a simple config file next to the exe makes them run on any Windows machine without any framework deployment.
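The side-by-side config alluded to is presumably the classic `supportedRuntime` element, which lets a CLR 2.0-era binary (such as a Framework 3.5 build) run on machines that only have the v4 runtime installed:

```xml
<!-- app.config placed next to the .exe -->
<configuration>
  <startup>
    <supportedRuntime version="v4.0" />
    <supportedRuntime version="v2.0.50727" />
  </startup>
</configuration>
```

The loader picks the first listed runtime that is present on the machine.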
.NET Framework 3.5 is so old it’s not available by default on Windows (maybe not available at all on the latest Windows), you’d probably have to work with ancient developer tooling to work with it, it’s probably unsupported and has security issues.
And that’s ignoring how you’re essentially severely handicapping yourself in terms of what is possible.
Unless you’re in an environment stuck 20 years in the past (which implies serious security liabilities considering they must be a Microsoft shop), this is a mind bogglingly bizarre strategy.
3.5 is approaching end of life in the next few years; you definitely should not be building anything new with it. There are a lot of QoL changes in modern .NET that make your life as a developer significantly nicer. Even for building Windows services, the modern Generic Host model is orders of magnitude better than anything in .NET Framework.
I remember these days. But I have to say: .NET Core and .NET 5+ are awesome. They bring this ease you speak about into the cloud, into Linux, into containers. Obviously with the notable exception of UI development, but there the landscape has turned 5 times since 3.5 was released in 2007.
This is a strange take to me. I've been doing .NET for 25 years now. I don't see why you can't just write your code in "Framework 3.5" style in Visual Studio/VSCode 2026 with .NET 10? I would think almost any file from 3.5 would just drop in and compile with practically zero changes. I can't even think of any new language features that are being forced upon the developer.
Reminds me of an algorithms professor who'd get us to implement some modified standard algorithm in Java and then go over each implementation really quickly at the beginning of the next lecture. He'd find parts of your code weird, replace them really quickly and leave your clean, working implementation with 20 deprecation warnings.
At the end of the day both solutions/syntaxes work, but..
Developers! Developers! Developers!
The trick with EF Core is to let it do the simple stuff and if you need anything more complicated than .Include write the query yourself.
Are you talking about EF or EF Core? If the latter, did you enable any or all of the magic, like automatic property evaluation/eager fetching etc.?
EFCore by default is pretty performant - and pretty basic. You need to manually .Include() joins etc. which makes it pretty hard to become slow.
Isn't that the core problem with ORMs? All of them. In the end you go to straight SQL when it comes to performance.
What version was it?
Must have been an amazing effort to be involved in.
> We’re asking how much it will cost to build 3-4 major versions with a dozen .NET SDK bands between them each month.
Why so many variants?
Well you've got .NET 8 (LTS), .NET 9 (standard support), .NET 10 (LTS). These are all supported at once.
Then you've got the .NET SDK/aspnet/runtime (on x64/arm32/arm64 linux/mac/windows), and also the various SDK packages themselves.
3**4 = 81 builds - but aren’t all of those independent and thus parallelizable?
No, read the article. It needs to build some "sub" SDKs to build the final 'full' SDK packages. That's the whole point; they want to get to a state where they can do that.
.NET was a solid choice for backend builds before Node became so popular (And .NET is generally more performant than Node).
I hope this churn in .NET builds is temporary because a lot of people might be looking to go back to something stable especially after the recent supply chain attacks on the Node ecosystem.
> I hope this churn in .NET builds is temporary because a lot of people might be looking to go back to something stable especially after the recent supply chain attacks on the Node ecosystem.
Can you elaborate a bit? This article talks about internal machinery of building .net releases. What does that have to do with "this churn", whatever that is?
My guess is if you build with .NET Framework you can just forever run your builds, but if your source code is based on newer .NET you have to update to a new version each year, and deal with all the work in upgrading your entire project, which also means everyone in your team is also upgrading their dev environment, and now you have new things in the language and the runtime to deal with, deprecation and all that. Plus lots of packages don’t update as fast when version changes occurs, so chances are you will probably take more work and use as few dependencies as possible if at all, which may cause a lot of work. Instead it’s best to, if you need to depend on something, to be a very big Swiss Army knife like thing.
I think Node is just more flexible, and unless .NET Framework-style forever releases or much longer-term support make a comeback, there's no good trade-off over Node, since you don't even get more stability.
> if your source code is based on newer .NET you have to update to a new version each year
.NET has a really refreshingly sane release life cycle, similar to nodejs:
- There's a new major release every year (in November)
- Even numbers are LTS releases, and get 3 years of support/patches
- Odd numbers get 18 months of support/patches
This means if you target LTS, you have 2 years of support before the next LTS, and a full year of overlap where both are supported. If you upgrade every release, you have at least 6 months of overlap.
There are very few breaking changes between releases anyway, and they're often in infrastructure stuff (config, startup, project structure) as opposed to actual application code.
> Odd numbers get 18 months of support/patches
They recently fixed the friction with odd-number releases by providing 24 months of support.
Ah, but if you use node.js you get breaking changes every other day from dependencies on dependencies you didn’t even know you had.
I think it's important to remember that Dotnet projects can use code built for older releases, to an almost absurd degree. As long as you don't go back before the .NET Framework divide, you largely don't even need to change anything to move projects to newer frameworks. They largely just work.
The .Net platform is honestly the most stable it has ever been.
Recent experience report: I updated four of my team's five owned microservices to .net 10 over the past two weeks. All were previously on .net 8 or 9. The update was smooth: for the .net 9 services, I only had to update our base container images and the csproj target frameworks. For the .net 8 services, I also had to update the Mvc.Testing reference in their integration tests.
It's hard for me to imagine a version increment being much easier than this.
I'm currently migrating dozens of projects to .NET 10. All of them so far were basically one line changes and a recompile.
You should be able to go from .NET 6->10 without almost any changes at all.
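For most projects the upgrade really is that small: a version bump in the `.csproj` and a rebuild (the project name and version numbers below are just for illustration):

```xml
<!-- MyService.csproj: before -->
<TargetFramework>net8.0</TargetFramework>

<!-- MyService.csproj: after -->
<TargetFramework>net10.0</TargetFramework>
```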
The past three years of dotnet upgrades have been completely painless for me.
Back when .NET Framework was not in the frozen state it is in now, every release came with a list of breaking changes. Modern .NET breaking changes are not worth talking about. Keeping up with the state of the art is more interesting... but that's what it takes to be a solution for today and to stay relevant.
Note how practitioners of .NET praise it and non-practitioners (users of .NET Framework) criticize it.
Or users of other programming tool chains.
What do you mean? The .Net ecosystem has been generalized chaos for the past 10 years.
A few years ago even most people actively working in .Net development couldn't tell what the hell was going on. It's better now. I distinctly recall when .Net Framework v4.8 had been released and a few months later .Net Core 3.0 came out and they announced that .Net Standard 2.0 was going to be the last version of that. Nobody had any idea what anything was.
.Net 5 helped a lot. Even then, MS has been releasing new versions of .Net at a breakneck pace. We're on .Net 10, and .Net Core 1.0 was 9 years ago. There's literally been a major version release every year for almost a decade. This is for a standard software framework! v10 is an LTS version of a software framework with all of 3 years of support. Yeah, it's only supported until 2028, and that's the LTS version.
The only chaos occurred in the transition from .NET Framework to .NET (Core). Upgrading .NET versions is mostly painless now because the breaking changes tend to only affect very specific cases. Should take a few minutes to upgrade for most people.
Except it is a bummer when one happens to have such specific cases.
It never takes a few minutes in big corp, everything has to be validated, the CI/CD pipelines updated, and now with .NET 10, IT has to clear permission to install VS 2026.
If you can't get permission to update/change your IDE, the company processes aren't working at all tbh. Same if CI/CD is owned by another department that doesn't give a shit.
That is pretty standard in most Fortune 500, whose main business is not selling software, and most development is done via consulting agencies.
In many cases you get assigned virtual computers via Citrix/RDP/VNC, and there is a whole infra team responsible for handling tickets of the various contractors.
Similar story at my prior job. Heck, we still had one package that was only built using 32-bit .Net Framework 1.1. We were only just starting to see out-of-memory errors due to exhausting the 2 GB address space in ~2018.
I love the new features of .Net, but in my experience a lot of software written in .Net has very large code bases with a lot of customer specific modifications that must be supported. Those companies explicitly do not want their software framework moving major supported versions as quickly as .Net does right now, because they can't just say "oh, the new version should work just fine." They'd have to double or triple the team size just to handle all the re-validation.
Once again, I feel like I am begging HN to recognize not everyone is at a 25 person microservice startup.
You're describing a specific case of working in a big rigid enterprise. It doesn't have anything to do with .NET itself, does it?
Not sure about the past tense here. .NET is still excellent and getting even better with every release. What instability are you talking about? There was the leap to .NET Core which was majorly breaking, but that was almost 10 years ago now.
If they’re in a team similar to some I’ve worked with, engineers are barely getting comfortable with the shift away from .NET Framework (!)
There are legions of developers for whom Visual Studio on Windows is the only place they have ever been comfortable. And upgrading between versions of .NET is a point-click exercise between the various UIs (Visual Studio Installer, “Get New Components or Features”, and the NuGet package manager)
The advent of .NET Core happened to coincide with initiatives to adapt:
* toward the cloud and away from IIS and Windows Server
* toward Git and away from TFS
* toward remote CI/CD and away from “drag my files into inetpub”
* toward SPAs and away from ASP.NET Web Forms programming (Blazor notwithstanding)
* toward a broader toolkit where the familiarity with OSS and open specs is advantageous, and away from Visual Studio as the center of the universe (though it still arguably reigns supreme in its class of IDEs)
Coming from the Linux/Docker world before going deep in .NET, I was both resented and leaned on heavily for these teams’ transitions. Most of my teammates had never read the contents of their .csproj or .sln files, or run a build command from a terminal and read its log output. They were annoyed by my requests to do so when helping them troubleshoot; some just rejected the idea outright (“there’s no need to look at VS internals here”, “we shouldn’t need to run DOS commands in today’s world, VS should handle this!”)
I can definitely sympathize with developers who were sold on what seemed like a promise that deep VS/IIS/etc knowledge would be the rock-solid foundation for business software for the rest of their careers. During the uprooting process, other promises like “netstandard2.0 will be forever for your core libraries and all future .NET runtimes!” end up with asterisks the following year.
I am 100% in agreement that .NET dev team is doing an amazing job, but it’s precisely because of their continued shakeups when they see major opportunities to improve it from the ground up, and probably the same reason that others feel wary of it
Thanks for making me feel like a 10x dev :)
Anyways, I work with .NET Framework and .NET. Being a developer is a joy where you can learn daily new tricks and satisfy your curiosity.
So to me this reads so alien that people fail to learn new tricks within .NET world. For me it's like a stream of amazement: Ohh, this is so much better with new language features. Ohh, this code is so much more clean. Woa, logging is so simple with ILogger and I can plug whatever underlying log engine I want! Configuration via JSON files, niice. I can override with Env variables out of the box? Amazing. This all follows particular rules and patterns. Ohh, I can customize the way I read configuration any way I want if I need it so. Ohh, I can build and run from CLI using simple commands! Ohh, I can package in docker and run on Linux. Oh, wow, .csproj is so much more clean, gotta use SDK-style project for my .NET Framework projects too!
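For anyone curious, that config/logging flow can be sketched in a few lines. This is a hypothetical minimal console app (the `MyApp:Greeting` key is made up for illustration; it needs the Microsoft.Extensions.Hosting NuGet package):

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

var builder = Host.CreateApplicationBuilder(args);

// appsettings.json is loaded by default, and environment variables override it,
// so MyApp__Greeting=hi beats { "MyApp": { "Greeting": "hello" } } in the file.
var greeting = builder.Configuration["MyApp:Greeting"] ?? "hello";

using var host = builder.Build();

// ILogger is backed by whatever providers are registered (console by default);
// swapping the underlying log engine doesn't touch this call site.
var logger = host.Services.GetRequiredService<ILogger<Program>>();
logger.LogInformation("Greeting resolved to {Greeting}", greeting);
```

The same pattern scales from a one-file tool up to a full web app, which is a big part of why the new stack feels so coherent.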
You're totally right on all those changes, and I think all of those things were the bane of .NET development. Getting rid of all the cruft in one swift operation was life-changing. Finally being able to junk IIS and deploy web apps to Linux running their own tight little web server (Kestrel) is fantastic.
I love C#. When combined with JetBrains Rider it may be the most satisfying dev experience I’ve had in my career.
I feel the pain. As a polyglot consultant, I would like to see more RFPs asking for .NET skills; unfortunately it seems it is all about nodejs, some Java, and plenty of low-code tools (iPaaS).
At least exactly due to performance issues, I get some excuses to push for C++ addons in some cases.
.NET churns less than any other major stack. Every upgrade since Core 2 (released in 2017) has been minimally painful or, more recently, painless.
I love working with dotnet, but lately I’ve been writing more backend applications in Python. The code is simpler, testing is simpler since method privacy doesn’t really exist, and code is quicker to deploy because you do not have to compile it.
This could also change but in my experience AI is better at generating Python code versus dotnet.
Problem is though Python is slow at runtime. May not matter for many use cases, but I've worked with a lot of startups that suffered terrible reliability problems because they chose Python (or Rails, or Node to some extent) and the service cannot handle peak time load without a lot of refactoring and additional app servers.
Depending on your framework, Python is at best ~3x slower (FastAPI) and at worst ~20x slower (Django) than ASP.NET on the TechEmpower benchmarks, which maps pretty well to my real-world experience.
Can confirm. Just finished load testing a FastAPI service. The one saving grace is that a lot of real backends never experience the level of load where this actually matters.
I don't spend a lot of time building services, but for the last few I've done, I actually went straight to Rust. The downside is that it's quite slow to develop -- I probably don't have the knowledge that others do, but it seems the frameworks could really use some work. That said, I love that I can find and fix most of my problems during development. Building a service in Python means I'm constantly fixing issues in production.
.NET is certainly better than Python, but I'm not very happy with the type system and the code organization versus my Rust projects.
Not saying that it’s necessarily the right choice, but it opens up contributions to code to a broader user base and making those rapid iterations that tools like fastapi allow can be pretty important when proving out a concept early on.
Horses for courses… also, a Horizontal Pod Autoscaler and Load Balancer setup is pretty cheap.
Most web apps are waiting on the DB anyway. Rarely have I seen the speed of the actual framework make any meaningful difference.
If you don't want your methods to be private make them public?
Just make them internal and use [InternalsVisibleTo] on the assembly.
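For reference, the pattern looks like this (the assembly and type names here are made up for illustration):

```csharp
using System.Runtime.CompilerServices;

// Grant the (hypothetical) test assembly access to this assembly's internals.
[assembly: InternalsVisibleTo("MyApp.Tests")]

namespace MyApp
{
    public class Calculator
    {
        // internal: invisible to normal consumers, but callable from MyApp.Tests.
        internal int AddCore(int a, int b) => a + b;

        public int Add(int a, int b) => AddCore(a, b);
    }
}
```

A test project named `MyApp.Tests` can then exercise `AddCore` directly, while every other assembly still sees only the public `Add`.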
I'm moving from Python to Java because of how much easier it is to actually use all CPU cores in Java and strict typing prevents so many bugs and it is much faster. I don't think it is actually that much more complicated than Python in 2025.
Agreed. It's sort of crazy how little people understand about multicore software design given that nearly everyone is using machines with 8+ CPU cores (even a cheap Android phone tends to have 8 cores these days).
In Python and Node it is _so_ painful to use multiple cores, whereas .NET has had parallel for loops and Task.WhenAll for over a decade. Java is similar in the sense that you don't have to do anything special to use multiple cores and can just run multiple tasks without having to worry about passing state between 'workers'.
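To make the contrast concrete, here is a minimal sketch of both primitives (the workloads are toy examples, nothing more):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

// CPU-bound work fanned out across all cores, no thread bookkeeping required.
long[] squares = new long[1000];
Parallel.For(0, squares.Length, i => squares[i] = (long)i * i);
Console.WriteLine(squares.Sum()); // prints 332833500

// For a batch of concurrent tasks, Task.WhenAll awaits them all together.
var tasks = Enumerable.Range(1, 3).Select(n => Task.Run(() => n * n));
int[] results = await Task.WhenAll(tasks);
Console.WriteLine(results.Sum()); // prints 14 (1 + 4 + 9)
```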
This actually becomes a really big problem for web performance, something I'm deeply passionate about. Not everything is just IO driven holdups, sometimes you do need to use a fair bit of CPU to solve a problem, and when you can't do it in parallel easily it ends up causing a lot of UX issues.
On most cloud deployments, you get one shared “virtual” core — whatever that means.
No, you get however many you choose and are willing to pay for. 1 vCPU is not good for very much.
Even Guido van Rossum admits that if he had known how common high core count CPUs would become he wouldn't have chosen to use the GIL
That’s one reason I’ve preferred .Net. Put ahead of time compilation on top and it is glorious.
out of curiosity, why not kotlin? I had the impression it was the jvm language to reach for by default these days.
I am doing backend in Kotlin, but I must admit that Java has been catching up quickly, and it seems like Kotlin has been shifting its focus to Kotlin Multiplatform. Modern Java is a good, pleasant language and a safer bet.
Gradle with Kotlin DSL is nice, what's annoying is Gradle's constant API reshuffling for the sake of it that breaks plugins. Some plugins also introduce pointless breaking changes just to have a fancier DSL.
The IDE support is not an issue in practice, in my opinion, because IDEA is the best IDE for both Java and Kotlin. The official Kotlin LSP was released 6 months ago, but I haven't tried it.
I'm dabbling and like it but there is just SO MUCH JAVA code. There are 1000 Java examples for every 1 Kotlin. Maybe LLMs make this less of an issue now though.
Has too much sugar, and without JetBrains IDE you're stuck with a plain text editor. Not sure if it's generalizable to normal Kotlin or not, but learning Gradle Kotlin DSL made me want to rip my hair out when trying to understand what happens under the hood.
Is NuGet any different from npm?
It is, shall we say, more curated
It has the benefit of being late to the game (like .NET was to Java).
This isn't really anything user facing. It's just yet again an example of why monorepos are better.
Anything is a monorepo if you submodule hard enough lol
Why don't people use subtree?
https://www.atlassian.com/git/tutorials/git-subtree
The .NET source build team looked at subtrees (https://github.com/dotnet/arcade/issues/10257#issuecomment-1...).
> Introduces a very messy and complex history which would not work for the repo of our size
> Apparently the support in git is buggy and can lead to problems in the repo (the SO is full of examples)
> Doesn't support cloaking
(I think by "cloaking" they are referring to https://github.com/premun/dotnet/blob/766c564dd379e634c38739... )
.NET needs a Node-level developer experience and Rust/Zig-level performance, since the ecosystem rewrites in Node/Python land have made them more performant than ever.
I can't see .NET winning against those odds tbh
.NET has a far better developer experience than Node and is nearly as fast as Rust if written for performance, certainly much faster than Node or Python
numbers speak for themselves
Numbers are inflated not by choice but by force. Node is not a choice but a consequence of frontend-heavy work. And JavaScript was made good by TypeScript, created by the same guy who created C#.
Same goes with Python with its data science and ML/AI background.
And the general malus is Microsoft as a company.
In summary: it is not the tech. It is the landscape.
Would you mind providing yours as well as benchmarks used? All benchmarks I could find point to a different picture than described in parent comment
Bravo, such a well written article.
Feeling motivated enough to deep dive into .NET 10
I also use C# to earn my money. Sadly, the recent hyperinflation in language sugar and frameworks makes following new things quite hard. Even today, starting a new project, I choose .NET Framework 3.5 and its syntax. I know this sounds extreme, but 3.5 has everything I need to build great software, and it offers a very well-tested environment. Setting up the software stack is a very easy process. Programs targeting the v2 runtime also work on the v4 runtime, so a simple config file side by side with the exe makes it run on any Windows machine without any framework deployment.
.NET Framework 3.5 is so old it’s not available by default on Windows (maybe not available at all on the latest Windows), you’d probably have to work with ancient developer tooling to work with it, it’s probably unsupported and has security issues.
And that’s ignoring how you’re essentially severely handicapping yourself in terms of what is possible.
Unless you’re in an environment stuck 20 years in the past (which implies serious security liabilities considering they must be a Microsoft shop), this is a mind bogglingly bizarre strategy.
3.5 is approaching end of life in the next few years; you definitely should not be building anything new with it. There are a lot of QoL changes in modern .NET that make your life as a developer significantly nicer. Even for building Windows services, the modern Generic Host model is orders of magnitude better than anything in .NET Framework.
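To illustrate what the Generic Host model buys you: a long-running worker where the host handles startup, shutdown signals, DI and logging for free. This is a hypothetical sketch (the worker name is invented; it needs the Microsoft.Extensions.Hosting NuGet package):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

// A long-running background worker; the host calls ExecuteAsync and cancels
// the token on graceful shutdown (Ctrl+C, SCM stop, SIGTERM, ...).
public class HeartbeatWorker : BackgroundService
{
    private readonly ILogger<HeartbeatWorker> _logger;
    public HeartbeatWorker(ILogger<HeartbeatWorker> logger) => _logger = logger;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            _logger.LogInformation("still alive at {Time}", DateTimeOffset.Now);
            await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
        }
    }
}

// Entry point would be roughly:
//   var builder = Host.CreateApplicationBuilder(args);
//   builder.Services.AddHostedService<HeartbeatWorker>();
//   builder.Build().Run();
```

The same worker runs unchanged as a console app, a Windows service (via `UseWindowsService()`), or a Linux systemd unit, which is exactly what .NET Framework never gave you out of the box.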
I'm shocked that 3.5 is still supported, it came out in 2007!
I remember these days. But I have to say: .NET Core and .NET 5+ are awesome. They bring this ease you speak about into the cloud, into Linux, into containers. Obviously with the notable exception of UI development, but there the landscape has turned 5 times since 3.5 was released in 2007.
This is a strange take to me. I've been doing .NET for 25 years now. I don't see why you can't just write your code in "Framework 3.5" style in Visual Studio 2026/VS Code with .NET 10? I would think almost any file from 3.5 would just drop in and compile with practically zero changes. I can't even think of any new language features that are being forced upon the developer.
Reminds me of an algorithms professor who'd get us to implement some modified standard algorithm in Java and then go over each implementation really quickly at the beginning of the next lecture. He'd find parts of your code weird, replace them really quickly and leave your clean, working implementation with 20 deprecation warnings.
At the end of the day both solutions/syntaxes work, but..