I played with this a bit today. The only downside is that there's no easy way to update containers yet. But on the other hand, no more dealing with macvlan or custom docker networks.
By “update”, I assume you mean “recreate with a new image”?
I think docker itself doesn’t support that.
I use Docker compose to recreate containers with a new image regularly.
I'm sure you could be creative with volumes in Proxmox and build a new LXC container from a new OCI image with the old volumes attached.
> I use Docker compose to recreate containers with a new image regularly.
Try doing so without the compose file, though.
That's true, isn't it? It was one of those features you'd think they would have had figured out, but no.
The idea is that your container image is the thing you want, and is (relatively) immutable, so you delete and create containers when you want things to change. If you need state you can do that with volume mounts, but the idea is that you don't need to 'update' a container, you just replace it with a new one.
That's also what docker compose does, under the hood. It doesn't 'update' a container, it just deletes it and recreates it with the new image and the same settings/name/ports/volumes/etc.
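For what it's worth, the usual flow is just two commands (a sketch, assuming a compose project in the current directory):

    # pull whatever new images the compose file references
    docker compose pull
    # recreate only the containers whose image or config changed,
    # reattaching the same volumes, networks and published ports
    docker compose up -d

The durable state lives in the named volumes and the compose file itself, so the containers stay disposable.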
To the end user, this looks exactly the same as "updating".
If replacing a "regular" program that's just an executable and then restarting it is "updating", why isn't it the same for containers? Except then the "executable" is the container image and the "running program" is the actual container.
Another level would be "immutable" distributions: would you say they don't "update", they just "download a fresh image to boot from"?
Isn't the ability to do blue/green deployments, canary releases, and easy rollbacks a huge incentive to use containers?
I think virtually nobody cares about being able to change the image of a container when you can so easily start a new one.
* blue/green deployments
* canary releases
* easy rollback
Have never needed containers to do any of these things.
People figuring out how to use containers as pets.
With podman it's just `podman auto-update`, which will pull the latest version of the image down.
For some reason, though, that command updates all containers configured to auto-update (e.g., "AutoUpdate=registry" in the quadlet file). It would be nice to be able to pass a container name after the command, but that is unsupported.
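For reference, a minimal quadlet sketch with auto-update turned on (the unit name, image, and port are made up):

    # ~/.config/containers/systemd/myapp.container  (hypothetical unit)
    [Container]
    Image=docker.io/library/nginx:latest
    AutoUpdate=registry
    PublishPort=8080:80

    [Install]
    WantedBy=default.target

`podman auto-update --dry-run` will at least show what would be pulled, but the update itself is still all-or-nothing across every unit with auto-update enabled.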
Not too hard. Everything the container was created with is stored and visible if you inspect a running container.
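A sketch of digging the settings back out (the container name `myapp` is hypothetical):

    # dump the full create-time config: image, env, mounts, ports, restart policy
    docker inspect myapp

    # or pull out individual pieces with Go templates
    docker inspect --format '{{.Config.Image}}' myapp
    docker inspect --format '{{json .HostConfig.PortBindings}}' myapp
    docker inspect --format '{{json .Mounts}}' myapp

From there it's stop, rm, pull, and a fresh docker run with the same flags.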
They are converted to LXC images and then run. No compose file either. Still pretty neat.
It's unclear to me why running Docker directly in Proxmox (it's just Debian) and using it like any other Docker host is a bad idea, and why this extra layer of abstractions is preferable.
Docker has security issues if you're not careful, and it's frankly kind of a shitshow out of the box with defaults. Maybe that's part of the reason. But I struggle to see how a bespoke solution like this is the right answer.
Proxmox is a hypervisor OS, and its value comes from its virtualization and container-management features. These features include being able to pause, resume, snapshot, backup/restore from snapshot, and live-migrate VMs or LXCs to another server in just a couple hundred milliseconds of downtime. Once you run docker on the hypervisor itself, you lose these features, which defeats the purpose of running Proxmox in the first place.
There's also the security angle. Containers managed by Proxmox are strongly isolated from the host, but containers running on Docker sidestep this isolation model. Docker is not insecure by design, but it greatly increases the attack surface. If the hypervisor gets compromised, the entire cluster of servers will also get compromised. In general, as little software as possible should be installed on the host.
Still, then it would have been just a process in a namespace. There are ways to dump a process and then resume it.
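Presumably something along the lines of CRIU; a rough sketch for a plain process (the image directory and --shell-job are assumptions, and it needs root):

    # checkpoint a running process tree to disk
    criu dump -t <pid> -D /tmp/ckpt --shell-job

    # later, restore it from the same image directory
    criu restore -D /tmp/ckpt --shell-job

Whether that gets you anywhere near the pause/snapshot/migrate workflow Proxmox gives you for VMs and LXCs is another question.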
Largely management, observability, and the way that docker mucks with firewalls. Running them this way will allow Proxmox to handle all that in the same way (I assume) as the LXCs and VMs, so automation and all the rest can be consistent.
I've been running Docker natively on the host since Proxmox 7. The only major problem was an iptables rule that I had to add so that the containers are accessible from outside. Besides that, it runs smoothly.
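The exact rule will depend on the setup, but one common fix is an allow rule in Docker's DOCKER-USER chain (a sketch, assuming external traffic arrives on the default vmbr0 bridge):

    # deliberately broad -- tighten to specific ports/interfaces in practice
    iptables -I DOCKER-USER -i vmbr0 -j ACCEPT

And persist it however you normally manage iptables rules.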
It's a kind of apples vs pears problem:
You have a bunch of tooling that deals with apples. You have a clear conceptual picture of what an apple is and what it does.
Then someone brings you a pear. It's kind of like an apple but not exactly. Their pear however works well with some other toolscape that's beyond the shire. You want to do things with their pears.
You invent a way to put a pear inside an apple (docker in VM). That works but you lose some functionality and break some stuff in the conversion, plus now you don't have the clean conceptual integrity of your apple-only system.
This is a way to transform a pear into an apple.
#TIL Proxmox 9.1 is out.
I'm still on 8.x -- it was a fun way to consolidate my different hacky projects -- Home Assistant, Frigate, WireGuard, qBittorrent, etc.
Kinda scared to think of what it would take to upgrade to 9.1 :)
https://news.ycombinator.com/item?id=45980005
Do this: https://pve.proxmox.com/wiki/Upgrade_from_8_to_9 No need to overthink it too much. I've done several so far.
Run the pve8to9 script first to do some sanity checks (it should already be installed if the system is up to date).
Update the box to latest 8.x with apt update etc. Change the package sources to the new ones and update the system.
The package sources can be a bit confusing: you have two lots - stock Debian and Proxmox (enterprise OR no-subscription).
Stock Debian is in the single file /etc/apt/sources.list - change "bookworm" to "trixie".
Proxmox sources are in a file in /etc/apt/sources.list.d/. Remove all of the Proxmox-related ones you have there and run this (or do it yourself with an editor) - see the rough command sketch below. This example is no-sub - the official doc notes the enterprise equivalent.
Run apt dist-upgrade, then the pve8to9 script again, and then reboot. If in doubt, choose Y to install the maintainer's version when prompted. There are notes in the doc about several packages. Job done.
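Roughly, the no-subscription path boils down to something like this (a sketch -- the exact repo line and the newer deb822 format are in the official doc, so treat this as an approximation):

    # sanity check; rerun it again after the upgrade
    pve8to9 --full

    # stock Debian sources: bookworm -> trixie
    sed -i 's/bookworm/trixie/g' /etc/apt/sources.list

    # Proxmox no-subscription repo (after removing the old Proxmox entries
    # in /etc/apt/sources.list.d/; enterprise users: see the doc for the
    # pve-enterprise equivalent)
    echo "deb http://download.proxmox.com/debian/pve trixie pve-no-subscription" \
        > /etc/apt/sources.list.d/pve-no-subscription.list

    apt update
    apt dist-upgrade
    pve8to9 --full
    reboot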
I just followed their guide last week and was surprised how smooth it went. Their documentation seemed very thorough. I kinda expected a few issues, but everything worked flawlessly. Seems like they do a pretty good job of detecting most of the edge cases that would cause issues. Granted, my installation hasn’t been modified too heavily outside the norm. I think I had one or two modified config files I had to edit, but the helper script found and told me about them and how to handle it.
I had put off the upgrade for a while figuring it would be a breaking change. But it went so smoothly I’ll probably be upgrading to 9.1 pretty soon.
Quite. It's almost as though the docs are written by people who actually use it.
I was (still am, sadly) a VMware consultant for about 25 years. It makes me laugh when I hear breathless "enterprise noises" with regard to VMware and how PVE isn't quite ready yet.
PVE is just so easy and accommodating. It's Linux on Debian with a few knobs on. The web interface is so quick and uncluttered and simple. The clustering arrangements are superb and simple. The biggest issue for me and many like me was how to deal with iSCSI SANs (no snapshots - long story). It turns out you can pull the SSDs out of a Dell Msomething SAN and whack them into the hosts and you have a hyperconverged Ceph thingie with little effort.
VMware rapidly gets very expensive. Nowadays with Broadcom you have to fork out for the full enterprise thing to get DRS and vDS - that's auto balancing clusters and funky networking. PVE gifts you Open vSwitch support out of the box and all clusters are equal. Storage DRS (migrate virty hard discs on the fly) is free on PVE too. Oh and you get containers too on PVE - VMware Tanzu is seriously expensive.
Anyway, I could grind on about this for quite some time but in my opinion, PVE is a far better base product in general for your VMs. A vCentre is a horrendous waste of resources and the rest of VMware's appliances are pretty tubby too. I recall evaluating their first efforts at SDN with edge firewalls and so on - no thanks!
My homelab upgrade from 8.x to 9.x was pretty smooth from following their upgrade guide[1]. I just upgraded from 9.0 to 9.1 this morning without any issues.
[1] https://pve.proxmox.com/wiki/Upgrade_from_8_to_9
Do you happen to know if they fixed the memory ballooning issue?
What issue?
For what it’s worth I went through the upgrade last weekend. There is a compatibility check script and, frankly, the whole process proxmox had described on their site worked precisely as advertised.
5-host cluster; rebooted them all at completion, and all of the guests came back up without issue (a combination of VMs and LXCs).
I have an "error" "I am not a teapot"
719 - I am not a teapot Espresso Web (Red Hat Enterprise Linux) at raymii.org
Looks suspicious ... not 418, but 719.
I think 418 is 'I am a teapot' so it would not be correct to use it in your case. 719 must be a typo though, perhaps it should be 419.
Haha, this was funny. https://datatracker.ietf.org/doc/html/rfc2324
This is something I've always loved about Unraid. The whole apps/containers ecosystem is so well done.
Is this similar to what FlyIO is doing? Running containers as microVMs?
Perhaps in spirit? But I don't think you can term LXC a microVM, and I doubt they start close to as fast as Firecracker or smolbsd and similar ilk. EDIT - it appears I am probably wrong about Firecracker being faster than LXC, since LXC is kernel-level containerization and likely has faster startup than microVMs?
Firecracker would start faster; LXC would perform better. Firecracker should have better actual isolation... I think.