  • glizzyguzzler@lemmy.blahaj.zone to Selfhosted@lemmy.world · Podman or rootless docker?

    Hey bigdickdonkey, I recently tried and wasn’t able to shit my way through Podman; there just isn’t enough chatter and there aren’t enough guides about it yet. I plan to revisit it when Debian 13 comes out, which will include Podman quadlets. I also tried to get Podman quadlets working on Ubuntu 24 and got closer, but still didn’t manage it, and Ubuntu is squicky.

    I read about true rootless Docker (the daemon itself running as a regular user) and decided it was too finicky to keep up to date; from what I could tell, updating it needs some annoying extra steps. I was also planning on many users having their own containers, which would have gotten annoying to manage. For a single user it might be an OK burden.

    The Podman people make a good argument for running Podman as root and using userns to divvy out UIDs to each container to achieve rootless-style isolation (https://www.redhat.com/en/blog/rootless-podman-user-namespace-modes), but since Podman is on the back burner until there’s more community around it and Debian 13 is out, I applied that idea to Docker.

    So I went with root Docker with the goals of:

    • read-only root filesystem
    • set user to different UID:GID for each container
    • silo containers in individual Docker networks
    • nothing gets /var/run/docker.sock
    • cap_drop: all
    • security-opt=no-new-privileges
    • volumes all get tagged with :rw,noexec,nosuid,nodev,Z

    Basically it’s the security best practices from this list https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html
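
    If it helps, here’s a rough sketch of those goals as a single docker run. The network name, UID:GID, paths, and image are all made-up placeholders to tune per app, and the volume flag string just mirrors the list above:

        # one isolated network per app stack
        docker network create app1-net

        # hardened container: non-root UID:GID, read-only root filesystem,
        # no capabilities, no privilege escalation, tagged volume,
        # and no /var/run/docker.sock anywhere in sight
        docker run -d --name app1 \
          --user 2001:2001 \
          --read-only \
          --cap-drop ALL \
          --security-opt no-new-privileges \
          --network app1-net \
          -v /srv/app1/data:/data:rw,noexec,nosuid,nodev,Z \
          some-image:latest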

    This still carries the risk of the Docker daemon itself being attacked from inside a container somehow, which Podman’s daemonless design eliminates, but it’s as close to the Podman ideal as I can get with my current knowledge.

    Most things will run rootless + read-only + cap_drop with minor fiddling. Automatic Ripping Machine would not, but that project is a wild ride of required permissions. Everything else has succumbed, but I’ve sometimes needed a “pre-launch container” to do permission changes or make somewhere like /opt writable.
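
    The “pre-launch container” is nothing fancy, just a one-shot root container that fixes ownership before the hardened app container starts (UID:GID and paths here are illustrative):

        # throwaway root container: chown the volume to the app's UID:GID, then exit
        docker run --rm \
          -v /srv/app1/data:/data \
          alpine:latest \
          chown -R 2001:2001 /data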

    I would transition one app stack at a time to the best security practices, and it’s easier since you don’t need to change container managers. Hope this helps!



  • Good to know Proxmox’s bad updates are a recurring pattern and not just this latest one.

    I have been able to install Docker inside the LXC containers and pull images with the normal commands. I do that container-in-container setup to get effectively rootless Docker containers for the stuff I couldn’t figure out how to run rootless directly. So you don’t even lose out on Docker if you’re determined! And as you said, Incus goes on any OS, so you can run Docker just fine on the base OS of your choice and use Incus for specific things.
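
    If you do that under Incus, the main switch to flip is nesting; a sketch, with the container name and image as examples:

        # LXC container that's allowed to run its own containers inside
        incus launch images:debian/12 dockerbox -c security.nesting=true

        # hop in and install Docker the normal way
        incus exec dockerbox -- bash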







  • Incus is way easier to work with than Proxmox, and it sits on your OS of choice instead of being the OS you must use. For home use the web UI makes it very approachable, and it even has clustering if you want to go hard.

    So you can install Incus when you want a VM or LXC container and not have to commit to a dedicated hypervisor OS from the start.
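
    Getting the web UI going is about one command, assuming your Incus package bundles the UI (the zabbly ones do, I believe):

        # expose the Incus API on 8443, then browse to https://<host>:8443
        incus config set core.https_address :8443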

    Also, Proxmox free just had a bad update that björked some stuff if you updated while it was live. Proxmox free is rolling and apparently lacks basic sanity checks for updates.


  • Your budget is really near a UniFi Dream Router (https://store.ui.com/us/en/collections/unifi-dream-router/products/udr). Your family is gonna be way happier with you (zero downtime), and it’ll give you extender options if you ever need them. UniFi is good enough and they update regularly; just disable the cloud access stuff and you’re good.

    Otherwise you want Opnsense instead of Openwrt. The upgrade process for Openwrt is not automatic, while Opnsense’s is. It’s worth it not to have to dote on your router.

    And you should get an access point (a UniFi one or TP-Link Omada something); wifi is problematic with Openwrt, and I’m not sure Opnsense even lets you do it (haven’t tried).

    And you’ll need a switch, dumb or managed, up to you if you want VLANs. The Opnsense box will have just one LAN port, so it requires a switch if you want to plug more than one thing into it. A switch with PoE+ can power the access point directly.

    Opnsense needs x64 arch (Intel or AMD CPUs). Get a small thin client like a Dell Wyse 5070 Extended, an HP t730, or the mentioned Fujitsu Futro S720 (its CPU is old though, you can do better). There may be newer thin clients; you just want a mini PCIe slot so you can install a 2-port Intel gigabit card from eBay. Google “power efficient gigabit mini PCIe card”: there’s an older model that sucks power and a newer one that doesn’t. If you want more than gigabit, skip 2.5GbE on Intel unless you google hard and are ready for extra power draw. There’s very limited point to 4-port cards; just go to higher speeds instead of aggregating ports, since a switch switches better than the router can, and that removes CPU overhead and leaves more for actual routing work. A 2-port card is the way.

    Slap Incus (superior but newer, with fewer guides; LXD is its previous name if you’re googling) or Proxmox (good enough, with more guides) on it. Make a VM, pass through the 2 ports of the PCIe card, and put Opnsense in the VM. Make an LXC container, slap Debian on it, and spin up the UniFi controller for your AP. Add another container for AdGuard Home or Pi-hole and you’ve got a box that does all the basic nets in one. The built-in port on the thin client is how you’ll access the underlying OS; it gets plugged into the switch you’ll have to get. If you’ve got something with 2 gigs of RAM and an AMD Geode/GX or an aged Intel Atom CPU, I’d run only Opnsense, no hypervisor stuff.
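
    In Incus terms that’s roughly the following; the VM/container names and PCI addresses are placeholders (find your card’s addresses with lspci):

        # VM for Opnsense, with both NIC ports passed through by PCI address
        incus init opnsense --empty --vm -c limits.cpu=2 -c limits.memory=4GiB
        incus config device add opnsense port0 pci address=0000:03:00.0
        incus config device add opnsense port1 pci address=0000:03:00.1

        # Debian LXC container for the UniFi controller
        incus launch images:debian/12 unifi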

    Sorry for the info dump but there’s a lot of angles!

    But really, the Unifi dream router is much easier and solves it all-in-one. You need 3 pieces (router, wifi access point, Ethernet switch) for a good experience otherwise.



  • It looks like regular PSUs are isolated from mains ground with a transformer, which means two PSUs’ DC grounds will not be connected. That will likely cause problems for you: current will have to back-flow through places that do NOT expect back-flowing current, to account for the voltage difference between the two ground potentials. Hence it might damage the GPU, which is going to be the mediator between these two PSUs, and maybe the mobo if everything goes to shit.

    Now, I am not saying this will be safe, but you may avoid that issue by tying the grounds of the two PSUs together. You’d still have the issue where, say, PSU1’s 12V plane meets PSU2’s 12V plane: they’re inevitably not at the exact same voltage, so you get back-flowing current again, which is bad because, again, nothing is designed for that situation. It’s kind of like pairing mismatched lithium batteries in parallel: the higher-voltage one back-charges the other and they explode.
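
    To put rough, purely illustrative numbers on it: the interconnect between the two rails is nearly a dead short, so even a small mismatch drives a huge current by Ohm’s law,

        I = \frac{\Delta V}{R} = \frac{12.1\,\mathrm{V} - 11.9\,\mathrm{V}}{10\,\mathrm{m\Omega}} = 20\,\mathrm{A}

    and that’s 20 A flowing through wiring and planes that were never meant to carry current between supplies.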



  • It accomplishes the same thing as Proxmox (VMs and LXC containers, which are “lite VMs” for when you’d want a Linux VM). I recently learnt about it too! It is newer, but it was backed by Canonical up until the LXD/Incus split, so it’s very solid. The split happened because Canonical tried to control LXD heavily, so the community forked it and renamed the fork Incus.

    I just used Incus and it’s very nice. Use profiles to create one for GPU passthrough and one for macvlan, among others you’ll find you want, then make instances as needed! It was easier for me to use than Proxmox.
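
    The profile flow looks about like this (the parent interface and instance names are placeholders for your own):

        # profile that gives instances a macvlan NIC on the host's interface
        incus profile create macvlan
        incus profile device add macvlan eth0 nic nictype=macvlan parent=enp3s0 name=eth0

        # stack it on top of the default profile when launching
        incus launch images:debian/12 mybox --profile default --profile macvlan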


  • First try an HDMI dummy plug, in case the thing doesn’t dig having no screen attached (classic Intel firmware).

    Then try Debian + Incus: fewer Proxmox shims to go wrong. Install Incus via the “zabbly” repo mentioned on the Incus install page. Search for “LXD” if Incus help/guides aren’t enough for you; they’re the same thing (for now). Providing an ISO in Proxmox is really clunky, and Incus smooths that out so nicely. And again, fewer Proxmox shims to go wrong.
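
    For contrast, the whole ISO dance in Incus is about three commands (VM name, sizing, and ISO path are examples):

        # empty VM, attach the ISO as a bootable disk, take the console for the installer
        incus init debianvm --empty --vm -c limits.cpu=2 -c limits.memory=4GiB
        incus config device add debianvm iso disk source=/path/to/installer.iso boot.priority=10
        incus start debianvm --console=vga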