I like my Linux installs heavily customized and security-hardened: enough that copying over /home won’t cut it, but not so much that things break when updating Debian. Whenever someone mentions reinstalling Linux, I instinctively get nervous thinking about the work it would take to get from a vanilla install to my current configuration.

It started a couple of years ago when, dreading the work of configuring Debian to my taste on a new laptop, I decided to instead just shrink my existing install to fit the new laptop’s drive and dd it over. I later made a VM from my install, stripped out personal files and obvious junk, and condensed it into a 30 GB raw disk image, which I then deployed on the rest of my machines.

That was still a bit too janky, so once my configuration and installed packages stabilized, I bit the bullet, spun up a new VM, and painstakingly replicated my configuration from a fresh copy of Debian. I finished with a 24 GB raw disk image, which I can now deploy as a “fresh” yet pre-configured install, whether to prepare new machines, make new VMs, fix broken installs, or just because I want to.

All that needs to be done after dd’ing the image to a new disk is the following (sketched as rough shell commands after the list):

  • On some machines: boot grubx64.efi/shimx64.efi from Ventoy, then “bless” the new install with grub-install and update-grub
  • Re-encrypt the LUKS root partition with a new passphrase
  • Configure the user and GRUB passwords
  • Set the hostname
  • Install updates and drivers as needed
  • Configure for high DPI if needed
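
Roughly, as shell commands (a sketch only; device names like /dev/nvme0n1p3, the hostname, and the username are placeholders, not values from my actual setup):

    # re-encrypt the LUKS root with a fresh volume key, then set a new passphrase
    cryptsetup reencrypt /dev/nvme0n1p3
    cryptsetup luksChangeKey /dev/nvme0n1p3

    # "bless" the new install (from the booted system or a chroot)
    grub-install --target=x86_64-efi --efi-directory=/boot/efi
    update-grub

    # identity and credentials
    hostnamectl set-hostname new-machine
    passwd myuser
    grub-mkpasswd-pbkdf2    # paste the resulting hash into GRUB's user config

    # catch up on packages
    apt update && apt full-upgrade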

I’m interested to hear if any of you have a similar workflow or any feedback on mine.

  • Unmapped@lemmy.ml

    You should check out NixOS. You make a config file that you can just copy over to as many machines as you want.
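
    For instance, reproducing a machine can be roughly this (a sketch assuming a plain configuration.nix rather than flakes; newhost is a placeholder):

        # copy the config to the new machine and rebuild it in place
        scp /etc/nixos/configuration.nix newhost:/etc/nixos/
        ssh newhost sudo nixos-rebuild switch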

    • 4am@lemm.ee

      That or Ansible, if you will have a machine to deploy from

      • TunaCowboy@lemmy.world

        if you will have a machine to deploy from

        You can run Ansible against localhost, so you don’t even need that.
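
        Something like this (playbook.yml being whatever playbook holds your config):

            # apply a playbook to the local machine; no SSH or deploy host involved
            ansible-playbook --connection=local --inventory localhost, playbook.yml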

      • Possibly linux@lemmy.zip

        You don’t need a machine to deploy from. You just need a Git repo and ansible-pull. It will pull down and run playbooks against the host. (Target localhost to run it on the local machine.)
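
        A sketch, with a placeholder repo URL (if no playbook is named, ansible-pull falls back to local.yml):

            # run on the target machine itself: clone the repo, then apply locally
            ansible-pull --url https://git.example.com/me/playbooks.git local.yml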

  • lordnikon@lemmy.world

    That workflow seems fine if it works for you. It seems like overkill for Debian, but if it works I don’t see anything wrong with it.

    One way I do it is dpkg --get-selections > package.txt to get a list of all installed packages to feed into apt on the new machine. Then I set up two stow directories, one for global configs and one for dotfiles in my home directory; when a change is made, I commit and push to a personal Git server.

    Then, when you want to set up a new system, it’s a minimal install followed by apt install git stow.

    Then clone your repos, grab package.txt, restore the package list with dpkg --set-selections < package.txt and apt-get dselect-upgrade, run stow on each stow directory, and you are back up and running after a reboot.
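
    Put together, that restore path looks roughly like this (repo URL and stow package names are placeholders):

        # on the old machine: capture the package list
        dpkg --get-selections > package.txt

        # on the new machine, after a minimal install:
        apt install git stow
        git clone https://git.example.com/me/dotfiles.git
        dpkg --set-selections < package.txt
        apt-get dselect-upgrade    # installs everything listed in package.txt

        # symlink each stow directory into place
        cd dotfiles && stow bash vim git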

  • Possibly linux@lemmy.zip

    Use configuration tooling such as Ansible.

    You could also build an image builder to produce your system image, using things like Docker and/or Ansible to repeatedly arrive at the same result.

  • ouch@lemmy.world

    Just put your system configuration in an Ansible playbook. When your distro has a new release, go through your changes and remove the ones that are no longer relevant.

    For your home directory, I recommend a dotfiles repository with subdirectories for each tool: bash, git, vim, etc. Use GNU Stow to symlink the required files into place on each machine.
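
    As a sketch, with hypothetical tool directories (Stow symlinks into the parent directory by default, so a repo directly under $HOME needs no extra flags):

        # layout: ~/dotfiles/bash/.bashrc, ~/dotfiles/git/.gitconfig, ~/dotfiles/vim/.vimrc
        cd ~/dotfiles
        stow bash git vim    # creates ~/.bashrc, ~/.gitconfig, ~/.vimrc symlinks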

  • data1701d (He/Him)@startrek.website

    You might be able to script something with debootstrap. I tested Bcachefs on a spare device once and couldn’t get through the standard Debian install process, so I ended up using a live image to debootstrap the drive. You should be able to give it a list of packages to install and copy your configs over to the partition.
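
    A rough sketch of that approach (the suite, device, package list, and config path are all placeholders):

        # from a live image: bootstrap Debian onto the prepared root partition
        mount /dev/sdX2 /mnt
        debootstrap --include=git,vim,network-manager bookworm /mnt http://deb.debian.org/debian

        # copy configs over, then finish up inside a chroot
        cp -r /path/to/your/configs/. /mnt/etc/
        chroot /mnt apt install linux-image-amd64 grub-efi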

  • darius@lemmy.ml

    I have the exact same workflow, except I have two images: one for legacy/MBR and another for EFI/GPT. Once I read your post, I was glad to see I’m not alone haha!

  • boredsquirrel@slrpnk.net

    I did the same, exactly the way you did, but my “zygote” isn’t as advanced.

    I should make a raw image too, but currently I just use Clonezilla (which shrinks and resizes automatically) and have a small SSD with a nearly vanilla system.

    Just because the Fedora ISO didn’t boot.

  • ezekielmudd@reddthat.com

    I believe Proxmox does something like this, since I have installed/created containers from their available images. I wonder how they create those container images?