• 1 Post
  • 17 Comments
Joined 1 year ago
Cake day: June 8th, 2023


  • I love Traefik! When I started, I tried NGINX, but could not wrap my head around it. So I tried Caddy. Pretty easy to understand, and I used it for a while. Then I had demands Caddy could not meet and stumbled upon Traefik. As you said, there is a learning curve, but for me it was much easier than NGINX. I like that you can put the Traefik config inside the Compose files and that a service is only active in Traefik when its containers are actually up and running. I added CrowdSec to my external-facing Traefik instance and even use a plain Traefik instance for all my internal services as well. And it can forward HTTP, HTTPS, TCP and UDP.
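
    In case it helps to picture the "config inside the Compose file" part, here is a minimal sketch of Traefik v2 labels on a service. The service, hostname, entrypoint and certresolver names are placeholders I made up; adjust them to your own Traefik setup:

    ```yaml
    services:
      whoami:                                    # hypothetical demo service
        image: traefik/whoami
        labels:
          - "traefik.enable=true"                # the router only exists while this container is running
          - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
          - "traefik.http.routers.whoami.entrypoints=websecure"
          - "traefik.http.routers.whoami.tls.certresolver=letsencrypt"
    ```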










  • The thing that stuck with me was that I always had the impression that the video quality was much worse than on YouTube. IIRC, when content was available on both platforms, YouTube had the much better picture and sound. But maybe that was just specific to the content I watched back then. There was not THAT much to see in the beginning, not like today where you can spend 24h straight and always see new stuff :-)




  • Setup of the HMAC key for CouchDB was indeed the step I struggled with too. The first time, I think I either made a mistake or used a broken website to generate the Base64 value. The second time, my mistake was that I put the Base64 value for the HMAC key into the jwt.ini AND into the docker-compose.yml. But COUCHDB_HMAC_KEY in the docker-compose.yml has to be unencoded, while hmac:_default in the jwt.ini has to be Base64 encoded. Maybe this is the thing you got wrong too?

    I bet you are close!
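
    To illustrate, here is a minimal sketch with a made-up placeholder secret (the [jwt_keys] section name is how I remember it from the CouchDB docs, so double-check it): the same secret goes into both files, but only one of them is Base64 encoded:

    ```
    # docker-compose.yml (environment section) — the secret goes in unencoded
    environment:
      - COUCHDB_HMAC_KEY=my-plain-secret

    # jwt.ini — the same secret, but Base64 encoded
    [jwt_keys]
    hmac:_default = bXktcGxhaW4tc2VjcmV0
    ```

    You can generate the encoded value locally with `echo -n 'my-plain-secret' | base64` instead of trusting a random website.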

    On the other hand, if you are the only person using the shopping list and your current setup offers what you need, maybe it is not worth it for you. For me it was (and updating it once it runs is super easy, I promise!). The instant sync across all devices is great, plus it keeps working when I lose reception in a shop and syncs again instantly when I have internet again. But what makes Groceries for me are:

    • The ability to have an item on multiple shopping lists if needed; if it is checked off on one list, it is checked off on the other lists too. I stopped forgetting to buy things in the 2nd shop that were not available in the 1st.
    • The ability to assign items to aisles and put the aisles in a different order for each list (every shop I visit has a slightly different layout). This made shopping super quick for me, because I enter the shop, walk through it exactly once and have everything I need, since it is all in the correct order on the respective list.

    Oh, and adding a photo to an item is super useful if you are like me and need very precise instructions on what to get for your partner when you stand in front of a shelf with 100 different types of cheese that all look exactly the same to you… having a photo is sometimes a life saver for me :-)


  • As others mentioned, you probably do not need VMs. If you thought about VMs because of isolation, then yes, that might be a good idea.

    In an ideal world, if I had the budget / hardware, I would have a server with multiple NICs (Network Interface Cards) connected to different ports on my firewall for LAN and DMZ. Then I would create VMs for LAN and DMZ and run the Docker containers needed for each zone on them. Everything that is accessible from the Internet goes into the DMZ, the rest into the LAN. I could lock it down further by creating 2 DMZ zones and only putting, let's say, NGINX or Traefik into the zone that gets exposed, with the services behind the reverse proxy in the 2nd DMZ zone, which would still be isolated from the LAN.

    But since I only have a small box with 1 NIC, I instead created VLANs on my router and created a Docker network for each VLAN. Every single service I run is a Docker container in one of the VLANs, appropriate to its level of exposure. I have one VLAN called LAN that is obviously connected to my LAN, and 2 other VLANs where I basically do what I described above: one holds Traefik and has ports exposed to the Internet, and the other VLAN hosts the services which are accessible through Traefik. With that setup you at least isolate network traffic, and it is something I would look into if you plan to expose any of your services to the internet. Usually when you start with Docker, you would probably just expose ports from the containers, which get mapped to the IP of your host… and so all those containers have access to your LAN. At least try to separate that.
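
    In case it helps, this is roughly what one of those per-VLAN Docker networks can look like in a compose file. A sketch only: I am assuming the macvlan driver, VLAN 20 on an interface called enp1s0 and made-up addresses; your NIC name, VLAN IDs and subnets will differ:

    ```yaml
    networks:
      dmz_proxy:                       # hypothetical name for the exposed VLAN
        driver: macvlan
        driver_opts:
          parent: enp1s0.20            # VLAN 20 sub-interface on the host NIC
        ipam:
          config:
            - subnet: 192.168.20.0/24
              gateway: 192.168.20.1
    ```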

    The next thing I wanted to do was run my containers rootless, which means that no container has root permissions: if something within a container manages to make the Docker service do something malicious on the host, it should not be able to run as root. The caveat here is that Docker does not support VLANs in rootless mode. I spent half a day converting everything to Podman, because people were praising Podman left and right for rootless setups, but then I found out that Podman does not support VLANs in rootless mode either :->

    Using VMs as described above would make the “I can not use Docker rootless” problem less of an issue, but I decided against VMs because of resources / budget.

    What I can recommend when you start: do not try to make things too complicated until you are familiar with Docker and understand what you are doing. As you get better, you will want more and learn more stuff as you go.

    You could just install a Linux distribution you are familiar with (I use Ubuntu Server 22.04 LTS), install Docker and play around with it a bit to see how everything works. Only start exposing services to the Internet once you know what you are doing.
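
    Getting to that playground is quick. A sketch of what I mean, using Docker's convenience script on Ubuntu (read the script before piping it into a shell):

    ```sh
    curl -fsSL https://get.docker.com | sh   # installs the Docker engine
    sudo usermod -aG docker $USER            # optional: run docker without sudo (log out and back in afterwards)
    docker run --rm hello-world              # quick sanity check that everything works
    ```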

    Maybe a few tips or keywords for you, from the stuff I went through step by step, for later use.

    • If you expose Services to the Internet, use a Reverse Proxy you think you will understand (NGINX, Traefik, Caddy…)
    • Try to segment your network, if your hard- / software allows it, to separate LAN services from services exposed to the Internet
    • Start documenting your setup from the beginning! If you are like me, everything is clear as you do it… but when I come back a month later I wonder how I set up the VLANs or what each Environment Setting does for a specific container etc ;-)
    • Instead of using Docker volumes, think about mapping container directories to directories on the host instead. All my containers have their data under /opt/<container> and all my docker-compose files are in another, separate directory (see the compose sketch after this list).
    • Implement a Backup solution early on (I use kopia, which backs up my compose directory and /opt, which should be everything I need to set up everything again on a new host)
    • Once you have a few containers up and running and think you are familiar with how they work, start using docker-compose. Having a compose file for each container makes updating and maintaining them super easy. There is an updated image for a container? Just run docker-compose up -d and you are done. You need a variation of a container for testing? Copy the compose file, make adjustments and run it.
    • I use watchtower to automatically check if new Docker images are available. I use it in monitoring mode: it will check for and download new images, but will not restart the containers. Instead I receive an e-mail from watchtower. I can then check whether the update is for a container exposed to the internet, let kopia do another backup run, and just do a docker-compose up -d to restart / update the respective container, check that it still does what it should, and I am done (see the watchtower sketch after this list).
    • Did I mention that you should document everything you do? If you are like me and have a memory like an earthworm, you should document your setup from the beginning ;-)
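
    The bind-mount point above in practice, as a sketch (the service name, image and paths are placeholders; only the /opt/<container> pattern is the part I actually do):

    ```yaml
    services:
      someapp:                       # hypothetical example service
        image: someapp:latest        # placeholder image name
        volumes:
          - /opt/someapp:/data       # host directory instead of a named Docker volume
    ```

    Updating is then just a docker-compose pull followed by docker-compose up -d in that service's compose directory (or only up -d if the new image was already downloaded, e.g. by watchtower).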

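    And the watchtower monitoring setup, roughly. This is a sketch from memory of the containrrr/watchtower options, so double-check the variable names and fill in your own mail server details before using it:

    ```yaml
    services:
      watchtower:
        image: containrrr/watchtower
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock     # lets watchtower watch the other containers
        environment:
          - WATCHTOWER_MONITOR_ONLY=true                  # check and notify only, never restart containers
          - WATCHTOWER_NOTIFICATIONS=email
          - WATCHTOWER_NOTIFICATION_EMAIL_FROM=watchtower@example.com
          - WATCHTOWER_NOTIFICATION_EMAIL_TO=me@example.com
          - WATCHTOWER_NOTIFICATION_EMAIL_SERVER=mail.example.com
    ```
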
    All in all: do not rush it, and do not feel pressure to do everything I wrote. You might even come up with other solutions that fit you much better than what I or others here are doing. The most important things? Have fun, and think twice about what you expose to the public and how :-)