• otl@lemmy.sdf.org · 1 year ago

    In a word: convenience.

    It was in the right place at the right time with easy UX. A big part of its audience was developers in the commercial software world who weren’t so familiar with sysadmin work. It provided an easy way to get a kind of executable package: devs could throw in all their Python/Ruby/JS dependencies and not worry about it. “Works on my machine” was basically good enough, because you just ship the whole damn thing over.
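    Something like this minimal (hypothetical) Dockerfile captures the idea: the app and every dependency ride along in one image, so the target machine only needs Docker itself:

    ```Dockerfile
    # Minimal sketch of "ship the whole runtime" (file and image names are hypothetical).
    FROM python:3.12-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    CMD ["python", "app.py"]
    ```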

    Docker then supervised the process for you, too. The whole Docker package took care of a lot of things at once.
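    To illustrate the supervision bit (the image name here is made up), a single command starts the container in the background and has the Docker daemon bring it back up if it dies:

    ```sh
    # Run detached; the daemon restarts the container unless you stop it yourself.
    docker run -d \
      --name myapp \
      --restart unless-stopped \
      example/myapp:1.0
    ```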

    PS: for those really interested in containers, I always recommend looking into Plan 9: the OS from the original UNIX team, intended as a successor to UNIX. Every process has its own namespace and the whole OS is built around that concept (plus a few other core things… too much to go into here). See also https://pdos.csail.mit.edu/~rsc/plan9.html

    • AggressivelyPassive@feddit.de · 1 year ago

      Don’t forget configuration. A properly built Docker image can be configured purely via environment variables, which are all in one place. That’s much more transparent than having 20 locations with tiny changes to the defaults.
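      As a rough sketch (image and variable names are invented for illustration), the same image picks up its whole configuration at start-up:

      ```sh
      # Everything the app needs to know arrives as environment variables.
      docker run -d \
        -e DATABASE_URL=postgres://db.internal:5432/app \
        -e LOG_LEVEL=debug \
        example/myapp:1.0
      ```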

      There are obviously edge cases where this doesn’t work, but even then you still have just a bundle of config files in one place.