I’ve got Forgejo configured and running as a custom Docker app, but I’ve noticed there’s a community app available now. I like using the community apps when available, since I can keep them updated more easily than having to check and update image tags myself.

Making the switch would mean migrating from SQLite to Postgres, plus some amount of file restructuring. It’ll also tie my setup to TrueNAS, which is a platform I like, but after being bitten by TrueCharts I’m nervous about getting too attached to any platform.

Has anyone made a similar migration who can give suggestions? All I know about the Postgres config is where the data is stored, so I’m not even sure how I’d connect to import anything. Alternatively, is there a better way to get notified about and apply container image updates for custom apps?

  • MangoPenguin@lemmy.blahaj.zone · 5 hours ago

    No IMO.

    Docker is universal; you can easily migrate to any system. If you migrate, you’re stuck on TrueNAS.

    Also, you can use Watchtower for auto-updates, with major-version pinning where needed (e.g. Postgres), or one of the many Docker images that notify you when updates are available.
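
    For example, a compose sketch of that setup might look like this (the service names, interval, and password are placeholders):

    ```yaml
    # Watchtower auto-updates whatever it sees; the DB stays pinned to a major.
    services:
      watchtower:
        image: containrrr/watchtower
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        command: --cleanup --interval 86400   # check once a day
      db:
        image: postgres:16   # major-version pin: only 16.x updates get pulled
        environment:
          POSTGRES_PASSWORD: change-me
    ```

    Because the tag is postgres:16, Watchtower only ever pulls new 16.x images for it, so you never get surprised by a major-version bump that needs a manual migration.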

  • Dremor@lemmy.world · 8 hours ago

    As a fellow TrueNAS user, I’d advise you to wait a little, especially if you already have a working deployment.
    The action runner isn’t available yet, and there are still some bugs to iron out (the wrong password gets used for the database; you have to correct it manually on first init).

  • WASTECH@lemmy.world · 18 hours ago

    I moved all of my Docker containers over to TrueNAS apps recently, and it’s been great so far. That said, I think the best option for keeping your compose files and all that would be to upgrade to 25.04 (Fangtooth), which lets you deploy containers using compose YAML. Each app has to be in its own YAML file, which can be a bit of a pain, but you would fully own everything, so no need to worry about another rug pull.

    Alternatively, I’ve seen some people just install Dockge and run all of their containers inside of that.

  • Justin@lemmy.jlh.name · 1 day ago

    Nah, you’re probably not going to get any benefits from it. The best way to make your setup more maintainable is to start putting your compose/kubernetes configuration in git, if you’re not already.
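
    If you haven’t used git before, getting started is small. A sketch, assuming your compose files live under ~/stacks (the paths and the ignore list are just examples):

    ```shell
    # Sketch: start tracking compose configs in git (paths are examples).
    mkdir -p "$HOME/stacks"
    cd "$HOME/stacks"
    git init -q
    git config user.email "you@example.com"   # one-time setup on a fresh box
    git config user.name "you"
    printf '%s\n' '.env' 'data/' > .gitignore   # keep secrets and volumes out
    git add .gitignore
    git commit -q -m "track compose configs"
    ```

    From there, every change to a compose file is a git add and git commit away from being recoverable, and git diff shows exactly what changed when something breaks.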

    • irmadlad@lemmy.world · 1 day ago

      setup more maintainable is to start putting your compose/kubernetes configuration in git, if you’re not already.

      I don’t want to derail this thread, but you piqued my interest in something I’ve always wanted to do, maybe just for the learning aspect, and to see what I could accomplish.

      I’ve always wanted to see if I could commit all my docker compose/run files, and the various associated files, to a git repo, so that I could reinitialize a server with everything I had installed previously. I could just fire up a script and have it pull all my config files, docker images, the works from the repo, and set up a server with basically one initial script. I’ve never used GitHub or others of that genre, except for installation instructions for a piece of software, so I’m a little lost on how I would set that up, or whether there are better options.
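
      A first cut at that could be a bootstrap script along these lines. The repo URL and layout are hypothetical, and DRY_RUN=1 makes it only print the commands so you can inspect them before running anything for real:

      ```shell
      #!/bin/sh
      # Hypothetical one-shot bootstrap: clone the config repo, then bring up
      # every stack it contains. With DRY_RUN=1 the commands are only printed.
      REPO="https://github.com/you/homelab-stacks.git"   # placeholder repo
      DEST="$HOME/stacks"
      DRY_RUN=1

      run() {
        if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
      }

      run git clone "$REPO" "$DEST"
      for stack in "$DEST"/*/; do
        if [ -f "${stack}docker-compose.yml" ]; then
          run docker compose -f "${stack}docker-compose.yml" up -d
        fi
      done
      ```

      One directory per stack, each with its own docker-compose.yml, keeps the loop trivial; flip DRY_RUN to 0 once the printed commands look right.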

      • Justin@lemmy.jlh.name · 12 hours ago

        Yeah, what you’re talking about is called GitOps. Using git as the single source of truth for your infrastructure. I have this set up for my home servers.

        https://codeberg.org/jlh/h5b

        nodes has NixOS configuration for my 5 kubernetes servers and a script that builds a flash drive for each of them to use as a boot drive (same setup for porygonz, but that’s my dedicated DHCP/DNS/NTP mini server)

        mikrotik has a dump of my Mikrotik router config and a script that deploys the config from the git repo.

        applications has all my kubernetes config: containers, proxies, load balancers, config files, certificate renewal, databases, clustered raid, etc. It’s all super automated. A pretty typical “operator” container to run in Kubernetes is ArgoCD, which watches a git repo and automatically deploys any changes or desyncs back to the Kubernetes API so it’s always in sync with git. I don’t use any GUI or console commands to deploy or update a container, I just edit git and commit.

        The kubernetes cluster runs about 400 containers, most of them just automatic replicas of services for high-availability. Of course there’s always some manual setup steps outside of git, like partitioning drives, joining the nodes to the cluster, writing hardware-specific config, and bootstrapping Argocd to watch git. But overall, my house could burn down tomorrow and I would have everything I need to redeploy using this git repo, the secrets git repo, and my backups of my databases and container /data dirs.

        I think Portainer supports doing GitOps on Docker compose? Never used it.

        https://docs.portainer.io/user/docker/stacks/add

        Argocd is really the gold standard for GitOps though. I highly recommend trying out k3s on a server and running ArgoCD on it, it’s super easy to use.

        https://argo-cd.readthedocs.io/en/stable/getting_started/

        Kubernetes is definitely different from Docker Compose, and tutorials are usually written for a Docker compose.yml, not Kubernetes Deployments, but it’s super powerful and automated. Very hard to crash once you have it running. I don’t think it’s as scary as a lot of people think, and you definitely don’t need more than one server to run it.
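
        For a taste of the git-watching part: a minimal ArgoCD Application is one small manifest, like this sketch (the repo URL and paths are placeholders):

        ```yaml
        apiVersion: argoproj.io/v1alpha1
        kind: Application
        metadata:
          name: my-app
          namespace: argocd
        spec:
          project: default
          source:
            repoURL: https://codeberg.org/you/infra.git   # placeholder repo
            targetRevision: main
            path: applications/my-app                     # dir with your manifests
          destination:
            server: https://kubernetes.default.svc
            namespace: my-app
          syncPolicy:
            automated:
              prune: true      # delete resources removed from git
              selfHeal: true   # revert manual drift back to the git state
        ```

        Once that’s applied, editing anything under that path in git is the entire deployment workflow.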

        • irmadlad@lemmy.world · 6 hours ago

          Man, I really appreciate all this info. Very helpful. It will take me some time to digest everything and put it into an action plan. I just thought, hey that would be cool and a nice project I can sink my teeth into and learn a lot on the way while deploying. Again, thank you for taking the time to give some direction and inspiration.

        • filister@lemmy.world · 1 day ago

          Yes, and Nix is another can of worms. My suggestion is to first try backing up your Docker compose file and the configuration files; you can define in .gitignore which files or dirs to ignore and not back up. You don’t need any automated installation for your server, as it is fairly standard, but you can easily get that if you run it as a VM on top of Proxmox and just create a snapshot of the VM.
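
          As a sketch, the .gitignore for such a backup repo might be as simple as this (the entries are examples of what you’d typically exclude):

          ```gitignore
          # secrets and env files
          .env
          *.secret
          # bind-mounted app data, not config
          data/
          **/postgres-data/
          ```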

      • Live2day@lemmy.sdf.org · 1 day ago

        This is exactly what I do. I have a git repo with the config files and docker compose file, and thanks to the folder mappings, all I have to do is docker compose up and it’s fully set up.
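
        A sketch of that pattern (the image and paths are illustrative); the point is that the mapped folders are relative, so the repo carries everything:

        ```yaml
        services:
          web:
            image: nginx:1.27                      # example service
            volumes:
              - ./config:/etc/nginx/conf.d:ro      # config tracked in the repo
              - ./site:/usr/share/nginx/html:ro    # content sits next to it
            ports:
              - "8080:80"
        ```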

        • irmadlad@lemmy.world · 1 day ago

          Awesome! When you were putting it all together, did you find some resources/reading material/tutorials that helped you?

          • Live2day@lemmy.sdf.org · 1 day ago

            I did. That was the way they had it set up in an *arr stack setup guide I was following. Unfortunately it’s been over a year, so I don’t have a link. But if you’re interested, I can send you my docker compose when I get a chance.

            • irmadlad@lemmy.world · 1 day ago

              Unfortunately it’s been over a year

              Dude, I feel that loud and clear. LOL

              I can send you my docker compose when I get chance

              That sounds like a lot of work, having to remove secrets and clean it up just for me. If you feel up to it, I would certainly love to have a look see. At your convenience of course.

        • irmadlad@lemmy.world · 1 day ago

          Really? Cool! I am going to have to investigate. Sounds like a great project for me to learn from.

          • mesa@lemmy.world · 1 day ago

            Yep, take a look. There are quite a few examples, but they use GitHub Actions, CircleCI, GitLab, etc.

            Most CI/CD setups that use the above-ish model will use the same kind of scripts (bash or otherwise). Basically, if you can do it on your desktop, you can automate it on a server. Make it work first, then try to make it better.

            Most of the time, I’ll throw my Docker/Docker Compose (and/or Terraform if need be) in the root of the repo and do the same steps on the CI side that I do on the development side for building/testing. Then I switch over to CD, either with a new machine (docker build / compose) or by throwing it all on a new server. At that point, if you script it out correctly, it doesn’t really matter what kind of server you use for CI/CD, since they are all Linux boxes at the end of the day.
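
            A hypothetical GitHub Actions version of that flow, mirroring the same commands you’d run locally (names are placeholders):

            ```yaml
            name: build
            on: [push]
            jobs:
              build:
                runs-on: ubuntu-latest
                steps:
                  - uses: actions/checkout@v4
                  - name: Build images        # same step you run on your desktop
                    run: docker compose build
                  - name: Smoke test
                    run: |
                      docker compose up -d
                      docker compose ps
            ```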

            You can also mix it up by using bare metal, Docker alternatives, different password managers, QA tools, linters, etc.

            But virtualization will get you quite far. In my opinion, start by just trying to get the project to build on another server from scratch via a script, then transfer that over to the CI. Then move on to testing/deployment.

            GL!

    • gwheel@lemm.ee (OP) · 1 day ago

      That’s what I figured; it’s already running without issue, and converting the custom app to a standard Docker deployment would be trivial. Git sounds like a nice next step. Right now my backup script just extracts the app configs from TrueNAS and sticks them in a JSON file; that’s good enough to recreate the apps, but if I mess something up I have to dive into backups to see what changed.

  • IronKrill@lemmy.ca · 1 day ago

    I have all my apps running in Docker under Jailmaker, and I don’t intend to move to TrueNAS apps unless I’m forced to. Currently I could move this entire setup to any machine I want, set up my jail mount points, launch Dockge, and I’d be up and running (with the same static IP, at that!). If I moved to TrueNAS apps, I think the transition and the handling of mount points would probably be painful. If they remove Jailmaker support in 25.xx like I’ve heard, I’ll look into Incus or other solutions before using their apps.

  • yaroto98 · 1 day ago

    Not sure if you were aware of the recent (last year) drama with a major contributing group to the community apps. TrueCharts, I think they were called? I had some TrueCharts containers and some straight TrueNAS containers. Then TrueCharts ragequit and took down their repo. I ended up reinstalling all those apps manually, because for the life of me I still couldn’t get the dumb TrueNAS versions to work. Also, I wasn’t a fan of the PVC (or whatever it was called) storage containers that got used by default; they made everything more difficult. My advice is to use the TrueNAS community apps as a learning tool for configuring your own properly with the TrueNAS software. I noticed the community apps would seriously take around a minute to restart, but the ones I made manually would take seconds. Same Docker image; never figured out why, maybe a k8s thing?

    • gwheel@lemm.ee (OP) · 1 day ago

      Yup, that’s exactly why I’m iffy about tying my configuration too closely to a specific platform. Luckily my setup was still pretty small last year so the only significant thing was Jellyfin, which I just rebuilt from scratch.

      Paperless takes forever to start up, it seems to be something about setting permissions on all of its files.

      Do you have anything in place to track updates to your custom apps, or are you just leaving everything on the latest tag?

      • yaroto98 · 1 day ago

        I only had issues with the latest tag when dealing with the community apps. Some of them would randomly break and I’d have to roll back. Once I manually configured the Docker settings using normal file mounts, things were plenty stable. I think the issues were with the k8s community charts, not with the underlying software, and they were fixed by just configuring things manually, however the Docker Hub docs suggest.

        I would still have the occasional issue where a container would freeze, a force stop wouldn’t work, and spinning up a new one wouldn’t work because the ports were still in use. But I traced that back to a bad SSD with write timeouts. I still think TrueNAS’s k8s wrapper is buggy; even if a container crashes hard, I shouldn’t have to reboot the system to fix it. I switched to Unraid and have been blissfully happy since.