My deployment platform is a shell script (j3s.sh)
130 points by j3s on April 9, 2024 | 138 comments


I use similar things for bigger (multi-server) deploys too. It's light, it just works, and it keeps working for decades without changes or updates. People say it's brittle; I have proof (n>0) that this isn't the case compared to many other solutions, and this post makes that point too. Sh/bash/perl have been around forever; they don't break after an update.

I sadly don't recommend it for my day job, simply because of liability. When something messes up with Ansible, Terraform, Docker, CloudFormation, etc., no one gets any blame because 'complex systems', 'it happens', etc.; with a simple script going wrong, they would hang me high, even though it probably saves a crapload in maintenance and compute over tens of years. Same reason we use clusters and IaC while of course nothing we do needs it: if an AWS cluster goes down, no one but AWS gets blamed, while if the $2 Postgres VPS at cheapafhosting (with a higher uptime than that AWS cluster, by the way; human error downed it a few times, briefly, but still) is down for even a ping, everyone is upset and pointing fingers.


[flagged]


You've been breaking the site guidelines frequently lately by posting much too aggressively. We have to ban accounts that won't stop doing this, so if you'd please stop doing this, that would be good.

HN is for thoughtful, respectful, curious conversation—not attacking other users or elbowing.

https://news.ycombinator.com/newsguidelines.html


I don’t think ‘axe to grind’ was what I took away from the comment. And if it’s a ‘mess of shell scripts’, I can’t imagine any of the mentioned solutions to this being better.

I think we lose sight of the fact that we invented a whole new career track, devops. And it basically is about getting code from developers' machines or environments to production. And we're still terrible at it.

Perhaps that should be a signal that we’re doing it wrong…


There is a constellation of companies (living and dead) that have attempted to solve this problem in different ways, because everyone thinks we're doing it wrong. The reality is every company has different needs. There isn't (and will never be?) a one-size-fits-all approach to deployments, because they're the fun intersection of people/process/technology.

The best deployment pipeline is the one that no one notices. Everything else is yak shaving/bike shedding.


It’s funny because in my many years of development I don’t think I’ve ever encountered a “mess of shell scripts” that was difficult to maintain. They were clear, did their job, and if they needed to be replaced it was usually simple and straightforward.

Can’t say the same for whenever the new abstraction of the day comes along. In my experience it’s exactly as the OP says: the abstractions get picked not because they are best but because they reduce liability.


Hello. I have found the mess of shell scripts. Please don't do this.

I was able to deal with the weird skaffold mess by getting rid of it, and replacing it with argocd. I was able to get rid of jenkins by migrating to github actions. I have yet to replace the magic servers with magic bash scripts. They take just enough effort that i can't spend the time.

Use a tool i can google. If your bash script is really that straightforward, takes you from standard A to standard B, and is in version control, then bash is AMAZING. Please don't shove a random script that does a random thing onto a random server.


Bash is good but can grow out of control. The problem is solo engineers and managers who push/approve 500+ line bash scripts that do way too much. A good engineer will say it's getting too complicated and reimplement it in Python.


Wasn't there a rule about that?

Something like "in software development the only solution that sticks is the bad one, because the good ones will keep getting replaced until it's so bad, nobody can replace it anymore"


i have encountered messes of shell scripts that were difficult to maintain; in my first sysadmin job in 01996 i inherited a version control system written as a bunch of csh scripts, built on top of rcs

but they were messy not because they lacked 'abstractions' but because they had far too many

i think shell scripts are significantly more bug-prone per line than programs in most other programming languages, but if the choice is hundreds of thousands of lines in an external dependency, or a ten-line or hundred-line shell script, it's easy for the shell script to be safer


If it was in RCS, then you could directly move the archives under a CVSROOT and use them natively.

CVS had been out since Brian Berliner's version of 1989.

I actually moved a PVCS archive into RCS->CVS this way, and I'm still using it.


that version control system provided a number of facilities cvs didn't (locking, and also a certain degree of integration with our build system permitting the various developers to only recompile the part of the system they were working on, which was important because recompiling the whole thing usually took me about a week, once a month), but it had never actually occurred to me that turning an rcs repository into a cvs repository like that was a possibility. also i never realized pvcs used rcs under the covers. thank you very much


PVCS did not use the RCS format, but the RPM distribution included a perl script to convert the archives.

  $ rpm -ql cvs | grep pvcs
  /usr/share/cvs/contrib/pvcs2rcs


ooh. that would have been very useful two jobs later when i got stuck with pvcs


Shell seems great until you're tens of lines in, googling every other line of obscure, error-prone syntax.


… or maybe you are not proficient at shell scripting? I never had this issue, including large projects written in tcl, bash or perl in the 90s when it was more normal to do so.

The modern answer seems to be some kind of DSL with YAML syntax mixed with Unix (and thus bash) snippets, which are often incredibly verbose and definitely not easier to read than a well-written bash script. The only thing I think of when I see those great solutions is: another instance of Greenspun's tenth rule in action.


bash and other sh related approaches have a lot of "foot guns". python, or powershell, or even C++ are often easier to read and follow.

> are often incredibly verbose and definitely not easier to read than a well written bash script

define well written -- getting into no true scotsman here.

bash is fine for what it was and what it did, and i'm glad to know enough sed and awk to be dangerous, but it's a PITA unless we're forced to use it


If we’re using the script in the post as an example, that’s hardly a “mess” of shell scripts. I’d rather maintain that than almost any other build system I’ve ever seen.


Just yesterday I was working on a monitoring agent that was a python script deployed through gitlab-ci, and executed inside a docker container through logstash.

I bet you could do the same with about 5 lines of shell (wget|grep|curl -XPOST). It would be simpler, easier to modify, and it wouldn't ever break.

Shell scripts aren't necessarily messy or complex.
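As a rough sketch of what that "5 lines of shell" version could look like (the function names and the URL arguments are hypothetical, not from the parent's actual setup):

```shell
# Sketch of the idea: fetch a status page, look for a failure marker,
# and POST an alert if one is found. Endpoints are placeholders.
has_error() {
    # succeed if the page on stdin contains the failure marker
    grep -q 'ERROR'
}

monitor() {
    # $1 = status page URL, $2 = alerting endpoint URL
    wget -qO- "$1" | has_error && curl -fsS -XPOST -d 'status=down' "$2"
}
```

Dropped on the host and run from cron, something like this replaces the gitlab-ci/docker/logstash stack for that one job; whether the trade is worth it depends on how many such jobs you accumulate.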


python is a particularly bad choice for things that are not supposed to break; the python maintainers have adopted a new policy of intentionally breaking new things every release. recent casualties include asyncore, distutils, and imp

https://docs.python.org/3/whatsnew/3.12.html#removed

there are people who foolishly depend on python for things that need to work. i hope that at some point they band together behind a fork that doesn't pull this kind of nonsense, but for the time being, that hasn't happened. if you wrote stuff in python in 01995 or 02000 or 02005 it probably still worked in 02015, but today, basically no python from before 02015 works


Perhaps don't upgrade your Python willy-nilly? That seems like a sane solution to this problem.


debian stable no longer has python 2. debian's pypy package has its own fork of cpython 2 in order to be able to build itself. numerous other projects are doing similar things

debian's versioning system for packages doesn't really contemplate needing to install old versions of a package because the maintainers are deliberately breaking new versions of it, so in 'bookworm' debian 12.5, the only cpython available to install is python 3.11. so if you want to 'not upgrade python' you are going to have to take on the maintenance burden of the old versions yourself; debian isn't going to help

some other package managers handle this kind of situation better (nix is notably excellent at it) but that doesn't really help with the maintenance burden of keeping up with security fixes


My "willy-nilly" comment meant "don't upgrade it without planning in advance" not "don't upgrade it, period." Part of that planning can be choosing to stay on an older LTS distro until you have time to properly migrate your code to a Python released in this decade.


or you could just write the code in the first place in a language that doesn't have maintainers that intentionally break old code. that's what you would do if you aspire to write code that constitutes part of the intellectual heritage of humanity, as opposed to being an expedient way to resolve a problem you have this week, to be thrown away in a month or two. you can still compile c code that uses gets(), you know. a couple of years ago i fixed a perl cgi script i'd written in 01998 to work with current perl; it took me half an hour. and euclid's proofs are still valid 2500 years after he wrote them down


C is a very slow moving language, but platform-specific APIs (meaning outside of stdlib) change more frequently. "C" does not exist in isolation, and outside of toy applications, not useful without the underlying platform APIs. If you were migrating C code, from say, Mac OS 7, or even Mac OS X 10.1, you would have to make many, many changes. If you're migrating code across Unix platforms (say, 1990's SunOS to modern Linux), you'll require changes, too.


sure, but the gratuitous breakage i'm complaining about in python is inside of stdlib, and even in the language syntax itself. and i've frequently taken c code from 1990s sunos and compiled it on modern linux, and yes, it's true that it usually requires a few changes where it interacts with the outside world—but only a few

it's true that very small programs are often little more than glue between the underlying platform apis, but large programs tend to have a much lower surface-to-volume ratio. that's why τεχ still runs, virtually unchanged since 01990, and you can easily render plain τεχ documents from the 80s or 90s with τεχ today—even with pdfτεχ


Are you sure you're not the one grinding an axe? Shell scripts are a mess but YAML isn't? On the happy path they can both be fine, and on the unhappy path they both suck.


Most CI/CD configurations are also a mess. Often YAML, shell command snippets, weird conditional syntax... I can't say it's better.


> nobody wants to maintain your mess of shell scripts.

Have you seen the scripts in question? Are you in any kind of way able to make such a value judgement over this persons work? Have you considered the impact of your choice of words on others?


Shell scripts are a more evolved form of programming and nobody can change my mind on that. They require less work, they're easier to make, they're flexible, compatible, composable, portable, small, interpreted, and simple. You can do more with fewer characters and do complex things without the complexity of types, data structures, locks, scoping, etc. You don't write complex programs in it, but you use complex programs with it, in ways that would be overcomplicated, buggy, and time-consuming in a traditional language.

That said, it's a tool. Like any tool, it depends how you use it. People who aren't trained on the tool, or don't read the instruction manual, might get injured. I'd like to see a version of it that is safer and retains its utility without getting more complicated, but it would end up less useful in many cases. Maybe that's fine; maybe it needs to be split into multiple tools.


I agree with most of this, my biggest issue is how hard it is for me to recall any moderately complex shell syntax (or the slightly different Makefile syntax). LLMs largely solve that for me.


The bash man page is my bible. It's dense and long but it always has the answers, you just gotta know where to look.


When people mention `bash` it's immediately a code smell. In order to write portable shell scripts, it must be only POSIX `sh`. If the need ever arises for a more complex data structure, typically I jump into AWK since it's also POSIX compliant.

Here's a note from the Ubuntu recommendation: https://wiki.ubuntu.com/DashAsBinSh


Not every shell script has to be that portable. And even then, "POSIX compatible" shells vary in their "POSIXness". Sometimes using arrays is the right thing to do, or having a reliable indicator of the source script location, so you code to Bash 3. Sometimes associative arrays or something else is better, and your platform is known, so you code to Bash 4 or 5. Maybe all your users are Zsh users so you use that. So you can start by writing POSIX compatible scripts, but there's no sense in tying your hands if you don't need to.

The Bash manual also covers POSIX semantics; I just remember which is which. You're right that the Dash manual is a good place to check what is mostly POSIX (Dash isn't actually strictly POSIX). I would probably use Shellcheck with the correct shebang to double-check what's compatible.
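A tiny illustration of the trade-off: Bash gives you real arrays, while in POSIX sh the closest thing is reusing the positional parameters:

```shell
# POSIX sh: the one "array" you get is "$@", set via the positional params.
# (In bash 3+ you'd write files=("a b" c) and read "${files[0]}" instead.)
set -- "a b" c       # two elements, one containing a space
first=$1             # "a b" survives intact, unlike with word-splitting
count=$#             # 2
```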


I've never been in a situation where I had to care. Bash is everywhere, it's a de facto standard of its own. Even a lot of buildroot and Alpine based Linux deployments, which don't come with bash by default, usually have bash added to them.


It’s not everywhere though, FreeBSD for example doesn’t use it.


It doesn't come with bash pre-installed, but installing it is as simple as "pkg install bash", and that is the option that everyone working with FreeBSD will choose when they encounter a shell script that only works with bash. My point is that bash is already so ubiquitous that almost no one is going to care about an extra few megabytes for the bash executable in their OS image.


If you're installing packages anyway, then why not just write in Python, Go, or any other language that has relatively fewer footguns and more features? I'm sure at least some of those packages are also available on FreeBSD.


Eh, bash is pretty damn portable


I would RTFM so I actually learn something instead of using """AI""" regurgitation.


I find the manual for bash to be a terrible way to learn things


man pages are great if you already know 100% what you need to do, and just need a refresher.

but if you don't, and the tldr command isn't available, then good freakin luck.


`curl cheat.sh/$command_name` usually works pretty well! (as long as you've got curl, of course)


I have written a lot of bash. When you know what you're doing it's very productive. But it still feels like walking a tightrope, where some corner case in quoting, interpolation, comparison, etc will one day rm -rf / you.

What i really want is "python, but with really easy running of subcommands". Imagine extending python with a $ operator (prefix, applied to iterables) so that

  files_iter = $('ls')
would run ls and put an iterator over its lines of output in that variable, throwing an exception if ls exits with an error status (i realise there is a rabbithole of subtleties here - getting those right would be part of this). Or

  contains_pattern = $?('grep', '-q', pattern, file)
to get just the exit status as a boolean. I think i'd drop bash in a heartbeat.


Languages like Ruby, Perl, and PHP have a backtick that can be used for that; for example in Ruby:

    >> files = `ls / | grep ^b`.split("\n")
    => ["bin", "books", "boot"]
    >> p $?
    #<Process::Status: pid 10889 exit 0>

    >> `false`
    => ""
    >> p $?
    #<Process::Status: pid 10898 exit 1>
I don't use it a lot because I don't feel like installing Ruby just for a few scripts, and zsh solves enough of the problems for me anyway.

Also parsing ls output isn't necessarily a good idea, and what you really miss is "first class globbing" like:

  for f in /b*; [..]
  arr=(/b*)


Backticks are going in the right direction, but not quite it. I want to operate on lists (or iterables etc), not strings, so i have all the usual safe and convenient facilities of the language available to build them. I want to get an object back that is extremely easy to get various kinds of results out of (Python's subprocess object isn't; i'm not familiar with Ruby's). I want more aggressive raise_for_status style error checking.

Thinking about it, this doesn't need to be an operator, and i could probably just write this myself and start trying it.

Interesting point about globbing. My feeling is that i don't actually use it a lot in scripts - the criteria for matching are usually complicated enough that i'm much more likely to use find. In a project with 1376 non-comment lines of shell script, i found eleven uses of globbing.
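A rough sketch of that helper, assuming nothing beyond the standard library (the names `sh` and `ok` are made up here, not a real library):

```python
import subprocess

def sh(*argv):
    """Run argv; raise CalledProcessError on a non-zero exit,
    otherwise return an iterator over the lines of stdout."""
    result = subprocess.run(argv, capture_output=True, text=True, check=True)
    return iter(result.stdout.splitlines())

def ok(*argv):
    """Run argv and return its success as a boolean (the $? idea)."""
    return subprocess.run(argv, capture_output=True).returncode == 0
```

e.g. `files = list(sh('ls'))` or `if ok('grep', '-q', pattern, path): ...` — streaming large output, pipelines, and stderr handling are exactly the rabbit hole of subtleties the parent mentions.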


The use of ls in this way is not good form:

  cd /root
  for project in $(ls go-cicd);
I think a better expression would be:

  for project in ./*
  do [ -d "$project" ] || continue
     ...


Some other nit-picks about this script.

- Everything here is done as root. For the day that you want to build with lesser privilege, do this:

  BLDUSER=~root
- That will be difficult for you, because you are moving the projects to /usr/local/bin; for the day that you stop running as root, make a subdirectory "/usr/local/bin/$BLDUSER" (owned by the namesake account) and move the projects there instead.

- Very minor nitpick, use <<- and prefix tabs on the here document to make it slightly easier to read.

- Slight improvement, so this can print more than one argument:

  println() { y=
    for x
    do printf %s%s "$y" "$x"
       y=' '
    done >> /root/gocicd.log
    echo >> /root/gocicd.log # for the newline
  }


Why is that not good form?


It omits some characters from its output. It also mangles some others through octal escape codes or other such stuff. Depending on flags it will also not handle filenames with spaces or newlines properly. ls output is meant for human consumption, not parsing.


There are several reasons.

- Running ls forks an unnecessary process.

- The ls may be aliased with "-F" (or -p) which will corrupt the filenames.

- Environment variables may otherwise (unexpectedly) manipulate ls behavior.

- Files with spaces will not be evaluated correctly.

- Hostile files can be placed that mimic command line arguments.

The shell should evaluate filenames itself; it is very capable of doing so.

POSIX ls: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/l...
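The word-splitting point is easy to demonstrate: a filename containing a space becomes two words when ls output is interpolated, while the glob keeps it intact.

```shell
# Compare globbing vs. word-splitting ls output in a dir with a spaced name.
d=$(mktemp -d)
touch "$d/a file" "$d/plain"

glob_count=0
for f in "$d"/*; do glob_count=$((glob_count + 1)); done   # 2 entries

ls_count=0
for f in $(ls "$d"); do ls_count=$((ls_count + 1)); done   # 3 "words"
```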


https://mywiki.wooledge.org/BashPitfalls#for_f_in_.24.28ls_.... describes the failure modes (and note that it's the very first pitfall because it's a common one - and one I've perpetrated plenty of times myself).

Given the situation and the project names, I wouldn't expect the use of `ls` described in TFA to ever be a problem, but doing it with a simple glob would still be nicer and is a good habit to get into overall since then you don't have to ask yourself "is this use of ls going to be safe?"


Not that I agree 100%, but the topic is covered fairly well here:

https://mywiki.wooledge.org/ParsingLs



I was wondering if Shellcheck warned on this, and of course it does. God I love Shellcheck.


Never, ever parse or rely on the output of ls. It's very unpredictable.


You can also use webhooks to deploy with each GitHub push. The advantage over GitHub actions is you don’t have to store any secrets on GitHub or with integrators like Vercel. Just send a payload to your own endpoint each time a commit is made, and that can trigger your shell script to rebuild and deploy. Using symbolic links helps make it more robust to errors. Trigger a pull of the repo, and build. Only if the build is successful, move the symbolic link of your production app to the new build. This also allows keeping some history of builds in case you ever need to troubleshoot.
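The symlink-swap part can be sketched like this (the paths and the build step are illustrative, not from the comment):

```shell
# Build into a fresh timestamped release dir; only if that succeeds,
# repoint the "current" symlink that the web server actually serves.
deploy() {
    app=$1                                  # e.g. /srv/myapp
    release="$app/releases/$(date +%Y%m%d%H%M%S)"
    mkdir -p "$release"
    cp -R "$app/checkout/." "$release/"     # stand-in for the real build
    ln -sfn "$release" "$app/current"       # swap happens only on success
}
```

Old release directories accumulate under releases/, which is the build history the comment mentions keeping for troubleshooting.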


I also started with a simple shell script. Upload sources, build on the target system (golang), and restart the systemd service(s).

Then I needed to make another machine like this, so enter Ansible. This worked well for a long time, and I was relatively content with it. Along the way, I learned about nix (and enough of it) to adopt a simple flake to pull in my tools (like golang, ansible, terraform). For a long time, I used it like this (i.e. still ansible, but I started building locally).

Finally, I learned enough nix to adopt NixOS. Now, I've converted my project to a nix package and a NixOS module, which allows me to totally describe the state of the machine I want. With this, remote builds and colmena (mostly for pushing secrets), I deploy a complete system, including my own software.


"i like things that work for years with as little interaction from me as possible."

Shell scripts written in NetBSD sh/Debian ash will work for as long as I live.


Or, you could use NixOS and just declare your systems in some text files, git commit; git push.

Your build script becomes:

   while true; do
     git pull
     nixos-rebuild switch
     sleep x
   done
That's it. You can even do it remotely and push the new desired state to remote machines (and still build on the target machine, no cross compile required).

I've completely removed Ansible as a result: no more python version mismatches, no more hunting endless task yaml syntax, no more "my god ansible is slow" deployments.


Instead of saying:

  while true
You can instead say:

  while :
There is actually a /bin/true, which could involve the fork of a new process for each iteration of the loop. The form that I have shown you is guaranteed not to fork.


Thank you sir!


You might find it interesting to know exactly what is (and is not) in the POSIX shell. The description of the colon : operator is there.

https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V...

Most of the familiar userland utilities are at that website, accessible as a (somewhat crude) Apache index:

https://pubs.opengroup.org/onlinepubs/9699919799/utilities/

Any POSIX-compliant system is required to implement the functionality described there.


Sounds interesting. Let's say the software is a web backend. Can you deploy it like this with zero downtime? So that the new version starts, new traffic goes to it, and the old version handles its active requests to completion and then shuts off.


I don't think so, by default I think the nixos process will simply stop (probably by sending SIGINT) the service and then start it again.

But if you could put the server into 'lame duck mode' (no new connections accepted, but existing ones can finish) / graceful shutdown, and that's a blocking call (or you could poll whether it's still up, etc.), then you could script that before the 'nixos-rebuild switch' call. Maybe sending SIGINT to the service does that already?


My current deployment method for most of my personal hosts is:

    nixos-rebuild switch --target-host x.example.com 
(I still have a few Arch hosts using Ansible, but will migrate them in future)


Yeah that's where I'm headed also, it's more reliable to push the configs rather than have them poll/pull automatically.

There's also https://github.com/zhaofengli/colmena which may be of interest to folks.


Not to deride this (too much), but the 'robustness' of deployments with shell scripts is tempting bait. Things are until they aren't, 'nobody rides for free' - decide what you're willing to pay.

Example: this interprets the output of 'ls'. Reliability is dependent on good quoting/never introducing a project with spaces

Ansible is a nice middle ground, personally. I write the state that differs, use a library of scripting.


Parsing ls is an anti-pattern, but the author says it works for years - we all make mistakes, and as long as it works you don't notice.

And it's an easy fix:

    - for project in $(ls go-cicd); do
    + cd go-cicd || exit 1; for project in *; do


I would suggest not to mess with the current working directory and instead do something like this:

  for project in go-cicd/*; do
    project="${project#*/}"


Indeed, thank you for mentioning that - I was going to suggest similar.

That said, if you do (ie: scratch files, things outside of control, whatever), consider the directory stack:

https://www.gnu.org/software/bash/manual/html_node/Directory...

Using 'pushd' and 'popd' can save your fingers/brain from getting lost in context.


I like using subshells for this:

  (
      cd dir
      for f in ./*; [..]
  )
There's a few scenarios where this won't work (mainly if you want to set a variable from the "outer scope"), but 99% of the time it works nicely.


Ah, that's a nice/neat thought - thank you for sharing! Makes total sense, temporary/disposable shell for such things


Putting

  SAVEIFS=$IFS
  IFS=$(echo -en "\n\b")
or something similar at the top of the script might not come across as comparable to adopting Ansible to some people.


Pro tip: unsetting IFS has the same effect as saving and restoring the old value.
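A quick check of the tip, with the caveat that this assumes IFS was at its default to begin with (if a caller had customized IFS, unsetting loses that, whereas save/restore wouldn't):

```shell
# With IFS set to a comma, unquoted expansion splits on commas;
# after `unset IFS` the shell falls back to default space/tab/newline.
IFS=,
set -- $(printf 'a,b,c')
comma_fields=$#          # 3: split on commas

unset IFS                # no saved copy needed
set -- $(printf 'x y z')
default_fields=$#        # 3: back to splitting on whitespace
```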


Ah, nice, thanks.

I've been cargo-culting this for ages without thinking much about it.


I knew giving specific examples would lead this way, that's not my point.

This case has been handled. Others?

There isn't anything really insightful here. Someone pleased with a script has not yet grown beyond the needs of it. Yay.

The more portable/maintainable version of this is a playbook or whatever. Someone wrote and tested a better version of whatever Work Unit as a module.

Wheel enthusiast is pleased with their finely polished, but not particularly traveled, wheel. It's great, but they're commodities and not seeing much.

I would have done the same thing with 'ansible-pull' and zero code, in even less time/effort... but I also already know Ansible and bash.

Configuration management tools are excellent at managing applications and configs. Who knew.

Something else to suggest: systemd timers. At a glance info for scheduling of the job, instead of inferred from logs that may or may not have been recorded. You can also then actually declare your deployment needs networking.

Eyeroll.jpg. This is great because the bar is so low. They'll generate a ton of useless logs if they lose networking, as-is. Long enough and the disk will fill: this is trying every minute.


OK, and I wouldn't have used either Ansible or bash. I would have used picolisp, which is clearly superior for this task due to the flexibility that comes with the deep POSIX integration.

When I think about excellent software Ansible isn't the first that comes to mind either. Clearly it's different for you, and the person who wrote the TFA doesn't agree with me either.


Let's take a step back from names for a second - I feel we're getting a little distracted by that.

My gripe isn't with the tech, or even the solution. It's perfectly fine. I know I'm being overly critical, but I think they opened themselves to some judgement by making a post!

I would do something very similar. Sure, the tool wouldn't have the exact same name, but the mechanism would be ~the same~ very similar.

If the goal is to minimize surprises, the amount of effort put into something, etc - is it not best to follow the beaten path?

They're to be commended for making a solution that works well for them, and echoing the KISS methodology, I just don't think a shell script is it. Anecdotes are funny, I guess.

It's worked well for them - but I'm here because similar things have gone terribly for me. Small decisions can have big influence


Pretty sure bash and Perl and C (and make and autotools and probably something I'm forgetting) are the beaten path in this context, i.e. software building and deployment on Unix.

Ansible has surprised me way more often than bash has. The latter is upfront about being weird and a little bit insane, Ansible tries to have a better image. In my experience that is true also for Puppet and some other similar tools, which I'd never use without payment, at least until I can afford personal medium iron somewhere and need to provision and handle hundreds of more or less ephemeral virtual machines.

In part because I enjoy having pets rather than herds in my personal projects, but also because when using the shell or POSIX there's very little overhead, like venv and python libs and so on that will inevitably irritate me a few times a year even if I rarely use them directly.


Bash/Perl/C being precursors don't disqualify Python/Ansible, though. Otherwise my retort would be this: assembly.

While I empathize, your experience isn't [widely] representative. A reproducible Python environment is just as attainable as any other - anyone struggling with it has accepted it, in my opinion.

Most of my peers/community uses the distribution packages. They don't even need to care about venv/pip at all.


Or a project name that overlaps an init.d script they'd prefer to have kept

It also doesn't fail immediately if many of the commands fail (instead blindly moves on to the next statement). Consider using bash and these options: https://github.com/kvz/bash3boilerplate/blob/main/main.sh#L1... for most scripts.


I assume this script runs on the server. I was building Go projects on the server as well, a VPS where I have several things running. At some point I noticed that larger builds severely affected the other websites. So now I build locally and push the binary to git. To not bloat the project repo with big binary blobs, I use a special deploy repo.


I think the most important part is the last lines:

"consider keeping your little things little.

it worked for little old me."

The rest are details and every one of us would implement the details in a different way.

For example, a similar script could be made portable to non-Go projects by looking for a simple build-deploy.sh script that takes care of each project's deployment mode/instructions.


As a pythonista, I am a huge fan of the plumbum library as a replacement for bash. It makes it very straightforward to run a sequence of *nix commands, but you get all the simplicity and power of the python language in terms of loops and functions and so forth. These days, I do all my server management and deployment scripts with python/plumbum.

And while simple is great, what's missing from OP's script is that I want to spin up the new instance in parallel, verify it is running correctly, and then switch nginx or the load balancer to point to the new server. You are less prone to break production and you get zero-downtime deploys.


How do you deal with plumbum not being a builtin module? Do you install it system-wide? This currently holds me back from using sh (the Python lib) for maintaining my servers, especially if I need it with root.


That's not a big deal for me since I only am running a handful of servers. I install it system-wide during initial setup of a new server. Plumbum has the ability to run remote shell commands as well, so I have a script that can login to a new remote machine and do that initial setup.


Love stuff like this. For my personal blog, I have a simple Makefile that builds the Go binary, generates static HTML output, and then deploys it to a DigitalOcean VPS using ssh, reloads Caddy and Supervisor, and boom.


Mine too, but I am not a repoman.

pushthis="rsync -avzh --del ~/path.local/ somedude@path.online:path.online"


There's a lot of good in that script - it's just that it doesn't seem to cover functionalities that I'm used to after years of deploying side and "real" (business, etc.) projects to Heroku and Render.

How do you manage domain names, who deals with the SSL certificates, how do you set environment variables (i.e. "secrets"), how do you run Postgres, how do you run remote commands (e.g. dbmigrate.py), etc.?

A friend and I have been working for a few months on a project to simplify this - we're not the first to do an open source IaC, but we're scratching our own itch on a lot of features that we've been missing. It's basically "deploy with git push to your own VPS and manage everything with a CLI".

I'd love to ask - what do people feel is mostly lacking from OP's script? Which features seem like the most important when deploying/managing a remote server? How do you choose if you're going to use Ansible or K8S or a script, or a full-blown PaaS like Heroku? Is it price/ownership (i.e. having full control over the machine)/ease of use/speed of deployment/something else? Thanks!


Off topic: I love the writing style of the author and this blog. Gonna follow it.


here's the deployment script i use most often

http://canonical.org/~kragen/sw/dev3.git/hooks/post-update

    #!/bin/sh
    set -e

    echo -n 'updating... '
    git update-server-info
    echo 'done. going to dev3'
    cd /home/kragen/public_html/sw/dev3
    echo -n 'pulling... '
    env -u GIT_DIR git pull
    echo -n 'updating... '
    env -u GIT_DIR git update-server-info
    echo 'done.'
dev3.git is the origin for dev3, so the `git pull` in there pulls from the bare repo that just got pushed to

it doesn't have the 60-second lag and it doesn't load the server all the time. it also doesn't run `go build` or restart a server with openrc, but those would be easy things to add if i wanted them


I still get a chuckle remembering a co-worker called this the "pull and pray" method.


yeah, dev3 doesn't really require high reliability, so i only have to pray to the small gods


I have a very similar system[1] for my personal projects, only I use GitHub actions to push a docker image to ECR and a commit to a config repo bumping the tag. I then have a cronjob to pull the config repo and reconcile using docker compose.

I wouldn't use it for serious stuff, but it's been working great for my random personal projects (the biggest gap currently is that if something crashes it'll stay crashed until manual intervention)

- [1] https://github.com/mnahkies/shoe-string-server/pull/2


I use this deploy script for my hobby project: https://gist.github.com/tacone/230d5c305a9c5eff7f58ea2744f20...

It will connect over ssh, pull the code, build the containers and restart them (scripts/live is just a wrapper around docker-compose).

If the build fails, the services will keep running.

The only problem I have is that hitting CTRL+C at the very moment the containers are being restarted will leave me with the services down.


Not exactly the same configuration as the op, but if you are developing software using Go, the combination of Caddy, a single go binary, and systemd or some other supervisor is extremely flexible and i think is the way for running multiple services on a single VM.

A shell script that deploys a couple config files and off you go. Use different accounts for each service for isolation and put all of your static files in your binary using embed.FS. No need for fancy configuration management or K8s.
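For example, the systemd side can be a single unit file per service (the names and paths below are illustrative):

```ini
# /etc/systemd/system/myapp.service  (illustrative names/paths)
[Unit]
Description=myapp
After=network.target

[Service]
# dedicated account per service for isolation, as suggested above
User=myapp
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

A deploy then amounts to copying the binary and unit file over, `systemctl daemon-reload`, and `systemctl restart myapp`.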


I have a similar deploy.sh script for my go projects, with a slight twist:

I compile my Go projects to a binary on a GitHub Action, scp it to the server, ssh into it and restart - all done in my deploy.sh. The GHA itself only installs Go and deps (it's cached) and then calls that deploy.sh script, which sits right in the repo itself.

Super happy with it. Speaking as a previous DevOps guy that got sick of AWS complexities.


(Shameless plug) Mine is a Python script: http://piku.github.io


Why not use Ansible for something like this?

Don’t get me wrong, I love bash scripts like any other old hat, but Ansible scratches this exact itch.

You’ve got playbooks that can execute shell, provide logging, better management, history of execution, fleet management, and it’s lightweight. And there’s a robust community of shared modules, etc.


Why add the complexity of having to maintain an Ansible installation, a logging stack, deal with their upgrades and whatever python issue one might encounter. I had the issue of Ansible builtin `shell` not doing the right thing (sh vs bash) or it being unnecessarily slow when uselessly looking up `cowsay`.

Adding layers and layers of tooling is often overkill, and it is hard to beat the simplicity of 33 lines of shell when the use case is a single person doing the code, deployment and maintenance.


I’m with you on the use case. Simple server deployment on a VM, a bash script is fine; in fact I recommend it. It’s when you start dealing with 5+ VMs that I would start looking into using a tool like Ansible.


Bash is better than ansible for configuring the core infrastructure underneath ansible.

In a devops workflow you "treat servers like cattle instead of pets" but your org still needs a few pets. Some host you control must either host DNS or manage your DNS provider's API key. Same for CA, IdP, git, backup and monitoring services, and the ansible machine itself. You'll have to manually configure these things before your "cattle" tools can run.

Once you're up and running, it's possible to make ansible manage its own dependencies, but this introduces circular dependencies, complicates bootstrapping (consider a disaster-recovery situation), and amplifies both the impact of faults and the difficulty of troubleshooting them. Do you want to be debugging python dependencies in the middle of the night so you can finally get ansible to execute the couple of bash commands that will bring your ACME CA back up? I'd rather run bash directly.

At a small scale with a stable set of requirements, your core infrastructure is better served by a good operations manual and a simple deployment toolset with minimal dependencies. Plain bash fits the bill!


I think even ansible is overkill for such a simple thing. Ansible use case works better when you need to do stuff on multiple hosts.

For years I've started using and then abandoned ansible and puppet recipes for setting up my own computers, and every time the conclusion was that I would spend more time installing git, ansible and puppet in the first place and debugging my recipes than using them. Now all my setup lives in shell functions in my .bashrc.d. I still need git, but I don't need ansible or puppet anymore.


Ansible is great even for simple single-host 'shell scripts'.

Lean into the module ecosystem. Want to ensure a config file is a certain way? Jinja/template it, or use lineinfile instead of echo/shell redirects.

That's a lot of mumbo-jumbo. The point is, there's a lot of stuff scripts want to do. Ansible provides these as modules. Using the modules spares you from writing code to do something in a robust/repeatable way.

The 'line in a file' example is a good case, IMO. A shell script with redirection either requires specific code to look first, or simply endlessly append. With Ansible you don't have to do all of that.
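For reference, the shell side of that comparison is only a few lines; this is a sketch, not a full lineinfile replacement (no regexp replace, no backrefs):

```shell
#!/bin/sh
# ensure_line LINE FILE: append LINE to FILE only if an identical line
# isn't already there (a crude, idempotent "line in file")
ensure_line() {
    # -x exact line match, -F fixed string, -q quiet
    grep -qxF -- "$1" "$2" 2>/dev/null || printf '%s\n' "$1" >> "$2"
}
```

Running it twice leaves the file unchanged, which is the idempotence the plain `echo >>` approach lacks.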

Your script needs to do something when something changed? Ansible has you covered: handlers!

Python is right within reach too. I find it a way to write Python via YML, basically.


> Using the modules spares you from writing code to do something in a robust/repeatable way.

That is a huge lie: declarative code is still code. Using modules is similar to reusing functions. The thing is, reusability and declarative code are nice when you want to deploy and manage multiple machines and have an automated network install that bootstraps your automated configuration tool. It is worth the effort because there are many machines, but all that automation needs to be tested/fixed on a regular basis (distro releases, etc). If you are reinstalling your machine from a USB pendrive, or from an image once every so many full moons, you first need to bootstrap Ansible and the playbook. How do you do that in an idempotent manner? By the time you have done that, you would probably have been ready already.

The only thing I need on my dev machines is :

- my software configs: comes with a git repo of my dotfiles.

- a dev directory: mkdir is idempotent, it will not destroy dir if it exists so no need for a declarative language

- some packages: a single package install command is needed. While a configuration tool lets you declare package names per distro version and regardless of package manager, someone managing 2-3 machines usually sticks to one OS, so in my case a single `dnf install -y <list of packages>` is enough.

- a few tools I curl from github or other places. I have one bash function for each of them to get the latest release (a one-liner) and one to compare the installed version with the latest release and install if needed. Ansible doesn't do a better job at it. I checked ansible-galaxy for some of the tools I download; for some no module exists, and for those that have modules, they are just made of ... shell scripts called by an ansible task that is larger than my own script. See example[1]

- a few desktop files, they come with my dotfiles git repo, no need to "declare them"

- a handful of stuff that comes with an install shell script (the infamous curl | bash). Ansible doesn't help me much there; I'd have to rewrite the install script as an ansible playbook and maintain it myself forever.

No handler necessary, none of this requires a reboot or a service restart.
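Those release-check one-liners might look something like this sketch (the JSON parsing is a crude sed; jq is nicer if you have it, and the curl wrapper is illustrative and needs network, so it stays commented out):

```shell
#!/bin/sh
# Pull the tag_name out of GitHub's releases/latest JSON on stdin.
latest_tag() {
    sed -n 's/.*"tag_name"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' | head -n 1
}

# illustrative wrapper (owner/repo as "$1"), not exercised here:
# fetch_latest() { curl -s "https://api.github.com/repos/$1/releases/latest" | latest_tag; }
```

Comparing that tag against the installed tool's `--version` output is then a plain string test.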

Also, Ansible is probably the worst example, as it is a half-assed declarative language that doesn't even encourage idempotence. Basically it is made by and for old unix guys who want to keep writing sequential scripts the old and crappy way while pretending to do things in a modern way. And that is the reason it became so popular against cfengine, puppet, chef and salt. I don't see the point of using ansible if it is to have the same low standard of quality as plain old scripts.

My experience working with teams using ansible has only reinforced my view that this language is for people who like to do things the dirtiest way anyway but wrapping it in a declarative language so they can put devops engineer in their resume.

[1] https://github.com/andrewrothstein/ansible-eksctl/tree/main


> reinstalling your machine from a USB pendrive, or from an image once every so many full moons, you first need to bootstrap Ansible and the playbook. How do you do that in an idempotent manner?

You just reinstalled; do you really care if the preparation is idempotent?

Anyway: kickstart is how I deal with that. The way one automates installations. Anyone reinstalling their workstation that often should probably look into it.

It wants a list of packages that get installed by default; Ansible is one of them. The install environment makes Ansible available, then runs ansible-pull to fetch the repository and run the play.

I hear you now: "but USB installs!"

Who is this person that does this so often to automate it, but accept clicking through the UI/installer and so on? Set up tftp and PXE already, you're neck deep.

The unix greybeards would put shell scripts in those kickstarts. I feel it's slightly improved by using playbooks held externally in SCM.

The module library doesn't cover everything, but it's great for routine system administration. It may not have the latest whizbang API.

Ansible is useful, I'm not debating this. One can write it as poorly as you represent, but they don't have to.

I do lament people writing it like scripts. They miss the point, we're in agreement there. The core modules are idempotent if used well.

The Command/Script modules shouldn't even exist in my opinion. Force people to custom-fact those things. It can be a plain text file or a robust script.


That is exactly what I am saying: nobody does it that often, so automating it is usually not even an option, because things move so fast that a playbook would have to be debugged and modified every time you use it.


I think Ansible is a little overkill for some projects tbh. Ideally I'd love a middle ground between bash scripts and Ansible, similar to Caddy's config simplicity over nginx.

>it’s light weight

Eh, don't think that's the case for everyone.

I dabbled with Ansible at a previous job, and set up a very basic personal server setup for Nextcloud and one other app. It was much slower than if I had just written some bash scripts. Idempotency was nice, but the feedback loop wasn't great.


Related: https://github.com/containrrr/watchtower

Polls a docker registry and automatically restarts the container with the same flags that it was started with using the latest image.
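The core of that loop is small enough to sketch (the container/image names are placeholders, and unlike watchtower this naive version does not preserve the container's original run flags):

```shell
#!/bin/sh
# Naive watchtower-style check: pull, compare image IDs, recreate if new.

image_changed() {
    # succeed when the running image ID differs from the freshly pulled one
    [ "$1" != "$2" ]
}

# Guarded so the sketch is readable without a docker daemon around.
if [ "${RUN_WATCH:-0}" = 1 ]; then
    container=myapp                           # placeholder names
    image=registry.example.com/myapp:latest
    running=$(docker inspect --format '{{.Image}}' "$container")
    docker pull -q "$image"
    latest=$(docker image inspect --format '{{.Id}}' "$image")
    if image_changed "$running" "$latest"; then
        docker rm -f "$container"
        docker run -d --name "$container" "$image"   # original flags lost here
    fi
fi
```

Re-applying the original flags is the part watchtower actually earns its keep on; it reads them back from the old container's config before recreating it.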


Scripts get complicated when people start worrying about "portability," like we were in 1991 so we have to use "sh".

It's 2024 and you're not developing the next vim or Postgres. Use bash.



Laravel has Envoy as well: https://laravel.com/docs/11.x/envoy

I've played around a bit with Deployer for some projects. It's decent, but feels brittle. It's very dependent on you sticking with its assumed default setup, docs are all over the place, and extending/replacing scripts I found confusing.

I moved back to bash scripts.


I agree that sometimes Bash is enough, which is what I show in Deployment from Scratch. However I am moving pretty much everything to Kamal now...


Guessing you're talking about https://kamal-deploy.org/ which looks interesting, though I tend to like reconciliation logic based systems ... but often only fired off imperatively with a plan/apply separation. So I shall be having a poke around anyway :)


Got your Kamal book. It was much needed, great resource. Thanks for that!


If the author is reading this: did you code the fish animation CSS manually or used some wrapper for all these moz- and webkit- variants?


It’s a fish? I thought it was a pocket watch


Well, that would make it quite an exotic watch: https://j3s.sh/static/unnamed-puffy.png


The heredoc in the script is never terminated. Probably it's not the production version but one for publishing ;-)


What is existentialcrisis.sh? :D


You don't wanna know


Can someone ELI5 what he said in the blog post please? What does the script do?


The script does a git fetch

If the current code is behind (there are new commits), it merges them. If it fails, it stops.

If not, it runs `go build`. If it fails, it stops.

If not, it moves the binary to the right location and restarts the service.
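Sketched in shell (a hedged reconstruction of the description above; the repo path, branch, and service name are assumptions, not the author's actual script):

```shell
#!/bin/sh
# Rough reconstruction of the described fetch/build/restart flow.

needs_update() {
    # succeed when the local and remote heads differ
    [ "$1" != "$2" ]
}

# Guarded so the sketch can be read without a repo or go toolchain.
if [ "${RUN_DEPLOY:-0}" = 1 ]; then
    cd /home/me/myapp                      # assumed repo path
    git fetch origin
    local_head=$(git rev-parse HEAD)
    remote_head=$(git rev-parse origin/main)
    if needs_update "$local_head" "$remote_head"; then
        git merge --ff-only origin/main || exit 1   # stop on merge failure
        go build -o myapp . || exit 1               # stop on build failure
        mv myapp /usr/local/bin/myapp
        rc-service myapp restart                    # openrc, per the article
    fi
fi
```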


I can't speak to the validity of the author's use case as I'm not a golang dev, but in spirit I do like the idea. I think this trend back to simplicity (monoliths, sqlite, bash scripts) makes it good timing to be posting and learning things like this.

Especially as more and more new, easy/low config tools come out like Caddy, this gets simpler over time.

I have a testbed boilerplate project for Laravel in which server provisioning & deployments are done by bash scripts over SSH. Excluding comments/spacing, the provisioning script is 35 lines of mostly installing dependencies and minor file template copies. For simpler projects not needing queue workers and/or not using more "exotic" tech like Laravel Octane, this could probably be cut down to 30.

TL;DR do the simplest thing that works for you and move on with life - the value in your project, if you intend to deploy it, is for it to be used.


I’d like to think that there is in fact a trend towards simplicity. But with AI and AI written code, I unfortunately think we may be heading to having even more opaque code.


I always say: my pipeline is a shell script I call pipeline.sh


> my script will never:

> - go down

I've had cron log files get too big and cause issues.

> - require an upgrade

I can't count the number of times unattended-upgrades has broken something.

> - force me to migrate

Let's hope the OS is still receiving security updates, because installing on a VPS like this always has a high migration cost.

This sort of deployment is a fair starting point, but let's not pretend it's some perfect ideal.

....

Look. For deploying a blog, sure, but no one is deploying their blog on k8s. There is a reason why big complex deployment and orchestration systems exist, because there are use-cases for them. This is not one of them, but there's no need to stick your head in the sand over requirements and pretend they don't exist.


Limitations are a good thing


  my script will never:
  - go down
  - require an upgrade
  - force me to migrate
  - surprise me
  - keep me up at night
Oh my sweet summer child.


but my server will:

  - require direct source code access
  - require go build tools
  - require maintaining GitHub auth
  - require upgrading build tools over time
  - not be trivial to rebuild on failure


[flagged]


> The phrase "sweet summer child" became a popular way of describing an innocent, naive person (especially among American writers) during the early Victorian era. It was used by a number of authors during the 1840s, notably Fredrika Bremer (1840), James Staunton Babcock (1849) in The West Wind, and Mary Whitaker (1850) in The Creole. It has been used in a number of other novels, poems and speeches (especially by US authors) throughout the 20th century. From "The West Wind," by James Staunton Babcock, New York, 1849:

> "Thy home is all around, / Sweet summer child of light and air, / Like God's own presence, felt, ne'er found, / A Spirit everywhere!"

> The 1996 fantasy novel A Game of Thrones by George R. R. Martin adapted this former usage for a passage in which a young boy is called a "sweet summer child" by an old woman, since seasons last years in the novel's world and he has yet to experience winter. It was later popularized by its use in the episode "Lord Snow" (2011) of the television adaptation Game of Thrones.


My sweet summer child, you couldn't tell I was being sarcastic and rhetorical?


It's an expression that implies that someone is naive and/or inexperienced.


I find it condescending and cliched in the same way those in this thread do:

https://news.ycombinator.com/item?id=39417916

It adds absolutely nothing to the discussion. A better response is for the GP to tell us why they think the OP is naive. It's a low effort unsubstantiated jab that pollutes the comments.


I don't agree it adds nothing - it conveys that the author underestimates the power of Murphy's law.


You getting so tilted[0] at a turn of phrase[1] is polluting the comments far more than GP is. Touch grass[2].

[0] "tilted" is a common idiom used to denote anger or frustration which originates from the gaming community, when frustrated pinball machine players would literally tilt the machine. I am not suggesting that you are literally sitting or standing at an incline.

[1] "Turn of phrase" denotes a particularly notable form of non-literal expression. It was likely coined by Benjamin Franklin. Ironically, "turn of phrase" is itself a turn of phrase, as it is meant to evoke the image of turning words on a lathe, despite it not being physically possible to turn words on a lathe due to words being abstract concepts.

[2] "touch grass" is a lighthearted or humorous way of advising someone to take a break from their online activities, perhaps by going outside and interacting with the real world, particularly if they are excessively immersed in virtual or digital environments. It is also a useful metaphor for partaking in the smoking of marijuana, which is often referred to as "grass."


I hung some washing out and it was grounding to literally touch grass (before it started raining). Thanks & apologies


Yes, more you don’t need



