SSH server in a simple Alpine container
This is part of my migration strategy of converting all my Vagrant targets into containers. I previously covered how to get either a unique IP address or a unique socket address on Vagrant. Here I set out to do the same using Docker.
The official WordPress image (wordpress:6.0.1) is 609 MB, and mariadb:10.8.3-jammy is 383 MB. By the same metric, the Vagrant Jammy box starts at just 599 MB, but invariably takes at least another 2 GB once kitted out for WordPress by a single, convenient Vagrantfile.
The space saving isn't huge, and a container is sometimes suboptimal at accurately replicating what is in prod. On the other hand, a prod environment can be too large for everyone to maintain a local copy; testing individual containers is easier, and containers scale.
SSH by itself isn't as interesting as playbooks. Playbooks require a unique target on the network, which must be running an SSH server.
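For instance, an Ansible inventory needs exactly that pair of facts about each target. A hypothetical entry for the container built later in this post (the group name is my invention; the port and key match the commands below):

```
# inventory.ini -- hypothetical entry for the SSH-enabled container
[containers]
alpyd1 ansible_host=127.0.0.1 ansible_port=3022 ansible_user=root ansible_ssh_private_key_file=~/.ssh/docker_poc_key
```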
I will describe how to get a pair of minimal containers running SSH daemons. Minimal means Alpine.
Docker provides a preferred, simplified, secure, out-of-band alternative to SSH access in the form of docker run and docker exec.
Docker is intended for microservices: one daemon, or service, per container. Images that don't run a service, i.e. don't persist a process in the background, don't natively support daemon operation. That 1:1 ratio is represented in the CMD of the Dockerfile, which sets the initial process and so allows containers to run in daemon mode.
There is nothing stopping us from having the initial process spawn multiple services, like in a VM.
The nice thing about building Docker images is the layers. A new image doesn't need to be built from scratch if only the top layers (top of stack, bottom of file) change.
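As a sketch of how that caching plays out (the file contents here are illustrative, not the image we build later), ordering a Dockerfile from least- to most-frequently changed means edits near the bottom rebuild almost nothing:

```
FROM alpine:3.16.2

# Rarely changes: cached after the first build
RUN apk add --no-cache openssh

# Changes often: editing from this line down reuses everything above
COPY docker_poc_key.pub /root/.ssh/authorized_keys
CMD ["/usr/sbin/sshd", "-D"]
```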
In researching this post I came across an old, long-since-deleted tutorial from 2015 on docker.com that describes the image for an SSH server. For posterity, I present it next.
Dockerizing an SSH daemon service
From the wayback machine, in October 2015:
```
# sshd
#
# VERSION 0.0.2

FROM ubuntu:14.04
MAINTAINER Sven Dowideit <SvenDowideit@docker.com>

RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config

# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd

ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
```
```
$ docker run -d -P --name test_sshd eg_sshd
$ docker port test_sshd 22
0.0.0.0:49154
```
And now you can ssh as root on the container's IP address (you can find it with docker inspect) or on port 49154 of the Docker daemon's host IP address (ifconfig can tell you that) or localhost if on the Docker daemon host:
```
$ ssh firstname.lastname@example.org -p 49154
# The password is ``screencast``.
$
```
That page redirected to https://docs.docker.com/engine/examples/running_ssh_service/ until late 2020. The next redirection replaced the interesting webpage (by then on Ubuntu 20.04) with another code-heavy GitHub repo.
I'm calling our container alpyd1 (alpine daemon 1). When it is working properly, we can run an alpyd2 to show they co-exist just like VMs:
```
docker run --rm -d --name alpyd1 -p 3022:22 alpine:3.16.2 tail -f /dev/null
docker exec -ti alpyd1 mkdir /root/.ssh
ssh-keygen -f ~/.ssh/docker_poc_key -t ed25519 -q -N '' <<<y >/dev/null 2>&1
docker cp ~/.ssh/docker_poc_key.pub alpyd1:/root/.ssh/authorized_keys
```
To shell into the container's environment:
```
docker exec -ti alpyd1 sh
```
These are the core commands I would think necessary (don't use 600 here; that only works for root, which we are, BTW):
```
apk add openssh
chmod -R 700 ~/.ssh
```
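A quick sketch of why the directory needs 700 rather than 600: the execute bit on a directory is what permits traversal into it. The /tmp path here is a throwaway stand-in for ~/.ssh.

```shell
rm -rf /tmp/demo_ssh
mkdir /tmp/demo_ssh
echo "ssh-ed25519 AAAA... demo" > /tmp/demo_ssh/authorized_keys

chmod 600 /tmp/demo_ssh
# As a non-root user, reading inside now fails with "Permission denied";
# root (which we are inside the container) bypasses the check entirely.
cat /tmp/demo_ssh/authorized_keys 2>/dev/null || echo "blocked"

chmod 700 /tmp/demo_ssh   # execute bit back: traversal works again
cat /tmp/demo_ssh/authorized_keys
```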
openrc is the init system of Alpine Linux; it manages services. Adding it and running rc-update add sshd raises no complaints but doesn't advance our cause.
openrc isn't a complete init system inside the docker container; it wasn't fully ported to the docker image for Alpine Linux. Despite having a shell, we must avoid the traditional init scripts, /etc/init.d/sshd start, and instead interact with the services directly, as Sven Dowideit did.
For those wishing to persevere with openrc, I've put my findings in an appendix.
Let's add a user (for comparison's sake), a key, and passwords (for when we mess up the key ownership) with the script:
```
adduser -h /home/user42 -s /bin/sh -D user42
echo -n 'user42:password' | chpasswd
echo -n 'root:password' | chpasswd
cp -fr ~/.ssh /home/user42/
chown -R user42:user42 /home/user42/.ssh
```
What are we getting from apk add openssh?

```
/ # apk add openssh
(1/10) Installing openssh-keygen (9.0_p1-r2)
(2/10) Installing ncurses-terminfo-base (6.3_p20220521-r0)
(3/10) Installing ncurses-libs (6.3_p20220521-r0)
(4/10) Installing libedit (20210910.3.1-r0)
(5/10) Installing openssh-client-common (9.0_p1-r2)
(6/10) Installing openssh-client-default (9.0_p1-r2)
(7/10) Installing openssh-sftp-server (9.0_p1-r2)
(8/10) Installing openssh-server-common (9.0_p1-r2)
(9/10) Installing openssh-server (9.0_p1-r2)
(10/10) Installing openssh (9.0_p1-r2)
Executing busybox-1.35.0-r17.trigger
OK: 12 MiB in 24 packages
```
Apart from reminding us we may be bound by the limits of a busybox system, this shows the packages that openssh depends on and that were installed. We can find where:
```
/ # apk -L info openssh-server
openssh-server-9.0_p1-r2 contains:
usr/sbin/sshd

/ # /usr/sbin/sshd -h
/usr/sbin/sshd: option requires an argument: h
OpenSSH_9.0p1, OpenSSL 1.1.1q  5 Jul 2022
usage: sshd [-46DdeiqTt] [-C connection_spec] [-c host_cert_file]
            [-E log_file] [-f config_file] [-g login_grace_time]
            [-h host_key_file] [-o option] [-p port] [-u len]
```
Let's ask it what is wrong:
```
/ # /usr/sbin/sshd
sshd: no hostkeys available -- exiting.
```
Which leads us to:
```
/ # apk -L info openssh-keygen
openssh-keygen-9.0_p1-r2 contains:
usr/bin/ssh-keygen
```
We demand a bunch of keys and can then start the daemon:

```
/ # ssh-keygen -A
ssh-keygen: generating new host keys: RSA DSA ECDSA ED25519
/ # /usr/sbin/sshd -D -e
```
The -D flag prevents sshd from detaching into the background; -e sends the log output to stderr, so we see useful debugging information in the terminal.
Meanwhile, on the client (the VM hosting the docker server), attempts to connect as root work by key but not by password (a sensible default). The normal user, user42, works both ways.
There are two IP:port combinations where we may ssh to the container.
```
ssh -p 3022 -i ~/.ssh/docker_poc_key email@example.com
ssh -i ~/.ssh/docker_poc_key firstname.lastname@example.org
```
These two IP addresses appear in docker inspect alpyd1, under "Networks", then "bridge", as "Gateway" and "IPAddress" respectively. The port changes only on the first; the second address is unique to the container.
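Those two fields can also be pulled straight out of docker inspect with Go templates; a sketch (the template paths mirror the nesting described above, and the addresses shown are the ones from my bridge network):

```
$ docker inspect -f '{{.NetworkSettings.Networks.bridge.Gateway}}' alpyd1
172.17.0.1
$ docker inspect -f '{{.NetworkSettings.Networks.bridge.IPAddress}}' alpyd1
172.17.0.2
```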
A minimal alpine image
```
FROM alpine:3.16.2
MAINTAINER silverbullets.co.uk
ARG ssh_pub_key
RUN apk add --no-cache openssh \
    && ssh-keygen -A \
    && mkdir /root/.ssh \
    && echo $ssh_pub_key > /root/.ssh/authorized_keys \
    && chmod -R 700 /root/.ssh \
    && echo "root:password" | chpasswd \
    && sed -i "s/^# *PermitRootLogin.*$/PermitRootLogin yes/" /etc/ssh/sshd_config
CMD ["/usr/sbin/sshd", "-D", "-e"]
```
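The sed line in that Dockerfile can be tried outside the container; a sketch against a fake two-line sshd_config (the /tmp path and file contents are made up for the demonstration):

```shell
# Fake the two lines of sshd_config we care about
cat > /tmp/sshd_config.demo <<'EOF'
#PermitRootLogin prohibit-password
PasswordAuthentication yes
EOF

# The same substitution the Dockerfile runs: uncomment and force "yes"
sed -i "s/^# *PermitRootLogin.*$/PermitRootLogin yes/" /tmp/sshd_config.demo
grep PermitRootLogin /tmp/sshd_config.demo
# → PermitRootLogin yes
```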
```
docker build --build-arg ssh_pub_key="$(cat docker_poc_key.pub)" -t alpsshd .
```
```
docker run --rm -d --name alpss1 -p 3022:22 alpsshd
docker run --rm -d --name alpss2 -p 3023:22 alpsshd
```
```
ssh email@example.com -p 3022
ssh firstname.lastname@example.org -p 3023
```
```
ssh email@example.com
ssh firstname.lastname@example.org
```
Something is scanning my ~/.ssh/ directory for keys and providing them for me here. If this isn't the case for you, append -i ~/.ssh/docker_poc_key as before.
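Alternatively, entries in ~/.ssh/config pin everything down explicitly; a hypothetical pair for these two containers (the Host aliases are my invention):

```
Host alpss1
    HostName 127.0.0.1
    Port 3022
    User root
    IdentityFile ~/.ssh/docker_poc_key

Host alpss2
    HostName 127.0.0.1
    Port 3023
    User root
    IdentityFile ~/.ssh/docker_poc_key
```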
The discussion of VM vs container ignores the elephant in the room: everyone is moving to the public cloud. Cloud computing isn't free, whereas laptop resources, generally, are.
Local testing is always going to be popular. Migrating VMs to containers is happening in the cloud the same way it is locally, with Kubernetes. The cloud isn't so much a third way as another way to advance a deployment.
The space saving wasn't as large as I thought, and jumping between containers isn't always as convenient as running on the metal.
In my next post, I write about getting out of the container.
For argument's sake, we try the openrc route:

```
apk add openssh openrc
rc-update add sshd
chmod -R 700 ~/.ssh
```
Exec into the container and run as root:
```
~ # /etc/init.d/sshd start
 * WARNING: sshd is already starting
~ # netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address    State    PID/Program name
~ # /etc/init.d/sshd status
 * You are attempting to run an openrc service on a
 * system which openrc did not boot.
 * You may be inside a chroot or you may have used
 * another initialization system to boot this system.
 * In this situation, you will get unpredictable results!
 * If you really want to do this, issue the following command:
 * touch /run/openrc/softlevel
~ # touch /run/openrc/softlevel
touch: /run/openrc/softlevel: No such file or directory
```
I tried dropbear instead of openssh and it got a little further. Port 22 was even listening!
```
apk del openssh
apk add dropbear
(1/2) Installing dropbear (2022.82-r1)
(2/2) Installing dropbear-openrc (2022.82-r1)
Executing busybox-1.35.0-r17.trigger
OK: 8 MiB in 18 packages
~ # rc-update add dropbear
 * service dropbear added to runlevel sysinit
~ # rc-status
Runlevel: sysinit
 dropbear                [ stopped ]
Dynamic Runlevel: hotplugged
Dynamic Runlevel: needed/wanted
Dynamic Runlevel: manual
~ # rc-service dropbear start
/lib/rc/sh/openrc-run.sh: line 108: can't create /sys/fs/cgroup/blkio/tasks: Read-only file system
...
 * In this situation, you will get unpredictable results!
 * If you really want to do this, issue the following command:
 * touch /run/openrc/softlevel
 * ERROR: dropbear failed to start
~ # touch /run/openrc/softlevel
~ # netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address    State    PID/Program name
~ # rc-service dropbear start
/lib/rc/sh/openrc-run.sh: line 108: can't create /sys/fs/cgroup/blkio/tasks: Read-only file system
...
 * Starting dropbear ... [ ok ]
~ # netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address    State    PID/Program name
tcp    0      0      0.0.0.0:22       0.0.0.0:*          LISTEN   199/dropbear
tcp    0      0      :::22            :::*               LISTEN   199/dropbear
```
Those are some informative warnings. This service wasn't intended to be run in a container (or at least containers weren't a consideration), and it can't diagnose outright that it will not work.
You are attempting to run an openrc service on a system which openrc did not boot.

That warning prompted me to try docker restart alpyd1. It didn't help.
```
$ ssh -p 3022 -i ~/.ssh/docker_poc_key.pub email@example.com
ssh: connect to host 172.17.0.2 port 3022: Connection refused
```
I tried without the port, and to 172.17.0.1, also. The connection is always refused.