October 2014

Building remotes.club

I have been wanting to play with virtualization/containers for a while, and the only way I learn anything is by actually building something useful. So when a few people started talking about building a community for remote tech workers, I figured this was a good opportunity to create a container for hosting https://remotes.club. What follows is a slight cleanup of the notes I made along the way. They are really more for me than for you, so I can go back and remember what I did, and so someone else can recreate this quickly somewhere else.

The goals:

My colo box is currently running Debian 7 (Wheezy). It is 6 years old now and only has 8GB of ram and 3TB of disk in a RAID5 array. It is not a powerhouse at all, and it doesn’t have a modern CPU with VT-x/VT-d support. This meant I needed something very efficient, which to me ruled out full virtualization solutions like KVM/Xen/VirtualBox and pointed toward something along the lines of LXC. But there are tools that can help you manage LXC.

The options I looked at were Docker, Vagrant with the vagrant-lxc plugin, and plain LXC:

Amusingly when I did:

apt-get install docker

I got some ancient X11 app which segfaulted when I tried to run it. The Debian package is actually called docker.io. That package is a bit old, but before I went down the road of fixing that I realized I didn’t really need most of the things Docker layers on top of the underlying virtualization system. I didn’t have an app to deploy, nor was I looking to automate creating many of these. I just needed a one-time isolated Linux environment.

Next up was Vagrant and the vagrant-lxc plugin. It installed easily, although I had some issues with LXC being older than it expected and missing a “-B best” option. There is no update in wheezy-backports, unfortunately. I was going to build it from source, but found someone had already built a .deb for Wheezy. See https://blog.deimos.fr/2014/08/29/lxc-1-0-on-debian-wheezy/

After upgrading to LXC 1.0.4 I quickly hit the fact that vagrant-lxc doesn’t support public/private network configurations. I have a couple of static IPs and I wanted to give the container its own public routable ip which didn’t seem possible through vagrant-lxc. It is likely I missed a trick, but when I get stuck on something in an abstraction layer that I know is possible to do I tend to just remove the layer and move on.

note added later: I might actually return to Docker/Vagrant. When I started there was no app to deploy and very little configuration to do, but as you will see by the end of the document, there is now a simple account registration app and plenty of configuration tweaks all of which could be replicated quickly with Vagrant.

LXC Setup

After spending a couple of hours playing with Docker and Vagrant I ended up with plain LXC. The first thing to check is what is in your /etc/default/lxc file since it will determine what your containers do by default. I set mine to be as simple as possible:

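Something like this, where every value except USE_LXC_BRIDGE is an assumption rather than a copy of the actual file:

```
# /etc/default/lxc -- minimal sketch; values other than USE_LXC_BRIDGE
# are assumptions, not necessarily the exact Wheezy defaults
LXC_AUTO="true"
USE_LXC_BRIDGE="false"
```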

We will come back to that USE_LXC_BRIDGE setting later.

Creating an actual container is easy:

lxc-create -n remotes -t debian

Then to test it you do:

lxc-start -n remotes

This will bring up a console and you log in with root/root. Note that there is no real way to get out of the console that I could find. Ctrl-a q should do it but it doesn’t work from lxc-start. If instead you start the container using:

lxc-start -n remotes -d

And then attach the console with:

lxc-console -n remotes

Then once you are back at the login prompt Ctrl-a q works. Bug, I guess.

The default network config wasn’t what I wanted. I needed to give it its own public ip on the same nic that my host box was using to connect to the world. This had me a bit stumped for a while. I played with libvirt and virsh but got utterly confused by it all and it seemed too complicated. I should be able to just assign a static ip to my container and have it work. But, my host OS has no idea that there is a virtual OS inside it trying to share the network card. I needed to bridge the two. Of course, when I first attempted to create a bridge and move my connection to it, I messed it up and locked myself out of my colo. Having console access when you are playing with bridging your primary nic is pretty much essential.

In the end it turned out to be easier than I had expected. In my /etc/network/interfaces file I had:

auto eth0
  iface eth0 inet static

  iface eth0 inet6 static
  address 2607:ff58::fffe:100
  netmask 112
  gateway 2607:ff58::fffe:1
  up /sbin/ifconfig eth0 inet6 add 2607:ff58::fffe:101/112
  up /sbin/ifconfig eth0 inet6 add 2607:ff58::fffe:102/112

It has some ipv6 addresses as well along with ipv4 virtual interfaces not shown here. I removed one of those ipv4 virtual interfaces to be used for my container and then I modified eth0 to be a bridge instead and assigned it all the same IP info:

auto br0
  iface br0 inet static
  bridge_ports eth0
  bridge_fd 0
  bridge_stp off
  bridge_waitport 0
  bridge_maxwait 0

  iface br0 inet6 static
  address 2607:ff58::fffe:100
  netmask 112
  gateway 2607:ff58::fffe:1
  up /sbin/ifconfig br0 inet6 add 2607:ff58::fffe:101/112
  up /sbin/ifconfig br0 inet6 add 2607:ff58::fffe:102/112

Then after a service networking restart where I didn’t even lose my existing ssh sessions, my eth0 was now br0. ifconfig shows:

br0   Link encap:Ethernet  HWaddr 00:30:48:d1:96:dc  
      inet addr:  Bcast:  Mask:
      inet6 addr: fe80::230:48ff:fed1:96dc/64 Scope:Link
      inet6 addr: 2607:ff58::fffe:102/112 Scope:Global
      inet6 addr: 2607:ff58::fffe:100/112 Scope:Global
      inet6 addr: 2607:ff58::fffe:101/112 Scope:Global
      RX packets:3145921 errors:0 dropped:14853 overruns:0 frame:0
      TX packets:2736213 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0 
      RX bytes:996512708 (950.3 MiB)  TX bytes:1857933953 (1.7 GiB)

eth0  Link encap:Ethernet  HWaddr 00:30:48:d1:96:dc  
      RX packets:3738346 errors:0 dropped:0 overruns:0 frame:0
      TX packets:3383587 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000 
      RX bytes:1236682640 (1.1 GiB)  TX bytes:2208700667 (2.0 GiB)
      Interrupt:16 Memory:d8000000-d8020000 

and brctl show:

bridge name bridge id         STP enabled   interfaces
br0         8000.003048d196dc no            eth0

Now I needed to tell LXC to add itself to that bridge and things should, in theory, magically work. I changed the network section of my /var/lib/lxc/remotes/config file to:

lxc.network.type = veth
lxc.network.flags = up
# host side
lxc.network.link = br0
lxc.network.veth.pair = veth0
lxc.network.hwaddr = 00:16:3e:2a:52:74

# container side
lxc.network.name = eth0
lxc.network.ipv4 =
lxc.network.ipv4.gateway =

The veth0 and hwaddr can be whatever you want there. You are making up a virtual ethernet adaptor which you are bridging to the br0 bridge which we created in the previous step. Then I was able to assign my static ip directly to the container and after restarting the container lxc-ls -f showed:

NAME     STATE    IPV4            IPV6  AUTOSTART  
remotes  RUNNING  -     YES

But more importantly brctl show gives this:

bridge name  bridge id          STP enabled  interfaces
br0          8000.003048d196dc  no           eth0
                                             veth0

So my virtual nic was added to the bridge correctly. And I was able to ping both from the host and from the outside world.
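Since the hwaddr can be anything, a quick way to roll a random one in the 00:16:3e prefix (the Xen OUI that LXC conventionally borrows for virtual nics):

```shell
# Roll a random MAC in the 00:16:3e (Xen/LXC) prefix; the low three
# octets just need to be unique on your bridge.
printf '00:16:3e:%02x:%02x:%02x\n' \
    $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))
```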

Remember that USE_LXC_BRIDGE setting from above? If that is set to ‘true’ then LXC will create its own bridge and run dnsmasq. This is probably what you want if you don’t want to assign your container its own public IP and simply want to NAT it. The downside to that is that you would need to set up port forwarding or other tricks in order for the outside world to reach your container.
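For that NATed setup, the port forwarding amounts to a DNAT rule on the host. A sketch in iptables-save format, with assumed addresses (10.0.3.0/24 is the lxcbr0 default network; the container ip and ports here are made up):

```
*nat
# forward host port 2222 to sshd in a container at 10.0.3.10
-A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 10.0.3.10:22
COMMIT
```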

And since putting up anything new without ipv6 support seems like a really bad idea, I figured I would tackle that too. I already have an ipv6 network on my host bridge, so it was just a matter of picking an ip for my container. In /var/lib/lxc/remotes/config I added:

lxc.network.ipv6 = 2607:ff58::fffe:105/112
lxc.network.ipv6.gateway = 2607:ff58::fffe:1

It worked nicely and lxc-ls -f now shows:

NAME     STATE    IPV4            IPV6                 AUTOSTART  
remotes  RUNNING  2607:ff58::fffe:105  YES  

And it is even pingable from the outside world. The ipv6 traceroute from www.php.net to remotes.club looks like this:

11:25am www.php.net:~> traceroute6 2607:ff58::fffe:105
traceroute to 2607:ff58::fffe:105 (2607:ff58::fffe:105), 30 hops max, 80 byte packets
 1  gigabitethernet1-19.core3.fmt2.he.net (2001:470:1:380::1)  9.797 ms  9.851 ms  9.845 ms
 2  10ge1-3.core1.fmt2.he.net (2001:470:0:23b::2)  1.534 ms  1.532 ms  1.003 ms
 3  10ge1-1.core1.sjc2.he.net (2001:470:0:31::2)  9.319 ms  9.713 ms  0.804 ms
 4  10ge12-1.core1.sea1.he.net (2001:470:0:1c7::2)  19.436 ms  26.727 ms  26.712 ms
 5  v6-six.appliedops.net (2001:504:16::9e1b)  583.347 ms  583.644 ms  583.642 ms
 6  2607:ff58::1:17 (2607:ff58::1:17)  37.716 ms  38.364 ms  37.730 ms
 7  2607:ff58::1:9 (2607:ff58::1:9)  41.184 ms  41.164 ms  41.028 ms
 8  2607:ff58::fffe:105 (2607:ff58::fffe:105)  40.439 ms  39.446 ms  40.434 ms

If you followed these various steps and it didn’t work for you, look into your sysctl settings and make sure ip forwarding is on for both ipv4 and ipv6 and make sure your bridge is in promiscuous mode.
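In sysctl terms, the forwarding part means a fragment like this (the file name is my choice; apply it with sysctl -p or a reboot):

```
# /etc/sysctl.d/forwarding.conf
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
```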

Now that LXC is all set up and the networking is working it is just a matter of adding A, AAAA and MX DNS records to match.

The Debian image that gets installed in your container is extremely minimal. So you will want to install your favourite editors, screen/tmux and other niceties. And a rather crucial missing piece in my opinion is that there is no syslogd installed. Fix that by:

apt-get install rsyslog

One thing to keep in mind is that LXC doesn’t provide complete isolation from the host OS. You are sharing a running kernel and people who have root access in your LXC container can still do nasty things by poking around in /proc/kcore or via sysfs for example. This is where AppArmor can help out, although it doesn’t feel very well supported on Debian from the little time I spent looking at it. User namespaces can help here too. If you are super worried about perfect isolation you should spend some time reading up on this. It will require a really recent kernel to work well though.

One minor thing you should probably do is:

echo 1 > /proc/sys/kernel/dmesg_restrict

on the host side to make sure only root on the host can run dmesg. And likewise, inside the container, if you are running rsyslogd, in your /etc/rsyslog.conf file, comment out this line:

$ModLoad imklog

which disables kernel logging in the container. I think it is a better idea to only log kernel messages on the host side.

PHP

Debian comes with some ancient version of PHP. Anything older than about a week from the master branch is too old for me, and I don’t really trust other people to build my PHP anyway, so I compiled PHP 7 from git and ’make install’ed it into /usr/local. And my simple /usr/local/etc/php-fpm.conf looks like this:

  user = www-data
  group = www-data
  listen = /var/run/php-fpm.sock
  listen.owner = www-data
  listen.group = www-data
  listen.mode = 0660
  pm = dynamic
  pm.max_children = 25
  pm.start_servers = 5
  pm.min_spare_servers = 5
  pm.max_spare_servers = 10
  pm.status_path = /status
  ping.path = /ping
  access.log = /var/log/nginx/php-fpm_access.log
  access.format = "%R - %u %t \"%m %r%Q%q\" %s %f %{mili}d %{kilo}M %C%%"
  slowlog = /var/log/nginx/php-fpm_slow.log
  request_slowlog_timeout = 30
  request_terminate_timeout = 60

Since I am not using a Debian package and I am too lazy to create one, I just copied the init.d php-fpm script that comes with PHP to /etc/init.d and made sure it starts by default with:

update-rc.d php-fpm defaults

You can check to make sure it worked by doing:

service --status-all

And start it with:

service php-fpm start

To make sure that php-fpm is actually working before worrying about connecting it to a web server do this:

apt-get install libfcgi0ldbl

SCRIPT_NAME=/ping \
SCRIPT_FILENAME=/ping \
REQUEST_METHOD=GET \
cgi-fcgi -bind -connect /var/run/php-fpm.sock

And you should see something like this:

X-Powered-By: PHP/7.0.0-dev
Content-type: text/plain;charset=UTF-8
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Cache-Control: no-cache, no-store, must-revalidate, max-age=0


If you don’t, make sure the unix socket or tcp port you connected to matches the listen directive in your php-fpm.conf.

More interesting is the output from /status:

  SCRIPT_NAME=/status \
  SCRIPT_FILENAME=/status \
  REQUEST_METHOD=GET \
  cgi-fcgi -bind -connect /var/run/php-fpm.sock

X-Powered-By: PHP/7.0.0-dev
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Cache-Control: no-cache, no-store, must-revalidate, max-age=0
Content-type: text/plain;charset=UTF-8

pool:                 www
process manager:      dynamic
start time:           26/Oct/2014:22:30:24 -0700
start since:          223961
accepted conn:        340
listen queue:         0
max listen queue:     0
listen queue len:     0
idle processes:       5
active processes:     1
total processes:      6
max active processes: 1
max children reached: 0
slow requests:        0

Next, a very basic PHP config in /etc/php.ini for what is really a non-production server:

date.timezone = "America/Los_Angeles"
max_execution_time = 30
max_input_time = 600
memory_limit = 512M
error_reporting = -1
display_errors = On
display_startup_errors = On
default_charset = "UTF-8"
expose_php = On
user_ini.filename = .user.ini
user_ini.cache_ttl = 300

zend_extension = opcache.so
opcache.memory_consumption = 256
opcache.interned_strings_buffer = 8
opcache.max_accelerated_files = 10000
opcache.revalidate_freq = 0
opcache.enable_cli = 0

For a production server you are going to want to turn off display_errors and set your opcache.revalidate_freq to something higher. This setting tells PHP how frequently to stat the php files to see if they have been modified. By setting this to a non-zero value you gain some performance by caching the stat info for the given number of seconds, especially on systems where disk-access is slow. This means that it can take up to that many seconds after you make a change before you will actually see it in your browser. For a dev server it confuses people if they don’t see their changes right away so in this case I am forcing a stat on every request by setting it to 0.

After you make php.ini changes you need to restart php-fpm:

service php-fpm restart

A simple <?php phpinfo() ?> script will tell you if things are working. Here is mine: info.php

And you can grab my opcache-status script from https://github.com/rlerdorf/opcache-status which looks like this: opcache.php

and that will tell you if your opcache setup is working and actually caching scripts for you. The phpinfo page also includes opcache status info, so strictly speaking you don’t need the fancier status script. Of course you will have to set up your web server before either page will work for you. Read on…

Also, if individual users need different PHP settings, they can put them in a .user.ini file in their web directories. Anything not listed as being PHP_INI_SYSTEM or php.ini only at http://php.net/manual/en/ini.list.php can be put in a .user.ini file. The name of this file can be changed in the php.ini file using the user_ini.filename directive. And, keep in mind that these files are cached for 5 minutes by default, so users won’t see changes instantly. This can again be changed in php.ini using the user_ini.cache_ttl directive (in seconds).

For example, a user might want to have this .user.ini file in their ~/web directory:

include_path = /home/rasmus/phplib
date.timezone = America/New_York
default_charset = UTF-8
display_errors = 0
log_errors = 1
error_log = /home/rasmus/logs/php_errors.log
mail.log = /home/rasmus/logs/php_mail.log

You have to make sure the user php-fpm runs as has write access to the log files. In my case that is www-data and all new users are automatically added to the www-data group so users can just create a group-writable logs directory for themselves. So a user would do:

mkdir logs
chgrp www-data logs
chmod g+w logs

It might also be a good idea to add some log rotation for our php-fpm setup. In /etc/logrotate.d/php-fpm I added this:

/home/*/logs/php*.log {
        rotate 52
        create 664 www-data www-data
        postrotate
                [ -f /usr/local/var/run/php-fpm.pid ] && invoke-rc.d php-fpm reload
        endscript
}

/var/log/nginx/php-fpm_*.log {
        rotate 52
        create 664 www-data www-data
        postrotate
                [ -f /usr/local/var/run/php-fpm.pid ] && invoke-rc.d php-fpm reload
        endscript
}

MySQL

Similar to PHP, we’ll go bleeding edge on MySQL and install MySQL 5.7:

wget http://dev.mysql.com/get/mysql-apt-config_0.2.1-1debian7_all.deb
dpkg -i mysql-apt-config_0.2.1-1debian7_all.deb   # select 5.7 when prompted

There are Ubuntu repos available as well if you prefer; see the MySQL APT repository page.

apt-get update
apt-get install mysql-server

Set a root password when prompted and you can check the various defaults in /etc/mysql/my.cnf. By default MySQL 5.7 doesn’t listen on any external ports, so in most cases you won’t have much to tweak initially. The defaults are sane.

nginx

Again, Debian Wheezy’s nginx is too old for me and nginx.org has a wheezy repo. Add /etc/apt/sources.list.d/nginx.list containing:

deb http://nginx.org/packages/debian/ wheezy nginx
deb-src http://nginx.org/packages/debian/ wheezy nginx


apt-get update
apt-get install nginx

And you get a newer version, 1.6.2 in my case when I did it.

Much like skipping ipv6, not doing ssl is just not Internet-friendly, so before we do anything further, we get ourselves a cert. I like getting mine from StartSSL because I can generate as many wildcard certs as I like for the single identity verification charge of $60 per year.

StartSSL has a tool to generate a CSR (Certificate Signing Request) but I find it rather crazy that anyone would use it. Just hit the “skip this” link and provide your own so there is no chance of your private key ending up somewhere else. To generate the CSR do:

openssl req -new -newkey rsa:2048 -nodes -sha256 -keyout remotes.key -out remotes.csr

Note the -sha256 there. By late 2014 Chrome will start complaining about SHA1 certs so you need to generate a SHA2 one.

This will ask you a bunch of questions. The only one that matters is:

"Common Name (e.g. server FQDN or YOUR name)"

For this wildcard cert it is: *.remotes.club
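Before uploading the CSR it is worth double-checking that it really is SHA2. A quick sanity check, using -subj here to skip the interactive questions and throwaway filenames:

```shell
# Generate a key and CSR non-interactively, then inspect the CSR;
# the output should include: Signature Algorithm: sha256WithRSAEncryption
openssl req -new -newkey rsa:2048 -nodes -sha256 \
        -subj '/CN=*.remotes.club' \
        -keyout test.key -out test.csr 2>/dev/null
openssl req -in test.csr -noout -text | grep 'Signature Algorithm'
```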

Then once you get the cert, save it to remotes.crt. You will also need the chain certs to build a unified cert to feed to nginx. Make sure you get the SHA2 chain cert. For StartSSL you do it like this:

wget http://www.startssl.com/certs/ca.pem
wget https://www.startssl.com/certs/class2/sha2/pem/sub.class2.server.sha2.ca.pem
cat remotes.crt sub.class2.server.sha2.ca.pem ca.pem > /etc/certs/remotes-unified.crt

Next up is the nginx configuration in /etc/nginx/conf.d/remotes.conf. There are 3 server blocks in this file. The first is just for http://remotes.club and https://remotes.club. These are both redirected to https://www.remotes.club:

server {
  listen       80;
  listen       443 ssl;
  ssl_certificate     /etc/certs/remotes-unified.crt;
  ssl_certificate_key /etc/certs/remotes.key;
  ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers         RC4:HIGH:!aNULL:!MD5;
  ssl_prefer_server_ciphers on;
  ssl_session_cache   shared:SSL:10m;
  ssl_session_timeout 10m;
  server_name  remotes.club;
  return 301 https://www.remotes.club$request_uri;
}

Note the explicit “TLSv1 TLSv1.1 TLSv1.2” there, which leaves out SSLv3 and makes sure we are not vulnerable to POODLE. Also we have an ssl session cache configured. Next is a very simple block that just redirects from http to https:

server {
  listen 80;
  server_name  ~^(?<user>.+)\.remotes\.club$;
  return 301   https://$host$request_uri;
}

And finally the real block which handles https://user.remotes.club/*:

server {
  listen              443 ssl;
  server_name         ~^(?<user>.+)\.remotes\.club$;
  root                /home/$user/web;
  access_log          /home/$user/logs/access.log main;
  ssl_certificate     /etc/certs/remotes-unified.crt;
  ssl_certificate_key /etc/certs/remotes.key;
  ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers         RC4:HIGH:!aNULL:!MD5;
  ssl_prefer_server_ciphers on;
  ssl_session_cache   shared:SSL:10m;
  ssl_session_timeout 10m;

  location / {
      index     index.html index.htm index.php;
      autoindex on;
  }

  location ~ /\.  { return 403; }

  location ~ \.php$ {
      try_files                $uri =404;
      include                  fastcgi_params;
      fastcgi_index            index.php;
      fastcgi_split_path_info  ^(.+\.php)(.*)$;
      fastcgi_param            SCRIPT_FILENAME  $document_root$fastcgi_script_name;
      fastcgi_pass             unix:/var/run/php-fpm.sock;
  }
}

Most of this should be self-explanatory. The nginx config is nicely human-readable, I find. The try_files line might look a bit weird, but it tries to avoid having requests like this:

https://someuser.remotes.club/uploads/some.gif/somescript.php

trick nginx into thinking a gif is a php script while on the fastcgi side we see /uploads/some.gif as the filename since it exists in the filesystem and /somescript.php ends up in PATH_INFO as per the CGI spec. Of course if you run your fastcgi on a different server from your nginx then a try_files check isn’t useful and you have to be more explicit about making sure you disable fastcgi for any directories where uploaded user content could cause problems.

And the return 403 on dot-files is to make sure we don’t leak .git, .user.ini and other such supposedly hidden files. These should never be served up by web servers.

Also note that we are doing per-user access logs in their home directories which means we should make sure we are rotating those logs. In your /etc/logrotate.d/nginx file, add the following below the block that is already there:

/home/*/logs/access.log {
        rotate 52
        create 640 nginx adm
        postrotate
                [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
        endscript
}

Email forwarding using Postfix

Debian 7 comes with Postfix 2.9.6 which is too old for me. So as a first step I updated it to 2.11.1 via wheezy-backports.

apt-get install postfix/wheezy-backports postfix-policyd-spf-python

The default config is mostly ok. People can just put a .forward file in their home directories to configure where to forward their remotes.club email to. One thing that stands out when sending mail to Google, or any other site that checks SPF, is:

Received-SPF: softfail (google.com: domain of transitioning rasmus@remotes.club does not designate
              2607:ff58::fffe:105 as permitted sender) client-ip=2607:ff58::fffe:105;

in the headers. Note that it managed to connect via IPv6. I like that! To fix the SPF softfail we can just add an SPF header to our DNS record. Like this:

@  IN TXT "v=spf1 a -all"

Very simple. It just says look up the A (or AAAA for ipv6) record and allow that ip as the sender for remotes.club. Of course, we also added an MX record to point to remotes.club. Checking it again, we now get:

Received-SPF: pass (google.com: domain of rasmus@remotes.club designates 2607:ff58::fffe:105 
              as permitted sender) client-ip=2607:ff58::fffe:105;

For email just being forwarded through remotes.club this doesn’t help all that much since it isn’t the original sender, but we can’t do much about that. It will still help our account request verification emails get through. SPF is like ipv6 and ssl: a basic and simple piece of the stack that we shouldn’t skip. If everyone always did SPF, ipv6 and SSL, our Internet would be much better off.

So, on that note, we can make use of our wildcard SSL cert and turn on TLS for our smtpd. In /etc/postfix/main.cf (the cert and key paths match what we set up for nginx; the first three lines are what actually enable TLS):

smtpd_tls_cert_file = /etc/certs/remotes-unified.crt
smtpd_tls_key_file = /etc/certs/remotes.key
smtpd_tls_security_level = may
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

And also, turn on a few restrictions to get rid of some junk right at the SMTP stage:

smtpd_helo_restrictions = reject_unknown_helo_hostname

smtpd_sender_restrictions = reject_unknown_sender_domain

smtpd_relay_restrictions = permit_sasl_authenticated,
                           check_policy_service unix:private/policy-spf,
                           reject_rbl_client zen.spamhaus.org,
                           reject_rhsbl_reverse_client dbl.spamhaus.org,
                           reject_rhsbl_helo dbl.spamhaus.org,
                           reject_rhsbl_sender dbl.spamhaus.org

smtpd_data_restrictions = reject_unauth_pipelining

policy-spf_time_limit = 3600s

And finally, to disable local delivery until users set up their forwarding address, I added /etc/skel/.forward containing:

|"echo 'No forwarding address'; exit 67"

Postfix will reply with:

<rasmus@remotes.club>: user unknown. Command output: No forwarding address

if people try to email a user who hasn’t configured a forwarding address.
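That 67 is EX_NOUSER from sysexits.h, which is exactly why Postfix maps it to a “user unknown” bounce. You can see the mechanics from a shell:

```shell
# Mimic what the .forward pipe does: print a message, exit with EX_NOUSER (67)
sh -c "echo 'No forwarding address'; exit 67"
echo "exit status: $?"
# prints:
# No forwarding address
# exit status: 67
```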

A Minimal Account Request setup

And yes, I still write my PHP code as if it was 1995. But I write it really quickly :) I timed myself on this one. It took me almost exactly 90 minutes to write the bulk of it which didn’t include the frontend css tweaking. That part always takes me forever.

So a very quick summary of the account request system:

mysqladmin -u root -p create requests

My Schema:

CREATE TABLE `users` (
  `userid` char(32) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
  `email` char(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
  `pubkey` blob NOT NULL,
  `code` char(33) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
  `note` text COLLATE utf8_bin,
  `verified` bool NOT NULL DEFAULT '0',
  `ts` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`userid`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='Account requests';

create it:

mysql -u root -p < requests.users.sql

Create a user and grant access to this table:

mysql -u root -p requests
mysql> CREATE USER 'requests'@'localhost' IDENTIFIED BY 'xxxxxxxxx';
mysql> GRANT ALL ON requests.users TO 'requests'@'localhost';
mysql> flush privileges;    

ORMs, frameworks? Those are great, but I still like super simple db layers for tiny applications like this. Here is what I wrote for this. It shows a good use of variadics and argument unpacking introduced in PHP 5.6:

class db {
    static $dbh = false;

    static function connect() {
        global $mysql_host, $mysql_user, $mysql_pass, $mysql_db;

        $mysqli = new mysqli($mysql_host, $mysql_user, $mysql_pass, $mysql_db);
        if ($mysqli->connect_error) {
            echo $mysqli->connect_error;
            return false;
        }
        self::$dbh = $mysqli;
        return true;
    }

    static function query($query, ...$args) {
        if (!self::$dbh) self::connect();
        if (!self::$dbh) return false;
        if (!count($args)) return self::$dbh->query($query);
        else {
            $stmt = self::$dbh->prepare($query);
            if (!$stmt) {
                echo self::$dbh->error;
                return false;
            } else {
                if(!$stmt->bind_param(str_repeat('s',count($args)), ...$args)) {
                    echo self::$dbh->error;
                    return false;
                }
                $res = $stmt->execute();
                if(!$res) {
                    echo self::$dbh->error;
                    return false;
                }
                return $stmt->get_result();
            }
        }
    }
}

And my actual html page, using the Yahoo Pure CSS grid and forms along with PHP 7’s null coalescing operator:

<!doctype html>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/pure/0.5.0/pure-min.css">
    <!--[if lte IE 8]>
        <link rel="stylesheet" href="css/grid-old-ie.css">
    <![endif]-->
    <!--[if gt IE 8]><!-->
        <link rel="stylesheet" href="css/grid.css">
    <!--<![endif]-->
    <link rel="stylesheet" href="css/remotes.css">
<?php include '/home/www/phplib/request_form.php'; ?>
<div id="wrapper">
<h1>remotes.club Account Request Form</h1>
<div class="pure-g">
  <div class="pure-u-1-1">
    <div class="error"><?=$error?></div>
<?php /* Show the main form */
  if(empty($code) && empty($vcode)): ?>
    <form class="pure-form pure-form-stacked" method="POST">
      <div class="pure-u-1 pure-u-md-1-3">
        <label for="userid">Desired user id</label>
        <input type="text" name="userid" placeholder="normally 8 or less chars" 
                           value="<?=$userid??""?>" required>
      </div>
      <div class="pure-u-1 pure-u-md-1-2">
        <label for="email">Email address</label>
        <input class="pure-input-1-2" type="email" 
               name="email" value="<?=$email??""?>" required>
      </div>
      <div class="pure-u-1 pure-u-md-1-3">
        <label for="pubkey">Your 
           <a href="https://help.github.com/articles/generating-ssh-keys/">public ssh key</a>
        </label>
        <textarea name="pubkey" placeholder="Just copy and paste your public key here.
Carriage returns and linefeeds will be removed, so don't
worry about trying to manually remove those." 
                  cols="80" rows="10" required><?=$pubkey??""?></textarea>
      </div>
      <div class="pure-u-1 pure-u-md-1-3">
        <label for="note">Brief description</label>
        <textarea name="note" placeholder="Who are you? Where do you work?"
                  cols="80" rows="10"><?=$note??""?></textarea>
      </div>
      <div class="pure-u-1 pure-u-md-1-3">
        <button type="submit" class="pure-button pure-button-primary">Submit</button>
      </div>
    </form>
<?php /* Show the verification code input form */
  elseif(!empty($code) && empty($vcode)): ?>
    <p>You should have received a code in your email.</p>
    <form class="pure-form pure-form-stacked" method="POST">
      <div class="pure-u-1 pure-u-md-1-2">
        <label for="vuserid">Desired user id</label>
        <input type="text" name="vuserid" value="<?=$userid?>" readonly>
      </div>
      <div class="pure-u-1 pure-u-md-1-2">
        <label for="vcode">Email verification code</label>
        <input class="pure-input-1-2" type="text" name="vcode" required>
      </div>
      <div class="pure-u-1 pure-u-md-1-3">
        <button type="submit" class="pure-button pure-button-primary">Submit</button>
      </div>
    </form>
<?php /* Success */
  elseif($request_accepted): ?>
   <p>Ok, your account request is in the system. Someone will get to it soon.</p>
<?php endif?>
  </div>
</div>
</div>

And finally the controller, if you prefer to call it that:

include 'phplib/db.php';
include '.dbpass';

function active_user($user) {
    $lines = file("/etc/passwd");
    foreach($lines as $line) {
      list($username,$junk) = explode(':',$line,2);
      if(strtolower($username) == $user) return true;
    return false;

$error = '';
$request_accepted = false;
/* Check main form input and send an email verification code on success */
if(!empty($_POST['userid'])) {
    $userid = strtolower($_POST['userid']);
    $email  = strtolower($_POST['email']);
    $pubkey = strtr($_POST['pubkey'], ["\r"=>"","\n"=>"","\t"=>""]);
    $note   = $_POST['note'];

    if(!preg_match("/^[a-z][a-z0-9]{0,30}$/", $userid)) {
      $error = "illegal characters in user id";
      goto done;
    if(db::query("SELECT userid FROM users where userid=?", $userid)->num_rows) {
      $error = "user id is not available";
      goto done;
    if(active_user($userid)) {
      $error = "user id is not available";
      goto done;
    if(!filter_var($email, FILTER_VALIDATE_EMAIL) || strstr($email,'@remotes.club')) {
      $error = "invalid email address";
      goto done;
    $code = bin2hex(openssl_random_pseudo_bytes(16));
    db::query("INSERT into users (userid,email,pubkey,code,note) values (?,?,?,?,?)", 
              $userid, $email, $pubkey, $code, $note); 
    mail($email, "Remotes.Club Email Verification", "This is your verification code: $code\n");
done:
    // We are done processing, filter these for display
    $userid = htmlspecialchars($_POST['userid'], ENT_COMPAT, 'UTF-8');
    $email  = htmlspecialchars($_POST['email'], ENT_COMPAT, 'UTF-8');
    $pubkey = htmlspecialchars(strtr($_POST['pubkey'],["\r"=>"","\n"=>"","\t"=>""]),ENT_NOQUOTES,'UTF-8');
    $note   = htmlspecialchars(strtr($_POST['note'],["\r"=>"","\n"=>"","\t"=>""]),ENT_NOQUOTES,'UTF-8');
}
/* Check that the email verification code matches */
if(!empty($_POST['vcode'])) {
    $vcode = $_POST['vcode'];
    $vuserid = $_POST['vuserid'];
    $record = db::query("SELECT * from users where userid=?", $vuserid)->fetch_assoc(); 
    if(strcmp($vcode, $record['code']) === 0) {
        db::query("UPDATE users set verified=1 where userid=?",$vuserid);
        $request_accepted = true;
    } else {
        $error = "Code didn't match";
    }
}
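
As a side note, strcmp() is not a constant-time comparison, so in theory this check leaks timing information about the stored code. That matters little for a one-shot verification code, but on PHP 5.6 and later (newer than what Wheezy ships, so treat this as a sketch rather than something I ran here) the same check could use hash_equals():

    // hash_equals() compares in constant time; known string first, user input second
    if(hash_equals($record['code'], $vcode)) {
        db::query("UPDATE users set verified=1 where userid=?",$vuserid);
        $request_accepted = true;
    } else {
        $error = "Code didn't match";
    }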

And to finish it off, a command line tool for admins to approve account requests. This also checks for compromised ssh keys using ssh-vulnkey so you will need to install the blacklists:

apt-get install openssh-blacklist openssh-blacklist-extra

Running ssh-vulnkey has the added benefit of catching copy+paste mistakes, since the key won’t decode if it is off by a character. We could also run this as part of the initial web interface, but I really hate shelling out from web requests. Also note that admins who run this will need sudo access:

include 'phplib/db.php';
include '.dbpass';

$ver = ["\033[1;31m✗\033[0m","\033[0;32m✓\033[0m"];

/* List */
if($argc==1 || ($argc==2 && $argv[1][0]=='l')) {
  $users = db::query("SELECT * from users order by ts asc")->fetch_all(MYSQLI_ASSOC);
  if(count($users)==0 && ($argc==1 || $argv[1]!='lq')) {
    echo "No users in the request queue\n"; exit(0);
  }
  foreach($users as $u) {
    echo $ver[$u['verified']];
    echo " {$u['userid']}\t{$u['email']}\t({$u['ts']}): {$u['note']}\n";
  }
}

/* Delete */
if($argc==3 && $argv[1][0]=='d') {
  $user = db::query("SELECT * from users where userid=?", $argv[2])->fetch_assoc();
  if(!$user) {
    echo "No such user\n"; exit(1);
  }
  $status = db::query("DELETE from users where userid=?", $argv[2]);
}

/* Add */
if($argc==3 && $argv[1][0]=='a') {
  $user = db::query("SELECT * from users where userid=?", $argv[2])->fetch_assoc();
  if(!$user) {
    echo "No such user\n"; exit(1);
  }
  $userid = $user['userid'];
  $pubkey = escapeshellarg($user['pubkey']);
  $res = shell_exec("echo $pubkey | ssh-vulnkey - 2>&1 1> /dev/null");
  if($res) {
    echo $res; exit(1);
  }
  system("sudo adduser --disabled-password --gecos \"\" $userid");
  system("sudo bash -c 'echo {$user['email']} > /home/$userid/.forward'");
  system("sudo mkdir -m 700 /home/$userid/.ssh");
  system("sudo bash -c \"echo $pubkey > /home/$userid/.ssh/authorized_keys\"");
  system("sudo chown -R $userid:$userid /home/$userid");
  system("sudo chmod 700 /home/$userid/.ssh");
  system("sudo mkdir -m 775 /home/$userid/logs");
  system("sudo chown $userid:www-data /home/$userid/logs");
  // send email to user
  $message = <<< EOB
Hi! Welcome to remotes.club. To access your shell account:
EOB;
  mail($user['email'], "Welcome to remotes.club!", $message,
       "From: remotes.club <noreply@remotes.club>\r\n");
  db::query("DELETE from users where userid=?", $argv[2]);
}

/* Usage */
if($argc==2 && ($argv[1][0]=='h' || $argv[1]=="-h" || $argv[1]=="--help")) {
  echo <<<EOB
Usage: remotes [options] [command] [arg]

   --help          This help
   l(ist)          List all user ids in the request table (default)
   lq              List all user ids in the request table, don't report if there aren't any (for cron)
   a(dd) <userid>  Approve <userid> and create account
   d(el) <userid>  Delete <userid>

EOB;
}

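Assuming the script above is saved as remotes somewhere in an admin's PATH (the name is just taken from its usage text), a session might look like this, with made-up user data:

$ remotes
✓ alice	alice@example.com	(2014-10-18 09:30:01): friend of Bob
$ remotes add alice
$ remotes
No users in the request queue

The ✓/✗ prefix comes from the $ver array and reflects the verified column, so an admin can see at a glance whether the email verification step completed before approving an account.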
Way up in the PHP section I mentioned that new users are automatically added to the www-data group. This is done by adding:

EXTRA_GROUPS="www-data"
ADD_EXTRA_GROUPS=1

to /etc/adduser.conf and the script above calls adduser to create new users.

Overall this provides a decent playground for people to experiment with things. It is not overly secure, as noted in the LXC caveats earlier, and both the shared resources and the loose security within the system mean a certain level of trust is required of the users who are granted access.

And if you are wondering, this document was created from markdown with:

pandoc -s -S -c css/pandoc.css --highlight-style pygments remotes.md -o remotes.html

You can see the source markdown here: https://rasmus.remotes.club/remotes.md