Reverse-proxy yourself to localhost with SSL/TLS

Some time ago Scott Hanselman described how to set up self-signed certificates for localhost using dotnet dev-certs. Having SSL on localhost is, for me, a must-have, since we all want our dev env to resemble production as much as possible. The approach Scott showed is great, but it might be a little hard to use on Linux. On Linux-based systems there are multiple libraries, multiple (probably embedded) certificate stores and hundreds of options to configure all of this. I’ll show you another approach that will allow you to develop apps locally with full SSL/TLS and nice addresses.

You own a domain

The idea is that DNS servers allow any valid IPv4/IPv6 address in A/AAAA records. So who will prevent us from putting the loopback address there? :) When you own a domain and control its DNS servers (zones), you can point one of the sub-domains to localhost. I did exactly that - set the A and AAAA records (including the wildcard *) to 127.0.0.1 and ::1.
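As a sketch, such records would look roughly like this in a zone file. Everything here is a placeholder - the domain example.com, the sub-domain local and the TTLs are illustrative, not the actual names from this post:

```
; Hypothetical zone fragment for example.com
local    3600  IN  A     127.0.0.1
local    3600  IN  AAAA  ::1
*.local  3600  IN  A     127.0.0.1
*.local  3600  IN  AAAA  ::1
```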

$ drill ANY
;; ->>HEADER<<- opcode: QUERY, rcode: NOERROR, id: 40407
;; flags: qr tc rd ra ; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
;;    IN  ANY

;; ANSWER SECTION:
    3479    IN  A       127.0.0.1
    3479    IN  AAAA    ::1
# Cut...
$ drill * ANY
;; ->>HEADER<<- opcode: QUERY, rcode: NOERROR, id: 62844
;; flags: qr tc rd ra ; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
;; *  IN  ANY

* 3598    IN  A       127.0.0.1
* 3598    IN  AAAA    ::1
# Cut...

From now on, instead of going to localhost:8080, I (and you too!) can use these addresses.

Add reverse proxy to the mix

It is rarely the case that a project consists of a single application. With the rise of containers, we tend to split projects into multiple semi-independent apps (I won’t call them microservices ;) ) that are developed by different people. Let’s assume that we have a simple backend (that returns the string backend when requesting /) and an even simpler frontend that just returns frontend. We could run them directly on our PC on different ports, but why not use containers? That way we can bind them to the same port (thanks to separate net namespaces) and give ourselves some flexibility (combined with ease of configuration and better resemblance of the production environment).

We still have one minor problem - how do we access the containers? Let’s run another container! It will bind to port 80 (and 443) on the real localhost and it will be responsible for routing external traffic (i.e. requests from outside of the Docker network) to the corresponding containers, as it will have access to both the host and the overlay network. It might do this based on the Host header (that’s why I’ve added the wildcard A record) or anything you want, really. Host seems to be the easiest method because of awesome projects like jwilder/nginx-proxy. It does magic and automatically discovers containers that want to be exposed. Add a little bit of YAML and you have fully featured, nginx-based applications with a reverse proxy and custom domain names running on your local computer:

version: "3"
services:
  backend:
    image: nginx
    environment:
      # Sub-domain under your own domain - adjust to match your DNS records
      - VIRTUAL_HOST=backend.example.com
    # Why would you create separate Dockerfiles when you can abuse the
    # entrypoint? ;)
    entrypoint: >-
      /bin/sh -c 'echo backend > /usr/share/nginx/html/index.html &&
      nginx -g "daemon off;"'

  frontend:
    image: nginx
    environment:
      - VIRTUAL_HOST=frontend.example.com
    entrypoint: >-
      /bin/sh -c 'echo frontend > /usr/share/nginx/html/index.html &&
      nginx -g "daemon off;"'

  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      # - "443:443" we still don't have certificates, so leaving it disabled for now
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

Et voilà! We now have two separate apps running on our local PC that can be accessed using

$ curl
$ curl

But beware: even though I use docker-compose here, I’m giving the proxy container access to docker.sock, not some docker-compose-constrained socket. It can (and will) inspect the whole state of the Docker engine, so it will expose all containers that have VIRTUAL_HOST set.
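For the curious, the Host-based routing boils down to ordinary nginx server blocks. This is a simplified, hand-written sketch of the kind of config nginx-proxy generates from container metadata - the host name and the container IP below are placeholders:

```nginx
# Roughly what nginx-proxy generates per discovered VIRTUAL_HOST (sketch)
upstream backend.example.com {
    server 172.18.0.2:80;  # the backend container's IP on the Docker network
}
server {
    listen 80;
    server_name backend.example.com;
    location / {
        # Forward matching requests to the container, preserving Host
        proxy_pass http://backend.example.com;
        proxy_set_header Host $host;
    }
}
```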

And a little bit of Let’s Encrypt

We now have two sites - the so-called backend and frontend - running on our local machine that can be accessed (from our local machine only!) using normal addresses. Unfortunately, it all works over plain HTTP, but the main promise of this post was to expose them over HTTPS. Here the awesome Let’s Encrypt service comes to the rescue.

Let’s Encrypt will generate a (trusted!) certificate for your domain for free, provided that you show them you own it. This is one of the best services out there, especially since the beginning of last year, when they started supporting wildcard certificates.

Let’s Encrypt and their ACME protocol require you to prove that you own the domain you are trying to generate a certificate for. You can either expose a well-known file with a challenge token (the HTTP challenge) or use DNS records for the same purpose (the DNS challenge). We cannot really use the first approach here, as we’ve already pointed the domain to localhost and would need to correctly handle record changes, TTLs and much more.
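In short, the DNS challenge boils down to publishing a TXT record under the _acme-challenge name with a token handed out by Let’s Encrypt. An illustrative record (the sub-domain is a placeholder and the token value is fabricated):

```
; TXT record published for the DNS-01 challenge
_acme-challenge.local  120  IN  TXT  "TOKEN-FROM-LETS-ENCRYPT"
```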

In our case the DNS challenge is much easier to use, and there are multiple tools that can automate the DNS updates for us. One of my favorites is the acme.sh script. It automatically adds the necessary records (it supports multiple providers), generates the certificate, handles renewals, cleans up after itself and more.

Since I use OVH DNS servers for the domain and acme.sh supports them, issuing the certificate is as easy as

$ export OVH_AK="..."
$ export OVH_AS="..."
$ # Optional: export OVH_CK="..."
$ acme.sh --issue -d '' -d '*' --dns dns_ovh --test # Always use LE Staging first

Mix it all

All the blocks are now ready; we just need to give the certificate to the proxy. It is possible to do so with only a slight modification of the docker-compose.yml file shown above: we just need to install the certificate (acme.sh --install-cert) into a known location and attach it as a volume at /etc/nginx/certs. It will work, but it will make moving to another machine painful (i.e. you need to embed the full path to the certificate in the compose file). We can do better here - let’s embed the certificate directly in the image!
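For reference, the volume-based variant dismissed above would look something like this fragment. The host path is a machine-specific placeholder, which is exactly the pain point:

```yaml
# Hypothetical sketch: mounting pre-exported certs instead of baking them in
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      # Full, machine-specific path where the certificate was exported
      - /home/me/certs:/etc/nginx/certs:ro
```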

This can be done with multi-stage builds. The first stage generates & exports the certificate (using the neilpang/acme.sh image) and the second one (based on jwilder/nginx-proxy) just copies it to the correct location. Nothing fancy, just a couple of lines in a Dockerfile:

FROM neilpang/acme.sh AS cert

ARG OVH_AK
ARG OVH_AS
ARG OVH_CK

# Re-export args as ENV
ENV OVH_AK=$OVH_AK \
    OVH_AS=$OVH_AS \
    OVH_CK=$OVH_CK

# Issue & export the certificate
# This has to be done in a single RUN statement as the base image marks /
# as VOLUME so it will be purged after the statement (and we cannot mount
# volumes during build phase)
RUN mkdir /export
RUN acme.sh --issue \
    --dns dns_ovh \
    -d '' -d '*' && \
    acme.sh --install-cert -d '' \
    --key-file /export/key.pem \
    --fullchain-file /export/fullchain.pem

# And the final proxy
FROM jwilder/nginx-proxy:alpine

COPY --from=cert /export/fullchain.pem /etc/nginx/certs/
COPY --from=cert /export/key.pem /etc/nginx/certs/

Build it and tag it (or change the compose file to do this for you):

$ docker build \
    -t proxy-with-ssl \
    --build-arg OVH_AK=$OVH_AK \
    --build-arg OVH_AS=$OVH_AS \
    --build-arg OVH_CK=$OVH_CK \
    .

Change the image used in the compose file and everything will just work:

$ curl
$ curl
$ openssl s_client -connect </dev/null
Certificate chain
 0 s:CN =
   i:C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
 1 s:C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
   i:O = Digital Signature Trust Co., CN = DST Root CA X3

Yay! Mission accomplished! The main drawback of this approach is that you need a sensible API for your DNS servers (but that should not be a problem) and you need to generate some API tokens for it. You also need to renew the certificate every ~60 days or so (it is valid for 90 days).
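Since the certificate is baked into the image, renewal is just a scheduled rebuild. A hypothetical crontab entry (the path and service name are placeholders, and it assumes the compose file builds the proxy image):

```
# Rebuild the proxy image (re-issuing the cert) and restart it every two months
0 4 1 */2 * cd /home/me/project && docker-compose build proxy && docker-compose up -d proxy
```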

I am aware that embedding secrets in the image itself might not be the most secure approach, so NEVER, EVER DO THIS WITH production images OR WITH PUBLIC DOMAINS.

For dev purposes it helps tremendously and might ease the development process, but always keep in mind the security implications it has. Also, do not publish the image to Docker Hub or any other public registry. ;)

Final code is available in this gist.


Pointing a domain you own to localhost, having a valid certificate for it and using normal domains for local development blurs the differences between the dev and production environments even more (which is a good thing!). It helped me and my team at work to use SSL/TLS everywhere and I think that now every one of us has a better understanding of how it all works (and we don’t have mixed content any more ;) ). It has its limitations, but I think it is worth checking out.

In the next episode (probably not 2 years from now ;) ) - how to run the proxy in Docker and everything else on localhost (also - HSTS).