Local dev setup for a complex app using docker-compose
Permalink for this talk:
https://domm.plix.at/talks/2021_cic_docker_compose
docker-compose for local dev
not for production use!
https://oe1.orf.at
"Österreich 1 (Ö1) is an Austrian radio station: one of the four national channels operated by Austria's public broadcaster ORF. It focuses on classical music and opera, jazz, documentaries and features, news, radio plays and dramas, Kabarett, quiz shows, and discussions"
I (and a few other freelancers) have been making their website since ~2004
MP3 Download Server (PSGI / Starlet / Streaming responses)
PostgreSQL DB
ElasticSearch
Redis
Outgoing Mail
ImageProxy
Nginx
7 Web services
3 Storage services
2 more services (nginx, mail-out)
That's quite a lot of stuff to configure on your laptop...
docker-compose
"Compose is a tool for defining and running multi-container Docker applications."
https://docs.docker.com/compose/install/
alias d-c="docker-compose"
"With Compose, you use a YAML file to configure your application’s services."
More on that in a minute.
"Then, with a single command, you create and start all the services from your configuration."
~/jobs/oe1.orf.at$ d-c up
~/jobs/oe1.orf.at$ d-c up
Starting oe1orfat_imageproxy_1 ... done
Starting oe1orfat_web_1 ... done
Starting oe1orfat_mp3download_1 ... done
Starting oe1orfat_rkh_1 ... done
Starting oe1orfat_api_1 ... done
Starting oe1orfat_admin_1 ... done
...
rkh_1 | Starman: Accepting connections at http://*:4006/
api_1 | Starman: Accepting connections at http://*:4005/
web_1 | Starman: Accepting connections at http://*:4001/
...
Or if you only need to work on one service:
~/jobs/oe1.orf.at$ d-c up web
~/jobs/oe1.orf.at$ d-c up web
Starting oe1orfat_db_1 ... done
Starting oe1orfat_es5_1 ... done
Starting oe1orfat_redis_1 ... done
Starting oe1orfat_smtp_1 ... done
Starting oe1orfat_web_1 ... done
web_1 | Starman: Accepting connections at http://*:4001/
Services can define dependencies
e.g. web depends on db, es, redis, and smtp
Which is why all of those services have been started automatically, even though I only asked for web
docker-compose.yml
services:
  web:
  admin:
  api:
  rkh:
  db:
  redis:
  es5:
  smtp:
  nginx:
networks:
volumes:
Our Apps
services:
  web:
    build:
      context: ./
    depends_on:
      - db
      - redis
      - es5
      - smtp
      - nginx
    volumes:
      - ./bin:/home/oe1/bin
      - ./sql:/home/oe1/sql
      - ./lib:/home/oe1/lib
      - ./root:/home/oe1/root
      - ./etc/docker-compose/oe1.pl:/home/oe1/etc/oe1_local.pl
      - ./etc/docker-compose/pg_service.conf:/home/oe1/.pg_service.conf
    networks:
      - oe1net
    ports:
      - 10401:4001
    command: plackup -p 4001 -R /home/oe1/lib -s Starman --workers 3 /home/oe1/bin/oe1_web.psgi
services:
  web:
    build:
      context: ./
Where to find the Dockerfile used to build the container that implements this service
In this case, in the same directory as the docker-compose.yml
~/jobs/oe1.orf.at$ ls -la
-rw-r--r-- 1 domm domm 1205 Jan 20 17:17 Dockerfile
-rw-r--r-- 1 domm domm 3286 Jun  7 16:45 docker-compose.yml
-rw-r--r-- 1 domm domm 2189 Feb  9 16:52 cpanfile
drwxr-xr-x 7 domm domm 4096 Jun  7 16:45 bin
drwxr-xr-x 3 domm domm 4096 Jun  7 16:45 lib
...
The first time you start this service, docker-compose will use this Dockerfile to build the container.
services:
  web:
    depends_on:
      - db
      - redis
      - es5
      - smtp
      - nginx
The list of other services this service needs
Which will be started (and built, if needed) automatically
services:
  web:
    depends_on:
      - db
      - redis
      - es5
      - smtp
      - nginx
  db:
  redis:
  es5:
  smtp:
  nginx:
services:
  web:
    volumes:
      - ./bin:/home/oe1/bin
      - ./sql:/home/oe1/sql
      - ./lib:/home/oe1/lib
      - ./root:/home/oe1/root
      - ./etc/docker-compose/oe1.pl:/home/oe1/etc/oe1_local.pl
      - ./etc/docker-compose/pg_service.conf:/home/oe1/.pg_service.conf
Using volumes
we can mount files or directories from the host computer (your laptop) into the docker container
Or share files and directories between services
services:
  web:
    volumes:
      - ./lib:/home/oe1/lib
This mounts the lib dir from my host into the container at /home/oe1/lib
So I can open my code on my host in my editor, change some code, save it.
And have the new version available immediately inside the container, without having to rebuild it.
Together with auto-restarting services, this allows for a very smooth dev experience
services:
  web:
    volumes:
      - ./etc/docker-compose/oe1.pl:/home/oe1/etc/oe1_local.pl
      - ./etc/docker-compose/pg_service.conf:/home/oe1/.pg_service.conf
This mounts some config files into the container
The config inside the containers will usually be slightly different from what you use in production
services:
  web:
    networks:
      - oe1net
You can define and name networks, which makes it a little easier to have various services talk to one another
services:
  web:
    ports:
      - 10401:4001
Here we define the port mapping, i.e. which port inside the container shall be mapped to which port on the host.
In this case, we use port 4001 inside the container, and map it to 10401 on the host.
services:
  web:
    command: plackup -p 4001 -R /home/oe1/lib -s Starman --workers 3 /home/oe1/bin/oe1_web.psgi
command lets us define which command to run inside the container. Usually this would be defined in the Dockerfile as CMD, but often you will want to run a slightly different command during dev.
plackup -p 4001 -R /home/oe1/lib -s Starman --workers 3 /home/oe1/bin/oe1_web.psgi
The -R flag restarts the server if any file inside /home/oe1/lib changes
Not a good idea in production...
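For comparison, the end of the production Dockerfile might look roughly like this (a hypothetical sketch; the actual Dockerfile is not shown here). The command: entry in docker-compose.yml overrides this CMD during dev:

```dockerfile
# Hypothetical sketch: the same plackup invocation as in dev,
# but without -R, so Starman does not watch the filesystem
# and restart on every code change.
CMD ["plackup", "-p", "4001", "-s", "Starman", "--workers", "3", "/home/oe1/bin/oe1_web.psgi"]
```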
services:
  web:
    build:
      context: ./
    depends_on:
      - db
      - redis
      - es5
      - smtp
      - nginx
    volumes:
      - ./bin:/home/oe1/bin
      - ./sql:/home/oe1/sql
      - ./lib:/home/oe1/lib
      - ./root:/home/oe1/root
      - ./etc/docker-compose/oe1.pl:/home/oe1/etc/oe1_local.pl
      - ./etc/docker-compose/pg_service.conf:/home/oe1/.pg_service.conf
    networks:
      - oe1net
    ports:
      - 10401:4001
    command: plackup -p 4001 -R /home/oe1/lib -s Starman --workers 3 /home/oe1/bin/oe1_web.psgi
services:
  admin:
    build:
      context: ./
    depends_on:
      - db
      - redis
      - es5
      - smtp
      - nginx
    volumes:
      - ./bin:/home/oe1/bin
      - ./sql:/home/oe1/sql
      - ./lib:/home/oe1/lib
      - ./root:/home/oe1/root
      - ./etc/docker-compose/oe1.pl:/home/oe1/etc/oe1_local.pl
      - ./etc/docker-compose/pg_service.conf:/home/oe1/.pg_service.conf
    networks:
      - oe1net
    ports:
      - 10402:4002
    command: plackup -p 4002 -R /home/oe1/lib -s Starman --workers 3 /home/oe1/bin/oe1_admin.psgi
We can reduce the duplicate code using YAML Anchors and Merge Keys
which are standard (if slightly weird) features of YAML
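As a quick illustration (a made-up fragment, not part of the oe1 setup): an anchor (&) names a mapping, and the merge key (<<) together with an alias (*) copies it into another mapping, where keys can be added or overridden:

```yaml
defaults: &defaults    # "&defaults" defines the anchor
  adapter: postgres
  host: localhost

development:
  <<: *defaults        # merge everything from the anchored mapping
  database: myapp_dev  # keys given here are added on top
```

A useful detail: recent versions of the Compose file format (3.4+) ignore top-level keys starting with x-, which is why the shared block in the oe1 setup can live at the top level as x-perlapp without Compose treating it as a service.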
x-perlapp: &perlapp
  build:
    context: ./
  depends_on:
    - db
    - redis
    - es5
    - smtp
    - nginx
  volumes:
    - ./bin:/home/oe1/bin
    - ./sql:/home/oe1/sql
    - ./lib:/home/oe1/lib
    - ./root:/home/oe1/root
    - ./etc/docker-compose/oe1.pl:/home/oe1/etc/oe1_local.pl
    - ./etc/docker-compose/pg_service.conf:/home/oe1/.pg_service.conf
  networks:
    - oe1net
services:
  web:
    <<: *perlapp
    ports:
      - 10401:4001
    command: plackup -p 4001 -R /home/oe1/lib -s Starman --workers 3 /home/oe1/bin/oe1_web.psgi
  admin:
    <<: *perlapp
    ports:
      - 10402:4002
    command: plackup -p 4002 -R /home/oe1/lib -s Starman --workers 3 /home/oe1/bin/oe1_admin.psgi
services:
  web:
    <<: *perlapp
    ports:
      - 10401:4001
    command: plackup -p 4001 -R /home/oe1/lib -s Starman --workers 3 /home/oe1/bin/oe1_web.psgi
  admin:
    <<: *perlapp
    ports:
      - 10402:4002
    command: plackup -p 4002 -R /home/oe1/lib -s Starman --workers 3 /home/oe1/bin/oe1_admin.psgi
  api:
    <<: *perlapp
    ports:
      - 10405:4005
    command: plackup -p 4005 -R /home/oe1/lib -s Starman --workers 3 /home/oe1/bin/oe1_api.psgi
  mp3download:
    <<: *perlapp
    ports:
      - 10403:4003
    command: plackup -p 4003 -s Starlet --workers 3 /home/oe1/bin/oe1_mp3_download.psgi
  rkh:
    <<: *perlapp
    ports:
      - 10406:4006
    command: plackup -p 4006 -R /home/oe1/lib -s Starman --workers 3 /home/oe1/bin/oe1_rkh.psgi
The dependencies
Redis
redis:
  image: redis:alpine
  networks:
    - oe1net
  ports:
    - 46379:6379
We use image
instead of build / context
This will pull the specified image (redis:alpine) from Docker Hub
A lot of services are available there, which saves a lot of work!
The first time you start this service, the docker image will be downloaded and installed
This will take some time...
But it's so much easier than installing it on the host and fiddling with all the settings.
We use the same network
and specify some port mappings.
Time for a "live demo"...
~/jobs/oe1.orf.at$ d-c up web redis
~/jobs/oe1.orf.at$ d-c up web redis
redis_1 | * Ready to accept connections
web_1 | Starman: Accepting connections at http://*:4001/
~/jobs/oe1.orf.at$ redis-cli -p 46379
~/jobs/oe1.orf.at$ redis-cli -p 46379
127.0.0.1:46379> set hello perl
OK
127.0.0.1:46379> get hello
"perl"
~/jobs/oe1.orf.at$ d-c exec redis /bin/sh
/data # redis-cli
127.0.0.1:6379> get hello
"perl"
~/jobs/oe1.orf.at$ d-c exec web bash
oe1@867cdddb0593:~$ redis-cli
bash: redis-cli: command not found

oe1@867cdddb0593:~$ perl -MRedis -E 'say Redis->new( server => "127.0.0.1:46379")->get("hello")'
Could not connect to Redis server at 127.0.0.1:46379: Connection refused

oe1@867cdddb0593:~$ perl -MRedis -E 'say Redis->new( server => "127.0.0.1:6379")->get("hello")'
Could not connect to Redis server at 127.0.0.1:6379: Connection refused
Does not work, because web and redis behave like different hosts
oe1@867cdddb0593:~$ perl -MRedis -E 'say Redis->new( server => "redis:6379")->get("hello")'
perl
You can access other services via their name.
And then use their "internal" ports.
You only need the external ports when you want to access a service from the host.
PostgreSQL
db:
  image: postgres:11-alpine
  networks:
    - oe1net
  ports:
    - 15010:5432
  volumes:
    - ./bin/db/dockercompose-init-db.sh:/docker-entrypoint-initdb.d/init-user-db.sh:ro
    - ./sql/oe1_dev.pgdump:/oe1_dev.pgdump:ro
    - oe1db:/var/lib/postgresql/data
db:
  image: postgres:11-alpine
  networks:
    - oe1net
  ports:
    - 15010:5432
I like to map the ports, so I can access the services directly from my host machine
yes, this means I have to install the various clients (redis-cli, psql, ...) on my host.
db:
  volumes:
    - ./bin/db/dockercompose-init-db.sh:/docker-entrypoint-initdb.d/init-user-db.sh:ro
    - ./sql/oe1_dev.pgdump:/oe1_dev.pgdump:ro
    - oe1db:/var/lib/postgresql/data
The PostgreSQL container from Docker Hub runs any script it finds in /docker-entrypoint-initdb.d/ (here: init-user-db.sh) after the database has been initialized, i.e. on first start.
We can use this to install our dev DB
~/jobs/oe1.orf.at$ cat bin/db/dockercompose-init-db.sh
#!/bin/bash
set -e

psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
    CREATE ROLE "oe1_connect" ENCRYPTED PASSWORD '1234' NOSUPERUSER NOCREATEDB NOCREATEROLE;
    CREATE ROLE "oe1" NOLOGIN;
    CREATE ROLE "oe1_dbadmin" NOLOGIN;
    CREATE DATABASE "oe1" TEMPLATE template0 ENCODING UTF8;
    GRANT "oe1" TO "oe1_connect";
    GRANT "oe1_dbadmin" TO "oe1_connect";
    GRANT ALL PRIVILEGES ON DATABASE "oe1" TO "oe1";
    create extension if not exists "uuid-ossp";
    ALTER DATABASE oe1 SET search_path = oe1;
EOSQL

pg_restore -1 -v -d oe1 --no-acl /oe1_dev.pgdump
db:
  volumes:
    - ./bin/db/dockercompose-init-db.sh:/docker-entrypoint-initdb.d/init-user-db.sh:ro
    - ./sql/oe1_dev.pgdump:/oe1_dev.pgdump:ro
    - oe1db:/var/lib/postgresql/data

volumes:
  oe1db:
Named volumes are created as Docker volumes and thus persist even when the container using them is rebuilt.
Everything you store inside a Docker container is ephemeral.
So when you rebuild the container, all data is gone.
Named volumes allow you to keep the data.
And to share it between different services.
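A minimal sketch (made-up service and volume names, not from the oe1 setup) of sharing a named volume between two services:

```yaml
services:
  writer:
    image: alpine
    volumes:
      - shared-data:/data     # both services mount the same Docker volume
  reader:
    image: alpine
    volumes:
      - shared-data:/data:ro  # mounted read-only here

volumes:
  shared-data:                # declared once at the top level
```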
ElasticSearch
ElasticSearch is quite a beast to install and run.
It's so much easier to just reuse a readily available docker container
es5:
  image: docker.elastic.co/elasticsearch/elasticsearch:5.6.10
  volumes:
    - oe1es5:/usr/share/elasticsearch/data
  environment:
    cluster.name: oe1
    bootstrap.memory_lock: 'true'
    ES_JAVA_OPTS: '-Xms512m -Xmx512m'
    node.master: 'true'
    node.data: 'true'
    node.ingest: 'false'
    node.ml: 'false'
    node.name: es5
    http.compression: 'true'
    http.port: 9200
    transport.tcp.port: 9300
    search.remote.connect: 'false'
    discovery.zen.minimum_master_nodes: 1
    indices.query.bool.max_clause_count: 10240
    xpack.security.enabled: 'false'
    xpack.monitoring.enabled: 'false'
    xpack.graph.enabled: 'false'
    xpack.watcher.enabled: 'false'
    xpack.ml.enabled: 'false'
  ulimits:
    memlock:
      soft: -1
      hard: -1
  networks:
    - oe1net
  ports:
    - 49200:9200
I won't go into details here.
But you can see that you can set environment variables
environment:
  cluster.name: oe1
  bootstrap.memory_lock: 'true'
  ES_JAVA_OPTS: '-Xms512m -Xmx512m'
  ...
Sending Mail
This application has to send various mails to the editors and sometimes to users (e.g. Opt-In mails)
You do not want to send mails during development
But you will need to test if it works, and if mails are correctly rendered.
smtp:
  image: "djfarrelly/maildev"
  command: ["bin/maildev", "--web", "1080", "--smtp", "1025"]
  networks:
    - oe1net
  ports:
    - "10025:1080"
A combined SMTP server and webmail client
All mails sent to the SMTP server are stored in memory only
and can be read via the included webmail client
# etc/docker-compose/oe1.pl
mail => {
    transport_class => 'Email::Sender::Transport::SMTP',
    args => {
        host => 'smtp',
        port => '1025',
    }
},
oe1@867cdddb0593:~$ bin/mailer
oe1@867cdddb0593:~$ bin/mailer
Running Oe1::Daemon::Mailer->run
Sending 1 overdue mails
Sent mail 12345 to gehoert.gewusst@orf.at
Very handy, because you never risk sending 1,000 test mails to a real person...
Accessing the website
Thanks to the port mappings, I can access the web service directly:
services:
  web:
    ports:
      - 10401:4001
But this does not look very good...
A lot of images are not loaded
The font is wrong
The cookie banner is always displayed, even after agreeing
Several features do not work
Most of this is caused by the fact that this website is part of the general orf.at network
We are using several widgets provided by orf.at as JavaScript snippets
But those will only load if we're running on a secure connection (https)
And if we're "inside" the orf.at domain.
An ugly /etc/hosts hack
~/jobs/oe1.orf.at$ cat /etc/hosts
~/jobs/oe1.orf.at$ cat /etc/hosts
127.0.0.1 oe1dev.orf.at oe1admindev.orf.at rkhdev.orf.at
Now my browser thinks it is accessing a site inside orf.at
which in fact is running on my laptop
and will happily load all assets from orf.at
Nginx
nginx:
  image: nginx:1-alpine
  volumes:
    - ./root:/oe1-root/
    - ./etc/docker-compose/nginx/dev.conf:/etc/nginx/conf.d/oe1.conf
    - ./etc/docker-compose/nginx/https.conf:/etc/nginx/oe1/https.conf
    - ./etc/nginx/:/etc/nginx/oe1/
  networks:
    - oe1net
  ports:
    - 10100:80
    - 10101:10101
volumes:
  - ./root:/oe1-root/
  - ./etc/docker-compose/nginx/dev.conf:/etc/nginx/conf.d/oe1.conf
  - ./etc/docker-compose/nginx/https.conf:/etc/nginx/oe1/https.conf
  - ./etc/nginx/:/etc/nginx/oe1/
file: etc/docker-compose/nginx/dev.conf
server {
    listen *:10101 ssl;
    server_name oe1dev.orf.at;

    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto 'https';
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;

    location / {
        set $upstream http://web:4001;
        proxy_pass $upstream;
    }

    location /uimg/ {
        alias /oe1-root/web/uimg/;
        try_files $uri @oe1live;
    }

    location @oe1live {
        proxy_pass https://oe1.orf.at;
    }

    include /etc/nginx/oe1/https.conf;
};
If an image is not available on my dev machine, nginx will make a proxy request to the live server and fetch it from there
So after I fetch a new dev DB from the live system, I don't have to sync all the images to my dev machine
file: etc/docker-compose/nginx/https.conf
ssl_certificate     /etc/nginx/oe1/oe1dev.orf.at+2.pem;
ssl_certificate_key /etc/nginx/oe1/oe1dev.orf.at+2-key.pem;

ssl_prefer_server_ciphers on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # not possible to do exclusive
ssl_ciphers ...
ssl_session_timeout 5m;
But you cannot use e.g. Let's Encrypt to get a certificate for a website that's not reachable from outside.
And oe1dev.orf.at is definitely not reachable from outside, because it's only defined in my /etc/hosts
Faking a certificate
https://github.com/FiloSottile/mkcert
"mkcert is a simple tool for making locally-trusted development certificates."
~/jobs/oe1.orf.at$ bin/tools/mkcert oe1dev.orf.at oe1admindev.orf.at rkhdev.orf.at
~/jobs/oe1.orf.at$ bin/tools/mkcert oe1dev.orf.at oe1admindev.orf.at rkhdev.orf.at
~/jobs/oe1.orf.at$ mv oe1dev.orf.at+2* etc/nginx/
volumes:
  - ./root:/oe1-root/
  - ./etc/docker-compose/nginx/dev.conf:/etc/nginx/conf.d/oe1.conf
  - ./etc/docker-compose/nginx/https.conf:/etc/nginx/oe1/https.conf
  - ./etc/nginx/:/etc/nginx/oe1/
file: etc/docker-compose/nginx/https.conf
ssl_certificate     /etc/nginx/oe1/oe1dev.orf.at+2.pem;
ssl_certificate_key /etc/nginx/oe1/oe1dev.orf.at+2-key.pem;
Summary
Running your dev setup via docker-compose
makes a lot of things a lot easier
The initial setup will be some work, especially if you haven't already dockerized your app, but it will be worth it.
docker-compose also makes it very easy to onboard new team members, or to switch to a new dev machine
Run docker-compose up, wait until everything is installed, and start hacking
No more fiddling with symlinks and installing weird dependencies on your laptop
And if you're a freelancer working on several projects, the encapsulation docker-compose
provides for each project is another great benefit.