# Server: caprica

Setup, configuration and documentation for my local toolserver, which also acts as a testing and staging ground for my container images. The setup follows an infrastructure-as-code approach. This machine takes care of Docker images and Vagrant boxes and handles some automation tasks.
| Key | Value |
|---|---|
| Hostname | caprica (FQDN = caprica.fritz.box) |
| OS | Ubuntu Server 22.04 |
| User (Password) | seb (start123) |
| Hardware | Lenovo ThinkCentre (Core i5, 126GB RAM) |
## Requirements Overview
> **TODO:** …
## Quality Goals

- The whole setup, including all my Vagrant boxes and all Docker images, is Infrastructure as Code.
- Software installations through classic package managers like `apt` should be avoided. As much as possible should run inside Docker containers. This applies to the host machine as well as to the Vagrant boxes.
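To illustrate the second goal, a wrapper script can stand in for a locally installed tool by delegating to a throwaway container. This is only a sketch; the image tag, mount paths and the `mvn` example are assumptions, not taken from the repository:

```shell
#!/usr/bin/env bash
# Sketch of a container-backed wrapper: instead of `apt install maven`,
# the `mvn` call is delegated to a throwaway container.
# IMAGE and the mount layout are assumptions, not from the repo.
set -o errexit

IMAGE="maven:3-eclipse-temurin-17"   # assumed image, pinned to match local versions

mvn() {
  docker run --rm \
    --volume "$PWD:/workspace" \
    --workdir /workspace \
    "$IMAGE" mvn "$@"
}

echo "wrapper defined"
```

A symlink from `/usr/bin` to such a script then makes `mvn` resolve to the containerized version transparently.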
## Building Block View / Whitebox Overall System

### Docker Stack: Ops

Exposes metrics for system monitoring. See Raspi Prometheus for more monitoring information.
| Container Name | Image | URL | Description | Restart |
|---|---|---|---|---|
| node_exporter | prom/node-exporter:latest | | Prometheus exporter to monitor system metrics | always |
| cadvisor | gcr.io/cadvisor/cadvisor:latest | | Prometheus exporter to monitor Docker containers | always |
| portainer | portainer/portainer-ce:alpine | | Manage Docker containers | always |
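The table above maps to a compose file along these lines. This is a minimal sketch only; the file location and the mounted volume are assumptions, not taken from this page:

```yaml
# ops/docker-compose.yml - sketch only; paths and volumes are assumptions
services:
  node_exporter:
    image: prom/node-exporter:latest
    restart: always
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    restart: always
  portainer:
    image: portainer/portainer-ce:alpine
    restart: always
    volumes:
      # assumed: Portainer manages containers via the Docker socket
      - /var/run/docker.sock:/var/run/docker.sock
```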
### Docker Stack: sommerfeld-io
This machine acts as a staging ground for my webapp images hosted on DockerHub.
| Container Name | Image | URL | Description | Restart |
|---|---|---|---|---|
| website | sommerfeldio/website:latest | | Image used for www.sommerfeld.io | unless-stopped |
All images from this stack are always pulled to make sure the latest version is running.
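The pull-before-start behavior can be sketched as follows. The compose file path is an assumption, and the commands are echoed rather than executed here:

```shell
#!/usr/bin/env bash
# Sketch: refresh the sommerfeld-io stack so the newest DockerHub image runs.
# COMPOSE_FILE is an assumed path; the commands are printed, not executed.
set -o errexit

COMPOSE_FILE="sommerfeld-io/docker-compose.yml"

echo "docker compose --file $COMPOSE_FILE pull"
echo "docker compose --file $COMPOSE_FILE up --detach"
```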
## Install

### Prepare bootable USB stick and install Ubuntu
1. Download Ubuntu Server from the Ubuntu website (use Option 2: Manual server installation). Use Ubuntu Server 22.04 LTS or higher.
2. Create a bootable USB stick from the downloaded ISO image with a tool like Etcher or the Startup Disk Creator (shipped with Ubuntu).
3. Install the machine from the stick (the setup wizard takes care of the hostname, network settings, ssh, …).
4. When prompted for a user and password, use the same information as on kobol. The default user `seb` is created later on when provisioning the system using Ansible.
5. Remember to install and activate the OpenSSH server when the wizard prompts for this!
6. Do not install any further software packages. Installations take place later on when provisioning the system using Ansible.
7. Test connecting to `caprica` via `ssh sebastian@caprica` and `ssh sebastian@caprica.fritz.box` (with `sebastian` being the user created while installing the OS). Make sure to test both hostnames (to ensure both are added to your `~/.ssh/known_hosts`)!
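To double-check that both hostnames really ended up in `~/.ssh/known_hosts`, `ssh-keygen -F` can look them up without connecting again. A small sketch; it only reads the file:

```shell
#!/usr/bin/env bash
# Check whether both hostnames from the ssh test are present in known_hosts.
for host in caprica caprica.fritz.box; do
  if ssh-keygen -F "$host" >/dev/null 2>&1; then
    echo "$host: known"
  else
    echo "$host: missing"
  fi
done
```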
### Provision System

Run `src/main/workstations/ansible.sh` from kobol to provision the machine.
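The script presumably wraps an `ansible-playbook` call along these lines. Inventory and playbook file names are assumptions, not taken from the repository, and the command is only echoed here:

```shell
#!/usr/bin/env bash
# Hypothetical core of ansible.sh: provision caprica from kobol.
# INVENTORY and PLAYBOOK names are assumptions, not from the repo.
set -o errexit

INVENTORY="hosts.yml"    # assumed inventory listing caprica
PLAYBOOK="caprica.yml"   # assumed playbook containing the tasks listed below

echo "ansible-playbook --inventory $INVENTORY $PLAYBOOK --ask-become-pass"
```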
### Tasks performed by Ansible Playbook

- Prepare localhost (not the remote machine) → Copy Maven wrapper script into place (ensure same versions on caprica as on localhost)
- Create default user with ssh keypair
- Config → Set timezone
- Config → Copy motd
- Config → Copy git configuration
- Config → Write aliases to .bashrc
- Config → Write commands to .bashrc
- Config → Update bash prompt in .bashrc
- Config → Create directories
- Config → Copy wrapper-scripts (ensure same versions on caprica as on localhost)
- Config → Create symlinks from /usr/bin to wrapper-scripts
- SSH config → Copy SSH keys (allow ssh connects without password and `vagrant ssh` with non-default user)
- SSH config → Set SSH key verification policy for the IP range used by Vagrant (= copy .ssh/config file)
- SSH config → Add to authorized_keys (allow `vagrant ssh` with non-default user)
- SSH config → Change ownership of authorized_keys
- Install required system packages
- Install tool packages
- Install Ansible
- Docker Setup → Add GPG apt key
- Docker Setup → Add apt repository
- Docker Setup → Install docker-ce and docker-compose
- Docker Setup → Install Docker Module for Python
- Docker Setup → Add default user to docker group
- Virtualbox Setup → Install Virtualbox and dependencies
- Virtualbox Setup → Set virtual machine folder
- Vagrant Setup → Install Vagrant
- Vagrant Setup → Install Vagrant plugins
## Usage
> **TODO:** …
### Run Services

Services are defined in docker-compose.yml files. The compose files are grouped into folders representing their respective sets of use cases.

Controlling these services is done from the host `kobol`. Deploying to the remote node `caprica` is done by using the `DOCKER_HOST` environment variable to select the target engine. Docker contexts would be another possibility, but they are not used because the test environment `caprica-test` uses DHCP instead of a static IP.

> **TODO:** How do I run the deployment? Which script? Something from src/main/workstations/caprica … How do I run the test script?
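Targeting caprica via `DOCKER_HOST` can be sketched as follows. The user and FQDN are taken from this page; the compose file path is an assumption, and the actual deploy command is left commented so the sketch has no side effects:

```shell
#!/usr/bin/env bash
# Point the local docker CLI at the remote engine on caprica via ssh.
# The compose file path below is an assumption, not from the repo.
set -o errexit

export DOCKER_HOST="ssh://seb@caprica.fritz.box"
echo "target engine: $DOCKER_HOST"

# With DOCKER_HOST set, a normal compose call now deploys remotely:
#   docker compose --file ops/docker-compose.yml up --detach
```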
## Architecture Decisions
> **TODO:** … (Ubuntu with Vagrant vs. Proxmox?) → Decision = Don't use Vagrant at all and run containers directly (because Jenkins should be able to start/stop Vagrant boxes)?!