Homelab Reverse Proxy - Part 4 - Virtual Machine Setup
This is part of a series of articles about running a reverse proxy; subscribe via RSS to be notified when new parts are published.
It is finally time to set up the virtual machine that will run our gateway web server, acting as a reverse proxy.
Terraform
To make it easy to deploy and configure my reverse proxy virtual machine, I am using Terraform.
I have already gone ahead, acquired the necessary Microsoft Azure subscription, and set up a Service Principal account that Terraform will use
to perform the necessary automation tasks.
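For reference, the provider setup looks roughly like the following. This is only a sketch: the variable names are illustrative, the same credentials can instead be supplied through the ARM_* environment variables, and newer releases of the azurerm provider additionally require an empty features {} block.
provider.tf (sketch)
# Service Principal credentials for the azurerm provider.
# Variable names here are illustrative; the ARM_* environment variables work as well.
provider "azurerm" {
  subscription_id = var.subscription_id
  tenant_id       = var.tenant_id
  client_id       = var.client_id     # the Service Principal's application (client) ID
  client_secret   = var.client_secret # the Service Principal's secret
}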
The virtual machine lives in a resource group and has a public IP (v4) as well as a network interface. Notice that I am using variable templating in my resource names so that I can later modularise this and “stamp out” many virtual machines of the same configuration across regions or for different domains/purposes. For example, prefix = "rb", location = "eastus", and name = "gw" would yield a resource group named rbeastusgw.
virtual-machine.tf
resource "azurerm_resource_group" "main" {
name = "${var.prefix}${var.location}${var.name}"
location = "${var.location}"
}
resource "azurerm_public_ip" "main" {
name = "${var.prefix}${var.location}${var.name}pip"
resource_group_name = "${azurerm_resource_group.main.name}"
location = "${azurerm_resource_group.main.location}"
sku = "Basic"
allocation_method = "Static"
}
resource "azurerm_network_interface" "main" {
name = "${var.prefix}${var.location}${var.name}nic"
resource_group_name = "${azurerm_resource_group.main.name}"
location = "${azurerm_resource_group.main.location}"
ip_configuration {
name = "main"
subnet_id = "${azurerm_subnet.main.id}"
private_ip_address_allocation = "Dynamic"
public_ip_address_id = "${azurerm_public_ip.main.id}"
}
}
resource "azurerm_virtual_machine" "main" {
name = "${var.prefix}${var.location}${var.name}vmc"
resource_group_name = "${azurerm_resource_group.main.name}"
location = "${azurerm_resource_group.main.location}"
network_interface_ids = ["${azurerm_network_interface.main.id}"]
vm_size = "${var.vm_size}"
delete_os_disk_on_termination = true
delete_data_disks_on_termination = true
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "18.04-LTS"
version = "latest"
}
storage_os_disk {
name = "${var.prefix}${var.location}${var.name}osd"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
admin_username = "${var.admin_username}"
computer_name = "${var.prefix}${var.location}${var.name}vmc"
}
os_profile_linux_config {
disable_password_authentication = true
ssh_keys {
key_data = "${var.ssh_public_key}"
path = "/home/${var.admin_username}/.ssh/authorized_keys"
}
}
}
For VM networking, I picked a simple 192.168.254.0/24 subnet (it does not need to be overly complicated for my purposes) as well as a fairly restrictive NSG rule-set. The management_public_ip variable optionally holds my public IP address at home (or wherever I happen to be) and is used only while the VM is being provisioned. Once the VM has been provisioned and an OpenVPN tunnel is established, further management occurs via that VPN tunnel, and the rule allowing port 22 (SSH) traffic is removed — it is only created when the variable is set.
network.tf
resource "azurerm_virtual_network" "main" {
name = "${var.prefix}${var.location}${var.name}vnt"
resource_group_name = "${azurerm_resource_group.main.name}"
location = "${azurerm_resource_group.main.location}"
address_space = ["192.168.254.0/24"]
}
resource "azurerm_subnet" "main" {
name = "vm"
resource_group_name = "${azurerm_resource_group.main.name}"
virtual_network_name = "${azurerm_virtual_network.main.name}"
address_prefix = "192.168.254.0/24"
}
resource "azurerm_network_security_group" "main" {
name = "${var.prefix}${var.location}${var.name}nsg"
resource_group_name = "${azurerm_resource_group.main.name}"
location = "${azurerm_resource_group.main.location}"
}
resource "azurerm_subnet_network_security_group_association" "main" {
subnet_id = "${azurerm_subnet.main.id}"
network_security_group_id = "${azurerm_network_security_group.main.id}"
}
resource "azurerm_network_security_rule" "ssh_management" {
count = "${length(var.management_public_ip) > 0 ? 1 : 0}"
resource_group_name = "${azurerm_resource_group.main.name}"
network_security_group_name = "${azurerm_network_security_group.main.name}"
name = "management"
priority = 150
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
source_address_prefix = "${var.management_public_ip}"
destination_port_range = 22
destination_address_prefix = "*"
}
resource "azurerm_network_security_rule" "http_traffic" {
resource_group_name = "${azurerm_resource_group.main.name}"
network_security_group_name = "${azurerm_network_security_group.main.name}"
name = "http"
priority = 200
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
source_address_prefix = "Internet"
destination_port_ranges = [80,443]
destination_address_prefix = "*"
}
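One thing not shown in these snippets is local.management_ip, which the provisioner connection blocks in the following sections rely on. A minimal sketch of how it could be derived, assuming management switches from the VM's public IP to management_private_ip once the VPN tunnel is up:
locals.tf (sketch)
locals {
  # Assumption: prefer the private (VPN-reachable) address once it is provided;
  # fall back to the VM's public IP during initial provisioning.
  management_ip = length(var.management_private_ip) > 0 ? var.management_private_ip : azurerm_public_ip.main.ip_address
}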
OpenVPN Client
Now that the VM basics are out of the way, it is time to provision the OpenVPN client and its configuration. Terraform's
null_resource, paired with a remote-exec provisioner, can be used to run scripts inside the VM.
The script below installs OpenVPN and also creates the directory where the OpenVPN configuration will be stored.
openvpn.tf (installation)
resource "null_resource" "openvpn-install" {
depends_on = [
azurerm_virtual_machine.main
]
connection {
type = "ssh"
host = local.management_ip
user = var.admin_username
private_key = var.ssh_private_key
}
provisioner "remote-exec" {
inline = [
"sudo apt-get update && sudo apt-get -y install openvpn",
"sudo mkdir -p /etc/openvpn"
]
}
}
Earlier in the same file I also set up the OpenVPN configuration (a plain text file) that will be uploaded to the virtual machine. The configuration matches
the parameters entered for the server created earlier in this series (such as cipher, authentication, CA, and TLS keys). Generating this block in its
entirety allows Terraform to keep track of changes to the configuration and makes future updates easier (not to mention that it can use variables).
When null_resource.openvpn-conf is triggered (on the first run and/or on a change in configuration) several events occur: the contents of the
heredoc are written to /tmp/openvpn-backend.conf, which is then moved into OpenVPN's configuration directory; permissions are reset to appropriate values and the service is restarted (albeit in a shotgun-style approach).
openvpn.tf (rest of the file)
locals {
  openvpn_conf = <<OPENVPNCONF
dev tun
persist-tun
persist-key
cipher AES-256-GCM
ncp-disable
auth SHA256
tls-client
client
resolv-retry infinite
remote ${var.openvpn_remote_host} ${var.openvpn_remote_port} ${var.openvpn_remote_protocol}
verify-x509-name "${var.openvpn_verify_x509_name}" name
remote-cert-tls server
comp-lzo adaptive
<ca>
${var.openvpn_certificate_ca_chain}
</ca>
<cert>
${var.openvpn_certificate_client}
</cert>
<key>
${var.openvpn_certificate_client_key}
</key>
<tls-crypt>
${var.openvpn_tls_key}
</tls-crypt>
OPENVPNCONF
}

resource "null_resource" "openvpn-conf" {
  depends_on = [
    azurerm_virtual_machine.main,
    null_resource.openvpn-install
  ]

  connection {
    type        = "ssh"
    host        = local.management_ip
    user        = var.admin_username
    private_key = var.ssh_private_key
  }

  triggers = {
    conf = sha1(local.openvpn_conf)
  }

  provisioner "file" {
    content     = local.openvpn_conf
    destination = "/tmp/openvpn-backend.conf"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mv /tmp/openvpn-backend.conf /etc/openvpn/backend.conf",
      "sudo chown root:root /etc/openvpn/backend.conf",
      "sudo chmod 750 /etc/openvpn/backend.conf",
      "sudo systemctl daemon-reload",
      "sudo systemctl enable openvpn@backend.service",
      "sudo systemctl restart openvpn@backend.service",
      "sudo systemctl status --no-pager openvpn@backend.service"
    ]
  }
}
Caddy
Last, but not least, the Caddy server is installed. The installation is pretty straightforward and follows the official documentation. In my case, I create a dedicated directory
which Caddy will use for its configuration, service run files, and things like certificates (both the client CA and the upstream internal CA).
The service is also given permission to bind to ports below 1024 (required on Linux) since it will not be run as the root user.
caddy.tf (installation)
resource "null_resource" "caddy-binary" {
depends_on = [
azurerm_virtual_machine.main
]
connection {
type = "ssh"
host = local.management_ip
user = var.admin_username
private_key = var.ssh_private_key
}
triggers = {
pluginlist = join(",",var.caddy_plugins)
}
provisioner "remote-exec" {
inline = [
"curl -fsSL https://getcaddy.com | sudo bash -s ${var.caddy_license} ${join(",",var.caddy_plugins)}",
"sudo mkdir -p /etc/ssl/caddy",
"sudo chown -R www-data:www-data /etc/ssl/caddy",
"sudo chmod -R 770 /etc/ssl/caddy",
"sudo setcap CAP_NET_BIND_SERVICE=+eip /usr/local/bin/caddy"
]
}
}
As with the OpenVPN example, any changes to the Caddyfile or certificates trigger a series of events: the Caddy configuration directory is created with
appropriate permissions, the newest Caddyfile is copied in, the newest trusted root CA (and client root CA) files are copied alongside it, and the service is restarted.
caddy.tf (configuration)
resource "null_resource" "caddy-reconfigure" {
depends_on = [
azurerm_virtual_machine.main,
null_resource.caddy-binary
]
connection {
type = "ssh"
host = local.management_ip
user = var.admin_username
private_key = var.ssh_private_key
}
triggers = {
caddyfile = sha1(var.caddy_caddyfile)
tls_web_client_cer = sha1(var.tls_web_client_certificate_ca)
ca = sha1(local.ca_bundle)
}
provisioner "remote-exec" {
inline = [
"sudo mkdir -p /etc/caddy"
]
}
# Caddyfile
provisioner "file" {
content = var.caddy_caddyfile
destination = "/tmp/Caddyfile"
}
# TLS Web Client CA
provisioner "file" {
content = var.tls_web_client_certificate_ca
destination = "/tmp/tlswebclient.cer"
}
# Trusted root CA (downstream)
provisioner "file" {
content = local.ca_bundle
destination = "/tmp/ca_bundle.crt"
}
provisioner "remote-exec" {
inline = [
"sudo systemctl stop caddy.service",
# Apply new Caddyfile
"sudo mv /tmp/Caddyfile /etc/caddy/Caddyfile",
# TLS Web Client CA
"sudo mv /tmp/tlswebclient.cer /etc/caddy/tlswebclient.cer",
# Root CA
"sudo mv /tmp/ca_bundle.crt /etc/caddy/ca_bundle.crt",
# Fix permissions
"sudo chown -R www-data:www-data /etc/caddy",
"sudo chmod 550 /etc/caddy/Caddyfile",
"sudo chmod 555 /etc/caddy/tlswebclient.cer",
"sudo chmod 555 /etc/caddy/ca_bundle.crt",
"sudo systemctl start caddy.service",
"sudo systemctl status --no-pager caddy.service"
]
}
}
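Note that local.ca_bundle is referenced above but is not defined in these snippets either. A minimal sketch, assuming the bundle is simply a concatenation of the internal CA certificate(s) passed in as a variable (the variable name below is hypothetical):
locals.tf (sketch, continued)
# Hypothetical input: PEM-encoded internal CA certificate(s) that Caddy should
# trust for connections to upstream (home lab) services.
variable "internal_ca_certificates" {
  type = list(string)
}

locals {
  # Join the certificates into a single PEM bundle, uploaded as ca_bundle.crt above.
  ca_bundle = join("\n", var.internal_ca_certificates)
}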
Putting it all together
Since I am using variables to configure this entire stack, below is an example of the variables and their defaults. Variables without defaults have to be given values
when the configuration is used (for example, when it is consumed as a module).
module.tf
variable "name" {
}
variable "prefix" {
}
variable "location" {
}
variable "management_public_ip" {
default = ""
}
variable "management_private_ip" {
default = ""
}
# https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-general#b-series
variable "vm_size" {
default = "Standard_B1ls"
}
variable "admin_username" {
default = "adminuser"
}
variable "ssh_public_key" {
}
variable "ssh_private_key" {
}
variable "caddy_license" {
default = "personal"
}
variable "caddy_le_email" {
default = "email@example.org"
}
variable "caddy_plugins"{
type = "list"
default = [
"http.cache",
"http.cors",
"http.expires",
"http.filter",
"http.forwardproxy",
"http.ipfilter",
"http.nobots",
"http.permission",
"http.ratelimit"
]
}
variable "caddy_caddyfile" {
}
variable "openvpn_remote_host" {
default = "openvpnserver.example.org"
}
variable "openvpn_remote_port" {
default = 1194
}
variable "openvpn_remote_protocol" {
default = "udp"
}
variable "openvpn_verify_x509_name" {
default = "openvpnserver.example.org"
}
variable "openvpn_certificate_ca_chain" {
}
variable "openvpn_certificate_client" {
}
variable "openvpn_certificate_client_key" {
}
variable "openvpn_tls_key" {
}
variable "tls_web_client_certificate_ca" {
}
output "public_ip_address" {
value = "${azurerm_public_ip.main.ip_address}"
}
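With those variables in place, "stamping out" a gateway is just a matter of instantiating this configuration as a module. A sketch of what that could look like (the module path, file names, and values below are made up for illustration):
main.tf (example usage)
module "gateway_eastus" {
  source   = "./modules/reverse-proxy" # hypothetical module path
  prefix   = "rb"
  location = "eastus"
  name     = "gw"

  ssh_public_key  = file("keys/gateway_rsa.pub")
  ssh_private_key = file("keys/gateway_rsa")

  # Home public IP used only during initial provisioning (example value)
  management_public_ip = "203.0.113.10"

  caddy_caddyfile = file("files/Caddyfile")

  openvpn_certificate_ca_chain   = file("secrets/ca-chain.pem")
  openvpn_certificate_client     = file("secrets/client.crt")
  openvpn_certificate_client_key = file("secrets/client.key")
  openvpn_tls_key                = file("secrets/tls-crypt.key")

  tls_web_client_certificate_ca = file("secrets/client-ca.pem")
}
A terraform init followed by plan and apply then brings the whole gateway up in one go; repeating the module block with a different location or name stamps out another one.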
Finally, a Caddyfile example is provided below which proxies requests for a website (such as this one) to a back-end server running in the home lab.
Caddyfile
romanbezlepkin.com {
  redir https://www.romanbezlepkin.com{uri}
}

www.romanbezlepkin.com {
  gzip

  header / {
    -Server
    -x-powered-by
  }

  proxy / backend-web.home.example.org:80 {
    transparent
  }
}
In the next part, we'll look back at the pfSense firewall rules to restrict traffic coming from the reverse proxies to just the services that are being exposed.