Azure images #35

Closed. Wants to merge 57 commits.

Commits
d9ba340  devshell.nix: add jq (flokli, Nov 30, 2023)
b8c81da  tf-modules: init azurerm-nix-vm-image (flokli, Dec 5, 2023)
25eb9c5  tf-modules: init azurerm-linux-vm (flokli, Dec 5, 2023)
d0472a1  hosts/jenkins-controller: init (flokli, Nov 22, 2023)
2b3ac17  azurerm-linux-vm: add virtual_machine_custom_data (flokli, Dec 6, 2023)
ec28b0a  hosts/jenkins-controller: enable cloud-init (flokli, Dec 6, 2023)
05941a6  services/openssh: add kitty terminfo (flokli, Dec 6, 2023)
4a3f2ea  hosts/jenkins-controller: include service-openssh module (flokli, Dec 6, 2023)
ec5f594  services/openssh: set with priorities (flokli, Dec 6, 2023)
ca3c070  add ssh-keys.yaml (flokli, Dec 6, 2023)
0b242a8  terraform/jenkins: init (flokli, Dec 6, 2023)
8be794b  flake.nix: bump nixpkgs to 23.11 (flokli, Dec 6, 2023)
2c7ad73  hosts/jenkins-controller: use networkd (flokli, Dec 6, 2023)
f48db25  hosts/jenkins-controller: re-enable resolved (flokli, Dec 6, 2023)
214de32  hosts/azure-common-2: init (flokli, Dec 6, 2023)
9ec4e7e  tf-modules/azurerm-linux-vm: assign identity (flokli, Dec 6, 2023)
b879d35  hosts: add binary-cache config (flokli, Dec 6, 2023)
5a69ad6  hosts/binary-cache: apply caddy workaround (flokli, Dec 7, 2023)
63c5ae3  hosts/binary-cache: hardcode domain for now (flokli, Dec 7, 2023)
afc26be  tf-modules/azurerm-linux-vm: move out security group config (flokli, Dec 7, 2023)
e910d47  hosts/azure-common-2: add filesystem tools (flokli, Dec 7, 2023)
c5d55f0  azure-common: support timeout in disk_setup (flokli, Dec 7, 2023)
f048547  terraform/jenkins: add binary cache storage (flokli, Dec 7, 2023)
067b8bd  terraform/jenkins: deploy binary cache vm (flokli, Dec 7, 2023)
1ef98df  hosts/jenkins-controller: give jenkins state (flokli, Dec 7, 2023)
9399361  docs, hosts: drop more nix-serve-ng module usages (flokli, Dec 8, 2023)
58fe9d9  hosts: explicitly wait for cloud-init.service (flokli, Dec 9, 2023)
5e42f97  binary-cache: configure params with cloudinit (flokli, Dec 13, 2023)
02dd7d4  terraform/jenkins: don't listen on port 80 (flokli, Dec 13, 2023)
8e3724e  hosts: use x-systemd.device-timeout=5min option (flokli, Dec 13, 2023)
2a040c4  binary-cache: move to EnvironmentFile= (flokli, Dec 13, 2023)
63893c1  azurerm-linux-vm: use azurerm_virtual_machine (flokli, Dec 13, 2023)
313a469  flake: switch to nixpkgs master (flokli, Dec 14, 2023)
7a7a1e4  azure-scratch-store-common.nix: init (flokli, Dec 15, 2023)
f6e747b  hosts: enable scratch /nix/store (flokli, Dec 15, 2023)
88a99f1  terraform/jenkins: interpolate storageaccount name (flokli, Dec 18, 2023)
ab4546b  binary-cache: rclone env file: move to /var/lib (flokli, Dec 18, 2023)
db781d7  services: add remote-build module (flokli, Dec 18, 2023)
be68ca4  hosts: add builder node (flokli, Dec 18, 2023)
a3ad3e1  tf-modules: linux-vm: allow no data disks (flokli, Dec 18, 2023)
13cc6c4  tf-modules/azurerm-linux-vm allow non-public ips (flokli, Dec 18, 2023)
270c362  terraform: deploy builders (flokli, Dec 18, 2023)
50dd368  terraform/jenkins: create ed25519 key with terraform (flokli, Dec 18, 2023)
3b6a6c1  terraform/jenkins: put privkey in azure key vault (flokli, Dec 19, 2023)
d23fea1  terraform/jenkins: use TerraformAdminsGHAFInfra (flokli, Dec 19, 2023)
be594e4  hosts/jenkins-controller: fetch secret from vault (flokli, Dec 19, 2023)
c4b1b99  tf-modules/linux-vm: expose private ip (flokli, Dec 19, 2023)
0d8e9da  terraform/jenkins: render /etc/nix/machines (flokli, Dec 19, 2023)
01ca2a6  terraform: add terraform-provider-secret (flokli, Dec 19, 2023)
8a24c0b  terraform/jenkins: add post-build-hook and signing (flokli, Dec 19, 2023)
e953afb  terraform/jenkins: ensure nar/ exists (flokli, Dec 19, 2023)
30fbbe2  terraform/jenkins: drop user ssh on builders (flokli, Dec 19, 2023)
480511f  jenkins-controller: populate known_hosts (flokli, Dec 19, 2023)
7c9b05f  jenkins-controller: move jenkins itself to port 8081 (flokli, Dec 19, 2023)
cf71bab  hosts/jenkins-controller: document url params (flokli, Dec 20, 2023)
0871be7  hosts/jenkins-controller: inline get_secret.py (flokli, Dec 20, 2023)
35cf9fb  terraform/jenkins: add README (flokli, Dec 20, 2023)

Files changed

2 changes: 1 addition & 1 deletion .reuse/dep5
@@ -2,4 +2,4 @@ Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/

Copyright: 2023 Technology Innovation Institute (TII)
License: Apache-2.0
-Files: *.lock *.png *.svg *.csv *.yaml
+Files: *.lock *.png *.svg *.csv *.yaml *.pub
1 change: 0 additions & 1 deletion docs/adapting-to-new-environments.md
@@ -164,7 +164,6 @@ $ cat hosts/mytarget/configuration.nix
# Define the services you want to run on your target, as well as the users
# who can access the target with ssh:
imports = [
-inputs.nix-serve-ng.nixosModules.default
inputs.sops-nix.nixosModules.sops
inputs.disko.nixosModules.disko
../generic-disk-config.nix
64 changes: 39 additions & 25 deletions flake.lock

(Generated file; diff not rendered.)

5 changes: 3 additions & 2 deletions flake.nix
@@ -6,7 +6,7 @@

inputs = {
# Nixpkgs
-nixpkgs.url = "github:nixos/nixpkgs/nixos-23.05";
+nixpkgs.url = "github:nixos/nixpkgs/master";
# Allows us to structure the flake with the NixOS module system
flake-parts.url = "github:hercules-ci/flake-parts";
flake-root.url = "github:srid/flake-root";
@@ -19,7 +19,8 @@
# Binary cache with nix-serve-ng
nix-serve-ng = {
url = "github:aristanetworks/nix-serve-ng";
-inputs.nixpkgs.follows = "nixpkgs";
+# Broken with 23.11, base32 misses text >=2.0 && <2.1
+# inputs.nixpkgs.follows = "nixpkgs";
};
# Disko for disk partitioning
disko = {
24 changes: 24 additions & 0 deletions hosts/azure-common-2.nix
@@ -0,0 +1,24 @@
# SPDX-FileCopyrightText: 2023 Technology Innovation Institute (TII)
#
# SPDX-License-Identifier: Apache-2.0
#
# Profile to import for Azure VMs. Imports azure-common.nix from nixpkgs,
# and configures cloud-init.
{modulesPath, ...}: {
imports = [
"${modulesPath}/virtualisation/azure-config.nix"
];

# enable cloud-init, so instance metadata is set accordingly and we can use
# cloud-config for ssh key management.
services.cloud-init.enable = true;

# Use systemd-networkd for network configuration.
services.cloud-init.network.enable = true;
networking.useDHCP = false;
networking.useNetworkd = true;
# FUTUREWORK: Ideally, we'd keep systemd-resolved disabled too,
# but the way nixpkgs configures cloud-init prevents it from picking up DNS
# settings from elsewhere.
# services.resolved.enable = false;
}
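
As a usage sketch: hosts opt into this profile through their imports list, as the binary-cache and jenkins-controller configurations in this PR do. A minimal, hypothetical host module (the host itself is not part of this change set) would look roughly like this:

{ lib, ... }: {
  # Hypothetical minimal host reusing the shared Azure profile.
  imports = [ ../azure-common-2.nix ];
  nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
  system.stateVersion = "23.05";
}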
86 changes: 86 additions & 0 deletions hosts/azure-scratch-store-common.nix
@@ -0,0 +1,86 @@
# SPDX-FileCopyrightText: 2023 Technology Innovation Institute (TII)
#
# SPDX-License-Identifier: Apache-2.0
{
pkgs,
utils,
...
}: {
# Disable explicit resource disk handling in waagent.
# We want to take control over it in initrd already.
virtualisation.azure.agent.mountResourceDisk = false;

boot.initrd.systemd = {
# This requires systemd-in-initrd.
enable = true;

# We need the wipefs binary available in the initrd
extraBin = {
"wipefs" = "${pkgs.util-linux}/bin/wipefs";
};

# The resource disk comes pre-formatted with NTFS, not ext4.
# Wipe the superblock if it's NTFS (and only then, to not wipe on every reboot).
# Once we get `filesystems`-syntax to work again, we could delegate the mkfs
# part to systemd-makefs (and make this `wantedBy` and `before` that makefs
# unit).
services.wipe-resource-disk = {
description = "Wipe resource disk before makefs";
requires = ["${utils.escapeSystemdPath "dev/disk/azure/resource-part1"}.device"];
after = ["${utils.escapeSystemdPath "dev/disk/azure/resource-part1"}.device"];
wantedBy = ["${utils.escapeSystemdPath "sysroot/mnt/resource"}.mount"];
before = ["${utils.escapeSystemdPath "sysroot/mnt/resource"}.mount"];

script = ''
if [[ $(wipefs --output=TYPE -p /dev/disk/azure/resource-part1) == "ntfs" ]]; then
echo "wiping resource disk (was ntfs)"
wipefs -a /dev/disk/azure/resource-part1
mkfs.ext4 /dev/disk/azure/resource-part1
else
echo "skip wiping resource disk (not ntfs)"
fi
'';
};

# Once /sysroot/mnt/resource is mounted, ensure the two .rw-store/
# {work,store} directories that overlayfs is using are present.
# The kernel doesn't create them on its own and fails the mount if they're
# not present, so we set `wantedBy` and `before` to the .mount unit.
services.setup-resource-disk = {
description = "Setup resource disk after it's mounted";
unitConfig.RequiresMountsFor = "/sysroot/mnt/resource";
wantedBy = ["${utils.escapeSystemdPath "sysroot/nix/store"}.mount"];
before = ["${utils.escapeSystemdPath "sysroot/nix/store"}.mount"];

script = ''
mkdir -p /sysroot/mnt/resource/.rw-store/{work,store}
'';
};

# These describe the mountpoints inside the initrd
# (/sysroot/mnt/resource, /sysroot/nix/store).
# In the future, this should be moved to `filesystems`-syntax, so we can
# make use of systemd-makefs and can write some things more concisely.
mounts = [
{
where = "/sysroot/mnt/resource";
what = "/dev/disk/azure/resource-part1";
type = "ext4";
}
# describe the overlay mount
{
where = "/sysroot/nix/store";
what = "overlay";
type = "overlay";
options = "lowerdir=/sysroot/nix/store,upperdir=/sysroot/mnt/resource/.rw-store/store,workdir=/sysroot/mnt/resource/.rw-store/work";
wantedBy = ["initrd-fs.target"];
before = ["initrd-fs.target"];
requires = ["setup-resource-disk.service"];
after = ["setup-resource-disk.service"];
unitConfig.RequiresMountsFor = ["/sysroot" "/sysroot/mnt/resource"];
}
];
};
# load the overlay kernel module
boot.initrd.kernelModules = ["overlay"];
}
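
The comments above note that this should eventually move to the declarative `filesystems`-syntax so systemd-makefs can take over the mkfs step. As a rough sketch of that future direction (assuming the same device and overlay directories; the NTFS wiping would still need separate handling, and this is not what the PR does today), the mounts could be declared roughly as:

{ ... }: {
  # Sketch only: declarative variant of the initrd mounts defined above.
  fileSystems."/mnt/resource" = {
    device = "/dev/disk/azure/resource-part1";
    fsType = "ext4";
    neededForBoot = true;
  };
  fileSystems."/nix/store" = {
    device = "overlay";
    fsType = "overlay";
    options = [
      "lowerdir=/nix/store"
      "upperdir=/mnt/resource/.rw-store/store"
      "workdir=/mnt/resource/.rw-store/work"
    ];
    depends = [ "/mnt/resource" ];
    neededForBoot = true;
  };
}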
103 changes: 103 additions & 0 deletions hosts/binary-cache/configuration.nix
@@ -0,0 +1,103 @@
# SPDX-FileCopyrightText: 2023 Technology Innovation Institute (TII)
#
# SPDX-License-Identifier: Apache-2.0
{
self,
config,
pkgs,
lib,
...
}: {
imports = [
../azure-common-2.nix
../azure-scratch-store-common.nix
self.nixosModules.service-openssh
];

nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";

# Configure /var/lib/caddy in /etc/fstab.
# Due to an implicit RequiresMountsFor=$state-dir, systemd
# will block starting the service until this is mounted.
fileSystems."/var/lib/caddy" = {
device = "/dev/disk/by-lun/10";
fsType = "ext4";
options = [
"x-systemd.makefs"
"x-systemd.growfs"
];
};

# Run a read-only HTTP webserver proxying to the "binary-cache-v1" storage
# container at a unix socket.
# This relies on IAM to grant access to the storage container.
systemd.services.rclone-http = {
after = ["network.target"];
requires = ["network.target"];
wantedBy = ["multi-user.target"];
serviceConfig = {
Type = "notify";
Restart = "always";
RestartSec = 2;
DynamicUser = true;
RuntimeDirectory = "rclone-http";
ExecStart =
"${pkgs.rclone}/bin/rclone "
+ "serve http "
+ "--azureblob-env-auth "
+ "--read-only "
+ "--addr unix://%t/rclone-http/socket "
+ ":azureblob:binary-cache-v1";
# On successful startup, grant caddy write permissions to the socket.
ExecStartPost = "${pkgs.acl.bin}/bin/setfacl -m u:caddy:rw %t/rclone-http/socket";
EnvironmentFile = "/var/lib/rclone-http/env";
};
};

# Expose the rclone-http unix socket over HTTPS, limiting access to certain
# keys only and disallowing listing.
# TODO: use https://caddyserver.com/docs/caddyfile-tutorial#environment-variables for domain
services.caddy = {
enable = true;
configFile = pkgs.writeTextDir "Caddyfile" ''
# Disable the admin API, we don't want to reconfigure Caddy at runtime.
{
admin off
}

# Proxy a subset of requests to rclone.
https://{$SITE_ADDRESS} {
handle /nix-cache-info {
reverse_proxy unix///run/rclone-http/socket
}
handle /*.narinfo {
reverse_proxy unix///run/rclone-http/socket
}
handle /nar/*.nar {
reverse_proxy unix///run/rclone-http/socket
}
handle /nar/*.nar.* {
reverse_proxy unix///run/rclone-http/socket
}
}
'';
};

# workaround for https://github.com/NixOS/nixpkgs/issues/272532
# FUTUREWORK: rebase once https://github.com/NixOS/nixpkgs/pull/272617 landed
services.caddy.enableReload = false;
systemd.services.caddy.serviceConfig.ExecStart = lib.mkForce [
""
"${pkgs.caddy}/bin/caddy run --environ --config ${config.services.caddy.configFile}/Caddyfile"
];
systemd.services.caddy.serviceConfig.EnvironmentFile = "/run/caddy.env";

# Wait for cloud-init mounting before we start caddy.
systemd.services.caddy.after = ["cloud-init.service"];
systemd.services.caddy.requires = ["cloud-init.service"];

# Expose the HTTPS port. No need for HTTP, as caddy can use TLS-ALPN-01.
networking.firewall.allowedTCPPorts = [443];

system.stateVersion = "23.05";
}
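
For completeness, a consumer would point Nix at this cache with the usual substituter settings. A hypothetical client-side sketch follows; the domain and public key are placeholders, the real values come from the terraform/jenkins configuration and its signing key:

{ ... }: {
  # Hypothetical client settings; replace the placeholders with real values.
  nix.settings = {
    substituters = [ "https://binary-cache.example.org" ];
    trusted-public-keys = [ "binary-cache.example.org:<public-signing-key>" ];
  };
}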
1 change: 0 additions & 1 deletion hosts/binarycache/configuration.nix
@@ -13,7 +13,6 @@

imports = lib.flatten [
(with inputs; [
-nix-serve-ng.nixosModules.default
sops-nix.nixosModules.sops
disko.nixosModules.disko
])