Local development environments with Terraform + LXD

Faheem

As a Big Data Solutions Architect and InfraOps engineer, I need development environments to install and test software. They must be configurable, flexible, and performant. When working with distributed systems, the best-fitting setup for this use case is a local virtualized cluster of multiple Linux instances.

For a few years, I have been using HashiCorp's Vagrant to manage libvirt/KVM instances. This works well, but I recently tried another setup that works better for me: LXD to manage instances and Terraform (another HashiCorp tool) to operate LXD. In this article, I explain the advantages of the latter and how to set up such an environment.

Glossary

Vagrant and Terraform

Vagrant enables users to create and configure lightweight, reproducible, and portable development environments. It is mostly used to provision virtual machines locally.

Terraform is a widely used Infrastructure as Code tool that allows provisioning resources on almost any cloud. It supports many providers, from public clouds (AWS, Azure, GCP) to private self-hosted infrastructure (OpenStack, Kubernetes, and of course LXD). With Terraform, InfraOps teams apply GitOps best practices to manage their infrastructure.

Linux virtualization/containerization

Here is a quick review of the various tools (and acronyms) used in this article, which together make up the crowded Linux virtualization/containerization ecosystem:

Having Vagrant run KVM hosts is achieved with the vagrant-libvirt provider. See KVM machines for Vagrant on Archlinux for how to set up libvirt/KVM with Vagrant.

Why Terraform?

LXD is used from the CLI with the lxc command to manage its resources (containers and VMs, networks, storage pools, instance profiles). Being a command-based tool, it is by nature not Git-friendly.

Fortunately, there is a Terraform provider to manage LXD: terraform-provider-lxd. It enables versioning the LXD infrastructure configuration alongside the application code.

Note: Another tool to operate LXD could be Canonical's Juju, but it seems a bit more complex to learn.

Why Terraform + LXD? Advantages over Vagrant + libvirt/KVM

Live resizing of instances

Linux containers are more flexible than VMs, which allows resizing instances without a reboot. This is a very convenient feature.
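For example, with the LXD CLI you can raise the limits of a running container on the fly (the instance name below is a hypothetical example, not from this article's setup):

```shell
# Raise the CPU and memory limits of a running container;
# the change applies immediately, no reboot required
lxc config set xs-worker-01 limits.cpu 4
lxc config set xs-worker-01 limits.memory 4GiB
```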

Unified tooling from development to production

LXD can be installed on multiple hosts to form a cluster that can serve as the base layer of a self-hosted cloud. The Terraform + LXD pair can thus be used to manage local, integration, and production environments. This significantly eases testing and deploying infrastructure configurations.
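Because the provider can also target a remote LXD daemon, the same Terraform configuration can drive a remote cluster. A minimal sketch using the provider's remote block (the remote name and address here are assumptions for illustration):

```hcl
provider "lxd" {
  # Hypothetical remote LXD cluster member exposed over the network
  remote {
    name    = "production"
    address = "10.0.0.10"
    default = true
  }
}
```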

LXD support in Ansible

To install and configure software on the local instances, I usually use Ansible. Several connection plugins are available for Ansible to connect to the target hosts, the main one being ssh.

When provisioning LXC instances, we can use the standard ssh plugin but also a native LXC plugin: lxc (which uses the LXC Python library) or lxd (which uses the LXC CLI). This is useful for two reasons:

  • For security, as we don't need to start an OpenSSH server and open the SSH port on our instances
  • For simplicity, as we don't have to manage SSH keys for Ansible
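For instance, here is a minimal inventory sketch using the lxd connection plugin (the instance names follow this article's later examples; assumes recent Ansible with the community.general collection installed):

```ini
# inventory.ini — connect to LXD instances without SSH
[xs_cluster]
xs-master-01
xs-worker-01

[xs_cluster:vars]
ansible_connection=community.general.lxd
```

Running `ansible xs_cluster -i inventory.ini -m ping` then executes the module directly through the LXC CLI instead of SSH.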

Configuration changes preview

One of the main features of Terraform is the ability to preview the changes that a command would apply. This prevents unwanted configuration deployments and command errors.

Example with the resizing of an LXD instance profile:

$ terraform plan
...
Terraform will perform the following actions:

  ~ resource "lxd_profile" "tdp_profiles" {
      ~ config = {
          ~ "limits.cpu"    = "1" -> "2"
          ~ "limits.memory" = "1GiB" -> "2GiB"
        }
        id     = "tdp_edge"
        name   = "tdp_edge"
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Configuration readability and modularity

The Terraform language is declarative: it describes an intended goal rather than the steps to reach it. As such, it is more readable than the Ruby used in Vagrantfiles. Also, because Terraform parses all files in the current directory and allows defining modules with inputs and outputs, we can easily split the configuration to improve maintainability.


$ ls -1 | grep -P '\.tf(vars)?$'
local.auto.tfvars
main.tf
outputs.tf
provider.tf
terraform.tfvars
variables.tf

Performance gain

Using Terraform + LXD speeds up daily operations on local development environments, which is always enjoyable.

Here is a performance benchmark when running a local development cluster with the following specs:

  • Host OS: Ubuntu 20.04
  • Number of guest instances: 7
  • Resources allocated: 24 GiB of RAM and 24 vCPUs

  Metric                  Vagrant + libvirt/KVM   Terraform + LXD   Performance gain
  Cluster creation (sec)  56.5                    51                1.1x faster
  Cluster startup (sec)   36.5                    6                 6x faster
  Cluster shutdown (sec)  46                      13.5              3.4x faster
  Cluster destroy (sec)   9                       17                2x slower

Setup of a minimal Terraform + LXD environment

Now let's set up a minimal Terraform + LXD environment.

Prerequisites

Your computer needs:

  • LXD (see Installation)
  • Terraform >= 0.13 (see Install Terraform)
  • Linux cgroup v2 (to run recent Linux containers like Rocky 8)
  • 5 GB of available RAM

Also create a directory to work from:

mkdir terraform-lxd-xs
cd terraform-lxd-xs

Linux cgroup v2

To check whether your host uses cgroup v2, run:

stat -fc %T /sys/fs/cgroup

Recent distributions use cgroup v2 by default (check the list here), but the feature is available on any host running a Linux kernel >= 5.2 (e.g. Ubuntu 20.04). To enable it, see Enabling cgroup v2.
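On a systemd-based host still on cgroup v1, enabling v2 boils down to adding one kernel parameter. A sketch under the assumption of a GRUB-based Ubuntu host (the drop-in file name is an example; follow the linked documentation for your distribution):

```shell
# Set the systemd switch on the kernel command line via a GRUB drop-in,
# then regenerate the GRUB config and reboot
echo 'GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=1"' \
  | sudo tee /etc/default/grub.d/cgroup-v2.cfg
sudo update-grub
sudo reboot
```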

Terraform provider

We use the terraform-lxd/lxd Terraform provider to manage our LXD resources.

Create provider.tf:

terraform {
  required_providers {
    lxd = {
      source  = "terraform-lxd/lxd"
      version = "1.7.1"
    }
  }
}

provider "lxd" {
  generate_client_certificates = true
  accept_remote_certificate    = true
}

Variables definition

It is good practice to let users configure the Terraform environment via input variables. We enforce variable correctness by declaring their expected types.

Create variables.tf:

variable "xs_storage_pool" {
  kind = object({
    identify   = string
    supply = string
  })
}

variable "xs_network" {
  kind = object({
    ipv4 = object({
      deal with = string
    })
  })
}

variable "xs_profiles" {
  kind = checklist(object({
    identify = string
    limits = object({
      cpu    = quantity
      reminiscence = string
    })
  }))
}

variable "xs_image" {
  kind    = string
  default = "photographs:rocky/8"
}

variable "xs_containers" {
  kind = checklist(object({
    identify    = string
    profile = string
    ip      = string
  }))
}

The following variables are defined:

  • xs_storage_pool: the LXD storage pool storing the disks of our containers
  • xs_network: the LXD IPv4 network used by the containers to communicate within a shared network
  • xs_profiles: the LXD profiles created for our containers. Profiles allow defining a set of properties that can be applied to any container.
  • xs_image: the LXD image. This mainly specifies which OS the containers run.
  • xs_containers: the LXD instances to create.

Main file

The main Terraform file defines all the resources configured via the variables. Developers rarely modify this file after its first implementation for the project.

Create main.tf:


useful resource "lxd_storage_pool" "xs_storage_pool" {
  identify = var.xs_storage_pool.identify
  driver = "dir"
  config = {
    supply = "${path.cwd}/${path.module}/${var.xs_storage_pool.supply}"
  }
}


useful resource "lxd_network" "xs_network" {
  identify = "xsbr0"

  config = {
    "ipv4.deal with" = var.xs_network.ipv4.deal with
    "ipv4.nat"     = "true"
    "ipv6.deal with" = "none"
  }
}


useful resource "lxd_profile" "xs_profiles" {
  depends_on = [
    lxd_storage_pool.xs_storage_pool
  ]

  for_each = {
    for index, profile in var.xs_profiles :
    profile.identify => profile.limits
  }

  identify = every.key

  config = {
    "boot.autostart" = false
    "limits.cpu"    = every.worth.cpu
    "limits.reminiscence" = every.worth.reminiscence
  }

  machine {
    kind = "disk"
    identify = "root"

    properties = {
      pool = var.xs_storage_pool.identify
      path = "/"
    }
  }
}


useful resource "lxd_container" "xs_containers" {
  depends_on = [
    lxd_network.xs_network,
    lxd_profile.xs_profiles
  ]

  for_each = {
    for index, container in var.xs_containers :
    container.identify => container
  }

  identify  = every.key
  picture = var.xs_image
  profiles = [
    each.value.profile
  ]

  machine {
    identify = "eth0"
    kind = "nic"
    properties = {
      community        = lxd_network.xs_network.identify
      "ipv4.deal with" = "${every.worth.ip}"
    }
  }
}

The following resources are created by Terraform:

  • lxd_storage_pool.xs_storage_pool: the directory-backed storage pool holding the containers' disks
  • lxd_network.xs_network: the network shared by all our instances
  • lxd_profile.xs_profiles: multiple profiles that can be defined by the user
  • lxd_container.xs_containers: the instances' definitions (including the application of the profile and the attachment of the network device)
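As a sketch of the outputs mentioned earlier, a hypothetical outputs.tf could expose the instance-to-IP mapping declared in the variables (this only reads our own input variables, so it stays provider-agnostic):

```hcl
# outputs.tf — map of container names to their configured IPs,
# printed after terraform apply and queryable with terraform output
output "container_ips" {
  value = { for c in var.xs_containers : c.name => c.ip }
}
```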

Variables file

Finally, we provide Terraform with the variables specific to our environment. We use the auto.tfvars extension to automatically load the variables when terraform is run.

Create local.auto.tfvars:

xs_storage_pool = {
  name   = "xs_storage_pool"
  source = "lxd-xs-pool"
}

xs_network = {
  ipv4 = { address = "192.168.42.1/24" }
}

xs_profiles = [
  {
    name = "xs_master"
    limits = {
      cpu    = 1
      memory = "1GiB"
    }
  },
  {
    name = "xs_worker"
    limits = {
      cpu    = 2
      memory = "2GiB"
    }
  }
]

xs_image = "photographs:rockylinux/8"

xs_containers = [
  {
    name    = "xs-master-01"
    profile = "xs_master"
    ip      = "192.168.42.11"
  },
  {
    name    = "xs-master-02"
    profile = "xs_master"
    ip      = "192.168.42.12"
  },
  {
    name    = "xs-worker-01"
    profile = "xs_worker"
    ip      = "192.168.42.21"
  },
  {
    name    = "xs-worker-02"
    profile = "xs_worker"
    ip      = "192.168.42.22"
  },
  {
    name    = "xs-worker-03"
    profile = "xs_worker"
    ip      = "192.168.42.23"
  }
]

Environment provisioning

We now have all the files needed to provision the environment:


# Initialize the working directory and download the provider
terraform init

# Create the source directory for the "dir" storage pool
mkdir lxd-xs-pool

# Create the resources
terraform apply

Once the resources are created, we can check that everything is working fine:


# List the created resources
lxc network list
lxc profile list
lxc list

# Open a shell into a container
lxc shell xs-master-01

Et voilà!

Note: to destroy the environment, run terraform destroy.

More advanced example

You can check out tdp-lxd for a more advanced setup with:

  • More profiles
  • File templating (for an Ansible inventory)
  • Outputs definition

Conclusion

Combining Terraform and LXD brings a new way of managing local development environments, with multiple advantages over competitors (especially Vagrant). If you often bootstrap this kind of environment, I suggest you give it a try!
