Declarative vs Imperative: DevOps done right

Deciding whether to automate workloads when designing your ICT infrastructure is easy: it’s 2019, and automation is everywhere. However, deciding which DevOps paradigm to follow and which tool to use may not be that obvious. To assist you with the ‘declarative vs imperative’ decision-making process, this blog briefly introduces the existing DevOps paradigms, presents the main differences between them and outlines the key benefits of using declarative DevOps with charms.

DevOps paradigms (Declarative vs Imperative)

An automation framework can be designed and implemented in two different ways: declarative or imperative. These are called DevOps paradigms. Under an imperative paradigm, the user is responsible for defining the exact steps necessary to achieve the end goal, such as instructions for software installation, configuration, database creation, etc. Those steps are later executed in a fully automated way. The ultimate state of the environment is the result of the particular operations defined by the user. While keeping full control over the automation framework, users have to carefully plan every step and the sequence in which the steps are executed. Although suitable for small deployments, imperative DevOps does not scale and tends to fail when deploying large software environments, such as OpenStack.
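To make the imperative approach concrete, here is a minimal sketch of the kind of step-by-step script a user would have to write and maintain themselves. The directory, file names and option names are illustrative assumptions for this sketch, not taken from any real deployment:

```shell
#!/bin/sh
# Imperative automation: every step is spelled out by the user.
set -e

# Illustrative paths and values (assumptions for this sketch).
CONF_DIR="./exampledb"               # hypothetical config directory
CONF_FILE="$CONF_DIR/exampledb.conf"
DB_PORT=5433

# Step 1: create the configuration directory.
mkdir -p "$CONF_DIR"

# Step 2: write an initial configuration file.
cat > "$CONF_FILE" <<EOF
port = 5432
max_connections = 100
EOF

# Step 3: patch the port afterwards, as imperative scripts often do.
sed -i "s/^port = .*/port = $DB_PORT/" "$CONF_FILE"

echo "configured exampledb on port $DB_PORT"
```

Every ordering decision here (create before write, write before patch) is the user’s responsibility; forgetting or reordering a step breaks the run, which is exactly what makes this style hard to scale.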

A declarative paradigm, in turn, takes a different approach. Instead of defining the exact steps to be executed, the ultimate state is defined. The user declares how many machines will be deployed, whether workloads will be virtualised or containerised, which applications will be deployed, how they will be configured, etc. However, the user does not define the steps needed to achieve that state. Instead, a ‘magic’ piece of code is executed which takes care of all the operations necessary to reach the desired end state. By choosing a declarative paradigm, users not only save the time usually spent on defining exact steps, but also benefit from the abstraction layer being introduced. Instead of focusing on the ‘how’, they can focus on the ‘what’.

Implementing declarative DevOps with charms

All right, all of that sounds great, but where does the ‘magic’ come from? Imagine pieces of code which contain all the instructions necessary to deploy and configure applications, including a collection of scripts and metadata, such as configuration file templates. Such pieces of software, called charms, provide the ‘magic’ described above. Users no longer have to think about low-level instructions; this logic is already implemented in the charms. Instead, they can focus on shaping the applications being deployed and modelling the entire deployment by relating applications to one another. For example, should the database being deployed listen on a different port than the default one? How many concurrent connections should it allow? All the user has to do is declare the ultimate state.
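As an illustration, declaring a non-default port and connection limit for a database could look like the fragment below. The charm name and option keys are hypothetical, chosen only to show the shape of such a declaration; real charms document their own supported options:

```yaml
services:
  exampledb:                  # hypothetical database charm
    charm: cs:exampledb-1
    num_units: 1
    options:
      port: 5433              # illustrative option names, not from a real charm
      max_connections: 200
```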

Let’s take a look at another example. Assume you are deploying Kubernetes for container orchestration and you want to create a simple deployment with one master node and one worker node. The desired end state, declared in a YAML file called a bundle, would look as follows:

machines:
  '0':
    constraints: cores=2 mem=4G root-disk=16G
    series: bionic
  '1':
    constraints: cores=4 mem=4G root-disk=16G
    series: bionic
services:
  containerd:
    charm: cs:~containers/containerd-2
  easyrsa:
    charm: cs:~containers/easyrsa-254
    num_units: 1
    to:
    - lxd:0
  etcd:
    charm: cs:~containers/etcd-434
    num_units: 1
    options:
      channel: 3.2/stable
    to:
    - '0'
  flannel:
    charm: cs:~containers/flannel-425
  kubernetes-master:
    charm: cs:~containers/kubernetes-master-700
    expose: true
    num_units: 1
    options:
      channel: 1.15/stable
    to:
    - '0'
  kubernetes-worker:
    charm: cs:~containers/kubernetes-worker-552
    expose: true
    num_units: 1
    options:
      channel: 1.15/stable
    to:
    - '1'
relations:
- - kubernetes-master:kube-api-endpoint
  - kubernetes-worker:kube-api-endpoint
- - kubernetes-master:kube-control
  - kubernetes-worker:kube-control
- - kubernetes-master:certificates
  - easyrsa:client
- - kubernetes-master:etcd
  - etcd:db
- - kubernetes-worker:certificates
  - easyrsa:client
- - etcd:certificates
  - easyrsa:client
- - flannel:etcd
  - etcd:db
- - flannel:cni
  - kubernetes-master:cni
- - flannel:cni
  - kubernetes-worker:cni
- - containerd:containerd
  - kubernetes-worker:container-runtime
- - containerd:containerd
  - kubernetes-master:container-runtime

It starts with a declaration of the two machines which will be used for the deployment. Minimum hardware constraints and the OS series are defined. Next, several applications are declared. Apart from kubernetes-master and kubernetes-worker, there are also some underpinning applications, such as easyrsa and etcd. Each has its charm version, number of units, configuration options and placement directives defined. Finally, a set of relations is declared which allows the applications to work together.

That’s it! The deployment has just been declared. All you have to do is deploy the bundle in your preferred cloud environment. During the deployment, machines are allocated, charms are placed on them and the charm code is executed. All of these operations are managed by a controller called Juju. Once the deployment is complete, your Kubernetes cluster is ready to be used, configured exactly as you defined it. This is how declarative DevOps works.
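Assuming the bundle above is saved as bundle.yaml (the file name is an assumption) and a Juju controller is already bootstrapped on your cloud, deploying it comes down to a couple of commands:

```shell
# Deploy the declared end state; Juju works out the necessary steps.
juju deploy ./bundle.yaml

# Inspect progress as machines come up, charms install and relations settle.
juju status
```

Note that there is no per-application installation logic anywhere in these commands; the bundle carries the ‘what’, and the charms and the Juju controller take care of the ‘how’.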

For more information on charms and Juju, please visit https://jaas.ai/.
