Red Hat Enterprise Linux

How to Use Ansible to Deploy an Application in K8s

Introduction

In this article we will take a simple Python API app and deploy it in K8s. We will use JFrog Artifactory to store our image, and Ansible to automate the entire environment installation and deployment.

Prerequisites

Note: everything in this article is suitable for Linux machines, especially Red Hat Enterprise Linux.

Python API App

For this demo, we will use a very simple Python API application.
An API (Application Programming Interface) is a mechanism that enables two software components to communicate with each other using a set of definitions and protocols. In our application, we return a JSON response that says "Hello, World!".

Setting up a virtual environment

Before diving into our source code, it is best practice to develop the app inside a virtual environment, so it will be easy for us to deal with our app's dependencies later.

# To create a new environment run the command:
$ python -m venv .venv

# To activate and enter the environment run:
$ source .venv/bin/activate
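If you want to confirm from inside Python that the environment is active, here is a small stdlib-only sketch: inside a venv, the interpreter gets its own prefix, while base_prefix still points at the system installation.

```python
import sys

def in_virtualenv() -> bool:
    """Return True when the interpreter runs inside a virtual environment.

    A venv gives the interpreter its own sys.prefix; outside one,
    prefix and base_prefix point at the same installation.
    """
    return sys.prefix != sys.base_prefix

print(in_virtualenv())
```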

Requirements

For this app we have only one requirement, the Flask package.
To install the Flask package run:
$ pip install flask

The Dockerfile below installs dependencies from a requirements.txt file, so record the dependency with:
$ pip freeze > requirements.txt

Source Code

In our source code we have two routes. One is /, which represents the home page and directs the user to the actual route, /api/message. When a user goes to that URL, they get back a JSON response with "Hello, World!".

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/')
def home():
    return "Navigate to /api/message to get a message"


@app.route('/api/message', methods=['GET'])
def get_message():
    return jsonify({"message": "Hello, World!"})


if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)
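Under the hood, jsonify serializes the dict into a JSON body (and sets the right Content-Type header). A stdlib-only sketch of the payload the /api/message route returns:

```python
import json

# The dict our route hands to jsonify
payload = {"message": "Hello, World!"}

# What the serialized response body looks like
body = json.dumps(payload)
print(body)  # {"message": "Hello, World!"}

# The client can parse it right back into the same structure
assert json.loads(body) == payload
```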

Docker

Docker Engine is an open source containerization technology for building and containerizing your applications. Docker has two main concepts, images and containers. We can imagine that an image is like a recipe and the container is like the cake: we create containers out of images. We will use JFrog Artifactory to store our image.

Dockerfile

FROM python:slim

WORKDIR /app

COPY requirements.txt .

RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 5000

CMD [ "python", "./app.py" ]

Ansible

Ansible is an open-source tool for resource provisioning automation that DevOps professionals commonly use for continuous delivery of software, taking advantage of an "infrastructure as code" approach. In our project, we will use Ansible roles and playbooks to download and create a k8s cluster using kind, install MetalLB and an NGINX ingress controller, and deploy our entire app and JFrog Artifactory.

Before we start with the roles, we need to define an inventory file that will hold the connection to our machine, and an ansible.cfg file to hold our configuration.

Inventory

[devops] # The name of the group
remote-machine # The domain for our remote machine IP, located in /etc/hosts

ansible.cfg

[defaults]
inventory = ./inventory
remote_user = ansible
ask_pass = false


[privilege_escalation]
become = true
become_method = sudo
become_user = root
become_ask_pass = false

setup-role:

In this role we install Docker and kind, and create a cluster with MetalLB and an NGINX ingress controller.
MetalLB assigns IP addresses to services (applications) within Kubernetes, providing load-balancing functionality similar to what cloud providers offer. An Ingress defines rules about which traffic may enter and where it should go, and the ingress controller is the component that enforces those rules and manages the traffic, ensuring it gets smoothly to the right destination.
To create the role run:
$ ansible-galaxy init setup-role

# Vars - used to store our variables
---
# vars file for setup-role
kind_node_image: kindest/node:v1.25.3@sha256:f52781bc0d7a19fb6c405c2af83abfeb311f130707a0e219175677e366cc45d1
kind_config_path: /home/ansible/kind-config.yaml
kind_cluster_name: devtest-cluster

# Handlers - tasks we can trigger from other tasks with notify
---
# handlers file for setup-role
# Creating task for restarting docker service
- name: Restarting Docker Service
  ansible.builtin.service:
    name: docker
    state: restarted

# Tasks - used to hold all our tasks
---
# tasks file for setup-role
# Creating task for installing docker-ce package
- name: Installing Docker
  ansible.builtin.dnf:
    name: docker-ce
    state: present

# Creating task for checking successful installation of docker-ce package
- name: Checking Docker Installation
  ansible.builtin.command: 
    cmd: docker --version

# Creating task for adding user to docker group
- name: Adding User to Docker Group
  ansible.builtin.user:
    name: ansible
    groups: docker
    append: yes

# Creating task for starting docker service
- name: Starting Docker Service
  ansible.builtin.service:
    name: docker
    state: started
    enabled: yes

# Creating task for installing kind 
- name: Installing Kind
  ansible.builtin.shell: 
    cmd: curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.22.0/kind-linux-amd64
      && chmod +x ./kind
      && mv ./kind /usr/local/bin/kind

# Creating task for creating a kind cluster file
- name: Creating Kind Cluster File
  ansible.builtin.copy:
    content: |
      kind: Cluster
      apiVersion: kind.x-k8s.io/v1alpha4
      name: "{{ kind_cluster_name }}"
      nodes:
      - role: control-plane
        image: "{{ kind_node_image }}"
      - role: worker
        image: "{{ kind_node_image }}"
    dest: "{{ kind_config_path }}"

# Checking if cluster already exists
- name: Checking Kind Cluster
  ansible.builtin.command:
    cmd: kind get clusters
  register: kind_cluster 
    
# Creating task for cluster creation
- name: Creating Kind Cluster
  ansible.builtin.command:
    cmd: kind create cluster --config /home/ansible/kind-config.yaml
  notify: Restarting Docker Service
  when: kind_cluster_name not in kind_cluster.stdout

# Downloading Helm
- name: Downloading Helm
  ansible.builtin.dnf:
    name: helm
    state: present

# Downloading nginx ingress controller
- name: Downloading Nginx Ingress Controller
  ansible.builtin.shell: |
    helm pull oci://ghcr.io/nginxinc/charts/nginx-ingress --untar --version 1.1.3
    cd nginx-ingress
    kubectl apply -f crds/
    helm install nginx-ingress oci://ghcr.io/nginxinc/charts/nginx-ingress --version 1.1.3
    
# Downloading MetalLB
- name: Downloading MetalLB
  ansible.builtin.shell: |
    kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
    sleep 90


# Get Docker network information
- name: Get Docker network information
  ansible.builtin.shell: docker network inspect kind
  register: docker_network_info


# Extract IP range
- name: Extract IP range
  set_fact:
    ip_range: "{{ (docker_network_info.stdout | from_json)[0].IPAM.Config[0].Subnet | regex_search('(\\d+\\.\\d+)') }}.255.200 - {{ (docker_network_info.stdout | from_json)[0].IPAM.Config[0].Subnet | regex_search('(\\d+\\.\\d+)') }}.255.255"


# Create MetalLB configuration file
- name: Create MetalLB configuration file
  ansible.builtin.copy:
    dest: /home/ansible/metallb-config.yaml
    content: |
      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        name: example
        namespace: metallb-system
      spec:
        addresses:
        - {{ ip_range }}
      ---
      apiVersion: metallb.io/v1beta1
      kind: L2Advertisement
      metadata:
        name: empty
        namespace: metallb-system


# Apply MetalLB configuration
- name: Apply MetalLB configuration
  ansible.builtin.shell: kubectl apply -f /home/ansible/metallb-config.yaml
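The "Extract IP range" step above packs the derivation into a Jinja regex; the same logic is easier to read in plain Python. A stdlib-only sketch using the ipaddress module (the 172.18.0.0/16 subnet is an assumption, it is simply what kind's Docker network commonly uses):

```python
import ipaddress

def metallb_range(subnet: str) -> str:
    """Keep the first two octets of the subnet and pin the MetalLB
    pool to x.y.255.200 - x.y.255.255, as the Ansible task does."""
    net = ipaddress.ip_network(subnet)
    first_two = ".".join(str(net.network_address).split(".")[:2])
    return f"{first_two}.255.200 - {first_two}.255.255"

print(metallb_range("172.18.0.0/16"))  # 172.18.255.200 - 172.18.255.255
```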

Jfrog Artifactory

JFrog Artifactory is a solution for housing and managing all the artifacts, binaries, packages, files, containers, and components for use throughout your software supply chain.

Jfrog-role

In this role, we will copy the JFrog k8s config files to the remote machine and apply them. In addition, we will enable access to the JFrog UI through a domain.
To create the role run:
$ ansible-galaxy init jfrog-role

# Vars
---
# vars file for jfrog-role
jfrog_k8s_files:
  - namespace.yaml
  - deployment.yaml
  - service.yaml
  - storage.yaml
  - ingress.yaml
# Tasks
---
# tasks file for jfrog-role
# Creating Directory
- name: Create Jfrog Directory
  ansible.builtin.file:
    path: /home/ansible/jfrog
    state: directory

# Copying Files
- name: Copy Jfrog K8S Files
  ansible.builtin.copy:
    src: "jfrog/{{ item }}"
    dest: /home/ansible/jfrog/
  loop: "{{ jfrog_k8s_files }}"


# Applying K8S Files
- name: Apply Jfrog K8S Files
  ansible.builtin.shell:
    cmd: kubectl apply -f /home/ansible/jfrog/{{ item }}
  loop: "{{ jfrog_k8s_files }}"

# Get the IP address of the ingress
- name: Get the IP address of the ingress
  ansible.builtin.shell: kubectl get ingress -n artifactory jfrog-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  register: ingress_ip

# Add domain to /etc/hosts
- name: Add domain to /etc/hosts
  ansible.builtin.lineinfile:
    path: /etc/hosts
    line: '{{ ingress_ip.stdout }} jfrog-ds.octopus.lab'
    state: present
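The lineinfile task above is idempotent: it appends the mapping only when it is not already present. A minimal Python sketch of that add-if-absent behavior (the IP shown is a hypothetical ingress address):

```python
def ensure_line(hosts_text: str, line: str) -> str:
    """Append line only if absent -- the behavior lineinfile gives us."""
    lines = hosts_text.splitlines()
    if line not in lines:
        lines.append(line)
    return "\n".join(lines) + "\n"

hosts = "127.0.0.1 localhost\n"
once = ensure_line(hosts, "172.18.255.200 jfrog-ds.octopus.lab")
twice = ensure_line(once, "172.18.255.200 jfrog-ds.octopus.lab")
print(once == twice)  # True: running the task again changes nothing
```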

The k8s config files for JFrog Artifactory are the following:

Namespace

apiVersion: v1
kind: Namespace
metadata:
  name: artifactory

Storage

# Creating a PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jfrog-pv # PersistentVolumes are cluster-scoped, so no namespace is needed
  labels:
    type: local
spec:
  capacity:
    storage: 5Gi # Storage capacity
  accessModes:
    - ReadWriteOnce # The volume can be mounted as read-write by a single node
  hostPath:
    path: "/data/db" # A directory on the host node's filesystem to use for the storage
  persistentVolumeReclaimPolicy: Retain # The volume should not be deleted when the claim is deleted
  storageClassName: standard
  volumeMode: Filesystem # The volume is mounted into the Pod as a filesystem
---
# Creating a PersistentVolumeClaim
# The spec should match the PersistentVolume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jfrog-pvc
  namespace: artifactory
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Service

apiVersion: v1
kind: Service
metadata:
  name: jfrog-service
  namespace: artifactory
spec:
  selector:
    app: artifactory
  ports:
    - name: http
      protocol: TCP
      port: 8082
      targetPort: 8082

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jfrog-deployment
  namespace: artifactory 
spec:
  replicas: 1
  selector:
    matchLabels:
      app: artifactory # has to match service
  template:
    metadata:
      labels:
        app: artifactory
    spec:
      containers:
      - name: artifactory
        image: docker.bintray.io/jfrog/artifactory-oss:latest
        ports:
          - containerPort: 8082 # must match the service's targetPort
        volumeMounts: # Mount the volume to the container
          - name: artifactory-data
            mountPath: /data/db
      volumes: # Define the volume
      - name: artifactory-data
        persistentVolumeClaim: # Use the persistent volume claim
          claimName: jfrog-pvc

Ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jfrog-ingress
  namespace: artifactory
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.org/client-max-body-size: "100m" # Set the maximum body size to 100m
    nginx.ingress.kubernetes.io/proxy-body-size: "100m" # Set the maximum body size to 100m
spec:
  rules:
    - host: jfrog-ds.octopus.lab
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jfrog-service
                port:
                  number: 8082 # Send requests to port 8082 on the pods

Ansible Vault

Ansible allows us to store sensative data in a form of vault. When we create a vault file we need to assign it a password so every time we want to view it we will need to supply that password. We can right it in a .txt file and do not share it with anyone so it is easier to decrypt every usage.
To create the file enter:
$ ansible-vault create vault.yaml


In there, enter your jfrog username and password.

Artifactory-role

In this role we build the Docker image and use a REST API to push it to Artifactory. REST API stands for Representational State Transfer Application Programming Interface.
When you interact with a REST API, you are essentially using HTTP requests (e.g., GET, POST, PUT, DELETE) to perform actions on resources.
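The upload this role performs is a single HTTP PUT with basic authentication. A stdlib-only Python sketch of how such a request is assembled (the credentials and payload here are placeholders):

```python
import base64
import urllib.request

def build_upload_request(url: str, data: bytes, user: str, password: str) -> urllib.request.Request:
    """Assemble an authenticated PUT, the same shape of request the role sends."""
    req = urllib.request.Request(url, data=data, method="PUT")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

req = build_upload_request(
    "http://jfrog-ds.octopus.lab/artifactory/generic-local/api-app-image.tar.gz",
    b"...image bytes...",  # placeholder payload
    "admin", "password")
print(req.get_method())  # PUT
```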
To create the role run:
$ ansible-galaxy init artifactory-role

# Vars
---
# vars file for artifactory-role
docker_image_name: api-app-image
docker_image_tag: latest
docker_build_path: /home/ansible/python
dockerfile: /home/ansible/python/Dockerfile
tar_image_src: /home/ansible/api-app-image.tar.gz
jfrog_url: http://jfrog-ds.octopus.lab/artifactory/generic-local/api-app-image.tar.gz

# Tasks
---
# tasks file for artifactory-role
# Build the Docker Image
- name: Build Docker image
  community.docker.docker_image: # docker_image lives in the community.docker collection
    name: "{{ docker_image_name }}"
    tag: "{{ docker_image_tag }}"
    source: build
    build:
      path: "{{ docker_build_path }}"
      dockerfile: "{{ dockerfile }}"

# Save Docker Image to Tar File
- name: Save Docker image to tar file
  ansible.builtin.shell:
    cmd: docker save {{ docker_image_name }}:{{ docker_image_tag }} | gzip > "{{ tar_image_src }}"

# Include vault variables
- name: Include vault variables
  ansible.builtin.include_vars:
    file: vault.yaml
    name: vault

# Upload Docker Image Tar to JFrog Artifactory
- name: Upload Docker image tar to JFrog Artifactory
  ansible.builtin.uri:
    url: "{{ jfrog_url }}"
    method: PUT
    user: "{{ vault.jfrog_username }}"
    password: "{{ vault.jfrog_password }}"
    body_format: raw
    src: "{{ tar_image_src }}"
    force_basic_auth: yes
    status_code: 201
    remote_src: yes

Application-role

In this role we pull the Docker image from Artifactory, copy the app's k8s config files to the remote machine, and deploy our app.
To create the role run:
$ ansible-galaxy init application-role

# Vars
---
# vars file for application-role
# vars/main.yml
app_directory: /home/ansible/app
jfrog_url: http://jfrog-ds.octopus.lab/artifactory/generic-local/api-app-image.tar.gz
tar_dest: /home/ansible/api-app-image.tar.gz
docker_image_name: api-app-image:latest
k8s_cluster_name: devtest-cluster
k8s_files:
  - namespace.yaml
  - deployment.yaml
  - service.yaml
  - ingress.yaml
domain: ds-api-app.octopus.lab

# Tasks
---
# tasks file for application-role
# Creating Directory
- name: Create app Directory
  ansible.builtin.file:
    path: "{{ app_directory }}"
    state: directory

# Include vault variables
- name: Include vault variables
  ansible.builtin.include_vars:
    file: vault.yaml
    name: vault

# Get the tar file from Jfrog
- name: Get the tar file from Jfrog
  ansible.builtin.get_url:
    url: "{{ jfrog_url }}"
    dest: /home/ansible/api-app-image.tar.gz
    username: "{{ vault.jfrog_username }}"
    password: "{{ vault.jfrog_password }}"
    force_basic_auth: yes

# Load the Docker image
- name: Load the Docker image
  ansible.builtin.command:
    cmd: docker load -i "{{ tar_dest }}" # -i is for input file
  delegate_to: remote-machine

# Load the Docker image into kind
- name: Load the Docker image into kind
  ansible.builtin.command:
    cmd: kind load docker-image "{{ docker_image_name }}" --name "{{ k8s_cluster_name }}"
  delegate_to: remote-machine

# Copying Files
- name: Copy App K8S Files
  ansible.builtin.copy:
    src: "app/{{ item }}"
    dest: /home/ansible/app/
  loop: "{{ k8s_files }}"

# Change the image name in deployment.yaml
- name: Change the image name in deployment.yaml
  ansible.builtin.replace:
    path: /home/ansible/app/deployment.yaml
    regexp: 'image: .+'
    replace: 'image: {{ docker_image_name }}'


# Ensure kubernetes Python library is installed
- name: Ensure kubernetes Python library is installed
  ansible.builtin.pip:
    name: kubernetes
    state: present

# Applying K8S Files
- name: Apply App K8S Files
  kubernetes.core.k8s:
    src: "/home/ansible/app/{{ item }}"
    state: present
  loop: "{{ k8s_files }}"

# Get the IP address of the ingress
- name: Get the IP address of the ingress
  ansible.builtin.shell: kubectl get ingress -n api-app api-app-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  register: ingress_ip

# Add domain to /etc/hosts
- name: Add domain to /etc/hosts
  ansible.builtin.lineinfile:
    path: /etc/hosts
    line: '{{ ingress_ip.stdout }} {{ domain }}'
    state: present
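The "Change the image name in deployment.yaml" task in this role is a plain regex substitution; a short Python sketch of the same edit on a hypothetical manifest fragment:

```python
import re

# A fragment of a deployment manifest before the edit (hypothetical)
manifest = """\
containers:
- name: api-app
  image: api-app:latest
"""

# Same pattern the ansible.builtin.replace task uses
updated = re.sub(r"image: .+", "image: api-app-image:latest", manifest)
print("image: api-app-image:latest" in updated)  # True
```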

The k8s config files for the app are the following:

Namespace

apiVersion: v1
kind: Namespace
metadata:
  name: api-app

Service

apiVersion: v1
kind: Service
metadata:
  name: api-app-service
  namespace: api-app
spec:
  selector:
    app: api-app
  ports:
  - port: 80 # Expose the service on internal port 80
    targetPort: 5000 # Send requests to port 5000 on the pods

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  namespace: api-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-app # has to match service
  template:
    metadata:
      labels:
        app: api-app
    spec:
      containers:
      - name: api-app
        image: api-app:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5000

Ingress

apiVersion: networking.k8s.io/v1 # Setting the API version
kind: Ingress # Setting the kind
metadata:
  namespace: api-app # Setting the namespace
  name: api-app-ingress # Setting the name of the ingress
  annotations:
    kubernetes.io/ingress.class: "nginx" # Setting the ingress class
spec:
  rules:
  - host: ds-api-app.octopus.lab # Setting the domain (taken from /etc/hosts)
    http:
      paths:
      - path: / # Setting the path(homepage)
        pathType: Prefix
        backend:
          service:
            name: api-app-service # Ensuring the service matches the service name
            port:
              number: 80 # Setting the port number traffic will be sent to

Playbooks

Playbooks in Ansible are a collection of plays, and each play is a collection of tasks. In our playbooks we will run the roles, one after another, and automate our whole deployment and installation.

Setup

In this playbook we activate the setup and jfrog roles.
To run the playbook enter:
$ ansible-playbook setup.yaml

---
# Creating playbook
- name: Installing entire application stack with kind, docker, k8s
  hosts: devops
  roles:
    - setup-role # This role will install docker, kind, kubectl, and other dependencies
    - jfrog-role # This role will install JFrog Artifactory

Accessing the Jfrog UI

After some time, the JFrog pod will be up, and we will be able to connect to the JFrog UI.
The screen below is the first window you will encounter. You can log in using the default username admin and the default password password.

Then click on the right side to create a new local repository.

Then, create a new generic repository and name it generic-local.
Note: you can name the repo whatever you want, just make sure to use that name consistently in your files later on.

Run-app

In this playbook we activate the artifactory and application roles.
To run the playbook enter:
$ ansible-playbook run-app.yaml

---
# Creating playbook
- name: Running entire application stack with kind, docker, k8s
  hosts: devops
  roles:
    - artifactory-role # This role will build the Docker image and upload it to JFrog Artifactory
    - application-role # This role will download the Docker image from JFrog Artifactory, load it into kind, and apply the K8S files

After some time, enter your domain and access the website.

Summary

In this article, we learned how to deploy a simple Python app, interact with JFrog Artifactory, and automate the whole environment setup and installation with Ansible. I hope you enjoyed it as much as I did, I wish you as few bugs as possible, and happy coding 🙂


Octopus is a senior Red Hat partner in Israel, offering licensing for all Red Hat products at competitive prices, along with expert consulting services backed by thousands of hours on the most complex projects in the industry.