This blog post explains some of the design considerations behind the playbooks used to automatically deploy a LAMP stack (Linux, Apache, MySQL, PHP) on cloud infrastructure via Azure DevOps Continuous Integration (CI) triggers and Ansible.
First, I will outline the general steps of the process. Then we will examine how SSH key authentication is used throughout the pipeline, followed by a closer look at how the LAMP playbooks are executed from within the new Azure subnet. Lastly, we will look at how the pipeline checks that the LAMP deployment succeeded.
The playbooks for the DevOps Pipeline can be found in the public GitHub repository below:
Better-Computing-Consulting/azure-devops-ci-ansible-lamp-deployment: Automatic deployment of cloud infrastructure and LAMP application stack using Ansible and Azure DevOps CI (github.com)
This video shows the entire process execution:
The general steps for the deployment process are as follows:
Throughout the process, all authentication between the Agent and the temp Ansible server, and between the temp server and the Web and DB servers, is done via SSH keys. For this purpose, step 3 of the process runs the following command at line 33 of azure-pipelines.yml:
    cat /dev/zero | ssh-keygen -q -N ""
This creates the id_rsa and id_rsa.pub files under the ~/.ssh directory without requiring user interaction.
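For reference, a more explicit way to generate the same kind of key pair non-interactively (an alternative shown here only for clarity, not what the pipeline uses) would be:

    # Alternative: write an RSA key pair to the default path with an empty passphrase
    ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa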
Then, before creating each VM, the process reads the contents of the id_rsa.pub file and stores them in the sshkey variable. This is done in playbook azenv/mkvm.yml, lines 71-73:
    - name: "{{ item.name }} Get contents of ~/.ssh/id_rsa.pub into variable"
      ansible.builtin.shell: cat ~/.ssh/id_rsa.pub
      register: sshkey
When the playbook creates the VM in the next step, it does so with SSH password authentication disabled for the admin account and with the contents of ~/.ssh/authorized_keys set to the value of the sshkey variable. This way, only users who hold the matching SSH private key can access the computer.
    - name: "{{ item.name }} Create VM"
      azure_rm_virtualmachine:
        resource_group: "{{ rg }}"
        name: "{{ item.name }}"
        vm_size: Standard_DS1_v2
        admin_username: "{{ admin }}"
        ssh_password_enabled: false
        ssh_public_keys:
          - path: /home/{{ admin }}/.ssh/authorized_keys
            key_data: "{{ sshkey.stdout }}"
Thus, all three servers initially allow access with the same SSH key. However, on subsequent executions of the pipeline, when the Web and DB servers already exist in Azure, this playbook step will not update the existing authorized_keys files on those servers with the SSH key generated by the new Agent, so the process updates the key on the existing servers in the next step of the pipeline (step 5).
To do this, in the last step of mkvm.yml the process writes an "az vm user update" command into the bash script sshkeyupdate.sh. The Azure CLI command includes the contents of the new id_rsa.pub file. The playbook stores the result of the VM creation step in the vmkres variable and only adds the command when the result indicates the VM was not "changed" (vmkres.changed == 0), which means the server existed prior to this execution of the pipeline. The command is also never written for the temp Ansible VM, because that VM is always newly created.
    - name: "{{ item.name }} Create VM"
      ...
      register: vmkres

    - name: "{{ item.name }} Add ssh key update command to bash file to run on exiting servers"
      ansible.builtin.lineinfile:
        path: ./sshkeyupdate.sh
        create: yes
        line: 'az vm user update -u {{ admin }} --ssh-key-value "$(< ~/.ssh/id_rsa.pub)" -n {{ item.name }} -g {{ rg }}'
        mode: 0770
      when: "vmkres.changed == 0 and item.name != 'ans'"
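On a re-run of the pipeline, given the values in azenv/vars.yml (admin lampadmin, resource group bcclampRG) and assuming the database VM is named db1 (web1 is the name used elsewhere in the playbooks; the actual DB VM name may differ), the generated sshkeyupdate.sh would look roughly like this:

    # Sketch of the generated script on a re-run; db1 is a hypothetical VM name
    az vm user update -u lampadmin --ssh-key-value "$(< ~/.ssh/id_rsa.pub)" -n web1 -g bcclampRG
    az vm user update -u lampadmin --ssh-key-value "$(< ~/.ssh/id_rsa.pub)" -n db1 -g bcclampRG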
The next step of the pipeline, defined in azure-pipelines.yml, executes the sshkeyupdate.sh script:
    displayName: Update ssh key on web and db servers
    inputs:
      azureSubscription: 'AzureServiceConnection'
      scriptType: 'bash'
      scriptLocation: 'scriptPath'
      scriptPath: './azenv/sshkeyupdate.sh'
Thus, by step 5 of the process all three VMs, new or existing, accept connections with the SSH key created in step 3. However, for the temp Ansible VM to authenticate to the Web and DB servers and deploy the LAMP stack, it needs the private and public keys from the Agent Ubuntu server. Therefore, the last two steps of the playbook azenv/deploysw.yml, which installs Ansible on the temp server, copy the id_rsa and id_rsa.pub files and set their permissions. Lines 42-54 of deploysw.yml do this:
    - name: Copy id_rsa
      become: no
      ansible.builtin.copy:
        src: ~/.ssh/id_rsa
        dest: ~/.ssh/
        mode: 0600

    - name: Copy id_rsa.pub
      become: no
      ansible.builtin.copy:
        src: ~/.ssh/id_rsa.pub
        dest: ~/.ssh/
        mode: 0644
After this, the Agent can authenticate to the temp Ansible server, and this server can authenticate to both the Web and DB servers using the same SSH keys.
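As a quick sanity check (not part of the pipeline, and shown here with placeholder values), the chain can be exercised manually:

    # From the Agent: connect to the temp Ansible server (placeholder public IP)
    ssh -o StrictHostKeyChecking=accept-new lampadmin@<temp-server-public-ip> hostname
    # From the temp Ansible server: reach the web VM by hostname inside the subnet
    ssh -o StrictHostKeyChecking=accept-new lampadmin@web1 hostname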
The last step in the process dealing with SSH authentication comes at the end of the playbooks that set up the Web and DB servers (step 6). In this step, all existing entries in the authorized_keys files of those servers are removed, so at the end of the process the Web and DB servers allow access neither by password nor by SSH key.
The process clears the entries on the Web server in the last step of the lamp/roles/web/tasks/tasks.yml playbook:
    - name: Removes all entries from authorized_keys
      become: no
      ansible.builtin.shell: :> ~/.ssh/authorized_keys
The process does the same for the DB server in the last step of the lamp/roles/db/tasks/tasks.yml playbook:
    - name: Remove all entries from authorized_keys
      become: no
      ansible.builtin.shell: :> ~/.ssh/authorized_keys
After the pipeline ends, the only way to access the Web or DB servers is by triggering the pipeline again or by manually modifying the access on Azure.
There are a couple of options for running the LAMP playbooks on the Azure VMs. This pipeline uses a temp Ansible VM deployed in Azure because that avoids assigning public IPs to the Web and DB VMs and opening SSH access to them from the internet. As it stands, the DB server does not get a public IP, and neither the Web nor the DB server allows SSH access from the internet. The only VM in the system that both gets a public IP and allows SSH access is the temporary Ansible server, which is deleted at the end of the process.
Further, when the temp Ansible server runs the LAMP playbooks from within the same subnet, it takes advantage of the fact that, by default, all VMs in an Azure subnet can resolve each other by hostname alone (no domain suffix needed) and have SSH access to each other. Thus, the process does not have to capture the private IPs of the servers or open SSH access between them when the playbooks run from within the same subnet.
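The lamp/hosts inventory itself is not reproduced in this post, but because hostname-only resolution works inside the subnet, it can be as simple as the sketch below (the group names and the db1 hostname are assumptions; web1 is the name used elsewhere in the playbooks):

    # Hypothetical lamp/hosts inventory; hostnames resolve within the Azure subnet
    [web]
    web1

    [db]
    db1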
To deploy the LAMP playbooks from the temp Ansible VM, the first step is to capture the VM's public IP in the vmpubip variable when the azenv/mkvm.yml playbook creates it:
    - name: "{{ item.name }} Create public IP address"
      azure_rm_publicipaddress:
        resource_group: "{{ rg }}"
        allocation_method: Static
        name: "{{ item.name }}PubIP"
      register: vmpubip
Then, when the temp Ansible VM ("ans") is created, the next step in the azenv/mkvm.yml playbook generates the ansiblehost inventory file from the ansiblehost.j2 template; this file is used later to deploy the software to the temp VM:
    - name: "{{ item.name }} Create ansiblehost file for software installation playbook"
      ansible.builtin.template:
        src: ansiblehost.j2
        dest: ansiblehost
      when: item.name == "ans"
The ansiblehost.j2 template uses the value of the vmpubip variable just captured and the admin variable from the azenv/vars.yml file. It also adds an option that allows establishing an SSH connection even when the host key of the temp Ansible server is not yet in the known_hosts file of the Agent server.
    [ansiblehost]
    {{ vmpubip['state']['ip_address'] }} ansible_user={{ admin }} ansible_ssh_common_args='-o StrictHostKeyChecking=no'
The next step after deploying the three VMs is to install Ansible on the temp server and copy the LAMP playbooks folder to it for later execution. The pipeline launches this process at line 36 of azure-pipelines.yml:
    ansible-playbook -i ansiblehost deploysw.yml
The deploysw.yml playbook copies the lamp folder in lines 36-40:
    - name: Copy lamp folder
      become: no
      ansible.builtin.copy:
        src: ../lamp
        dest: ~/
Finally, the pipeline executes a remote command on the temp Ansible server that runs the LAMP playbooks. To build the SSH command, line 54 of azure-pipelines.yml parses the ansiblehost file created in the previous step, extracting the public IP address of the temp server and the administrator username:
    ssh -oStrictHostKeyChecking=accept-new $(cat ansiblehost | grep "\." | cut -d' ' -f 2 | cut -d'=' -f 2)@$(cat ansiblehost | grep "\." | cut -d' ' -f 1) "cd lamp && ansible-playbook -i hosts site.yml"
The command has the form ssh username@ipaddress, so it first gets the username with:
    cat ansiblehost | grep "\." | cut -d' ' -f 2 | cut -d'=' -f 2
and then the IP with:
    cat ansiblehost | grep "\." | cut -d' ' -f 1
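Applied to the placeholder rendered line shown earlier, the two pipelines resolve as follows (a worked example, not actual pipeline output):

    # grep "\." keeps only the line containing the IP address
    # cut -d' ' -f 2  ->  ansible_user=lampadmin    then cut -d'=' -f 2  ->  lampadmin
    # cut -d' ' -f 1  ->  20.51.124.7
    # so the remote command effectively becomes:
    ssh -oStrictHostKeyChecking=accept-new lampadmin@20.51.124.7 "cd lamp && ansible-playbook -i hosts site.yml"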
Also, the azenv/main.yml playbook creates the site.yml file in step 4, based on the site.yml.j2 template. The file has to be generated at runtime because the value of remote_user may vary from site to site; this variable is set to the value of the admin variable from vars.yml:
    site: bcclamp
    rg: "{{ site }}RG"
    vnet: "{{ site }}VNET"
    subnet: "{{ site }}SUBNET"
    admin: lampadmin
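The site.yml.j2 template itself is not shown in this post. Based on the description, a minimal version could look like the sketch below; the web and db role names come from the lamp/roles paths above, while the host group names and the become setting are assumptions:

    # Sketch of a possible site.yml.j2; host groups 'web' and 'db' are assumed
    - hosts: web
      remote_user: {{ admin }}
      become: yes
      roles:
        - web

    - hosts: db
      remote_user: {{ admin }}
      become: yes
      roles:
        - db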
The last step of the pipeline, in azure-pipelines.yml, uses the curl command to access the website and verify that it is up and connected to the database:
    - script: |
        cat azenv/web1ip
        curl $(cat azenv/web1ip)
      displayName: Access lamp website
The web1ip file is written by the mkvm.yml playbook when it processes the web VM. Even when the public IP already exists and nothing is created, the azure_rm_publicipaddress task still returns its current state, including the IP address, so the file always contains the server's current public URL:
    - name: "{{ item.name }} Document web server public ip"
      ansible.builtin.lineinfile:
        path: "./{{ item.name }}ip"
        create: yes
        line: "http://{{ vmpubip['state']['ip_address'] }}"
      when: item.name == "web1"
Thank you for reading.
IT Consultant
Better Computing Consulting