Ansible Playbooks pt 1: Intro (90 Days of DevOps)
In this section of the 90 Days of DevOps series, we continue our coverage of configuration management and Ansible by working with Ansible Playbooks.
The playbook format: plays, tasks
A Playbook is a YAML document. It contains Plays, which themselves are broken down into individual Tasks. As mentioned at the end of the previous post, Playbooks are intended for configuration management tasks that we expect to perform regularly.
A simple (local) playbook
Below is an example playbook. For simplicity, all of its tasks will run locally against the control node itself, rather than communicating with any managed nodes.
- name: Simple Play
  hosts: localhost
  connection: local
  tasks:
    - name: Ping Me
      ping:
    - name: Print OS
      debug:
        msg: "{{ ansible_os_family }}"
We save this playbook on our control node as simple_play.yml and run it there using the ansible-playbook command:
$ ansible-playbook simple_play.yml
PLAY [Simple Play] *******************************************************************************
TASK [Gathering Facts] ***************************************************************************
ok: [localhost]
TASK [Ping Me] ***********************************************************************************
ok: [localhost]
TASK [Print OS] **********************************************************************************
ok: [localhost] => {
"msg": "Debian"
}
PLAY RECAP ***************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
As we see in the command output above, the playbook executes our PLAY called Simple Play, and runs three TASKs: Gathering Facts, Ping Me, and Print OS. Since only the last two tasks are part of the playbook we wrote, we can guess that Ansible automatically adds the Gathering Facts task.
What actually happens there is that Ansible automatically calls the ansible.builtin.setup module as its first task before processing any of the tasks written in the playbook.
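If facts aren't needed for a particular play, that implicit step can be skipped with the gather_facts play keyword. A minimal sketch (with facts disabled, a variable like ansible_os_family would of course no longer be available, so the Print OS task is omitted here):

- name: Simple Play (no facts)
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Ping Me
      ping: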
Notice the PLAY RECAP section at the end of the output, telling us that all three tasks ran successfully (ok=3) on localhost. If there were any issues, such as unreachable hosts or failed tasks, they’d be detailed there.
The above was just a minimal playbook to demonstrate:
- a playbook’s YAML format,
- the ansible-playbook command, and
- the type of output returned after running a playbook.
The rest of the playbooks we run will work with managed nodes rather than with localhost. Before that, though, some setup is required in our Ansible environment.
Expanding/reorganizing our demo environment
Assume for the rest of this post that we have added two more managed nodes to our Ansible environment, for a total of four managed nodes. As with our original managed nodes, these two new nodes can be either physical machines or virtual machines. They can also be either local or remote.
Because Ansible will rely on SSH to connect to our managed hosts, the important thing is that we are able to log into them from our control node using SSH.
As a reminder, in the previous post our Ansible inventory file looked like this:
[demo_servers]
172.16.0.11 ansible_user=vagrant ansible_password=vagrant
172.16.0.12 ansible_user=vagrant ansible_password=vagrant
For the purposes of this post, let’s say the new managed hosts we’re adding will follow the same IP addressing scheme as the old ones; the addresses of our managed hosts will now be 172.16.0.11–172.16.0.14.
Customizing /etc/hosts on the control node
To better simulate a production environment, we’ll start referencing our machines by hostname rather than IP address. We simulate DNS records for all of our demo nodes by adding some new entries to the /etc/hosts file on our control node:
172.16.0.11 web01
172.16.0.12 web02
172.16.0.13 loadbalancer
172.16.0.14 db01
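As a quick sanity check, we can confirm that the new names resolve on the control node before going any further, for example:

$ getent hosts web01

which should print the 172.16.0.11 address we just configured.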
Before adding these hosts to our Ansible inventory, we should first make sure we can SSH into them from our control node.
Setting up key-based SSH authentication to our managed hosts
Rather than using password-based authentication (as we did in the previous post), this time we’ll use SSH keys to log in to our Ansible managed hosts.
If key-based authentication isn’t already set up, we can log into our control node and run the ssh-keygen and ssh-copy-id commands as needed. The first command generates a key pair on the control node, and the second one is used to pre-authorize our control node’s public key as an acceptable authentication credential on each of the managed nodes.
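For example, with the vagrant account we’ll use below and the hostnames we just added to /etc/hosts, that boils down to something like:

$ ssh-keygen -t ed25519
$ for host in web01 web02 loadbalancer db01; do ssh-copy-id vagrant@$host; done

After this, ssh vagrant@web01 (and so on) should log us in without prompting for a password.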
See Ansible’s official documentation for more details on connection methods and SSH setup.
In our case, we will choose to connect to each of our managed nodes via a user account named vagrant, which happens to already be configured on all of these nodes.
Updating our host inventory on the control node
Our updated /etc/ansible/hosts file will now look like this:
[webservers]
web01
web02

[proxy]
loadbalancer

[database]
db01
The major changes from before:
- hosts are now identified by meaningful names rather than by IP addresses
- password information is no longer included in the inventory, since we’ve switched to key-based SSH authentication
Also, there are now four hosts instead of two, and these hosts are organized into three groups. At this point, all of them are running standard, minimal Debian 12 stable (Bookworm) images.
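Since the inventory no longer carries any connection variables, we’ll pass the remote user on the command line below. An alternative (not used in this post) would be to set it once for every host directly in the inventory, e.g.:

[all:vars]
ansible_user=vagrant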
Verifying Ansible communications
With our environment expanded and our inventory updated, we verify that Ansible can communicate properly with all of its managed nodes:
$ ansible all -m ansible.builtin.ping --user=vagrant
web02 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
db01 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
web01 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
loadbalancer | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
As we can see, all four of our managed nodes respond just fine. Our updated environment is ready to go.
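Note that the same ad-hoc ping can also be pointed at a single inventory group rather than at all hosts, for example:

$ ansible webservers -m ansible.builtin.ping --user=vagrant

which would contact only web01 and web02.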
In upcoming posts, we’ll try to create useful playbooks that run tasks on all four of the managed nodes we’ve prepared above. See you in the next post!