Ansible: Getting Started (90 Days of DevOps)
In this section of the 90 Days of DevOps series, we continue our coverage of configuration management by starting to work with Ansible, one of the most popular config management tools currently in use.
Installation
As mentioned in the previous post, one of Ansible’s distinguishing characteristics is its relative simplicity and ease of use. With that in mind, setup should be fairly straightforward.
First, we need to set up what is referred to as a control node, which is the computer that will be used to push out configuration commands to other computers. These other computers are called managed nodes.
For demonstration purposes, we will install Ansible on an Ubuntu Linux machine.
Since Ubuntu is among the many Linux distributions that provide a ready-made Ansible package, the process is as simple as using our package manager to install the package named ansible:
$ sudo apt update
$ sudo apt install ansible
Running the above installs Ansible along with the other packages needed to satisfy its dependencies. We can confirm the installation succeeded by running:
$ ansible --version
ansible [core 2.16.3]
...
The ansible command returns its version info, as shown above.
The official install documentation for Ansible is available here.
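If a distribution doesn’t ship a ready-made package (or ships an outdated one), the official documentation also covers installing Ansible with pip. A minimal sketch, assuming Python 3 and pip are already present:

$ python3 -m pip install --user ansible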
Ansible Modules
While the ansible --version command above was enough to demonstrate that Ansible is installed and working, we’ll also take this opportunity to learn about Ansible modules.
Modules in Ansible are pre-built “commands” that help us perform common tasks on one or more managed nodes. As described on the linked reference page, some examples of the tasks we can use Ansible modules for include:
- setting the state of a particular service, e.g. httpd, on managed nodes
- “pinging” managed nodes
- defining custom OS/shell commands to be run on managed nodes
The full collection of modules available to Ansible users is extensive, so it’s worth consulting the module index whenever a common task needs to be performed on many nodes at once.
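One convenient way to explore that collection without leaving the terminal is the ansible-doc command, which ships with Ansible:

$ ansible-doc -l                      # list every module available locally
$ ansible-doc ansible.builtin.ping    # show the docs for a specific module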
Running a module (locally)
Now that we’ve discussed what Ansible modules are, we’ll actually run one: the ansible.builtin.ping module.
Normally, we’d use this module to test Ansible communications between our control node and one or more managed nodes. However, because we don’t yet have any managed nodes, we’ll just test that our control node can communicate with itself.
$ ansible localhost -m ansible.builtin.ping
localhost | SUCCESS => {
"changed": false,
"ping": "pong"
As we can see from the module’s output, local communications were successful.
Of course, this type of test is not really useful in practice, since the point of an Ansible control node is to enable us to communicate with and manage other nodes.
Some realistic examples
What if we had hundreds of web servers running and we needed to verify that Ansible can communicate with all of them?
In that case, we could test them all at once by running something like:
$ ansible webservers -m ansible.builtin.ping
where webservers would be the name of a group containing all of those hundreds of nodes that we’d have previously configured in Ansible.
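The same group-based targeting also lets us narrow a run to a subset of hosts via the --limit flag. A quick sketch, where web01 is a hypothetical host name belonging to the webservers group:

$ ansible webservers --limit web01 -m ansible.builtin.ping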
If we wanted to ensure that a specific service is running on all of these servers, we could use the ansible.builtin.service module:
$ ansible webservers -m ansible.builtin.service -a "name=httpd state=started"
The above would check all of those web servers with one command and start the httpd service on any host where it isn’t already running.
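If we only wanted to see what would change without actually touching anything, ad hoc commands also accept the --check flag for a dry run (assuming the module supports check mode, which ansible.builtin.service does):

$ ansible webservers -m ansible.builtin.service -a "name=httpd state=started" --check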
Adding managed nodes
So how does Ansible know what we mean when we specify a group named webservers?
We can define an inventory of managed nodes in a file named /etc/ansible/hosts.
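That path is just the default; the -i flag lets us point ansible at any inventory file instead. A sketch, assuming a hypothetical inventory file at ./hosts.ini:

$ ansible all -i ./hosts.ini -m ansible.builtin.ping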
For the rest of this blog post, assume that we have added two Linux-based machines to our local network. These could be two physical computers, or they could be a couple of local virtual machines, maybe even spun up using vagrant with SSH and bridging enabled.
Additionally, we’ve already verified that these two new machines are pingable from our control node (via the standard Linux ping command, not Ansible):
$ ping -c 1 172.16.0.11
PING 172.16.0.11 (172.16.0.11) 56(84) bytes of data.
64 bytes from 172.16.0.11: icmp_seq=1 ttl=64 time=0.189 ms
--- 172.16.0.11 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms
$ ping -c 1 172.16.0.12
PING 172.16.0.12 (172.16.0.12) 56(84) bytes of data.
64 bytes from 172.16.0.12: icmp_seq=1 ttl=64 time=0.224 ms
--- 172.16.0.12 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms
Defining our host inventory
We’re ready to add these two servers to our Ansible inventory so they can become managed nodes. Following the inventory format described in the Ansible documentation, we define a new host group named demo_servers by writing the following to a file named /etc/ansible/hosts on our control node:
[demo_servers]
172.16.0.11 ansible_user=vagrant ansible_password=vagrant
172.16.0.12 ansible_user=vagrant ansible_password=vagrant
Note a few things:
- Linux users who will run ansible on the control node will need at least read access to this file.
- The ansible_user and ansible_password values are called Ansible variables; in this case, they tell Ansible which username and password to use when connecting to our managed nodes via SSH. The values should match valid user credentials on each respective host.
- Cleartext passwords saved in a file are rarely best practice, but we use them in our hosts file to keep the demonstration simple (a safer, key-based alternative is sketched just below).
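For instance, here is a minimal key-based version of the same inventory, assuming an SSH key pair has already been generated on the control node and its public key copied to each managed node (e.g. with ssh-copy-id):

[demo_servers]
172.16.0.11 ansible_user=vagrant ansible_ssh_private_key_file=~/.ssh/id_ed25519
172.16.0.12 ansible_user=vagrant ansible_ssh_private_key_file=~/.ssh/id_ed25519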
With that said, we now have a full Ansible environment set up, including a control node, two managed nodes, and an inventory providing information about our managed nodes.
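Before moving on, we can sanity-check that Ansible parses the inventory the way we expect. The ansible-inventory command prints the resulting group structure; its output should look something like:

$ ansible-inventory --graph
@all:
  |--@ungrouped:
  |--@demo_servers:
  |  |--172.16.0.11
  |  |--172.16.0.12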
Running Ansible modules on live managed nodes
Let’s run some Ansible modules again, but this time targeted at our new managed nodes.
The commands below reference the new demo_servers host group we’ve just defined in our inventory:
ansible.builtin.ping
$ ansible demo_servers -m ansible.builtin.ping
172.16.0.12 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
172.16.0.11 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
ansible.builtin.service
$ ansible demo_servers -m ansible.builtin.service -a "name=sshd state=started"
172.16.0.11 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"name": "sshd",
"state": "started",
"status": {
...
}
}
172.16.0.12 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"name": "sshd",
"state": "started",
"status": {
...
}
}
So far, so good. The modules work as planned.
Using Ansible to run shell commands
Along with Ansible modules, we can use the ansible command to run specified Linux commands on our managed nodes.
Example: Read the motd on all of our selected nodes
$ ansible demo_servers -a "cat /etc/motd"
172.16.0.12 | CHANGED | rc=0 >>
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
172.16.0.11 | CHANGED | rc=0 >>
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
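Note that when no -m flag is given, as in the command above, Ansible falls back to the ansible.builtin.command module. That module runs the command directly rather than through a shell, so anything needing shell features such as pipes or redirection should use ansible.builtin.shell instead, for example:

$ ansible demo_servers -m ansible.builtin.shell -a "cat /etc/motd | wc -l"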
Example: Reboot all of our selected nodes
If Ansible is connecting to each of our managed nodes using a privileged enough user account, we can easily reboot all of our selected nodes:
$ ansible demo_servers -a "/sbin/reboot"
Ansible also supports privilege escalation for these types of commands:
$ ansible demo_servers -a "/sbin/reboot" -u username --become [--ask-become-pass]
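As a side note, Ansible also ships a dedicated ansible.builtin.reboot module, which has the advantage of waiting for each host to come back up before reporting success. A sketch of the equivalent ad hoc invocation:

$ ansible demo_servers -m ansible.builtin.reboot --become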
“Ad hoc” vs “repeatable” configuration management
The Ansible commands demonstrated throughout this post are called ad hoc commands. As described in Ansible’s documentation, ad hoc commands are for the types of tasks that we don’t plan on performing often. Examples include:
- rebooting servers
- managing files/packages
- managing services
For commonly repeated tasks, we instead turn to Ansible playbooks, which we will work with next.
See you in the next post!