When Ansible Can't See Your EC2 Instances: Resolving AWS Dynamic Inventory Issues
AWS Dynamic Inventory is the recommended approach when using Ansible to configure AWS infrastructure: because AWS assigns and reassigns IP addresses dynamically, maintaining a static inventory of hosts quickly becomes impractical, so Ansible discovers EC2 instances automatically at runtime instead. Ansible's own documentation recommends dynamic inventory when working with cloud providers.
The typical workflow is: first provision your infrastructure with an IaC tool (e.g., Terraform), then enable the aws_ec2 dynamic inventory plugin in ansible.cfg and configure it in an inventory file (e.g., aws_ec2.yml). I ran into problems when testing connectivity between my control node and my AWS infrastructure using Ansible. Here's how I worked through that problem.
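As a rough sketch, a minimal setup might look like the following. The region, tag values, and group prefix here are illustrative, not the exact values from my project:

```ini
# ansible.cfg
[defaults]
inventory = ./aws_ec2.yml

[inventory]
enable_plugins = amazon.aws.aws_ec2
```

```yaml
# aws_ec2.yml -- the file name must end in aws_ec2.yml or aws_ec2.yaml
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
filters:
  tag:Environment: Development
keyed_groups:
  # creates groups like env_Development from each instance's Environment tag
  - key: tags.Environment
    prefix: env
```

The filters section is the part that bit me later: only instances whose tags match it will appear in the inventory at all.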
# Listing all hosts Ansible can see
$ ansible-inventory --list
# Verifying I have connectivity
$ ansible all -m ping
When I ran the list command for my inventory, I got back an empty hostvars: {} dictionary, so right away I knew there was an issue. The ping command confirmed it: Ansible warned that the provided host list was empty and only localhost was available. Using the AWS CLI to investigate further, I confirmed that my EC2 instances did exist and inspected their tags.
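For reference, an empty dynamic inventory produces output along these lines (illustrative; the exact keys can vary slightly by Ansible version):

```json
{
    "_meta": {
        "hostvars": {}
    },
    "all": {
        "children": [
            "ungrouped"
        ]
    }
}
```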
aws ec2 describe-instances --filters "Name=tag:Environment,Values=development"
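When eyeballing raw describe-instances JSON gets tedious, a short script can flatten each instance's tags into a plain dictionary for comparison against your inventory filters. The response below is a hypothetical, heavily abridged sample (real output contains many more fields):

```python
import json

# Hypothetical, abridged describe-instances response for illustration
response = json.loads("""
{
  "Reservations": [
    {
      "Instances": [
        {
          "InstanceId": "i-0123456789abcdef0",
          "Tags": [
            {"Key": "Name", "Value": "DevOpsInstance"}
          ]
        }
      ]
    }
  ]
}
""")

def tags_by_instance(resp):
    """Map each instance ID to a {key: value} dict of its tags."""
    result = {}
    for reservation in resp.get("Reservations", []):
        for instance in reservation.get("Instances", []):
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            result[instance["InstanceId"]] = tags
    return result

print(tags_by_instance(response))
# {'i-0123456789abcdef0': {'Name': 'DevOpsInstance'}}
```

Piping real output through something like this makes a tag mismatch, like the one I was about to find, obvious at a glance.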
I then investigated my Terraform main.tf file and found the problem. Terraform was tagging the instances with Name = DevOpsInstance, while my Ansible aws_ec2.yml dynamic inventory file was filtering on Environment = Development. That was the whole issue: a single tag with a mismatched key and value, so the inventory plugin filtered out every instance.
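Sketched with illustrative resource names and AMI values, the fix is simply to make the tag Terraform writes identical to the tag the inventory plugin filters on:

```hcl
# main.tf (illustrative values)
resource "aws_instance" "devops" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"

  tags = {
    Name        = "DevOpsInstance"
    Environment = "Development" # must match the aws_ec2.yml filter exactly
  }
}
```

```yaml
# aws_ec2.yml
plugin: amazon.aws.aws_ec2
filters:
  tag:Environment: Development # same key and value as the Terraform tag
```

Note that EC2 tag filters are case-sensitive, so "Development" and "development" are different values as far as the filter is concerned.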
Once the two files referenced the same tag, I ran the Ansible CLI commands again and was able to list and reach my AWS infrastructure. Attention to detail is crucial when working across different tools and managing large code bases.