Dynamic DNS in less than 25 lines of Ansible YAML and the Cloudflare CDN

Overview

The requirement for dynamic DNS has been around for decades, and companies like DynDNS have been enablers for just as long.  Script kiddies, IRC dudes, gamers and professionals often want to host services out of their homes for various reasons, but may not have a static IP address for their internet connection.  Dynamic DNS services allow the user to update a hostname with a provider to point back to the dynamic IP address allocated to the user's modem, so anyone resolving the domain name gets the modem's current IP address.

Note: I'm not talking about RFC 2136, which defines a dynamic update mechanism within the DNS protocol itself.

I host a few services at home which I like to reach remotely from time to time, and I'm too tight to pay for a static IP address.  A few years ago I forced myself to solve this with Python as a learning exercise.  Whilst ugly, it served its purpose for quite some time, until last night when I set myself the task of redoing it with Ansible in an evening.

The Players

Ansible is a simple automation tool with use cases such as provisioning, configuration management, application deployment, orchestration and others.  Ansible has plugins and modules which extend its functionality.  In this case we are using the ipinfoio_facts and cloudflare_dns modules to query ipinfo.io and to talk to the Cloudflare API.

Cloudflare I see as the Content Delivery Network (CDN) for the people: free basic plans, API interfaces, proxying and DNS management.

ipinfo.io, a neat little site/service to give you geolocation information about where you are browsing from.  This site also returns the data in JSON format if requested, which makes it nice and easy to query programmatically.
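
If you want to see what comes back without the module, you can hit the service's JSON endpoint (https://ipinfo.io/json) directly.  A minimal sketch using Ansible's uri module; the task and variable names here are my own:

    - name: Query ipinfo.io directly for our external IP
      uri:
        url: https://ipinfo.io/json
        return_content: yes
      register: ipinfo

    # With a JSON response, the uri module parses the body into ipinfo.json
    - debug:
        var: ipinfo.json.ip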

A Linux Ansible command host to run the Ansible playbooks from, plus a crontab entry to continually run them.
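
Ansible can even manage that crontab entry for us via the cron module.  A minimal sketch, assuming a hypothetical playbook path and log file:

    - name: Run the dynamic DNS playbook every 15 minutes
      cron:
        name: 'cloudflare dynamic dns'
        minute: '*/15'
        job: 'ansible-playbook /home/me/CloudFlare_DyDNS_Playbook/cf_dydns_update.yml >> /var/log/cf_dydns.log 2>&1'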

Some domain names that I have registered with various domain registrars.

The Process (TL;DR)

  1. Ensure you have a domain name to use.
  2. Ensure you have a Cloudflare account, with the domain name associated.
    1. Take note of your Cloudflare API token, which is found under My Profile > API Key.
  3. Ensure you have a Linux box with Ansible installed on it (tested with 2.3.x).
  4. Clone https://github.com/Im0/CloudFlare_DyDNS_Playbook.git
  5. Update the following fields in the cf_dydns_update.yml file:
    1. cf_api_token: 'YOUR API KEY'
    2. cf_email: 'YOUR CLOUDFLARE EMAIL'
    3. with_items: the domain names you want to update
  6. Run Ansible with:

    ansible-playbook cf_dydns_update.yml

Obviously, you'll probably want different DNS records updated.  Change 'record: mail' to an A record of your choice.

More detail

 1 ---
 2 - hosts: localhost
 3   gather_facts: no
 4   vars:
 5     cf_api_token: 'CF API token under My Profile.. API key'
 6     cf_email: 'Cloud Flare email address'
 7
 8   tasks:
 9   - name: get current IP geolocation data
10     ipinfoio_facts:
11       timeout: 5
12     register: ipdata
13
14 #  - debug:
15 #      var: ipdata.ansible_facts.ip
16
17   - name: Update mail A record
18     cloudflare_dns:
19       zone: '{{ item }}'
20       record: mail
21       type: A
22       value: '{{ ipdata.ansible_facts.ip }}'
23       account_email: '{{ cf_email }}'
24       account_api_token: '{{ cf_api_token }}'
25     register: record
26     with_items:
27       - domain1
28       - domain2
29
30 #  - debug:
31 #      var: record

Breaking down the YAML file…

  1. Required at the top of our YAML files.
  2. As we are not configuring any nodes, we set localhost as the only node we want to call against.
  3. As we aren’t using any facts, we don’t need to collect them.
  4. Variables we’re going to need to talk to cloudflare
  5. The API token found under our profile
  6. Our sign up email address for cloudflare
  7. .
  8. The tasks section for all tasks we are going to execute in this playbook
  9. .
  10. Using the ipinfoio_facts module we query ipinfo.io for our externally visible IP address.  Note: if we are behind a proxy of some sort, this will likely break what we are trying to achieve.
  11. .
  12. We are registering the output of the module to the ipdata variable.  This could probably be dropped, as the returned data also ends up in the gathered facts, which we could use directly (see the sketch after this list).
  13. .
  14. If we want to see what useful little nuggets of information have come back, dump the variable contents.
  15. .
  16. .
  17. .
  18. Use the cloudflare_dns module to start talking to Cloudflare.
  19. Which domain (zone) are we talking about?  In this case we iterate over the domains listed under with_items: starting at line 26.
  20. record: is the record we wish to update.
  21. type is the type of record we are working with.  A few other examples are on the cloudflare_dns module page.
  22. Use the data we received from ipinfo.io.  We've stashed this away in the data structure ipdata.ansible_facts.ip.
  23. Our cloudflare email
  24. Our cloudflare API key
  25. Capture the output from the cloudflare_dns queries, if we want to dump it in debug later.
  26. with_items is the list of items we iterate over… instead of hosts.
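
As flagged in point 12 above, the register on the ipinfoio_facts task could likely be dropped, since the module injects its results into the play's facts.  A hedged, untested sketch of the leaner task, assuming the ip fact is injected as the module documents:

    - name: Update mail A record using the injected fact
      cloudflare_dns:
        zone: '{{ item }}'
        record: mail
        type: A
        value: '{{ ip }}'    # injected by ipinfoio_facts, no register needed
        account_email: '{{ cf_email }}'
        account_api_token: '{{ cf_api_token }}'
      with_items:
        - domain1
        - domain2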

Automating As-Built Documentation… Documentation as Code? Documentation as a Service? Documenting Infrastructure.

Documentation can be repetitive, boring, costly, incomplete and error prone.  When the next engineer in your organisation comes along to generate more documentation on a similar subject, often they start again and waste all the prior investment of effort, possibly blissfully unaware it even existed.  Not a wise investment of time and money.

Scope: there are many systems that need documenting, and each has its own requirements, tools and peculiarities.  This post assumes an infrastructure slant on documentation standards/requirements.

Documentation cobwebs

What about all those once-loved documents that were created and served a useful purpose at one stage of their lives, however over the years have fallen into disrepair and neglect?  The systems they once accurately described have evolved, adapted or disappeared… people forgot the documentation even existed as they made changes to the infrastructure and hurriedly moved onto the next urgent thing.

How about the documentation that nobody ever looks at, or denies the existence of?  Documentation that sits in one of many documentation stores, with new and old versions scattered far and wide.

What about the comfortable colleague who doesn't want to exert a little effort to learn how to do something in their job… to them, documentation may as well not exist.

Sometimes it's easy to wonder why we bother with good documentation when, far too often, it feels like a lost cause!

Why good documentation matters

  • A customer may want to see the documentation for the solution you've built, or manage, for them.
  • Our intrepid explorer colleague, who likes to know how things work and takes the initiative and time to learn for themselves, very much appreciates the labours of good documentation.
  • May actually help diagnose a problem or issue faster.
  • Whilst painstakingly writing the documentation, perhaps the author catches mistakes in the documentation or design.

Is there a better way?  Maybe.

What about:

  • Documentation re-use (gold templates)
    • Let’s not waste so much effort creating the documentation… re-use as much as possible from templates.
  • Documentation source control
    • Collaborate with others to build the awesome gold template and the scripts to auto-populate it
  • Documentation auto population
    • Much time is wasted by engineers transposing table information from an environment into a document.
    • Non-searchable copy-and-pastes of tables as images into a document make indexing and searching impossible.
    • Post-change automatic documentation updates
  • A customer view vs an internal view
    • A customer may want more detail as to what each component does, whilst a colleague dealing with similar solutions all day long doesn't care for the vendor marketing material (marchitecture) or technology descriptions.
  • Automatic/scheduled documentation updates
  • Infrastructure as Code
    • To some degree it can be self-documenting.  Even better if it contains good comments.

Tool chain

So many options exist when it comes to documentation.  I looked at a few with a view to knocking something up quickly:

  • Confluence
    • Seemed like a good option, with the possibility of styling a page's PDF output.
    • Didn't invest the time in working out how to integrate external data sources into a page, but it seemed doable via the Dynamic Content Macro.
  • TeX/LaTeX
    • Whilst powerful and very popular among technical professions for technical documentation, it seemed like there was a steep learning curve.
  • Markdown
  • reStructuredText & sphinx
    • Was my preferred option for a while there, however I ran into issues generating PDF documents and didn't want to spend the time troubleshooting.
    • Really liked the Sphinx HTML output generated from Python docstring comments.
  • AsciiDoc (language) & AsciiDoctor (processor)
    • Settled on AsciiDoctor as it did everything I wanted and seemed to just work, especially with asciidoctor-pdf.
  • Pandoc
    • The Swiss Army knife for converting between various documentation formats.
  • DocBook

Source controlled gold template structure

The directory structure is important to consider, as it enables collaboration, houses chapter-based text, references images and holds information-gathering scripts.  Below is a possible directory structure that nicely splits out static text, images and dynamic content; a sketch of auto-populating one of its CSV includes follows the tree.

├── chapters
│   └── includes
│       └── chapter_01
│           ├── host_info.csv
│           └── mem_info.csv
├── img
│   ├── includes
│   │   └── chapter_01
│   │       └── sample-diagram.png
│   ├── document-footer-image.png
│   └── document-title-image.png
├── main.adoc
├── pdf-theme.yml
├── README.md
└── scripts (maintain scripts in a separate repository and import as required)
    └── query_hosts.ps1 (example)
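
Instead of a PowerShell helper like query_hosts.ps1, the population step could equally be an Ansible play.  A hypothetical sketch that writes host_info.csv from gathered facts; the paths and CSV columns are placeholders chosen to match the structure above:

    ---
    - hosts: all            # gather facts from the environment being documented
      gather_facts: yes

    - hosts: localhost
      gather_facts: no
      tasks:
        - name: Write host information into the chapter include
          copy:
            dest: chapters/includes/chapter_01/host_info.csv
            content: |
              hostname,os,memory_mb
              {% for host in groups['all'] %}
              {{ host }},{{ hostvars[host].ansible_distribution }} {{ hostvars[host].ansible_distribution_version }},{{ hostvars[host].ansible_memtotal_mb }}
              {% endfor %}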

Further consideration would be required for sharing content between different gold template repositories.  The above structure worked well for generating PDF documents out of the source; however, when editing the AsciiDoc with Atom or Visual Studio Code with various markup preview tools enabled, the HTML preview was broken due to image paths.

Great, gold templates in git.  What now?

Once you have a new project that needs documenting, clone your gold template and create a new repository to host the documentation you're about to create.

Modify the new repository to meet your needs, execute the helper scripts to gather the required environment data, commit and push your changes, then build/compile your documentation (a sketch follows).
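
Sticking with the Ansible theme from earlier, the clone-and-build step might look something like the sketch below; the repository URL and paths are placeholders:

    ---
    - hosts: localhost
      gather_facts: no
      tasks:
        - name: Clone the new documentation repository
          git:
            repo: 'git@git.example.com:docs/customer-x-as-built.git'
            dest: /tmp/customer-x-as-built

        - name: Build the PDF with asciidoctor-pdf
          command: asciidoctor-pdf main.adoc -o build/customer-x-as-built.pdf
          args:
            chdir: /tmp/customer-x-as-built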

Seems like a lot of effort.  Why not use a word processor as we normally do?

How about day two?  Your documentation and environment align at the time of creation (hopefully), but what happens weeks, months, years into their existence?  Hopefully the documentation is not gathering dust!

If it is, set up an automated scheduled task to gather the current state information for you and compile the up-to-date documentation.

Perhaps your monitoring system provides you with estimated capacity exhaustion information… maybe that's useful in your documentation if you share it with a customer on a regular basis.  (Perhaps it's better placed in some sort of reporting document.)

What if you had an "infrastructure as code" setup and stored the code in a code repository?  How nice would it be for the documentation to update whenever configuration is updated in that repository, perhaps via a webhook or CI/CD pipeline.
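
As a sketch of that idea, a GitLab CI job could rebuild the PDF on every push; I'm assuming the community asciidoctor/docker-asciidoctor image here, which bundles asciidoctor-pdf:

    # .gitlab-ci.yml
    build_docs:
      image: asciidoctor/docker-asciidoctor
      script:
        - asciidoctor-pdf main.adoc -o build/as-built.pdf
      artifacts:
        paths:
          - build/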

Documentation as a Service

How about if you don't want all your engineers installing the documentation tool chain required to produce the final version of the documentation?

Consider a small web app which accepts the following inputs:

  • Code repo URL
  • Code repo username & password
  • Email address, or, target location (document store)

Process flow for web app:  Submit params -> Clone Repo -> Build/Process Documentation -> Preview, email or upload final document.  Display errors if any.

Observation

I thought there would have been more discussion around documentation as code, or automatically generating documentation.  Perhaps documenting infrastructure automatically is not as glamorous as other topics?

Maybe it's somewhat redundant if you have self-documenting infrastructure as code?  But that probably falls short, as the code only covers the "what" of a solution design and not the "why".

Links/Interesting reading