Shouldn't we have Standard Automation for Commodity Infrastructure?
Our series on SRE continues… At RackN, we see a coming explosion in both the complexity and scale of infrastructure. Unless our industry radically rethinks its operational processes, current backlogs will escalate, and stability, security and sharing will suffer.
An entire chapter of the Google SRE book was dedicated to the benefits of improving data center provisioning via automation; however, the description was abstract with a focus on the importance of validation testing and self-healing. That lack of detail is not surprising: Google’s infrastructure automation is highly specialized and considered a competitive advantage.
Shouldn’t everyone be able to do this?
After all, data centers are built from the same basic components with the same protocols.
Unfortunately, the stack of small (but critical) variations between these components makes it very difficult to build a universal solution. Reasonable variations like hardware configuration, vendor out-of-band management protocol, operating system, support systems and networking topologies add up quickly. Even Google, with their tremendous SRE talent and time investments, only built a solution for their specific needs.
To handle this variation, our SRE teams bake assumptions about their infrastructure directly into their automation. That’s expedient because there’s generally little operational reward for creating generic solutions to specific problems. I see this all the time in data centers where server naming conventions and IP address schemes serve as the automation glue between tools and processes. While this may be a practical tactic for integration, it is fragile and site specific.
Hard coding your operational environment into automation has serious downsides.
First, it creates operational debt [reference], just like hard coding values does in regular development. Please don’t mistake this as a call for yak shaving provisioning scripts into open-ended models! There’s a happy medium: scripts can be robust to infrastructure details like IPs, NIC ordering, system names and operating system behavior without compromising readability or development time.
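To make the happy medium concrete, here is a minimal sketch contrasting the two styles. All of the names, IP ranges and config keys below are illustrative assumptions, not taken from any real site:

```python
# Hypothetical sketch: hostnames, subnets and the config schema are invented
# for illustration; they do not describe any actual data center.

# Hard-coded: the site's naming convention and subnet are baked into the logic.
def provision_node_hardcoded(index: int) -> dict:
    return {
        "hostname": f"dc1-rack03-node{index:02d}",  # naming convention as glue
        "ip": f"10.3.17.{100 + index}",             # subnet assumption baked in
        "gateway": "10.3.17.1",
        "os": "ubuntu-20.04",
    }

# Parameterized: the same logic, with site specifics isolated in one config.
def provision_node(index: int, site: dict) -> dict:
    return {
        "hostname": site["hostname_pattern"].format(index=index),
        "ip": f"{site['subnet_prefix']}.{site['ip_offset'] + index}",
        "gateway": site["gateway"],
        "os": site["default_os"],
    }

site_config = {
    "hostname_pattern": "dc1-rack03-node{index:02d}",
    "subnet_prefix": "10.3.17",
    "ip_offset": 100,
    "gateway": "10.3.17.1",
    "default_os": "ubuntu-20.04",
}

# Both produce the same result here, but only the second version can move to
# a new site by swapping the config instead of editing (or forking) the code.
assert provision_node(5, site_config) == provision_node_hardcoded(5)
```

The parameterized version is barely longer than the hard-coded one, which is the point: isolating site assumptions does not require open-ended modeling.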
Second, it eliminates reuse, because code that works in one place must be forked (or copied) to be used again. Forking creates a proliferation of sources of truth and technical debt. Unlike a shared script, forked scripts do not benefit from mutual improvements. This is true both for internal use and when external communities advance. I have seen many cases where a company’s decision to fork away from open source code to “adjust it for their needs” caused them to forever lose the benefits accruing in the upstream community.
Consequently, Ops debt accumulates quickly when these infrastructure-specific items are coded into scripts, because you have to touch a lot of code to make small changes. You also end up with hidden dependencies.
However, until recently, we have not given SRE teams an alternative to site customization.
Of course, the alternative requires some additional investment up front. Hard coding and forking are faster out of the gate; however, the SRE mandate is to aggressively reduce ongoing maintenance tasks wherever possible. When core automation is site customized, Ops loses the benefits of reuse both internally and externally.
That’s why we believe SRE teams should work to reuse automation whenever possible.
Digital Rebar was built from our frustration watching the OpenStack community struggle with exactly this lesson. We felt that having a platform for sharing code was essential; however, we also observed that differences between sites made it impossible to share code. Our solution was to isolate those differences into composable units. That isolation allowed us to take a system integration view that did not break when inevitable changes were introduced.
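The isolation idea can be sketched in a few lines. This models the general pattern of hiding site variation behind a small interface so the workflow itself is shared rather than forked; it is an illustrative sketch only, not Digital Rebar’s actual API, and the sites, vendors and node names are invented:

```python
# Illustrative only: a generic "composable unit" pattern, not Digital Rebar code.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SitePlugin:
    """Everything site-specific, isolated behind one small interface."""
    name: str
    discover_nodes: Callable[[], List[str]]       # how this site finds nodes
    set_boot_target: Callable[[str], str]         # vendor-specific OOB protocol

def provision(site: SitePlugin) -> List[str]:
    """Shared workflow: identical at every site, reused rather than forked."""
    return [site.set_boot_target(node) for node in site.discover_nodes()]

# Two sites differ in out-of-band management protocol, but both plug into
# the same provision() workflow without any copy-and-edit.
site_a = SitePlugin(
    name="site-a",
    discover_nodes=lambda: ["a1", "a2"],
    set_boot_target=lambda n: f"ipmi:{n}->pxe",
)
site_b = SitePlugin(
    name="site-b",
    discover_nodes=lambda: ["b1"],
    set_boot_target=lambda n: f"redfish:{n}->pxe",
)
```

When a site changes its hardware or management protocol, only its plugin changes; the shared workflow keeps accruing improvements for everyone.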
If you are interested in breaking out of the script customization death spiral, review what the RackN team has done with Digital Rebar.