The foundation of devops
Integration is the foundation of devops
If automation is the how, then integration is the what. Let’s look at the three key categories: process, tool and data integration.
Even the name “devops” gives it away: we’re trying to integrate processes across development and operations. We want an integrated deployment pipeline that carries developer commits into production with the right lifecycle workflow automation in place. When ops has a critical patch to deploy, we want to see the patch testing process coordinated with the development test process. We want to control configuration drift using a combination of infrastructure provisioning, middleware deployment, app deployment and deployment blueprinting. All of these are examples where it makes sense to create processes that span the traditional boundary between development and ops.
So process integration is the goal, but to make it possible, we need tool integration. When developers commit code, we want the continuous integration server to launch a build. The build automation needs to be able to invoke test automation (commit tests, integration tests and so forth). When the CI server finishes a build, we want it to drop the artifact in an artifact repository to support cross-team CI (in the case of libraries) or deployments into target environments (in the case of apps and services). The deployment automation needs to be able to grab the artifacts, invoke VM provisioning APIs (like EC2), launch installers, run automated smoke tests and so forth. Promotion into downstream environments requires knowing the entry and exit gates for each environment, as well as whether the build in question “passed” any given environment.
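The promotion logic described above can be sketched in a few lines. This is a minimal illustration, not a real tool: the environment names, gate names and the `can_promote` function are all hypothetical stand-ins for whatever your CI and deployment tooling actually exposes.

```python
# Hypothetical sketch: entry gates per environment, and a check that a
# build has "passed" everything required before promotion. The data and
# function names are illustrative, not from any real product.

ENVIRONMENT_GATES = {
    # entry gate: tests a build must have passed before it enters the env
    "qa": ["commit_tests"],
    "staging": ["commit_tests", "integration_tests"],
    "production": ["commit_tests", "integration_tests", "smoke_tests"],
}

def can_promote(build_results, env):
    """Return True if the build satisfies the entry gates for an environment."""
    return all(build_results.get(test) == "passed"
               for test in ENVIRONMENT_GATES[env])

build = {"commit_tests": "passed", "integration_tests": "passed"}
assert can_promote(build, "staging")
assert not can_promote(build, "production")  # smoke tests haven't run yet
```

The point is that this check only works if the CI server, the test automation and the deployment tool all agree on what the gates are and where results are recorded, which is exactly the tool integration being argued for.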
Besides tool integration, we need data integration. Ideally the same team membership data that drives deployment ACLs drives the list of subject matter experts in the operational runbook. The list of apps in the deployment tool should be the same one that the build tool sees, the same one that the configuration repositories see, the same one that the NOC sees in the monitoring tools and so forth.
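One way to picture data integration is a single canonical registry that every tool queries, rather than each tool keeping its own copy of who owns what. The registry contents and the two consuming functions below are hypothetical, chosen only to show two tools deriving their views from the same dataset.

```python
# Sketch of data integration: one shared registry of apps and teams.
# Registry entries and tool functions are illustrative assumptions,
# not real systems.

APP_REGISTRY = {
    "billing-service": {"team": "payments", "oncall": ["alice", "bob"]},
    "web-frontend": {"team": "ui", "oncall": ["carol"]},
}

def deployment_acl(app):
    """The deployment tool derives its ACL from the shared registry."""
    return set(APP_REGISTRY[app]["oncall"])

def runbook_experts(app):
    """The operational runbook lists the same people, from the same data."""
    return sorted(APP_REGISTRY[app]["oncall"])

# Both tools agree because they read one dataset, not two copies:
assert deployment_acl("billing-service") == {"alice", "bob"}
assert runbook_experts("billing-service") == ["alice", "bob"]
```

When the payments team changes its on-call rotation, one update propagates everywhere; there is no second copy in the runbook to drift out of date.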
Automation is important in establishing the integrations, but its role is supporting: it makes them fast and reliable. We could have a manual process in which someone copies data from one system into another, and from a systems view that would still count as an integration. But it would be a slow and error-prone one. Automation makes it much better.
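Even the humble copy-between-systems integration can be automated. The sketch below uses two plain dictionaries as stand-ins for two tools' app lists; a real version would call each tool's API, but the shape of the automation is the same.

```python
# Sketch: replacing a manual "copy the app list into the other tool"
# step with an automated sync. The two "systems" are stand-in dicts.

build_tool_apps = {"billing-service": "v42", "web-frontend": "v7"}
monitoring_apps = {}  # the monitoring tool starts out empty or stale

def sync_app_list(source, target):
    """Replicate the app list so both systems see the same data."""
    for app, version in source.items():
        target[app] = version
    return target

sync_app_list(build_tool_apps, monitoring_apps)
assert monitoring_apps == build_tool_apps
```

Run on a schedule or triggered by change events, this removes the human from the loop and with it the transcription errors and the lag.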
By now you may agree that integration is vital to devops. But why do we need a platform?
Devops integrations require a platform
We knew at the outset that over time, our primary “users” would be automated rather than human. The shift is still underway, so we’re in the process of stabilizing APIs, establishing infrastructure (e.g., a message bus to coordinate CRUD operations on CIs across standalone tools) and so forth. But forces are definitely driving us toward a platform-based design. I don’t necessarily want to try to define “platform” here, but here are some of the platform-ish things that became important once we started seeing increased adoption of our tools and data:
Communicating through service interfaces: even the UIs should use service interfaces where possible, since this helps automation do much of what people can do
Versioned APIs and associated design/implementation approaches (e.g., separate domain objects from DTOs)
Service authentication and authorization
Message privacy and integrity, both in transit and at rest
Performance and availability protections (e.g., pagination, rate limiting, circuit breakers)
Messaging infrastructure to coordinate independent tools (commercial, open source, in-house)
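To make one of these concrete, here is a sketch of the “separate domain objects from DTOs” point: the internal model can evolve freely while each versioned API keeps a stable wire format. The class and field names are assumptions for illustration, not an actual API of ours.

```python
# Sketch of separating domain objects from DTOs to support versioned
# APIs. All names here are illustrative.

from dataclasses import dataclass

@dataclass
class App:  # internal domain object; its fields may change over time
    name: str
    owning_team: str
    oncall: list

def to_dto_v1(app: App) -> dict:
    """v1 wire format: frozen; clients of /api/v1 rely on these keys."""
    return {"name": app.name, "team": app.owning_team}

def to_dto_v2(app: App) -> dict:
    """v2 adds the on-call list without breaking v1 clients."""
    return {"name": app.name, "team": app.owning_team,
            "oncall": list(app.oncall)}

app = App(name="billing-service", owning_team="payments", oncall=["alice"])
assert to_dto_v1(app) == {"name": "billing-service", "team": "payments"}
assert "oncall" in to_dto_v2(app)
```

Because the DTO mappers are the only place the wire format is defined, renaming or restructuring the domain object is a local change, while v1 clients keep getting exactly the payload they were promised.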