In the old days – way back before virtualization – no one really needed (or even cared much about) ways to automatically configure a new server. The process looked like this: a) purchase a physical server, b) install that server in a rack in the datacenter, c) configure the server for its intended use, d) patch/update as required and e) retire the server at the end of its useful life. Often, there was little need to automate the configuration step because it would most likely happen only once for each new server.
However, virtualization and infrastructure clouds have disrupted that cycle dramatically. In today’s world, a single physical server may be called upon to spawn hundreds or even thousands of “virtual” servers during its service life. This reality has increased the need for tools that can fully automate server configuration at both the OS and application layers. Two tools in particular – Puppet and Chef – do this very well.
Since our distributed application orchestration platform – Maestro – is in the IT automation space along with Puppet and Chef, I’m often asked to differentiate Maestro from these tools. In this post, I’ll outline the differences between distributed application orchestration and automated server configuration. I’ll also describe how Maestro works with tools like Puppet and Chef to orchestrate distributed applications throughout the application lifecycle.
What Is Automated Server Configuration?
Over the last few years, Puppet and Chef have become the leading automated server configuration tools, and both continue to grow their user communities. Puppet/Chef users are typically system administrators and devops teams responsible for managing deployments of Intel-based compute resources. Puppet/Chef (and all other automated server configuration tools) essentially do the same thing – move a specific physical or virtual server to a specific configuration state in a fully automated fashion. While each tool takes a slightly different approach, Puppet and Chef have become very successful because each provides a simple and elegant method to automatically execute a script (or series of scripts) on a server.
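The desired-state model these tools share can be sketched abstractly in Python. This is a toy illustration of the convergence idea – compare current state to desired state and act only on the difference – not the actual mechanism of either Puppet or Chef:

```python
def converge(current: dict, desired: dict) -> list:
    """Return the actions needed to move `current` to `desired` state.

    Applying the same desired state twice yields no further actions,
    which is the idempotency property these tools rely on.
    """
    actions = []
    for key, value in desired.items():
        if current.get(key) != value:      # only act when state diverges
            actions.append((key, value))
            current[key] = value           # record the new state
    return actions


state = {"nginx": "absent"}
converge(state, {"nginx": "installed", "port": 80})  # two actions taken
converge(state, {"nginx": "installed", "port": 80})  # no-op on second run
```

The second call returns an empty action list, which is why these tools can be run repeatedly (or on a schedule) without side effects.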
Another big advantage Puppet and Chef have over other automated server configuration tools is that each supports and maintains a large library of pre-built server templates (Puppet calls them “modules” and Chef calls them “recipes”). Leveraging these pre-built server templates can take much of the work out of building, testing and certifying a specific server configuration. Server templates make it very easy to automatically configure a new server to virtually any set of specifications.
What Is Distributed Application Orchestration?
All servers – physical or virtual – are employed in the service of application software. Many (if not most) multi-user applications deployed today rely on some form of distributed architecture. Distributed applications utilize software that executes across two or more individual servers. When compared to applications that run on a single large centralized server, distributed applications typically run on lower-cost hardware, have greater fault tolerance and are easier to scale up or down. Virtualization and infrastructure clouds are making distributed applications even more practical and efficient by providing pools of compute resources that can be instantly provisioned and de-provisioned – on demand.
Application-down orchestration is a requirement of all distributed applications: it is the process of interacting with every layer in the application stack to perform a distributed workflow. Examples include directing the provisioning platform to provision compute, network and storage resources (VMware, public/private cloud APIs, etc.), initiating automated configuration tools to bring each server to a desired state (Puppet, Chef, etc.) and interacting directly with application tiers, services and sub-components to deliver and manage a multi-tier distributed application to end users over the entire application lifecycle. While the orchestration function is often performed manually by system administrators and devops teams today, Maestro can completely automate this process across all layers of a distributed application.
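The provision-then-configure workflow described above can be sketched as a short Python script. All function and host names here are hypothetical placeholders – they stand in for calls to a real provisioning API and a real configuration tool, not for Maestro’s actual interfaces:

```python
def provision(tier: str) -> str:
    """Placeholder for a provisioning call (VMware, cloud API, etc.).
    Returns the address of the newly created server."""
    return f"{tier}.example.internal"   # hypothetical hostname


def configure(host: str, role: str) -> str:
    """Placeholder for triggering a configuration tool (Puppet, Chef, etc.)
    to bring the host to its desired state for the given role."""
    return f"{host}:{role}"             # record what was applied


def orchestrate(topology: dict) -> list:
    """Walk the application stack tier by tier: provision a server,
    then configure it, in the order the topology lists its tiers."""
    deployed = []
    for tier, role in topology.items():
        host = provision(tier)
        configure(host, role)
        deployed.append(host)
    return deployed


# Database first, then application tier, then web tier.
orchestrate({"db": "postgresql", "app": "tomcat", "web": "nginx"})
```

In practice each step would block on (or poll for) success before moving on – a web tier configured before its database exists is exactly the failure mode orchestration is meant to prevent.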
To automate the application orchestration process, Maestro provides eight major capabilities:

- Bi-directional interactivity/communications
- Workflow automation
- Dependency management/sequencing
- Self-service portal
- User access management/security
- Monitoring/reporting/self-healing
- Global application parameter/data store
- REST API interface
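Dependency management/sequencing is worth a closer look: before any tier is started, its dependencies must already be running. One standard way to compute a safe start order is a topological sort. The sketch below uses Python’s standard-library `graphlib` (available in Python 3.9+) with a hypothetical three-tier topology; it illustrates the technique, not Maestro’s internal implementation:

```python
from graphlib import TopologicalSorter

# Hypothetical topology: each tier maps to the set of tiers it depends on.
deps = {
    "web": {"app"},   # web tier needs the application tier up first
    "app": {"db"},    # application tier needs the database first
    "db": set(),      # database has no dependencies
}

# static_order() yields tiers only after all their dependencies,
# giving a safe startup sequence: db, then app, then web.
start_order = list(TopologicalSorter(deps).static_order())
```

Reversing this order likewise gives a safe teardown sequence, which is why the same dependency graph serves the whole application lifecycle.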
Distributed applications reside on multiple servers and do not exist on any one server in particular. As a result, it is very difficult to automate distributed workflows using automated server configuration tools – that’s simply not what those tools were designed to do. Maestro, on the other hand, was purpose-built to tackle the automated orchestration workflows required by dynamic distributed applications – on bare-metal, virtualized or cloud-based infrastructure.