Application services for a DevOps world
In a world where companies are embarking on digital transformation programs, applications are taking a more prominent role in the business value chain. Success in a digital world depends heavily on how fast you can deliver and deploy brand-new applications, or new versions of existing ones, both to delight your customers and business partners and to improve your internal business operations. Embracing modern DevOps methodologies helps organizations meet those goals. Very often, however, we find that choosing speed and agility means compromising on security. But do we really need to compromise?
When applications get deployed, you also want to deploy application services in front of them to scale and secure the apps, regardless of whether this happens on premises, in private clouds or in public clouds. Application services include functions such as load balancing, SSL offloading, web application firewall, DDoS mitigation, bot mitigation and DNS, to name a few. To avoid slowing down the deployment rate of new applications, the provisioning of these application services needs to be automated and integrated into the toolchains the DevOps community uses to automate application delivery and deployment.
The DevOps community uses CI/CD pipeline tools such as Jenkins or Bamboo to automate the process of building, testing and deploying applications. These tools typically pull the application source code from a source code repository platform like GitHub or GitLab and then run through a series of steps (the pipeline) that automate the build, the tests and the deployment of the application. To automate the deployment of application services together with the application, the application service configurations should be stored in that same source code repository. This implies that ADC policies, WAF policies and the like are stored as ‘source code’ in Git. This is what the industry refers to as infrastructure-as-code or security-as-code.
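As a minimal sketch of what "policies as source code" can look like, the snippet below keeps a hypothetical application service declaration as JSON in the repository and validates it before the pipeline uses it. The field names (`service`, `loadBalancer`, `wafPolicy`) are illustrative assumptions, not any vendor's actual schema.

```python
import json

# Hypothetical declarative description of an application service, stored as
# 'source code' in the same Git repository as the application. The schema
# shown here is invented for illustration only.
SERVICE_DECLARATION = """
{
  "service": "web-shop",
  "loadBalancer": {"algorithm": "round-robin",
                   "members": ["10.0.0.11", "10.0.0.12"]},
  "tls": {"offload": true, "certificate": "web-shop-cert"},
  "wafPolicy": "owasp-baseline"
}
"""

def load_declaration(text: str) -> dict:
    """Parse a declaration and check the fields the pipeline relies on."""
    decl = json.loads(text)
    for required in ("service", "loadBalancer"):
        if required not in decl:
            raise ValueError(f"declaration is missing '{required}'")
    return decl

decl = load_declaration(SERVICE_DECLARATION)
print(decl["service"])  # → web-shop
```

Because the declaration is an ordinary text file, the pipeline can version it, diff it and review it exactly like application code.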
It is important to note that your CI/CD pipeline tool should only know ‘what’ application services to provision, not ‘how’ they should be provisioned. If your application services are provisioned through a standard (imperative) REST API, your CI/CD tool needs to know how services are configured, because it has to execute a long series of imperative REST API calls in a very specific order. This makes things complex: you would also have to program into the pipeline which rollback steps to take if any of these calls fails halfway through the configuration. What is more, this logic is highly dependent on the underlying application services platform vendor, and bringing domain-specific or vendor-specific logic into a CI/CD tool is bad practice.

A declarative API avoids this problem. It lets you configure the entire service with one single REST API call. The application service is described in source code format (JSON or YAML) and is easily stored in the source code repository, just like the application code. Once the CI/CD tool has successfully deployed the application, the next step is to deploy the application services in front of it. This is accomplished with a single (declarative) REST API call that pushes down the JSON or YAML file containing the declarative description of the service. The result is either success (the application service is deployed and operational) or failure (it is not deployed). The CI/CD tool has no knowledge of how the service got deployed, which makes things a lot easier.
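The contrast between the two styles can be sketched in a few lines. `FakeADC` below is an invented stand-in for any application delivery platform, not a real product API; the point is where the ordering and rollback knowledge lives in each case.

```python
# Hypothetical sketch: imperative vs declarative provisioning.
# 'FakeADC' simulates a device that supports both styles; all names here
# are illustrative assumptions, not a real vendor API.

class FakeADC:
    def __init__(self, fail_at=None):
        self.objects = []
        self.fail_at = fail_at  # object kind at which a call will fail

    # Imperative endpoint: one call per object, order matters.
    def create(self, kind, spec):
        if kind == self.fail_at:
            raise RuntimeError(f"create '{kind}' failed")
        self.objects.append((kind, spec))

    def delete_all(self):
        self.objects.clear()

    # Declarative endpoint: one call; the device owns ordering and rollback.
    def declare(self, declaration):
        try:
            for kind, spec in declaration.items():
                self.create(kind, spec)
        except RuntimeError:
            self.delete_all()              # device rolls itself back
            return {"status": "failure"}
        return {"status": "success"}


declaration = {"pool": {"members": ["10.0.0.11"]}, "virtual": {"port": 443}}

# Imperative: the pipeline must encode both the order and the rollback.
adc = FakeADC(fail_at="virtual")
try:
    for kind in ("pool", "virtual"):       # ordering is vendor knowledge
        adc.create(kind, declaration[kind])
except RuntimeError:
    adc.delete_all()                       # rollback logic lives in the pipeline

# Declarative: one call, and the result is simply success or failure.
result = FakeADC(fail_at="virtual").declare(declaration)
print(result["status"])  # → failure, and the device is left in a clean state
```

In the imperative case every branch of the ordering and cleanup logic is the pipeline's problem; in the declarative case the pipeline only pushes the description down and inspects a single result.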
Infrastructure-as-code and declarative APIs are the key to making sure that speed and security don’t get in each other’s way when deploying new applications. So back to our original question: do we need to compromise between speed and security? The answer is no.