A DevOps Guide to Securing Microservices and Safeguarding Application Deployment

The approach of decoupling an application and deploying it as standalone modules, commonly called microservices, has seen widespread adoption in the modern software development lifecycle. In one survey, 37% of respondents said their organizations had already started the shift to microservices, showing that adoption is going strong.

But microservices come with their own security challenges that need to be addressed:

  • The scale at which they are deployed means it’s hard to keep track of the number of services and how to address their security. 
  • The attack surface becomes much larger given the amount of communication between services and endpoints.
  • The environment in which microservices are deployed is rarely uniform. Even when we strive for centralized consistency, in practice some services run in Docker, others in Kubernetes, and still others directly in the cloud. This sprawl of environments has an impact on security and therefore needs to be addressed.

In this blog, we’ll discuss some strategies on how to secure microservices in order to safeguard application deployment.

Securing Your Environment

As we already mentioned, it’s very likely your microservices are not deployed in a single environment and are instead spread out among different environments such as Docker, Kubernetes, cloud, etc. Therefore, before even deploying the services, you must first secure your environment so that no unnecessary access is provided—to either developers or external parties—in order to reduce the attack surface. 

Some ways to achieve this are the following:

  • Keep workloads in a private network.
  • Use role-based access control (RBAC) when giving access to developers.
  • Implement rate limiting, web application firewalls (WAFs), load balancing, and IP whitelisting; expose open ports only to the services that need them (a minimal rate-limiting sketch follows this list).
  • Audit all access granted from time to time.
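To make the rate-limiting and whitelisting point concrete, here is a minimal Python sketch of a per-client token bucket combined with an IP allowlist, the kind of check you might run at an internal gateway. The CIDR ranges, rate, and capacity are illustrative assumptions; in practice a WAF, API gateway, or load balancer usually handles this for you.

```python
import ipaddress
import time
from collections import defaultdict


class TokenBucket:
    """Per-client token bucket: refills at `rate` tokens/second up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: capacity)
        self.updated = defaultdict(time.monotonic)

    def allow(self, client_ip: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.updated[client_ip]
        self.updated[client_ip] = now
        # Refill proportionally to the time elapsed, capped at the burst capacity.
        self.tokens[client_ip] = min(self.capacity, self.tokens[client_ip] + elapsed * self.rate)
        if self.tokens[client_ip] >= 1:
            self.tokens[client_ip] -= 1
            return True
        return False


# Hypothetical internal ranges; in production these would come from configuration.
ALLOWLIST = [ipaddress.ip_network("10.0.0.0/8"), ipaddress.ip_network("192.168.0.0/16")]
limiter = TokenBucket(rate=5, capacity=10)


def is_request_allowed(client_ip: str) -> bool:
    """Accept a request only if the source IP is allowlisted and within its rate budget."""
    source = ipaddress.ip_address(client_ip)
    if not any(source in network for network in ALLOWLIST):
        return False
    return limiter.allow(client_ip)
```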

Implement Authentication and Authorization Between Services

Since there’s a lot of interaction between microservices, it’s important to implement secure authorization and authentication to prevent lateral movement in the unfortunate event of a breach. 

To handle this:

  • Apply the principle of least privilege and a zero-trust model to your microservices so that no unnecessary access is granted.
  • Invest in an API gateway and service discovery mechanism to enhance security in your architecture.
  • Use tokens, and perform penetration testing regularly to uncover authorization and authentication issues; for maximum security, every request in the environment should carry some form of authentication, regardless of origin (a token-verification sketch follows this list).
  • Use identity and access management services provided by your cloud provider.
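As a concrete illustration of token-based authentication between services, below is a minimal sketch using the PyJWT library to verify a bearer token on every incoming request. The key file, audience, and scope names are hypothetical; an API gateway or your cloud provider's IAM service can perform the same checks for you.

```python
import jwt  # PyJWT

# Illustrative values: in practice the signing key and audience come from your
# identity provider or API gateway configuration.
PUBLIC_KEY = open("idp-public.pem").read()
EXPECTED_AUDIENCE = "orders-service"


def authenticate_request(bearer_token: str) -> dict:
    """Reject any request whose token fails signature, expiry, or audience checks."""
    claims = jwt.decode(
        bearer_token,
        PUBLIC_KEY,
        algorithms=["RS256"],        # pin the algorithm to avoid downgrade tricks
        audience=EXPECTED_AUDIENCE,  # the token must be intended for this service
    )
    # Least privilege: also check the caller's scope before doing any work.
    if "orders:read" not in claims.get("scope", "").split():
        raise PermissionError("caller lacks the required scope")
    return claims
```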

Secure Communication Between Services with SSL

Secure Sockets Layer (SSL/TLS) on public-facing applications has become fairly standard nowadays. But what many organizations fail to appreciate is the importance of implementing it for internal networks and services as well. A common practice in many enterprises is to terminate the SSL connection at the public-facing load balancer rather than at the application itself. Some argue that terminating it at the application adds compute and latency overhead to every service. But when designing microservices securely, this trade-off needs to be weighed even for internal traffic.
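As a rough sketch of what terminating TLS at the application itself looks like, here is a minimal Python HTTPS server built on the standard library. The certificate and key paths are assumptions; in a real deployment, certificates would typically be issued and rotated by an internal CA or a service mesh.

```python
import http.server
import ssl

# Terminate TLS in the service itself rather than only at the public load balancer.
server = http.server.HTTPServer(("0.0.0.0", 8443), http.server.SimpleHTTPRequestHandler)

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="service.crt", keyfile="service.key")  # assumed paths

server.socket = context.wrap_socket(server.socket, server_side=True)
server.serve_forever()
```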

The communication between services needs to be encrypted to prevent network sniffing and man-in-the-middle attacks. Some points to consider when implementing SSL internally:

  • Implement mTLS to safeguard the authenticity of services and prevent malicious traffic from ever being processed (see the mTLS sketch after this list).
  • Use a service mesh solution, such as Istio or Linkerd, to manage the complexity of setting up encryption when many services are involved.
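For mutual TLS specifically, both sides present certificates. The sketch below shows one service calling another over mTLS with the requests library; the internal hostname and certificate paths are assumptions, and a service mesh such as Istio or Linkerd would normally inject and rotate these certificates automatically.

```python
import requests

# Hypothetical internal endpoint; the CA bundle and client cert/key paths are assumptions.
INTERNAL_URL = "https://orders.internal:8443/api/v1/orders"

response = requests.get(
    INTERNAL_URL,
    cert=("client.crt", "client.key"),  # this service's identity, presented to the peer
    verify="internal-ca.crt",           # trust only the internal CA, not public ones
    timeout=5,
)
response.raise_for_status()
print(response.json())
```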

Establish Proper Monitoring and Logging

While most microservice-based environments will have logging implemented in some form for debugging purposes, it’s important to leverage logs for security reasons as well. 

Logs are the bread and butter of security teams, and investing in a good logging system will help security teams monitor and triage incidents effectively. Along with this, the security team should perform threat modeling and set up proper alerts for anomalies in order to have seamless incident response in the event of a breach. 

There is a plethora of open-source and commercial software that can help with monitoring and logging. Here are some points to consider prior to implementation:

  • Format logs so they’re easy to understand and consume for better incident response (a structured-logging sketch follows this list).
  • Store logs on fast storage so that there’s no delay before they show up in your tool’s dashboard.
  • Test alerts and remediations regularly to check if they’re functioning properly.
  • Back up logs regularly, preferably to a secure location that attackers cannot reach from the production environment; in the event of a breach, attackers will try to delete logs to cover their tracks.
  • Consider open-source tools like Wazuh, Loki, and ELK; commercial tools include Splunk and Datadog.
  • For Kubernetes environments, check out Falco, an excellent runtime security tool that detects threats and raises alerts to support incident response.
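As a small example of the log-formatting point above, the sketch below emits one JSON object per log line using only the Python standard library, so whichever backend you use (ELK, Loki, Splunk, etc.) can index the fields reliably. The service name and request ID field are illustrative assumptions.

```python
import json
import logging
import sys


class JsonFormatter(logging.Formatter):
    """Emit one JSON object per line so the log pipeline can parse fields reliably."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "service": "orders-service",  # hypothetical service name
            "message": record.getMessage(),
            "request_id": getattr(record, "request_id", None),
        })


handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())

logger = logging.getLogger("orders")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Security-relevant events carry explicit, queryable fields.
logger.info("login failed for user", extra={"request_id": "req-123"})
```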

Don’t Hardcode Secrets in Your Source Code

Microservices are bound to carry and exchange secrets when communicating among themselves. It is important to keep these secrets confidential and not hardcode them in the source code, because if the repository is ever leaked, the effects can be devastating. Furthermore, hardcoding makes it difficult to rotate secrets. This means you should invest in a secret management solution that keeps secrets out of the source code entirely.

Usually, with a secret management solution, the microservice makes REST API calls to the secret-hosting server and retrieves the secrets it uses at runtime. It goes without saying that the microservice retrieving the secrets should not be overly permissive: it should only be able to read the secrets it needs and nothing else.

There are open-source tools such as HashiCorp Vault, and cloud-native options like AWS Secrets Manager and Azure Key Vault, that can be integrated into the software development process, providing much-needed defense in the case of a code leak.
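To illustrate the runtime-retrieval pattern described above, here is a minimal sketch that pulls database credentials from AWS Secrets Manager with boto3 at startup. The secret name and region are assumptions, and the service's IAM role should be permitted to read only this one secret; the equivalent flow with HashiCorp Vault or Azure Key Vault looks very similar.

```python
import json

import boto3

# Fetch credentials at runtime instead of hardcoding them in the repository.
# The secret name and region are illustrative assumptions.
client = boto3.client("secretsmanager", region_name="us-east-1")
response = client.get_secret_value(SecretId="prod/orders-service/db")
db_credentials = json.loads(response["SecretString"])

# Keep the values in memory only; never write them back to disk or into logs.
db_user = db_credentials["username"]  # assumed key inside the secret payload
```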

Documentation

The engineering team should make a habit of documenting where every service is deployed, along with all relevant information pertaining to the service: its open ports, access, dependencies, sample API calls, etc. This helps keep track of all the changes in your infrastructure and helps security teams review architecture configurations. 

For example, say a service was temporarily given higher privileges for some reason, and the engineer forgot to revoke the access. If that service were ever compromised, the consequences could be devastating. With proper documentation and timely follow-ups, you can ensure such temporary changes are caught and rolled back. 

Tools like Backstage and Confluence make the task of documenting and sharing quite easy and therefore, should be adopted by expanding engineering teams.

Conclusion

Security for microservices is overlooked in many organizations in favor of building and shipping releases faster. However, as history has repeatedly shown, the real security budget discussion often starts only after a security incident, and by then it is often too little, too late. By that point your organization may have scaled up to massive infrastructure, making it far more difficult to retrofit security best practices. 

It is crucial to recognize the need for securing your microservices early in the pipeline so you can avoid needlessly increasing your engineers’ workload and stress levels should an incident occur in the future.

Zesty offers automated cloud optimization to reduce costs, free resources, and maximize efficiency. 

Book a demo now and start running your cloud on autopilot.