
Wednesday, August 28, 2024

Rate limiting in Kubernetes using Istio


Rate limiting is one of my favorite topics. I have implemented rate limiting as part of an API Gateway and maintained multiple versions of it, each tailored to the needs of the downstream applications that consumed it in various ways. It is also a microservices use case that can be implemented in Kubernetes, and it can serve as an add-on to existing rate limit implementations based on customer or subscription.

First, let's talk about rate limiting as a security measure, and how its absence or improper implementation can lead to several security problems:

Vulnerability to Attacks


Denial of Service (DoS) and Distributed Denial of Service (DDoS) Attacks

Attackers can overwhelm an API with a flood of requests, exhausting server resources and making the API unresponsive or unavailable to legitimate users. This causes resource exhaustion, depleting server resources like CPU, memory, and network bandwidth, and potentially causing system crashes or slowdowns.

Brute Force Attacks

Lack of rate limiting makes it easier for attackers to conduct brute force attacks on authentication endpoints, attempting numerous login attempts in rapid succession. Back in 2014, this was one of the reasons Apple's authentication was compromised, allowing hackers to break into celebrity accounts. Apple has since hardened the authentication service, and with two-factor authentication now the standard, it's no longer the same concern.

Credential Stuffing

This almost always happens via bots scanning well-known URLs for easy access using common and stolen credentials. Without rate limits in place, the problem is only exacerbated.

Increased Attack Surface

Unlimited API access provides more opportunities for attackers to discover and exploit other vulnerabilities in the system.

Business Impact

This has a direct business impact: services respond poorly because of resource exhaustion, and it may also lead to excessive financial costs and service disruptions.

Implementing proper rate limiting policies is crucial for maintaining API security, ensuring fair resource allocation, and protecting against various forms of attacks and abuse. It's an essential component of a comprehensive API security strategy.

So how can you implement it? If you are already using Kubernetes and a service mesh, it's just a matter of adding it to your configuration. I have provided an example here -> Github
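As a rough sketch of what that configuration can look like, the following EnvoyFilter applies an Envoy local rate limit of 10 requests per second at a workload's sidecar. The workload label `app: demo-app` and the namespace are placeholders for illustration, not the names from my repo:

```yaml
# Hedged sketch: Istio EnvoyFilter enforcing a local rate limit of
# 10 requests/second on a workload labeled app: demo-app (placeholder).
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: demo-app-ratelimit
  namespace: default
spec:
  workloadSelector:
    labels:
      app: demo-app
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: envoy.filters.network.http_connection_manager
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.filters.http.local_ratelimit
          typed_config:
            "@type": type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
            value:
              stat_prefix: http_local_rate_limiter
              token_bucket:
                max_tokens: 10        # bucket size (burst)
                tokens_per_fill: 10   # refill 10 tokens...
                fill_interval: 1s     # ...every second
              filter_enabled:
                runtime_key: local_rate_limit_enabled
                default_value:
                  numerator: 100
                  denominator: HUNDRED
              filter_enforced:
                runtime_key: local_rate_limit_enforced
                default_value:
                  numerator: 100
                  denominator: HUNDRED
```

With this token bucket, requests beyond the 10-per-second budget are rejected by the sidecar with HTTP 429, which is what produces the behavior measured below.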

Without a rate limit, I am able to make 20 requests per second.

With a rate limit, I am only able to make 10 requests per second.

Istio

Istio, a popular service mesh for Kubernetes, provides built-in support for both rate limiting and circuit breaking. It can address almost all of the security concerns described above. Other service meshes may offer this as well, but I am going to use Istio for the demonstration.

Protection Against Brute Force

Rate limiting combined with circuit breaking helps protect your services from being overwhelmed.

Fine-grained Control

You can set different limits and thresholds for different services or endpoints.

Improved Resilience

Circuit breakers prevent cascading failures, while rate limiting controls the flow of requests.
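As an illustration of the circuit-breaking side, a DestinationRule can combine connection-pool limits with outlier detection; the host name and thresholds here are hypothetical, to be tuned per service:

```yaml
# Hedged sketch: circuit breaking for a service via connection-pool
# limits plus outlier detection (host and numbers are illustrative).
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: demo-app-circuit-breaker
spec:
  host: demo-app.default.svc.cluster.local
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100          # cap concurrent TCP connections
      http:
        http1MaxPendingRequests: 10  # cap queued requests
        maxRequestsPerConnection: 10
    outlierDetection:
      consecutive5xxErrors: 5        # eject a pod after 5 straight 5xx
      interval: 30s                  # scan interval
      baseEjectionTime: 60s          # minimum ejection duration
      maxEjectionPercent: 50         # never eject more than half the pool
```

The outlier detection ejects misbehaving endpoints from the load-balancing pool, which is what prevents a failing pod from dragging down its callers.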

You can also do rate limiting, load balancing, and routing on your Ingress without requiring a service mesh, but this will be limited to the services accessed through the ingress.
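For example, assuming you run the NGINX Ingress Controller, its annotations can enforce a per-client-IP rate limit with no mesh involved; the host and service names below are placeholders:

```yaml
# Hedged sketch: mesh-free rate limiting at the ingress, assuming the
# NGINX Ingress Controller (limits are applied per client IP).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "10"              # 10 requests/second
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "2"  # allow short bursts
spec:
  ingressClassName: nginx
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-app
                port:
                  number: 80
```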

Containerization Github List  ->   Github Containers

Sunday, August 25, 2024

AKS: Exploring Capabilities and Building Your First Sample Application

Azure offers a fantastic opportunity for developers to explore container services in the cloud with its $200 free credit for new accounts. 


This credit allows you to experiment, learn, and build applications without incurring immediate costs. Let's dive into how you can make the most of this offer and get hands-on experience with Azure Kubernetes Service (AKS).

What can you do with the free credit?

When you sign up for an Azure free account, you receive $200 credit to use within 30 days. This credit gives you access to a wide range of Azure services, including AKS. Here's how you can leverage this opportunity:

Explore Azure Portal 

Familiarize yourself with the Azure portal interface, where you'll manage your resources and services.

Create Resource Groups 

Organize your projects by creating resource groups, which act as containers for related resources.

Deploy AKS Clusters

My main focus was to deploy a simple app in AKS and understand the Kubernetes integrations.

Azure's control plane management offers significant advantages, particularly for smaller organizations without dedicated DevOps teams. Azure handles the control plane, taking care of patches and upgrades, allowing you to focus solely on application deployment. This approach streamlines your workflow and reduces operational overhead.


While using Docker Hub is convenient for testing Azure services, it doesn't seem to be supported. I was able to use ACR (Azure Container Registry), which is the integrated solution for Azure. In any case, most organizations will prefer private container registries for security and compliance reasons.


Horizontal Pod Autoscaling (HPA)

HPA in cloud environments offers more comprehensive scaling capabilities. HPA manages not just pod scaling but also coordinates with cluster autoscaling for efficient node management.


Cloud platforms excel in scenarios requiring rapid and frequent scaling, offering flexibility that's harder to achieve with on-premises solutions.
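A minimal HPA manifest looks like the following sketch; the Deployment name `demo-app` and the CPU target are illustrative, not from my repo:

```yaml
# Hedged sketch: autoscaling/v2 HPA scaling a Deployment between 2 and
# 10 replicas, targeting 70% average CPU utilization (numbers illustrative).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

In AKS, when the HPA asks for more replicas than the nodes can hold, the cluster autoscaler can add nodes to fit the pending pods, which is the node-level coordination described above.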


Admission Controllers and Add-ons

While many Admission Controller use cases can be implemented as add-ons in Azure Kubernetes Service (AKS), it's important to consider:


Native Kubernetes Flexibility: Some operations that are straightforward in native Kubernetes might require additional configuration or external services in managed Kubernetes offerings.


Trade-offs: The convenience of managed services often comes at the cost of some flexibility. Evaluate whether the simplification aligns with your specific requirements and use cases.

My test stops here, but you can do more: deploy your ML workloads, implement CI/CD, and so on.

Sample Application Deployment in AKS

To help you get started, I've created a sample Java application that can be deployed in AKS.

Github

The application and detailed deployment instructions are available in my GitHub repository. Here's a high-level overview of the steps:

  • Clone the Repository - Download the sample application code and Kubernetes manifests.
  • Build the Docker Image - Package your Java application into a container image.
  • Push to Container Registry - Upload your image to Azure Container Registry (ACR).
  • Create AKS Cluster - Use the Azure portal or CLI to set up your Kubernetes cluster.
  • Deploy the Application - Apply the Kubernetes manifests to deploy your app to AKS.
  • Access the Application - Configure ingress or use port forwarding to access your running application.
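As a sketch of the deploy step above, the manifests boil down to a Deployment and a Service; the image name, registry, and port are placeholders, so check the repository for the real values:

```yaml
# Hedged sketch: Deployment + Service for the sample Java app.
# The ACR name, image tag, and container port are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-java-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-java-app
  template:
    metadata:
      labels:
        app: sample-java-app
    spec:
      containers:
        - name: sample-java-app
          image: myacr.azurecr.io/sample-java-app:v1  # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: sample-java-app
spec:
  selector:
    app: sample-java-app
  ports:
    - port: 80
      targetPort: 8080
```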


Saturday, July 13, 2024

Traffic Shaping Strategies Using Kubernetes and Istio

Service meshes in Kubernetes have evolved, and there are now multiple types of service mesh for different use cases, whether you need a hybrid network with VMs and containers or high-performance options using eBPF.

In this article I want to provide a quick how-to on traffic shaping with Istio. It is far easier to do this in the mesh than to implement it in the application. Once you try it out, you will find yourself using it often for development and QA.

Well, can't I do this in my microservice?

Yes, it's common: for example, applying the new version of the API only to a particular type of customer, or only for a certain list of customers. Here are the reasons you should consider the Kubernetes approach:

Unified traffic management and Fine-grained control

Kubernetes provides more granular control over network traffic at the infrastructure level, allowing you to shape traffic based on various criteria such as source, destination, and protocol. This level of control may not be easily achievable through microservice design patterns alone.

Infrastructure-level management

Traffic shaping in Kubernetes operates at the cluster level, affecting all services running within it. This allows for consistent traffic management across the entire application ecosystem, regardless of individual microservice implementations.

Dynamic Scaling

If you are using this in production, Kubernetes can automatically adjust traffic shaping rules based on the current state of the cluster, such as the number of pods or resource utilization. This dynamic approach is harder to implement solely through microservice design patterns.

How about in Development and QA ?

The good thing about this is that you can try it locally or in your dev and QA environments to simulate and test various cases, even if you are not using containers in production. Create different containers for different versions of the APIs and deploy them using Kubernetes. I have provided step-by-step instructions below.
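A typical Istio traffic-shaping sketch looks like the following, under the assumption of two Deployments labeled `version: v1` and `version: v2` behind a single service; the service name and header are illustrative, not the ones in my repo:

```yaml
# Hedged sketch: route requests carrying header "x-customer-tier: beta"
# to v2 of the API; everyone else stays on v1. Assumes one Deployment
# per version, labeled version: v1 / version: v2 (names illustrative).
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: demo-api
spec:
  host: demo-api
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demo-api
spec:
  hosts:
    - demo-api
  http:
    - match:                       # matched first: beta customers
        - headers:
            x-customer-tier:
              exact: beta
      route:
        - destination:
            host: demo-api
            subset: v2
    - route:                       # default route for everyone else
        - destination:
            host: demo-api
            subset: v1
```

The same VirtualService could instead split by percentage (a `weight` per destination) for canary-style rollouts.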

Step by Step

Github

