OCI – Using Open Service Broker within Kubernetes to bind to Autonomous Databases

Kubernetes is a great platform to run microservices, there is no question about it. It has great features like the Horizontal Pod Autoscaler and the Cluster Autoscaler that make it very easy to scale whole applications depending on current or forecasted load. However, with auto-scaling there are a few considerations to keep in mind, and one of the most important is that containers are ephemeral. This implies that we need to design our applications in such a way that they can scale without compromising data persistence. There are multiple techniques available to make this possible. A common one is to use Persistent Volumes (PV) and Persistent Volume Claims (PVC), which hook into external disk volumes via the CSI (Container Storage Interface). This keeps state outside the containers, allowing them to scale without compromising the data.

Also, as Cloud providers keep embracing Kubernetes, these solutions are quickly evolving and becoming more sophisticated and easier to use. For example, nowadays we can extend the use of PVCs with Storage Classes implemented by the different Cloud vendors. This makes the whole PV/PVC experience much more enjoyable, as these storage classes become responsible for interfacing with the Cloud vendor's IaaS layer and creating the resources we simply declared, while we keep reading and writing data on persistent disks.
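To make this concrete, here is a minimal sketch of declaring such a claim with the official Kubernetes Python client. The storage class name ("oci-bv", the block-volume class typically provisioned on OKE) and the claim name are assumptions for illustration; substitute whatever classes your vendor provisions.

```python
# Minimal sketch: declaring persistent storage for a workload with the official
# Kubernetes Python client, relying on a vendor storage class to provision the
# underlying block volume. "oci-bv" is an assumed class name; adjust as needed.
from kubernetes import client, config

config.load_kube_config()  # uses your current kubeconfig context
core = client.CoreV1Api()

pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "orders-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "oci-bv",  # assumed: the Cloud vendor's storage class
        "resources": {"requests": {"storage": "50Gi"}},
    },
}

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc_manifest)
```

Once the claim is bound, pods simply mount it, and the actual volume lives (and survives) outside the containers.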

Now, with this constant multi-cloud endorsement of Kubernetes, it was only a matter of time until Cloud vendors decided to differentiate themselves by allowing the use of foreign cloud services as first-class citizens in Kubernetes. Just imagine having the ability to use a PaaS service from “Cloud Vendor A” seamlessly from within my Kubernetes cluster that is running on “Cloud Vendor B”. The piece of magic that makes this possible is called the Open Service Broker (OSB), which is really not magic at all, but a set of APIs that allow the Kubernetes control plane to provision and bind to Cloud services.

In this blog, I am going to show you how to consume Oracle Cloud Infrastructure (OCI) resources from within Kubernetes using the Open Service Broker. Specifically, I am going to let my Kubernetes control plane fully manage an OCI Autonomous Transaction Processing DB (ATP), as if it were a native Kubernetes resource… And by the way, I am going to use OKE (Oracle managed Kubernetes), but you could very well use Google/AWS/Azure Kubernetes elsewhere and still consume OCI resources. How cool is that?
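To give a taste of what this looks like, below is a heavily hedged sketch of a Service Catalog ServiceInstance for an ATP database, created here through the Kubernetes Python client. It assumes the Kubernetes Service Catalog and the OCI Service Broker are already installed and registered in the cluster; the service class name, plan name and parameter keys are illustrative assumptions, and the broker's documentation is the source of truth.

```python
# Heavily hedged sketch: a Service Catalog ServiceInstance describing an ATP
# database, created via the generic custom-objects API. Class/plan/parameter
# names are assumptions; the OCI Service Broker docs define the real ones.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

atp_instance = {
    "apiVersion": "servicecatalog.k8s.io/v1beta1",
    "kind": "ServiceInstance",
    "metadata": {"name": "my-atp", "namespace": "default"},
    "spec": {
        "clusterServiceClassExternalName": "atp-service",  # assumed class name
        "clusterServicePlanExternalName": "standard",      # assumed plan name
        "parameters": {                                     # assumed parameter keys
            "name": "MYATPDB",
            "compartmentId": "ocid1.compartment.oc1..<unique_id>",
            "cpuCount": 1,
            "storageSizeTBs": 1,
        },
    },
}

custom.create_namespaced_custom_object(
    group="servicecatalog.k8s.io",
    version="v1beta1",
    namespace="default",
    plural="serviceinstances",
    body=atp_instance,
)
# A ServiceBinding referencing "my-atp" would then expose the database
# credentials/wallet to application pods as a regular Kubernetes Secret.
```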

Continue reading “OCI – Using Open Service Broker within Kubernetes to bind to Autonomous Databases”

OCI – Oracle Container Engine for Kubernetes (OKE) – Using RBAC with IAM

In a recent blog, I explained how to approve external certificate signing requests from end users in Kubernetes. This way, users can simply use their private keys to authenticate against the Kubernetes API server. On top of this, Role-Based Access Control (RBAC) can be put in place to authorise access to resources in the cluster. This is great and provides some level of governance, however there is a caveat: since Kubernetes does not store users or groups, the identities must exist somewhere outside the cluster. If we create external CSRs with identities that don't exist anywhere, it soon becomes very hard to manage them properly, especially if we have to maintain multiple users accessing the cluster. This is yet another area that gets highly simplified when Cloud vendors embrace Kubernetes as a first-class citizen.

In this blog, I am going to show you how to create and manage your identities in OCI IAM, and then simply use those identities to access Oracle Container Engine for Kubernetes (OKE) clusters and authorise access to their resources.

In a nutshell, this is what I am going to do:

  1. Create a new user/group in OCI IAM
  2. Configure an OCI policy that grants my user’s group access to the OKE clusters
  3. Create Roles and Role Bindings (RBAC) in OKE to authorise our user to access OKE resources (see the sketch after this list)
  4. Download a kubeconfig for my new user, using token-based authentication
  5. Use kubectl to access only the granted resources.
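Below is a minimal sketch of what step 3 looks like with the Kubernetes Python client. In OKE, the RBAC subject for an IAM user is the user's OCID; the namespace, role name, rules and the OCID placeholder are illustrative assumptions, not taken from the original post.

```python
# Minimal sketch of step 3 with the Kubernetes Python client. In OKE, the RBAC
# subject for an IAM user is the user's OCID. Names, namespace and rules are
# illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # run with a cluster-admin context
rbac = client.RbacAuthorizationV1Api()

role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-reader", "namespace": "dev"},
    "rules": [{
        "apiGroups": [""],
        "resources": ["pods", "pods/log"],
        "verbs": ["get", "list", "watch"],
    }],
}

role_binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "pod-reader-binding", "namespace": "dev"},
    "subjects": [{
        "kind": "User",
        "apiGroup": "rbac.authorization.k8s.io",
        "name": "ocid1.user.oc1..<unique_id>",  # the IAM user's OCID
    }],
    "roleRef": {
        "kind": "Role",
        "name": "pod-reader",
        "apiGroup": "rbac.authorization.k8s.io",
    },
}

rbac.create_namespaced_role(namespace="dev", body=role)
rbac.create_namespaced_role_binding(namespace="dev", body=role_binding)
```

With this in place, the IAM user can only read pods in the dev namespace; everything else is denied by default.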

Ok, let’s have fun!!!

Continue reading “OCI – Oracle Container Engine for Kubernetes (OKE) – Using RBAC with IAM”

OCI – Oracle Container Engine for Kubernetes (OKE) – Using Client Certificates and RBAC

Kubernetes has proven to be the best way we have today to run microservices deployments, whether via a Serverless approach or by managing your own deployments in the cluster. This has been solidified by the strong adoption of Kubernetes by all the major Cloud vendors as the strategic way to orchestrate containers and run serverless functions.

However, one of the situations we need to be mindful of is that, by default, Kubernetes creates a super powerful user that has full access to almost every resource in the cluster (accessible via kubectl or directly through the APIs). This is very convenient for most dev & test scenarios, but for production workloads it is imperative that we limit such power and use Role-Based Access Control (RBAC), stable since version 1.8, for fine-grained authorisation of access to cluster resources.

For the purpose of this demo, I am assuming some familiarity with Kubernetes and kubectl. I will mainly focus on the Authentication and Authorisation aspects that allow us to use Client certificates to get access to protected resources in a Kubernetes cluster.

In a nutshell this is what I am going to do:

  1. Create and use Client certificates to authenticate into a Kubernetes cluster (see the sketch after this list)
  2. Create Role-Based Access Control (RBAC) rules to authorise fine-grained access to resources in the Kubernetes cluster
  3. Configure kubectl with the new security context, to properly limit access to resources in the Kubernetes cluster.
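As a hedged sketch of step 1 (my own illustration, not code from the original post), the following uses the Python cryptography library to generate a key and an X.509 CSR, and submits it to the cluster as a CertificateSigningRequest. The username "jane", the group "dev-team" and the object names are assumptions.

```python
# Sketch of step 1: generate a key pair and an X.509 CSR for a user ("jane" in
# group "dev-team" are illustrative names), then submit it to the cluster as a
# CertificateSigningRequest for an admin to approve.
import base64

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID
from kubernetes import client, config

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "jane"),            # becomes the Kubernetes username
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "dev-team"),  # becomes a group
    ]))
    .sign(key, hashes.SHA256())
)
csr_pem = csr.public_bytes(serialization.Encoding.PEM)

config.load_kube_config()
certs = client.CertificatesV1Api()
certs.create_certificate_signing_request(body={
    "apiVersion": "certificates.k8s.io/v1",
    "kind": "CertificateSigningRequest",
    "metadata": {"name": "jane-csr"},
    "spec": {
        "request": base64.b64encode(csr_pem).decode(),
        "signerName": "kubernetes.io/kube-apiserver-client",
        "usages": ["client auth"],
    },
})
# An admin then approves it (kubectl certificate approve jane-csr), retrieves
# the signed cert from .status.certificate and adds it to jane's kubeconfig.
```

Once the certificate is issued, the user's CN becomes their Kubernetes username and the O field their group, which is exactly what the RBAC rules in step 2 key on.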

This is a super simplified visual representation:

Continue reading “OCI – Oracle Container Engine for Kubernetes (OKE) – Using Client Certificates and RBAC”

Teaching How to Get Started with Oracle Container Engine for Kubernetes (OKE)

In a previous blog, I explained how to provision a Kubernetes cluster locally on your laptop (either as a single node with minikube or as a multi-node cluster using VirtualBox), as well as remotely in the Oracle Public Cloud IaaS. In this blog, I am going to show you how to get started with Oracle Container Engine for Kubernetes (OKE). OKE is a fully-managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud on Kubernetes.

Continue reading “Teaching How to Get Started with Oracle Container Engine for Kubernetes (OKE)”

Teaching How to Invoke Gen2 Oracle Cloud Infrastructure (OCI) resources via REST APIs

I am thrilled with Oracle’s Gen2 Cloud Infrastructure architecture, where Oracle completely separates the Cloud Control Computers from the User Code, so that no threats can enter from outside the cloud and no threats can spread from one tenant to another.

Obviously, with more security comes more coordination, especially when invoking the OCI resource APIs. Luckily, Oracle did a good job providing an easy-to-use CLI and SDKs (see here for more information).

For the purpose of this blog, I built a simple NodeJS application that helps demystify the security aspect of invoking OCI APIs. Check this link for examples of running similar code across other Programming Languages.

My NodeJS application manages OCI resources in order to:

  • List ADW instances
  • Stop an ADW instance
  • Start an ADW instance

I started this NodeJS application to list, start and stop ADW resources, but I designed it so that it can easily be extended to invoke any other type of OCI resource.

I containerised this application with Docker, to make it easier to ship and run.
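The application itself is NodeJS, but to illustrate the same three operations, here is an equivalent, hedged sketch using the OCI Python SDK, which handles the request signing for you from the standard ~/.oci/config profile; the compartment OCID is a placeholder.

```python
# Equivalent sketch of the three operations using the OCI Python SDK. Request
# signing is handled by the SDK from the ~/.oci/config profile; the compartment
# OCID below is a placeholder.
import oci

config = oci.config.from_file()           # reads ~/.oci/config (tenancy, user, key, fingerprint)
db = oci.database.DatabaseClient(config)

compartment_id = "ocid1.compartment.oc1..<unique_id>"

# 1. List Autonomous Database instances in the compartment
adws = db.list_autonomous_databases(compartment_id=compartment_id).data
for adw in adws:
    print(adw.db_name, adw.lifecycle_state)

# 2. Stop one instance
db.stop_autonomous_database(autonomous_database_id=adws[0].id)

# 3. Start it again
db.start_autonomous_database(autonomous_database_id=adws[0].id)
```

The stop/start calls are asynchronous: the service flips the instance's lifecycle_state, which you can poll until it settles.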

This is a picture of the moving parts:

Continue reading “Teaching How to Invoke Gen2 Oracle Cloud Infrastructure (OCI) resources via REST APIs”

Circuit Breaker in Service Mesh – Istio/Envoy

This lab logically follows the previous steps required to provision and curate a Kubernetes cluster. Please review them before proceeding. If you are in doubt, feel free to contact me directly via https://www.linkedin.com/in/citurria/

Testing BookInfo app with Circuit Breaker based policy

The third and last test in the Service Mesh uses a Circuit Breaker based pattern. It further protects our microservices when certain conditions occur, for example by preventing an unexpected flood of requests from overwhelming the microservices in the service mesh.

We might decide to throttle or simply reject new incoming requests once the number of concurrent HTTP requests reaches a certain threshold.

For demonstration purposes, we are going to set rules that allow a maximum of 1 request at a time. If more than 1 request comes in, we will prevent it from entering the mesh.
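As a hedged illustration of that rule (my own sketch, not the lab's exact manifest), the following Istio DestinationRule caps the connection pool at one connection and one pending request, and is applied through the Kubernetes Python client's generic custom-objects API. The target host ("productpage") and the namespace are assumptions.

```python
# Sketch: an Istio DestinationRule limiting the pool to 1 connection and
# 1 pending request, applied as a custom object. Host and namespace are
# illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

destination_rule = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "DestinationRule",
    "metadata": {"name": "productpage-circuit-breaker", "namespace": "default"},
    "spec": {
        "host": "productpage",
        "trafficPolicy": {
            "connectionPool": {
                "tcp": {"maxConnections": 1},         # at most 1 TCP connection
                "http": {
                    "http1MaxPendingRequests": 1,     # at most 1 queued request
                    "maxRequestsPerConnection": 1,
                },
            },
        },
    },
}

custom.create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1alpha3",
    namespace="default",
    plural="destinationrules",
    body=destination_rule,
)
```

Firing concurrent requests (for example with a load generator such as fortio) should then show the surplus requests being rejected by the sidecar proxies rather than reaching the application.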

Continue reading “Circuit Breaker in Service Mesh – Istio/Envoy”

Service Mesh 101 – Getting familiar with Istio and Envoy

This lab logically follows the previous steps required to provision and curate a Kubernetes cluster. Please review them before proceeding. If you are in doubt, feel free to contact me directly via https://www.linkedin.com/in/citurria/

Introducing Service Mesh

Continue reading “Service Mesh 101 – Getting familiar with Istio and Envoy”

Socks-shop Polyglot App in Kubernetes…

This lab logically follows the previous steps required to provision and curate a Kubernetes cluster. Please review them before proceeding. If you are in doubt, feel free to contact me directly via https://www.linkedin.com/in/citurria/

In this Lab, we will show how to manage a more complex microservices-based application, an e-commerce Socks-shop App (also see here). As with the simpler Cheeses App, we are going to use Weave Scope to gather real-time insight into this more complex application.

This is an e-commerce application that sells socks on the web. However, we chose this application because it is no different from any modern application. That is, it is made up of multiple microservices, each one using different technologies, such as programming languages/frameworks, as well as different persistent back-end stores or databases.

For more information, see: Weave-socks multiple technologies (and github).

Continue reading “Socks-shop Polyglot App in Kubernetes…”

Cheeses App – Self-Healing and Scalability in Kubernetes…

This lab logically follows the previous steps required to provision and curate a Kubernetes cluster. Please review them before proceeding. If you are in doubt, feel free to contact me directly via https://www.linkedin.com/in/citurria/

In this Lab, we will show how to deploy a microservices-based application into a Kubernetes cluster. Then we are going to use an open-source framework called Weave Scope to gather real-time runtime insight into it. We will finish by learning a few tricks to easily manage your microservices.

The application that we are going to deploy is based on “Cheeses”. It is made up of 3 microservices (3 types of cheeses) that, when invoked via an API, simply return their own name (i.e. cheddar, stilton, or wensleydale).
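To give a flavour of the scalability part, here is a small, hedged sketch that scales one of the cheese deployments with the Kubernetes Python client; the deployment name ("cheddar") and the namespace are assumptions for illustration.

```python
# Sketch: scaling one of the cheese deployments with the Kubernetes Python
# client. Deployment name and namespace are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Scale the cheddar deployment to 3 replicas; the controller will also recreate
# any pod that dies, which is the self-healing behaviour shown in this lab.
apps.patch_namespaced_deployment_scale(
    name="cheddar",
    namespace="default",
    body={"spec": {"replicas": 3}},
)
```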

Continue reading “Cheeses App – Self-Healing and Scalability in Kubernetes…”

Kubernetes Dashboard Deep Dive…

In this blog, you will get familiar with the Kubernetes Cluster UI Dashboard and with the various components that are pre-deployed in your sandbox environment.

The Kubernetes Dashboard UI is a web-based interface that lets you visually see all the different components of the Kubernetes cluster, as well as deploy and manage applications via containers running on Pods. It also provides the ability to review the health of the various components and troubleshoot their specifications.

The Kubernetes Dashboard UI comes with a vertical menu. Let’s review the main sections in this menu:

Continue reading “Kubernetes Dashboard Deep Dive…”