Teaching How to Get Started with Oracle Container Engine for Kubernetes (OKE)

In a previous blog, I explained how to provision a Kubernetes cluster locally on your laptop (either as a single node with Minikube or as a multi-node cluster using VirtualBox), as well as remotely in the Oracle Public Cloud IaaS. In this blog, I am going to show you how to get started with Oracle Container Engine for Kubernetes (OKE). OKE is a fully managed, scalable and highly available service that you can use to deploy your containerised applications to the cloud on Kubernetes.

I recommend using OKE when you want to reliably build, deploy and manage cloud-native applications. Oracle takes full responsibility for provisioning the Kubernetes cluster and managing the control plane tier; you simply choose how many Kubernetes worker nodes you want and then deploy your applications there. Oracle manages the full Kubernetes control plane and, best of all, does not charge for it: you only pay for the underlying IaaS resources that you use to run your application.

For the purpose of this demonstration I am going to show how to:

  1. Provision an OKE cluster
  2. Configure kubectl locally, so that you can run commands against your OKE cluster, e.g. to deploy your application (a sketch of these commands follows after this list).
  3. Deploy a microservice into your OKE cluster.

    For the purpose of this demonstration, I am reusing a microservice that I built earlier in a previous blog. It is a containerised NodeJS application called apis4harness that lets you interact with OCI API resources. In particular, it can list, start and stop Oracle Autonomous Data Warehouse (ADW) instances.
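As a preview of steps 2 and 3, here is a minimal sketch of pointing kubectl at an OKE cluster and deploying the microservice. The cluster OCID, region and manifest file name are placeholders, and the exact OCI CLI flags may vary between versions.

```shell
# Generate a kubeconfig entry for the OKE cluster (cluster OCID and region are placeholders).
oci ce cluster create-kubeconfig \
  --cluster-id ocid1.cluster.oc1..aaaa... \
  --file $HOME/.kube/config \
  --region us-ashburn-1

# Verify that the worker nodes are reachable.
kubectl get nodes

# Deploy the containerised microservice (the manifest name is illustrative).
kubectl apply -f apis4harness.yaml
```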

This is a high-level visual representation:

Continue reading “Teaching How to Get Started with Oracle Container Engine for Kubernetes (OKE)”

Teaching How to Invoke Gen2 Oracle Cloud Infrastructure (OCI) resources via REST APIs

I am thrilled with Oracle’s Gen2 Cloud Infrastructure architecture, where Oracle completely separates the Cloud Control Computers from the User Code, so that no threats can enter from outside the cloud and no threats can spread from within tenants.

Obviously, with more security comes more coordination, especially when invoking OCI resource APIs. Luckily, Oracle did a good job of providing an easy-to-use CLI and SDK (see here for more information).

For the purpose of this blog, I built a simple NodeJS application that helps demystify the security aspect of invoking OCI APIs. Check this link for examples of running similar code in other programming languages.

My NodeJS application manages OCI resources in order to:

  • List ADW instances
  • Stop an ADW instance
  • Start an ADW instance

I started this NodeJS application to list, start and stop ADW resources. However, I designed it so that it can easily be extended to invoke any other type of OCI resource.
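To demystify what happens under the covers, the following is a minimal sketch (not the actual apis4harness code) of signing a GET request to the ADW listing endpoint from NodeJS, using the HTTP Signature scheme that OCI expects. The OCIDs, key file, region and endpoint path are placeholders; always check the current OCI API reference for the exact resource paths.

```javascript
const fs = require('fs');
const https = require('https');
const crypto = require('crypto');

// Placeholders: replace with your own tenancy, user, key fingerprint and API key file.
const tenancyId = 'ocid1.tenancy.oc1..aaaa...';
const userId = 'ocid1.user.oc1..aaaa...';
const fingerprint = 'aa:bb:cc:dd:...';
const privateKey = fs.readFileSync('oci_api_key.pem', 'utf8');

const host = 'database.us-ashburn-1.oraclecloud.com';
const path = '/20160918/autonomousDatabases?compartmentId=ocid1.compartment.oc1..aaaa...';

// 1. Build the signing string over the (request-target), host and date headers.
const date = new Date().toUTCString();
const signingString =
  `(request-target): get ${path}\n` +
  `host: ${host}\n` +
  `date: ${date}`;

// 2. Sign it with the API private key (RSA-SHA256, base64 encoded).
const signature = crypto.createSign('RSA-SHA256')
  .update(signingString)
  .sign(privateKey, 'base64');

// 3. Assemble the Authorization header that OCI expects.
const keyId = `${tenancyId}/${userId}/${fingerprint}`;
const authorization =
  `Signature version="1",keyId="${keyId}",algorithm="rsa-sha256",` +
  `headers="(request-target) host date",signature="${signature}"`;

// 4. Issue the request and print the response.
https.get({ host, path, headers: { date: date, authorization: authorization } }, (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => console.log(res.statusCode, body));
});
```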

I containerised this application with Docker to make it easier to ship and run.
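Assuming a standard Dockerfile in the project (the image name and port below are illustrative only), building and running it locally looks like this:

```shell
# Build the image and run the container locally; image name and port are illustrative.
docker build -t apis4harness .
docker run -d -p 3000:3000 --name apis4harness apis4harness
```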

This is a picture of the moving parts:

Continue reading “Teaching How to Invoke Gen2 Oracle Cloud Infrastructure (OCI) resources via REST APIs”

Circuit Breaker in Service Mesh – Istio/Envoy

This Lab logically follows the previous steps required to provision and curate a Kubernetes cluster. Please review them before proceeding. If you are in doubt, feel free to contact me directly via https://www.linkedin.com/in/citurria/

Testing the BookInfo app with a Circuit Breaker-based policy

The third and last test in the Service Mesh uses a Circuit Breaker-based pattern. It further protects our microservices when certain conditions occur, for example by preventing an unexpected flood of requests from overwhelming the microservices in the service mesh.

We might decide to throttle or simply reject new incoming requests when the number of concurrent HTTP requests reaches a certain threshold.

For demonstration purposes, we are going to set rules that allow a maximum of 1 request at a time. If more than 1 request comes in, we will prevent it from entering the mesh.
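For reference, a circuit breaker like this is typically expressed as an Istio DestinationRule. The sketch below targets the BookInfo reviews service and caps the connection pool at a single connection and a single pending request; field names vary across Istio releases, so treat it as illustrative rather than the Lab's exact policy.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-circuit-breaker
spec:
  host: reviews
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1           # at most one TCP connection to the service
      http:
        http1MaxPendingRequests: 1  # at most one queued request
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveErrors: 1          # eject an instance after a single error
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
```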

Continue reading “Circuit Breaker in Service Mesh – Istio/Envoy”

Service Mesh 101 – Getting familiar with Istio and Envoy

This Lab logically follows the previous steps required to provision and curate a Kubernetes cluster. Please review them before proceeding. If you are in doubt, feel free to contact me directly via https://www.linkedin.com/in/citurria/

Introducing Service Mesh

Continue reading “Service Mesh 101 – Getting familiar with Istio and Envoy”

Socks-shop Polyglot App in Kubernetes…

This Lab logically follows the previous steps required to provision and curate a Kubernetes cluster. Please review them before proceeding. If you are in doubt, feel free to contact me directly via https://www.linkedin.com/in/citurria/

In this Lab, we will show how to manage a more complex microservices-based application: an e-commerce Socks-shop app (also see here). As with the simpler Cheeses app, we are going to use Weave Scope to gain real-time insight into this more complex application.

This is an e-commerce application that sells socks on the web. However, we chose it because it is no different from any modern application. That is, it is made up of multiple microservices, each using different technologies, such as programming languages and frameworks, as well as different persistent backing stores or databases.

That is:


For more information, see: Weave-socks multiple technologies (and GitHub).
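As a rough sketch of how the deployment works, applying the application's published Kubernetes manifest is usually all it takes. The namespace and manifest path below assume the layout of the microservices-demo GitHub repository and may differ from what this Lab uses.

```shell
# Create a namespace and deploy all of the Socks-shop microservices from the published manifest.
kubectl create namespace sock-shop
kubectl apply -f https://raw.githubusercontent.com/microservices-demo/microservices-demo/master/deploy/kubernetes/complete-demo.yaml

# Watch the pods come up.
kubectl get pods -n sock-shop -w
```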

Continue reading “Socks-shop Polyglot App in Kubernetes…”

Cheeses App – Self-Healing and Scalability in Kubernetes…

This Lab logically follows the previous steps required to provision and curate a Kubernetes cluster. Please review them before proceeding. If you are in doubt, feel free to contact me directly via https://www.linkedin.com/in/citurria/

In this Lab, we will show how to deploy a microservices-based application into a Kubernetes cluster. Then we are going to use an open-source framework called Weave Scope to gather real-time runtime insight into it. We will finish by learning a few tricks to easily manage your microservices.

The application that we are going to deploy is based on “Cheeses”. It is made of 3 microservices (3 types of cheeses) that, when invoked via an API, simply return their own name (i.e. cheddar, stilton or wensleydale).
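To give a taste of the self-healing and scalability tricks, the commands below are a sketch that assumes the cheddar microservice runs as a Deployment named cheddar with an app=cheddar label; the actual resource names in the Lab may differ.

```shell
# Scale the cheddar microservice out to 3 replicas.
kubectl scale deployment cheddar --replicas=3

# Self-healing: delete one of its pods and watch Kubernetes recreate it
# to keep the desired replica count.
kubectl delete pod <one-of-the-cheddar-pods>
kubectl get pods -l app=cheddar -w
```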

Continue reading “Cheeses App – Self-Healing and Scalability in Kubernetes…”

Kubernetes Dashboard Deep Dive…

In this blog, you will get familiar with the Kubernetes Cluster UI Dashboard and with the various components that are pre-deployed in your sandbox environment.

The Kubernetes Dashboard UI is a web-based interface that lets you visually see all the different components of the Kubernetes cluster, as well as deploy and manage applications via containers running in Pods. It also gives you the ability to review the health of the various components and troubleshoot their specifications.
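If your sandbox does not already expose the Dashboard, a common way to reach it is through kubectl proxy. The service name and namespace below assume the default Dashboard deployment in kube-system and may differ in your cluster.

```shell
# Start a local proxy to the Kubernetes API server.
kubectl proxy

# Then open the Dashboard in a browser (namespace and service name vary by installation):
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
```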

The Kubernetes Dashboard UI comes with a vertical menu. Let’s review the main sections in this menu:

Continue reading “Kubernetes Dashboard Deep Dive…”