AUSOUG Connect 2018 – Talking Dev

In November 2018, I had the privilege of attending the Australian Oracle User Group national conference “#AUSOUG Connect” in Melbourne. My role was to conduct video interviews with as many of the speakers and exhibitors at the conference as possible. Overall: 10 interviews over the course of the day, 90 minutes of raw footage, 34 short clips to share, and plenty of hours reviewing and post-editing to capture the best parts.

Continue reading “AUSOUG Connect 2018 – Talking Dev”

Teaching How to Get Started with Oracle Container Engine for Kubernetes (OKE)

In a previous blog, I explained how to provision a Kubernetes cluster locally on your laptop (either as a single node with minikube, or as a multi-node cluster using VirtualBox), as well as remotely on Oracle Public Cloud IaaS. In this blog, I am going to show you how to get started with Oracle Container Engine for Kubernetes (OKE). OKE is a fully-managed, scalable, and highly available service that you can use to deploy your containerised applications to the cloud on Kubernetes.

I recommend using OKE when you want to reliably build, deploy and manage cloud-native applications. Oracle takes full responsibility for provisioning the Kubernetes cluster and managing the control plane; you simply choose how many Kubernetes worker nodes you want and then deploy your Kubernetes applications there. Oracle manages the full Kubernetes control plane and, wait, the best part is that Oracle does not charge for it: you just pay for the underlying IaaS that you use to run your applications.

For the purpose of this demonstration I am going to show how to:

  1. Provision an OKE cluster
  2. Configure kubectl locally, so that you can run commands against your OKE cluster, e.g. deploy your application.
  3. Finally, I am going to show you how to deploy a microservice into your OKE cluster.

    For the purpose of this demonstration, I built a microservice earlier in a previous blog. It is a containerised NodeJS application called apis4harness that allows you to interact with OCI API resources. In particular, it allows you to list, start and stop Oracle Autonomous Data Warehouse (ADW) instances.

This is a high-level visual representation:

Before we start

In this blog I assume the following:

  • You have an Oracle Cloud account; if not, request a free trial here: https://cloud.oracle.com/tryit
  • You are a bit familiar with Docker. At RedThunder we have written plenty of Docker blogs that will quickly get you familiar with it.
  • You have a Docker Hub account. If not, create one.
  • You are familiar with Vagrant. If not, read this blog.

Ok then, let’s have fun!

Provision OKE Cluster

Nowadays, we rarely do things manually. We always want to automate most tasks, especially when provisioning environments, and provisioning OKE clusters is no exception. I recommend using the Terraform Oracle Cloud Infrastructure Provider to version-control the provisioning configuration of your Kubernetes cluster, together with the networking configuration, compute, etc.

How to use Terraform to automate the provisioning of OKE clusters is a topic we are saving for another blog. For the purpose of this blog, I want to show how simple it is to spin up a new OKE cluster with just a few clicks, without involving Terraform.

  • Log in to the OCI Console.

  • Click on the top left burger menu > Developer Services > Container Clusters (OKE)

  • Click on Create Cluster button.

  • Configure it:
    • Give it a good name
    • Kubernetes version: Choose the latest version. E.g. v1.11.1
    • Select “Quick Create” for now – by default a new Virtual Cloud Network (VCN) will be created, including 2 subnets for LBaaS and 3 worker-node subnets.
    • Create Node Pool: Select a “shape” that works for you. See info about available shapes here.
    • Type how many nodes you want to have per subnet.
    • Public SSH Key: Paste your Public SSH key.
    • Leave Kubernetes dashboard and Tiller (Helm) enabled

  • When done, click Create.
  • Within minutes, you will have a working Kubernetes cluster ready to go! Simple, huh?

  • Click on it and get familiar with its worker nodes, IP addresses, subnets, etc.

  • You can SSH into the worker nodes using the private key that matches the public SSH key you provided. Also make sure to observe the networking configuration that was created automatically. For example, click on Menu > Networking > Virtual Cloud Networks.
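
  • For example, a minimal SSH check from your laptop could look like the command below. This assumes the worker nodes got public IP addresses (the Quick Create defaults) and that they accept the opc user, which is the default on Oracle Linux compute images; adjust the private key path to wherever you keep the key matching the public SSH key you pasted earlier.

    ssh -i ~/.ssh/id_rsa opc@[worker-node-public-ip]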

Congratulations, your Kubernetes cluster is ready. In the next section, we will install kubectl, which is the CLI for running commands against Kubernetes clusters.

Install Kubectl

In this section, we are going to install kubectl and configure it to point to your Kubernetes cluster, so that we can interact with it.

I am trying to summarise and simplify the steps here, but if you need more information at any point in time, refer to the official documentation.

First, we need to choose a platform to run kubectl. This is the place from which you are going to run CLI commands against your Kubernetes cluster, for example to deploy your microservices or in general to monitor and manage the state of your cluster. Make sure you satisfy these minimum requirements.

Normally, I build this platform as a Build Server VM, also in the cloud, so that it can be shared among my fellow DevOps colleagues. In previous blogs, I have explained how to achieve this. However, this time, for simplicity, your build server can also be a Vagrant VM running on your laptop.

For the purpose of this demonstration, I have prepared a Vagrant box in the same APIs 4 Harness project.

  • This will simplify the steps.
    • Clone my APIs 4 Harness repository

    git clone https://github.com/solutionsanz/apis4harness

    • Move into the apis4harness directory:

    cd apis4harness

    • Now, start your vagrant box:

    vagrant up

    Note: Give it some time the first time. It will download the Ubuntu Box and install all dependencies. Subsequent times will be much faster.

    • Once it finishes, as per the bootstrap process, your Vagrant VM is going to come with all necessary components installed, like Docker Engine, so that you can build your containerised app as a Docker image.
    • Vagrant ssh into it.

    vagrant ssh

    • Move into your host auto-mounted working directory.

    cd /vagrant

  • Now, let’s install OCI CLI, so that we can grab the OKE cluster Kubeconfig easily:
bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
  • Leave all defaults and respond with Y to all requests to install extra libraries (e.g. Python), modify the PATH or update the CLI, in case you had installed an older version of OCI CLI before.
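
  • Optionally, run a quick sanity check to confirm the CLI landed on your PATH before configuring it (the exact version number reported will vary):

    oci --version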

  • Before using the CLI, you must create a config file that contains the required credentials for working with Oracle Cloud Infrastructure:

    Start a new shell so that the oci command is picked up from your updated PATH, and then run the setup:

    bash

    oci setup config

Note: The command prompts you for the information required for the config file and the API public/private keys. For simplicity purposes, let the setup dialog generate an API key pair and create the config file.

  • Be ready to enter your user OCID (an admin/manager user) and tenancy OCID, and leave the rest as default values. If you need help finding these parameters, read this reference or feel free to drop me a question via LinkedIn.

  • By default, it will write the config file to /home/vagrant/.oci/config
  • Since we let it generate a new key pair, we need to upload the generated public key to that same user in the OCI console. The keys are written by default under: /home/vagrant/.oci
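  • A quick way to double-check the setup is to print the generated public key (assuming you kept the default file name, oci_api_key_public.pem) so you can copy-paste it into the console, and then call a simple read-only API to confirm the credentials work:

    cat /home/vagrant/.oci/oci_api_key_public.pem

    oci os ns get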
  • Now let’s retrieve the kubeconfig for our OKE cluster. To do so, run the following commands:

mkdir -p $HOME/.kube

oci ce cluster create-kubeconfig --cluster-id [OKE-Cluster_OCID] --file $HOME/.kube/config

You can get your OKE cluster OCID from within the console easily:

  • It will create the kubectl config file under $HOME/.kube
  • Now, install kubectl (see here for more information):
    sudo apt-get update && sudo apt-get install -y apt-transport-https
    
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    
    echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
    
    sudo apt-get update
    
    sudo apt-get install kubectl
    

Note: If prompted, accept the installation.

  • That’s it: kubectl is properly installed and configured to point to your OKE cluster, even though you are running it locally in a Vagrant box on your laptop and your OKE cluster is somewhere around the world, miles away.
  • Test kubectl by retrieving the version of your Kubernetes cluster. Also retrieve all running services:

kubectl version

kubectl get services --all-namespaces

  • If you are using RBAC, you need to run the following command to grant enough privileges to your OCI admin user (the one whose public key you uploaded earlier):

kubectl create clusterrolebinding owner-cluster-admin-binding --clusterrole cluster-admin --user=[ENTER_YOUR_USER_OCID]

    Note: Your [ENTER_YOUR_USER_OCID] can be retrieved from the OCI console (Menu > Identity > Users)
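
    Note: If you prefer the CLI over the console, listing the users in your tenancy also shows their OCIDs (here [Tenancy_OCID] is the same tenancy OCID you entered during oci setup config):

    oci iam user list --compartment-id [Tenancy_OCID]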

  • I am going to use Traefik as an Ingress Controller in my Kubernetes cluster, so let’s install Traefik Load Balancer Controller and Traefik agent (to see the Pods in Weave Scope)

kubectl create namespace traefik-cheese

kubectl create -f /vagrant/deploy/kubernetes/traefik/traefik-complete-demo_pub.yaml

  • Traefik controller and Traefik agent are installed now. Verify by retrieving all services and pods:

kubectl get services --all-namespaces

kubectl get pods --all-namespaces

  • Also, verify that you have a new LBaaS in your OCI console (Menu > Networking > Load Balancers):

Congratulations, your OKE cluster is ready to run your microservices and expose them via LBaaS and the Traefik ingress controller.

Deploy your microservice and expose it via LBaaS

At this point you can deploy any microservices into your OKE cluster. Let me show you the steps to deploy my APIs 4 Harness Application.

The “APIs 4 Harness” application contains the Dockerfile required to containerise our sample microservice.

  • If you are not already there, go back to the same Vagrant box SSH window where we previously installed kubectl.
  • Change directory to /vagrant:

    cd /vagrant

Before deploying the APIs 4 Harness application, let’s analyse the Dockerfile.

  • Feel free to explore the Dockerfile. It does the following:

  • Line 1: Starts from the existing Docker Hub node image, version 8.11.4.
  • Line 3: Updates the Ubuntu libraries.
  • Line 8: Sets the working directory within the new Docker node image (creating and changing the current directory).
  • Line 9: Adds all the content/files of the local directory (i.e. the “APIs 4 Harness” App) into the working directory.
  • Line 13: Runs “npm install” to retrieve all the “APIs 4 Harness” NodeJS App dependencies (e.g. express and the other modules listed in package.json).
  • Line 14: Declares the port on which the “APIs 4 Harness” App is configured to listen.
  • Line 15: Sets the command to run when the image is run, in this case starting the “APIs 4 Harness” NodeJS App (as indicated in package.json).
  • As for the “APIs 4 Harness” NodeJS app, I tried to keep it extremely simple. It exposes the following APIs:

    Note: The actual code is at router > routes > services.js

    • Get all existing ADW instances:
      • GET: /services/adw
    • Get an existing ADW instance by ID:
      • GET: /services/adw/{ocid}
    • Start an ADW instance by ID:
      • POST: /services/adw/{ocid}?action=start
    • Stop an ADW instance by ID:
      • POST: /services/adw/{ocid}?action=stop
  • The other file that you might want to have a look at is the NodeJS descriptor, package.json

It is quite self-explanatory, but just pay close attention to:

  • dependencies -> body-parser, express, js-yaml, http-signature, jssha – This is what is executed at “docker build” time, as defined in the Dockerfile (RUN npm install).
  • scripts -> Start: node app.js – This is what will be executed at “docker run” time as defined in the Dockerfile (CMD npm start).
First, let’s test your Application locally

Before jumping and deploying your Application into Kubernetes, it is a good idea to run and test your application locally, just using Docker run.

  • Create a directory called ssh

    mkdir /vagrant/ssh

  • Place your private key inside it. You will need to reference this private key in the next steps. By default I called it id_rsa_pri.pem – you can change the name accordingly.
  • Use setEnv_template as a reference and create a new file called setEnv. In there, set the properties of your OCI environment. If you need help finding the parameters, read this reference or feel free to drop me a question via LinkedIn.

    Note: Remember that the public key fingerprint is generated when you upload the PEM public key to the user that you wish to use to invoke the OCI APIs.

  • Ok, now that everything is clear, let’s build our Docker image. Since we already added the ubuntu user to the docker group during the bootstrap of this Vagrant box, let’s switch to the ubuntu user.

    sudo su ubuntu

  • Build the docker image:

    docker build .

    Note: Notice the last dot “.”

  • Give it some time the first time, as it has to pull the node image from Docker Hub first (~200MB).

  • As the Docker build process moves across the steps, you will be able to see the progress in the console.

    At the end it will show you the id of your final Docker image. Make a note of it, as you will need it later when tagging your image.

  • Let’s quickly test that our new Docker image works well. For this let’s run the image using its id as a reference. The command goes like this:

docker run --env-file setEnv -p [HostPort]:[ContainerPORT] -it [DockerImageId]

Note: -it runs the container in interactive mode with a terminal attached, which means that you can stop it later with Ctrl+C.

For example:

docker run --env-file setEnv -p 3000:3000 -it c26c58862548

This will run a Docker container from our Docker image and start the “APIs 4 Harness” NodeJS App. It maps port 3000 inside the container to port 3000 on our host.
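
Once the container is up, you can exercise the endpoints from another terminal, either inside the Vagrant box or from your laptop thanks to the port forwarding described below. A minimal smoke test could look like this, where [ADW_OCID] is the OCID of one of your own ADW instances:

curl http://localhost:3000/services/adw

curl http://localhost:3000/services/adw/[ADW_OCID]

curl -X POST "http://localhost:3000/services/adw/[ADW_OCID]?action=start"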

  • The provided Vagrant box is configured by default with NAT and Port-Forwarding on port 3000:3000, so you can open a browser on your host machine and go to localhost:3000 – You should be able to see the “APIs 4 Harness” Swagger UI.

  • Feel free to play with the APIs to confirm that you can list all ADW instances, as well as stop and start individual ADW instances.
  • Now that we know that our Docker image works as intended, let’s move on to the next section to push it into Docker Hub

Push your APIs 4 Harness App Docker Image to Docker Hub

Now that we have created our Docker image and briefly tested it, let’s proceed to push it into an image registry. We could use OCI-R, which comes included with OKE, but for now let’s use a Docker Hub repository. For this, I assume that you already have a Docker Hub account and that you have created a repository. For example, I created one called apis4harness – notice that Docker Hub repos are always prefixed with your Docker Hub username, so you might choose the same name if you like.

  • Go back to your terminal window where you built your Docker image using the ubuntu user. Ctrl + C in case you are still running the container from last section.
  • In the terminal, first we need to set the Docker Hub login details.

    docker login

    Then enter your username and password when requested.

  • Tag the Docker image:

docker tag [Image_ID] [DockerHubUsername]/[DockerHubRepoName]

For example:

docker tag c26c58862548 cciturria/apis4harness:1.0

Note: You could also have tagged your Docker image at “docker build” time by using -t [DockerHubUsername]/[DockerHubRepoName]:[tag]
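
For example, a combined build-and-tag step (using my repository name from above; substitute your own Docker Hub username, repo and tag) would look like:

docker build -t cciturria/apis4harness:1.0 .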

  • If you can’t remember your Docker image ID, you can type docker images
  • Then finally, Docker push the image:

docker push [DockerHubUsername]/[DockerHubRepoName]

E.g.

docker push cciturria/apis4harness:1.0

  • Give it some time as it uploads your compressed docker image into your specified Docker Hub repository.

  • After a few minutes, your docker image will appear in your Docker Hub specified repo.

Run your APIs 4 Harness App Docker Image in Kubernetes

Once your docker image is in a Docker repository, like OCI-R or Docker Hub, we can easily pull it and run it on Kubernetes.

Applications in Kubernetes run within the concept of “pods”, which are logical runtime groupings of the Docker containers that make up a whole application. In our case, the “APIs 4 Harness NodeJS App” will run as just one Docker container. Pods are defined in YAML files.
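
If you are curious about the schema behind those YAML definitions, kubectl can print the documented fields of any resource type straight from the cluster, for example:

kubectl explain pod

kubectl explain pod.spec.containers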

  • Go back to your Vagrant Box, if not already there.
  • Important: If you are still under user ubuntu, switch back to user vagrant

sudo su vagrant

  • Make sure your Kubernetes cluster is up and running with at least 1 worker node (I have 2 in this case):

    kubectl get nodes

  • Move to the /vagrant/deploy directory:

cd /vagrant/deploy

  • Inside, there is a script called “deploy.sh” that:
    • Creates an “apis4harness” namespace.
    • Creates the “APIs 4 Harness” Kubernetes Application deployment.
    • Creates the “APIs 4 Harness” Kubernetes Service.
    • Creates the “APIs 4 Harness” Kubernetes Ingress (leveraging the Traefik ingress controller that we deployed previously).
  • Before executing the deploy.sh script, you need to set the Application environment properties. For this, use the template /vagrant/deploy/kubernetes/apis4harness-dpl.yaml_sample to create a new file /vagrant/deploy/kubernetes/apis4harness-dpl.yaml – In this file, at the end:
    • Set the Docker image tag name (e.g. XXX/apis4harness:1.0)
    • Set all the OCI properties that you used in setEnv while testing the microservice locally with Docker run.

  • Now, let’s deploy APIs 4 Harness Application resources (deployment, service, ingress)

/vagrant/deploy/deploy.sh

  • If you don’t get errors, it is a good sign. Validate the status of your Services:

kubectl get services --all-namespaces


  • Also, feel free to get the pods to see if they are ready or still being created. You can filter by namespace:

kubectl get pods --namespace=apis4harness

Note: The first time you deploy it, it will take a bit longer, as the Docker image has to be downloaded from the Internet. Give it a minute or two and the Pods should be running.
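
Note: If a Pod appears to be stuck or failing, the usual kubectl troubleshooting commands help to diagnose it (substitute the Pod name reported by the previous command):

kubectl describe pod [POD_NAME] --namespace=apis4harness

kubectl logs [POD_NAME] --namespace=apis4harness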

  • At this point your microservice is up and running.
  • Test your APIs by pointing at your LBaaS. The external IP address can be retrieved from the OCI Console (Menu > Networking > Load Balancers).
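
  • Alternatively, kubectl can show you the same external IP address: the LoadBalancer-type service created for the Traefik ingress controller reports it in its EXTERNAL-IP column (assuming the manifest we applied earlier placed it in the traefik-cheese namespace; otherwise check all namespaces). Depending on how the Ingress rules are defined, you may also need to pass the configured host name as a Host header when calling the APIs:

kubectl get services --namespace=traefik-cheese

curl http://[LBaaS-External-IP]/services/adw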

Congratulations!!! Your APIs 4 Harness Application is up and running on Kubernetes. You now also know the steps to move your own microservices into an OKE cluster.

I hope you found this blog useful. If you have any question or comment, feel free to contact me directly at https://www.linkedin.com/in/citurria/

Thanks for your time.

 

Teaching How to Provision Oracle Integration Cloud (OIC) with Cloud Stack and Terraform

We have covered multiple blogs on how to use Terraform to help automate the provisioning of environments and treat your Infrastructure as Code. Until now, for PaaS stacks, we have used Terraform together with the Oracle PaaS Service Manager (PSM) CLI. This gives us great flexibility to script our own tailored PaaS stacks the way we want them. However, with flexibility comes responsibility, and in this case, if we choose to use PSM CLI, it is up to us to script the whole provisioning/decommissioning of the components that make up the stack, as well as what to do if we encounter an error half-way through, so that we leave things in a consistent state.

A simpler way to provision PaaS stacks is by making use of Oracle Cloud Stack, which treats all components of the stack as a single unit, where all sub-components are provisioned/decommissioned transparently for us. For example, the Oracle Integration Cloud (OIC) stack is made up of Oracle DB Cloud Service (DBCS), Integration Cloud Service (ICS), Process Cloud Service (PCS), Visual Builder Cloud Service (VBCS), IaaS, storage, network, etc. If we use Oracle Cloud Stack to provision an environment, we only have to pass a YAML template with the configuration of the whole stack and then Cloud Stack handles the rest. Pretty awesome, huh?

Similarly to what we have done in the past, we are going to use a “Build Server”. This will be used as a platform to help us provision our PaaS stacks. When provisioning this “Build Server”, I will add all the tooling it requires as part of its bootstrap process. For this, I am using Vagrant + Terraform, so that I can also treat my “Build Server” as infrastructure as code and easily get rid of it after I have built my target PaaS stack.

This is a graphical view of what I will be doing in this blog to provision an OIC stack via Cloud Stack:

Continue reading “Teaching How to Provision Oracle Integration Cloud (OIC) with Cloud Stack and Terraform”

Teaching How to Get started with Kubernetes deploying a Hello World App

In a previous blog, I explained how to provision a new Kubernetes environment locally on physical or virtual machines, as well as remotely in the Oracle Public Cloud. In this workshop, I am going to show how to get started by deploying and running a Hello World NodeJS application into it.

There are a few moving parts involved in this exercise:

  • Using an Ubuntu Vagrant box, I’ll ask you to git clone a “Hello World NodeJS App”. It will come with its Dockerfile to be easily imaged/containerised.
  • Then, you will Docker build your app and push the image into Docker Hub.
  • Finally, I’ll ask you to go into your Kubernetes cluster, git clone a repo with a sample Pod definition and run it on your Kubernetes cluster.

Continue reading “Teaching How to Get started with Kubernetes deploying a Hello World App”

Teaching How to quickly provision a Dev Kubernetes Environment locally or in Oracle Cloud

 

This time last year, people were excited talking about technologies such as Mesos or Docker Swarm to orchestrate their Docker containers. Nowadays (April 2018), almost everybody is talking about Kubernetes instead. This proves how quickly technology is moving, but it also shows that Kubernetes has been endorsed and backed by the cloud giants, including AWS, Oracle, Azure and (obviously) Google.

At this point, I don’t see Kubernetes going anywhere in the coming years. On the contrary, I strongly believe that it is going to become the default way to dockerise environments, especially now that it is becoming a PaaS offering with different cloud providers, e.g. Oracle Containers. This gives it the extra push to operate in enterprise mission-critical solutions, with the backing of a big cloud vendor.

So, if you have not yet got familiar with Kubernetes, you had better do so, and quickly. In this blog I am going to show you how to get started with a fully functional Kubernetes dev environment that will let you start playing with it. In future blogs I am going to explain different use cases with Kubernetes, mainly around the 12-factor principles for microservices, e.g. deploying applications with concurrency, managing load balancers, managing replication controllers, scalability, managing state across container restarts, etc. But let’s start with the first and most important 12-factor principle: “Disposability”.

 

In this blog, you don’t have to install Kubernetes manually if you don’t want to. I am going to explain 3 different ways in which you can get started with a Kubernetes Dev environment ready to go:

  • Option 1: Automate the provisioning of Kubernetes locally on a VM (using Vagrant Box).
  • Option 2: Automate the provisioning of Kubernetes in the Oracle Public Cloud (using Terraform).
  • Option 3: Manually install Kubernetes locally on an existing environment (using minikube) – this can be your own host machine, a local VM, IaaS, etc.

Obviously, options 1 and 2 will simplify the whole process and will give you the ability to treat your Kubernetes environment “as code”, so that you can destroy/recreate it quickly and easily – and that’s what “Disposability” is all about. Option 3 is better suited if you want to learn how to install Kubernetes on an existing environment, for example your own laptop or an existing VM.

Let’s get our hands dirty!

Before we start

 

In this blog I assume the following:

  • You are familiar with Vagrant. If not, read this blog. It will take you 5-very-well-spent minutes.
  • Also, I assume that you are familiar with Terraform. If not, read this other blog that I wrote some time ago. It explains Terraform in detail.
  • You need an Oracle Cloud account; if not, request a free trial here: https://cloud.oracle.com/tryit

 

Option 1: Automate the provisioning of Kubernetes locally on a VirtualBox VM (using Vagrant)

 

Spin up Vagrant VM

 

For this exercise, I am going to use an Oracle-managed Vagrant box that, luckily for us, has the ability to run nested virtualisation locally. Yes, you read that right. Thanks to Oracle, we can now easily run a local HA Kubernetes cluster inside VirtualBox VMs!!!

 

It will create a topology with 3 VMs.

  • 1 Master node
  • 2 Worker Nodes

 

The GitHub repository is https://github.com/oracle/vagrant-boxes/tree/master/Kubernetes and it has a very comprehensive Readme file, but below I am writing a quick summary to get up and running.

 

Note: I assume that you have already:

  1. Installed Git
  2. Installed Oracle VM VirtualBox and Vagrant.
  3. You have an existing Oracle Container Registry account. Then, sign in to Oracle Container Registry and accept the Oracle Standard T&Cs for the Oracle Container Registry.

  1. After Accepting, it will show something like:

 

 

  • Clone Oracle’s Kubernetes Vagrant box repository:

     

    git clone https://github.com/oracle/vagrant-boxes

     

  • Move into the vagrant-boxes/Kubernetes directory:

     

    cd vagrant-boxes/Kubernetes

     

  • Now, start your vagrant box:

     

    vagrant up master

     

    Note: Give it some time the first time. It will download the Vagrant Box and install all dependencies. Subsequent times will be much faster.


 

  • Vagrant ssh into it.

     

    vagrant ssh

 


 

  • Set up your master node. For this, within the master guest VM, run the following as root:

    /vagrant/scripts/kubeadm-setup-master.sh

     

    You will be asked to log in to the Oracle Container Registry. Use the same account from which you accepted the T&C’s already.

     

    Note: If you get an error like this one:

 

    It means that you have not yet accepted the T&C’s for the Oracle Container Registry!

 

  • Once you run this script you will see a message asking you to be patient. They mean it! For me, it took around 30 minutes to download the associated Docker images and configure them.

 

 

  • Setting up your Master node should succeed with a message like the following:

     

 

  • Now, back on your host machine, open another terminal window and start the first worker node:

     

    vagrant up worker1


  • Once it’s started, vagrant ssh into it:

     

    vagrant ssh worker1


  • Set up your worker1 node. For this, within the worker1 guest VM, run the following as root:

     

    /vagrant/scripts/kubeadm-setup-worker.sh

    

Once again, you will be asked to log in to the Oracle Container Registry. Use the same account from which you accepted the T&C’s previously.

 

  • Setting up your worker node 1 should succeed with a message like the following:

     

  • Finally, let’s set up the 3rd and last VM, in this case the second worker node. Go back to your host machine, open a 3rd terminal window and start the second worker node:

     

    vagrant up worker2

     

  • Once it’s started, vagrant ssh into it:

     

    vagrant ssh worker2

  • Set up your worker2 node. For this, within the worker2 guest VM, run the following as root:

     

    /vagrant/scripts/kubeadm-setup-worker.sh

    

Once again, you will be asked to log in to the Oracle Container Registry. Use the same account from which you accepted the T&C’s previously.

 

  • Setting up your Worker node 2 should succeed with a message like the following:

 

 

  • Congratulations, your Kubernetes cluster with 1 master node and 2 worker nodes is ready to go.

     

     

     

  • Test your Kubernetes cluster. For this, within the master node/VM (not as root, but back as the vagrant user), try the following commands:

     

    kubectl cluster-info

    kubectl get nodes

    kubectl get pods --namespace=kube-system


 

For more information, please refer to the original Oracle Git repo readme file.

 

 

Option 2: Automate the provisioning of Kubernetes in the Oracle Public Cloud (using Terraform)

 

For this option, we are going to use Terraform to provision compute (IaaS) with Kubernetes on the Oracle Public Cloud. If you have already installed Terraform locally or are using a build server with Terraform installed, feel free to skip the next section. Otherwise, if you don’t have Terraform installed and don’t want to install it yourself, you can use a Vagrant box that I already put together.

 

If you want to use a Vagrant Box that I previously put together that auto-installs Terraform:

 

  • Clone my devops repository:

     

    git clone https://github.com/solutionsanz/devops

     

  • Move into the KubernetesDevEnv directory:

     

    cd devops/KubernetesDevEnv

     

  • Now, start your vagrant box:

     

    vagrant up

     

    Note: Give it some time the first time. It will download the Ubuntu Box and install all dependencies. Subsequent times will be much faster.

  • Once it finishes, as per the bootstrap process, your Vagrant VM is going to come with Terraform installed and ready to go.
  • Vagrant ssh into it.

     

    vagrant ssh

     

  • Validate that Terraform is installed properly:

     

    terraform --version

     

  • Now, we are going to use Terraform to install and configure Kubernetes on Oracle Compute Cloud. For this, we are going to git clone another repository that Cameron Senese has put together for this exact purpose.

     

  • First, register an account at the Oracle Container Registry (OCR). Be sure to accept the Oracle Standard Terms and Restrictions after registering with the OCR. The installer will request your OCR credentials at build time. Registration with the OCR is a dependency for the installer to be able to download the containers which will be used to assemble the K8s control plane.

     

  • Git clone Cam’s repo:

     

    git clone https://github.com/cameronsenese/opc-terraform-kubernetes-installer

     

  • Move to the opc-terraform-kubernetes-installer directory

     

    cd opc-terraform-kubernetes-installer

     

  • Initialise Terraform:

     

    terraform init

     

  • Apply the Terraform plan:

    terraform apply

     

  • At this point the configuration will prompt for target environment inputs. Please refer to the original Cameron Senese Git repo readme file if any of them is unfamiliar to you.

     

  • Depending on the number of modules selected to be installed, the whole provisioning might take up to 15 minutes to complete.

     

  • You can SSH into your new VM by using the sample private key provided by default under the ssh folder.

     

  • Once inside the new VM, make sure the cluster is running properly by trying the following commands:

     

    kubectl cluster-info

    kubectl get nodes

     

 

Option 3: Manually Install and configure Kubernetes (with minikube) on an existing environment

 

Install Kubectl Manually (Optional)

 

For more information on installing Kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl
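
One common way to install it on a Linux box, based on the documentation linked above, is to download the latest stable binary and place it on your PATH (treat this as a sketch and check the official page for your OS and the current download location):

    curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

    chmod +x ./kubectl

    sudo mv ./kubectl /usr/local/bin/kubectl

    kubectl version --client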

 

Install Minikube Manually (Optional)
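
A minimal sketch for a Linux host, based on the official minikube releases (check the minikube documentation for your platform and preferred VM driver; on a machine without a hypervisor you may need to run it as root with --vm-driver=none):

    curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

    chmod +x minikube

    sudo mv minikube /usr/local/bin/

    minikube start

    kubectl get nodes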

 

 

  • If you wish to stop your Kubernetes cluster:

     

    minikube stop

Congratulations!!! Regardless of the approach that you took, your Kubernetes Development environment is ready to go.

If you want to also learn how to use it, read this other blog that shows you the basics of Kubernetes by deploying a HelloWorld Application.

 

Curating your Kubernetes cluster with other Cloud Native Technologies (Optional)

 

Recently, the CNCF has been pushing various cloud-native projects that are building great momentum: technologies such as service mesh (Istio, Envoy), Grafana, Prometheus, Zipkin, etc.

For this reason, and in alignment with a recent Kubernetes workshop that we delivered in Australia and NZ, we provide a quick way to curate a local Kubernetes cluster, assuming you chose option 1 or 3 from this blog (kubeadm on VirtualBox VMs, or minikube on a physical host).

Note: If you chose option 2, i.e. using Terraform to provision your Kubernetes cluster, then you don’t have to follow these steps. Your Kubernetes cluster is already curated with all these open source technologies.

Once you have your Kubernetes cluster up and running, follow the next steps to deploy these open source technologies.

  • Go to the master node or wherever you have kubectl installed.
  • Git clone the following repo:

     

    git clone https://github.com/solutionsanz/opc-terraform-kubernetes-installer

     

    Note: If git is not yet installed, do a quick: sudo [yum|apt-get] install git -y

     

  • Change directory inside the repo

     

    cd opc-terraform-kubernetes-installer

     

  • Set execution privileges to curating script:

     

    chmod 755 curate-k8s.sh

     

  • Run the script by specifying true or false for each of the following open source components, depending on whether you want to install them:

     

    Order: [MonitoringDashboards SocksShopDemo CheesesDemo ServiceMeshDemo]

     

    E.g.

     

    ./curate-k8s.sh true true true true

     

  • After the script execution, you should be able to see all specified services running inside your Kubernetes cluster

     

    kubectl get services --all-namespaces

Notice that we have a variety of Service types:

  • ClusterIP: These services allow intra pod/container communication inside the cluster.
  • NodePort: These services come with ClusterIP plus an assigned port on each Worker Node for external consumption
  • LoadBalancer: If running on a Cloud provider, an external Load Balancer as a Service (LBaaS) will be mapped to this service

Ideally, at this point you would configure Ingress resources for all the services that you wish to expose outside the Kubernetes cluster, fronted by an LBaaS for easy external consumption when using a cloud vendor.

For dev purposes working locally on a Mac or PC, if you are running this Kubernetes cluster as option 1 (Vagrant-based VirtualBox VMs), you might need to open up the assigned ports for those NodePort/Ingress services.

For example, in the image above: Traefik-ingress-service mapped to port 80 on 10.103.63.35

However, another way to quickly “hack” access into your internal services, even those of type ClusterIP, is to establish SSH tunnels that redirect traffic from the host machine to the internal IP addresses and ports of the specific services.
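
Note: If you have kubectl configured on your host machine with access to the cluster, kubectl port-forward is a lighter-weight alternative to an SSH tunnel; the namespace, service name and ports below are placeholders for whichever internal service you want to reach:

    kubectl port-forward --namespace=[NAMESPACE] svc/[SERVICE_NAME] 8080:80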

 

For example, let’s assume that we want to access the WeaveScope dashboard UI. Based on the configuration described above, I can:

 

 

  • If using Linux/MacOS, you can simply setup an SSH tunnel from within a terminal in your host:

     

    ssh -L 180:10.108.222.2:80 vagrant@127.0.0.1 -p [2200|2201] -i private-key

     

    Let’s analyse the previous command:

     

    • 180: A port I chose on my host machine, through which all traffic will be SSH-tunnelled.
    • 10.108.222.2:80: It is the internal IP Address and Port on which I want to route all traffic. In this case it is the endpoint for the WeaveScope Dashboard UI.
    • vagrant@127.0.0.1 -p 2200: Here I need to target either of the 2 worker nodes. If using the VirtualBox VMs, there is out-of-the-box port forwarding already in place that maps 2200 to 22 for worker node 1 and 2201 to 22 for worker node 2. Either one will work, as the Kubernetes services are network constructs present on each of the worker nodes.
    • -i private-key: It is referencing the SSH Private key (located under $GIT_REPO_LOCATION/vagrant-boxes/Kubernetes/.vagrant/machines/worker1/virtualbox/private_key)

     

  • If using PuTTY, first you would need to convert the Vagrant SSH private key into PPK (you can find the private key in your host machine under: $GIT_REPO_LOCATION/vagrant-boxes/Kubernetes/.vagrant/machines/worker1/virtualbox/private_key).

     

    Then you would need to establish a tunnel configuration in your PuTTY session:

        Finally, open a browser window on your host machine and go to localhost:[YOUR_CHOSEN_PORT], e.g. localhost:180

 

 

I hope you found this blog useful. If you have any question or comment, feel free to contact me directly at https://www.linkedin.com/in/citurria/

Thanks for your time.

Teaching How to use Terraform to automate Provisioning of Oracle Integration Cloud (OIC)

In a previous blog, I explained how to treat your Infrastructure as Code by using technologies such as Vagrant and Terraform in order to help automate provisioning and decommissioning of environments in the cloud. Then, I evolved those concepts with this other blog, where I explained how to use Oracle PaaS Service Manager (PSM) CLI in order to provision Oracle PaaS Services into the Cloud.

In this blog, I am going to put both concepts together and show how simple it is to automate the provisioning of Oracle Integration Cloud with Terraform and PSM CLI.

To provision a new PaaS environment, I first create a “Build Server” in the cloud (or, as my boss calls it, a “cockpit”) that brings all the required bells and whistles (e.g. Terraform, PSM CLI, Git, etc.) to provision PaaS environments. I will add all the tooling it requires as part of its bootstrap process. To create the “Build Server” in the first place, I am using Vagrant + Terraform as well, just because I need a common place to start and these tools highly simplify my life. This way, I can also treat my “Build Server” as infrastructure as code and easily get rid of it after I have built my target PaaS environments, saving some bucks in the cloud consumption model.

Once I build my “Build Server”, I will then simply git clone a repository that contains my scripts to provision other PaaS environments, setup my environment variables and type “terraform apply”. Yes, as simple as that!

This is a graphical view of what I will be doing:

Continue reading “Teaching How to use Terraform to automate Provisioning of Oracle Integration Cloud (OIC)”

Teaching How to use Oracle PaaS Service Manager (PSM) CLI to Provision Oracle PaaS environments

In this blog, I am going to get you started with Oracle PaaS Service Manager (PSM) CLI – A great tool to manage anything API-enabled on any Oracle PaaS Service or Stack. For example, provisioning, scaling, patching, backup, restore, start, stop, etc.

It has the concept of a Stack (multiple PaaS services), which means that you can very easily provision and manage full stacks, such as Oracle Integration Cloud (OIC), which combines multiple PaaS solutions underneath, e.g. ICS, PCS, VBCS, DBCS, etc.

For this, we are going to use a pre-cooked Vagrant Box/VM that I prepared for you, so that you don’t have to worry about installing software, but moving as quickly as possible to the meat and potatoes.

This is a graphical view of what we are going to do:

Continue reading “Teaching How to use Oracle PaaS Service Manager (PSM) CLI to Provision Oracle PaaS environments”