Teaching How to Provision Oracle Integration Cloud (OIC) with Cloud Stack and Terraform

We have written multiple blogs on how to use Terraform to help automate the provisioning of environments and treat your Infrastructure as Code. Until now, for PaaS stacks, we have used Terraform together with Oracle PaaS Service Manager (PSM) CLI. This gives us great flexibility to script our own tailored PaaS stacks the way we want them. However, with flexibility comes responsibility, and in this case, if we choose to use PSM CLI, it's up to us to script the whole provisioning/decommissioning of the components that make up the stack, as well as what to do if we encounter an error half-way through, so that we leave things in a consistent state.

A simpler way to provision PaaS stacks is to make use of Oracle Cloud Stack, which treats all components of the stack as a single unit, where all sub-components are provisioned/decommissioned transparently for us. For example, the Oracle Integration Cloud (OIC) stack is made up of Oracle DB Cloud Service (DBCS), Integration Cloud Service (ICS), Process Cloud Service (PCS), Visual Builder Cloud Service (VBCS), IaaS, storage, network, etc. If we use Oracle Cloud Stack to provision an environment, we only have to pass a YAML template with the configuration of the whole stack, and Cloud Stack handles the rest. Pretty awesome, huh?

As we have done in the past, we are going to use a "Build Server". This will be used as a platform to help us provision our PaaS stacks. When provisioning this "Build Server", I will add all the tooling it requires as part of its bootstrap process. For this, I am using Vagrant + Terraform, so that I can also treat my "Build Server" as "infrastructure as code" and easily get rid of it after I have built my target PaaS stack.

This is a graphical view of what I will be doing in this blog to provision an OIC stack via Cloud Stack:

Continue reading “Teaching How to Provision Oracle Integration Cloud (OIC) with Cloud Stack and Terraform”

Teaching How to Get started with Kubernetes deploying a Hello World App

In a previous blog, I explained how to provision a new Kubernetes environment locally on physical or virtual machines, as well as remotely in the Oracle Public Cloud. In this workshop, I am going to show how to get started by deploying and running a Hello World NodeJS application on it.

There are a few moving parts involved in this exercise:

  • Using an Ubuntu Vagrant box, I'll ask you to git clone a "Hello World NodeJS App". It comes with its own Dockerfile so it can easily be imaged/containerised.
  • Then, you will Docker build your app and push the image to Docker Hub.
  • Finally, I'll ask you to go into your Kubernetes cluster, git clone a repo with a sample Pod definition and run it there (the sketch below outlines these steps).
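
If you want a feel for what those steps look like on the command line, here is a minimal, hedged sketch; the repository URL, image name and Pod manifest file are placeholders for illustration, not the actual ones used in the workshop:

    # Clone the sample app (placeholder URL) and build/push its Docker image
    git clone https://github.com/your-user/hello-world-nodejs.git
    cd hello-world-nodejs
    docker build -t your-dockerhub-user/hello-world-nodejs:1.0 .
    docker login
    docker push your-dockerhub-user/hello-world-nodejs:1.0

    # On the Kubernetes cluster, create the Pod from the sample definition and check it
    kubectl create -f hello-world-pod.yaml
    kubectl get pods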

Continue reading “Teaching How to Get started with Kubernetes deploying a Hello World App”

Teaching How to quickly provision a Dev Kubernetes Environment locally or in Oracle Cloud

 

This time last year, people were excited talking about technologies such as Mesos or Docker Swarm to orchestrate their Docker containers. Nowadays (April 2018) almost everybody is talking about Kubernetes instead. This proves how quickly technology is moving, but it also shows that Kubernetes has been endorsed and backed by the cloud giants, including AWS, Oracle, Azure and (obviously) Google.

At this point, I don't see Kubernetes going anywhere in the coming years. On the contrary, I strongly believe that it is going to become the default way to run containerised environments, especially now that it is becoming a PaaS offering with different cloud providers, e.g. Oracle Containers. This gives it the extra push to operate comfortably in enterprise mission-critical solutions, with the backing of a big cloud vendor.

So, if you have not yet become familiar with Kubernetes, you had better do so, and quickly. In this blog I am going to show you how to get started with a fully functional Kubernetes dev environment that will let you start playing with it. In future blogs I am going to explain different use cases for Kubernetes, mainly around the 12-factor principles of microservices, e.g. deploying applications with concurrency, managing load balancers, managing replication controllers, scalability, managing state after container restarts, etc. But let's start with one of the most important of the 12 factors: "Disposability".

 

In this blog, you don’t have to install Kubernetes manually if you don’t want to. I am going to explain 3 different ways in which you can get started with a Kubernetes Dev environment ready to go:

  • Option 1: Automate the provisioning of Kubernetes locally on a VM (using Vagrant Box).
  • Option 2: Automate the provisioning of Kubernetes in the Oracle Public Cloud (using Terraform).
  • Option 3: Manually install Kubernetes locally on an existing environment (using Minikube) – This can be your own host machine, a local VM, IaaS, etc.

Obviously, Options 1 and 2 will simplify the whole process and will give you the ability to treat your Kubernetes environment "as code", so that you can destroy/recreate it quickly and easily – and that's what "Disposability" is all about. Option 3 is more useful if you want to learn how to install Kubernetes on an existing environment, for example your own laptop or an existing VM.

Let’s get our hands dirty!

Before we start

 

In this blog I assume the following:

  • You are familiar with Vagrant. If not, read this blog. It will take you 5-very-well-spent minutes.
  • Also, I assume that you are familiar with Terraform. If not, read this other blog that I wrote some time ago. It explains Terraform in detail.
  • You need an Oracle Cloud account; if you don't have one, request a free trial here: https://cloud.oracle.com/tryit

 

Option 1: Automate the provisioning of Kubernetes locally on a VirtualBox VM (using Vagrant)

 

Spin up Vagrant VM

 

For this exercise, I am going to use an Oracle-managed Vagrant Box that, luckily for us, has the ability to run local nested virtualisation. Yes, you read that right. Thanks to Oracle, we can now easily run a local HA Kubernetes cluster inside VirtualBox VMs!

 

It will create a topology with 3 VMs:

  • 1 Master node
  • 2 Worker Nodes

 

The GitHub repository is https://github.com/oracle/vagrant-boxes/tree/master/Kubernetes and it has a very comprehensive readme file, but below is a quick summary to get up and running.

 

Note: I assume that you have already:

  1. Installed Git.
  2. Installed Oracle VM VirtualBox and Vagrant.
  3. Created an Oracle Container Registry account, signed in to the Oracle Container Registry, and accepted the Oracle Standard T&Cs for the Oracle Container Registry.

After accepting, it will show something like:

 

 

  • Clone Oracle’s Kubernetes Vagrant box repository:

     

    git clone https://github.com/oracle/vagrant-boxes

     

  • Move into the vagrant-boxes/Kubernetes directory:

     

    cd vagrant-boxes/Kubernetes

     

  • Now, start your vagrant box:

     

    vagrant up master

     

    Note: Give it some time the first time. It will download the Vagrant Box and install all dependencies. Subsequent times will be much faster.


 

  • Vagrant ssh into it.

     

    vagrant ssh

 


 

  • Set up your master node. For this, within the master guest VM, run the following as root:

    /vagrant/scripts/kubeadm-setup-master.sh

     

    You will be asked to log in to the Oracle Container Registry. Use the same account with which you accepted the T&Cs earlier.

     

    Note: If you get an error like this one:

 

    It means that you have not yet accepted the T&C’s for the Oracle Container Registry!

 

  • Once you run this script you will see a message asking you to be patient. They mean it! For me, it took around 30 minutes to download the associated Docker images and configure them.

 

 

  • Setting up your Master node should succeed with a message like the following:

     

 

  • Now, back on your host machine, open another terminal window and start the first worker node:

     

    vagrant up worker1


  • Once it’s started, vagrant ssh into it:

     

    vagrant ssh worker1


  • Set up your worker1 node. For this, within the worker1 guest VM, run the following as root:

     

    /vagrant/scripts/kubeadm-setup-worker.sh

    

Once again, you will be asked to log in to the Oracle Container Registry. Use the same account with which you accepted the T&Cs previously.

 

  • Setting up your Worker node 1 should succeed with a message like the following:

     

  • Finally, let's set up the 3rd and last VM, in this case the second worker node. Go back to your host machine, open a 3rd terminal window and start the second worker node:

     

    vagrant up worker2

     

  • Once it’s started, vagrant ssh into it:

     

    vagrant ssh worker2

  • Set up your worker2 node. For this, within the worker2 guest VM, run the following as root:

     

    /vagrant/scripts/kubeadm-setup-worker.sh

    

Once again, you will be asked to log in to the Oracle Container Registry. Use the same account with which you accepted the T&Cs previously.

 

  • Setting up your Worker node 2 should succeed with a message like the following:

 

 

  • Congratulations! Your Kubernetes cluster with 1 Master node and 2 Worker nodes is ready to go.

     

     

     

  • Test your Kubernetes cluster. For this, within the master node/VM (not as root, but back as the vagrant user), try the following commands:

     

    kubectl cluster-info

    kubectl get nodes

    kubectl get pods --namespace=kube-system
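
    Optionally, as an extra smoke test, you can create and then remove a throw-away nginx Deployment. This is a hedged sketch using standard kubectl commands, not part of the original Oracle readme (on newer clusters, kubectl create deployment hello-nginx --image=nginx is the equivalent of the first command):

    kubectl run hello-nginx --image=nginx --replicas=2

    kubectl get deployments,pods -o wide

    kubectl delete deployment hello-nginx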


 

For more information, please refer to the original Oracle Git repo readme file.

 

 

Option 2: Automate the provisioning of Kubernetes in the Oracle Public Cloud (using Terraform)

 

For this option, we are going to use Terraform to provision compute (IaaS) with Kubernetes on the Oracle Public Cloud. If you have already installed Terraform locally, or are using a Build Server with Terraform installed, feel free to skip the next section. Otherwise, if you don't have Terraform installed and don't want to install it, you can use a Vagrant Box that I already put together.

 

If you want to use a Vagrant Box that I previously put together that auto-installs Terraform:

 

  • Clone my devops repository:

     

    git clone https://github.com/solutionsanz/devops

     

  • Move into the KubernetesDevEnv directory:

     

    cd devops/KubernetesDevEnv

     

  • Now, start your vagrant box:

     

    vagrant up

     

    Note: Give it some time the first time. It will download the Ubuntu Box and install all dependencies. Subsequent times will be much faster.

  • Once it finishes, as per the bootstrap process, your Vagrant VM is going to come with Terraform installed and ready to go.
  • Vagrant ssh into it.

     

    vagrant ssh

     

  • Validate that Terraform is installed properly:

     

    terraform --version

     

  • Now, we are going to use Terraform to install and configure Kubernetes on Oracle Compute Cloud. For this, we are going to git clone another repository that Cameron Senese has put together for this exact purpose.

     

  • First, register an account at the Oracle Container Registry (OCR). Be sure to accept the Oracle Standard Terms and Restrictions after registering with the OCR. The installer will request your OCR credentials at build time; registration with the OCR is a dependency for the installer to be able to download the containers used to assemble the K8s control plane.

     

  • Git clone Cam’s repo:

     

    git clone https://github.com/cameronsenese/opc-terraform-kubernetes-installer

     

  • Move to the opc-terraform-kubernetes-installer directory

     

    cd opc-terraform-kubernetes-installer

     

  • Initialise Terraform:

     

    terraform init
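
    Note: Optionally, before applying you can preview what Terraform is about to create. This is standard Terraform behaviour rather than a step from the original readme:

    terraform plan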

     

  • Apply the Terraform plan:

    terraform apply

     

  • At this point the configuration will prompt for target environment inputs. Please refer to Cameron Senese's original Git repo readme file if any of them is unclear to you.

     

  • Depending on the number of modules selected to be installed, the whole provisioning might take up to 15 minutes to complete.

     

  • You can SSH into your new VM by using the sample private key provided by default under the ssh folder.

     

  • Once inside the new VM, make sure the cluster is running properly by trying the following commands:

     

    kubectl cluster-info

    kubectl get nodes

     

 

Option 3: Manually Install and configure Kubernetes (with minikube) on an existing environment

 

Install Kubectl Manually (Optional)

 

For more information on installing Kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl
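
As a rough sketch, on a 64-bit Linux host the linked instructions boil down to something like the following (commands as documented by the Kubernetes project at the time of writing; adjust for your OS):

    curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
    chmod +x ./kubectl
    sudo mv ./kubectl /usr/local/bin/kubectl
    kubectl version --client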

 

Install Minikube Manually (Optional)
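
For more information on installing Minikube: https://kubernetes.io/docs/tasks/tools/install-minikube/

Once Minikube is installed, starting and verifying your local single-node cluster is as simple as the following (a hedged sketch; with the default VirtualBox driver, Minikube spins up its own VM):

    minikube start

    minikube status

    kubectl get nodes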

 

 

  • If you wish to stop your Kubernetes cluster:

     

    minikube stop

Congratulations!!! Regardless of the approach that you took, your Kubernetes Development environment is ready to go.

If you want to also learn how to use it, read this other blog that shows you the basics of Kubernetes by deploying a HelloWorld Application.

 

Curating your Kubernetes cluster with other Cloud Native Technologies (Optional)

 

Recently, the CNCF has been pushing various cloud native projects that are building great momentum: technologies such as service mesh (Istio, Envoy), Grafana, Prometheus, Zipkin, etc.

For this reason, and in alignment with a recent Kubernetes workshop that we delivered in Australia and NZ, we provide a quick way to curate a local Kubernetes cluster, assuming you chose Option 1 or 3 from this blog (kubeadm on VirtualBox VMs, or Minikube on a physical host).

Note: If you chose Option 2, i.e. using Terraform to provision your Kubernetes cluster, then you don't have to follow these steps. Your Kubernetes cluster is already curated with all these open source technologies.

Once you have your Kubernetes cluster up and running, follow the next steps to deploy these open source technologies.

  • Go to the master node or wherever you have kubectl installed.
  • Git clone the following repo:

     

    git clone https://github.com/solutionsanz/opc-terraform-kubernetes-installer

     

    Note: If git is not yet installed, do a quick: sudo [yum|apt-get] install git -y

     

  • Change directory inside the repo

     

    cd opc-terraform-kubernetes-installer

     

  • Set execution privileges on the curation script:

     

    chmod 755 curate-k8s.sh

     

  • Run the script by specifying true or false, depending on whether you want to install the following open source components:

     

    Order: [MonitoringDashboards SocksShopDemo CheesesDemo ServiceMeshDemo]

     

    E.g.

     

    ./curate-k8s.sh true true true true

     

  • After the script execution, you should be able to see all specified services running inside your Kubernetes cluster:

     

    kubectl get services --all-namespaces

Notice that we have a variety of Service types:

  • ClusterIP: These services allow intra pod/container communication inside the cluster.
  • NodePort: These services come with ClusterIP plus an assigned port on each Worker Node for external consumption
  • LoadBalancer: If running on a Cloud provider, an external Load Balancer as a Service (LBaaS) will be mapped to this service
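
To make the distinction concrete, here is a hedged example of exposing a hypothetical hello-nginx Deployment with different Service types (all names are placeholders):

    kubectl expose deployment hello-nginx --port=80 --name=hello-internal --type=ClusterIP

    kubectl expose deployment hello-nginx --port=80 --name=hello-external --type=NodePort

    kubectl describe service hello-external    # shows the node port assigned on each worker node

On a cloud provider you would use --type=LoadBalancer instead, and an external LBaaS would be provisioned and mapped for you.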

Ideally, at this point you would configure Ingress resources for all the services that you wish to expose outside the Kubernetes cluster, fronted by an LBaaS from your cloud vendor for easy external consumption.

For dev purposes, working locally on a Mac or PC, if you are running this Kubernetes cluster under Option 1 (Vagrant-based VirtualBox VMs), you might need to open up the assigned ports for those NodePort/Ingress services.

For example, in the image above, the Traefik-ingress-service is mapped to port 80 on 10.103.63.35.

However, another way to quickly "hack" access into your internal services, even those of type ClusterIP, is to establish SSH tunnels that redirect traffic from the host machine to the specific services' internal IP addresses and ports.

 

For example, let's assume that we want to gain access to the WeaveScope dashboard UI. Based on the configuration described above, I can:

 

 

  • If using Linux/MacOS, you can simply set up an SSH tunnel from within a terminal on your host:

     

    ssh -L 180:10.108.222.2:80 vagrant@127.0.0.1 -p [2200|2201] -i private-key

     

    Let’s analyse the previous command:

     

    • 180: An arbitrary port I chose on my host machine, through which all traffic will be SSH-tunnelled.
    • 10.108.222.2:80: The internal IP address and port to which I want to route all traffic. In this case it is the endpoint for the WeaveScope Dashboard UI.
    • vagrant@127.0.0.1 -p 2200: Here I need to target either of the 2 worker nodes. If using the VirtualBox VMs, there is out-of-the-box port forwarding already in place that maps 2200 to 22 for worker node 1 and 2201 to 22 for worker node 2. Either will work, as the Kubernetes services are network constructs running on each of the worker nodes.
    • -i private-key: References the SSH private key (located under $GIT_REPO_LOCATION/vagrant-boxes/Kubernetes/.vagrant/machines/worker1/virtualbox/private_key).

     

  • If using PuTTY, first you would need to convert the Vagrant SSH private key into PPK (you can find the private key in your host machine under: $GIT_REPO_LOCATION/vagrant-boxes/Kubernetes/.vagrant/machines/worker1/virtualbox/private_key).

     

    Then you would need to establish a tunnel configuration in your PuTTY session:

        Finally, open a browser window on your host machine and point it to localhost:[YOUR_CHOSEN_PORT], e.g. localhost:180
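
        Either way, once the tunnel is established, a quick way to confirm it from a terminal on your host machine (assuming you chose port 180 as above) is:

        curl -I http://localhost:180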

 

 

I hope you found this blog useful. If you have any questions or comments, feel free to contact me directly at https://www.linkedin.com/in/citurria/

Thanks for your time.

Teaching How to use Terraform to automate Provisioning of Oracle Integration Cloud (OIC)

In a previous blog, I explained how to treat your Infrastructure as Code by using technologies such as Vagrant and Terraform in order to help automate provisioning and decommissioning of environments in the cloud. Then, I evolved those concepts with this other blog, where I explained how to use Oracle PaaS Service Manager (PSM) CLI in order to provision Oracle PaaS Services into the Cloud.

In this blog, I am going to put together both concepts and show how simply you can automate the provisioning of Oracle Integration Cloud with Terraform and PSM CLI together.

To provision a new PaaS environment, I first create a "Build Server" in the cloud, or as my boss calls it, a "cockpit", that brings all the required bells and whistles (e.g. Terraform, PSM CLI, Git, etc.) to provision PaaS environments. I will add all the tooling it requires as part of its bootstrap process. To create the "Build Server" in the first place, I am using Vagrant + Terraform as well, just because I need a common place to start and these tools highly simplify my life. This way, I can also treat my "Build Server" as "infrastructure as code", easily get rid of it after I have built my target PaaS environments, and with that save a few bucks in the cloud consumption model.

Once I have built my "Build Server", I will then simply git clone a repository that contains my scripts to provision other PaaS environments, set up my environment variables and type "terraform apply". Yes, as simple as that!
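
As a rough sketch of that flow (the repository URL and variable names below are placeholders for illustration, not the actual ones used in this blog):

    git clone https://github.com/your-org/your-paas-terraform-scripts.git
    cd your-paas-terraform-scripts
    # Terraform picks up any environment variable prefixed with TF_VAR_ as an input variable
    export TF_VAR_opc_user="you@example.com"
    export TF_VAR_opc_password="********"
    export TF_VAR_identity_domain="your-identity-domain"
    terraform init
    terraform apply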

This is a graphical view of what I will be doing:

Continue reading “Teaching How to use Terraform to automate Provisioning of Oracle Integration Cloud (OIC)”

Teaching How to use Oracle PaaS Service Manager (PSM) CLI to Provision Oracle PaaS environments

In this blog, I am going to get you started with Oracle PaaS Service Manager (PSM) CLI – a great tool to manage, via its APIs, any Oracle PaaS service or stack: for example, provisioning, scaling, patching, backup, restore, start, stop, etc.

It has the concept of a Stack (multiple PaaS services), which means that you can very easily provision and manage full stacks, such as Oracle Integration Cloud (OIC), which combines multiple PaaS solutions underneath, e.g. ICS, PCS, VBCS, DBCS, etc.

For this, we are going to use a pre-cooked Vagrant Box/VM that I prepared for you, so that you don't have to worry about installing software and can move as quickly as possible to the meat and potatoes.

This is a graphical view of what we are going to do:

Continue reading “Teaching How to use Oracle PaaS Service Manager (PSM) CLI to Provision Oracle PaaS environments”

Teaching How to push your code into multiple Remote Git repositories

Git has very quickly become one of the most common ways to maintain and manage source code. It is easy to use, fast, reliable, and most modern CI/CD tooling supports it. GitHub also makes it easy for anyone who wants to share code to do so for free or very inexpensively. Many companies, however, also look for ways to maintain their own private repositories as an enterprise-grade solution, like Developer Cloud Service (DevCS), the one Oracle provides for free with any IaaS or PaaS service.

In this blog I am going to show you how to push your code into any number of remote Git repositories. For example, you can have your private repository in DevCS and choose to also publish your code to another remote repository (public or private) in GitHub.

This is the high-level idea:

  1. Let's create a new Git repo in DevCS.
  2. Let's create a repo in GitHub.
  3. Let's clone the DevCS repo locally on my laptop.
  4. Let's push the code to the DevCS Git repo.
  5. Let's push the code to the GitHub repo (see the sketch below).
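
As a hedged sketch of steps 3 to 5 (the repository URLs are placeholders):

    # Clone the DevCS repo and work in it as usual; "origin" will point to DevCS
    git clone https://your-devcs-instance.oraclecloud.com/your-project/your-repo.git
    cd your-repo

    # Add GitHub as a second remote alongside "origin"
    git remote add github https://github.com/your-user/your-repo.git

    # Push the same branch to both remotes
    git push origin master
    git push github master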

Continue reading “Teaching How to push your code into multiple Remote Git repositories”

Teaching How to use Terraform to Manage Oracle Cloud Infrastructure as Code

Infrastructure as Code is becoming very popular. It allows you to describe a complete blueprint of a datacentre using a high-level configuration syntax that can be versioned and script-automated. This brings huge improvements in the efficiency and reliability of provisioning and retiring environments.

Terraform is a tool that helps automate such environment provisioning. It lets you define, in a descriptor file, all the characteristics of a target environment. Then, it lets you fully manage its life-cycle, including provisioning, configuration, state compliance, scalability, auditability, retirement, etc.

Terraform can work seamlessly with the major cloud vendors, including Oracle, AWS, MS Azure, Google, etc. In this blog, I am going to show you how simple it is to use it to automate the provisioning of Oracle Cloud Infrastructure from your own laptop/PC. For this, we are going to use Vagrant on top of VirtualBox to virtualise a Linux environment and then run Terraform on top, so that no matter what OS you use, you can quickly get started.

This is the high-level idea:

Continue reading “Teaching How to use Terraform to Manage Oracle Cloud Infrastructure as Code”