Making access easy but secure

So following on from my earlier article, Policies let your teams play safe, I have been given another challenge: Can we give our users single sign on now that each team can play safely in their own Oracle Cloud Infrastructure compartments?

Single sign-on delivers a number of really important benefits. Firstly, the user experience is much smoother and more seamless, as users aren’t prompted for multiple passwords and don’t have to remember even more of them. More importantly, single sign-on eliminates the need to manage multiple identity stores, which can be a big overhead for administrators and can open up additional risks. Finally, an enterprise-wide identity solution often provides additional capabilities that can be leveraged by your Oracle Cloud Infrastructure.

Oracle Cloud Infrastructure supports a couple of different flavours of enterprise identity solutions: Oracle Identity Cloud Service and Microsoft Active Directory Federation Service (ADFS). The great news is that Oracle Cloud Infrastructure is pre-integrated with Oracle Identity Cloud, but you’ll probably want to provide finer-grained access.

In this example, I’ll explain how to configure Oracle Identity Cloud Service as an external identity provider for Oracle Cloud Infrastructure. I will also explain how to map groups from the external identity provider to Oracle Cloud Infrastructure so you can use the policies from my earlier article, Policies let your teams play safe.

At a high level this is what we’ll do:

  • Set up trust between Oracle Identity Cloud and Oracle Cloud Infrastructure using the Client ID and Secret
  • Create a couple of Marketing groups and users, then assign a user to each group
  • Map the groups from Identity Cloud to Oracle Cloud Infrastructure
  • If you haven’t tried my earlier article, you’ll need to set up policies as described in Policies let your teams play safe
  • Try it out by logging in as one of the Marketing users

The good news is that this is a one-off configuration to ensure that your Oracle Cloud Infrastructure service and your Identity Cloud Service trust each other. Once you have configured the trust and mapped your groups the users will have a seamless SSO experience with policies enforced by Groups.

You will need an active cloud subscription and a Compute instance, which you can create from the Cloud Services console below. Oracle Identity Cloud is enabled by default. Don’t worry if you haven’t got an active subscription, because you can sign up for a free trial.

So let’s get started. Firstly login to the Oracle Cloud Services console with credentials provided by your administrator or from Oracle Cloud if you’re the super user. The Cloud Services console should look like my screenshot below.

Compute Application Information

You’ll need to get the Application Client ID and Client Secret from Identity Cloud so that you can map the groups later in this article.

Click on the Users icon at the top of the screen (I’ve circled it in red above).

Click on the Identity Console button.

Click on the Menu Button next to the Oracle logo in the top left of the screen

Click on Applications and search for COMPUTEBAREMETAL

Select the COMPUTEBAREMETAL application and click on the Configuration tab

Copy the Client ID and Client Secret from the General Information section. You’ll need to click on the Show Secret button to see the Client Secret. Remember that this information is very sensitive – DON’T POST IT ON BLOGS like this or github etc. The screenshot below shows what to copy (with bits obscured to protect the innocent).
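Since these values are so sensitive, it’s also worth keeping them out of your scripts altogether. Here’s a minimal sketch of the idea: hold the credentials in environment variables and only ever print a masked form. The values below are placeholders, not real credentials.

```shell
# Placeholder values -- substitute the Client ID and Secret you copied
# from the COMPUTEBAREMETAL application. Never hardcode the real ones.
export OCI_IDCS_CLIENT_ID="abc123exampleclientid"
export OCI_IDCS_CLIENT_SECRET="s3cret-example-value"

# When you need to confirm the variables are set, print a masked form only.
echo "Client ID: ${OCI_IDCS_CLIENT_ID:0:4}****"
echo "Client Secret: ${OCI_IDCS_CLIENT_SECRET:0:2}****"
```

Scripts that need the credentials can then read the environment variables, and nothing sensitive ends up in your shell history, blog posts or git repositories.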

Create Marketing Groups

In my earlier article, Policies let your teams play safe, I created a couple of groups for the Marketing department in Oracle Cloud Infrastructure: Marketing-Users and Marketing-Admins. I’ll create the same groups in my external identity provider so that when users log in from Identity Cloud, the groups associated with the user are passed on to Oracle Cloud Infrastructure. This is how Oracle Cloud Infrastructure knows which policies to enforce.

Click on Groups -> Add

Create a group called Marketing-Users

Give it a description and optionally check the Users can request access checkbox. This checkbox allows users who are already registered in Identity Cloud to request access to the Marketing-Users group. In my example, I don’t mind anyone in the enterprise using the Marketing resources. Of course, you may decide differently in the real world.

Click Finish

Click Add

Create a group called Marketing-Admins

Give it a description but this time don’t check the Users can request access checkbox. In my example, admin level access is privileged so users are not permitted to request access to this group.

Click Finish

Assign the groups to some users

Go and create a couple of users (I’m assuming you can work this out yourself). Click on Users in the menu and go from there. Assign one user to Marketing-Users and the other user to Marketing-Admins. These will be used later to try out your SSO and to make sure that the policies from my earlier article are applied to these new users.

You’re done with Identity Cloud for now so log out.

Map the Groups

Mapping groups from the Identity Provider (Oracle Identity Cloud Service in this case) to groups in Oracle Cloud Infrastructure is the secret sauce that allows you to enforce fine grained access controls. For example, by assigning a Marketing Admin group to a user in Identity Cloud, you’ll then be able to restrict access for that user to Marketing resources using policies defined in Oracle Cloud Infrastructure.

Oracle Cloud Infrastructure is automatically federated with your instance of Oracle Identity Cloud Service. All you need to do is map the groups from Identity Cloud (aka the Identity Provider) to groups in Oracle Cloud Infrastructure.

Login to the Oracle Cloud Services console again with credentials provided by your administrator or from Oracle Cloud if you’re the super user. This is the console that you first logged into at the beginning of this article.

Click on the Compute instance

Click on the Open Service Console button

TIP: I find it very useful to create a bookmark for this URL in my browser. I’ll refer to this later as the Oracle Cloud Infrastructure Console.

Click on Menu -> Identity -> Federation

Select the active Identity Provider. It should be called something like “OracleIdentityCloudService”.

Click on Edit Mapping

Enter the Client ID and Client Secret that you saved earlier and click Continue

Click on Add Mapping

Select Marketing-Users from the IDENTITY PROVIDER GROUP drop down.

Select Marketing-Users from the ORACLE CLOUD INFRASTRUCTURE GROUP drop down.

Click on Add Mapping

Select Marketing-Admins from the IDENTITY PROVIDER GROUP drop down.

Select Marketing-Admins from the ORACLE CLOUD INFRASTRUCTURE GROUP drop down.

Click on Submit

Log out!
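If you later want to automate this mapping instead of clicking through the console, the OCI CLI exposes IdP group mappings via `oci iam idp-group-mapping create`. The sketch below only echoes the command rather than executing it, since a real run needs a configured CLI and a live tenancy, and the OCIDs are hypothetical placeholders for your identity provider and your Oracle Cloud Infrastructure group.

```shell
# Hypothetical OCIDs -- look up your own with the console or
# `oci iam identity-provider list` / `oci iam group list`.
IDP_OCID="ocid1.saml2idp.oc1..exampleidp"
GROUP_OCID="ocid1.group.oc1..examplegroup"

# Dry run: echo the command instead of executing it.
echo oci iam idp-group-mapping create \
  --identity-provider-id "$IDP_OCID" \
  --idp-group-name "Marketing-Users" \
  --group-id "$GROUP_OCID"
```

Drop the leading `echo` to run it for real; repeat with Marketing-Admins for the second mapping.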

Create Some Policies

If you haven’t followed my earlier article then now is the time to do so. Go have a look at Policies let your teams play safe.

Try it Out!

Click on the Oracle Cloud Infrastructure Console bookmark that you saved in your browser earlier.

Click on Continue in the Single Sign-On (SSO) box. This will take you to Oracle Identity Cloud.

Login as the Marketing Admin user that you created in Oracle Identity Cloud Service earlier in this article.

Marketing Administrators Can’t See Non Marketing Resources

We should see a Forbidden message, because the Marketing Admin has no access to resources outside the Marketing compartment.

Marketing Administrators Can See Resources in the Marketing Compartment

Now select the Marketing compartment from the drop down on the left

We should see a different message now: “There are no Autonomous Data Warehouses in Marketing that match the filter criteria.”

This is because the Marketing Policy allows us to see objects in the Marketing Compartment, however, we haven’t created any warehouses in the Marketing Compartment yet.


Conclusion

Integrating Oracle Cloud Infrastructure with Oracle Identity Cloud gives you big benefits in terms of user experience and significantly reduces the time spent managing users. You can extend the benefits even further with Policy Based Multi Factor Authentication.

Teaching How to Provision Oracle Autonomous API Platform and API Gateway

Oracle is adding a secret recipe to all their Cloud Services with a nice touch of Machine Learning. This makes possible the new series of “Autonomous” Cloud Services that are self-driving, self-healing and self-securing. Stay tuned, because we are going to keep hearing a lot about them.

In this blog I am going to show you how to provision an Autonomous API Platform environment and then provision and register an API Gateway, running on a separate Oracle Linux VM on IaaS.

This is a graphical view of what I will be doing in this blog:

Continue reading “Teaching How to Provision Oracle Autonomous API Platform and API Gateway”

Teaching How to Provision Oracle Integration Cloud (OIC) with Cloud Stack and Terraform

We have covered multiple blogs on how to use Terraform to help automate the provisioning of environments and treat your Infrastructure as Code. Until now, for PaaS stacks, we have used Terraform together with the Oracle PaaS Service Manager (PSM) CLI. This gives us great flexibility to script our own tailored PaaS stacks the way we want them. However, with flexibility comes responsibility: if we choose to use the PSM CLI, it’s up to us to script the whole provisioning/decommissioning of the components that make up the stack, as well as to handle any errors encountered half-way through, so that we leave things in a consistent state.

A simpler way to provision PaaS stacks is to make use of Oracle Cloud Stack, which treats all components of the stack as a single unit, where all sub-components are provisioned/decommissioned transparently for us. For example, the Oracle Integration Cloud (OIC) stack is made up of Oracle DB Cloud Service (DBCS), Integration Cloud Service (ICS), Process Cloud Service (PCS), Visual Builder Cloud Service (VBCS), IaaS, storage, network, etc. If we use Oracle Cloud Stack to provision an environment, we only have to pass a YAML template with the configuration of the whole stack and Cloud Stack handles the rest. Pretty awesome, huh?
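To give a feel for what such a YAML template looks like, here is a heavily simplified, illustrative sketch only. The resource type names and attributes below are placeholders I made up for the example; a real Oracle Cloud Stack template must follow the official template schema and the resource types your tenancy supports.

```yaml
# Illustrative sketch only -- NOT a complete or valid Cloud Stack template.
template:
  templateName: OIC-Stack-Sketch
  templateVersion: 1.0
  parameters:
    dbPassword:
      type: Password       # prompted for at stack-creation time
  resources:
    oicDB:
      type: dbaas          # the DBCS component of the stack (placeholder type name)
    oicService:
      type: OIC            # hypothetical resource type for the OIC service
      depends_on:
        - oicDB            # Cloud Stack provisions the DB first
```

The point is that the whole stack, including ordering between components, lives in one declarative file that Cloud Stack executes for you.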

Similarly, as we have done in the past, we are going to use a “Build Server” as a platform to help us provision our PaaS stacks. When provisioning this “Build Server”, I will add all the tooling it requires as part of its bootstrap process. For this, I am using Vagrant + Terraform, so that I can also treat my “Build Server” as infrastructure as code and easily get rid of it after I have built my target PaaS stack.

This is a graphical view of what I will be doing in this blog to provision an OIC stack via Cloud Stack:

Continue reading “Teaching How to Provision Oracle Integration Cloud (OIC) with Cloud Stack and Terraform”

Learn how Containers and Kubernetes fit together – Live Workshop

Need to understand how Containers, Kubernetes and the Cloud-Native Landscape fit together?

Organisations are excited about the cloud-native approach as it helps provide parity between development and production environments, adoption of the DevOps framework, and enables software developers to build great products faster.

This new paradigm in application delivery has brought with it much new jargon and tooling – “Containers”, “Docker”, “Kubernetes”, “Container Orchestration”, and “Microservices” are fast becoming the new norm.

Save time and get up to speed on the business value and technical know-how of these contemporary, cloud-native concepts and tools, including:

  • CNCF Landscape
  • Containers
  • Docker
  • Kubernetes
  • Microservices

Continue reading “Learn how Containers and Kubernetes fit together – Live Workshop”

Teaching How to Get started with Kubernetes deploying a Hello World App

In a previous blog, I explained how to provision a new Kubernetes environment locally on physical or virtual machines, as well as remotely in the Oracle Public Cloud. In this workshop, I am going to show you how to get started by deploying and running a Hello World NodeJS application on it.

There are a few moving parts involved in this exercise:

  • Using an Ubuntu Vagrant box, I’ll ask you to git clone a “Hello World NodeJS App”. It will come with its Dockerfile to be easily imaged/containerised.
  • Then, you will Docker build your app and push the image into Docker Hub.
  • Finally, I’ll ask you to go into your Kubernetes cluster, git clone a repo with a sample Pod definition and run it on your Kubernetes cluster.

Continue reading “Teaching How to Get started with Kubernetes deploying a Hello World App”

Teaching How to quickly provision a Dev Kubernetes Environment locally or in Oracle Cloud

 

This time last year, people were excited talking about technologies such as Mesos or Docker Swarm to orchestrate their Docker containers. Nowadays (April 2018) almost everybody is talking about Kubernetes instead. This proves how quickly technology is moving, but it also shows that Kubernetes has been endorsed and backed by the Cloud Giants, including AWS, Oracle, Azure and, obviously, Google.

At this point, I don’t see Kubernetes going anywhere in the coming years. On the contrary, I strongly believe that it is going to become the default way to dockerise environments, especially now that it is becoming a PaaS offering with different cloud providers, e.g. Oracle Containers. This gives it the extra push needed to operate in enterprise mission-critical solutions, with the backing of a big Cloud Vendor.

So, if you have not yet got familiar with Kubernetes, you’d better do so quickly. In this blog I am going to show you how to get started with a fully functional Kubernetes dev environment that will let you start playing with it. In future blogs I am going to cover different use cases for Kubernetes, mainly around the 12-factor principles for microservices, including deploying applications with concurrency, managing load balancers, managing replication controllers, scalability, managing state across container restarts, etc. But let’s start with one of the most important factors: “Disposability”.

 

In this blog, you don’t have to install Kubernetes manually if you don’t want to. I am going to explain 3 different ways in which you can get started with a Kubernetes Dev environment ready to go:

  • Option 1: Automate the provisioning of Kubernetes locally on a VM (using Vagrant Box).
  • Option 2: Automate the provisioning of Kubernetes in the Oracle Public Cloud (using Terraform).
  • Option 3: Manually install Kubernetes locally on an existing environment (using minikube) – this can be your own Host machine, a local VM, IaaS, etc.

Obviously, Options 1 and 2 will simplify the whole process and give you the ability to treat your Kubernetes environment “as code”, so that you can destroy and recreate it quickly and easily, which is what “Disposability” is all about. Option 3 is more for when you want to learn how to install Kubernetes on an existing environment, for example your own laptop or an existing VM.
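To make “Disposability” concrete, the whole cluster can be thrown away and rebuilt with a couple of commands. This sketch assumes Option 1 (the three-VM Vagrant topology described below), and it echoes the commands rather than running them, so it is safe to paste anywhere.

```shell
# A minimal disposability cycle for the Vagrant-based cluster (Option 1).
# The vagrant commands are echoed, not executed -- remove the echoes to run them.
recreate_cluster() {
  echo "vagrant destroy -f master worker1 worker2"   # throw the environment away
  echo "vagrant up master worker1 worker2"           # rebuild it from scratch
}

recreate_cluster
```

For Option 2 the equivalent cycle is `terraform destroy` followed by `terraform apply`.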

Let’s get our hands dirty!

Before we start

 

In this blog I assume the following:

  • You are familiar with Vagrant. If not, read this blog. It will take you 5-very-well-spent minutes.
  • Also, I assume that you are familiar with Terraform. If not, read this other blog that I wrote some time ago. It explains Terraform in detail.
  • You need an Oracle Cloud account. If you don’t have one, request a free trial here: https://cloud.oracle.com/tryit

 

Option 1: Automate the provisioning of Kubernetes locally on a VirtualBox VM (using Vagrant)

 

Spin up Vagrant VM

 

For this exercise, I am going to use an Oracle-managed Vagrant Box that, luckily for us, has the ability to run nested virtualisation locally. Yes, you read that right. Thanks to Oracle, we can now easily run a local HA Kubernetes cluster inside VirtualBox VMs!

 

It will create a topology with 3 VMs:

  • 1 Master node
  • 2 Worker Nodes

 

The GitHub repository is https://github.com/oracle/vagrant-boxes/tree/master/Kubernetes and it has a very comprehensive Readme file, but here is a quick summary to get up and running.

 

Note: I assume that you have already:

  1. Installed Git
  2. Installed Oracle VM VirtualBox and Vagrant.
  3. You have an existing Oracle Container Registry account. Then, sign in to the Oracle Container Registry and accept the Oracle Standard T&Cs for the Oracle Container Registry.


  • Clone Oracle’s Kubernetes Vagrant box repository:

     

    git clone https://github.com/oracle/vagrant-boxes

     

  • Move into the vagrant-boxes/Kubernetes directory:

     

    cd vagrant-boxes/Kubernetes

     

  • Now, start your vagrant box:

     

    vagrant up master

     

    Note: Give it some time the first time. It will download the Vagrant Box and install all dependencies. Subsequent times will be much faster.


 

  • Vagrant ssh into it.

     

    vagrant ssh

 


 

  • Setup your master node within the master guest VM. For this, within the master guest VM, run as root:

    /vagrant/scripts/kubeadm-setup-master.sh

     

    You will be asked to log in to the Oracle Container Registry. Use the same account from which you accepted the T&C’s already.

     

    Note: If you get an error at this point, it most likely means that you have not yet accepted the T&Cs for the Oracle Container Registry!

 

  • Once you run this script you will see a message asking you to be patient. They mean it! For me, it took around 30 minutes to download the associated Docker images and configure them.

 

 

  • Setting up your Master node should succeed with a message like the following:

     

 

  • Now, back to your Host machine, open other terminal windows and start the first worker node:

     

    vagrant up worker1


  • Once it’s started, vagrant ssh into it:

     

    vagrant ssh worker1


  • Setup your worker1 node within the worker1 guest VM. For this, within the worker1 guest VM, run as root:

     

    /vagrant/scripts/kubeadm-setup-worker.sh

    

Once again, you will be asked to log in to the Oracle Container Registry. Use the same account from which you accepted the T&C’s previously.

 

  • Setting up your Worker node 1 node should succeed with a message like the following:

     

  • Finally, let’s setup the 3rd and last VM, in this case the second Worker Node. Go back to your Host machine, open a 3rd terminal window and start the second worker node:

     

    vagrant up worker2

     

  • Once it’s started, vagrant ssh into it:

     

    vagrant ssh worker2

  • Setup your worker2 node within the worker2 guest VM. For this, within the worker2 guest VM, run as root:

     

    /vagrant/scripts/kubeadm-setup-worker.sh

    

Once again, you will be asked to log in to the Oracle Container Registry. Use the same account from which you accepted the T&C’s previously.

 

  • Setting up your Worker node 2 should succeed with a message like the following:

 

 

  • Congratulations your Kubernetes cluster with 1 Master node and 2 Worker nodes is ready to go.

     

     

     

  • Test your Kubernetes cluster. For this, within the master node/VM (not as root, but back as the vagrant user) try the following commands:

     

    kubectl cluster-info

    kubectl get nodes

    kubectl get pods --namespace=kube-system


 

For more information, please refer to the original Oracle Git repo readme file.

 

 

Option 2: Automate the provisioning of Kubernetes in the Oracle Public Cloud (using Terraform)

 

For this option, we are going to use Terraform to provision compute (IaaS) with Kubernetes on the Oracle Public Cloud. If you have already installed Terraform locally or are using a Build Server with Terraform installed, feel free to skip the next section. Otherwise, if you don’t have Terraform installed and don’t want to install it yourself, you can use a Vagrant Box that I have already put together.

 

If you want to use a Vagrant Box that I previously put together that auto-installs Terraform:

 

  • Clone my devops repository:

     

    git clone https://github.com/solutionsanz/devops

     

  • Move into the KubernetesDevEnv directory:

     

    cd devops/KubernetesDevEnv

     

  • Now, start your vagrant box:

     

    vagrant up

     

    Note: Give it some time the first time. It will download the Ubuntu Box and install all dependencies. Subsequent times will be much faster.

  • Once it finishes, as per the bootstrap process, your Vagrant VM is going to come with Terraform installed and ready to go.
  • Vagrant ssh into it.

     

    vagrant ssh

     

  • Validate that Terraform is installed properly:

     

    terraform --version

     

  • Now, we are going to use Terraform to install and configure Kubernetes on Oracle Compute Cloud. For this, we are going to git clone another repository that Cameron Senese has put together for this exact purpose.

     

  • First, register an account at the Oracle Container Registry (OCR). Be sure to accept the Oracle Standard Terms and Restrictions after registering with the OCR. The installer will request your OCR credentials at build time; registration with the OCR is a dependency for the installer to be able to download the containers that will be used to assemble the K8s control plane.

     

  • Git clone Cam’s repo:

     

    git clone https://github.com/cameronsenese/opc-terraform-kubernetes-installer

     

  • Move to the opc-terraform-kubernetes-installer directory

     

    cd opc-terraform-kubernetes-installer

     

  • Initialise Terraform:

     

    terraform init

     

  • Apply the Terraform plan:

    terraform apply

     

  • At this point the configuration will prompt for target environment inputs. Please refer to the original Cameron Senese Git Repo Readme file if any of them is unfamiliar to you.

     

  • Depending on the number of modules selected to be installed, the whole provisioning might take up to 15 minutes to complete.

     

  • You can SSH into your new VM by using the sample private key provided by default under the ssh folder.

     

  • Once inside the new VM, make sure the cluster is running properly by trying the following commands:

     

    kubectl cluster-info

    kubectl get nodes

     

 

Option 3: Manually Install and configure Kubernetes (with minikube) on an existing environment

 

Install Kubectl Manually (Optional)

 

For more information on installing Kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl

 

Install Minikube Manually (Optional)

 

 

  • If you wish to stop your Kubernetes cluster:

     

    minikube stop

Congratulations!!! Regardless of the approach that you took, your Kubernetes Development environment is ready to go.

If you want to also learn how to use it, read this other blog that shows you the basics of Kubernetes by deploying a HelloWorld Application.

 

Curating your Kubernetes cluster with other Cloud Native Technologies (Optional)

 

Recently, the CNCF has been pushing various cloud-native projects that are building great momentum: technologies such as service meshes (Istio, Envoy), Grafana, Prometheus, Zipkin, etc.

For this reason, and in alignment with a recent Kubernetes workshop that we delivered in Australia and NZ, we provide a quick way to curate a local Kubernetes cluster, assuming you chose Option 1 or 3 from this blog (kubeadm on VirtualBox VMs, or minikube on a physical host).

Note: If you chose Option 2, i.e. using Terraform to provision your Kubernetes cluster, then you don’t have to follow these steps. Your Kubernetes cluster is already curated with all these open source technologies.

Once you have your Kubernetes cluster up and running, follow the next steps to deploy these open source technologies.

  • Go to the master node or wherever you have kubectl installed.
  • Git clone the following repo:

     

    git clone https://github.com/solutionsanz/opc-terraform-kubernetes-installer

     

    Note: If git is not yet installed, do a quick: sudo [yum|apt-get] install git -y

     

  • Change directory inside the repo

     

    cd opc-terraform-kubernetes-installer

     

  • Set execution privileges to curating script:

     

    chmod 755 curate-k8s.sh

     

  • Run the script by specifying true or false for each component, depending on whether you want to install the following open source components:

     

    Order: [MonitoringDashboards SocksShopDemo CheesesDemo ServiceMeshDemo]

     

    E.g.

     

    ./curate-k8s.sh true true true true

     

  • After the script execution, you should be able to see all specified services running inside your Kubernetes cluster:

     

    kubectl get services --all-namespaces

Notice that we have a variety of Service types:

  • ClusterIP: These services allow intra pod/container communication inside the cluster.
  • NodePort: These services come with ClusterIP plus an assigned port on each Worker Node for external consumption
  • LoadBalancer: If running on a Cloud provider, an external Load Balancer as a Service (LBaaS) will be mapped to this service
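To make the three types concrete, here is a minimal Service manifest sketch showing where the type and ports fit. The app name and port numbers are illustrative, not taken from the curated cluster.

```yaml
# A minimal NodePort Service sketch (names and ports are illustrative).
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort          # change to ClusterIP or LoadBalancer as needed
  selector:
    app: hello-world      # matches Pods labelled app=hello-world
  ports:
    - port: 80            # the ClusterIP port inside the cluster
      targetPort: 8080    # the container port
      nodePort: 30080     # exposed on every worker node (30000-32767 range)
```

With `type: ClusterIP` only the first two ports apply; with `type: LoadBalancer` the cloud provider allocates an external LBaaS endpoint in front of the same ports.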

Ideally, at this point you would configure Ingress services for all the services you wish to expose outside the Kubernetes cluster, fronted by an LBaaS from a Cloud vendor for easy external consumption.

For Dev purposes, working locally on a Mac or PC, if you are running this Kubernetes cluster as Option 1 (Vagrant-based VirtualBox VMs), you might need to open up the assigned ports for those NodePort/Ingress services.

For example, in the output above, the Traefik ingress service is mapped to port 80 on 10.103.63.35.

However, another way to quickly “hack” access to your internal services, even those of type ClusterIP, is to establish SSH tunnels that redirect traffic from the host machine to the specific service’s internal IP address and port.

 

For example, let’s assume that we want to gain access to the WeaveScope dashboard UI. Based on the configuration described above, I can:

 

 

  • If using Linux/MacOS, you can simply set up an SSH tunnel from a terminal on your host:

     

    ssh -L 180:10.108.222.2:80 vagrant@127.0.0.1 -p [2200|2201] -i private-key

     

    Let’s analyse the previous command:

     

    • 180: a random port I chose on my host machine through which all traffic will be SSH-tunnelled.
    • 10.108.222.2:80: the internal IP address and port to which I want to route all traffic. In this case it is the endpoint for the WeaveScope dashboard UI.
    • vagrant@127.0.0.1 -p 2200: here I need to target either of the 2 worker nodes. If using the VirtualBox VMs, there is out-of-the-box port forwarding already in place that maps 2200 to 22 for worker node 1 and 2201 to 22 for worker node 2. Either will work, as the Kubernetes services are network constructs running on each of the worker nodes.
    • -i private-key: references the SSH private key (located under $GIT_REPO_LOCATION/vagrant-boxes/Kubernetes/.vagrant/machines/worker1/virtualbox/private_key)

     

  • If using PuTTY, you would first need to convert the Vagrant SSH private key into PPK format (you can find the private key on your host machine under: $GIT_REPO_LOCATION/vagrant-boxes/Kubernetes/.vagrant/machines/worker1/virtualbox/private_key).

     

    Then you would need to establish a tunnel configuration in your PuTTY session.

    Finally, open a browser window on your Host machine and point it to localhost:[YOUR_CHOSEN_PORT], e.g. localhost:180.
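As a side note, if you have kubectl access from your host, `kubectl port-forward` can achieve a similar result without hand-building SSH tunnels. The service name and namespace below are hypothetical (check yours with `kubectl get services --all-namespaces`), and the command is echoed rather than executed, since it needs a live cluster.

```shell
# Hypothetical service and namespace -- substitute your own.
SVC="weavescope-app"
NS="weave"

# Dry run: echo the port-forward command instead of executing it.
# For real use, drop the echo; local port 4040 then proxies to service port 80.
echo kubectl port-forward "service/$SVC" 4040:80 --namespace "$NS"
```

Unlike the SSH tunnel, this works for ClusterIP services too, because the traffic is routed through the Kubernetes API server rather than a worker node.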

 

 

I hope you found this blog useful. If you have any questions or comments, feel free to contact me directly at https://www.linkedin.com/in/citurria/

Thanks for your time.

Your Place or Ours

Sometimes you just want to build a local environment on your own equipment simply because it’s quick and easy. But you soon realise that other people need access and resources get a bit tight (memory, CPU, etc). That’s when it makes sense to move it from your place into the cloud.

Just recently I realised how useful Oracle VirtualBox’s new export feature is for migrating local VMs into Oracle Public Cloud Infrastructure – Compute Classic. Oracle VirtualBox’s new export formats give me the ability to easily migrate images to the Oracle Public Cloud, where I can scale my environments as required.

Earlier this week I was building a new Oracle Identity and Access Management development environment on my laptop. This worked well from an initial build and configure perspective but there comes a time when I need to make this environment available to my Developers, Testers and other stakeholders. Running this image continuously on my laptop quickly becomes impractical even for development teams.

Continue reading “Your Place or Ours”