An automated approach to provisioning the Oracle Container Engine for Kubernetes

I was reflecting recently on how IT tools and productivity aids often allow us to make a mess very quickly. There are often some underlying basics that need to be considered before using the productivity tool in order to get a sustainable outcome. As the old adage goes, a fool with a tool is still a fool!

I just read a great blog posted by my colleague Ali Mukadam. He has been spending some time exploring a number of interesting technologies including the Oracle Container Engine. For those unaware, the Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) is a fully-managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud. I have played with this a little and as a technology geek I really love it.

Oracle does provide a Quick Create option to help you get up and running quickly; however, you will often need to consider the wider IT landscape, such as where this service fits into your overall network topology, in order to assess how you want to lay out your network. To that end, Ali has developed a toolkit to automate the provisioning of OKE on Oracle Cloud Infrastructure.

You can check out his blog post here … One does not simply deploy Kubernetes to the cloud.

First experience with Oracle Cloud Infrastructure Registry (OCIR)

In my early experiences with Docker, I successfully containerised a few simple Node.js applications as per some of my older blog posts (refer to Exploring GitHub, DockerHub and OCCS and the MedRec Hands-On Labs section). As is often the case in the modern IT landscape, some software (eg OCCS) falls out of favour as more industry support rallies around new capabilities such as Kubernetes. Oracle has thrown its support behind Kubernetes and has brought to market a capability called Oracle Container Engine for Kubernetes (OKE). OKE can be described as “a developer friendly, container-native, and enterprise-ready managed Kubernetes service for running highly available clusters with the control, security, and predictable performance of Oracle’s Cloud Infrastructure.”

A benefit of leveraging this capability is that you don’t have to install, configure and patch the Kubernetes environment; you leave that to Oracle, which allows you to focus on using the Kubernetes capabilities to deploy and run your container native applications. Essentially, with OKE you get the latest Kubernetes updates, which helps you remain compatible with the CNCF ecosystem without the management and administrative overhead. OKE is integrated with your Oracle Cloud Infrastructure tenancy, and the good news is that Oracle doesn’t charge for OKE; you simply pay for the infrastructure you use for worker nodes and any storage requirements that you need to support your containerised application deployments.

In addition to OKE, Oracle also provides a private registry known as Oracle Cloud Infrastructure Registry (OCIR) for your container images. OCIR is “a highly available private container registry service for storing and sharing container images within the same regions as the deployments. An integrated, performant platform offering, where users can store their container images easily. Access to push and pull images with the Docker CLI, or images can be pulled directly into a Kubernetes deployment.” You can use Oracle Cloud Infrastructure Registry as a private Docker registry for internal use, pushing and pulling Docker images to and from the registry using the Docker V2 API and the standard Docker command line interface (CLI). You can also use it as a public Docker registry, enabling any user with internet access and knowledge of the appropriate URL to pull images from public repositories. In each region that is enabled for your tenancy, you can create up to 500 repositories, and each repository can hold up to 500 images. In this post I have recorded the steps to interact with OCIR.

One of my colleagues previously set up a Docker registry on Oracle Cloud Infrastructure-Classic, and at the time I remember being glad he was doing it, as it seemed a cumbersome task. As an application developer I really don’t want to care about installing, configuring and managing registry services; I simply want to use them. Of course there are capabilities available on the internet, such as docker.io (Docker Hub), where I could choose to make my docker images private or public, but there are times where I may want a private registry within my Cloud tenancy. In this brief post I want to share my first experience with the Oracle Cloud Infrastructure Registry and provide a worked example for anyone interested to follow.

Those who have used Docker would know that in order to build a docker image you have the choice of starting the process FROM scratch or getting a head start by utilising a suitable pre-built base image. In other words, you can choose to quickly and easily take advantage of what someone else has already developed and use that as your starting point. In practical terms, this might mean using an image that someone else has developed and shared via Docker Hub (docker.io). The image might, for example, provide an operating system at a particular version (eg Ubuntu 14) with a particular version of Node.js already installed. Simplistically, taking this approach allows you to focus purely on adding your application on top of this base image.

The base images that I have referred to are commonly pulled from an image registry (by default the registry used is docker.io). Once you have added steps into your Dockerfile to ADD / COPY your application code into the image, essentially extending the base image, the newly built image containing your application will be available within your local docker image store. However, you may want to share that image with others, and to do that you will need to publish (push) your image to a Docker registry. Once the image has been pushed to a registry, others (with access) can pull it from there using the docker pull command.
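To make this concrete, here is a minimal sketch of such a Dockerfile for a Node.js app. This is purely illustrative – the node:8 base tag, the exposed port and the server.js entry point are assumptions for the sketch, not taken from any project mentioned here:

FROM node:8
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "node", "server.js" ]

The FROM line is where you inherit the base image; everything after it is your application being layered on top.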

The following steps cover what I did in order to utilise the private registry capability in the Oracle Cloud known as OCIR. The Oracle OCIR documentation walks through the steps; all I have done is overlay my actual example. It is assumed you have access to an Oracle Cloud Infrastructure tenancy and are in a group that has permissions / a suitable access policy to interact with OCIR.

Step 1 – Get an Authentication token

As per the Oracle documentation, in order to push / pull images to / from the registry you need to create an Auth token. Simply click on the avatar in the top right and select User Settings.

Assuming you don’t already have an Authentication Token created, click the Auth Tokens link. In the following screenshot your username is shown (mine is obfuscated). If you have a federated identity tying you back to Oracle Identity Cloud Service then you will see a prefix of oracleidentitycloudservice/ before your username. This is important to note, as the format of your connection string to OCIR will vary slightly.

Click the button to Generate Token.

Enter a description for the token and press the button to Generate Token.

Important Note: when you press Generate Token, the Auth token will be displayed. You must copy it and store it somewhere secure, eg a password wallet, as you won’t see it again but you will definitely need it.

You should see that you now have a token listed in your Auth Tokens; however, if you click the show link as per the screenshot below, it will only display the Oracle Cloud ID (OCID) for your user and not the actual generated token shown in the previous screenshot.
It is the Auth token that you will use as the password value when connecting to OCIR.

Step 2 – Access the OCIR service

In the OCI Console, click the hamburger menu in the top left and then select Developer Services –> Registry.

Note: if you see the following error it is because you don’t have the required privileges.

Once you have the required access privileges to the registry, you can optionally create a repository using the Registry console UI: simply click the Create Repository button, enter a name, and then create a Readme to describe the purpose of the repository if you want.
However, it is also possible to create the repository from the Docker CLI, which is covered in the next step of this blog post.

Step 3 – Login to the Registry using the Docker CLI

In order to log in to the registry, you need to know which region your OCIR instance resides in. There is a list of Region Codes for OCIR from which you will need to find the one that matches your OCIR location; the registry endpoint takes the form <region-code>.ocir.io.
For the Ashburn US region, the code is iad and therefore the registry I had to log in to was iad.ocir.io.
In my SSH session I entered:

docker login iad.ocir.io

Now enter your username and use your Auth token as the password (remember, you have this stored in a safe place – right?).

Note: if your tenancy is federated with Oracle Identity Cloud Service, use the format <tenancy-name>/oracleidentitycloudservice/<username>; otherwise enter your username in the format <tenancy-name>/<username>.
Assuming a tenancy name of mytenancy-dev and a user of mickey.mouse@movies.com, you would use mytenancy-dev/oracleidentitycloudservice/mickey.mouse@movies.com in the federated case, or mytenancy-dev/mickey.mouse@movies.com otherwise.
As mentioned previously, your username is shown under User Settings (it is also shown against any repositories you have created); check there if you are unsure whether your user is federated with Oracle Identity Cloud Service.
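Putting that together with the fictitious federated user above, the login exchange would look something like this:

docker login iad.ocir.io
Username: mytenancy-dev/oracleidentitycloudservice/mickey.mouse@movies.com
Password: <paste the Auth token generated in Step 1>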

In order to view my local docker images I entered:

docker images

I then tagged my image of choice (apisforsapol) to associate it with the OCIR registry in the iad region. To do this I used the following command:

docker tag apisforsapol:0.1 iad.ocir.io/mytenancy-dev/apisforsapol:0.1
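In general terms the command takes the form (substitute the placeholders with your own values):

docker tag <local-image>:<tag> <region-code>.ocir.io/<tenancy-name>/<repo-name>:<tag>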

To see what has happened I entered the following:

docker images

Note that I have two repository references to the same image ID, one for my local (internal) docker repository and the other for OCIR.
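The output will look something like the following – the image ID, age and size here are invented for illustration; the point is that both repository entries share the same image ID:

REPOSITORY                               TAG   IMAGE ID       CREATED       SIZE
apisforsapol                             0.1   1a2b3c4d5e6f   2 weeks ago   690MB
iad.ocir.io/mytenancy-dev/apisforsapol   0.1   1a2b3c4d5e6f   2 weeks ago   690MB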

As I am logged in to the docker registry, I can now simply push my image to it using the following command.

docker push iad.ocir.io/mytenancy-dev/apisforsapol:0.1

Note: if you have an old version of docker installed you will see a message like the one below. To address the error you will need to upgrade to a more recent docker version, eg docker-ce.

 

If you have a suitable version of docker installed, then the push should just work as per the screenshot below.
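Once the push completes, anyone with access to the repository can retrieve the image (after their own docker login) with the corresponding pull command:

docker pull iad.ocir.io/mytenancy-dev/apisforsapol:0.1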

Step 4 – Verify Image Push into your OCIR repository

Navigate back to the OCI Console and access the OCIR page. You should now see that you have a repository created.

If you click on the repository you will see the current tag of your image – in my case, 0.1.

If you now click on the tag you will see the layers that have been pushed across for your image.

Finally, under Registry Settings you can see where you can set up your custom image retention policies, assuming you are not happy with the global image retention policies.

That brings me to the end of this post. My planned next step will be to reference my image held in OCIR from my deployment configuration on Oracle Container Engine for Kubernetes. I hope you found this helpful.

Connect Dockerised Instant Client to Autonomous Data Warehouse

With all the recent exciting releases of Oracle Autonomous PaaS Services, I wanted to explore some of the client connectivity options to work against the Oracle Autonomous Data Warehouse (ADW).
In my cloud subscription I provisioned an instance of ADW which took less than two minutes from start to finish – terrific, now I am ready to leverage the functionality. If you want to know the steps to provision an ADW instance check out this blog post – https://redthunder.blog/2018/07/02/teaching-how-to-get-started-with-oracle-autonomous-data-warehouse-cloud-service/

Obviously an empty data warehouse isn’t particularly useful, so one of the first things I wanted to do was to connect a SQL client to the ADW instance so that I could load some data. Initially I used Oracle SQL Developer to load data into my ADW instance. I had staged my Excel data files into an Oracle Cloud Object Storage container and then referenced them in my SQL code as External Tables. Carlos has already blogged the steps required for this in https://redthunder.blog/2018/07/02/teaching-how-to-get-started-with-oracle-autonomous-data-warehouse-cloud-service/ . If you follow the steps you will quickly get your ADW instance populated with your data. In fact, for the demo scenario my ADW instance was now populated with some data (approx. 1 million rows of Sales data and the associated related dimensions – Product, Customers etc).

My next step (and the subject of this blog post) was to use the Oracle Instant Client in order to query the loaded data. Of course I could easily have viewed the data inside SQL Developer, but I wanted to try some other approaches. Often in Proof of Concepts there is a need to quickly spin up a tool to create, retrieve, update and delete data. Anyone who has used the Oracle Database would be familiar with the SQL*Plus client, which is included as part of the Oracle Instant Client. For those who are not familiar with the Oracle Instant Client, the Oracle website describes it as follows:

“Free, light-weight, and easily installed Oracle Database tools, libraries and SDKs for building and connecting applications to an Oracle Database instance. Oracle Instant Client enables applications to connect to a local or remote Oracle Database for development and production deployment. The Instant Client libraries provide the necessary network connectivity, as well as basic and high end data features, to make full use of Oracle Database. It underlies the Oracle APIs of popular languages and environments including Node.js, Python and PHP, as well as providing access for OCI, OCCI, JDBC, ODBC and Pro*C applications. Tools included in Instant Client, such as SQL*Plus and Oracle Data Pump, provide quick and convenient data access.”
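In rough terms, the end state I was aiming for was to run SQL*Plus from a containerised Instant Client against ADW, something like the following sketch. This is illustrative only: it assumes an image built locally and tagged instantclient, the ADW credentials wallet unzipped into a local wallet directory with sqlnet.ora pointing at that directory, and myadw_high being a placeholder service name from the wallet’s tnsnames.ora:

docker build -t instantclient .
docker run -it --rm \
  -v "$(pwd)/wallet:/wallet" \
  -e TNS_ADMIN=/wallet \
  instantclient sqlplus admin@myadw_high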

I used a Vagrant box / VirtualBox to avoid having to install development tools such as the Oracle Instant Client directly on my laptop operating system. I found an existing vagrant box that provided me with an Oracle Linux base that also included Docker. This vagrant box image allowed me to quickly spin up a base environment, which in turn allowed me to focus on the steps to run the Oracle Instant Client inside a Docker container inside the VirtualBox environment (sounds like a cheesecake recipe – lots of layers). The Dockerfile I used was based on the Oracle Instant Client Dockerfile forked from the official Oracle Docker Images project, with some modifications for specifics around connecting to an Autonomous Data Warehouse instance. Continue reading “Connect Dockerised Instant Client to Autonomous Data Warehouse”

Automate the Docker Build for a Microservices app and deploy to the Oracle Container Cloud Service

In a previous series of blog posts titled Exploring GitHub, DockerHub and OCCS, I walked through how to set up your development environment, install Docker, create a git project, link Docker Hub and GitHub accounts, trigger a build of a docker image on Docker Hub, and then deploy and run the docker image using the Oracle Container Cloud Service.

In this blog post I wanted to do something similar, but this time focus on a more simplified process so that a less technical user could follow the steps without the need for a development environment. The way I have structured this post means that you can follow every step using a web browser. I have attempted to focus more on consuming the dockerised application that a developer has already built. For the purpose of this exercise I am using the example Medical Records application that some of you may be familiar with from my previous blog posts. For those not familiar with the application, it consists of a Node.js web application that interacts with a MongoDB database. The web application surfaces a number of REST APIs (eg GET/POST Patient, GET/POST Physician, GET/POST Observations etc), and uses Swagger UI so that the end user can quickly interact with the REST APIs.

If you want to follow the steps in this blog post you will need a login for GitHub and Docker Hub and also a subscription or trial account for the Oracle Cloud in order to use the Oracle Container Service Classic (OCCS).

For those unfamiliar with what Oracle currently offers in the container space, Oracle has brought two offerings to market:

Container Service Classic provides an easy and quick way to create an enterprise-grade container infrastructure. It delivers comprehensive tooling to compose, deploy, orchestrate, and manage Docker container-based applications on Oracle Cloud Infrastructure for Dev, Dev/Test, DevOps, and Cloud Native use cases. The second container offering, Oracle’s Container Native Platform, is designed for DevOps teams to build, deploy, and operate container-based microservices and serverless applications using open source tools. It is an end-to-end container lifecycle management suite, including a managed Kubernetes service for creating and managing clusters, a private registry for storing and sharing container images, and a fully integrated CI/CD service to automate and manage deployments, all powered by enterprise-grade cloud infrastructure.

For my walkthrough below, the focus is on consumption of the Docker image using the first-generation OCCS offering; I hope to blog about a similar exercise using the Oracle Container Native Platform.

It is assumed that an OCCS service instance has already been provisioned. Previous blog posts document the steps required to Provision an OCCS Service Instance and also Using the Oracle Container Cloud Service . Of course, always check the product documentation to validate that the steps recorded are still current.

The flow of the ten steps in the diagram below covers the approach I took. Please note that most of the steps are a one-off setup activity, with the productivity benefits coming on every subsequent build.

Create a GitHub Account

GitHub is a very popular hosted software source control service. As there is an existing git project for the ankimedrec-apis application, you will take a copy of this existing project and add it into your project library. If you don’t already have a GitHub account then sign up: navigate in your browser to https://github.com.

Simply fill in the required mandatory details and press Sign Up for GitHub.

I chose the “Unlimited public repositories for free” option and pressed the Continue button.

You can then press the Submit button to submit your request for a GitHub account.

Click the “Start a project” button.

Before you can continue, you should see that a further step is required: you will need to verify the email address associated with your GitHub account sign-up.

Login to your email account and click the verify link in the email you received from GitHub.

In your browser session for GitHub account creation, you should see that “Your email was verified”.

With your GitHub account set up and your email verified, you will now create a new repository. Enter a name for your first Git repository, eg myfirstproject, and enter an optional description. Choose Public, and check the box to initialise the repository with a README file.

Click the green Create Repository button.

Soon afterwards you will see that a Git repository has been created under your login (eg jsmith001/myfirstproject).

Note that your repository has only a single README.md file. Files with an extension of ‘.md’ are Markdown files. According to Wikipedia, Markdown is a lightweight markup language with plain text formatting syntax. It is designed so that it can be converted to HTML and many other formats using a tool by the same name. Markdown is often used to format readme files, for writing messages in online discussion forums, and to create rich text using a plain text editor.

Obviously you can now upload other files to your first Git repository, but for the remaining steps in this ‘How-To’ you will focus on creating a fork of the ankimedrec-apis repository that contains the MedRec application project files. According to a GitHub Help article, “A fork is a copy of a repository. Forking a repository allows you to freely experiment with changes without affecting the original project. Most commonly, forks are used to either propose changes to someone else’s project or to use someone else’s project as a starting point for your own idea.”

Fork the solutionsanz/ankimedrec-apis repository

To create a fork of the ankimedrec-apis repository owned by the user solutionsanz, you will need to do the following.
While you are still logged into github.com, point your browser to https://github.com/solutionsanz . You should now be able to see all the public Git repositories owned by the user solutionsanz.

Next, click on the ankimedrec-apis link.

Click the Fork button in the top right of the screen. This will start the copy of the repository from solutionsanz to your GitHub account, which will take less than a minute.

After about a minute you should now see that a forked copy of the ankimedrec-apis project has been created in your GitHub account.

OK, so far so good: you should now have a GitHub account and have forked the ankimedrec-apis project into your GitHub account. You can see that there are a number of files in the project.

Click the Dockerfile to view the steps that a very kind developer has defined in order to containerise the Node.js application. Also note the occs-stack.yml file, which contains the docker compose / stack definition of the web application and the MongoDB database that you will use later in this tutorial to define a stack within OCCS.

Create Docker Hub Account

While GitHub provides a place to store and manage source code for your projects, Docker Hub is a place used to build, store and retrieve docker images.
To create a Docker Hub account for your user, simply point your browser to https://hub.docker.com/ , fill in the details under ‘New To Docker?’, verify that you are not a robot and click the Sign Up option.

You should get a message directing you to your email account for verification.

Login to the email account associated with your Docker Hub signup and confirm your email address.

Once you have confirmed your email address, you should be ready to login to Docker Hub.

Simply use the Sign-in option on the Docker Hub site, and enter your credentials.

Enter your username and password and click Login.

Assuming you have successfully signed in to Docker Hub, you have a little more setup to do.

Link GitHub to your Docker Hub Account

Click on the drop down arrow next to your username.
The dropdown should reveal a sub-menu of options; click Settings.

Click on the third tab across the top of the page, Linked Accounts and Services. You will link the GitHub and Docker Hub accounts such that certain activity in your GitHub account will trigger an automated build of a Docker image (containing the MedRec project) and then store the built docker image in Docker Hub, from where it can be pulled into your OCCS environment.

Note that you can link to both GitHub and Bitbucket. Bitbucket provides a web-based software version control system similar to GitHub.

Click the Link GitHub icon.

Select the Public and Private (Recommended) option.

Click the Authorize docker button to set up the authorisation trust relationship (OAuth) between GitHub and Docker Hub.
Note: If you weren’t already signed into GitHub you would be prompted to do so.

As I was already authenticated to GitHub, I could simply authorise Docker Hub to be part of the OAuth trust.

GitHub will request your password confirmation before completing the OAuth setup.
Enter your GitHub password and press Confirm Password.

You should see the above image highlighting that your Docker Hub account is now linked to your GitHub account.

Create Automated Build Job to build Docker image

Creating an automated build job is a very simple task, and the official Docker documentation provides a good explanation:

You can build your images automatically from a build context stored in a repository. A build context is a Dockerfile and any files at a specific location. For an automated build, the build context is a repository containing a Dockerfile.

Automated Builds have several advantages:

  • Images built in this way are built exactly as specified.
  • The Dockerfile is available to anyone with access to your Docker Hub repository.
  • Your repository is kept up-to-date with code changes automatically.

Automated Builds are supported for both public and private repositories on both GitHub and Bitbucket.

As I want an automatic build of my docker image triggered by GitHub events, I need to define my Automated Build.
I already have a Dockerfile in my ankimedrec-apis repository, so once I configure the automated build I should be good to go.

In Docker Hub, under the Create sub-menu I chose Create Automated Build. This is where I specified the project git repository that I want Docker Hub to subscribe to for events.

I clicked the Create Auto-build GitHub icon.

From the list of repositories associated with your GitHub account, choose the git repository you forked previously – ankimedrec-apis.

In the Create Automated Build page you can specify a repository name for the docker image, but I suggest that you accept the default repository name so that it matches the GitHub repository name. You can enter a description such as “Docker build for ankimedrec-apis repository.”
Note: the description field has a 100 character limit.

Click the Create button.

Next, configure Docker Hub to do an automated build based on changes to the Master branch of the git repository.

The steps required are very well documented on the Docker Hub site but I have recorded them here so that this blog post is pretty much self-contained. Click the Build Details tab.


Note that no build of a docker image has been actioned yet.

Click on the Build Settings Tab.

Builds of a docker image can be triggered automatically, based on a commit/push to the git repository that is linked to the Docker Hub automated build.

Builds can also be triggered manually by pressing the Trigger button on the Build Settings tab.

Click the Trigger button.


Once a build has been triggered (either manually or automatically) the status of the build job is made available in the Build Details tab.


Within a few minutes, you should see a newly built image appear on the Docker Hub image Dashboard.
Periodically refresh the browser page to see status updates.
Initially the status will be Queued, then Building, as per the screenshots below.




A few bullet points gleaned from the official documentation cover how all this magic works:

  • During the build process, Docker copies the contents of your Dockerfile (from your Git repo) to Docker Hub.
    The Docker community (for public repositories) or approved team members/orgs (for private repositories) can then view the Dockerfile on your repository page.
  • The build process looks for a README.md in the same directory as your Dockerfile.
    If you have a README.md file in your repository, it is used in the repository as the full description.
    If you change the full description after a build, it’s overwritten the next time the Automated Build runs.
    To make changes, modify the README.md in your Git repository.
  • You can only trigger one build at a time and no more than one every five minutes.
    If you already have a build pending, or if you recently submitted a build request, Docker ignores new requests.
  • The Build Settings page allows you to manage your existing automated build configurations and add new ones.
    By default, when new code is merged into your source repository, it triggers a build of your DockerHub image.

If you click on the Repo Info tab within Docker Hub after the initial Docker build has successfully completed, you will see that the README information is now displayed under the Full Description section.


Update a forked ankimedrec-api project file to trigger an automated build job

Next you will update one of the files (occs-stack.yml) in your ankimedrec-apis GitHub repository. The reason for this update is twofold:

  1. The above file currently references an incorrect docker image (currently shows it is owned by barackd222).
  2. The update and subsequent commit/push will trigger a fresh build of the docker image

In your browser, navigate to your GitHub repository.

Click on the link to the occs-stack.yml file.

Modify the line containing ‘index.docker.io/barackd222/ankimedrec-apis:latest’ to reflect your Docker Hub user (eg jsmith001) instead of barackd222.
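For example, assuming your Docker Hub user is jsmith001, the image reference line would change from

image: index.docker.io/barackd222/ankimedrec-apis:latest

to

image: index.docker.io/jsmith001/ankimedrec-apis:latest

(the image: key is how compose-style stack files reference the image to pull; the rest of the file can be left untouched).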

Note: you will need to copy the contents of this file later. When you do, click the Raw button so that the copy doesn’t include the line numbers.

Scroll down the page and enter suitable short and extended descriptions within the Commit Changes area.

Press the Commit changes button. This will update the Master branch of your GitHub repository. Soon after this happens, you should see a new build of the docker image containing the ankimedrec-apis application show up on Docker Hub.
Note: best practice would dictate that we should have development branches and not work directly on the Master branch, but in an attempt to keep this blog post as simple as possible I have cut corners.

In Docker Hub check the build status of your image under the Build Details tab.

Once you get a successfully built image you are now ready to create a Stack definition within the Oracle Container Cloud Service to pull down and run the docker image you just created.

With your Docker image built, login to your Oracle Cloud Account and access the MyServices Dashboard.

Click the hamburger icon on the bottom right of the Container Classic box.

Click Open Service Console, and then click the hamburger icon on the right of the Oracle Container Cloud Service Instance and choose Container Console.
In my example my Service was named dmrOCCS.

Login to the OCCS Container Console using the specified username / password provided by your Cloud Service Administrator.
Note: you will get a Security Exception as at this stage we don’t have a valid certificate recognised by the browser for our OCCS instance.
In Chrome browser the Security Exception will appear as follows.

Click Advanced to bring up the Proceed to <IP address> (unsafe) link.

Click the Proceed link.

Enter your username (eg admin) / password for the OCCS Container Console and press the Login button.

The Container Console should be displayed as per below. The console allows the user to create/edit/browse Services and Stacks, and deploy the Service and Stacks of interest to the Container Service runtime.

To clarify some of the terminology used within the console, I have included the following definitions from the official Docker documentation:
“An image is a lightweight, stand-alone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and config files.
A container is a runtime instance of an image – what the image becomes in memory when actually executed. It runs completely isolated from the host environment by default, only accessing host files and ports if configured to do so.
In a distributed application, different pieces of the app are called “services.” For example, if you imagine a video sharing site, it probably includes a service for storing application data in a database, a service for video transcoding in the background after a user uploads something, a service for the front-end, and so on. Services are really just “containers in production.” A service only runs one image, but it codifies the way that image runs – what ports it should use, how many replicas of the container should run so the service has the capacity it needs, and so on. Scaling a service changes the number of container instances running that piece of software, assigning more computing resources to the service in the process.
A stack is a group of interrelated services that share dependencies, and can be orchestrated and scaled together. A single stack is capable of defining and coordinating the functionality of an entire application (though very complex applications may want to use multiple stacks).”

The occs-stack.yml file was included in the ankimedrec-apis repository as it defines the stack (web application service and MongoDB database service) that will be deployed to OCCS. The images behind these services need to be pulled from a registry in order for you to be able to deploy them. An example registry definition for the Docker Hub registry is included in the OCCS instance when it is created, but you will create a registry definition for Docker Hub using your own Docker Hub credentials.

Define a Registry in OCCS

Click Registries in the left menu.

You should see a link to the Docker Hub image registry as part of the out-of-the-box experience within OCCS.
You will essentially create a modified copy of this registry definition in order to target your image that was created previously and stored in Docker Hub.
As I have used this environment before, my credentials were already added and the details validated, so we are good to go.

Enter your credentials as per below.

Press the Validate button.

Press Save to save the registry configuration.

If the validation step worked successfully, then we are in good shape to pull the previously built docker image from Docker Hub when we deploy our stack.
A stack is a composition of services; in our case the sample application (ankimedrec-apis) consists of a Node.js application service and a MongoDB database service.
After you have successfully deployed the stack within OCCS, you will end up with two docker containers, one for the web application tier and one for the database tier.
The stack configuration will deal with the networking between the two containers.
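For orientation, a compose-style stack file for this pairing would look roughly like the following. This is an illustrative sketch only – use the actual occs-stack.yml from your forked repository, as service names, ports and the exact schema may differ:

version: "2"
services:
  medrec-app:
    image: index.docker.io/jsmith001/ankimedrec-apis:latest
    ports:
      - "3000:3000"
  mongodb:
    image: mongo:latest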

Click on Stacks in the left hand menu.

Click the New Stack button on the right of screen.

Enter a name for your Stack, eg MyMedRecAPP, then click the Advanced Editor link.

Next, copy the contents of the occs-stack.yml file you edited and committed previously.
Note: use the Raw format when copying from the GitHub repository.

Copy the RAW content and then Paste it into the Advanced (YAML) Stack Editor as per below.

Click Done.

Note: we are only pulling one image from Docker Hub; from the screenshot above you will see that it is the web application tier.
Out of the box, the mongo image is defined as a service within OCCS and the image is available locally.

Click the Save button.

Your newly created stack definition should appear as per the screenshot below.

Now that you have created a Stack, your next step is to deploy the Stack to the Container Service.

Click the green Deploy button adjacent to your Stack definition.

Note that the Oracle Container Cloud Service Classic offers orchestration for the database and application tiers of the MedRec application.

Accept the default values of 1 per-pool (across hosts in this pool) for both the (Node.js) Web Application and (mongoDB) Database Tiers.

You will see that the required docker images are now being pulled from the local (OCCS) repository (eg mongo image) and / or the Docker Hub repository (ankimedrec-apis image).

You might see the screen change colour from green to orange and eventually to green as per screenshot below.
Once the containers have successfully started for both mongo and the web application tier then you will see a green RUNNING indicator on the screen.

With the application successfully deployed and running, the next thing for you to do is to interact with the deployed application.
In the screenshot above you will see the hostname for the web application tier (dmroccs-occs-wkr-3).
Click on the hostname for the web tier in order to get additional information about the host, including its public IP address and any other containers already deployed on the host.
In the example below you will see that the web application and mongodb containers are both deployed and running on worker node 3.

Copy the Public IP address.

In a browser tab enter the following – http://xxx.xxx.xxx.xxx:3000 – where xxx.xxx.xxx.xxx is the public IP address you just copied.

You should now see the Anki-MedRec Application running.

Click the blue GET button adjacent to /physicians; we will execute this to return any Physicians that have been created and stored within MongoDB.

Click the Try It Out button.

Click the Blue Execute button.

Note that the Response Body is an Empty Array.

Click the green POST button adjacent to /physicians, which will allow us to create a new Physician.

Click the Try It Out button.

You can now edit the sample Physician data or leave it as is.

Click the Execute button

You should see that an HTTP-200 (Success) has been returned.

Now repeat the steps for GET /physicians to see if the data is retrieved.
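If you prefer the command line, the same interaction can be driven with curl against the public IP you copied earlier. The JSON body below is illustrative only – use the sample payload shown in the Swagger UI for the real Physician schema:

curl http://xxx.xxx.xxx.xxx:3000/physicians

curl -X POST http://xxx.xxx.xxx.xxx:3000/physicians \
  -H "Content-Type: application/json" \
  -d '{"firstName": "Gregory", "lastName": "House"}'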

Finally, within the Container Console, click on Images to see the image that was pulled from Docker Hub.

As a logical next step, and in order to understand the design of a secured API layer using Apiary.io and how to secure the REST APIs that are part of the AnkiMedrec application using the Oracle API Platform, check out this blog.

I hope you found this useful.

Apiary designed APIs tested using Dredd

APIs are becoming the window to the digital assets of the modern business. Well documented, well governed and easy to use APIs are key to their successful uptake, longevity and associated business success. Yes, I did say well documented. In this instance I am talking about the documentation required to describe the API’s capabilities in a manner that is meaningful for your ultimate audience, the “API Consumers”; however, it will also provide the template for the API Developer to develop their code from. In the modern business climate, we probably don’t want to produce War and Peace; we simply want to take a minimum viable approach to our API documentation. But where would I find a capability that will simplify our task as API Designers, capture the design documentation for our APIs, allow us to do some initial testing to validate the usefulness of our design before any code is cut, and also have the documentation ready for consumption by team members and interested parties using a standards-based approach? Where indeed! Look no further than Apiary.io. Continue reading “Apiary designed APIs tested using Dredd”

Exploring GitHub Docker Hub and OCCS Part 4

In my previous post in this series I covered linking GitHub and DockerHub and configuring the environment such that a build of a Docker image was triggered on updates to GitHub. In this final post of the series I will take you through the steps to pull the image from Docker Hub into OCCS in order to run the application. It should be noted that the image built on Docker Hub in my example is only the web tier that contains my Node.js project (APIs and SwaggerUI). The MongoDB component of my OCCS Stack is pulled directly from Docker Hub when my Stack containing the Web Tier and Database Tier services is deployed to OCCS. Continue reading “Exploring GitHub Docker Hub and OCCS Part 4”

Exploring GitHub DockerHub and OCCS Part 3

In my previous post I described how I created a stack definition including my Node.js web application and a MongoDB service using docker-compose. In this article I will describe the steps I took to link my GitHub and Docker Hub accounts in order to automatically build a docker image triggered by a git push command.

Trigger a Build of the MedRec API Docker Image on Docker Hub

Combining internet / cloud based services such as GitHub and Docker Hub allows developers to experience productivity gains without having to fund a local server to provide this capability. I wanted to explore and experience this for myself.

Link Docker Hub and Git Hub accounts

As I didn’t have a docker account for my user I pointed my browser to docker hub … https://hub.docker.com/ and clicked the SignUp option. Continue reading “Exploring GitHub DockerHub and OCCS Part 3”

Exploring GitHub, DockerHub and OCCS Part 2

In my previous post I detailed how I Dockerised the MedRec app. In this post I will show how I added MongoDB and defined a stack using Docker-Compose.

Add MongoDB layer using Docker-Compose

According to the official docker documentation:

“Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application’s services. Then, using a single command, you create and start all the services from your configuration.”

A single command to create and start all the services in a configuration sounded pretty good to me. I definitely was keen on exploring docker-compose.

Add a docker-compose.yml file

Having proved that my web application runs up, I now needed to address the persistence layer. The above Dockerfile contains the steps to create the required runtime platform for my node app, and installs the node application and package dependencies (as specified in the package.json file) by running npm install. However, if I tried to do a GET or a PUT, my app would fail as it won’t find a MongoDB inside my container. I therefore still need a MongoDB somewhere in my environment to hold my application data. Continue reading “Exploring GitHub, DockerHub and OCCS Part 2”

Exploring GitHub Docker Hub and OCCS Part 1

In my previous post in this series I provided an Introduction describing the high level steps I planned to take.
In this post I will walk through the detailed steps to Dockerise the MedRec application.

Dockerise the MedRec APIs

Git clone the project repository

I used my Windows Surface Pro 4 with Oracle VM VirtualBox installed to host my development VM. I managed to source a VBox image that already had Ubuntu 16.04 and Docker installed, so that helped get me started. In my development environment on my laptop, I created a directory under my home directory named gitprojects and then changed into it:

cd gitprojects
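From there, the clone itself (using the solutionsanz repository referenced in my other posts) would be along these lines:

git clone https://github.com/solutionsanz/ankimedrec-apis.git
cd ankimedrec-apis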
Continue reading “Exploring GitHub Docker Hub and OCCS Part 1”

Exploring GitHub, DockerHub and OCCS – Introduction

As part of our MedRec API playpen initiative we had already developed some REST APIs using Node.js, leveraging MongoDB running on Oracle IaaS as the persistence layer. This post describes what I did to dockerise the MedRec API application and eventually run it on the Oracle Container Cloud Service (OCCS).

The APIs have already been made available for interested parties to interact with via SwaggerUI. Of course developers could develop their own code (or use any REST client) to interact with them. As a team we used a combination of the Oracle Developer Cloud Service (Git repository, Issue Tracker, Build Server etc) and also the public GitHub to provide public access to our project code. As the source code for the Node.js project containing the APIs was pushed to GitHub, I simply did a clone of the Node application code in order to download and run it locally (MedRec API tutorials available here).

The application ran well enough on my local laptop, which was running Ubuntu 16.04, however I really wanted to be able to run the app and MongoDB as a Docker image/stack on my laptop. After I had the application successfully “Dockerised”, I planned to deploy my application stack to the Oracle Container Cloud Service (OCCS). I also wanted to explore the use of the GitHub / Docker Hub integration to build my image on Docker Hub, and then consume it from within OCCS. With the application image available on Docker Hub, I could then pull my image from that source in order to run it up on OCCS.

A good blog can really help bring you up to speed quickly and overcome the inertia of getting started, and I would like to acknowledge the help that Mauricio Payetta’s blog provided me.

In this series of blog posts I plan to retrace what I did during my self-learning. Continue reading “Exploring GitHub, DockerHub and OCCS – Introduction”