Oracle API Platform Cloud Service – Installation Steps of Gateway Node

In this blog I am going to document the installation steps for the Oracle API Platform Gateway Node (version 18.1.3), which is one of the critical components of API Platform Cloud Service.

Oracle provides API Platform Cloud Service as a foundation product for API management. It covers the full API lifecycle, encompassing API design and documentation, API security, discovery and consumption, monetization, and analysis.

Oracle API Platform comprises three major components, each serving a specific purpose:

Management Portal – This is where you create and manage APIs, deploy APIs to gateways, manage gateways, and create and manage applications. You can also deploy APIs and manage gateways with the REST API.

Developer Portal – Application developers subscribe to APIs and get the necessary information to invoke them from this portal.

Gateway Node  – This is the security and access control run-time layer for APIs. Each API is deployed to a gateway node from the Management Portal or via the REST API.

In addition to the above, Oracle also offers Oracle Apiary to quickly design, prototype, document, and test APIs.

Below is the high-level architecture diagram of the API Platform.

Further down in this blog we will discuss only the highlighted component, the API Gateway, and its installation steps, since it is the critical component that processes the run-time load of APIs. To learn more about API Platform Cloud Service and the gateway node, please refer to this link

Below are the high-level steps involved in setting up the gateway node:

  • Create a logical gateway
  • Download the gateway node installer
  • Modify the gateway-props.json file
  • Validate the system requirements for the gateway node installer
  • Create Gateway Manager and Gateway Runtime users
  • Install JDK and set up JAVA_HOME and PATH environment variables
  • Understand the log files and their location
  • Execute APIGateway script with install action
  • Execute APIGateway script with configure action
  • Execute APIGateway script with start action
  • Execute APIGateway script with join action
  • Approve the join request
  • Conclusion

Create a logical gateway-

A Logical Gateway (called a Gateway in the Management Portal user interface) is a JSON object that defines what its registered Gateway Nodes should look like. A gateway node is the physical gateway run-time installation. Gateway nodes can be installed on-premises or in the cloud. Logical gateways and gateway nodes have a one-to-many relationship: many gateway nodes can register to one logical gateway, but a gateway node can register to only one logical gateway. Each gateway node polls the management service at configurable intervals to retrieve the definition of the logical gateway it is registered to. The gateway node is updated to match the logical gateway definition. Because you deploy APIs to logical gateways, and not to gateway nodes, all nodes registered to a gateway have the same APIs deployed with the same policies applied.

Log in to your Management Portal instance, e.g. https://hostname:port/apiplatform/public/login.jsp, then, from the left-hand menu, click Gateways >> Create Gateway

Provide the gateway name and description and click Create


Download Gateway Node Installer-

Once the gateway is created, click on the Nodes option and then on the “Download Gateway Installer” option


A file of about 1.1 GB will be downloaded


Once the installer is downloaded, we need to move it either to an on-premises server or to a machine spun up in the cloud, and unzip it there. In my case I created a VM in Oracle Cloud, created a directory path called /u01/apics, and created four sub-directories beneath it (Java, install, archive, and archive1) containing JDK 1.8, the gateway domain, the unzipped installer, and the zipped installer respectively.
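For reference, here is a minimal sketch of that preparation on the VM. The paths follow the layout described above; the installer zip name and the OS user are placeholders, so adjust them to your environment.

# Create the base directory and the four sub-directories described above
sudo mkdir -p /u01/apics/{Java,install,archive,archive1}
sudo chown -R opc:opc /u01/apics        # 'opc' is the default user on an Oracle Cloud VM; adjust as needed

# Keep the zipped installer in archive1, then unzip it into archive
cp ApiGatewayInstaller.zip /u01/apics/archive1/     # placeholder name for the downloaded installer
cd /u01/apics/archive
unzip /u01/apics/archive1/ApiGatewayInstaller.zip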

Once the installer is unzipped, there are two important files we will be using in the rest of the steps, as highlighted below. APIGateway is the script that performs the install, configure, start/stop/status, and join actions; it takes the gateway-props.json file as input, which contains the values of the various parameters.


Modify gateway-props.json file-

The default JSON file structure will look like this.

Gateway node installation does not need all of these elements; a few are mandatory and many are optional. For the installation I performed, I used the elements below-

Specifying values for these parameters is fairly simple. A detailed definition of each parameter, a sample value, and whether it is optional or mandatory is given in this link

Note: Make sure you assign the right value to each property: “logicalGatewayId” is the ID of the logical gateway we created in the very first step; “managementServiceURL” is the URL of your OTD instance fronting the Management Portal; “gatewayNodeName” is any free-form name for the gateway node, which can be seen in the Management Portal while approving the join request; “listenIPAddress” and “publishAddress” are the private IP and public IP of the VM where the gateway node is going to be installed; and “nodeInstallDir” is the directory path where the gateway node domain is going to be created.
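To illustrate, a minimal gateway-props.json along these lines could look like the sketch below. All values are placeholders from the layout used in this blog, and the exact property names and casing should be verified against the documentation linked above.

# Sketch of a minimal gateway-props.json (values and exact property names are illustrative)
cat > gateway-props.json <<'EOF'
{
  "logicalGatewayId": "100",
  "managementServiceURL": "https://<management-service-host>:<port>",
  "gatewayNodeName": "GatewayNode1",
  "listenIPAddress": "10.0.0.5",
  "publishAddress": "129.156.112.XX",
  "nodeInstallDir": "/u01/apics/install"
}
EOF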

Each installer action (install, configure, start/stop/status, join, etc.) needs its own specific parameters, but we can prepare the gateway-props.json file in advance for all the actions we expect to run. In our case we expected to run install, configure, start, and join, so I added the relevant parameters for all of them in one go.

Details of each action and its associated parameters can be found here

Validate system requirements for the gateway node installer-

  • Operating System: Oracle Linux or Red Hat Enterprise Linux 7.2, 6.7, or 6.4
  • CPU: Dual core, 2 GHz or more per CPU
  • Disk Space: 30 GB
  • Memory: 6 GB
  • JDK Version: Oracle-certified Java SE JDK 8 (OpenJDK is not supported)

Create Gateway Manager and Gateway Runtime users-

Before we commence the gateway node installation, we need to create two users. They can have any names, but they must be assigned the specific grants “APIGatewayRuntimeUsers” and “APIGatewayManager”. These users will be required when executing the “join” action.

Log in to your cloud account, select the API Platform instance, and open the “Oracle Fusion Middleware Control Console”

You must use the WebLogic Administrator account, created when you provisioned your API Platform Cloud Service instance, to add users in the Fusion Middleware Control console.

Once the users are created, assign them the appropriate grants as shown in the snap below.


Install JDK and set up JAVA_HOME and PATH environment variables-

Downloading and installing JDK 1.8+ is pretty standard stuff, so I am not covering it in detail here. However, the important point is how we set up the JAVA_HOME and PATH variables, because if JAVA_HOME is not set properly, the installer won’t behave properly.

Make sure the JAVA_HOME variable does not include the path up to /bin. Update the user’s .bash_profile with the JAVA_HOME and PATH environment variables as per the snap below.
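For example, the relevant lines in the user’s .bash_profile could look like this; the JDK path is just an example for a JDK 1.8 unpacked under /u01/apics/Java, so adjust it to your actual version.

# JAVA_HOME must point to the JDK root directory, not to its bin sub-directory
export JAVA_HOME=/u01/apics/Java/jdk1.8.0_171      # example path; use your installed JDK version
export PATH=$JAVA_HOME/bin:$PATH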


Understanding the log files and their location-

Before we start the actual installation, let’s understand where the log files reside; they will show you the progress of the installation. The snap below gives a fair idea of the logs, their descriptions, and their locations. Keep checking these logs to know what’s going on behind the scenes.
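While an action is running, you can simply tail the relevant log from another terminal, for example (the paths assume the /u01/apics/install location used in this blog; check the table above for the exact locations in your setup):

# Follow the install log while the install action runs
tail -f /u01/apics/install/logs/gatewayInstall.log

# Follow the domain creation log while the configure action runs
tail -f /u01/apics/install/logs/gatewayDomainCreation.log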

Execute APIGateway script with install action-

Now, let’s start the actual work; everything so far has been groundwork before the actual installation process.

So, the gateway node can be installed using a single command

./APIGateway -f gateway-props.json -a install-configure-start-join

which performs the four actions install, configure, start, and join in sequence. However, in my installation I adopted a different approach. All of these actions (install, configure, start, and join) can also be run in isolation, as long as they are run in sequence, so I ran them individually to get more visibility into what is going on behind the scenes via the logs and the standard output on the terminal.

So, the first action I ran is this:

./APIGateway -f gateway-props.json -a install

Make sure gateway-props.json has been updated with the required parameters explained in the sections above.

The step above just installs the gateway node product under the /u01/apics/install directory. In your environment the path could be something else, depending on the directory path you specified in the gateway-props.json file.

Also, check gatewayInstall.log for details of the steps performed.

Now that the gateway node software has been installed, the next step is to configure the domain using the configure action.

Execute APIGateway script with configure action-

The configure action can be performed using the command below

./APIGateway -f gateway-props.json -a configure

The step above creates a domain named “gateway1” with one admin server and one managed server, along with a node manager process.

It also generates a separate log file, “gatewayDomainCreation.log”, at /u01/apics/install/logs/, which gives much more visibility into all the executed steps.

So far we have executed the install and configure actions; as a result, the WebLogic Server binaries and the gateway binaries have been installed and the WLS domain has been created.

In the next step we will start this environment, which will bring the gateway node up and running.

Execute APIGateway script with start action-

Run the command below to start the node manager, admin server, and managed server

./APIGateway -f gateway-props.json -a start

You can run “ps -ef | grep java” to see all the started processes.
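You can also ask the APIGateway script itself for the node’s state, using the status action mentioned earlier, alongside the ps check:

# Report the status of the gateway node processes
./APIGateway -f gateway-props.json -a status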

Now, the last step in the gateway node installation is to join the logical gateway we created in the earlier steps.

Execute APIGateway script with join action-

Run this command to join the logical gateway created in the Management Portal

./APIGateway -f gateway-props.json -a join


When I executed the join command, my gateway-props.json already had a valid “logicalGatewayId” (100), created before I started executing the APIGateway script. Somehow, during the join process, the APIGateway script did not recognize this existing logical gateway and offered to create a new one, as you can see in the snap above. I entered “y”, and it created a new gateway called “Test”, using the “publishAddress” value (129.156.112.XX) and the default HTTP (8011) and HTTPS (9022) ports to correlate the logical gateway mapping to the actual gateway node.

So the gateway node has now sent a request to join the logical gateway, and a user has to approve this request from the API Management Portal.

So, jump into the API Management Portal and approve it.

Approve the join request-

Log in to the API Management Portal using the URL https://129.157.XX.XX/apiplatform/gateways, navigate to Gateways >> Test >> Nodes >> Requesting tab, and approve the join request

Once approved, the request will move from the “Requesting” to the “Active” tab.

That’s it. The logical gateway (a.k.a. Gateway) is now installed and mapped to the gateway node, which will process the runtime transactions.

Now, when we create a new API in the API Management Portal, we will get the option to select this gateway and deploy the API there.

The gateway node keeps itself synchronized with the API Management Portal to pick up any new API definitions, and applies changes straight away. The synchronization interval can be changed as desired; in my environment I have kept it at 2 seconds, so the gateway node keeps polling the Management Portal for any API changes and activates them at the gateway node level.


To summarize, we have installed the gateway node on a particular VM, which could be an on-premises server or a cloud VM. Without this installation there would be no runtime component where APIs can be deployed. This is one of the critical steps during API Platform CS provisioning, hence I have documented it in detail here.

I hope this blog helps you during the API Gateway node installation for your environment. Please feel free to share and comment; I will try to reply to the best of my capacity.

Teaching How to use Oracle Load Balancer as a Service (LBaaS) to front end your APIs


In this blog, I am going to show you how to configure Oracle Load Balancer as a Service (LBaaS) to proxy/redirect traffic into multiple APIs. For the sake of this example, I am going to point to running APIs hosted on my Oracle API Gateway, as well as APIs running on a 3rd-party cloud provider. However, you can use Oracle LBaaS to proxy traffic to any HTTP or HTTPS endpoint(s).


In this example, I am going to consume an existing API that I built some time ago that when invoked returns a random joke. In order to test it in high availability mode, I am also going to configure yet another “jokes” API that will serve as a redundant backend endpoint/API.


This is the high-level view of how Oracle LBaaS can easily enable multiple proxy/redirections to backend APIs hosted across various places:



In this figure, we can see the following interactions:

  • I am going to configure a CNAME DNS record to point to my LBaaS URL
  • LBaaS itself is configured as an HA component
  • LBaaS will be configured to route to multiple backend endpoint/APIs


Before we start


In this blog I assume that you are familiar with the Oracle API Platform architecture, which allows you to download software-based API Gateways and install them wherever you want, including on physical/virtual hardware (perhaps in a corporate data centre) or on top of a public cloud vendor’s IaaS, including Oracle, AWS, MS Azure, etc. If you need a refresher on how the Oracle API Platform works, see this previous blog.

Finally, if you don’t yet have access to the Oracle iPaaS, go to and request a new trial; they are free and get provisioned on the spot.

If you have any questions or comments, feel free to contact me directly via LinkedIn, at


Obtaining the HTTPS certificates


Unencrypted communications are increasingly uncommon, and the default norm is to always work with SSL-enabled HTTPS APIs. This is great, as all communication between two endpoints is encrypted, and we leave it to the browsers and load balancers to deal with the hassle of encrypting/decrypting data. As consumers of an existing HTTPS endpoint/API, we need to get its certificate. For that we can simply use a browser, such as Firefox, to easily download it with the full chain, in case one exists.

  • Using Firefox (adapt if using another browser), go to the HTTPS location to which you want to connect/proxy/redirect traffic. To the left of the address bar, click on the green lock

  • Click on the arrow in front of your domain and then on More Information. Then, click on “View Certificate”

  • Click on the “Details” tab and then click on the last certificate in the chain (you might have only 1 level if it is self-signed). Then click on export and save it as PEM with full chain.

  • If the other APIs/endpoints run on different SSL certificates, download those as well.
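If you prefer the command line over the browser, an alternative way to inspect and save a server’s certificate chain is openssl s_client; the host name below is a placeholder for your API endpoint.

# Print the full certificate chain presented by the HTTPS endpoint
openssl s_client -connect api.example.com:443 -servername api.example.com -showcerts </dev/null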


Creating the LBaaS


Now, we are ready to create our LBaaS configuration.

  • Log in to your Oracle Cloud My Services console and click on Load Balancer Classic. If using OCI (bare metal) you need to adjust accordingly.

  • Click on Open Service Console.

  • We need to upload the HTTPS certificate(s) first, so click on Network tab and then Load Balancers -> Digital Certificates

  • Click on Import Digital Certificate button

  • Set your Certificate Type to Trusted Certificate, give it a Name and click Select File to import the SSL certificate with chain that you exported previously. Notice that when you select it, the chain will automatically be populated too. Then, click Import.

  • Now, on the left menu, click on Load Balancers and click on Create Load Balancer.

  • Give it a Name, Description and type the permitted methods. In my case, I only need GET. Then click Create.

  • If needed, click on the refresh link to reload the page. Your new LB will be displayed. Click on it.

  • First, let’s create the Trusted Policy pointing to the SSL certificate that we just defined previously. For this, on the left menu click on Policies. Then Create Policy.

  • Select Policy Type: Trusted Certificate Policy, give it a Name and select the Trusted Certificate URI that you created previously. Then, click Create.

  • Next, let’s create a Server Pool, which is basically the list of APIs/endpoints that our LB will route to. For this, click on Server Pools and click on Create Server Pool.

  • Give it a Name and enter your API endpoint(s) into the Server space, following this form:




Note: Hit Enter after typing each Server pair.

  • Also feel free to enable and configure Health Check, based on your requirements.

  • When done, click Create.
  • Finally, we need to create the Listener(s). But first, let’s copy the canonical name of our load balancer: go back to the Overview tab and copy it from there.

  • Now, click on Listeners and click on Create Listener.

  • Fill in the listeners as per the following list:
    • Name: Enter a name
    • Port: Enter the port on which you want your listener to receive requests. Behind the scenes, a security rule will be applied to make that port visible on your LB.
    • Balancer Protocol: In this case, HTTP. It determines whether you want to accept HTTP or HTTPS requests. In my case I am choosing HTTP, but if you currently have a certificate, you can use HTTPS.
    • Server Protocol: In this case HTTPS. It determines whether the underlying APIs/endpoints run on HTTP or HTTPS.
    • Server Pool: Select the server pool that we created previously.
    • Security Certificate: If using HTTPS for your Balancer Protocol, this is where you import your certificate. Otherwise you can leave it blank.
    • Policies: By default, it runs a Round Robin algorithm across the Server Pool, but you can change that behaviour here. Also, this is where we attach our Trusted Certificate Policy, so that we can talk to the HTTPS APIs/endpoints. There are multiple policies available; refer to this link for more information.
    • Virtual Host: This is used to determine whether this listener should handle a request. For now, paste the canonical name of your load balancer that we copied in the previous section. Later, we will add our DNS CNAME domain name.
    • Path prefixes: Determines under which paths this listener will accept requests. For example, if you type /foo, only requests coming to your LB/foo would be routed to this listener. In our case, let’s leave it empty to allow all incoming requests on this listener.
    • Tags: Add tags to identify /group your listener

  • When done, click Create.
  • The first time you create an LBaaS it will take a bit longer, as underneath all the virtual servers are being spun up and configured. Give it a minute or two. You can tell it’s done when the green circle on the name of the listener disappears and the State of the load balancer in the Overview tab is set to Healthy.


  • Now it is time to test your load balancer. For this, let’s open a browser window and type the canonical name of the LBaaS + “/jokes” in my case, so that it returns the random joke. I can disable one underlying server at a time and make sure that the LBaaS is still functional.

    That’s a good joke!
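The same check can be scripted from a terminal, which is handy while disabling one backend at a time; the host name below is a placeholder for your LBaaS canonical name (and, later, your CNAME).

# Call the jokes API through the load balancer
curl -i http://<lbaas-canonical-name>/jokes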


  • Finally, if you want to map this domain to your own CNAME DNS record, once you have done so, enter your CNAME DNS name into the Virtual Hosts field under the Listener, so that the listener can be picked up even when using the CNAME, not only the canonical name.

  • And the result should be:

I hope you found this blog useful. If you encounter any trouble or have further questions, feel free to contact me directly via

Thanks for your time.

Policy Based Multi Factor Authentication

In my previous article, Securing Applications with Multi Factor Authentication I discussed how to roll out basic MFA. While this is great if your requirements are very straightforward, there are times when you’ll need a more sophisticated approach. One of the most common examples that I get asked about is how to challenge users for Multi Factor Authentication only when they are connecting remotely from home or when traveling.

In this article I use an example where the business requirement is to enforce MFA for people in the Customer Relations department who are accessing protected applications when they are not on the corporate network. I’ll explain how to configure policies and rules that allow users connected to the corporate network to login with just their User ID and Password, while users connected remotely will need to use Multi Factor Authentication to access protected applications.

Continue reading “Policy Based Multi Factor Authentication”

Teaching How to use Oracle PaaS Service Manager (PSM) CLI to Provision Oracle PaaS environments

In this blog, I am going to get you started with Oracle PaaS Service Manager (PSM) CLI – A great tool to manage anything API-enabled on any Oracle PaaS Service or Stack. For example, provisioning, scaling, patching, backup, restore, start, stop, etc.

It has the concept of a Stack (multiple PaaS services), which means that you can very easily provision and manage full Stacks, such as Oracle Integration Cloud (OIC), which combines multiple PaaS solutions underneath, e.g. ICS, PCS, VBCS, DBCS, etc.

For this, we are going to use a pre-cooked Vagrant Box/VM that I prepared for you, so that you don’t have to worry about installing software, but moving as quickly as possible to the meat and potatoes.

This is a graphical view of what we are going to do:

Continue reading “Teaching How to use Oracle PaaS Service Manager (PSM) CLI to Provision Oracle PaaS environments”

Teaching How to push your code into multiple Remote Git repositories

Very quickly, Git has become one of the most common ways to maintain and manage source code. It is easy to use, fast, reliable, and most modern CI/CD tooling supports it. GitHub also makes it easy for anyone who wants to share code to do so in a free or very inexpensive way. Many companies, however, also look for ways in which they can maintain their own private repositories as an enterprise-grade solution, like Developer Cloud Service (DevCS), the one Oracle gives for free with any IaaS or PaaS service.

In this blog I am going to show you how to push your code into any number of remote Git repositories. For example, you can have your private repository in DevCS and choose to also publish it into another remote repository (public or private) in GitHub.

This is the high-level idea:

  1. Let’s create a new Git repo in DevCS
  2. Let’s create a repo in GitHub
  3. Let’s clone DevCS repo locally on my laptop
  4. Let’s push the code to DevCS Git repo
  5. Let’s push the code to GitHub repo.
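As a quick preview of what these steps boil down to on the command line, here is a minimal sketch; the repository URLs are placeholders for your own DevCS and GitHub repos.

# Clone the DevCS repository locally (placeholder URL)
git clone https://developer.example.com/org/project.git myproject
cd myproject

# Add GitHub as a second remote (placeholder URL)
git remote add github https://github.com/your-user/myproject.git

# Push the same branch to both remotes
git push origin master
git push github master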

Continue reading “Teaching How to push your code into multiple Remote Git repositories”

Securing Applications with Multi Factor Authentication

These days, passwords online are not strong enough by themselves to protect applications. Scandals about password breaches seem to happen on a regular basis. This is where Multi Factor Authentication (MFA) greatly reduces the risks associated with protecting information online. Multi Factor Authentication combines something you know (e.g. your password) with something you have (e.g. your smartphone). MFA can be used with SMS or a Mobile App on an iPhone, an Android phone or a Windows Phone. Using MFA on a smartphone significantly reduces the costs associated with older and more traditional MFA technologies like physical tokens because of the cost of delivery and administrative overheads.

Oracle Identity Cloud Service allows you to deliver Multi Factor Authentication quickly and easily. In this article I’ll walk through the steps necessary to enable Multi Factor Authentication using Oracle Identity Cloud Service (IDCS). Once MFA is enabled you’ll be able to use MFA with any application protected by your instance of Oracle IDCS. In my example, I’ll use the Oracle Mobile Authenticator App on an iPhone to protect applications as well as the User Self Service Console in IDCS.

Continue reading “Securing Applications with Multi Factor Authentication”

Teaching How to use Terraform to Manage Oracle Cloud Infrastructure as Code

Infrastructure as Code is becoming very popular. It allows you to describe a complete blueprint of a datacentre using a high-level configuration syntax that can be versioned and script-automated. This brings huge improvements in the efficiency and reliability of provisioning and retiring environments.

Terraform is a tool that helps automate such environment provisioning. It lets you define in a descriptor file, all the characteristics of a target environment. Then, it lets you fully manage its life-cycle, including provisioning, configuration, state compliance, scalability, auditability, retirement, etc.

Terraform can seamlessly work with major cloud vendors, including Oracle, AWS, MS Azure, Google, etc. In this blog, I am going to show you how simple it is to use it to automate the provisioning of Oracle Cloud Infrastructure from your own laptop/PC. For this, we are going to use Vagrant on top of VirtualBox to virtualise a Linux environment and then run Terraform on top, so that whatever OS you use, you can quickly get started.
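As a taste of the workflow the post walks through, the core Terraform lifecycle is just a handful of commands, run from the directory that holds your .tf descriptor files:

# Download the providers/plugins declared in your .tf files
terraform init

# Preview the changes Terraform would make against your cloud account
terraform plan

# Apply the changes and provision the environment
terraform apply

# Tear the environment down when it is no longer needed
terraform destroy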

This is the high-level idea:

Continue reading “Teaching How to use Terraform to Manage Oracle Cloud Infrastructure as Code”