In the world of cloud computing there are often multiple ways to achieve the same or a similar result. In Oracle Cloud Infrastructure (OCI), logs are generated by the platform itself (such as audit logs), by OCI native services (such as the Network Firewall service), and as custom logs from compute instances or your applications. These logs typically live in OCI Logging, where you can view and search them as required.
Collecting and storing logs is useful; however, if you want to produce insights you will need a way to analyse and visualise the log data. OCI Logging Analytics allows you to index, enrich, aggregate, explore, search, analyse, correlate, visualise, and monitor all log data from your applications and system infrastructure.
From OCI Logging there are two common ways to ingest logs into Logging Analytics. The first uses a Service Connector to send logs to an Object Storage bucket, and an Object Collection Rule to then import the logs into Logging Analytics. The second uses a Service Connector to send the logs directly to Logging Analytics. Both are valid options, but each requires some consideration before use.
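To illustrate the second option, the Service Connector can be created from the OCI CLI. This is a minimal sketch, assuming the Service Connector Hub (sch) CLI group; every OCID below is a placeholder to replace with your own values:

```bash
# Source definition: the OCI Logging log to read from (placeholder OCIDs)
cat > source.json <<'EOF'
{
  "kind": "logging",
  "logSources": [
    {
      "compartmentId": "ocid1.compartment.oc1..example",
      "logGroupId": "ocid1.loggroup.oc1..example",
      "logId": "ocid1.log.oc1..example"
    }
  ]
}
EOF

# Target definition: the Logging Analytics log group to write to
cat > target.json <<'EOF'
{
  "kind": "loggingAnalytics",
  "logGroupId": "ocid1.loganalyticsloggroup.oc1..example"
}
EOF

# Create the connector that streams the log straight into Logging Analytics
oci sch service-connector create \
  --compartment-id "ocid1.compartment.oc1..example" \
  --display-name "logs-to-logging-analytics" \
  --source file://source.json \
  --target file://target.json
```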
If you are running an Oracle E-Business Suite (EBS) application today, you can now perform an auto-discovery of all related resources in OCI Stack Monitoring. It collects metrics specific to your EBS resources, gives you the ability to correlate across the EBS application and infrastructure stack, and enables proactive alerting.
Components that will be auto-discovered include:
Concurrent Processing Node
Today, the Stack Monitoring service supports EBS 12.1 and 12.2 deployments hosted on OCI, on-premises, or in a third-party cloud (e.g. AWS, Azure).
In this example, I will show you how to configure Stack Monitoring for EBS 12.2.
Oracle Enterprise Manager (OEM) has the ability to host an AWR Warehouse, which enables you to consolidate the detailed performance data of all your databases and store it in a central location.
This enables you to perform long-term trend analysis across your AWR data to determine the performance and capacity impact on the databases in your IT estate.
In OEM 13.5, Oracle now supports hosting the AWR Warehouse repository on Autonomous Data Warehouse.
If you don’t have the infrastructure or capacity to store AWR data on-premises, you can now send your data to an Autonomous Data Warehouse (ADW) in Oracle Cloud Infrastructure (OCI).
There are enormous benefits to using ADW. One of many is that you can scale CPU and storage up or down while the database remains online.
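That kind of online scaling is a single CLI call. A minimal sketch, assuming the OCI CLI is configured, with a placeholder OCID and example sizes:

```bash
# Scale an ADW instance to 4 OCPUs and 2 TB of storage while it stays online
# (the autonomous-database OCID is a placeholder)
oci db autonomous-database update \
  --autonomous-database-id "ocid1.autonomousdatabase.oc1..example" \
  --cpu-core-count 4 \
  --data-storage-size-in-tbs 2
```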
There is plenty of information out there about connecting an on-premises network to OCI. But if you want a step-by-step procedure that configures an actual VPN to completion, you will have a hard time finding it. So rather than writing about it, this time I will actually show it.
This link will take you to the list of OCI’s verified CPE (Customer Premises Equipment) devices. If your on-premises CPE is in this list, the VPN configuration should be very easy. In my case, the router I used is not in the list; it is a SOHO (Small Office-Home Office) type of router, and for this configuration the on-premises network is my home-office LAN. For routers not on the list, there is an option called “other”. OCI offers a list of supported configuration parameters for VPN connections that you can use for “other” types of routers. Here is the link to these parameters, and I explain them in the video. I hope that you find it useful.
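To give a flavour of what those parameters look like on a generic CPE, here is a hypothetical strongSwan-style tunnel definition. Everything in it is a placeholder or an assumption on my part; take the actual IKE/IPSec proposals and the VPN headend IP from the OCI console and the supported-parameters page:

```bash
# Append an illustrative tunnel definition to strongSwan's ipsec.conf
# (all values are placeholders; use the parameters OCI lists for "other" CPEs)
cat >> /etc/ipsec.conf <<'EOF'
conn oci-tunnel-1
    # OCI supports both IKEv1 and IKEv2
    keyexchange=ikev1
    # Phase 1 proposal: encryption-integrity-DH group
    ike=aes256-sha384-modp1536
    # Phase 2 (IPSec) proposal
    esp=aes256-sha1-modp1536
    # The on-premises CPE's public interface
    left=%defaultroute
    # Placeholder: the OCI VPN headend IP shown in the console
    right=203.0.113.10
    # Pre-shared key, stored in ipsec.secrets
    authby=secret
    auto=start
EOF
```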
Oracle recently introduced a Web Application Firewall (WAF) to further enhance and secure Oracle Cloud Infrastructure offerings. The Oracle Cloud Infrastructure WAF is based on Oracle Zenedge and Oracle Dyn technologies. It inspects all traffic destined for your web application origin, identifying and blocking malicious traffic. The WAF offers the following tools, which can be used on any website, regardless of where it is hosted:
Over 250 robust protection rules that include the OWASP rulesets to protect against SQL injection, cross-site scripting, HTML injection, and more
In this post, I apply a set of access-control WAF policies to a website. Access control defines explicit actions for requests that meet conditions based on URI, request headers, client IP address, or countries and regions.
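As a rough sketch of what one of those rules looks like, a WAF access rule takes a name, a list of criteria, and an action. The example below blocks requests to a hypothetical /admin path; the policy OCID is a placeholder, and I am assuming the access-rules update verb of the waas CLI group, so treat the exact command as illustrative:

```bash
# Hypothetical access rule: block any request whose URL starts with /admin
cat > rules.json <<'EOF'
[
  {
    "name": "block-admin-path",
    "action": "BLOCK",
    "criteria": [
      { "condition": "URL_STARTS_WITH", "value": "/admin" }
    ]
  }
]
EOF

# Apply the rules to an existing WAF policy (placeholder OCID; verify the
# exact subcommand against your CLI version)
oci waas access-rules update \
  --waas-policy-id "ocid1.waaspolicy.oc1..example" \
  --access-rules file://rules.json
```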
In my previous post in this series I covered linking GitHub and Docker Hub and configuring the environment so that a build of a Docker image was triggered by updates to GitHub. In this final post of the series I will take you through the steps to pull the image from Docker Hub into OCCS in order to run the application. It should be noted that the image built on Docker Hub in my example is only the web tier containing my Node.js project (APIs and SwaggerUI). The MongoDB component of my OCCS stack is pulled directly from Docker Hub when my stack containing the web-tier and database-tier services is deployed to OCCS.
In my previous post I described how I created a stack definition, including my Node.js web application and a MongoDB service, using docker-compose. In this article I will describe the steps I took to link my GitHub and Docker Hub accounts so that a Docker image is built automatically whenever I push changes with git.
Trigger a Build of the MedRec API Docker Image on Docker Hub
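With the accounts linked, triggering a build is nothing more than pushing a commit. A quick sketch, using a hypothetical local clone of the project:

```bash
cd ~/gitprojects/medrec-api        # hypothetical local clone of the project
git add .
git commit -m "Update MedRec API"
git push origin master             # this push kicks off the automated build on Docker Hub
```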
Combining internet/cloud-based services such as GitHub and Docker Hub allows developers to gain productivity without having to fund a local server to provide this capability. I wanted to explore and experience this for myself.
“Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application’s services. Then, using a single command, you create and start all the services from your configuration.”
A single command to create and start all the services in a configuration sounded pretty good to me. I was definitely keen to explore docker-compose.
Add a docker-compose.yml file
Having proved that my web application starts up, I now need to address the persistence layer. My Dockerfile contains the steps to create the required runtime platform for my Node app and installs the application and its package dependencies (as specified in the package.json file) by running npm install. However, if I tried to do a GET or a PUT, my app would fail, as it won’t find a MongoDB inside my container. I therefore still need a MongoDB somewhere in my environment to hold my application data.
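A compose file along the following lines pairs the web tier with a MongoDB service. This is a minimal sketch rather than my actual file: the service names, image tag, and port are assumptions:

```bash
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  medrec-api:              # hypothetical name for the Node.js web tier
    build: .               # built from the Dockerfile in this directory
    ports:
      - "3000:3000"        # placeholder port for the Node app
    depends_on:
      - mongodb            # start the database before the web tier
  mongodb:                 # the database tier, pulled straight from Docker Hub
    image: mongo:3.4       # placeholder image tag
    volumes:
      - mongo-data:/data/db
volumes:
  mongo-data:              # named volume so data survives container restarts
EOF

docker-compose up -d       # the single command that creates and starts both services
```

Note that inside the web container the database is then reachable at mongodb:27017 (the service name), not localhost.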
In my previous post in this series I provided an introduction describing the high-level steps I planned to take.
In this post I will walk through the detailed steps to Dockerise the MedRec application.
Dockerise the MedRec APIs
Git clone the project repository
I used my Windows Surface Pro 4 with Oracle VM VirtualBox installed to host my development VM. I managed to source a VBox image that already had Ubuntu 16.04 and Docker installed, which helped get me started. In my development environment on my laptop, I created a directory under my home directory named gitprojects.
I cd into that directory: cd gitprojects
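From there, the remaining steps look roughly like this. The repository URL and image name are placeholders, and the Dockerfile is only a sketch of the pattern described in this series (a Node.js base image, an npm install of the package.json dependencies, then the application itself):

```bash
cd ~/gitprojects
git clone https://github.com/<your-account>/medrec-api.git   # placeholder repository URL
cd medrec-api

# A sketch of the kind of Dockerfile this post builds up (all values assumed)
cat > Dockerfile <<'EOF'
# Placeholder Node.js base image
FROM node:6
WORKDIR /app
# Install the dependencies listed in package.json first (better layer caching)
COPY package.json .
RUN npm install
# Copy in the application itself
COPY . .
# Placeholder port the app listens on
EXPOSE 3000
# Assumes package.json defines a start script
CMD ["npm", "start"]
EOF

docker build -t medrec-api .                # build the web-tier image
docker run -d -p 3000:3000 medrec-api       # run it locally to verify
```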
Throughout my development experience, I feel that I have had several major bursts of learning, brought on by problems which made me re-evaluate how I approach architecting and developing a solution. I feel these ultimately make me better as a programmer, or at the very least more versatile. I am sure some of these bouts of learning and understanding are near universal, experienced by most developers, such as understanding parallelisation, but others are more specialised, such as when I first started writing games, where having to take 60+ snapshots of a continuously evolving environment every second completely changed how I thought about performance and accuracy. Developing Cloud-Native applications (and indeed microservice-based applications, which share very similar principles) feels like one of these moments in my development experience, and I feel it might be interesting to reflect upon that learning process.
I see the problem statement for Cloud-Native applications as something akin to: ‘you have no idea how many instances of your application will be running, you have no idea where they will be in relation to one another, and you have no idea which one will be hit for any particular call’. That is a lot of unknowns to account for in your code, and it forces you to think very carefully about how you architect and develop your applications.