I’ve started posting articles related to the project that @stantanev and a few of us are working on. This is a snapshot of the puzzle that is building out APIs on the Oracle Always Free Tier.
As a demonstration of capability, we built a few different APIs using fnproject (https://fnproject.io/) – an open-source container-native serverless platform. As part of Oracle Cloud Infrastructure, there’s Oracle Functions, the managed Function-as-a-Service offering based upon this same project.
Let’s take a look at it here and see what it took to get going. Note that this is being deployed onto VM.Standard.E2.1.Micro compute shapes (1 OCPU and 1 GB of memory each), so there are some considerations to make sure we get the most out of the kit we have access to (for free).
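To give a feel for the developer loop, here’s a minimal sketch of standing up a function with the Fn CLI. The app and function names are illustrative, and on a 1 GB Micro shape you’ll want to keep the `memory` setting in func.yaml small so several functions can coexist:

```shell
# Scaffold a function using the Python runtime (names are illustrative)
fn init --runtime python hello-fn
cd hello-fn

# Create an app to group functions, then build and deploy to a local Fn server
fn create app demo-app
fn deploy --app demo-app --local

# Invoke it to confirm the container spins up and responds
fn invoke demo-app hello-fn
```

The same `fn deploy` flow targets a remote context when you point the CLI at your server on the Always Free compute instance instead of a local one.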
It’s almost 9 days before the event launches on the Friday night. Even before that, there is a series of workshops and webinars that we are hosting as part of the event in the days leading up to it. Even then we are:
a/ Making sure that we have people, mentors, marketing, product managers, and executives lined up to help where they can.
b/ Making sure that we have ideas, platforms, trials, programs, and education material lined up to help where it’s feasible.
c/ Making sure that we help promote, advocate, and market the event so those who would benefit know about it and attend.
All this effort for what outcome?
This says it all. And even though this is about #anomalydetection #deepfake #cybersecurity, much of this comes down to data – where the data can be sourced, how the data can be analysed, whether the data is reliable, and whether it can be trusted.
Over the coming days leading up to the event – there will be plenty of chatter around it. Follow the event on LinkedIn. Some easy ways to follow are:
I’ll be writing more about it here as we go and as new content becomes available. If you are interested in learning more, or if you want to join a team or showcase a project or product – head to the Hackmakers website https://hackmakers.com/ to learn more and register.
I would like to show how OIC log management can be achieved with OCI Object Storage (I’ll call it the bucket), OCI Logging Analytics, and Visual Builder Studio (formerly Developer Cloud; I’ll call it VB Studio).
Interestingly, I’m not going to use OIC to download log files, nor to ingest log data into OCI Object Storage. VB Studio will be my tool for sourcing log files and feeding them to the bucket – I’ll be taking advantage of the Unix shell and the OCI CLI from VB Studio. Then OCI Logging Analytics will ingest log data from the bucket based on a cloud event.
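As a sketch, the VB Studio build step can shell out to the OCI CLI to push each sourced log file into the bucket. The bucket name, namespace variable, and object path here are assumptions for illustration:

```shell
#!/bin/sh
# Illustrative: upload an OIC log file into the Object Storage bucket
# that Logging Analytics watches. Names and paths are placeholders.
LOG_FILE="oic-activity.log"

oci os object put \
  --namespace "$OS_NAMESPACE" \
  --bucket-name oic-logs \
  --file "$LOG_FILE" \
  --name "oic/$(date +%Y-%m-%d)/$LOG_FILE" \
  --force
```

Prefixing the object name with the date keeps uploads from colliding, and `--force` lets a re-run of the build overwrite the same object rather than fail.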
Oracle Cloud Infrastructure provides a ton of useful services for automating and orchestrating behaviours in your cloud environment, and while they are often pretty handy on their own, leveraging them together gives almost complete flexibility in what you can achieve. Want to trigger a backup using a command in Slack, then have a message sent back when it completes? Sure! Want to periodically poll a log API and archive the results? Easy. Oracle Cloud Infrastructure provides a number of inbuilt capabilities, as well as the ability to jump into arbitrary code to build elaborate automation flows. This blog post will focus upon the security constructs around this, looking at how services can be authorised to invoke one another and how they authenticate themselves, while avoiding storing sensitive data in insecure ways. This post is intended as an overview of the concepts, and will be referenced in more concrete ways in future.
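As a taste of those constructs, authorisation between OCI services is typically expressed with dynamic groups and IAM policies, so that a function (for example) can act without any stored credentials. A minimal sketch – the group name, compartment names, and OCID are illustrative – might look like:

```
# Dynamic group rule matching all functions in a compartment
ALL {resource.type = 'fnfunc', resource.compartment.id = 'ocid1.compartment.oc1..example'}

# Policies granting those functions just the access they need
Allow dynamic-group fn-automation to read secret-family in compartment automation-demo
Allow dynamic-group fn-automation to manage object-family in compartment automation-demo
```

At runtime the function authenticates via its resource principal, so no API keys need to be baked into code or configuration.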
Over the past couple of weeks, I was getting back into the normal life of Cloud Engineering (the #BuildWithAI global hackathon isn’t the only thing that I focus on – check this article out #BuildWithAI Announces Winners). And something that I was doing was actually less about technology and more about budgeting – Cloud Estimations.
This is an interesting puzzle because of a couple of different elements.
Cloud is supposed to be elastic. But budgeting is typically not. Nor are project estimations and costs. Nor are approval processes. Nor are procurement processes. There are so many things in a business that are not elastic.
The people provisioning are not necessarily in charge of the costs. And I know as a developer, these overarching cost discussions aren’t necessarily the ones you get invited to.
I’ll keep this one short as we have done a specific writeup about this event. If you haven’t seen it – check out this previous article #BuildWithAI – A Hackathon Experience. The winners have been announced and published … Here’s a quick summary of who won what.
On August 17th, we’ll be announcing winners of the #BuildWithAI hackathon and it will be live-streamed on YouTube – https://youtu.be/URuB0FtBIJo (note – set your reminder). Cassie Kozyrkov (Chief Decision Scientist, Google), Steve Nouri (Board Member, Hackmakers), and Cherie Ryan (Regional MD of ANZ and VP, Oracle), as well as an all-star judging line-up, will be there.
Before we get to that, let’s rewind, fast-forward and bring together some of the interesting points of the #BuildWithAI hackathon – an event that was truly global in its nature, hosted by Hackmakers (https://hackmakers.com/).
July 24th 11:45am AEST – I received a calendar alert for the Leader Mentor Zoom session for the #BuildWithAI hackathon. I was trying to finish as many of the things that I needed to get done before I joined this call. This will be interesting. Watching the number of competitors in the event’s Slack workspace climb from a hundred users when I first joined to over 3,500 users in the #introductions channel, it was a unique experience. I was thinking about lots of different things from past hackathons that I’ve participated in, mentored at, sponsored, and hosted – how will this one be any different? I’ll just have to wait and see. And better yet, give to the community and the competitors as much as I can in the time we have.
This moment was not the beginning nor the end of this experience. It was somewhere in between. I’ll give you some background.
Kubernetes is a great platform to run microservices, there is no question about it. It has great features like the Horizontal Pod Autoscaler and Cluster Autoscaler that make it very easy to scale whole applications depending on current or forecasted load. However, with auto-scaling there are a few considerations to keep in mind, and one of the most important is that containers are ephemeral, which implies that we need to design our applications in such a way that they can scale without compromising data persistence. There are multiple techniques available to make this possible. A common way to achieve this is by using Persistent Volumes (PVs) and Persistent Volume Claims (PVCs), which hook via the CSI (Container Storage Interface) into external disk volumes. This helps maintain state outside containers, allowing them to scale without compromising the data.
Also, with cloud providers’ constant embrace of Kubernetes, these solutions are quickly evolving and becoming more sophisticated and easier to use. For example, nowadays we can extend the use of PVCs with Storage Classes implemented by the different cloud vendors. This makes the whole PV/PVC experience so enjoyable, as these storage classes become responsible for interfacing with the cloud vendor’s IaaS layer and creating the resources we simply declared, while we keep reading and writing data to persistent disks.
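For instance, on OKE a claim against the block-volume storage class is all it takes to have a volume provisioned on your behalf. A minimal sketch, assuming OKE’s CSI-backed class name `oci-bv` and an illustrative size:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: oci-bv   # OKE's CSI-backed block volume class
  accessModes:
    - ReadWriteOnce          # one node mounts the volume read-write
  resources:
    requests:
      storage: 50Gi          # OCI block volumes start at 50 GB
```

A pod then simply references the claim by name in its `volumes` section, and the storage class handles creating and attaching the underlying block volume.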
Now, with this constant multi-cloud endorsement of Kubernetes, it was only a matter of time until cloud vendors decided to differentiate themselves by allowing the use of foreign cloud services as first-class citizens in Kubernetes. Just imagine having the ability to use a PaaS service from “Cloud Vendor A” seamlessly from within my Kubernetes cluster that is running on “Cloud Vendor B”. The piece of magic that makes this possible is called the Open Service Broker (OSB), which is really not magic, but just a bunch of APIs that allow the control plane in Kubernetes to interact with cloud services.
In this blog, I am going to show you how to consume Oracle Cloud Infrastructure (OCI) resources from within Kubernetes using the Open Service Broker. Specifically, I am going to let my Kubernetes control plane fully manage an OCI Autonomous Transaction Processing DB (ATP), as if it were a native Kubernetes resource… And by the way, I am going to use OKE (Oracle-managed Kubernetes), but you could very well use Google/AWS/Azure Kubernetes elsewhere and still consume OCI resources. How cool is that?
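To preview what that looks like, with the Kubernetes Service Catalog and the OCI Service Broker installed in the cluster, an ATP instance is declared roughly like this. The class, plan, and parameter names below follow the OCI Service Broker samples as I recall them – treat them as assumptions and verify against the broker’s documentation:

```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: osb-atp-demo
spec:
  clusterServiceClassExternalName: atp-service
  clusterServicePlanExternalName: standard
  parameters:
    name: osbdemo                 # ATP display name (illustrative)
    compartmentId: ocid1.compartment.oc1..example
    cpuCount: 1
    storageSizeTBs: 1
    # Admin credentials are typically supplied via a Kubernetes Secret
    # reference rather than inline – see the broker's samples.
```

Applying this manifest has the service broker provision the database in OCI, and a companion ServiceBinding resource surfaces the connection details back into the cluster as a Secret.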