Over the past week, Oracle has soft-launched a range of new services that leverage our Dyn investment to significantly enhance the native Edge management capabilities of our second generation cloud. These services include:
- Traffic Management Steering Policies
- Health Checks (Edge)
- Web Application Firewall
I’ll reserve my discussion of the Web Application Firewall for a later post; what I’d like to discuss today is Traffic Management, and how it can be leveraged to deploy, control and optimise globally dispersed application services for your enterprise.
To work with Traffic Management Steering Policies, you must have a domain (or subdomain) delegated to the OCI DNS Service. Many organisations choose to delegate a subdomain specifically for global load balancing activities to help delineate source services (e.g. <service>.glb.domain.com). For more on using our DNS services, click here.
For the purposes of today’s walkthrough, I will be using the “pheatsols.com.au” domain served via OCI’s DNS:
I’ve also spun up a couple of Apache instances in Ashburn and London serving a simple HTML page identifying their geographic location:
Hitting the IP addresses directly will return the following:
The setup above is a (very) basic facsimile of a web application being served out of 2 geographically disparate locations. Under a traditional DNS construct, one would have but a few rudimentary options for directing users to these applications:
- Separate Entries: Create a separate DNS entry for each service point (eg. london.pheatsols.com.au and ashburn.pheatsols.com.au) and instruct users to select their closest option.
- Single Entry: Create a single DNS entry (eg. application.pheatsols.com.au), nominating a location as the primary site and manually update the entry in the event of failure or cutover.
- Round Robin: Create multiple DNS entries under the same address which will invoke DNS round robin, essentially alternating the server resolved every lookup.
Whilst the above options work, they come with inherent limitations.
- Creating separate entries puts the onus on the user to select the correct server, resulting in a sub-par user experience. Furthermore, in the event of failure, users will need to be notified manually and any programmatic access to services will have to be laboriously re-addressed.
- A single entry leads to an unbalanced workload (you are essentially running in an active-passive mode), and updating DNS entries in the event of failure can take some time to propagate, leading to potentially extended outages.
- Round Robin provides intermittent service in the event of node failure (every second request still sends traffic to a dead node) and, depending on application architecture, can cause issues with sessions being redirected to a server which has no prior history of your connection.
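To make that round-robin limitation concrete, here is a minimal Python sketch of how a plain round-robin record set behaves. The addresses are RFC 5737 documentation placeholders standing in for the two servers, not the actual instances used in this walkthrough:

```python
from itertools import cycle

# Placeholder addresses (RFC 5737 documentation ranges), standing in for
# the Ashburn and London servers behind one DNS name.
ANSWERS = ["192.0.2.10", "198.51.100.20"]

def round_robin_resolver(answers):
    """Yield one answer per lookup, rotating through the record set.
    This is all plain DNS round robin does -- it never checks health."""
    return cycle(answers)

resolver = round_robin_resolver(ANSWERS)
lookups = [next(resolver) for _ in range(4)]
# Even if 192.0.2.10 were down, every second lookup would still return it.
```

The rotation is blind: a dead node keeps receiving half of all new sessions until someone removes its record by hand.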
What we’d ideally like to do here is provide a single DNS endpoint to the user, which automatically forwards their request to the nearest server, but seamlessly fails over to another functioning server in the event of node outage… and with Traffic Management Steering Policies (and the heavily intertwined Health Check), we can do just that.
Creating a Traffic Management Steering Policy
To start off with, let’s open the hamburger menu and click on Edge Services > Traffic Management Steering Policies:
From here we select “Create Traffic Management Steering Policy”:
And we’re presented with a range of traffic steering policies. I’ll explain what the others do in the “Other Policy Types” section below, but for now, let’s just select “Geolocation Steering” which, as the description highlights, dynamically routes traffic requests based on originating geographic conditions:
Now, we enter a name for our policy and set the TTL. The TTL defines how long requesting DNS servers cache the returned entry for. You’ll want to balance this number between the criticality of your application (the lower the TTL, the shorter it will cache a potentially “down” server address) and the chattiness of your DNS server (too short and your DNS server and client will be working overtime to externally resolve queries). The default 60 seconds works for us.
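As a rough way to reason about that TTL trade-off, here is a back-of-the-envelope sketch. The probe interval and failure threshold are hypothetical illustration values, not OCI defaults:

```python
def worst_case_stale_seconds(ttl, probe_interval, failures_to_trip=3):
    """Rough upper bound on how long clients can keep resolving a dead
    server: the health check must first observe enough failed probes to
    mark it down, and already-cached answers then live out their TTL.
    failures_to_trip is a hypothetical threshold for illustration."""
    detection = probe_interval * failures_to_trip
    return detection + ttl

# With the default 60 s TTL and a hypothetical 30 s probe interval,
# a client could be steered at a dead server for up to ~150 seconds.
worst_case_stale_seconds(60, 30)
```

Halving the TTL shaves time off the tail of that window, at the cost of every resolver coming back to you twice as often.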
Next we define our answer pools, which are server groups based (in this case) on geographic location. Here, I’ve created 2 pools – ashburn and london – with a single server in each. Were you to have multiple servers in each location, you could optionally mark a server as ineligible in the event you were enacting maintenance on that server and did not want to return that result.
Now we must set the all-important geo-steering policies. Rules are parsed in order from top to bottom, so for Rule 1, I have North America, South America, Oceania and Antarctica (these can also be defined at a country or state level for the US and Canada) routing to Ashburn first, then failing over to London. Rule 2 directs traffic from Africa, Asia and Europe to London, with a Global Catch-all (in the event no rules are matched) mimicking Rule 1.
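Conceptually, that rule evaluation works something like the following Python sketch – a simplification using the pool and region names from the policy above, not OCI’s actual implementation:

```python
# Ordered rules: first region match wins, then the matched rule's pools
# are tried in order, skipping any pool with no healthy server.
RULES = [
    ({"North America", "South America", "Oceania", "Antarctica"},
     ["ashburn", "london"]),                      # Rule 1
    ({"Africa", "Asia", "Europe"},
     ["london", "ashburn"]),                      # Rule 2
]
CATCH_ALL = ["ashburn", "london"]                 # Global Catch-all

def answer_pool(region, healthy_pools, rules=RULES, catch_all=CATCH_ALL):
    """Return the pool a request from `region` should resolve to."""
    for regions, pools in rules:
        if region in regions:
            ordered = pools
            break
    else:
        ordered = catch_all
    for pool in ordered:
        if pool in healthy_pools:
            return pool
    return None  # nothing healthy anywhere

answer_pool("Oceania", {"ashburn", "london"})  # ashburn (Rule 1)
answer_pool("Oceania", {"london"})             # london (failover)
answer_pool("Europe", {"ashburn", "london"})   # london (Rule 2)
```

Note how the failover behaviour falls out of the pool ordering: Rule 1 prefers ashburn but will happily answer london once the health check drops ashburn from the healthy set.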
A Health Check can and should be assigned – this removes servers from the pool in the event they are inactive (and returns them when they come back online) in support of an “always available” architecture. I will set up a basic port 80 GET on the root address of the host, but optional ports, paths, headers and methods are definable:
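The probe itself is conceptually just an HTTP GET with a pass/fail decision. Here is a minimal Python sketch assuming a plain GET on port 80 and treating any 2xx/3xx response as healthy – the real service’s thresholds and retry behaviour are configurable and more sophisticated:

```python
import http.client

def probe(host, port=80, path="/", timeout=3):
    """One HTTP GET probe, roughly what an Edge health check performs
    against each server. Returns the status code, or None on failure."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", path)
        status = conn.getresponse().status
        conn.close()
        return status
    except (OSError, http.client.HTTPException):
        return None  # connection refused, timed out, or malformed reply

def is_healthy(status):
    # Treat any 2xx/3xx response as healthy; no response is a failure.
    return status is not None and 200 <= status < 400

# Once Apache is stopped on a host, probe() returns None and the
# server would be marked unhealthy:
# is_healthy(probe("192.0.2.10"))  (placeholder address)
```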
Finally, we associate the policy with a DNS entry. I will apply this policy to “application.pheatsols.com.au” noting that any previous static entry for application will be usurped by this dynamic policy:
After clicking “Create Policy”, we’re presented with a summary of the policy. Note that the Policy Answer Data lists the health of both our servers as “Healthy”.
Being based in Australia, I now expect browsing to application.pheatsols.com.au will direct me to Ashburn:
To test the failover, I will down the Apache service in Ashburn:
Sure enough, I’m now reporting Ashburn as down in the Policy Answer Data section of the policy:
And a browse to application.pheatsols.com.au seamlessly resolves to London:
Other Policy Types
While today we focused on Geolocation Steering, there are 4 other Traffic Management Steering Policies currently available:
- Load Balancer: Allows you to set up a weighted, health-checked load balancer between multiple backend nodes/servers.
- Failover: Establishes a sequential, health-checked based answer to support a simple high availability architecture.
- ASN Steering: Distributes traffic based on the ASN of the origin (used in BGP routing).
- IP Address Steering: Distributes traffic based on the IP address/range of the origin.
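ASN and IP Address Steering both boil down to matching the origin against an ordered prefix table. A small Python sketch of the IP variant, using hypothetical prefixes and pool names (the 10.0.0.0/8 rule could, for example, send internal users to a pilot deployment):

```python
import ipaddress

# Hypothetical steering table: origin prefixes mapped to answer pools,
# ordered most-specific first so the first match wins.
PREFIX_RULES = [
    (ipaddress.ip_network("10.0.0.0/8"), "internal-pilot"),
    (ipaddress.ip_network("0.0.0.0/0"), "production"),  # catch-all
]

def steer_by_ip(origin):
    """Return the answer pool for a request originating from `origin`."""
    addr = ipaddress.ip_address(origin)
    for net, pool in PREFIX_RULES:
        if addr in net:
            return pool

steer_by_ip("10.1.2.3")     # internal-pilot
steer_by_ip("203.0.113.9")  # production
```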
The practical use of Traffic Management Steering Policies extends well beyond simple HA and Load Balancing, with the following scenarios now simplified or enhanced by our fully integrated second generation cloud:
- Cloud Migration: Weighted load balancing supports controlled migration from your data centre to Oracle Cloud Infrastructure servers. You can steer a small amount of traffic to your new resources in the cloud to verify everything is working as expected. You can then increase the ratios until you are comfortable with fully migrating all traffic to the cloud.
- Supporting Hybrid Environments: Since Traffic Management Steering Policies is an agnostic service, it can steer traffic not only to Oracle Cloud Infrastructure resources, but also to any publicly exposed (internet-resolvable) resource, including other cloud providers and enterprise data centres.
- Pilot User Testing: Leveraging IP Address Steering, you can configure policies to serve different responses for your internal users versus external users.
- Geo-fencing and Partner Only Access: Limit access to your services to certain geographical locations and partner organisations, optionally serving a “This content is not available from your location” response as a catch-all.
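The cloud-migration scenario is easy to visualise with a quick simulation of weighted answers. The pool names and weights below are illustrative, not a recommendation:

```python
import random
from collections import Counter

def steer(weights, n, seed=0):
    """Simulate n DNS answers under a weighted load balancer policy.
    `weights` maps pool name -> relative weight (hypothetical values)."""
    rng = random.Random(seed)
    pools = list(weights)
    picks = rng.choices(pools, weights=[weights[p] for p in pools], k=n)
    return Counter(picks)

# Start by steering ~10% of answers at the new cloud resources, then
# ramp the weight up as confidence grows:
steer({"datacentre": 90, "oci": 10}, 10_000)  # roughly a 90/10 split
```

Bumping the `oci` weight step by step (10, 25, 50, 100) gives you a controlled, reversible migration with no client-side changes at all.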
To find out more about our integrated suite of Edge services, read the full documentation here.