Teaching how to use Nginx to front your backend services with trusted CA certificates over HTTPS


Nowadays, with the adoption of serverless architectures, microservices have become a great way to break a problem down into smaller pieces. A common situation is having multiple backend services running on technologies like NodeJS, Python, Go, etc. that need to be accessible via HTTPS. It is possible to enable SSL over HTTPS on each internal microservice directly, but a cleaner approach is to use a reverse proxy that fronts these microservices and provides a single HTTPS access channel, allowing simple internal routing.


In this blog, I show how simple it is to create this front end with Nginx, leveraging “Let’s Encrypt” to generate trusted certificates for it, with strong security policies, so that our website can score an A+ on the SSL tests conducted by third-party organizations.



  • You should have a non-root user who has sudo privileges.
  • You must own or control the registered domain name that you wish to use the certificate with.


    Note: I am using Ubuntu 16.04 – Adjust accordingly if using other OS.


The instructions to install Nginx on Ubuntu 16.04 and to set up SSL certificates are based on these great articles 1 and 2. Special thanks to Mitchell Anicas for the great work that made this post possible.

Let’s Install Nginx


  • Install Nginx:

    sudo apt-get update

    sudo apt-get install nginx


    Note: The Nginx installation also registers profiles with the ufw firewall, which simplifies enabling and disabling ports. Given that I am running this environment in the Oracle Public Cloud behind a separate firewall, I don’t want to add unnecessary security controls. However, if you are directly connected to the public internet, it is recommended that you enable a firewall.


  • Verify that Nginx is up and running:


    systemctl status nginx

  • Also, in a browser, make sure that you can see the Nginx welcome page by typing your public IP address or domain (default port 80).


Basic management commands


The next commands will help you manage Nginx:

  • To stop it:


    sudo systemctl stop nginx


  • To start it when it is stopped:


    sudo systemctl start nginx


  • To restart it:


    sudo systemctl restart nginx


  • If you are making configuration changes and want to reload Nginx without dropping connections:


    sudo systemctl reload nginx


  • Nginx is configured to start automatically when the server boots. In order to disable this behaviour:


    sudo systemctl disable nginx


  • To re-enable the service to start up at boot:


    sudo systemctl enable nginx


Let’s add SSL into the mix


  • We are going to use “Let’s Encrypt” to obtain a valid SSL certificate. For this, first install Certbot:


    sudo add-apt-repository ppa:certbot/certbot (You will have to press ENTER to continue)

    sudo apt-get update

    sudo apt-get install certbot


  • As part of the domain validation, Certbot will need to serve a file under /.well-known on your web server. Make a change to the Nginx configuration so this location is reachable:

    For this, edit the file /etc/nginx/sites-available/default – inside the server block enter:


    location ~ /.well-known {
        allow all;
    }

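For context, after this edit the default server block might look roughly like the sketch below. The root path shown is an assumption that matches the webroot path used with Certbot later in this post; your file may differ:

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
    server_name [YOUR-DOMAIN];

    # Allow Certbot's validation files to be served
    location ~ /.well-known {
        allow all;
    }
}
```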


  • Once you added this snippet, you can validate for syntax errors by typing:


    sudo nginx -t



    Make sure you get a successful test.


  • Restart Nginx:


    sudo systemctl restart nginx


  • Use the Webroot plugin to request an SSL certificate. In this case, I am specifying all the domains I want this certificate to cover.


    sudo certbot certonly --webroot --webroot-path=/var/www/html -d [YOUR-DOMAIN] -d [ANOTHER-DOMAIN]


    If this is the first time you have run Certbot, it will ask you for an email address and to agree to some T&Cs. After this, it should issue the certificates.


    Make a note of where it saved them and of the expiration date.



    Note: If you receive an error like “Failed to connect to host for DVSNI challenge”, your server’s firewall may need to be configured to allow TCP traffic on ports 80 and 443.


  • This is what we just obtained (stored under /etc/letsencrypt/archive):
    • cert.pem: Your domain’s certificate
    • chain.pem: The Let’s Encrypt chain certificate
    • fullchain.pem: cert.pem and chain.pem combined
    • privkey.pem: Your certificate’s private key


  • Certbot creates symbolic links to the most recent certificate files in the /etc/letsencrypt/live/[YOUR-DOMAIN] directory.
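To double-check what was issued, `openssl x509` can print a certificate’s subject and expiry date. A quick sketch follows; it generates a throwaway self-signed certificate only so the commands have input to work on, so point `-in` at /etc/letsencrypt/live/[YOUR-DOMAIN]/cert.pem to inspect your real certificate:

```shell
# Throwaway self-signed certificate, only so the commands below have input;
# for the real thing, use /etc/letsencrypt/live/[YOUR-DOMAIN]/cert.pem
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -days 90 -subj "/CN=example.com" 2>/dev/null

# Print who the certificate was issued to and when it expires
openssl x509 -in /tmp/demo-cert.pem -noout -subject -enddate
```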

  • To further increase security, also generate a strong Diffie-Hellman group. To generate a 2048-bit group, use this command:


    sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048


  • It will be stored under /etc/ssl/certs/dhparam.pem
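If you want to confirm that the generated group is valid, `openssl dhparam` can check and print it. A small sketch; it generates a tiny 512-bit throwaway file so it runs in seconds, whereas in practice you would point `-in` at the real 2048-bit /etc/ssl/certs/dhparam.pem:

```shell
# Tiny throwaway DH group (512 bits) so this demo runs quickly;
# in practice, check the 2048-bit file at /etc/ssl/certs/dhparam.pem
openssl dhparam -out /tmp/demo-dhparam.pem 512 2>/dev/null

# Sanity-check the parameters
openssl dhparam -in /tmp/demo-dhparam.pem -noout -check
```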

  • Now we are going to create a few configuration snippets that we will then reference from within the Nginx configuration file.
  • Create a file that will contain the SSL Certificates (adjust [YOUR-DOMAIN] as expected):


    sudo vi /etc/nginx/snippets/ssl-[YOUR-DOMAIN].conf


    Enter the SSL certificate directives:


    ssl_certificate /etc/letsencrypt/live/[YOUR-DOMAIN]/fullchain.pem;

    ssl_certificate_key /etc/letsencrypt/live/[YOUR-DOMAIN]/privkey.pem;


  • Then, create another snippet that will configure Nginx with strong SSL cipher security.

    sudo vi /etc/nginx/snippets/ssl-params.conf

    Enter the following snippet:

# from https://cipherli.st/
# and https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;

ssl_ecdh_curve secp384r1;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
# resolver needs a DNS address; 8.8.8.8 8.8.4.4 (Google DNS) is what the
# cipherli.st reference uses - adjust to your own resolver if preferred
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;

# disable HSTS header for now
#add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;

ssl_dhparam /etc/ssl/certs/dhparam.pem;


Ok, now we are ready to make the specific changes in the Nginx configuration file:


  • First, take a backup of the original Nginx configuration file:


    sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/default.bak


    We are going to create a new server definition. Update your file to look like this:


    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name [YOUR-DOMAIN];
        return 301 https://$server_name$request_uri;
    }

    server {

        # SSL configuration

        listen 443 ssl http2 default_server;
        listen [::]:443 ssl http2 default_server;
        include snippets/ssl-[YOUR-DOMAIN].conf;
        include snippets/ssl-params.conf;
        . . .


    Notice that, by doing so, the existing body of the server definition becomes the body of the new HTTPS definition; that is why the second block above is left open at “. . .”.

It is recommended to configure Nginx to automatically redirect all non-secure HTTP requests to encrypted HTTPS. However, if you need to serve both HTTP and HTTPS, use the following configuration instead:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;

    server_name [YOUR-DOMAIN];
    include snippets/ssl-[YOUR-DOMAIN].conf;
    include snippets/ssl-params.conf;
    . . .


  • Once you are done, validate that the file is accurate by running a test: sudo nginx -t

    Also make sure that you adjust your firewall to allow port 443 (and port 80, if you are still serving plain HTTP) into your server.


  • That’s it, now you can restart Nginx:


    sudo systemctl restart nginx


  • Test your domain in a browser using plain HTTP; if you chose to redirect, it should automatically present the Nginx welcome page over HTTPS.



Setting Up Auto Renewal


“Let’s Encrypt” certificates are only valid for ninety days, so you need to plan for renewal. Let’s create a simple cron job to help us with this task.

  • Create a cron job:

    sudo crontab -e

  • Copy and paste the following line at the end of the file. We are basically setting up a renewal check every day at 12:00 am:

    00 00 * * * /usr/bin/certbot renew --quiet --renew-hook "/bin/systemctl reload nginx"

    Certbot’s renew command will check all certificates installed on the system and update any that are set to expire in less than thirty days. --quiet tells Certbot not to output information or wait for user input. --renew-hook "/bin/systemctl reload nginx" will reload Nginx to pick up the new certificate files, but only if a renewal has actually happened. You can also test the whole renewal process without touching your certificates by running sudo certbot renew --dry-run.

  • That’s it. All installed certificates will be automatically renewed and reloaded when they have thirty days or less before they expire.

Validate how secure your site is now


You can use the Qualys SSL Labs Report to see how your server configuration scores.


After various strong cipher security assessments, you should score a beautiful A+ rating!!!


Finally, let’s make our reverse proxy configuration


Now that we are fully secured, let’s create our reverse proxy configuration to route to our internal services.


  • Edit file /etc/nginx/sites-available/default
  • Within the server block, add a “location /” configuration. This basically maps an external URI to an internal one. In this case, we are assuming that a service is running on port 3000 on the same machine as Nginx.

For example (the proxy_pass directive is what maps the external URI to the internal service on port 3000):


server {
    . . .
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:3000/;
    }
}

  • Test your Nginx configuration with sudo nginx -t
  • If the test is successful, restart Nginx (see command above).
  • From now on, going to http://IP-Address/ will serve the content of http://IP-Address:3000/ (Nginx proxies the request internally rather than redirecting the browser).
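To see the mapping in action before wiring up your real microservice, you can stand in a dummy backend on port 3000 and hit it the way Nginx would. A sketch, assuming python3 and curl are available; the dummy server simply plays the role of the backend service:

```shell
# Stand-in backend on 127.0.0.1:3000, playing the role of the microservice
python3 -m http.server 3000 --bind 127.0.0.1 >/dev/null 2>&1 &
BACKEND_PID=$!
sleep 1

# This is the request Nginx issues internally when a client hits "location /"
CODE=$(curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1:3000/)
echo "backend answered HTTP $CODE"

kill $BACKEND_PID
```

If the dummy backend answers, any remaining problem is isolated to the Nginx location mapping itself.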


Great! Now let’s add yet another proxy pointing to another service. This time, let’s assume that the service runs on the same machine again, but this time on port 3001 under /newservice.


  • Simply add another location block:


server {
    . . .
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:3000/;
    }

    location /newservice/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:3001/;
    }
}

Notice that the internal URL (set via proxy_pass) can be different from the one used externally. That is, externally clients use “location /newservice/”, while internally Nginx forwards the request to whatever address proxy_pass specifies.

I hope you found this blog useful.

If you have any question feel free to contact me on LinkedIn at https://www.linkedin.com/in/citurria/


