Teaching How to use Nginx to frontend your backend services with Trusted CA certificates on HTTPS

Nowadays, with the adoption of serverless architectures, microservices have become a great way to break a problem down into smaller pieces. One common situation is having multiple backend services running on technologies like NodeJS, Python, Go, etc. that need to be accessible via HTTPS. It is possible to enable SSL over HTTPS directly on each internal microservice, but a cleaner approach is to use a reverse proxy that front-ends these microservices and provides a single HTTPS access channel, allowing simple internal routing.

In this blog, I show how simple it is to create this front end with Nginx, leveraging “Let’s Encrypt” to generate trusted certificates for it, with strong security policies, so that our website can score an A+ on cryptographic SSL tests conducted by third-party organizations.


Before we start, a couple of prerequisites:

  • You should have a non-root user with sudo privileges.
  • You must own or control the registered domain name that you wish to use the certificate with.

    Note: I am using Ubuntu 16.04 – Adjust accordingly if using another OS.

The instructions to install Nginx on Ubuntu 16.04 and to set up SSL certificates are based on these great articles 1 and 2. Special thanks to Mitchell Anicas for his great work that made this post possible.

Let’s Install Nginx

  • Install Nginx:

    sudo apt-get update

    sudo apt-get install nginx

    Note: The Nginx installation also installs a firewall tool (ufw) that makes it simple to enable and disable ports. Since I am running this environment in the Oracle Public Cloud behind a separate firewall, I don’t want to add unnecessary security controls. However, if your server is directly connected to the public internet, it is recommended that you enable the firewall.

  • Verify that Nginx is up and running:

    systemctl status nginx

  • Also, in a browser, make sure that you can see the Nginx Welcome page by typing your public IP address or domain name (default port 80).

Basic management commands

The following commands will help you manage Nginx:

  • To stop it:

    sudo systemctl stop nginx

  • To start it when it is stopped:

    sudo systemctl start nginx

  • To restart it:

    sudo systemctl restart nginx

  • If you are making configuration changes and want to reload Nginx without dropping connections:

    sudo systemctl reload nginx

  • Nginx is configured to start automatically when the server boots. In order to disable this behaviour:

    sudo systemctl disable nginx

  • To re-enable the service to start up at boot:

    sudo systemctl enable nginx

Let’s add SSL into the mix

  • We are going to use “Let’s Encrypt” to obtain a valid SSL certificate. For this, first install its client, Certbot:

    sudo add-apt-repository ppa:certbot/certbot (You will have to press ENTER to continue)

    sudo apt-get update

    sudo apt-get install certbot

  • As part of this installation, Certbot will need to create a file under /.well-known for security validation. Make a change to the Nginx configuration to point to this file location:

    For this, edit the file /etc/nginx/sites-available/default – inside the server block enter:

    location ~ /.well-known {

    allow all;

    }


  • Once you added this snippet, you can validate for syntax errors by typing:

    sudo nginx -t

    Make sure you get a successful test.

  • Restart Nginx:

    sudo systemctl restart nginx

  • Use the Webroot plugin to request an SSL certificate. In this case I am specifying all the domains I want this certificate to cover.

    sudo certbot certonly --webroot --webroot-path=/var/www/html -d [YOUR-DOMAIN] -d [ANOTHER-DOMAIN]

    If this is the first time you run certbot it will ask you for an email and to agree to some T&Cs. After this, it should issue the certificates.

    Make a note of where it saved them and of the expiration date.

    Note: If you receive an error like Failed to connect to host for DVSNI challenge, your server’s firewall may need to be configured to allow TCP traffic on ports 80 and 443.

  • This is what we just obtained (stored under /etc/letsencrypt/archive):
    • cert.pem: Your domain’s certificate
    • chain.pem: The Let’s Encrypt chain certificate
    • fullchain.pem: cert.pem and chain.pem combined
    • privkey.pem: Your certificate’s private key
  • Certbot creates symbolic links to the most recent certificate files in the /etc/letsencrypt/live/[YOUR-DOMAIN] directory.
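Once issued, you can inspect a certificate’s validity window with openssl. The sketch below creates a throwaway self-signed certificate purely so the command has something to run against; on your server you would point -in at /etc/letsencrypt/live/[YOUR-DOMAIN]/cert.pem instead:

```shell
# Create a throwaway self-signed cert (illustration only) and read its dates.
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
  -subj "/CN=example.test" \
  -keyout "$tmpdir/privkey.pem" -out "$tmpdir/cert.pem" 2>/dev/null

# The notBefore/notAfter lines tell you exactly when the cert expires.
dates=$(openssl x509 -in "$tmpdir/cert.pem" -noout -dates)
echo "$dates"
rm -rf "$tmpdir"
```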

  • To further increase security, also generate a strong Diffie-Hellman group. To generate a 2048-bit group, use this command:

    sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

  • It will be stored under /etc/ssl/certs/dhparam.pem
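You can sanity-check generated Diffie-Hellman parameters with openssl dhparam -text. The sketch below writes a deliberately small 512-bit group to a temporary file so it finishes instantly (use 2048+ bits for real deployments, as above); for the real file, point -in at /etc/ssl/certs/dhparam.pem:

```shell
# Generate a small DH group (512-bit is for demo speed only; use 2048+ for real).
tmpdh=$(mktemp)
openssl dhparam -out "$tmpdh" 512 2>/dev/null

# Inspect it: the header line reports the group size.
header=$(openssl dhparam -in "$tmpdh" -text -noout | head -n 1)
echo "$header"
rm -f "$tmpdh"
```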

  • Now we are going to create a few configuration snippets that we will then reference from within the Nginx configuration file.
  • Create a file that will contain the SSL Certificates (adjust [YOUR-DOMAIN] as expected):

    sudo vi /etc/nginx/snippets/ssl-[YOUR-DOMAIN].conf

    Enter the SSL certificate directives:

    ssl_certificate /etc/letsencrypt/live/[YOUR-DOMAIN]/fullchain.pem;

    ssl_certificate_key /etc/letsencrypt/live/[YOUR-DOMAIN]/privkey.pem;
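A quick way to confirm that the certificate and key you just referenced actually belong together is to compare their public-key moduli. The sketch below demonstrates the check on a throwaway self-signed pair; on your server, substitute the fullchain.pem and privkey.pem paths from the snippet above:

```shell
# Generate a matching throwaway cert/key pair (illustration only).
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=example.test" \
  -keyout "$tmpdir/privkey.pem" -out "$tmpdir/fullchain.pem" 2>/dev/null

# A matching pair shares the same RSA modulus.
cert_mod=$(openssl x509 -in "$tmpdir/fullchain.pem" -noout -modulus)
key_mod=$(openssl rsa -in "$tmpdir/privkey.pem" -noout -modulus)
[ "$cert_mod" = "$key_mod" ] && echo "certificate and key match"
rm -rf "$tmpdir"
```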

  • Then, create another snippet that will configure Nginx with strong SSL cipher security.

    sudo vi /etc/nginx/snippets/ssl-params.conf

Enter the following snippet.

# from https://cipherli.st/

# and https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

ssl_prefer_server_ciphers on;


ssl_ecdh_curve secp384r1;

ssl_session_cache shared:SSL:10m;

ssl_session_tickets off;

ssl_stapling on;

ssl_stapling_verify on;

resolver 8.8.8.8 8.8.4.4 valid=300s;

resolver_timeout 5s;

# disable HSTS header for now

#add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";

add_header X-Frame-Options DENY;

add_header X-Content-Type-Options nosniff;

ssl_dhparam /etc/ssl/certs/dhparam.pem;

Ok, now we are ready to make the specific changes to the Nginx configuration file:

  • First, take a backup of the original Nginx configuration file:

    sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/default.bak

    We are going to create a new server definition. Update your file to look like this:

    server {

    listen 80 default_server;

    listen [::]:80 default_server;

    server_name [YOUR-DOMAIN];

    return 301 https://$server_name$request_uri;

    }


    server {

    # SSL configuration

    listen 443 ssl http2 default_server;

    listen [::]:443 ssl http2 default_server;

    include snippets/ssl-[YOUR-DOMAIN].conf;

    include snippets/ssl-params.conf;

    . . .

Notice that by doing so, the current body of the server definition will become the body of the new HTTPS definition.
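Putting the pieces together, the redirect variant ends up looking roughly like this ([YOUR-DOMAIN] is a placeholder, and the elided body is whatever your original default server block contained):

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name [YOUR-DOMAIN];

    # Permanent redirect of all plain-HTTP traffic to HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    # SSL configuration
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    include snippets/ssl-[YOUR-DOMAIN].conf;
    include snippets/ssl-params.conf;

    # ... original body of the default server block ...
}
```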

It is recommended to configure Nginx to automatically redirect all non-secure HTTP requests to encrypted HTTPS, as we just did. However, if you need to serve both HTTP and HTTPS, use the following configuration instead:

server {

listen 80 default_server;

listen [::]:80 default_server;

listen 443 ssl http2 default_server;

listen [::]:443 ssl http2 default_server;

server_name [YOUR-DOMAIN];

include snippets/ssl-[YOUR-DOMAIN].conf;

include snippets/ssl-params.conf;

. . .

  • Once you are done, validate that the file is accurate by running a test: sudo nginx -t

    Also make sure that you adjust your firewall to allow ports 443 and 80 (in case you are still serving plain HTTP) into your server.

  • That’s it, now you can restart Nginx:

    sudo systemctl restart nginx

  • Test your domain in a browser using plain HTTP; if you decided to redirect, it should automatically present the Nginx Welcome page over HTTPS.

Setting Up Auto Renewal

“Let’s Encrypt” certificates are only valid for ninety days, so you need to plan for renewing them. Let’s create a simple cron job that can help us with that task.

  • Create a cron job:

    sudo crontab -e

  • Copy and paste the following line at the end of the file. We are basically setting up a check every day at 12:00 am:

    00 00 * * * /usr/bin/certbot renew --quiet --renew-hook "/bin/systemctl reload nginx"

    The renew command for Certbot will check all certificates installed on the system and renew any that are set to expire in less than thirty days. --quiet tells Certbot not to output information nor wait for user input. --renew-hook "/bin/systemctl reload nginx" will reload Nginx to pick up the new certificate files, but only if a renewal has actually happened.
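For reference, the five leading fields of a crontab entry are minute, hour, day of month, month and day of week, so the entry above fires at 00:00 (midnight) every day:

```
# m   h   dom  mon  dow  command
  00  00  *    *    *    /usr/bin/certbot renew --quiet --renew-hook "/bin/systemctl reload nginx"
```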

  • That’s it. All installed certificates will be automatically renewed and reloaded when they have thirty days or less before they expire.

Validate how secure your site is now

You can use the Qualys SSL Labs Report to see how your server configuration scores.

After the various strong cipher security settings applied above, you should score a beautiful A+ rating!

Finally, let’s make our reverse proxy configuration

Now that we are fully secured, let’s create our reverse proxy configuration to route to our internal services.

  • Edit file /etc/nginx/sites-available/default
  • Within the server block, add a “location /” configuration. This basically maps an external URI to an internal one. In this case we assume that, on the same machine where Nginx runs, there is a service running on port 3000.

For example (pay special attention to the proxy_set_header and proxy_pass directives):

server {

. . .

location / {

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

proxy_pass http://localhost:3000/;

}

}



  • Test your Nginx configuration with sudo nginx -t
  • If the test is successful, then restart Nginx (see command above).
  • From now on, requests to http://IP-Address/ will be transparently proxied to the internal service on port 3000.

Great! Now let’s add yet another proxy pointing to another service. This time, let’s assume that the service runs on the same machine again, but this time on port 3001 under /newservice.

  • Simply add another location block:

server {

. . .

location / {

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

proxy_pass http://localhost:3000/;

}

location /newservice/ {

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

proxy_pass http://localhost:3001/;

}

}


Notice that the internal URL (proxy_pass) can be different from the one used externally. That is, externally clients call “location /newservice/”, while internally Nginx forwards the request to whatever the proxy_pass directive points at.
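One detail worth knowing about proxy_pass path mapping (shown here with the hypothetical port-3001 backend): when proxy_pass includes a URI part such as a trailing slash, Nginx replaces the matched location prefix before forwarding; without one, the prefix is passed through unchanged:

```nginx
# With a trailing slash the prefix is replaced:
# GET /newservice/foo  ->  backend receives GET /foo
location /newservice/ {
    proxy_pass http://localhost:3001/;
}

# Without it the prefix is kept (alternative -- do not define both
# in the same server block):
# GET /newservice/foo  ->  backend receives GET /newservice/foo
# location /newservice/ {
#     proxy_pass http://localhost:3001;
# }
```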

I hope you found this blog useful.

If you have any question feel free to contact me on LinkedIn at https://www.linkedin.com/in/citurria/


Author: Carlos Rodriguez Iturria

I am extremely passionate about people, technology and the most effective ways to connect the two by sharing my knowledge and experience. Working collaboratively with customers and partners inspires and excites me, especially when the outcome is noticeably valuable to a business and results in true innovation. I enjoy learning and teaching, as I recognise that this is a critical aspect of remaining at the forefront of technology in the modern era. Over the past 10+ years, I have developed and defined solutions that are reliable, secure and scalable, working closely with a diverse range of stakeholders. I enjoy leading engagements and am very active in the technical communities – both internal and external. I have stood out as a notable mentor running technology events across major cities in Australia and New Zealand, covering various technology areas such as Enterprise Integration, API Management, Cloud Integration, IaaS and PaaS adoption, DevOps, Continuous Integration, and Continuous Automation, among others. In recent years, I have shaped my role and directed my capabilities towards educating and architecting benefits for customers using Oracle and AWS Cloud technologies. I get especially excited when I am able to position both as a way to exceed my customers’ expectations. I hold a bachelor’s degree in Computer Science and certifications in Oracle and AWS Solutions Architecture.
