
Introduction to NGINX


This blog post summarizes basic use cases of NGINX: static hosting, reverse proxying, and load balancing.

It provides a fully reproducible, Docker-based configuration for each use case.

What is NGINX?

NGINX is a popular piece of web middleware that can serve as a static file host, reverse proxy, load balancer, and HTTP cache.

It is open-source software released under a permissive 2-clause BSD (FreeBSD-style) license.

Why is NGINX so popular?

ChatGPT answers the following (emphasis by me):

NGINX is popular for several reasons:

  1. High Performance and Scalability: NGINX is known for its high performance and scalability, capable of handling thousands of concurrent connections with low resource consumption. This makes it ideal for high-traffic websites and web applications.

  2. Easy to Configure and Customize: NGINX has a simple and intuitive configuration system, making it easy to customize and optimize for specific use cases. It also has a large ecosystem of modules and plugins that can extend its functionality.

  3. Reliability and Stability: NGINX is highly reliable and stable, with a low rate of crashes and failures. It uses an event-driven, asynchronous architecture to handle multiple connections efficiently, and can be configured to automatically recover from failures.

  4. Security: NGINX has several built-in security features, including SSL and TLS encryption, access control, and anti-DDoS measures. It also supports integration with third-party security tools and services.

  5. Open-Source and Community Support: NGINX is open-source software, which means that it is freely available and can be modified and distributed by anyone. It has a large and active community of developers and users, who contribute to its development, documentation, and support.


As a side note, not all open-source software can be modified and redistributed without restrictions. NGINX's 2-clause BSD license, however, is one of the most permissive licenses, which is why these freedoms apply here.

Setup using Docker

Before diving into use cases, we need to install Docker and clone the demo repository.


This section uses Docker, so if you have not installed it yet, please do so now.

If your operating system is macOS or Windows, I recommend installing Docker Desktop (free for individuals and small businesses) or Rancher Desktop (free for anyone). Both ship with a virtual machine that provides the Linux environment docker actually runs on.

If you are using Linux, you can install the docker command with your distribution's package manager.

Verify installation

  1. Open the terminal and run docker run --rm -p 8080:80 nginx to start an NGINX container from the official image
  2. Then, open your browser and visit http://localhost:8080/
  3. If you see the "Welcome to nginx!" page, the installation and setup are successful

You can stop the running container with Ctrl-C.

What do those options mean?
  • --rm: removes the container after it is stopped
  • -p 8080:80: maps port 8080 on your machine to port 80 inside the container

Downloading the demo code

In the terminal, run git clone to clone the repo.

Use Cases

NGINX as a Static Hosting solution

We won't dive deep into static hosting here; I'll just cover the basics.


  1. Navigate into static_hosting directory of the cloned repo and run docker compose up --build
  2. Visit http://localhost:8080


server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    error_page 404 /404.html;

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Explanation

This is a basic NGINX configuration that sets up a web server listening on port 80 and serving content from the /usr/share/nginx/html directory.

The listen directive specifies the IP address and port number that NGINX will listen on. In this case, it is listening on port 80 for HTTP traffic.

The server_name directive specifies the hostname of the server. In this case, it is set to localhost.

The location directive defines how NGINX should handle requests that match a specific URL pattern. In this case, any request matching the root / URL is served files from the /usr/share/nginx/html directory, with index.html or index.htm as the default file.

The error_page directive specifies which HTML file to serve in the event of a specific error. In this case, it is set to serve /404.html for a 404 Not Found error, and /50x.html for any 500, 502, 503, or 504 error. The second location block specifies the location of the /50x.html file.

Overall, this configuration sets up a simple web server that serves static files from a specified directory and handles common HTTP errors.
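As a small extension, static assets are often served with explicit cache headers so browsers do not re-fetch them on every visit. The following is a hedged sketch, not part of the demo repo; the file extensions and expiry values are illustrative:

```nginx
# Illustrative: serve common static assets with long-lived cache headers.
location ~* \.(png|jpg|css|js)$ {
    root /usr/share/nginx/html;
    expires 7d;                       # let browsers cache for a week
    add_header Cache-Control "public";
}
```

This block would sit inside the server block alongside the existing location / block; the regex location takes precedence for matching file types.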

NGINX as a Reverse Proxy

It is also very simple and easy to configure NGINX as a reverse proxy.
Please see the following config file.

server {
    listen 80;
    #listen [::]:80;
    server_name localhost;

    #access_log /var/log/nginx/host.access.log main;

    location /hello {
        proxy_pass http://hello:8000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

The proxy_pass directive instructs NGINX to forward requests hitting the /hello endpoint to port 8000 of the hello host.
By default, NGINX rewrites the Host header to the proxied host, so it is common practice to explicitly restore the original (client-requested) host using the proxy_set_header directive.
Lastly, the X-Forwarded-For header is set to the chain of client and intermediate proxy addresses.
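The demo also exposes a /bye route. A matching location block would look like the following sketch, assuming the compose file defines a bye service listening on port 8000:

```nginx
# Illustrative: forward /bye requests to the "bye" backend service.
location /bye {
    proxy_pass http://bye:8000/;
    proxy_set_header Host $host;              # preserve the client-requested host
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```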


  1. Navigate into reverse_proxy directory of the cloned repo and run docker compose up --build
  2. Run curl http://localhost:8080/hello or curl http://localhost:8080/bye to receive responses from the proxied backends

NGINX as a Load Balancer

By defining an upstream block, NGINX can act as a load balancer.


NGINX supports the major load-balancing algorithms, such as round-robin, least connections, least time, and IP hash.
Most of these accept configurable weights, which is useful when the upstream servers have differing machine specs.
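For example, switching an upstream from the default round-robin to least connections is a one-directive change. A hedged sketch (the server names are illustrative, not from the demo repo):

```nginx
upstream backend {
    least_conn;                   # pick the server with the fewest active connections
    server app1:8000 weight=2;    # heavier-spec machine gets twice the share
    server app2:8000 weight=1;
}
```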

With NGINX Plus, it is possible to identify user sessions and pin their traffic to the appropriate upstream server regardless of the load-balancing algorithm.
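In NGINX Plus this is done with the sticky directive; on open-source NGINX, ip_hash gives a rough equivalent by pinning each client IP to one server. A sketch of both, with illustrative server names:

```nginx
# NGINX Plus only: session persistence via a cookie.
upstream backend_plus {
    server app1:8000;
    server app2:8000;
    sticky cookie srv_id expires=1h;
}

# Open-source NGINX: pin clients by IP address instead.
upstream backend_oss {
    ip_hash;
    server app1:8000;
    server app2:8000;
}
```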


NGINX provides various timeouts, both between the client and NGINX and between NGINX and the upstream servers.
For example, proxy_connect_timeout limits how long NGINX waits while establishing a connection with an upstream server,
and proxy_send_timeout limits how long NGINX may spend transmitting a request to it.

These timeouts should be set appropriately in production environments so that unresponsive upstreams are detected quickly instead of tying up connections.
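A hedged sketch of how such timeouts might be set in a proxy location; the values are illustrative, not tuned recommendations:

```nginx
location / {
    proxy_pass http://greetings/;
    proxy_connect_timeout 5s;   # give up if the upstream TCP connect takes >5s
    proxy_send_timeout 10s;     # abort if sending the request stalls for >10s
    proxy_read_timeout 30s;     # abort if reading the response stalls for >30s
}
```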


upstream greetings {
    server hello:8000 weight=2;
    server bye:8000 weight=1;
}

server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://greetings/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

The above NGINX config declares a greetings upstream comprising hello:8000 and bye:8000 with different weights.
Since no balancing directive is specified, this upstream uses the default weighted round-robin algorithm to distribute requests.

The defined greetings upstream is used as the target proxy for requests to / by using proxy_pass directive.


Conclusion

In this post, we saw basic usage of NGINX, including static hosting, reverse proxy, and load balancer.

I will write another blog on how to configure HTTP cache in NGINX later.