How to Enable Brotli Compression in Nginx
https://linuxhint.com/enable-brotli-compression-nginx/ (Sun, 28 Feb 2021)

Brotli compression is a general-purpose compression technique widely supported across browsers. Compared to other currently available compression methods, it offers 20-26% better compression ratios. However, it is of no use unless the web server sends text-based resources compressed with the Brotli algorithm.

In this article, we will learn how compression works on the server and why it is useful. We will also install the Nginx server and configure it to serve Brotli-compressed files.

Background

Compression techniques/algorithms improve website performance by reducing the content size, so the compressed data takes less time to load and transfer. However, this comes at a price: servers spend considerable computational resources to achieve better compression ratios. The better the compression, the more expensive it is, so a great deal of effort goes into improving compression formats while using minimal CPU cycles.

Until now, the dominant compression format has been gzip. Recently, gzip has been joined by a newer compression algorithm known as Brotli. It is an advanced compression algorithm combining Huffman coding, the LZ77 algorithm, and context modeling. In contrast, gzip is built on the Deflate algorithm.

The lossless compression format, designed by Google, is closely related to the Deflate compression format. Both compression methods use sliding windows for back-referencing. The Brotli sliding window size ranges from 1 KB to 16 MB, whereas gzip has a fixed window size of 32 KB. That means Brotli's maximum window is 512 times larger than the Deflate window, although this rarely matters in practice, since text files larger than 32 KB are uncommon on web servers.
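The 512x figure can be sanity-checked with a line of shell arithmetic, dividing Brotli's maximum window by gzip's fixed one:

```shell
# Brotli's maximum sliding window is 16 MB; gzip/deflate's is fixed at 32 KB.
brotli_window_kb=$((16 * 1024))   # 16 MB expressed in KB
gzip_window_kb=32
echo $((brotli_window_kb / gzip_window_kb))   # prints: 512
```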

Server Compression Compatibility is Important

Whenever we download a file, the browser tells the server which compression formats it supports through a request header. For instance, if the browser can decompress gzip and deflate, it adds these options to its Accept-Encoding header:

Accept-Encoding: deflate, gzip

Browsers that do not support these formats will not include them in the header. When the server responds with the content, it tells the browser which compression format it used through the Content-Encoding response header. If it used gzip, the header looks like this:

Content-Encoding: gzip

For a browser like Firefox that supports Brotli compression, talking to a web server that has the Brotli module installed, the headers look like this:

Accept-Encoding: deflate, gzip, br
Content-Encoding: br

Hence, if the browser supports a better compression format but the web server does not, there is no benefit, since the server cannot send back files in the browser's preferred algorithm. That is why it is important to install the compression module on the web server.
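The negotiation described above can be sketched as a small shell function: given the client's Accept-Encoding list and the encodings the server supports, it picks the first mutually supported one. This is a simplified model (real servers also weigh q-values, which are ignored here), and the function name is illustrative:

```shell
# Pick the first client-offered encoding that the server also supports.
# $1: client Accept-Encoding value, e.g. "deflate, gzip, br"
# $2: space-separated server-side encodings, e.g. "gzip br"
negotiate_encoding() {
    client="$(printf '%s' "$1" | tr ',' ' ')"
    for enc in $client; do
        for srv in $2; do
            [ "$enc" = "$srv" ] && { echo "$enc"; return 0; }
        done
    done
    echo "identity"   # no match: send the content uncompressed
}

negotiate_encoding "deflate, gzip, br" "gzip br"   # prints: gzip
negotiate_encoding "br, gzip" "gzip br"            # prints: br
```

If the server had the Brotli module installed ("gzip br" above), a br-capable browser gets Brotli; otherwise the server falls back to gzip or uncompressed content.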

Server Installation

Before moving forward with the Brotli configuration, we will set up the Nginx server. Update your Ubuntu package lists, then install and start Nginx by typing the following commands in your terminal.

ubuntu@ubuntu:~$ sudo apt-get update
ubuntu@ubuntu:~$ sudo apt-get install nginx -y
ubuntu@ubuntu:~$ sudo service nginx start

To enable Brotli compression in Nginx, we will compile the .so modules against our exact Nginx version. Typing the following command outputs the Nginx version:

ubuntu@ubuntu:~$ nginx -v
nginx version: nginx/1.18.0 (Ubuntu)

Use the wget command along with your nginx version detail to download the source code from the Nginx website.

ubuntu@ubuntu:~$ wget https://nginx.org/download/nginx-1.18.0.tar.gz
--2021-02-07 02:57:33--  https://nginx.org/download/nginx-1.18.0.tar.gz
Resolving nginx.org (nginx.org)... 3.125.197.172, 52.58.199.22, 2a05:d014:edb:5702::6, ...
Connecting to nginx.org (nginx.org)|3.125.197.172|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1039530 (1015K) [application/octet-stream]
Saving to: 'nginx-1.18.0.tar.gz'

nginx-1.18.0.tar.gz             100%[==================================================================>]   1015K   220KB/s in 4.8s

2021-02-07 02:57:38 (212 KB/s) - ‘nginx-1.18.0.tar.gz’ saved [1039530/1039530]

We will use this source code to compile *.so binaries for Brotli compression. Now extract the file using the following command.

ubuntu@ubuntu:~$ tar xzf nginx-1.18.0.tar.gz

Brotli Module Configuration

Google has released a Brotli module for Nginx. We will git-clone the module from Google's repository.

ubuntu@ubuntu:~$ git clone https://github.com/google/ngx_brotli --recursive

We will cd into the nginx-1.18.0 folder to configure the dynamic Brotli module.

ubuntu@ubuntu:~$ cd nginx-1.18.0/
ubuntu@ubuntu:~$ sudo ./configure --with-compat --add-dynamic-module=../ngx_brotli

Note: You may receive the following error while configuring

./configure: error: the HTTP rewrite module requires the PCRE library.
You can either disable the module by using --without-http_rewrite_module
option, or install the PCRE library into the system, or build the PCRE library
statically from the source with nginx by using --with-pcre=<path> option.

In that case, run the following command to install the PCRE library:

ubuntu@ubuntu:~$ sudo apt-get install libpcre3-dev -y

Module Compilation

We will use the make command to compile the modules; the resulting .so files are placed in the objs folder inside the nginx-1.18.0 directory.

ubuntu@ubuntu:~$ sudo make modules

We use the cp command to copy the ngx_http_brotli*.so files from the nginx-1.18.0/objs folder to the Nginx modules folder.

ubuntu@ubuntu:~$ cd ~/nginx-1.18.0/objs/
ubuntu@ubuntu:~$ sudo cp ngx_http_brotli*.so /usr/share/nginx/modules

Now list the module files using the ls command. You will notice two different module files:

ubuntu@ubuntu:~$ ls ngx_http_brotli*.so

ngx_http_brotli_filter_module.so
ngx_http_brotli_static_module.so
  • Regular Brotli Module: The ngx_http_brotli_filter_module.so module compresses all files on the fly, and hence requires more computational resources.
  • Static Brotli Module: The ngx_http_brotli_static_module.so module serves pre-compressed static files, and is therefore less resource-intensive.
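To take advantage of the static module, the .br files must already exist next to the originals. A sketch that pre-compresses typical text assets with the brotli command-line tool (from the brotli package); it is shown in dry-run form, printing the commands it would execute, and WEBROOT defaults to the current directory as an example:

```shell
# Print the brotli commands that would pre-compress text assets in a web root.
# Set WEBROOT to your site's root (e.g. /var/www/html) and pipe the output
# to `sh` to actually run the commands.
WEBROOT="${WEBROOT:-.}"
find "$WEBROOT" -type f \( -name '*.html' -o -name '*.css' -o -name '*.js' \) |
while read -r f; do
    # -q 11 selects maximum compression quality, fine for offline compression
    echo brotli -q 11 -o "$f.br" "$f"
done
```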

Now use your favorite editor to open the /etc/nginx/nginx.conf file and begin the Brotli configuration by adding the following load_module lines:

ubuntu@ubuntu:~$ sudo vim /etc/nginx/nginx.conf

# Load module section
load_module "modules/ngx_http_brotli_filter_module.so";
load_module "modules/ngx_http_brotli_static_module.so";

We will also include the configuration folder paths /etc/nginx/conf.d/*.conf and /usr/share/nginx/modules/*.conf in the http block of the same file:

http {
# Include configs folders
include /etc/nginx/conf.d/*.conf;
include /usr/share/nginx/modules/*.conf;
}

To add the Brotli configuration, open the /etc/nginx/conf.d/brotli.conf file in the vim editor and enable Brotli by setting the following configuration directives:

brotli on;
brotli_static on;
brotli_comp_level 6;
brotli_types application/rss+xml application/xhtml+xml text/css text/plain;

The "brotli on|off" directive enables or disables dynamic, on-the-fly compression of the content.

The "brotli_static on" directive lets the Nginx server check whether pre-compressed files with the .br extension exist. The setting can also be turned to off or always. The always value makes the server send pre-compressed content without confirming whether the browser supports it. Since Brotli is resource-intensive, the static module is best suited to reducing bottlenecks.

The “brotli_comp_level 6” directive sets the dynamic compression quality level to 6. It can range from 0 to 11.

Lastly, enable dynamic compression for specific MIME types; text/html responses are always compressed. The default syntax for this directive is brotli_types [mime type]. You can find more about the configuration directives on GitHub.

Save the changes, restart the Nginx service by typing "sudo service nginx restart", and it's all done.

Conclusion

After the changes, you will notice obvious improvements in the performance metrics. However, they come with a slight drawback: increased CPU load at peak times. To avoid such situations, keep an eye on CPU usage; if it regularly reaches 100%, there are several options to choose from, such as serving pre-compressed static content, lowering the compression level, or turning off on-the-fly compression.

How to install multiple domains on a Nginx server
https://linuxhint.com/install-multiple-domains-nginx-server/ (Wed, 03 Feb 2021)

Nowadays, many webmasters run multiple domain names on the same server, as this reduces the cost and complexity of handling many websites. This guide uses Nginx as the web server due to its high performance, flexibility, and ease of configuration. It teaches how to install multiple domain names on the same Nginx web server and encrypt the traffic to both domains free of charge.

Install Nginx

By default, Ubuntu doesn't ship with Nginx, so it has to be installed manually with the following commands.

sudo apt-get update
sudo apt-get install nginx

The first command updates the local repository information, whereas the second command installs Nginx on the system.

Configure the Firewall

Configuring the firewall depends on the firewall software installed on the system. Since several firewalls are available, it isn't practical to cover how to configure each of them. Thus, this guide only demonstrates how to configure the default, built-in firewall: UFW, aka the uncomplicated firewall. Other firewalls should have a similar configuration.

sudo ufw app list
sudo ufw allow 'Nginx Full'
sudo ufw enable

The first command lists the available application profiles for the firewall. The second command adds the 'Nginx Full' profile to the firewall's allow (aka whitelist) list, opening both the HTTP (80) and HTTPS (443) ports; HTTP access is needed later when certbot verifies the domains. The third command enables the firewall. This guide later demonstrates how to use HTTPS. HTTPS is necessary nowadays, as it secures the data connection between the client and the server. Browsers like Chrome will default to the HTTPS version of any site in the future, so SSL should be enabled for any website, especially when the owner wants to improve its SEO score and security.

Configure File System

Even though Nginx supports serving content through multiple domain names, it is configured by default to serve content through a single one. The default Nginx web root is /var/www/html. Multiple domains require multiple directories. The following instructions demonstrate how to create multiple directories to serve content for multiple domains.

  1. Create a directory for each domain with the following commands. The -p flag creates any missing parent directories: when www or any other directory in the path doesn't exist, the whole chain of directories is created.
  2. sudo mkdir -p /var/www/nucuta.com/html
    sudo mkdir -p /var/www/nucuta.net/html
  3. Assign ownership of the directories. This ensures the user has full control over them. The user is taken from the currently logged-in user, so it's important to be logged in to the account that will own the directories. The first segment of $USER:$USER is the user, and the second segment is the group to which the user belongs.
  4. sudo chown -R $USER:$USER /var/www/nucuta.com/html
    sudo chown -R $USER:$USER /var/www/nucuta.net/html
  5. Change the permission of the directories with the following commands. There are 3 entities and 3 permissions in Linux file systems. In the following example, the first digit is for the user, the second digit is for the group, and the last digit is for all others (aka public). The read permission has the value 4, the write permission the value 2, and the execute permission the value 1. These numbers are added together to set an entity's permission; for instance, 755 means the USER can READ, WRITE, and EXECUTE (4+2+1 = 7), the GROUP can READ and EXECUTE (4+1 = 5), and ALL others can do the same. Note that permissions have different effects on files and directories.
  6. sudo chmod -R 755 /var/www/nucuta.com/html
    sudo chmod -R 755 /var/www/nucuta.net/html
  7. Once the permissions are assigned, create a default page for each domain to be served when the naked domain is requested in the web browser. A naked domain means the domain without any sub-domains, for example nucuta.com.
  8. nano /var/www/nucuta.com/html/index.html
    nano /var/www/nucuta.net/html/index.html
  9. Add the following boilerplate code to each index file, and save it as index.html in the respective directory (as seen above).
<html>
<head>
    <title>Welcome to Site One</title>
</head>
<body>
    <h1>Success! </h1>
</body>
</html>
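The octal arithmetic in step 5 can be verified directly: apply chmod 755 to a scratch directory and read the mode back with stat (the stat -c flag is GNU coreutils syntax, as found on Ubuntu; the directory is a throwaway created just for the demonstration):

```shell
# 7 = 4(read)+2(write)+1(execute) for the user; 5 = 4+1 for group and others.
demo_dir="$(mktemp -d)"
chmod 755 "$demo_dir"
stat -c '%a' "$demo_dir"    # prints: 755
rmdir "$demo_dir"
```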

Configure Nginx

Configuring Nginx is not difficult, as Nginx supports multiple domains by default. Even though it's possible to put the configuration for multiple domains in the same file, it's advisable to use a separate file for each domain's configuration. The default configuration file is named "default" and is located at /etc/nginx/sites-available/default.

  1. Open /etc/nginx/sites-available/default and delete all the configuration information, using a text editor like nano or vim.
  2. nano /etc/nginx/sites-available/default
  3. Copy and paste the following configuration, and save it.
  4. server {
            listen 80 default_server;
            listen [::]:80 default_server;

            root /var/www/html;
            index index.html index.htm index.nginx-debian.html;

            server_name _;

            location / {
                    try_files $uri $uri/ =404;
            }
    }
  5. Copy the configuration information in default file to a domain-specific configuration file with the following command.
  6. sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/nucuta.com
  7. Repeat the aforesaid step to the other domain as well with the following command.
  8. sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/nucuta.net
  9. Open both files with a text editor like nano, and change the server_name directive's value as follows.
  10. In /etc/nginx/sites-available/nucuta.com file
    server_name nucuta.com
    In /etc/nginx/sites-available/nucuta.net file
    server_name nucuta.net
  11. Once both files are configured, link them into the following directory to activate the configurations. This creates a symbolic link between the actual file and the directory, so in the future only the files in the sites-available directory have to be altered for changes to take effect in both the sites-available and sites-enabled directories.
  12. sudo ln -s /etc/nginx/sites-available/nucuta.com /etc/nginx/sites-enabled/
    sudo ln -s /etc/nginx/sites-available/nucuta.net /etc/nginx/sites-enabled/
  13. Go through the configuration files, make any further changes, and use the following commands to make the changes effective. The first command checks that the configuration files are free of invalid configuration information, and the second command reloads or restarts the server. Use either the reload or restart command; reload is preferred, but restart can be used if reload doesn't work.
  14. sudo nginx -t
    sudo systemctl reload nginx or sudo systemctl restart nginx
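Steps 5 through 12 repeat the same copy, edit, and link cycle for each domain, so they are easy to script. A hedged sketch that generates each server block from a template mirroring the configuration shown above (it writes to a temporary directory so the result can be inspected; point outdir at /etc/nginx/sites-available, and run with sudo, for real use):

```shell
# Generate a per-domain Nginx server block from a simple template.
outdir="$(mktemp -d)"
for domain in nucuta.com nucuta.net; do
    cat > "$outdir/$domain" <<EOF
server {
    listen 80;
    listen [::]:80;

    root /var/www/$domain/html;
    index index.html index.htm index.nginx-debian.html;

    server_name $domain;

    location / {
        try_files \$uri \$uri/ =404;
    }
}
EOF
done
grep -H server_name "$outdir"/*   # show the server_name line of each file
```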

Configure the DNS Records

Configuring the DNS settings depends on the DNS provider. However, all DNS providers have a similar interface. By default, the domain registrar provides access to the DNS records. This phase requires the IP address of the server where the Nginx web server is hosted. Getting the IP address depends entirely on the platform. Platforms like Linode, DigitalOcean, and Vultr display the IP in the dashboard. If it's difficult to find, contact the support of the respective service provider.

  1. In the DNS settings, add an "A" record with the server's IP as the value and the domain name as the host. Make sure the domain name used here is the same as the one used in the Nginx configuration file. After configuring one domain, repeat for the other domain as well.
  2. Allow time for the DNS records to propagate. It can take up to 24 hours, but it's usually done in a few minutes.

Enable HTTPS

Enabling HTTPS is quite simple and can be done free of charge with Let's Encrypt, an open-source certificate authority that issues free SSL certificates to webmasters to encrypt the traffic to their websites.

  1. Install snapd on the operating system with the following commands. Note that this segment uses the snap daemon to install the required packages instead of apt or apt-get. Snap is an alternative package management and deployment tool that can be used to install packages on Ubuntu and many other Linux operating systems. It comes pre-installed on Ubuntu 16.04 LTS and later; however, still run the last command to ensure snap is up to date.
  2. sudo apt update
    sudo apt install snapd
    sudo snap install core; sudo snap refresh core
  3. Install certbot, which configures and renews the SSL certificates for both domains. Without certbot, SSL certificates would have to be installed, and renewed, manually. This can be a problem, as Let's Encrypt certificates expire after 90 days; the certificate must therefore be renewed roughly every 3 months to ensure the site keeps functioning as expected. Use the following command to install certbot with ease.
  4. sudo snap install --classic certbot
  5. Certbot is installed in the /snap/bin directory. To run the certbot executable from the command line without specifying its full path, run the following command. It creates a symbolic link from /snap/bin/certbot to /usr/bin/certbot, allowing the certbot executable to run on the command line without its full path.
  6. sudo ln -s /snap/bin/certbot /usr/bin/certbot
  7. Configure the Nginx instance on the system with the following command. The second command below directly targets a specific domain when configuring SSL: it installs and configures the SSL certificate for the specified domain name.
  8. sudo certbot --nginx
    sudo certbot --nginx -d nucuta.com
  9. Run the following command to simulate the renewal process. The actual command without the --dry-run flag is executed automatically, as certbot configures a cron job to run it periodically. A dry run is required to ensure certbot can renew the certificates without any obstacle.
  10. sudo certbot renew --dry-run

Conclusion

Configuring multiple domain names on an Nginx web server is quite easy, as Nginx provides a plethora of options to make the process simple. Certbot makes it possible to install SSL certificates for multiple domains on an Nginx web server. This guide uses Let's Encrypt, which provides SSL certificates free of charge for any number of domains. The only downside of Let's Encrypt certificates is their short lifetime, but certbot's automatic renewal ensures this isn't a problem for the webmaster.

Start, Stop, and Restart Nginx Web Server on Linux
https://linuxhint.com/nginx-web-server-start-stop-restart/ (Mon, 28 Dec 2020)

NGINX is an open-source web server with features for load balancing, caching, and functioning as a reverse proxy.

Igor Sysoev created it to overcome the limits of scaling and concurrency existing within regular web servers, offering an event-based, asynchronous architecture that enhances NGINX’s performance and stability as a web server.

As is the case with managing all servers, you’ll find yourself needing to start, stop, and restart the NGINX web server for various reasons.

This guide discusses how to use various methods to manage the NGINX service running on a Linux system.

NOTE: If you are running NGINX on a remote server, you will need to have an SSH connection. Ensure you also have sudo or root access to your system.

How to Manage the NGINX Service With The Systemd Service Manager

One way to manage the NGINX service is with the systemd service manager, commonly accessed through the systemctl command. This method only works if the system where NGINX is installed uses systemd as its service manager.

How to View the NGINX web server status

In most cases, NGINX is installed as a service and runs in the background. Although NGINX runs in the background, there are ways to view the service status using the systemctl utility.

To view how the service is running, open the terminal window, and enter the command:

sudo systemctl status nginx

The command above will display information about the NGINX service, showing one of the following scenarios.

NOTE: Press Q to quit from status mode to shell.

  • A green indicator, which indicates that the service is active and running
  • A red indicator, which indicates the service has failed with information about the cause of the failure
  • A white indicator indicating that the service is inactive and not running (stopped)

How To Use systemd to Start and Stop the Nginx Service

Systemd is a universal utility that manages services in most Linux distributions. If NGINX is configured to run as a service, we can use systemctl to start and stop it.

To start the Nginx service, use the command:

sudo systemctl start nginx

To stop Nginx, use the command:

sudo systemctl stop nginx

How to Use systemd to restart the NGINX Service

You can also use systemd to restart the NGINX service. Restarting a service shuts down all running processes and starts them afresh. Restarting is very useful when applying configuration changes to the server, eliminating the need to reboot the entire system.

There are two ways to restart a service:

  • Reload: Reloading a service keeps it running but tries to apply changes in the configuration files. If the process encounters errors, the update aborts, and the service keeps running.
  • Restarting: Restarting, also called a forceful reboot, completely shuts down the service and its worker processes and applies any configuration file changes. If the configuration changes contain errors, the service stays down until the issues get resolved.

How to Reload the Nginx Service (Graceful restart)

To restart the NGINX service gracefully using systemd, use the command:

sudo systemctl reload nginx

The above command requires the service to be running.

How to Force Restart Nginx Service

If you are performing critical changes to the NGINX server, you should restart the service. Restarting force-closes all running processes, reinitializes them, and applies the new changes. This is very useful when performing updates, changing ports, network interfaces, etc.

You can use the command:

sudo systemctl restart nginx
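Because a failed restart leaves the site down while a failed reload does not, a common safeguard is to validate the configuration first and only reload when the check passes. A sketch of that pattern follows; the real commands would be nginx -t and systemctl reload nginx, but placeholders are used so the snippet runs anywhere, and the function name is illustrative:

```shell
# Reload only if the configuration check succeeds.
safe_reload() {
    check_cmd="$1"     # e.g. "nginx -t"
    reload_cmd="$2"    # e.g. "systemctl reload nginx"
    # Intentionally unquoted so multi-word commands split into words.
    if $check_cmd; then
        $reload_cmd
    else
        echo "config test failed; not reloading" >&2
        return 1
    fi
}

safe_reload true  "echo reloading"                     # check passes: reloads
safe_reload false "echo reloading" || echo "skipped"   # check fails: refuses
```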

How to Manage The NGINX Service With Nginx Commands

NGINX has a set of built-in tools that are accessible using the Nginx command. We can use these commands to interact with the service manually.

How to use Nginx commands to start NGINX

You can start the NGINX service using the command

sudo /etc/init.d/nginx start

This command will display output indicating that the service is starting.

How to stop the NGINX web server using Nginx commands

To stop the Nginx service and all related processes, you can use the command:

sudo /etc/init.d/nginx stop

You will get output confirming that the service has stopped.

How to force-close and restart the NGINX web server using a command

You can also force-close and restart all nginx processes using the command:

sudo /etc/init.d/nginx restart

Note that sudo nginx -s reopen does not restart the server; it only reopens the log files.

How to reload the NGINX web server using a command

To reload the nginx service and its processes gracefully, use either of the commands:

sudo /etc/init.d/nginx reload
sudo nginx -s reload

How to shut down the NGINX server using a command

If you want to shut down all Nginx services without rebooting the system, use the command

sudo nginx -s quit

The quit signal shuts down gracefully, waiting for worker processes to finish serving current requests; to stop immediately instead, use sudo nginx -s stop.

Conclusion

In this article, we have discussed various methods you can use to interact with the NGINX service. Using what you've learned, you can manage the Nginx web server and troubleshoot server-related problems.

How do I use Nginx Docker?
https://linuxhint.com/nginx-docker/ (Wed, 02 Dec 2020)

Nginx is a fast, open-source, and reliable web server used for server-side application development. The Nginx server application runs on many different operating systems and is very useful for development tasks. Therefore, Docker provides an official container image for the Nginx server.

The open-source Docker platform contains the Docker Engine, a runtime environment used to build, run, and orchestrate containers. A term used in the article below is 'Docker Hub', a hosted service where containerized applications are shared, distributed, and collaborated on with the rest of the development community. Dockerized applications are portable and can run in any environment: a laptop, a VM, the cloud, or a bare-metal server. Modular components can be reassembled into fully-featured applications and do their work in a real-time environment.

In this article, we will elaborate on how you can use Nginx with Docker and easily set it up on your system.

All the below-given steps are implemented on Ubuntu 20.04 Focal Fossa release.

Prerequisites

We have to fulfill the following requirements to complete this article:

  1. You need to sign up for a free Docker account, where you can receive free public repositories.
  2. Docker should be installed and running locally on your system.
  3. You need a root account, or a user who can run sudo commands.

For a better understanding of the Nginx docker, you have to perform the following steps:

Step 1: Pull the Nginx Docker image from Docker Hub. Log in to your Docker Hub account; if you are not registered, you can register for a free account. Once you are logged in to Docker Hub, you can search for and view the Nginx image, as described below.

To search for Nginx images, type nginx in the search bar and then click on the official Nginx link displayed in the search results.

Step 2: Here you will see the docker pull command for Nginx. Now, on your Docker host, run the following docker pull command in the terminal to download the latest Nginx image from Docker Hub.

$ sudo docker pull nginx

Step 3: Use the below-given command to run the Nginx docker container:

$ docker run -it --rm -d -p 8080:80 --name web nginx


We have mapped container port 80 to port 8080 on the Docker host system. After running the above command, browse the http://localhost:8080 URL; the Nginx welcome page will be displayed, which shows that the Nginx Docker container is working properly.

Example:

Let's discuss an example. In the example defined below, we host a web page in our Nginx Docker container by creating a new custom HTML web page and then testing it using the Nginx image.
Create a new directory named 'site-content'. In this directory, add an HTML file named 'index.html' and include the following lines of code in the newly created index.html file.

<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Docker Nginx</title>
</head>
<body>
<h2>Hello demo to use Nginx container</h2>
</body>
</html>

Run the below-mentioned Docker command.

$ docker run -it --rm -d -p 8080:80 --name web -v ~/site-content:/usr/share/nginx/html nginx

For the Nginx server, we again mapped container port 80 to port 8080 on the Docker host.

The -v option mounts the ~/site-content directory on the host to '/usr/share/nginx/html' in the container, the directory from which the index.html file we created is served.

Now, if you browse the URL http://localhost:8080/index.html, the page we created above will be displayed in the browser window.
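The bind-mount approach serves files straight from the host; to bake the page into a portable image instead, a two-line Dockerfile based on the official nginx image is enough. A sketch that writes such a Dockerfile (the image tag my-nginx-site is an arbitrary example, and the build/run commands are shown as comments since they need a running Docker daemon):

```shell
# Write a minimal Dockerfile that copies the static site into the nginx image.
dir="$(mktemp -d)"                 # use your project directory in practice
cat > "$dir/Dockerfile" <<'EOF'
FROM nginx:latest
COPY site-content /usr/share/nginx/html
EOF
cat "$dir/Dockerfile"
# To build and run (requires a running Docker daemon):
#   docker build -t my-nginx-site .
#   docker run -d -p 8080:80 my-nginx-site
```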

Conclusion

In this article, we have demonstrated how to use the official NGINX Docker images. We described how to set up and use Nginx with Docker. With a few simple steps, you can create new Docker images from available images, which makes your containers easier to manage and control.

Nginx SSL Setup in Linux
https://linuxhint.com/nginx_ssl_setup/ (Mon, 30 Nov 2020)

SSL (Secure Sockets Layer) is a web protocol that secures the traffic between server and client by encrypting it. Server and clients can safely transmit traffic without the risk of the communication being intercepted by third parties. It also helps the client verify the identity of the website it is communicating with.

In this post, we will describe how to set up SSL for Nginx. We will demonstrate the procedure using a self-signed certificate. A self-signed certificate only encrypts the connection but does not validate the identity of your server. Therefore, it should only be used for testing environments or internal LAN services. For a production environment, it is better to use certificates signed by a CA (certificate authority).

Pre-requisites

For this post, you should have the following pre-requisites:

  • Nginx already installed on your machine
  • Server block configured for your domain
  • User with sudo privileges

The procedure explained here has been performed on Debian 10 (Buster) machine.

Step 1: Generating a Self-Signed Certificate

Our first step will be to generate a self-signed certificate. Issue the below command in the Terminal to generate a certificate and a private key (the -x509 flag makes OpenSSL emit a self-signed certificate directly rather than a certificate signing request):

$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/selfsigned-nginx.key -out /etc/ssl/certs/selfsigned-nginx.crt

You will be prompted to provide some information like your country name, state, locality, common name (your domain name or IP address), and email address.

In the above command, OpenSSL creates the following two files:

  • Certificate: selfsigned-nginx.crt in the /etc/ssl/certs/ directory
  • Key: selfsigned-nginx.key in the /etc/ssl/private directory
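The interactive prompts can be skipped entirely by passing the subject on the command line with -subj, which is handy for scripting. A sketch that generates a throwaway certificate into a temporary directory and then inspects it; the paths and the subject values (CN=test.org etc.) are examples only:

```shell
# Generate a self-signed certificate non-interactively and inspect its subject.
dir="$(mktemp -d)"                 # throwaway location; use /etc/ssl for real
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout "$dir/test.key" -out "$dir/test.crt" \
    -subj "/C=US/ST=Test/L=Test/O=Test/CN=test.org"
openssl x509 -in "$dir/test.crt" -noout -subject   # subject includes test.org
rm -rf "$dir"
```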

Now create the dhparam.pem file using the below command:

$ sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

Step 2: Configuring Nginx to Use SSL

In the previous step, we created the certificate and the key. Now, in this step, we will configure Nginx to use SSL. For this, we will create a configuration snippet and add information about our SSL certificate file and key locations.

Issue the below command in the Terminal to create a new configuration snippet file self-signed.conf in /etc/nginx/snippets.

$ sudo nano /etc/nginx/snippets/self-signed.conf

In the file, add the following lines:

ssl_certificate /etc/ssl/certs/selfsigned-nginx.crt;

ssl_certificate_key /etc/ssl/private/selfsigned-nginx.key;

The ssl_certificate is set to selfsigned-nginx.crt (certificate file) while the ssl_certificate_key is set to selfsigned-nginx.key (key file).

Save and close the self-signed.conf file.

Now we will create another snippet file ssl-params.conf and configure some basic SSL settings. Issue the below command in Terminal to edit the ssl-params.conf file:

$ sudo nano /etc/nginx/snippets/ssl-params.conf

Add the following content to the file:

ssl_protocols TLSv1.2;

ssl_prefer_server_ciphers on;

ssl_dhparam /etc/ssl/certs/dhparam.pem;

ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384;

ssl_ecdh_curve secp384r1;

ssl_session_timeout  10m;

ssl_session_cache shared:SSL:10m;

ssl_session_tickets off;

# ssl_stapling on;

# ssl_stapling_verify on;

resolver 8.8.8.8 8.8.4.4 valid=300s;

resolver_timeout 5s;

add_header X-Frame-Options DENY;

add_header X-Content-Type-Options nosniff;

add_header X-XSS-Protection "1; mode=block";

As we are not using a CA-signed certificate, SSL stapling is disabled here. If you are using a CA-signed certificate, uncomment the ssl_stapling entries.

Step 3: Updating the Nginx Server Block

Now we will open the Nginx server block configuration file and make some changes. This step assumes you have already set up a server block similar to this:

server {

        listen 80;

        listen [::]:80;


        root /var/www/test.org/html;

        index index.html index.htm index.nginx-debian.html;


        server_name test.org www.test.org;


        location / {

                try_files $uri $uri/ =404;

        }

}

To open the Nginx server block configuration file, use the below command:

$ sudo nano /etc/nginx/sites-available/test.org

Now modify the existing server block to make it look like this:

server {

    listen 443 ssl;

    listen [::]:443 ssl;

    include snippets/self-signed.conf;

    include snippets/ssl-params.conf;


    root /var/www/test.org/html;

    index index.html index.htm index.nginx-debian.html;


    server_name test.org www.test.org;

}

In the above configurations, we have also added the SSL snippets self-signed.conf and ssl-params.conf that we have configured earlier.

Next, add a second server block.

server {

    listen 80;

    listen [::]:80;


    server_name test.org www.test.org;


    return 302 https://$server_name$request_uri;

}

In the above configuration, return 302 temporarily redirects HTTP traffic to HTTPS.

Note: Make sure to replace test.org with your own domain name. Now save and close the file.

Step 4: Allow SSL Traffic through Firewall

If a firewall is enabled on your system, you will have to allow SSL traffic through it. The Nginx package registers three different application profiles with ufw. You can view them using the below command in the terminal:

$ sudo ufw app list

The output lists three profiles for Nginx traffic: Nginx Full, Nginx HTTP, and Nginx HTTPS.

You will need to allow the “Nginx Full” profile in the firewall. To do so, use the below command:

$ sudo ufw allow 'Nginx Full'

To verify if the profile has been allowed in the firewall, use the below command:

$ sudo ufw status

Step 5: Test NGINX configuration file

Now test the Nginx configuration file using the below command in Terminal:

$ sudo nginx -t

If the configuration is valid, the output reports that the syntax is OK and the configuration file test is successful.


Now create the symbolic link between sites-available and sites-enabled:

$ sudo ln -s /etc/nginx/sites-available/test.org /etc/nginx/sites-enabled/

Then restart the Nginx service to apply the configuration changes. Use the below command to do so:

$ sudo systemctl restart nginx

Step 6: Test SSL

Now, to test SSL, navigate to the following address in your browser:

https://your-domain-or-IP-address

As we have set up a self-signed certificate, the browser will warn that the connection is not secure. The following steps show the warning flow in the Mozilla Firefox browser.

Click the Advanced button.

Click Add Exception.

Then click Confirm Security Exception.

Now you will see your HTTPS site but with a warning sign (lock with a yellow warning sign) about the security of your website.

Also, check if the redirect works correctly by accessing your domain or IP address using http.

http://your-domain-or-IP-address

Now, if your site automatically redirects to HTTPS, the redirection is working correctly. To make the redirect permanent, edit the server block configuration file using the below command in the terminal:

$ sudo nano /etc/nginx/sites-available/test.org

Now change the return 302 to return 301 in the file and then save and close it.

That is how you can set up SSL for Nginx on a Debian 10 system. We used a self-signed certificate for demonstration; in a production environment, always use a CA-signed certificate.

How Do I Create a Reverse Proxy in Nginx? https://linuxhint.com/create_reverse_proxy_nginx/ Mon, 30 Nov 2020 15:44:46 +0000 https://linuxhint.com/?p=78735

A standard (forward) proxy works on behalf of its clients, providing filtering and content privacy for their requests. A reverse proxy, by contrast, works on behalf of servers: it intercepts incoming traffic and routes it to a separate backend server. This is useful for load distribution and improves performance across the available servers, while the proxy presents all the content it fetches from the backends as its own. Using the proxy method, you can also pass requests on to server applications over protocols other than HTTP.

There are many reasons why you might install a reverse proxy. One important reason is content privacy: the reverse proxy provides a single centralized point of contact for clients. It also gives you centralized logging and can report across several servers. In a common setup, Nginx quickly serves static content and passes dynamic requests on to an Apache server; this division of labor improves overall performance.

In this article, we will learn how to set up a reverse proxy in Nginx.

Prerequisites

You should have access to the root account or a user who can run sudo commands.

Creating a Nginx Reverse Proxy Server

For setting up the new Nginx proxy server, you need to follow the following steps on your system:

Step 1: Install Nginx

Open the terminal application, open the file /etc/apt/sources.list in your favorite text editor, and add the lines given below at the end of this file. In these lines, you need to replace 'CODENAME' with the codename of the Ubuntu release you are using on your system. For example, this system runs Ubuntu 20.04 (Focal Fossa), so we insert focal in place of 'CODENAME'.

deb http://nginx.org/packages/mainline/ubuntu/ CODENAME nginx

deb-src https://nginx.org/packages/mainline/ubuntu/ CODENAME nginx

Next, you have to import the Nginx package repository signing key and add it to the apt keyring:

$ sudo wget http://nginx.org/keys/nginx_signing.key

$ sudo apt-key add nginx_signing.key

Now, update apt manager packages and install the latest release of Nginx on your system from the official apt repository by running the following command:

$ sudo apt update

$ sudo apt install nginx

Now, start and enable the Nginx service, and check its status, by using the following commands:

$ sudo systemctl start nginx

$ sudo systemctl enable nginx

$ sudo systemctl status nginx

Step 2: Configurations for Nginx Reverse Proxy

Create a new configuration file /etc/nginx/conf.d/custom_proxy.conf and then paste the following lines of code into it:

server {

  listen 80;

  listen [::]:80;

  server_name myexample.com;


  location / {

      proxy_pass http://localhost:3000/;

  }

}

The proxy_pass directive specified inside the location block is what makes this configuration a reverse proxy. The line proxy_pass http://localhost:3000/ forwards all requests matching the location path to port 3000 on localhost, where your website is running.

Configuration files with a .conf extension placed in /etc/nginx/conf.d/ are included automatically by the main Nginx configuration, so no additional command is needed to activate the new file.

Step 3: Test Configurations

Now, test the above configurations by using the following command:

$ sudo nginx -t

After testing, if no error is reported, reload the new Nginx configuration.

$ sudo nginx -s reload

Configure Buffers

The above configuration is enough to create a basic reverse proxy server. However, for complex applications, you may need to enable some advanced options, such as disabling response buffering so that Nginx passes responses to the client synchronously as it receives them:

location / {

    proxy_pass http://localhost:3000/;

    proxy_buffering off;

}

Configure Request Headers

location / {

    proxy_pass http://localhost:3000/;

    proxy_set_header X-Real-IP $remote_addr;

}

In the above example, the X-Real-IP header is set to $remote_addr, which passes the client's IP address on to the proxied host.
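Beyond X-Real-IP, a few other headers are commonly forwarded so that the backend application sees the original request details. The snippet below is a sketch using standard Nginx variables; the backend address on port 3000 is carried over from the example above:

```nginx
location / {
    proxy_pass http://localhost:3000/;

    # Pass the original Host header requested by the client
    proxy_set_header Host $host;

    # Client IP address, plus the chain of proxies the request passed through
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # Whether the client connected over http or https
    proxy_set_header X-Forwarded-Proto $scheme;
}
```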

Conclusion

With the details above, you should now understand how to create an Nginx reverse proxy. This setup works well in many server environments. Try these configurations on your own system for a better understanding.

How to Enable and Disable Nginx Cache https://linuxhint.com/enable_disable_nginx_cache/ Sun, 29 Nov 2020 18:22:01 +0000 https://linuxhint.com/?p=78521

When you have enabled caching in Nginx Plus, it stores responses in a cache on disk and uses them to respond to clients without proxying the same request to the upstream server every time. Nginx Plus's caching offers further capabilities, the most useful of which include cache purging, delayed caching, and dynamic content caching.

In this article, we will learn more about caching, such as how to enable and disable the caching in an Nginx server on a Linux system.

How to Enable Caching?

In the top level of the http {} context, include the proxy_cache_path directive to enable caching. The first parameter, the local filesystem path for cached content, and the keys_zone parameter, which defines the name and size of the shared memory zone, are mandatory. The shared memory zone stores the metadata of cached items:

http {

    ...

    proxy_cache_path /data/nginx/cache keys_zone=one:10m;

}

You then have to include the proxy_cache directive in the context (protocol type, virtual server, or location) whose server responses you want to cache, and give it the zone name defined by the keys_zone parameter of the proxy_cache_path directive (which is one in this case):

http {

    ...

    proxy_cache_path /data/nginx/cache keys_zone=one:10m;

    server {

        proxy_cache one;

        location / {

            proxy_pass http://localhost:8000;

        }

    }

}

Note that the total amount of cached response data is not limited by the size defined in the keys_zone parameter. The cached responses themselves are saved in separate files on your filesystem, with a copy of the metadata held in the shared memory zone. To limit the total amount of cached response data, include the max_size parameter in the proxy_cache_path directive.

How to Limit or Disable Caching?

By default, responses remain in the cache indefinitely. They are removed only when the cache exceeds its defined maximum size, starting with the responses least recently requested. However, you can set how long cached responses are considered valid, or whether they are used at all, with directives in the server {}, http {}, or location {} contexts. To limit how long cached responses are considered valid, include the proxy_cache_valid directive.

Let's illustrate the concept of limiting cache validity with an example. In the example below, responses with a 200 or 302 status code are considered valid for 10 minutes, while 404 responses are valid for 1 minute.

proxy_cache_valid 200 302 10m;

proxy_cache_valid 404      1m;

You can also define the validity of cached responses for all status codes at once by using the any parameter, as in the line of code below:

proxy_cache_valid any 5m;
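Putting these pieces together, a sketch of a configuration that uses the one zone defined earlier with per-status validity periods might look like this (the cache path and upstream address are carried over from the examples above; max_size=1g is an illustrative limit):

```nginx
http {
    proxy_cache_path /data/nginx/cache keys_zone=one:10m max_size=1g;

    server {
        proxy_cache one;

        location / {
            proxy_pass http://localhost:8000;

            # 200/302 responses stay fresh for 10 minutes, 404s for 1 minute
            proxy_cache_valid 200 302 10m;
            proxy_cache_valid 404      1m;
        }
    }
}
```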

There are some conditions under which Nginx should not send cached responses to clients; for these, include the proxy_cache_bypass directive. Each parameter in the example below defines a condition and can consist of a number of variables. If at least one parameter is not empty and not equal to '0', Nginx does not look up the response in the cache and immediately forwards the request to the backend server:

proxy_cache_bypass $cookie_nocache $arg_nocache$arg_comment;

If you want Nginx not to cache a response at all, include the proxy_no_cache directive and define its parameters as follows:

proxy_no_cache $http_pragma $http_authorization;

Conclusion

Caching adds powerful features to an Nginx server. In this article, we explored how to enable and disable caching, including the relevant directives and parameters. To explore further, consult the online documentation. I hope this article on caching proves informative for you.

]]>
How do I view Nginx logs? https://linuxhint.com/view-nginx-logs-ubuntu/ Fri, 27 Nov 2020 06:33:20 +0000 https://linuxhint.com/?p=78348

Logs are very important for monitoring the activities of an application, as they provide useful debugging information and enable you to analyze all aspects of a web server. Like other software applications, Nginx records events such as website visits and encountered problems to log files. This recorded information can be used to take preemptive measures against serious discrepancies revealed by the log events.

In this article, we will elaborate on how to configure and view Nginx Logs in Ubuntu 20.04 system to monitor the application activities.

Nginx records events in two types of logs: the access log and the error log. If you have enabled these logs in the Nginx core configuration file, you can find both types under /var/log/nginx in all major Linux distributions.

Nginx Access log

All activities related to site visitors are recorded in the access log. In this type of log, you can find which files were recently accessed, how Nginx responded to a client request, client IP addresses, what browser a client is using, and more. Using the information in the access log, you can monitor traffic and track site usage over time. If you monitor the access log properly, you can easily spot unusual requests sent by a user and check for flaws in the deployed application.

Enable the Nginx Access log

You can enable the access log with the access_log directive, either in the server section or in the http section. Its general form is:

access_log log_file [log_format];

The first argument, log_file, is compulsory, whereas the second argument, log_format, is optional. If you do not mention a log format, logs are written in the default combined format.

The access log is defined by default in the main Nginx configuration file, so all virtual hosts' access logs are stored in the same file:

http {
      ...
      access_log  /var/log/nginx/access.log;
      ...
}

It is recommended to set apart the access logs of each virtual host by recording them into a separate file:

http {
      ...
      ...
      access_log  /var/log/nginx/access.log;
   
         server {
                  listen 80;
                  server_name example.com;
                  access_log  /var/log/nginx/example.access.log;
                  ...
                  ...
                }
}

Reload the new Nginx configuration. Now, you can view the access log for the example.com domain in the file /var/log/nginx/example.access.log by using the following command:

$ sudo tail -f /var/log/nginx/example.access.log
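Each line of the default combined format packs its fields in a fixed order, so standard text tools can summarize the log. The sketch below uses a hypothetical sample line and extracts the client IP (field 1) and the HTTP status code (field 9) with awk:

```shell
# A hypothetical access log line in the default combined format
line='203.0.113.5 - - [27/Nov/2020:06:33:20 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0"'

# Print the client IP (field 1) and HTTP status code (field 9)
echo "$line" | awk '{print $1, $9}'
# → 203.0.113.5 200
```

The same one-liner pointed at the real log file (e.g. awk '{print $1, $9}' /var/log/nginx/example.access.log) gives a quick overview of which clients are receiving which status codes.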

Customize format in the Access log

Let's work through an example of defining a custom access log format. By default, the access log is recorded in the combined log format. You can extend the predefined format with the gzip compression ratio of responses:

http {
            log_format custom '$remote_addr - $remote_user [$time_local] '
                           '"$request" $status $body_bytes_sent '
                           '"$http_referer" "$http_user_agent" "$gzip_ratio"';

            server {
                    gzip on;
                    ...
                    access_log /var/log/nginx/example.access.log custom;
                    ...
            }
}

Once you have made all the changes in the Nginx configuration, reload Nginx and then run the tail command; the gzip ratio is displayed at the end of each log event.

$ sudo tail -f /var/log/nginx/example.access.log

NGINX error log

If Nginx suddenly stops running or is not working properly, it records the events in the error log, so the error log is where to look for more details. It also records warnings, although a warning does not necessarily mean that a problem has occurred.

Enable error log

The error_log directive has the following syntax:

error_log log_file log_level;

In the above syntax, the first argument is the log file path, and the second argument sets the severity level of the log events.

The example below shows the error_log directive being overridden in the server context.

http {
       ...
       ...
       error_log  /var/log/nginx/error_log;
       server {
                listen 80;
                server_name example1.com;
                    error_log  /var/log/nginx/example1.error_log  warn;
                        ...
       }
       server {
                listen 80;
                server_name example2.com;
                    error_log  /var/log/nginx/example2.error_log  debug;
                        ...
   }
}

When you need to disable the error log, set the log file name to /dev/null:

error_log /dev/null;

Nginx Error Log Severity Levels

You can use the following severity levels in the error log:

  1. emerg: Emergency messages, used when your system is unstable.
  2. alert: Alert messages for serious problems.
  3. crit: Critical issues that need to be dealt with immediately.
  4. error: An error occurred while processing a request.
  5. warn: Warning messages.
  6. notice: Notices that you can usually ignore.
  7. info: Informational messages.
  8. debug: Debugging information that points to the error location.
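The level passed to error_log acts as a threshold: the named level and everything more severe is logged. For example, the following illustrative line records warn, error, crit, alert, and emerg messages but skips notice, info, and debug:

```nginx
# Log warnings and anything more severe; drop notice/info/debug noise
error_log /var/log/nginx/error.log warn;
```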

Conclusion

Nginx access and error logs are useful for recording server activity. We have learned how to enable and view both types of Nginx logs on a Linux system. That's all about the Nginx logs.

Nginx Redirect HTTP to HTTPS https://linuxhint.com/nginx-redirect-http-https/ Sun, 15 Nov 2020 08:38:15 +0000 https://linuxhint.com/?p=76829

Nginx, pronounced "Engine X", is a free, open-source, Linux-based, high-performance web and reverse proxy server that manages and handles the traffic of some of the largest websites on the internet. Nginx is also a powerful redirecting tool that can easily be configured to redirect less secure, unencrypted HTTP web traffic to an encrypted and secured HTTPS web server. If you are a system administrator or a developer, you likely use the Nginx server regularly.

In this article, we will work on how to redirect the web traffic from HTTP to a secure HTTPS in Nginx.

Over HTTP, requests and responses are returned in plaintext, whereas HTTPS uses SSL/TLS to encrypt the communication between the client and the server. HTTPS is therefore preferred over HTTP for many reasons, listed below:

  • All data between client and server is encrypted in both directions, so sensitive information cannot be read even if the traffic is intercepted.
  • When you are using HTTPS, Google Chrome and other browsers will consider your website domain as safe.
  • The HTTPS version improves your website's performance by enabling the HTTP/2 protocol.
  • If you serve your website domain via HTTPS, then the website will rank better on Google, as it favors all HTTPS secured websites.

It is preferable to redirect HTTP to HTTPS in Nginx in a separate server block for each version of the site. Avoid redirecting traffic with the 'if' directive, which may cause unusual server behavior.

Redirect all traffic from HTTP to HTTPS

Add the following changes into the Nginx configuration file in order to redirect all traffic from HTTP to HTTPS version:

server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

Below, we have elaborated each of the above-mentioned terms:

listen 80 default_server – makes this server block the default catch-all for HTTP traffic on port 80.

server_name _ – the underscore matches any hostname.

return 301 https://$host$request_uri – tells browsers and search engines that the redirect is permanent. The $host variable holds the requested domain name.

Once you change the configuration settings, you need to reload the Nginx services on your system. So, reload your Nginx services by using the following command:

$ sudo systemctl reload nginx

Redirect HTTP to HTTPS version for Specified domain in Nginx

After installing an SSL certificate on your domain, you will have two server blocks for that domain: one for the HTTP version listening on port 80, and a second for the HTTPS version on port 443. To redirect a single website domain from HTTP to HTTPS, open the Nginx configuration file, which is usually located in the /etc/nginx/sites-available directory. If you cannot find it there, look in /etc/nginx/nginx.conf, /usr/local/nginx/conf, or /usr/local/etc/nginx, and then make the following changes in the file:

server {
    listen 80;
    server_name domain-name.com www.domain-name.com;
    return 301 https://domain-name.com$request_uri;
}

Let's understand the above code line by line.

listen 80 – the server listens on port 80 for all incoming connections to the specified domain.

server_name domain-name.com www.domain-name.com – specifies the domain names; replace them with the website domain name you want to redirect.

return 301 https://domain-name.com$request_uri – permanently moves the traffic to the HTTPS version of the site. The $request_uri variable holds the full original request URI, including any arguments.

Using the following method, you can also redirect the HTTPS www version of the site to the non-www version. It is recommended to create the redirect in separate server blocks for the www and non-www versions.

Let's explain with an example. If you want to redirect www HTTPS requests to the non-www version, use the following configuration:

server {
    listen 80;
    server_name domain-name.com www.domain-name.com;
    return 301 https://domain-name.com$request_uri;
}
server {
    listen 443 ssl http2;
    server_name www.domain-name.com;
    # . . . other code
    return 301 https://domain-name.com$request_uri;
}
server {
    listen 443 ssl http2;
    server_name domain-name.com;
 
    # . . . other code
}

Replace the domain name with your domain, like www.linuxhint.com.

Conclusion

We have discussed how to redirect traffic from HTTP to HTTPS on the Nginx server. By changing the Nginx configuration file, you can easily redirect traffic to HTTPS, either for a specified domain or for all traffic. The method described in this article can help you make your website more secure without disrupting the user experience.

How can I make Nginx Faster? https://linuxhint.com/tips_to_make_nginx_faster/ Sun, 15 Nov 2020 03:32:44 +0000 https://linuxhint.com/?p=76664

Nginx is considered one of the most commonly used web servers today. The reasons for preferring this web server over others available in the market are as follows: 1) It does not create a separate worker thread for each incoming request; rather, a single worker process is capable of catering to multiple requests at the same time. 2) It serves static content immediately upon request because it keeps that content in its cache.

However, there are still other optimizations available out there with which we can make the performance of this web server even better. Therefore, in today's article, we would like to share some of the most effective tips for making your Nginx web server faster.

Ways of Making Nginx Faster:

Although the Nginx web server already performs better than many other web servers, with a little more effort it can be made even more powerful and faster. The ways of speeding up your Nginx web server are discussed below:

Optimize the Performance of Nginx Web Server with a Hardware Upgrade:

At times, when your hardware is problematic, i.e., it does not have sufficient resources to run your web server smoothly, then you might face performance-related issues, and you may feel the need to optimize your web server. The best thing that you can do in this regard is to upgrade your hardware on which your Nginx web server is supposed to run. You can either add in more components, such as extra RAM and extra hard disk, or you can even change your computer system entirely. This will greatly affect the performance of your Nginx web server.

Secure your Nginx Web Server:

Sometimes, your web server might slow down because of security attacks on the applications running on it. These attacks can largely be prevented by securing your Nginx web server. You can add security headers such as HTTP Strict-Transport-Security (HSTS) to your Nginx web server's configuration file to prevent protocol downgrade attacks, along with headers like X-Frame-Options and X-XSS-Protection to mitigate clickjacking, XSS (cross-site scripting), and other code injection attacks. You can also make use of the limit_req directive within the Nginx configuration file to restrict the number of requests allowed at a time. This helps protect the Nginx web server from Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks.
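As a concrete illustration of the paragraph above, the HSTS header and a request rate limit can be added with a few lines of configuration. The values below (a one-year max-age, 10 requests per second per client IP, a burst of 20) are illustrative defaults, not recommendations:

```nginx
http {
    # Tell browsers to use HTTPS only for this host for the next year
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # Define a rate-limit zone keyed on the client IP: 10 requests/second
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        location / {
            # Apply the limit, allowing short bursts of up to 20 requests
            limit_req zone=perip burst=20;
        }
    }
}
```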

Use a Load Balancer:

As its name implies, the job of a load balancer is to distribute the load of the actual web servers. Whenever you want to increase the performance and capacity of any machine in general, and a web server in particular, you can adopt either of the following two approaches: 1) add more components to your existing server to make it more powerful, or 2) add a load balancer so that the workload can be distributed across two or more servers. The second approach is often more practical, as it can drastically improve the performance of your Nginx web server.

Keep your Data Compressed:

The data stored on the web server, as well as the data your Nginx web server sends and receives, should be kept in compressed form. This not only optimizes the performance of your web server but also conserves bandwidth, because the compressed files transmitted through the network are smaller.

Monitor your Nginx Web Server:

Server monitoring plays a very vital role in ensuring the proper working of your server. It enables you to fix the issues right on time and also prevents most of the things that have the potential to go wrong as you can take all the precautionary measures well in time. This significantly improves the speed of your Nginx web server.

Disable the Access Logs if you do not need them:

Nginx saves information about every event that takes place on the web server in the form of access logs. These logs are a good way of tracing issues within your web server; however, they also occupy a considerable amount of disk space and I/O, which can slow your web server down. If at any point you feel you no longer need these access logs, it is highly recommended to disable them. This will free up a lot of your web server's resources, making it faster.
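In configuration terms, the advice above boils down to a couple of lines. The sketch below disables access logging entirely and keeps only critical error messages; the paths shown are the conventional defaults:

```nginx
http {
    # Stop writing access log entries entirely
    access_log off;

    # Record only critical problems (crit, alert, emerg) in the error log
    error_log /var/log/nginx/error.log crit;
}
```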

Keep Your Software Up to Date:

Whenever you use any software, it is always advised to keep it up to date. Similarly, you also need to keep the applications running on your Nginx web server up to date. Doing this will fix all those issues that can possibly impact the working of your Nginx web server and result in poor performance.

Conclusion:

By following all the tips provided to you in this article, you can easily get the best performance out of your Nginx web server, and hence you can fulfill all your desired goals very efficiently.

How Do I Fix 502 Bad Gateway Nginx? https://linuxhint.com/fix_502_bad_gateway_nginx/ Thu, 12 Nov 2020 09:00:09 +0000 https://linuxhint.com/?p=76494

Nginx was launched in 2004 as an open-source web server. Since its release, it has been very commonly used for hosting websites. Apart from this, it is also used as a load balancer, email proxy, reverse proxy, and HTTP cache. Like every other web server, Nginx is prone to certain errors, the most common of which is the 502 Bad Gateway error. This is a highly generic error that arises when you try to access a web server but fail to reach it; in that case, your browser may render the 502 Bad Gateway error. Since no other information appears along with this error, it leaves the user clueless about what exactly went wrong and how to fix it.

Therefore, in today’s article, we will try to look for all the potential causes of the 502 Bad Gateway error in Nginx, as well as the ways on how we can possibly fix it.

Causes of 502 Bad Gateway Error in Nginx

There could be multiple reasons for a 502 Bad Gateway error in Nginx, and the most common ones are listed below:

Unreachable Domain
Whenever you type a domain name into your browser's address bar and press Enter, the first task performed is contacting the Domain Name System (DNS). The DNS server maps the specified domain name onto its reserved IP address and contacts the respective server, which responds by displaying the requested web page in your browser. At times, however, the request fails to reach the specified domain, resulting in a 502 Bad Gateway error in Nginx. This may happen because of recent changes to your DNS, which can take a considerable amount of time to propagate before everything starts working correctly.

Overly Activated Firewalls
At times, your firewall settings are so strict that they block even legitimate users, preventing them from reaching your site. This, in turn, may cause users to see a 502 Bad Gateway error whenever they try to access your website.

Hosting Server Goes Down
Servers have a finite capacity and cannot entertain unlimited user requests; once that capacity is reached, subsequent incoming users might experience a 502 Bad Gateway error because your server is down. Another reason could be that you have intentionally brought down your server for maintenance.

Fixing the 502 Bad Gateway Error in Nginx

Depending on the causes of the 502 Bad Gateway error in Nginx, you can try to resolve it by using any of the following solutions:

Refresh your Web Page
At times, you can see a 502 Bad Gateway error only because of some temporary connectivity issues, which can be resolved simply by refreshing your web page and checking if you can access the web page or not. If you still fail to reach the desired web page, then you might also try to clear your browser cache because sometimes, a 502 Bad Gateway error response is saved in your browser cache. Due to this, your browser renders this error again and again, so clearing the cache might resolve this issue.

Perform a Ping Test
If you are still not able to access your web server even after refreshing the web page and clearing the browser cache, then your web server might have some serious connectivity issues. In that case, you can try to perform a Ping Test where you send the Ping request to your server and check if it is reachable or not. If your server is reachable, then you will be able to access it, if not, then you will have to look for the other solutions that are listed below.

Look for Potential Changes in your DNS
You might have changed your hosting service provider, or changed the IP address at which your web server can be reached. Such changes are always reflected in the DNS, but they take some time to propagate. In that case, you need to wait until the changes have taken effect across the DNS, after which you will no longer see the 502 Bad Gateway error in Nginx.

Monitor your Server Logs
Server logs contain detailed information about the status of your server and all the activities it performs. If you monitor the server logs regularly, they can help you figure out exactly what went wrong, enabling you to fix the 502 Bad Gateway error in Nginx; knowing the exact cause of an error is, in fact, the first step towards resolving it.

Recheck your Firewall Configurations
Apply this fix if you have found your firewall configuration to be so strict that it blocks even legitimate users from accessing your website. In that case, relaxing or resetting your firewall configuration can easily fix the 502 Bad Gateway error in Nginx.

Debug your Website’s Code
At times, the problem does not trace back to connectivity issues; rather, it is your website's code that is faulty and causes the 502 Bad Gateway error in Nginx. Manually figuring out such errors is nearly impossible, which is why it is highly recommended to debug your website's code in a sandboxed environment. Doing so will not only pinpoint the exact issue so that you can fix it immediately, but will also protect your production system from the effects of running faulty code.

Try Contacting your Hosting Service Provider
Sometimes, when you cannot host your own web server, you rent hosting services from a hosting provider. In that case, the problem causing the 502 Bad Gateway error in Nginx possibly does not reside on your end; rather, there is some issue with the hosting service you are using. The only solution is to contact your hosting service provider, who will not only take responsibility for figuring out the issue but may also suggest ways to prevent the error from recurring in the future.

Conclusion

In this article, we provided a brief introduction to Nginx and the most common type of error this web server faces, namely the 502 Bad Gateway error. We then stated the possible causes behind this error and, finally, shared the different ways you can resolve it in Nginx.

]]>
Nginx vs. Apache Comparison https://linuxhint.com/nginx_vs_apache/ Sun, 08 Nov 2020 17:01:25 +0000 https://linuxhint.com/?p=76165

When it comes to deploying a website, the first thing that comes to mind is choosing the right web server since, after you deploy your website, the web server will be responsible for handling all requests and serving users with what they need.

Nginx and Apache are the two leading web servers in the market and handle more than half of the Internet's traffic these days. Apache was launched back in 1995, whereas Nginx is relatively newer, having been launched in 2004.

The market share of both these web servers is more or less the same, which leaves users confused in choosing which web server they need for their particular website. Therefore, today we will try to draw a comparison between Nginx and Apache by discussing multiple parameters in which these web servers can be compared. After drawing that comparison, we will give you our take on which web server is better in certain situations. So let us try to find it out together.

Comparison between Nginx and Apache

There are certain important parameters against which Nginx and Apache can be compared. These parameters have been discussed one by one below:

Architecture:

While drawing a comparison between any two entities, the most crucial parameter to consider is the architecture and working of both. In the case of Nginx and Apache, there is a core difference between the architectures on which these web servers operate, which means the way Nginx and Apache respond to requests differs significantly. We will try to understand both architectures by giving an example of how these web servers work.

In the case of Apache, whenever this web server receives a connection request, it creates a new thread to handle that request. This means that if there are a thousand connection requests at any given instance, Apache will have to create a thousand different threads to serve them, which proves to be a huge burden on the web server. On the other hand, Nginx handles requests asynchronously: a single process is capable of handling thousands of requests at a time, so it does not have to create a different thread for each incoming connection request.
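As a concrete illustration, Nginx's event model is tuned through the worker settings in nginx.conf. A minimal sketch (the values shown are common defaults, not a recommendation):

```nginx
# One worker process per CPU core; each worker serves many clients
# concurrently through an event loop instead of one thread per request.
worker_processes auto;

events {
    worker_connections 1024;   # max simultaneous connections per worker
}
```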

Performance:

The performance of a web server is mostly judged by two parameters, i.e., its capability of handling static as well as dynamic content. In the case of the static content, Nginx is considered way better than Apache because instead of going for the traditional file-based approach, it caches the static content, which makes it readily available whenever it is requested. On the other hand, Apache still works on the conventional file-based approach for handling the static content.

As far as dynamic content is concerned, Apache processes it within the same server, whereas Nginx cannot process dynamic content itself and instead passes it to an external processor, relaying the rendered output back to the client. However, despite this difference in the way dynamic content is handled, the performance of both web servers is more or less the same in this regard.
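For example, a typical way Nginx hands off dynamic content is the FastCGI protocol. A hedged sketch, assuming a PHP-FPM process is listening on a Unix socket (the socket path varies by distribution and PHP version):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # Nginx does not execute PHP itself; it forwards the request to the
    # external PHP-FPM process and relays the rendered response.
    fastcgi_pass unix:/run/php/php-fpm.sock;
}
```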

Supported Operating Systems:

Apache supports more operating systems than Nginx, as it provides support for all UNIX-based systems as well as the Windows operating system. Nginx does support most UNIX-based systems, but its support for the Windows operating system is very limited.

Customizability:

Apache web servers can be customized by writing modules of your choice, while Nginx web servers lack this capability, which makes Apache more flexible in this regard.

Security:

Although Apache web servers provide great security against DoS and DDoS attacks, Nginx, because of its relatively smaller code base, is considered more secure than Apache.

Modules:

Both Apache and Nginx provide official modules that you can download with these web servers to make them function the way you like, but as mentioned before, the Nginx web servers do not allow you to write customizable modules. Moreover, the Apache web server modules can be loaded dynamically, while the Nginx web server modules need to be selected and compiled with the software core.

Support and Documentation:

Support and documentation for both web servers are more or less the same. However, a few years back, it was considered difficult to find detailed documentation for Nginx, as it was relatively newer in the market. Now, however, its documentation is also very well maintained by the company behind it.

A Critical Analysis of Nginx and Apache

Based on the parameters that we discussed above, we can deduce our opinion on choosing between Apache and Nginx. As far as the architecture of both web servers is concerned, Nginx clearly has an edge over Apache since the way it handles requests is a lot more efficient than Apache. In the case of static content, Nginx takes the lead again. As for the dynamic content, although both web servers handle it differently, they still give almost the same performance.

For OS support, Apache is ahead of Nginx since it is a very well-established platform that has been on the market considerably longer than Nginx. Also, Apache web servers are far more flexible than Nginx because of the custom modules they allow. Moreover, in terms of modules, Apache is better than Nginx because of its dynamic loading feature. The security of Nginx is ahead of Apache because of its smaller codebase, but the documentation and support for both web servers are almost the same.

Conclusion:

In this article, we gave you a brief overview of the Apache and Nginx web servers. We tried to draw a comparison between both web servers by discussing several factors that affect the overall performance and throughput of these web servers. Based on these factors, we tried to provide you with a critical analysis on which web server is better in which regard. Having said that, we would like to reiterate that choosing a web server highly depends on the use case and the scenario in which you are going to employ that web server.

This means that we cannot regard any particular web server as best or worst outright; rather, it is the purpose for which it is used that makes it so. Therefore, before choosing between Apache and Nginx, you need to carefully analyze the requirements you want your web server to serve. Only then will you be able to make the right choice of web server.

]]>
How to Encrypt Nginx server with Let’s Encrypt on Ubuntu 20.04 https://linuxhint.com/lets_encrypt_encrypting_nginx_server_ubuntu/ Mon, 24 Aug 2020 06:30:10 +0000 https://linuxhint.com/?p=66228 A certificate authority known as Let’s Encrypt demonstrates an easy method to get and install certificates for encrypting HTTPS on web servers. A software client called Certbot is used in automating the required steps for this process. The installation of certificates on Nginx and Apache is fully automatic. I will show you how to secure your Nginx server with a free SSL certificate on Ubuntu 20.04.

We will be using separate Nginx server configuration files, as this helps avoid common mistakes and keeps the default configuration files available as a fallback option.

Step 1:

As always, first, update your APT.

$ sudo apt update

Step 2:

Now, upgrade your APT.

$ sudo apt upgrade

Step 3:

Now, download and install Certbot, a software tool that will help you get an SSL certificate from Let's Encrypt. Execute the following terminal command to install Certbot via APT.

$ sudo apt install certbot python3-certbot-nginx

This will install Certbot, but you will still need to adjust the Nginx configuration file for the SSL certificate installation.

Step 4:

You should set up a server block before moving to the next step; this is a necessary step if you are hosting multiple sites. We will create a new directory under the /var/www path and leave the default directory untouched. Execute the following command to create the new directory.

$ sudo mkdir -p /var/www/example.com/html

Step 5:

Now provide ownership permissions to this directory via the following terminal command.

$ sudo chown -R $USER:$USER /var/www/example.com/html

Step 6:

Now ensure that the permissions are granted by executing the following terminal command.

$ sudo chmod -R 755 /var/www/example.com

Step 7:

Now create an index.html file using your favorite text editor; I am using the gedit text editor.

$ sudo gedit /var/www/example.com/html/index.html

Add the following text inside this HTML file.

<html>
    <head>
        <title>Welcome to example.com!</title>
    </head>
    <body>
        <h1>Success!  The example.com server block is working!</h1>
    </body>
</html>

Save and close the file.
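Steps 4 through 7 above can be sketched as one small script. This is a hedged example, not the article's own code: BASE defaults to a scratch directory (/tmp/www-demo) so it can be run without sudo; for the real setup you would use BASE=/var/www, run the commands with sudo, and add the chown from Step 5.

```shell
#!/bin/sh
# Create the web root for a server block and drop a test page into it.
# BASE is a scratch location by default; the article uses /var/www (with sudo).
BASE="${BASE:-/tmp/www-demo}"
DOMAIN="example.com"

mkdir -p "$BASE/$DOMAIN/html"          # Step 4: create the directory tree
chmod -R 755 "$BASE/$DOMAIN"           # Step 6: make it world-readable

# Step 7: write the test page (quoted heredoc, so nothing is expanded)
cat > "$BASE/$DOMAIN/html/index.html" <<'EOF'
<html>
    <head>
        <title>Welcome to example.com!</title>
    </head>
    <body>
        <h1>Success!  The example.com server block is working!</h1>
    </body>
</html>
EOF

echo "Wrote $BASE/$DOMAIN/html/index.html"
```

Running it once and then pointing a server block's root at the created html directory reproduces the article's layout.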

Step 8:

Now create a new configuration file in the sites-available directory using your favorite text editor by executing the following command.

$ sudo gedit /etc/nginx/sites-available/example.com

Now add the following text in this configuration file for the new directory and domain name.

server {
        listen 80;
        listen [::]:80;

        root /var/www/example.com/html;
        index index.html index.htm index.nginx-debian.html;

        server_name example.com www.example.com;

        location / {
                try_files $uri $uri/ =404;
        }
}

Save and close this file for the changes to take effect.

Step 9:

Now, enable the new directory for Nginx startup via the following terminal command.

$ sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/

Step 10:

To avoid any server name hash bucket memory problems, provide a single value in the following configuration file.

$ sudo gedit /etc/nginx/nginx.conf

Now remove the # sign from the server_names_hash_bucket_size option to uncomment it. Save and close the file.
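After uncommenting, the relevant line inside the http block of /etc/nginx/nginx.conf should look like this (64 is the value Ubuntu ships commented out; increase it if you use very long domain names):

```nginx
http {
    # ...
    server_names_hash_bucket_size 64;
    # ...
}
```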

Step 11:

Now type the following two commands to check for syntax errors and restart the Nginx server.

$ sudo nginx -t

$ sudo systemctl restart nginx

Step 12:

Now, you need to verify and confirm the Nginx configuration files. Certbot needs to find the correct server block in the Nginx configuration, so it looks for a server_name directive that matches the requested domain. To verify the configuration files, type the following terminal command.

$ sudo nginx -t

Step 13:

Now, update your UFW firewall rules to allow Nginx full permissions. If you have any previous rules relating to the HTTP server, delete them using the UFW deny option before running the following command.

$ sudo ufw allow 'Nginx Full'

Step 14:

Now we arrive at the point where we have to install an SSL certificate using certbot software. Execute the following terminal command.

$ sudo certbot --nginx -d example.com -d www.example.com

If you are using Certbot for the first time, you will be prompted for an email address and asked to accept the terms and conditions; do so, and you will be able to move to the next step.

Step 15:

Now you will be asked to configure your HTTPS settings; choose the necessary options and hit the Enter key to continue. Certbot will install all the required certificates and update the Nginx files; your server will reload with a message telling you that the process was successful.

Step 16:

Now that you have installed the certificates, you should also make sure that they are renewed automatically after a specific time. Execute the following two terminal commands to verify that automatic renewal is working.

$ sudo systemctl status certbot.timer

$ sudo certbot renew --dry-run

Conclusion:

So far, we have covered how to create a separate server block in Nginx, how to install certificates using the Certbot software tool from the Let's Encrypt certificate authority, and how to set up a renewal process for these certificates.

]]>
How to Install NGINX on Ubuntu 20.04 https://linuxhint.com/install_nginx_ubuntu/ Sat, 13 Jun 2020 18:58:22 +0000 https://linuxhint.com/?p=61293 NGINX is a popular, high-performance tool in HTTP. This tool is a reverse proxy server responsible for managing the traffic load of a large number of websites on the Internet. NGINX is a fast, open-source, freely available software tool used in traffic load balancing. NGINX provides a complete web server, a content management cache, and a reverse proxy feature for HTTP and non-HTTP servers. This article will show you how to install, set up, and uninstall NGINX on Ubuntu 20.04.

Installing NGINX

First, for NGINX to work, you should stop any Apache service running on port 80 or port 443.

Step 1: Update Your APT

As always, first, update and upgrade your APT.

$ sudo apt update

$ sudo apt upgrade

Step 2: Download and Install NGINX

The NGINX software tool is present in the Ubuntu official software repository. Simply type the following command in the terminal to download and install NGINX.

$ sudo apt install nginx

Step 3: Verify Installation

When the installation completes, the Nginx service will start automatically. To verify this installation, execute the following terminal command.

$ sudo systemctl status nginx

Step 4: Update Firewall Settings

Update the firewall settings through the UFW command to allow incoming traffic to your NGINX server from various HTTP and non-HTTP web servers on port 443, 80, or both of these ports.

$ sudo ufw allow 'Nginx Full'

Step 5: Test Installation in Browser

Test your NGINX installation by opening a new tab in a browser on your Ubuntu machine and typing the following URL into the address bar, replacing YOUR_IP with your own machine's IP address.

http://YOUR_IP

Figure: NGINX testing server opened in a web browser tab.

Step 6: Test Installation in Command-Line Interface

You can also test the installation of NGINX via the command-line interface by executing the following terminal command.

$ curl -i 10.0.2.15

Step 7: Configure NGINX Server

Now, enable the NGINX service so that it starts again automatically after system reboots.

$ sudo systemctl enable nginx

You can also use the following additional commands to check the status of the NGINX server, in addition to restarting it, reloading it, starting it, stopping it, and disabling it from starting every time the system boots up.

$ sudo systemctl status nginx

$ sudo systemctl restart nginx

$ sudo systemctl reload nginx

$ sudo systemctl start nginx

$ sudo systemctl stop nginx

$ sudo systemctl disable nginx

Uninstalling NGINX Server

You can remove NGINX from Ubuntu via the following terminal commands.

$ sudo apt-get purge nginx

$ sudo apt-get autoremove

Conclusion

We have covered how to install the NGINX server on Ubuntu 20.04 systems, how to set up and configure NGINX servers, and how to uninstall the NGINX tool from Ubuntu 20.04. Hope this helps.

]]>
How to use CORS with Nginx https://linuxhint.com/cors_nginx/ Mon, 11 May 2020 10:08:23 +0000 https://linuxhint.com/?p=59889

What is CORS

CORS, also known as cross-origin resource sharing, is a mechanism used in modern web browsers that controls access to resources hosted on a web server. CORS uses additional headers, such as Origin, Access-Control-Allow-Origin, and many more, to determine whether the requested resource has permission to be sent to the browser. Its primary purpose is to prevent a web application running in a browser from accessing resources hosted in a different origin without permission: the application cannot download resources such as images, scripts, or CSS files unless they are hosted in the same origin (usually the same domain) as the application, or the server is explicitly configured to allow such access. This behavior helps protect users' data from unauthorized parties; a hacker sitting in the middle of a connection could otherwise secretly modify a web page to disrupt the user's business or gain access to valuable data. However, CORS has advantages for developers too: it allows them to load resources from a different origin for cost effectiveness, or simply convenience. In that case, they have to modify their web server to allow such requests. This article demonstrates how to do that on an Nginx web server with ease.

What Triggers a CORS Request

Not all requests trigger a CORS check, as resources are usually hosted in the same origin as the web application; CORS comes into play only when the origins differ. CORS has two request types: simple requests and pre-flighted requests.

A simple request works like a regular request: the web browser sends a request to the server to download a particular resource when the user initiates it; the web server then checks the origin of the request, compares it against the rules configured in the web server, and, if it matches, the resource is supplied. This request type uses the Origin and Access-Control-Allow-Origin headers to determine whether the resource should be supplied. A simple request is only triggered if the request method is GET, HEAD, or POST, and only headers such as Accept, Accept-Language, Content-Language, Content-Type, DPR, Downlink, Save-Data, Viewport-Width, and Width are used. Even then, not all content types trigger a simple request; only form encoding types do.

The pre-flighted request type is rather different, as there is no direct access to the resource in the first round. When the aforesaid conditions are altered somehow, either by using a different request method or a non-simple content type, a pre-flighted request is triggered. In pre-flighted requests, the web browser first makes sure it is allowed to access the resource by communicating with the web server; when the server replies with an okay (HTTP 200) response, the browser sends another request to actually download the resource. It uses the HTTP OPTIONS request method to initiate the first request, then methods like GET or POST to download the resource.
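To illustrate (a hypothetical exchange; example.com and myOwnServer.com are placeholder hosts), the pre-flight round trip looks roughly like this:

```http
# 1. The browser's pre-flight probe
OPTIONS /image.png HTTP/1.1
Host: myOwnServer.com
Origin: https://example.com
Access-Control-Request-Method: GET

# 2. The server's approval
HTTP/1.1 204 No Content
Access-Control-Allow-Origin: https://example.com
Access-Control-Allow-Methods: GET, POST, OPTIONS

# 3. Only then does the browser issue the real request
GET /image.png HTTP/1.1
Host: myOwnServer.com
Origin: https://example.com
```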

How to Configure Nginx to Support CORS Requests

This section demonstrates how to configure an Nginx web server to allow cross-origin resource sharing. This can only be done if the developer has access to the web server, as it involves modifying the Nginx configuration file.

Use the following simple code snippet to allow CORS requests. It has to be copied into the default site file of the Nginx service on Ubuntu or any other platform.

location / {

    if ($request_method = 'OPTIONS') {
        add_header 'Access-Control-Allow-Origin' 'https://localhost';
        add_header 'Access-Control-Allow-Methods' 'POST, OPTIONS';
        add_header 'Access-Control-Max-Age' 1728000;
        add_header 'Content-Type' 'text/plain; charset=utf-8';
        return 204;
    }

    if ($request_method = 'POST') {
        add_header 'Access-Control-Allow-Origin' 'https://localhost';
        add_header 'Access-Control-Allow-Methods' 'POST';
    }

}

The basic code snippet goes as above. It uses the $request_method variable and the add_header directive to identify the request type and to set response headers for the browser to read, respectively. The Access-Control-Allow-Origin header defines which origin may access the resource; for instance, if a web application hosted on GitHub wants to access an image hosted on myOwnServer.com, then the GitHub URL should be used as the value of the Access-Control-Allow-Origin header on myOwnServer.com. Then, whenever the GitHub-hosted web application sends requests to myOwnServer.com to download the image file, those requests are granted permission. The Access-Control-Allow-Methods header defines which request methods the requesting web application may use, while the remaining headers set the maximum age for caching the pre-flight response and the supported content type.

As described above, once the OPTIONS request is completed, the browser sends another request to download the resource if the first request was successful. The headers of that first response are set in the first $request_method if block, while the second if block sets the headers for the subsequent POST request.

Other than the aforesaid directives, there are some other important headers in Nginx that can be used in CORS requests. One of the most important is Access-Control-Allow-Headers: it sets the response header with the allowed header names for the browser to verify. A web application can have its own headers for various purposes, and if such headers are present in the subsequent requests after the initial OPTIONS request, then all of them should be allowed by the web server before the requested resource is shared.
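For example, such a directive could be added alongside the other add_header lines in the OPTIONS block. This is a sketch: Authorization is a standard header, while X-My-App-Header stands in for whatever custom headers your application actually sends:

```nginx
add_header 'Access-Control-Allow-Headers' 'Content-Type, Authorization, X-My-App-Header';
```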

It's important that this code snippet be placed in the right location block in the Nginx default file, because Nginx executes different location blocks depending on the matched URL. If the matched location block doesn't contain this code snippet, it isn't executed at all, so to be on the safe side, use it in all the relevant location blocks. Some of the important location blocks are the images, PHP (~ \.php$), and CSS blocks.

Once the aforesaid code snippet is added, save the Nginx file and reload the Nginx service for the changes to take effect.

Conclusion

CORS, known as cross-origin resource sharing, is a technique to control access to resources. These resources can be any file, from an image to a JavaScript file. The primary purpose of CORS is to tighten the security of web applications to prevent man-in-the-middle attacks, but it can have benefits too; in that case, CORS has to be turned on explicitly, as cross-origin access is not allowed by default. The basic CORS request type is the simple request, which uses only the Origin and Access-Control-Allow-Origin headers; with their help, Nginx can grant the web browser permission to access the requested resource depending on the origin. Either way, CORS is quite useful and should be used carefully.

]]>
How to Install Nginx and Configure on CentOS 8 https://linuxhint.com/install_nginx_centos8/ Fri, 03 Apr 2020 19:27:34 +0000 https://linuxhint.com/?p=57569 Nginx is a fast and lightweight web server. The configuration files of Nginx are really simple and easy to work with. It is a great alternative to the Apache web server.  In this article, I am going to show you how to install and configure Nginx web server on CentOS 8. So, let’s get started.

Installing Nginx:

Nginx is available in the official package repository of CentOS 8. So, it’s very easy to install.

First, update the DNF package repository cache as follows:

$ sudo dnf makecache

Now, install Nginx with the following command:

$ sudo dnf install nginx

To confirm the installation, press Y and then press <Enter>.

Nginx should be installed.

Managing the nginx Service:

By default, nginx service should be inactive (not running) and disabled (won’t automatically start on boot).

$ sudo systemctl status nginx

You can start the nginx service as follows:

$ sudo systemctl start nginx

nginx service should be running.

$ sudo systemctl status nginx

Now, add nginx service to the system startup as follows:

$ sudo systemctl enable nginx

Configuring the Firewall:

You must configure the firewall to allow access to the HTTP port 80 and HTTPS port 443 in order to access the Nginx web server from other computers on the network.

You can allow access to the HTTP and HTTPS port with the following command:

$ sudo firewall-cmd --add-service={http,https} --permanent

Now, for the changes to take effect, run the following command:

$ sudo firewall-cmd --reload

Testing the Web Server:

You must know the IP address or domain name of the Nginx web server in order to access it.

You can find the IP address of your Nginx web server with the following command:

$ ip a

In my case, the IP address is 192.168.20.175. It will be different for you. So, make sure to replace it with yours from now on.

Now, visit http://192.168.20.175 from your web browser. You should see the following page. It means Nginx web server is working.

Configuration Files of nginx:

Nginx web server configuration files are in the /etc/nginx/ directory.

$ tree /etc/nginx

/etc/nginx/nginx.conf is the main Nginx configuration file.

The default web root directory of Nginx web server is /usr/share/nginx/html/. So, this is where you should keep your website files.

Setting up a Basic Web Server:

In this section, I am going to show you how to set up a basic Nginx web server.

First, take a backup of original Nginx configuration file with the following command:

$ sudo mv -v /etc/nginx/nginx.conf /etc/nginx/nginx.conf.original

Now, create a new Nginx configuration file as follows:

$ sudo nano /etc/nginx/nginx.conf

Now, type in the following lines in the /etc/nginx/nginx.conf file and save the file.

user nginx nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server {
        listen          80;
        server_name     example.com www.example.com;
        root            /usr/share/nginx/html;
        index           index.html;
        access_log      /var/log/nginx/access.log;
    }
}

Here, user option is used to set the Nginx run user and group to nginx respectively.

The error_log option is used to set the error log file path to /var/log/nginx/error.log. This is where errors related to the Nginx server will be stored.

The main Nginx server configuration is defined in the server section inside the http section. You can define more than one server section inside the http section if needed.

In the server section,

listen option is used to configure Nginx to listen to port 80 (HTTP port) for web requests.

server_name option is used to set one or more domain names for the Nginx web server. If your DNS settings are correct, you can access Nginx web server using these domain names.

access_log is used to set the access log file path to /var/log/nginx/access.log. When someone tries to access the Nginx web server, the access information (i.e. IP address, URL, HTTP status code) will be logged to this file.

The root option is used to set the root directory of the Nginx web server.

Here, the root directory is /usr/share/nginx/html/.

This is where all the website files should be kept. The index option sets index.html as the default file to serve if no specific file is requested. For example, if you visit http://192.168.20.175/myfile.html, then Nginx will return the myfile.html file. But if you visit http://192.168.20.175/, then Nginx will send you the index.html file, as no specific file was requested.

Now, remove all the files from the /usr/share/nginx/html/ directory (web root) as follows:

$ sudo rm -rfv /usr/share/nginx/html/*

Now, create a new index.html file in the /usr/share/nginx/html/ directory as follows:

$ sudo nano /usr/share/nginx/html/index.html

Now, type in the following lines in index.html file and save the file.

<h1>Hello world</h1>
<p>&copy; 2020 LinuxHint.com</p>

Now, restart nginx service as follows:

$ sudo systemctl restart nginx

Now, visit http://192.168.20.175 from your web browser and you should see the following page. Congrats! You’ve set up your first Nginx web server.

Configuring Error Pages:

You can configure error pages in Nginx. For example, if a page/file/directory is not available, HTTP status code 404 will be returned to the browser. You can set a custom HTML error page for the HTTP status code 404 which will be returned to the browser.

To do that, add the following line in the server section of nginx.conf file.

server {

error_page 404 /404.html;

}

Now, create a file 404.html in the Nginx web root /usr/share/nginx/html/ as follows:

$ sudo nano /usr/share/nginx/html/404.html

Now, type in the following lines in 404.html and save the file.

<h1>Error 404</h1>
<h2 style="color: red;">Page not found</h2>
<p>&copy; 2020 LinuxHint.com</p>

Now, restart nginx service as follows:

$ sudo systemctl restart nginx

Now, try to access a non-existent path (http://192.168.20.175/nopage.html) and you should see the following error page.

If the 404.html file is in a different filesystem path (let’s say /usr/share/nginx/html/errors/ directory), you can map the URL /404.html to it as follows:

server {

error_page 404 /404.html;
location /404.html {
root            /usr/share/nginx/html/errors;
}

}

Now, make a new directory  /usr/share/nginx/html/errors/ as follows:

$ sudo mkdir /usr/share/nginx/html/errors

Now, create a new file 404.html in the  directory /usr/share/nginx/html/errors/ as follows:

$ sudo nano /usr/share/nginx/html/errors/404.html

Now, type in the following lines in the 404.html file and save the file.

<h1 style="color: red;">PAGE NOT FOUND</h1>
<a href="/">GO BACK HOME</a>

Now, restart nginx service as follows:

$ sudo systemctl restart nginx

Now, try to access a non-existent path (http://192.168.20.175/nopage.html) and you should see the updated error page.

The same way, you can set error page for other HTTP status codes.

You can also set the same error page for multiple HTTP status codes. For example, to set the same error page /404.html for the HTTP status codes 403 and 404, write the error_page option as follows:

error_page    403 404    /404.html;

Configuring Logs:

In Nginx, the error_log and access_log directives configure error and access logging.

The formats of the error_log and access_log directives are:

error_log /path/to/error/log/file [level];
access_log /path/to/access/log/file [format-name];

For error_log, the optional second argument is a severity level (such as warn or error). For access_log, the optional second argument is the name of a log format defined with the log_format option.
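For example (these paths match the defaults used elsewhere in this article; warn is one of Nginx's standard severity levels, and combined is its predefined access-log format):

```nginx
error_log  /var/log/nginx/error.log warn;
access_log /var/log/nginx/access.log combined;
```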

You can define your own error log and access log formats if you want.

To do that, use the log_format option in the http section to define your custom log format as follows.

http {

log_format simple      '[$time_iso8601] $request_method $request_uri '
'[$status] ($request_time) -> $bytes_sent bytes';

server {

access_log /var/log/nginx/access.log simple;

}
}

Here, the log format name is simple. Some nginx variables are used to define the custom log format. Visit the Nginx Embedded Variables Manual to learn about all the Nginx variables.

The custom log format must be enclosed in single quotes. The format can be defined on a single line or split across multiple quoted parts on multiple lines, which Nginx concatenates; I've shown the multi-line style in this article. A single-line format works just as well.

Once the log format simple is defined, the access_log option is used to tell Nginx to use it for the access log.

Note that the error_log directive does not take a log_format name; its optional second argument is a severity level, such as warn or error.

I’ve only configured custom log format for the access log in this article.

Now, restart nginx service as follows:

$ sudo systemctl restart nginx

Now, you can monitor the access log file as follows:

$ sudo tail -f /var/log/nginx/access.log

You can also monitor the error log file as follows:

$ sudo tail -f /var/log/nginx/error.log

If you want, you can monitor the access log and error log files at the same time as follows:

$ sudo tail -f /var/log/nginx/{error,access}.log

As you can see, the new access log format is being used.

Denying Access to Certain Paths:

You can use regular expressions to match certain URI paths and deny access to them in Nginx.

Let’s say, your website is managed by Git, and you want to deny access to the .git/ directory on your web root.

To do that, type in the following lines in the server section of the /etc/nginx/nginx.conf file:

server {

location ~ \.git {
deny all;
}

}

As you can see, access to any path that contains .git is denied.
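The same location ~ technique covers other hidden files too; a sketch (this broader pattern is my addition, not from the article's configuration):

```nginx
# Deny any URI containing a dot-file path component,
# e.g. /.env, /.git/config, /subdir/.htpasswd
location ~ /\. {
    deny all;
}
```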

Configuring Compression:

You can compress web contents before sending them to the browser using gzip to save bandwidth usage of the Nginx web server.

I have some JPEG images in the /usr/share/nginx/html/images/ directory.

I can access these images using the URI path /images.

To enable gzip compression for only the JPEG images in the URI path /images, type in the following lines in the server section of /etc/nginx/nginx.conf file.

server {

location /images {
gzip on;
gzip_comp_level 9;
gzip_min_length 100000;
gzip_types image/jpeg;
}

}

Here, gzip_comp_level is used to set the compression level. It can be any number from 1 to 9. The higher the level, the smaller the compressed file will be, at the cost of more CPU time.

The file will only be compressed if the size of the file is above gzip_min_length. I’ve set it to about 100 KB in this example. So, JPEG files smaller than 100 KB won’t be gzip compressed.
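The level/size trade-off can be illustrated with the gzip command-line tool, which uses the same DEFLATE levels 1–9 as the gzip_comp_level option (the sample file below is made-up, highly compressible data, so the exact sizes are only illustrative):

```shell
# Create ~200 KB of repetitive sample data (illustration only)
head -c 200000 /dev/zero | tr '\0' 'a' > /tmp/sample.txt

# Compare compressed sizes at the lowest and highest levels
gzip -1 -c /tmp/sample.txt | wc -c
gzip -9 -c /tmp/sample.txt | wc -c
```

On data like this, level 9 produces a file no larger than level 1, but it burns more CPU cycles per request — the same trade-off gzip_comp_level controls in Nginx.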

The gzip_types option is used to set the MIME types of the files that will be compressed.

You can find mime type from file extensions as follows:

$ grep jpg /etc/nginx/mime.types

As you can see, for .jpg or .jpeg file extension, the mime type is image/jpeg.

You can set one or more mime types using gzip_types option.

If you want to set multiple mime types, then make sure to separate them with spaces as follows:

"
gzip_types image/jpeg image/png image/gif;

Now, restart nginx service as follows:

$ sudo systemctl restart nginx

As you can see, Nginx sends gzip compressed image files to the browser when requested.

As you can see in the screenshot below, the gzip compressed file is smaller than the original file.

$ sudo tail -f /var/log/nginx/access.log

Enabling HTTPS:

You can enable SSL in Nginx very easily. In this section, I am going to show you how to set up a self-signed SSL certificate in Nginx.

First, navigate to the /etc/ssl/ directory as follows:

$ cd /etc/ssl

Now, generate an SSL key server.key and certificate server.crt with the following command:

$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout server.key -out server.crt

NOTE: You must have openssl installed for this to work. If openssl command is unavailable, install openssl with the following command:

$ sudo dnf install openssl  -y

Now, type in your 2-letter country code (i.e. US for the USA, GB for the United Kingdom, RU for Russia, CN for China) and press <Enter>.

Now, type in your State/Province name and press <Enter>.

Now, type in your City name and press <Enter>.

Now, type in your Company name and press <Enter>.

Now, type in the organizational unit name of your company which will use this certificate and press <Enter>.

Now, type in the fully qualified domain name (FQDN) of your Nginx web server and press <Enter>. The SSL certificate will be valid only if the Nginx web server is accessed using this domain name.

Now, type in your email address and press <Enter>.

Your SSL certificate should be ready.

The SSL certificate and key should be generated in the /etc/ssl/ directory.

$ ls -lh
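If you'd rather skip the interactive prompts (for scripting or testing), the subject can be passed on the command line with -subj; a sketch, using placeholder subject values and /tmp paths so it doesn't touch /etc/ssl:

```shell
# Generate a throwaway self-signed certificate non-interactively
# (all subject fields below are placeholders)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /tmp/server.key -out /tmp/server.crt \
    -subj "/C=US/ST=State/L=City/O=Example/OU=IT/CN=www.example.com"

# Inspect the subject and validity period of the new certificate
openssl x509 -in /tmp/server.crt -noout -subject -dates
```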

Now, open the Nginx configuration file /etc/nginx/nginx.conf, change the listen port to 443, and type in the following lines in the server section.

server {

ssl on;
ssl_certificate /etc/ssl/server.crt;
ssl_certificate_key /etc/ssl/server.key;

}

NOTE: The ssl on; directive is deprecated since Nginx 1.15; on newer versions, use listen 443 ssl; instead and omit the ssl on; line.

Now, restart nginx service as follows:

$ sudo systemctl restart nginx

In real life, you will have a correct DNS setup. But for testing purposes, I've configured a local, file-based domain name on the computer I used to access the Nginx web server.

If you want to follow along, open the /etc/hosts file as follows:

$ sudo nano /etc/hosts

Then, add the following line to the /etc/hosts file.

192.168.20.175 www.example.com

Now, try to visit https://www.example.com and you should see the following page. You will see a Your connection is not secure message because it is a self-signed certificate. This is good for testing purposes only.

In real life, you will buy SSL certificates from Certificate Authorities (CAs) and use them, so you won't see this type of message.

As you can see, Nginx served the web page over HTTPS. So, SSL is working.

The SSL information of www.example.com.

Redirecting HTTP Requests to HTTPS:

If someone visits your website over HTTP protocol (http://www.example.com or http://192.168.20.175) instead of HTTPS (https://www.example.com), you don’t want to reject the HTTP request. If you do that, you will lose a visitor. What you really should do is redirect the user to the SSL enabled site. It is really simple to do.

First, open the Nginx configuration file /etc/nginx/nginx.conf and create a new server section inside the http section as follows:

http {

server {
listen 80;
server_name www.example.com;
return 301 https://www.example.com$request_uri;
}

}

This is the final /etc/nginx/nginx.conf file:

user nginx nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
events {
worker_connections 1024;
}
http {
include             /etc/nginx/mime.types;
default_type        application/octet-stream;
log_format simple   '[$time_iso8601] $request_method $request_uri '
'[$status] ($request_time) -> $bytes_sent bytes';
 
server {
listen 80;
server_name www.example.com;
return 301 https://www.example.com$request_uri;
}
server {
listen          443;
server_name     www.example.com;
ssl on;
ssl_certificate /etc/ssl/server.crt;
ssl_certificate_key /etc/ssl/server.key;
access_log      /var/log/nginx/access.log simple;
location / {
root            /usr/share/nginx/html;
index           index.html;
}
location /images {
gzip on;
gzip_comp_level 9;
gzip_min_length 100000;
gzip_types image/jpeg;
}
error_page 404 /404.html;
location /404.html {
root            /usr/share/nginx/html/errors;
}
location ~ \.git {
deny all;
}
}
}

Now, restart nginx service as follows:

$ sudo systemctl restart nginx

Now, if you try to access http://192.168.20.175 or http://www.example.com, you will be redirected to https://www.example.com.

So, that’s how you install and configure Nginx web server on CentOS 8. Thanks for reading this article.

How to Install, And Configure a Nginx Server For the First Time
https://linuxhint.com/install_get_started_nginx/ Tue, 21 Jan 2020 11:20:59 +0000

Nginx is one of the most popular web servers; it is used as a web server, proxy server, reverse proxy server, and load balancer. It's a popular alternative to the aging Apache web server, as it was designed with resource-intensive applications in mind. It is event-driven, asynchronous, and non-blocking, and therefore it frequently beats Apache in terms of performance. Nginx often powers large web deployments to which millions of users simultaneously connect to access resources.

Its asynchronous design and ability to handle millions of users without slowing down the server make it the number one choice in many enterprises for deploying their systems. This guide demonstrates how to install and configure an Nginx web server with ease. The guide uses Ubuntu 18.04 because it's an LTS release, with the long-term support required in a production environment. Installing and configuring an Nginx web server is relatively easy, but it involves a number of steps.

Installation

These instructions were written for Ubuntu 18.04 LTS, and thus they should not be used on a different Linux flavour unless the same commands work there as well. It's encouraged to install Nginx from a regular user account with sudo permission in order to mitigate security risks. However, this article doesn't demonstrate how to create a user account, as that's out of its scope.

  1. Prior to installing Nginx, update the local package information, then upgrade the packages with the following commands. This makes sure the latest version of Nginx is retrieved from the repository (server) when the Nginx install command is used. The dist-upgrade command intelligently handles dependencies to prevent incompatibility problems among different packages.
sudo apt-get update && sudo apt-get dist-upgrade
  2. Install Nginx with the following command:
sudo apt-get install nginx
  3. The installation only requires these commands, and then Nginx is installed on the server. Since in this guide Nginx is used as a web server, an index.html is created as soon as Nginx is installed, and it can be accessed through the external IP address of the server.

http://IPAddress

  4. Even though it's installed, it's important to make sure the Nginx service starts automatically in case the server is restarted for some reason. That can be done as follows:
sudo systemctl enable nginx
  5. Use the following two commands to adjust the file system permissions. The first command makes the currently logged-in user the owner of the files — root if it's root, or whatever the custom account's name is. The second command sets the files' permissions: since the permission for "all users" is set to R (read), the files can be read by anyone, which is recommended for publicly accessible files. W stands for write permission, which is required for the owner to make changes to a file, and it comes in handy when a file is modified through a script on the server, such as from the WordPress dashboard.
sudo chown -R $USER:$USER /var/www/html
sudo chmod -R 755 /var/www/html
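What the 755 mode actually produces can be checked on a scratch directory (the /tmp path here is only for illustration; the real web root stays untouched):

```shell
# rwxr-xr-x: the owner can read/write/enter, everyone else can read/enter
mkdir -p /tmp/webroot-demo
touch /tmp/webroot-demo/index.html
chmod -R 755 /tmp/webroot-demo
stat -c '%a %n' /tmp/webroot-demo /tmp/webroot-demo/index.html
```

Note that -R applies 755 to files as well, marking them executable; plain HTML doesn't need the execute bit, but it does no harm for static serving.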

Configuration

Installing Nginx is simple, as described above, but configuration requires more effort, and it also depends on the requirements and environment of the server. This guide demonstrates how to configure an Nginx web server for one domain, how to adjust basic settings, how to set up SSL/TLS (which Google rewards with a better ranking for the web site), and finally which commands are involved in setting up an Nginx server.

  1. Use the following command to open Nginx's default site file in the nano editor. The default file is automatically created when Nginx is first installed and defines the configuration for a web server. This configuration contains a server block which is dedicated to one domain name and processes the requests to that domain as per the rules within its boundary. Nano is just a console editor which makes opening text files easy; a more featureful editor like Notepad++ with the NppFTP extension is quite user friendly compared to a console text editor.
nano /etc/nginx/sites-available/default

The configuration file contains a few important lines as seen in the following code snippet.

  • The listen directive specifies the port of the IP address to listen on. For connection-encrypted web servers it's 443, and for non-encrypted web servers it's 80. default_server makes this the default out of all the server blocks, meaning this server block is executed if the request's Host header field doesn't match any of the specified server names. It's useful to capture all the requests to the server regardless of the host name (meaning the domain in this case).
  • server_name specifies the host name, usually the domain name. It's recommended to use both the naked and www flavours of the domain, for instance…
server_name google.com www.google.com
  • The root directive specifies where the web pages are located on the file server, for instance index.html and all other sub folders of a web site. The directive only requires the path to the root folder of the web site; the rest is taken relative to that.
  • The index directive specifies the index file's name, meaning the file that opens up when the host name is entered in the address bar of the web browser.
  • The location block is useful to process directives under the host name, for instance google.com/images or /videos. location / captures the root of the domain. The try_files directive tries to serve the requested content (file or folder) or returns a 404 if the resource is not available. If the /videos directory needs separate handling, use location /videos.
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
 root   /var/www/html/;
 index index.php index.html index.htm;
location / {
       try_files $uri $uri/ =404;
      }
}
  1. It’s recommended to restart the server once it’s configured at first. Restarting the nginx service, reload the configuration file as well. If a simple change was made in the configuration file, using reload is enough too instead of restart to prevent the connection from dropping to the server.
sudo systemctl restart nginx
  1. Nowadays it’s important to encrypt the connection to the website in order to improve the rank of the website in the Google index. Encrypting can be done by implementing SSL/TLS certificate in the web server. There are numerous certificates available in the market, both paid, and free, but this guide uses a free certificate known as let’s encrypt. It’s free of charge but required to renew the certificate once in every 3 months compared to a year in commercial certificates. The following command adds certbot PPA (personal package archive) to the system. These PPAs are hosted in launchpad.net, and when apt-get is used, they are downloaded to the system immediately.
sudo add-apt-repository ppa:certbot/certbot
  4. The following command downloads and installs the certbot flavour for nginx. As mentioned above, it's downloaded from launchpad.net.
sudo apt-get install python-certbot-nginx
  1. Once it’s installed, use the following command to enable SSL/TLS for the specified domain name, and its www flavour. This should be the same domain configured in aforesaid steps. If the domain is not configured, make sure it’s done prior to this step.
sudo certbot --nginx -d domain.extension
-d www.domain.extension
  6. When SSL/TLS is installed as above, restart the server again for the changes to take effect.
sudo systemctl restart nginx
  1. It’s also recommended to use configuration stated in the following website as it tweaks the SSL/TLS configuration for a specified requirement. The important options in the following website are, modern, intermediate, and old. Modern option makes the connection highly secure, but at the cost of compatibility, and thus the site won’t load on older we browsers. Intermediate option balances out both compatibility, and security, and thus recommended for most web sites. Old type is for legacy systems. It’s not recommended for production sites, but for warning users when they visit the site from ancient web browsers, like Internet Explorer 5.

https://ssl-config.mozilla.org/

Conclusion

Nginx is a web server, proxy server, reverse proxy server, and load balancer, and due to its high performance it's often used in enterprises to serve their web services. This guide taught how to install and configure an Nginx server for the first time on an Ubuntu server with ease. Installation and configuration are not that difficult, as the commands abstract away the complicated tasks. All in all, there is no reason not to use Nginx unless the business has a requirement that Nginx doesn't meet.

How to Install Free SSL Certificate for Nginx on Debian 10
https://linuxhint.com/install_free_ssl_cert_nginx_debian/ Mon, 30 Dec 2019 04:07:25 +0000

TLS and SSL protocols encrypt the connection between a site (or another service, but in this tutorial Nginx is the focus) and a client or web browser, preventing sniffers and MitM (Man-in-the-Middle) attacks from spying on the communication. A couple of years ago, Google began demanding that webmasters use SSL even for sites without sensitive information exchange, making this protocol a must for marketing (SEO) purposes as well.

This tutorial shows how to install a free SSL certificate for Nginx on Debian 10 using Certbot.

For users who haven’t installed Nginx yet the tutorial starts with a fast introduction to Nginx installation and configuration to show the site linux.bz, users who already have Nginx installed and configured can jump to How to Install Free SSL Certificate for Nginx on Debian 10.

Installing Nginx on Debian 10 Buster

Nginx was developed for high performance, supporting millions of simultaneous connections. By default it can only serve static sites, contrary to Apache, which can serve both static and dynamic sites; however, dynamic sites may also be served with Nginx aided by Apache or other software.

If you don't have Nginx installed on your PC yet, this section shows its installation and configuration; if you already have Nginx installed, jump to How to Install Free SSL Certificate for Nginx on Debian 10.

To begin installing Nginx on Debian 10 Buster, previous Debian versions, or Debian-based Linux distributions, run:

# apt install nginx -y

You should be able to access your web server through your browser at http://127.0.0.1/ (localhost).

Now create a configuration file for your website using nano, on the terminal run:

# nano /etc/nginx/sites-available/linux.bz

Within the newly created file, input the content shown below, replacing linux.bz with your domain name.

server {
listen 80 default_server;
listen [::]:80 default_server;
root /var/www/linux.bz;
index index.html;
server_name linux.bz www.linux.bz;
location / {
try_files $uri $uri/ =404;
}
}

After adding the lines above (replacing linux.bz with your domain), press CTRL+X, then Y, then <Enter> to save the file and exit the nano text editor.

Then create a symbolic link to /etc/nginx/sites-enabled/linux.bz by running:

# ln -s /etc/nginx/sites-available/linux.bz /etc/nginx/sites-enabled/linux.bz

Now create a directory /var/www/<yourdomain>

In my case:

# mkdir /var/www/linux.bz
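The new web root starts out empty, so the browser would get a 403 or 404 until an index.html exists. A sketch (using a /tmp stand-in path so it can be tried without root; on the real server the path is /var/www/linux.bz):

```shell
WEBROOT=/tmp/linux.bz        # stand-in for /var/www/linux.bz
mkdir -p "$WEBROOT"
printf '<h1>linux.bz works!</h1>\n' > "$WEBROOT/index.html"
cat "$WEBROOT/index.html"
```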

# service nginx start

Now you should be able to access your website through nginx with your browser:

Note for domestic users:

In order to allow access from outside the network, some home users will need to configure their routers to forward the necessary ports to their web servers. The following image only shows an example of a router configuration page for port forwarding; on your router you'll need to forward ports 80 and 443 to your Nginx computer's local IP address.

How to Install Free SSL Certificate for Nginx on Debian 10

The free SSL installation process for Nginx on Debian 10 Buster is pretty easy and fast thanks to Certbot, which makes Let's Encrypt SSL certificates easy to install.

You can install Certbot on Debian 10 Buster by running the following command:

# apt install certbot python-certbot-nginx -y

To start the configuration process that adds an SSL certificate to Nginx and redirects all HTTP traffic to HTTPS, run:

# certbot --nginx

You can fill in all requested fields or leave them blank; the last step allows you to automatically configure Nginx to redirect all insecure traffic through HTTPS.

Conclusion on Free SSL Certificate for Nginx on Debian 10 Buster

The process to install a free SSL certificate for Nginx on Debian 10 Buster is pretty simple and fast thanks to Certbot. The whole process, from installing the Nginx web server to configuring it with SSL, took minutes.

Other options for free SSL certificates include SSL For Free (https://sslforfree.com), the short-term Comodo free SSL licences, or ZeroSSL, which I haven't tried yet, but none of them is as quick and simple as this one.

I hope you found this brief article on How to Install Free SSL Certificate for Nginx on Debian 10 useful, thank you for reading it.


How to Use URL Rewriting
https://linuxhint.com/url_rewriting/ Sun, 01 Sep 2019 14:52:17 +0000

URL rewriting is the process of changing the request URL to something else, as defined in the web server. Nginx uses the ngx_http_rewrite_module module, which mainly provides the return and rewrite directives for rewriting. Besides these, the map directive, defined in ngx_http_map_module, can also be used to rewrite URLs with ease. This guide explains the 2 main directives, return and rewrite, and their flags: how they work and what their applications are.

Prerequisites

This guide is written for Nginx 1.0.1 and above, and thus it's highly recommended to update any older Nginx instance to that version or later. However, some of the commands and syntaxes might still work on earlier versions. Since this guide is about URL rewriting, which is a slightly advanced topic, it assumes the audience knows the installation procedure of Nginx, which is therefore not explained here.

Return

Return is the basic directive that performs URL rewriting and is simple to understand. It doesn't use regular expressions, but it can include variables, including values captured from the location block's path. Usually, the return directive is used to redirect the request URL to a different location, and therefore it often uses HTTP codes like 301 for permanent redirection and 302 for temporary redirection. The following code snippets demonstrate some use cases of the return directive.

The following code snippet redirects the request URL to Google.com. It can be used either directly under the server block or under a location block, but make sure not to redirect to the same domain, in order to avoid a redirect loop.

return 301 https://google.com;

The following code snippet redirects the request URL to Nucuta.com along with the path. The previous example doesn't carry any path or parameters, so no matter which URL is typed in the address bar, the request is redirected to the root domain of Google. Here, in contrast, the path and the parameters are carried over without the domain name. Alternatively, $is_args$args can be used, but then the $uri variable should be used instead of $request_uri, as $request_uri already contains the parameters of the URL. If the requirement is to redirect to a different directory of the same domain, use the $host variable instead of the domain name in the return directive; for instance, in the following example use $host instead of nucuta.com.

return 301 https://nucuta.com$request_uri;

The following code snippet redirects the incoming request to the /path directory of the same domain and scheme. For example, if this snippet is used on http://Linux.com and a visitor makes a request to it, the visitor is redirected to the /path directory; therefore, this snippet is useful when managing a large number of web sites. Here $scheme is the protocol of the URL, such as FTP, HTTP, or HTTPS, and $host is the current server's domain with its extension, such as Google.com or Linux.Net. Since this doesn't perform any protocol redirection, such as from HTTP to HTTPS, that has to be done manually, as in the second example.

return 301 $scheme://$host/path;
if ($scheme != "https") {
return 301 https://$host$request_uri;
}

Another useful feature of the return directive is the ability to include regex captures. For that, a regular expression should be specified in the location block and should capture a pattern; the captured pattern can then be combined with the existing URL in the return directive for redirection. For instance, in the following example, when a request is made to access a text file, the location block captures the text file's name and passes it to the return directive, which combines it with the existing URL to redirect the request to another directory.

location ~* ^/([^/]+.txt)$ {
return 301 /chrome/$1;
}
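The capture-and-reuse behaviour can be mimicked on the command line with sed, which also uses a parenthesised group and a back-reference (the pattern mirrors the location regex above; this is only an illustration, since Nginx itself uses PCRE):

```shell
# A request path for a .txt file is rewritten into the /chrome/ directory,
# just like the location + return pair above
echo '/sample.txt' | sed -E 's|^/([^/]+.txt)$|/chrome/\1|'
# → /chrome/sample.txt
```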

Rewrite

Rewrite is a directive used to rewrite URLs internally in the web server without exposing the underlying mechanism to the client side. As per its syntax, it's used with regular expressions. The basic syntax is as follows: the regex placeholder is for a regular expression, the replacement placeholder is for replacing the matched URL, and the flag manipulates the flow of execution. At the moment, the flags used with the rewrite directive are break, permanent, redirect, and last.

rewrite regex replacement [flag];

Before proceeding to regular expressions, replacements, pattern capturing, and variables, it's important to know how the flags make Nginx's internal engine behave. There are four major flags used with the rewrite directive, as explained earlier; among them, the permanent and redirect flags can be grouped together, as both perform the same function: redirection.

Redirect

The redirect flag is used to signal the browser that the redirection is temporary, which also helps search engine crawlers recognize that the page has temporarily moved and will be reinstated in its original location some time later. When the page signals 302, search engines don't make any changes to their index, and therefore visitors still see the original page in the search engine index when searching, meaning the old page isn't removed; in addition, none of its qualities, such as page rank and link juice, are passed to the new page.

location /
{
rewrite ^ http://155.138.XXX.XXX/path redirect;
}

Permanent

The permanent flag is used to signal the browser that the redirection is permanent, which also helps search engine crawlers recognize that the page has permanently moved and will NOT be reinstated in its original location, unlike with a temporary move. When the page signals 301, search engines update their index, and therefore visitors see the new page in the search engine index instead of the old one when searching, meaning the old page is replaced with the new page; in addition, all of its qualities, such as page rank and link juice, are passed to the new page.

location /
{
rewrite ^ http://155.138.XXX.XXX/path permanent;
}

Regular Expression, Pattern Capturing, And Variables.

Nginx uses regular expressions heavily with the rewrite directive, so knowing about regular expressions comes in handy in this segment. There are multiple types of regular expressions, but Nginx uses Perl Compatible Regular Expressions, aka PCRE. Having a regular-expression testing tool is useful to make sure the written pattern actually works before using it in the Nginx configuration file. This guide recommends https://regex101.com/ as the tool, and all the following examples were tested thoroughly with it.

Regular Expressions

rewrite ^/fr/(.*)$ http://nucuta.com/$1 permanent;

A typical rewrite directive looks like the one above: it starts with the rewrite keyword, then the "pattern" as a regular expression, then the "replacement", and finally the "flag", each separated by a space. The rewrite directive can be placed anywhere within the server block, but it's recommended to keep it after the listen, server_name, root, and index directives. When a visitor makes a request to the server, a URL is sent along with the request; if the URL matches the regular expression pattern specified in the rewrite directive, it's rewritten based on the replacement, and then the execution flow is manipulated based on the flag.

The regular expression pattern uses parentheses to indicate a group, whose sub-string is extracted from the URL when the regex pattern matches the URL of the request; the extracted sub-string is then assigned to a variable in the "replacement" part of the rewrite directive. If there are multiple matched groups, the sub-string of each group is assigned to the variables in numeric order: the sub-string of the first matched group goes to the first variable ($1), the sub-string of the second matched group goes to the second variable ($2), and so on.
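Numbered captures can be tried out the same way with a quick sed analogue (illustration only; in an actual rewrite directive the back-references would be $1 and $2 instead of \1 and \2):

```shell
# Two groups: the directory name becomes \1, the file name becomes \2
echo '/browser/sample.txt' | sed -E 's|^/([^/]+)/([^/]+)$|dir=\1 file=\2|'
# → dir=browser file=sample.txt
```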

Out of the 4 flags, 2 were already explained in this guide; the remaining ones are last and break. Before understanding how they work, it's important to understand how the Nginx engine behaves with rewrite directives. When a URL is sent along with a request, the Nginx engine tries to match it against a location block. Whether it matches or not, if a directive like rewrite or return is stumbled upon, it's executed sequentially. If the sent URL matches the pattern of a rewrite directive, the Nginx engine processes the whole configuration again, regardless of where the rewrite directive is specified, as a loop, until the newly rewritten URL matches one of the location blocks.

The following URL is used as a demonstration to explain how both flags make the execution flow of the Nginx engine behave with the rewrite directive. The following screenshot portrays the file structure of the web server.

http://155.138.XXX.XXX/browser/sample.txt (the URL sent as a request)

When No Flag Is Used

When no flag is used, both rewrite directives are executed sequentially; hence the first URL in the following list turns into the 2nd, and then the 2nd URL turns into the last one. So when the sample.txt file in the browser folder is requested, the web server actually serves the sample.txt file in the root folder. Since the URL rewriting is completely abstracted away from the browser, the browser doesn't see any difference in serving, in contrast to the return directive, which tells the browser with an HTTP code whether the request was redirected or not.

  1. http://155.138.XXX.XXX/browser/sample.txt
  2. http://155.138.XXX.XXX/chrome/sample.txt
  3. http://155.138.XXX.XXX/sample.txt
location / {
}
rewrite ^/browser/(.*)$ /chrome/$1;
rewrite ^/chrome/(.*)$ /$1;
location /chrome {
try_files $uri $uri/ =404;
}

When Either Break or Last Flag Is Specified Outside of a Location Block

When either the break or last flag is specified outside of a location block, the rewrite directives after the matched one are not parsed at all. For instance, in the following example, the request URL is rewritten to the 2nd one in the following list regardless of which flag is used, and that's it.

  1. http://155.138.XXX.XXX/browser/sample.txt
  2. http://155.138.XXX.XXX/chrome/sample.txt
location / {
}

rewrite ^/browser/(.*)$ /chrome/$1 last; #break
rewrite ^/chrome/(.*)$ /$1 last; #break

location /chrome {
    try_files $uri $uri/ =404;
}

When Last Flag Is Used Inside of a Location Block

When the last flag is used inside of a location block, it stops parsing any more rewrite directives inside of that particular location block and moves on to the next matching location block. If the rewritten URL matches the path of that location block, the subsequent rewrite directives inside it are executed.

  1. http://155.138.XXX.XXX/browser/sample.txt
  2. http://155.138.XXX.XXX/chrome/sample.txt
  3. http://155.138.XXX.XXX/sample.txt
location / {
    rewrite ^/browser/(.*)$ /chrome/$1 last;
}

location /chrome {
    rewrite ^/chrome/(.*)$ /$1 last;
    try_files $uri $uri/ =404;
}

When Break Flag Is Used Inside of a Location Block

The break flag, on the other hand, when used inside of a location block, stops parsing any more rewrite directives, regardless of where they are located, as soon as one rewrite directive matches the request URL, and serves the content to the user.

location / {
    rewrite ^/browser/(.*)$ /chrome/$1 break;
}

location /chrome {
    rewrite ^/chrome/(.*)$ /$1 break;
    try_files $uri $uri/ =404;
}

Conclusion

URL rewriting is the process of rewriting URLs within a web server. Nginx provides multiple directives, such as return, rewrite, and map, to make it possible. This guide demonstrated what the return and rewrite directives are and how they are used to rewrite URLs with ease. As demonstrated in the examples, the return directive is suitable for signaling the browser and search engine crawlers about the whereabouts of a page, whereas the rewrite directive is useful for abstracting away the URL rewriting process without letting the browser know what is going on behind the scenes. This is quite useful in serving content through a CDN, a cached server, or from a different location within the network. Users never know where the resource is coming from, as the browser only shows the URL given to them.

]]>
How to Block Hotlinking with Nginx https://linuxhint.com/block_hotlinking_nginx/ Sun, 04 Aug 2019 16:13:05 +0000 https://linuxhint.com/?p=44668 Nginx is a lightweight web server capable of handling a huge number of requests at a time without overwhelming the server. It contains sophisticated features such as asynchronous processing, IPv6 support, a cache loader, HTTP/2 support, hotlink blocking, thread pools, SPDY, and SSL, among many others. One of the most important features for any website in general is hotlink blocking. Hotlinking is a malicious practice often carried out by petty webmasters who cannot afford bandwidth costs and so end up taking it from somewhere else. This prevents legitimate webmasters from utilizing the bandwidth they paid for. On top of that, the linked resource might become unavailable to visitors of the original website when the bandwidth allocated to the original webmaster runs out and the site owner doesn't pay for the excess consumption. All in all, to preserve the integrity and availability of the website, hotlinking should be stopped, and this guide teaches how to get it done with ease.

Preparation

The preparation segment lays out the general instructions for both methods described later. Obviously, it's important to have console access to the server over SSH and a proper text editor, such as nano, to open the Nginx configuration file. Once both are in place, use the following commands to open, save, and apply the changes. The following steps assume the user has already accessed the server over SSH.

  • Type the following command to open the default configuration file of Nginx. If each domain has a separate configuration file, use its name instead of default.
nano /etc/nginx/sites-available/default

  • In the default or the domain's configuration file, type the code stated in one of the methods below. Make sure to use only one of them.
  • Use the following command to test the configuration file before pushing it to live mode.
    nginx -t
  • If everything is in the right order, go ahead and type the following command to apply the changes.
    sudo systemctl restart nginx

Method 1: General Method

The general method is very easy to implement and understand, as it consists of just a location block. Furthermore, it blocks requests only for certain file formats, instead of blocking every request from invalid referers to the server.

  1. Copy the following code snippet.
  2. Open the default file of nginx as seen in the “Preparation” phase.
  3. Paste the copied code snippet under the first location block found in the default file. In Nginx, a case-insensitive regular expression location (~*) always takes priority over the plain prefix location (/), and thus the following code snippet is executed before the forward slash location block.
  4. Save and close the default file, then follow steps 3 and 4 in the “Preparation” phase for the changes to take effect.

The following example blocks requests to css, gif, ico, jpeg, jpg, js, png, woff, woff2, ttf, ttc, otf, and eot files. There are 10 conditional statements under the location block. The first conditional statement allows the resources to be viewed directly through the web browser, the 2nd and 3rd blocks allow the resources to be viewed through the original site (both the naked and www subdomains), and the rest of the blocks, except the search?q one and the last block, allow search engine crawlers to access and index the resources, which is very important for indexing images in both Google Images and Bing Images. The search?q condition allows the Google cache service to access and save the resources along with the page, so the page can be accessed directly through a Google search result when the site is offline.

location ~* \.(css|gif|ico|jpeg|jpg|js|png|woff|woff2|ttf|ttc|otf|eot)$ {
    if ($http_referer !~ "^$") {
        set $rule_0 1$rule_0;
    }
    if ($http_referer !~ "^http://nucuta.com/.*$") {
        set $rule_0 2$rule_0;
    }
    if ($http_referer !~ "^http://nucuta.com$") {
        set $rule_0 3$rule_0;
    }
    if ($http_referer !~* "google.") {
        set $rule_0 4$rule_0;
    }
    if ($http_referer !~* "search?q=cache") {
        set $rule_0 5$rule_0;
    }
    if ($http_referer !~* "msn.") {
        set $rule_0 6$rule_0;
    }
    if ($http_referer !~* "yahoo.") {
        set $rule_0 7$rule_0;
    }
    if ($http_user_agent !~* "googlebot") {
        set $rule_0 8$rule_0;
    }
    if ($http_user_agent !~* "msnbot") {
        set $rule_0 9$rule_0;
    }
    if ($http_user_agent !~* "slurp") {
        set $rule_0 10$rule_0;
    }
    if ($rule_0 = "10987654321") {
        return 403;
        break;
    }
}

Method 2: Valid_Referers Method

Valid_referers is the most convenient and widely recognized method to block invalid referers with ease. It contains just two lines compared to the previous method and is very flexible. However, it's a bit harder to digest, as it involves regular expressions and a different mechanism for blocking requests from invalid referers.

  1. Copy the following code snippet into the very beginning of the main location block.
  2. Replace the domain name list with the allowed domain names, for instance Google, Bing, or your own domains.
  3. Save and close the default file, then follow steps 3 and 4 in the “Preparation” phase for the changes to take effect.

valid_referers none blocked server_names
               *.linux.com linux.* www.linux.com/about/
               ~\.linux\.;

if ($invalid_referer) {
    return 403;
}

It mainly has two code blocks: valid_referers, and the if conditional expression with the invalid_referer variable. By default, this code block is used at the very beginning of the location block, before the execution of any other code, but it can be used in any other place too, such as inside a location block with regular expressions detecting specific file formats, to make the blocking relevant only to those file formats, as in method 1. As explained earlier, the method contains just two code blocks. The first code block takes three keywords: the first, “none”, matches when the referer field is missing from the HTTP request; the second, “blocked”, matches when the referer field was deleted by a middle party, such as a proxy or firewall; and the third part specifies the valid domain names.

When a domain name starts with the “~” symbol, it's regarded as a regular expression, and thus very complex patterns can be used, though they might be difficult to understand if regular expressions are not well known. If any of the conditions in the valid_referers statement are met, the invalid_referer variable is set to an empty string; otherwise it's set to 1. This means that if the incoming request doesn't contain any referer field, or if Nginx identifies that the referer field was removed by a firewall or proxy, or if the referer field matches the specified domains (the valid domain name list), then the invalid_referer variable is set to an empty string, and its if condition is not executed. However, if the request comes from a domain not specified in the valid_referers expression as a valid domain, then it's blocked.
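As noted above, the valid_referers pair can also be scoped to specific file formats by placing it inside a regular-expression location block, as in method 1; a minimal sketch, reusing the nucuta.com domain from the earlier examples:

```nginx
# Sketch: referer checking applied only to image files.
location ~* \.(gif|jpeg|jpg|png)$ {
    valid_referers none blocked server_names nucuta.com *.nucuta.com;

    if ($invalid_referer) {
        return 403;
    }
}
```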

 

Conclusion

Be sure to apply one of the methods above to prevent hotlinking on your Nginx-hosted sites.

]]>
How to Redirect URLs in Nginx https://linuxhint.com/redirect_urls_nginx/ Mon, 10 Jun 2019 20:46:00 +0000 https://linuxhint.com/?p=41884 Nginx is a lightweight web server, often used as a reverse proxy, web server, and load balancer. Nginx comes with a lot of useful features by default, and more can be added as modules when it's installed. This guide intends to demonstrate how to use Nginx to redirect URLs in different directions. Even though Nginx provides a plethora of features for redirecting URLs, this guide uses only a fraction of them, as its intention is to teach just the essential ones. The areas covered in this guide are redirecting insecure (port 80) URLs to their secured version, redirecting requests for the IP address to a domain name, and finally redirecting any other subdomains and domains to the main domain.

Pre-requirements

First of all, this guide assumes the user has a proper SSH client installed on the computer; if not, go ahead and install PuTTY as the client, then use the following commands. Additionally, Nginx and the nano editor are required as well.

  1. Type the following commands to install the nano text editor. The first command retrieves the latest package lists from the repositories, and the second installs the latest version of nano.
sudo apt-get update
sudo apt-get install nano
  2. In the terminal window, type the following command to change the current directory to the Nginx directory.
cd /etc/nginx/sites-available
  3. Now type nano default, or the name of the file associated with the domain, to change the settings of that domain.
  4. From here, follow one of the following segments to proceed.

Redirect from HTTP (Port 80)

Google, Bing, and many other search engines nowadays favor websites with an encrypted connection. When the connection between the client and the server is encrypted, the data transmitted through that connection is secure, and third parties are unable to access it. When the connection is not encrypted, the site is insecure, and the safety of the data is jeopardized. An insecure website uses port 80 to provide its service to the public. Unfortunately, by default the web browser connects to port 80, as the web server assumes that's what the client wants, and thus the request has to be redirected to its secured version. There are multiple ways to get it done with Nginx.

Method 1

If the current domain name is available and receives requests from clients, they can be redirected with the following code snippet. Simply copy it to the default file or the file of the domain.

The default_server parameter specifies that this server block is the default server; hence any request to port 80 executes this server block first by default, and the rest follow thereafter. The bracketed listen line signifies that it also captures requests from IPv6 networks. Return 301 signifies that the redirection is permanent, and thus link juice is passed along with it.

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name domain.com www.domain.com;
    return 301 https://domain.com$request_uri;
}

Method 2

If the current server has no website attached to it, and the requirement is to redirect any request to port 80, then the following server block can be used. Copy it to the default file as stated earlier. Here, _ (underscore) signifies any domain. As before, optional attributes like the default_server parameter and the bracketed listen line (for IPv6 addresses) can be used here as well.

server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

 

Method 3

The following code snippet signifies that if the connection is not encrypted (meaning port 80 is receiving the requests), the requests are redirected to the secure version of the specified domain. The snippet should be copied anywhere in the server {} block after the server_name parameter.

if ($scheme != "https") {
    return 301 https://$host$request_uri;
}
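For clarity, a sketch of where the snippet sits inside a complete server block (domain.com is a placeholder, and the ssl_certificate directives a real HTTPS block needs are omitted):

```nginx
server {
    listen 80;
    listen 443 ssl;
    server_name domain.com www.domain.com;

    # Placed after the server_name parameter, as described above.
    if ($scheme != "https") {
        return 301 https://$host$request_uri;
    }

    # ... the rest of the site configuration ...
}
```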

 

Redirect from The IP Address

Unlike a shared host, both dedicated servers and virtual private servers always have a dedicated IP address allocated to them. If the web server is configured in Nginx with an underscore (meaning the server processes every request), then any request to the IP address gains access to the website as well. Having the website accessible through its IP address is not something every webmaster wants, for various reasons. On the other hand, if every request is processed, malicious users can associate any random domain with the web server, which is not good for the name of the brand or the business; therefore, it's important to process requests only for specific domains or the IP address. This segment demonstrates how to process requests to the IP address of the web server in such cases. Using this code block along with one of the above code blocks (except method 2 of the previous solution) ensures every request to both the domain and the IP is redirected to the desired destination.

As said above, copy the following code snippet to the default file of Nginx (pre-requirements, 3rd step). Instead of using the name of the domain in the server_name parameter, simply use the IP address of the server; then, in the next line, use “return 301 domain” to state where the request is being redirected. Now, when a request to this particular IP address is received by the server, it's redirected to the stated domain. The best example of this is when a random user types the IP of the web server to access the site directly. If the following code snippet isn't stated anywhere in the default file, requests to the IP aren't processed; hence users are unable to access the website via the IP address.

server {
    listen 80;
    listen [::]:80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name 192.168.1.1;
    return 301 https://nucuta.com;
}

Redirect from any other Domain

This solution is the same as the first solution of this guide, except it also redirects requests made to port 443 of the web server, meaning both secured and unsecured requests are redirected to the domain stated in the return parameter. As said earlier, simply copy this to the default file.

server {
    listen 80;
    listen [::]:80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name domain.com www.domain.com;
    return 301 https://nucuta.com;
}

 

Finalizing

After following one of the above solutions, the Nginx configuration has to be reloaded to take effect. However, the default file has to be tested before reloading, as testing prevents the web server from crashing if there is an error in the configuration.

  1. Simply use the following command in the Linux terminal to test the default configuration file; if the result is good, continue to the next step.
sudo nginx -t
  2. Use one of the following commands to restart the Nginx web server. The command depends on the name and version of the Linux distro.
sudo systemctl restart nginx
sudo service nginx reload
sudo /etc/init.d/nginx reload
]]>
NGINX: Block Based on Geographical Location https://linuxhint.com/nginx_block_geo_location/ Mon, 15 Apr 2019 19:44:16 +0000 https://linuxhint.com/?p=38867 Nginx is a high-performance, lightweight, open-source web server available to the public free of charge. It has a tremendous number of valuable features compared to other lightweight servers. One such feature is its geoip module, which is used to identify the geographical location a visitor comes from. By default, it is used in combination with data provided by MaxMind to find out the geographical location of the visitor. The advantage of identifying the geographical location is the ability to enforce different policies for different regions; for instance, if a business is only available to countries in North America, the geoip module can block out all visitors coming from other regions. This ensures the business doesn't have to comply with rules and regulations enforced by other regions, such as the GDPR (General Data Protection Regulation).

Implementation

Even though there are many ways to implement this solution on a system, this guide demonstrates the easiest way to set it up with minimum effort.

  1. Obviously, Nginx has to be installed on the system prior to initiating the steps in this guide. However, having Nginx installed is not enough, as the geoip module is required as well. MaxMind used to release their database in the dat format, but a while ago they switched to the mmdb format, which makes Nginx require a newer module called ngx_http_geoip2_module. That module isn't required here, though, as the old dat database is still sufficient. Anyway, if Nginx isn't installed, set it up with the following two commands.
apt-get update
apt-get install nginx
  2. Type the following command to make sure the http_geoip module is installed.
nginx -V

  3. There are multiple ways to acquire/build the database that contains IP addresses and their respective country and city names. Install the geoip database with one of the following sets of commands. The first option is the easiest way to install the geoip database on the system and is enough for any average user; however, the ideal way is downloading a fresh copy, as those are updated with the latest information. The second option downloads the latest database from MaxMind, and the third option converts the mmdb database to its respective dat file format.

     Option 3 is time- and resource-consuming, and thus not recommended for weak servers. If an updated database is still needed without doing the conversion yourself, use option 2; it saves the time and money of converting the file, but the security can't be guaranteed, as it's converted by someone else rather than an official party. Option 3 requires three pip packages (setuptools, ipaddr, and dcryptit) and uses Python 2 to run the script. The last line converts the zip archive to a .dat file. Even though the conversion is described as mmdb to .dat, it actually converts a CSV file to the .dat file format, and thus it requires the geoname2fips.csv file, which comes along with the conversion script bundle.

Option 1

apt-get install geoip-database libgeoip1

Option 2

cd /usr/share/GeoIP
wget -O maxmind.dat.gz https://bit.ly/2Gh3gTZ
gunzip maxmind.dat.gz

Option 3

cd /home/
git clone https://github.com/sherpya/geolite2legacy
apt-get install python
apt-get install python-pip
pip install setuptools
pip install ipaddr
pip install dcryptit
mkdir -p /usr/share/GeoIP/
cd /usr/share/GeoIP/
wget https://geolite.maxmind.com/download/geoip/database/GeoLite2-Country-CSV.zip
python /home/geolite2legacy/geolite2legacy.py -i /usr/share/GeoIP/GeoLite2-Country-CSV.zip -f /home/geolite2legacy/geoname2fips.csv -o /usr/share/GeoIP/GeoLite2-Country.dat
  4. Configure the Nginx configuration file as follows. Type the command on the first line in the Linux terminal as usual, and copy the rest of the lines into nginx.conf. Make sure the name mentioned in /usr/share/GeoIP/GeoIP.dat matches the dat file stored in the /usr/share/GeoIP folder. Even though the following example specifies just one country, multiple country codes can be specified, one line per country code. The list of available country codes can be found at http://www.maxmind.com/app/iso3166.
nano /etc/nginx/nginx.conf
geoip_country /usr/share/GeoIP/GeoIP.dat;
map $geoip_country_code $allowed_country {
    default yes;
    LK no;
}
  5. Open the default file with any text editor (nano is preferred, as it's quite easy to edit with), then add the content from the 2nd line onward anywhere within a location block in the default file. The code works like this: when a visitor makes a request to the web server, Nginx fetches their IP address and matches it against its records to find the respective country code. If the country is mentioned in the map block, no is assigned to the $allowed_country variable, and checking $allowed_country then makes it possible to manipulate the response. This guide uses no, and thus the visitor is denied from seeing the content. If there are multiple domains, like .com and .lk, or nucuta.com and nucuta.net, add the code from line 3 onward to each “domain”.conf file as well. If Nginx is configured well, the file for the respective domain is located in the sites-available folder.
nano /etc/nginx/sites-available/default
if ($allowed_country = no) {
    return 444;
}
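Since multiple country codes and other status codes are allowed, the same mechanism can be flipped into a whitelist; a hedged sketch (the country codes and the redirect URL are placeholders):

```nginx
# In nginx.conf: whitelist instead of blacklist.
map $geoip_country_code $allowed_country {
    default no;    # deny everyone by default
    US yes;        # allow the United States
    CA yes;        # allow Canada
}

# In the site's location block: redirect instead of dropping the connection.
if ($allowed_country = no) {
    return 302 https://example.com/not-available;
}
```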
  6. Restart the Nginx server with the following command. Thereafter, accessing the web server from any Sri Lankan (LK) address causes the web server to return nothing, as seen in the following screenshots. 444 in Nginx is a non-standard code that returns nothing and closes the connection. Any other code, such as 302, 301, or 404, can be used here instead. If 302 or 301 is specified, a URL to redirect the visitor to should be specified as well.
systemctl restart nginx

Conclusion

Blocking visitors based on their geography is critical for some businesses to function, due to various regional rules and regulations. Nginx caters to such needs with its geoip module, which uses MaxMind databases to find the country from the IP address of the visitor. The database works with both IPv4 and IPv6. Since MaxMind discontinued their legacy dat database format, the ways to make use of their data are converting the new file format to a dat file, using an already-converted copy, or using a third-party module for Nginx that supports the mmdb file format. The Python script provided here is ideal for the conversion, even though it takes a while to see the outcome. MaxMind claims over 99% accuracy in finding the country based on the IP; hence it's a must-have tool for any business.

]]>
Install NGINX on CentOS https://linuxhint.com/install_nginx_centos/ Thu, 10 Jan 2019 06:45:53 +0000 https://linuxhint.com/?p=35237 In the case of any web server, performance is something you need to keep in mind. In fact, performance is the main factor that decides the success of running a server: the faster the server, the better the performance you get out of your current hardware configuration.

There are a number of server applications available out there. The most popular ones include Apache and NGINX. Both of them are free and open-source. Of course, in terms of popularity, Apache is quite a popular choice around the world; in fact, more than 65% of all the servers in the current cyber world are powered by Apache!

However, that doesn't diminish the benefits of NGINX ("engine-ex", which is how it's pronounced). There are tons of additional benefits that NGINX provides that Apache fails to offer.

The first and foremost is performance. NGINX, being a lightweight alternative to Apache, offers better overall performance. NGINX is also well-suited to Linux and other UNIX-like environments. However, NGINX falls short in terms of flexibility: you often need to compile additional modules into the NGINX binary, as not all NGINX modules support dynamic loading.

As both of them are free, you can easily start your own server right now! In today’s tutorial, we’ll be checking out NGINX running on my test CentOS system.

Installing NGINX

NGINX is available on the EPEL repository. Let’s start the installation!

At first, make sure that your system has EPEL repository enabled –

sudo yum install epel-release

sudo yum update

Now, time to perform the installation!!!

sudo yum install nginx

Starting NGINX

The installation is complete; time to fire it up! It's not going to start all by itself!

sudo systemctl start nginx

If your system is configured to use a firewall, enable HTTP and HTTPS traffic from/to the server –

sudo firewall-cmd --permanent --zone=public --add-service=http
sudo firewall-cmd --permanent --zone=public --add-service=https
sudo firewall-cmd --reload

Time to test that the server is working. Visit the following address in a browser –

http://<server_domain_IP>

Don’t have the IP address of the server? You can find it out by running the following command –

ip addr

In my case, I need the “enp0s3” connection. Now, find out the IP address by running the following command –

ip addr show enp0s3 | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'

You may also want to enable NGINX every time your system boots up –

sudo systemctl enable nginx

Additional configurations

The default configuration isn’t always the best one as it depends on the particular usage case. Fortunately, NGINX comes up with a handy set of configuration files.

  • NGINX global configuration file
    /etc/nginx/nginx.conf
  • Default server root
    /usr/share/nginx/html
  • Server block configuration
    /etc/nginx/conf.d/*.conf
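For example, a minimal server block dropped into /etc/nginx/conf.d/ might look like the sketch below (example.com is a placeholder domain; the root reuses the default server root listed above):

```nginx
# /etc/nginx/conf.d/example.conf -- a minimal server block sketch.
server {
    listen 80;
    server_name example.com;

    root /usr/share/nginx/html;   # the default server root noted above
    index index.html;
}
```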

Enjoy!

]]>
Nginx Reverse Proxy https://linuxhint.com/nginx_reverse_proxy/ Sun, 11 Nov 2018 05:35:51 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=32195 What is a reverse proxy?

A proxy server is one that talks to the Internet on your behalf. For example, if your college’s network has blocked https://www.facebook.com/ but the domain https://exampleproxy.com is still accessible, then you can visit the latter, and it will forward all your requests to the Facebook servers and send the responses from Facebook back to your browser.

To recap, a proxy sends requests on behalf of one or more clients to servers out on the Internet. A reverse proxy behaves in a similar fashion.

A reverse proxy receives requests from any and all clients on behalf of one or more servers. So if you have a couple of servers hosting ww1.example.com and ww2.example.com, a reverse proxy server can accept requests on behalf of the two servers and forward those requests to their respective endpoints, where the response is generated and sent back to the reverse proxy to be forwarded to the clients.

The set up

Before we start tweaking Nginx config files to make a reverse proxy server, I want to set in stone what my setup looks like, so that when you try to implement your own design, it will be less confusing.

I used DigitalOcean’s platform to spin up three VPSes. They are all on the same network, each with its own private IP, and only one VPS has a static public IP (this will be our reverse proxy server).

VM/Hostname  | Private IP     | Public IP     | Role
Reverseproxy | 10.135.123.187 | 159.89.108.14 | Reverse proxy, running Nginx
Node-1       | 10.135.123.183 | N/A           | Running first website
Node-2       | 10.135.123.186 | N/A           | Running second website

The two websites that are running have the domain names ww1.ranvirslog.com and ww2.ranvirslog.com, and both of their A records point to the reverse proxy's public IP, i.e., 159.89.108.14.

The idea behind the private IPs is that the three VMs can talk to one another via these private IPs, but a remote user can only access the reverse proxy VM at its public IP. This is important to keep in mind. For example, you can’t SSH into any of the VMs using its private IP.

Furthermore, both Node-1 and Node-2 have an Apache web server serving two distinct webpages. This will help us distinguish one from another.

The first website says “WEBSITE 1 WORKS!!!”

Similarly, the second website says “WEBSITE 2 WORKS!!!”

Your websites may differ, but if you want to replicate this setup as a starting point, run apt install apache2 on Node-1 and Node-2, then edit the file /var/www/html/index.html so that the web server says whatever you want it to say.

The reverseproxy VM is still untouched. All the VMs are running Ubuntu 18.04 LTS, but you are free to use any other OS you want. You can even emulate this using Docker containers: by creating a user-defined Docker bridge network and spawning containers on it, you can assign each container a private IP and forward all the HTTP/HTTPS traffic to one container, which would be our Nginx reverse proxy container.

So far so good.

Nginx Default Configuration

Let’s begin by installing Nginx to the reverseproxy server, I am using Ubuntu so apt is my package manager:

$ sudo apt install nginx

Removing default configuration if you are using Debian-based distribution

Before we go any further a small note on Nginx’s configuration. All the various configuration files are stored in /etc/nginx including the nginx.conf file which is the main configuration file. If we look at the contents of this file (inside http block) you will notice the following two lines:

...
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
...

The second line includes all the files in the sites-enabled directory in Nginx’s configuration. This is the standard practice on most Debian-based distributions. For example, the default “Welcome to Nginx” webpage has a corresponding file named default at /etc/nginx/sites-available/default, with a symlink in /etc/nginx/sites-enabled/. We don’t need this default webpage, so we can safely remove the symlink; the original is still available in the sites-available directory.

$ sudo rm /etc/nginx/sites-enabled/default

But when we create the reverse proxy configuration, we will do so in the conf.d directory (with our file name having a .conf extension); this is universal and works across all distributions, not just Debian or Ubuntu.

Removing default configuration for other distros

If you are not using a Debian-based distro, you will find the default welcome page configuration at /etc/nginx/conf.d/default.conf. Just move the file to some place safe if you want to use it in the future (since this is not a symlink):

$ mv /etc/nginx/conf.d/default.conf ~/default.conf

It can sometimes be found in /etc/nginx/default.d, because people just can’t agree upon a single simple standard! So you may have to do a bit of digging in the /etc/nginx directory to figure this out.

Adding Reverse Proxy Blocks

As stated before, the two different domain names I am hosting behind this proxy are

  1. ww1.ranvirslog.com (WEBSITE 1) with IP 10.135.123.183
  2. ww2.ranvirslog.com (WEBSITE 2) with IP 10.135.123.186

So let’s create one file per website in the /etc/nginx/conf.d/ folder, so we stay well-organized.

$ touch /etc/nginx/conf.d/ww1.conf
$ touch /etc/nginx/conf.d/ww2.conf

You can name the files whatever you wish, as long as it has a .conf at the end of its name.

In the first file ww1.conf add the following lines:

server {
    listen 80;
    listen [::]:80;

    server_name ww1.ranvirslog.com;

    location / {
        proxy_pass http://10.135.123.183/;
        proxy_buffering off;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

The listen statements tell Nginx to listen on port 80 for both the IPv4 and IPv6 cases. It then checks whether the server_name is ww1.ranvirslog.com; if so, the location block kicks in and proxies the request to http://10.135.123.183/ with buffering turned off. Moreover, the proxy_set_header… line ensures that the client’s original IP is forwarded to the proxied server. This is helpful in case you want to count the number of unique visitors, etc.; otherwise the proxied server would see only one visitor: the Nginx server.

The buffering option and the set_header option are completely optional and were just added to make the proxying as transparent as possible. For the ww2.ranvirslog.com website, I added the following configuration at /etc/nginx/conf.d/ww2.conf:

server {
    listen 80;
    listen [::]:80;

    server_name ww2.ranvirslog.com;

    location / {
        proxy_pass http://10.135.123.186/;
        proxy_buffering off;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Save both the files and test whether the overall configuration is valid or not:

$ sudo nginx -t

If there are errors, the output of the above command will help you find and fix them. Now restart the server:

$ service nginx restart

And you can test whether it worked or not by visiting the different domain names in your browser and seeing the result.

Conclusion

Each individual’s use case is different. The configuration mentioned above may need a bit of tweaking to work for your scenario. Maybe you are running multiple servers on the same host but on different ports; in that case, the proxy_pass… line will have http://localhost:portNumber/ as its value.
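That port-based variant might look like the following sketch, assuming two backend services listening on ports 8080 and 8081 (the hostnames are reused from this guide's setup; the port numbers are placeholders):

```nginx
# Sketch: backends on the same host, different ports.
server {
    listen 80;
    server_name ww1.ranvirslog.com;

    location / {
        proxy_pass http://localhost:8080/;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name ww2.ranvirslog.com;

    location / {
        proxy_pass http://localhost:8081/;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```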

These details depend very much on your use case. For further details about other options and tuneables see the official Nginx docs. ]]>