How to List Startup Services at Boot Time in Fedora Linux?

Red Hat developed systemd as a system and service manager for Linux. It is compatible with the old SysV and LSB init scripts and adds features such as parallel start-up of system services at boot time, on-demand activation of daemons (background processes), and dependency-based service control logic.

Systemd introduces the concept of units in Linux. For example, service, target, and mount units are unit types whose files use the extensions .service, .target, and .mount, respectively. The configuration files representing these units are stored in the directories /usr/lib/systemd/system/, /run/systemd/system/, and /etc/systemd/system/.
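As an illustration, a minimal service unit file typically looks like the sketch below. The name myapp.service and the path /usr/bin/myapp are hypothetical, used only to show the structure:

[Unit]
Description=Example application service
After=network.target

[Service]
ExecStart=/usr/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target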

Earlier versions of Red Hat Enterprise Linux (RHEL) used init scripts. These Bash scripts, located in the directory “/etc/rc.d/init.d/”, were used to control services and daemons. In RHEL 7, service units were introduced to replace the init scripts. Fedora, the upstream OS of Red Hat Enterprise Linux, has used systemd since Fedora 15.

Service units have the .service file extension and play a role similar to that of init scripts. Systemd provides the systemctl utility to manage system services; it can be used to view, start, stop, restart, enable, or disable these services.

Advantages of Systemd Over Init System

  1. With systemd, we can prioritize necessary services over less significant services.
  2. Systemd uses cgroups to keep track of processes and to control their execution environment (see the example after this list).
  3. Systemd still supports old SysV init scripts while providing more control.
  4. Systemd is capable of dealing with dynamic system configuration modifications.
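For instance, to see how systemd organizes running processes into control groups (cgroups), you can inspect the cgroup hierarchy with:

$ systemd-cgls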

What Will We Cover?

In this guide, we will learn about managing systemd services. We will see how to enable and disable startup services at boot and how to perform service operations such as start, stop, and restart. We performed the exercises below on a Fedora 30 workstation, but most of them apply equally to other Linux distributions.

List Startup Services at Boot in Fedora Linux

The old SysV method uses the service and chkconfig commands to manage services. These commands have been replaced by systemd commands such as systemctl. Let us look at some systemctl operations on various services in Linux.

1. To list all the services running on your system, along with their states (enabled or disabled), use the command below:

$ sudo systemctl list-unit-files --type=service

A service can have three states: 1) enabled 2) disabled 3) static

An enabled service has a symlink in a .wants directory, whereas a disabled service does not. A static service has no [Install] section in its unit file, so it cannot be enabled or disabled.
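For example, you can list the symlinks of enabled services for the most common target (the exact .wants directory depends on each unit's [Install] section):

$ ls /etc/systemd/system/multi-user.target.wants/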

To get more details about the services, use the command below:

$ sudo systemctl -at service

Summary of the above column names:

UNIT — The systemd unit name (here, a service name).
LOAD — Indicates whether the systemd unit was loaded correctly.
ACTIVE — The high-level state of the unit (here, the service).
SUB — The low-level sub-state of the unit.
DESCRIPTION — A short description of the unit.
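As an illustration, a typical line of output for the SSH daemon looks roughly like this (the exact wording may differ between versions):

sshd.service    loaded    active    running    OpenSSH server daemon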

We can also use the following command:

$ sudo ls /lib/systemd/system/*.service

or

$ sudo ls /etc/systemd/system/*.service

With systemd, “/etc/inittab” is replaced by “/etc/systemd/system/”. This directory contains symlinks to unit files located in “/usr/lib/systemd/system/”, which is where the unit files themselves are installed. A service must be linked into “/etc/systemd/system/” to start at system boot, and the systemctl command creates this link on Fedora and other modern Linux systems.

2. Let us look at an example of enabling the httpd service:

$ sudo systemctl enable httpd.service
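After enabling, you can verify the symlink that systemctl created (assuming the unit's [Install] section declares WantedBy=multi-user.target, as httpd's does):

$ ls -l /etc/systemd/system/multi-user.target.wants/httpd.service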

Also, we can use the command below to filter all the enabled services:

$ sudo systemctl list-unit-files | grep enabled

or use the command:

$ sudo systemctl | grep running



3. To list all the active (running) services, use the command:

$ sudo systemctl -t service --state=active

4. To see which services are enabled to start automatically on system boot, we can also use the following command:

$ sudo systemctl list-unit-files --type=service --state=enabled --all

5. Similarly, we can check the services disabled to start at boot with the command:

$ sudo systemctl list-unit-files --type=service --state=disabled --all

6. We can also see how much time each service takes at startup:

$ sudo systemd-analyze blame

7. To check if a service is enabled for autostart at boot, use the command:

$ sudo systemctl is-enabled xxx

Put the name of the service in place of xxx. For example, for the httpd service, the command is:

$ sudo systemctl is-enabled httpd.service

or

$ sudo systemctl is-enabled httpd

8. To check the status of a service, use the command:

$ sudo systemctl status xxx.service

For example, to check the status of the sshd service:

$ sudo systemctl status sshd.service

9. To check if a service is running or not, just run the below command:

$ sudo systemctl is-active xxx.service

For example, to check the telnet status:

$ sudo systemctl is-active telnet.service

10. To start a dead or inactive service, use the command:

$ sudo systemctl start xxx.service

For example, to start the sshd service:

$ sudo systemctl start sshd



11. To disable a service at system boot

$ sudo systemctl disable xxx

For example, to disable the httpd service:

$ sudo systemctl disable httpd.service

or

$ sudo systemctl disable httpd

12. To restart a running service

$ sudo systemctl restart xxx.service

To restart the sshd service, use the command:

$ sudo systemctl restart sshd

If the service is not already running, it will be started.

13. To reload a running service

$ sudo systemctl reload xxx.service

For example, reload the httpd service with:

$ sudo systemctl reload httpd.service

This command reloads the configuration of a specific service. To make systemd itself re-read its unit configuration files (for example, after editing a unit file), use the command:

$ sudo systemctl daemon-reload

14. To list all the dependencies of a service:

$ sudo systemctl list-dependencies xxx.service

In the case of the httpd service, the command is:

$ sudo systemctl list-dependencies httpd.service

Conclusion

In this guide, we have seen various ways of managing services with systemd, such as enabling services at boot time and starting and stopping them. If you are used to the service command of the old SysVinit, you should switch to systemd, as it has more features and is the default init system in newer versions of Fedora, RHEL, and most other major Linux distributions.

Installing Apache CouchDB on Fedora

Developed by the Apache Software Foundation, CouchDB is a database management system that stores data in JSON documents. We can access our data using the HTTP protocol and manipulate it with JavaScript. CouchDB provides a RESTful HTTP API for managing database documents.
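For example, once the server is running on its default port 5984 (after the installation below), its HTTP API can be queried directly with curl; the root endpoint returns a small JSON welcome document similar to this:

$ curl http://127.0.0.1:5984/
{"couchdb":"Welcome","version":"3.1.1", ...}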

What Will We Cover?

In this guide, we will see how to install the latest version of Apache CouchDB on a Fedora 30 workstation, building it from the source code available on the official website. Before we start, ensure the following requirements are met.

Prerequisites:

  1. User account with “sudo” privileges
  2. Internet connection to download various files
  3. Basic knowledge of running commands on Linux

Installing Apache CouchDB:

Apache CouchDB requires various dependencies before it can be installed. The official CouchDB site lists these dependencies and the exact versions required:

  • Erlang OTP (19.x, 20.x >= 21.3.8.5, 21.x >= 21.2.3, 22.x >= 22.0.5)
  • ICU
  • OpenSSL
  • Mozilla SpiderMonkey (1.8.5)
  • GNU Make
  • GNU Compiler Collection
  • libcurl
  • help2man
  • Python (>= 2.7) for docs
  • Python Sphinx (>= 1.1.3)

These dependencies can be installed from the official Fedora 30 repositories with the following command:

$ sudo dnf install autoconf autoconf-archive automake curl-devel erlang-asn1 erlang-erts erlang-eunit gcc-c++ erlang-os_mon erlang-xmerl erlang-erl_interface help2man js-devel-1.8.5 libicu-devel libtool perl-Test-Harness

Once these dependencies are installed, we can continue to the process of installing Apache CouchDB, as shown below:

Step 1. Download the tarball file for Apache CouchDB using the ‘wget’ command:

$ wget https://mirrors.estointernet.in/apache/couchdb/source/3.1.1/apache-couchdb-3.1.1.tar.gz

Step 2. Extract the downloaded tarball file with the command given:

$ tar -xf apache-couchdb-3.1.1.tar.gz

Step 3. Move the extracted folder to the /opt directory and change into it:

$ sudo mv apache-couchdb-3.1.1 /opt/

$ cd /opt/apache-couchdb-3.1.1/

Step 4. To configure the package for your system, use the configure script, as shown below:

$ ./configure

If you want to see options available with the configure script, use the command:

$ ./configure --help

At the end of the script, if you see the message:

You have configured Apache CouchDB, time to relax.

It means that you have correctly configured the package.

Step 5. Now we will build the source code by running the command below:

$ make release

Or use gmake if make does not work.

In case you got the below error:

ERROR: Reltool support requires the reltool application to be installed!
ERROR: generate failed while processing

It means that you must install the erlang-reltool package to build CouchDB. Use the command below for this:

$ sudo dnf install erlang-reltool

Now, run the ‘make release’ again with the command below:

$ make release

If the above command finishes successfully, then you should see the message shown below:

“… done

You can now copy the rel/couchdb directory anywhere on your system.

Start CouchDB with ./bin/couchdb from within that directory.”

Step 6. Registering a CouchDB user

For security reasons, CouchDB suggests creating a separate user (couchdb) to run its services. Create the user with the command below:

$ sudo adduser --system -m --shell /bin/bash --comment "CouchDB Administrator" couchdb

The above command creates a user named “couchdb” together with a home directory and Bash shell.

Step 7. Now use the cp command to copy the directory “rel/couchdb” to the couchdb’s home directory (/home/couchdb):

$ sudo cp -R /opt/apache-couchdb-3.1.1/rel/couchdb /home/couchdb

Note: The path “rel/couchdb” is relative to the directory where you extracted CouchDB.

Step 8. We now need to change the ownership of the CouchDB directories using the command below:

$ sudo chown -R couchdb:couchdb /home/couchdb/couchdb

Step 9. Similarly, change the permission of the CouchDB directories with the command given below:

$ find /home/couchdb/couchdb -type d -exec chmod 0770 {} \;

Step 10. To modify the permissions for the ini files, open a new terminal window and run the below commands:

$ sudo -i

# chmod 0644 /home/couchdb/couchdb/etc/*

Step 11. Create an admin user before starting CouchDB (required in CouchDB version 3). For this, open the file “/home/couchdb/couchdb/etc/local.ini”:

# vi /home/couchdb/couchdb/etc/local.ini

Now go to the [admins] section, uncomment the admin line, and set your password in the following way:

admin = YourPassword

In place of YourPassword, put the password you want to use. You can add any admin user in the format of “username = password”. See the reference picture below:

Now return to the normal user terminal by typing exit:

# exit

Step 12. We will start the CouchDB server with the command given below:

$ sudo -i -u couchdb /home/couchdb/couchdb/bin/couchdb

The above command starts the CouchDB as the couchdb user, as shown in the following picture:

Step 13. Open a web browser and browse the below address to access the admin panel:

http://127.0.0.1:5984/_utils/index.html

To verify the installation, go to:

http://localhost:5984/_utils/verify_install.html
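If you prefer the command line, the same checks can be done with curl against CouchDB's standard HTTP endpoints:

$ curl http://127.0.0.1:5984/
$ curl http://127.0.0.1:5984/_up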

CouchDB can be configured as a single node or clustered. Let’s see the setup for a single node:

Step 1. Go to http://127.0.0.1:5984/_utils#setup

Step 2. Login with your admin account

Step 3. For the first-time setup, click on the setup icon and select the option “Configure a Single Node”.

Step 4. Create a new admin user for this setup. We can also continue with the previous “admin” user. In our case, we have created a new user: admin2 and password: 123. Now click the configure Node button:

Step 5. When you click the database icon, it will show you two system databases:

Note: Always restart CouchDB after creating an admin account.

Step 6. After restarting the couchdb, create a new database in the admin2 account, as follows:

You should see a “database created successfully” message, as shown in the image below:
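As an alternative to the web interface, a database can also be created over the HTTP API. Here admin and password stand in for the credentials you configured earlier, and mydb is just an example name:

$ curl -u admin:password -X PUT http://127.0.0.1:5984/mydb
{"ok":true}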

Conclusion:

In this guide, we learned how to install CouchDB from source code on a Fedora 30 workstation. We configured various aspects of the installation process and troubleshot some errors. We also learned how to set up a single-node configuration from the GUI. What you can do next:

  • Manually configure CouchDB for a single node
  • Create a clustered setup for CouchDB
How to install Enlightenment Desktop in Fedora 30 Workstation

Enlightenment is a desktop environment like GNOME, KDE, MATE, Cinnamon, and others. Its first release appeared in early 1997. It is a graphical desktop environment maintained by the Enlightenment project and has a typical UNIX/X11-based desktop style.

It has a rather elegant desktop interface and a distinct central design philosophy.

The Enlightenment desktop can manage windows and files, perform compositing, launch applications, handle the UI, and manipulate system settings. In fact, Enlightenment was the first window manager to bring themes to the X11 window system.

Enlightenment predates GNOME and is barely younger than KDE. Its first release, version 0.1, appeared in early 1997 as a simple window manager. Despite the limited capability of computers at the time to handle complex user interface functionality, it proved to be very flexible in terms of behavior and visual features.

Enlightenment offers a wealth of features and a great deal of flexibility. You can configure it as a simple GUI desktop or make it more dazzling with various activity options.

What will we cover

In this guide, we will see how to install the Enlightenment Desktop environment on Fedora 30 OS. We will see the installation method via the official repository and via source code. Let’s get started with the installation process of Enlightenment Desktop.

Prerequisites

  1. Fedora 30 OS with Gnome desktop installed on your system.
  2. Basic idea of running commands on Linux command-line interface.
  3. Root user account or a normal user account with sudo privileges.
  4. Good Internet connectivity to download various files.

Method 1. Installing Enlightenment Desktop Using official Fedora Repositories

Step 1. Installation using this method is pretty easy. You only need to install the enlightenment group package to get things working.

$ sudo dnf install @enlightenment

That’s all. The above command installs all the required packages and dependencies. Your new desktop environment is installed and ready to use. We only need to log out and log in again to apply the changes.

We will see the configuration steps after Method 2. If you are not interested in installing the Enlightenment Desktop from source code, you can skip to the configuration section.

Method 2. Installing Enlightenment Desktop from the source code

Installing the Enlightenment desktop from source code is a bit more involved. We need to install several required packages before running the installation scripts; without them, we might get an error like the one below:

Package requirements (eeze >= 1.20.5 ecore >= 1.20.5 eina >= 1.20.5) not found

Let us first install these dependencies:

1. Install the efl-devel package:

$ sudo dnf install efl-devel-1.21.1-4.fc30

2. Install the xcb-util-keysyms-devel package:

$ sudo dnf install xcb-util-keysyms-devel

Now we can continue with the further process of installation:

Step 1. Download the archive binary of enlightenment from the below command:

$ wget https://download.enlightenment.org/rel/apps/enlightenment/enlightenment-0.22.4.tar.xz

Step 2. Extract the downloaded file with the command:

$ tar -xf enlightenment-0.22.4.tar.xz

Step 3. Now move to the extracted folder with the change directory command:

$ cd enlightenment-0.22.4/

Step 4. Now to configure the package for your system, run the configure script as below:

$ sudo ./configure

You might get an error like the following after running the above script:

config.status: error: Something went wrong bootstrapping makefile fragments
for automatic dependency tracking. Try re-running configure with the
'--disable-dependency-tracking' option to at least be able to build
the package (albeit without support for automatic dependency tracking).

To fix this error, add the option --disable-dependency-tracking to the configure script as shown below:

$ sudo ./configure --disable-dependency-tracking

Step 5. To compile the code, we need to install the make utility with the command:

$ sudo dnf install make

If the configure script finishes without any error, we can compile the source code:

$ make

Step 6. Now install the enlightenment package with the command:

$ sudo make all install

After the above command is finished successfully, our enlightenment desktop is installed, and we can continue to configure it.

Configuration

Follow the below steps to configure the enlightenment desktop environment:

Step 1. Logout from your current session as shown below:

Step 2. Now at the start screen, select the ‘Enlightenment’ session from the setting icon as shown below:

Step 3. Now log in with your credentials. A startup screen might appear and quickly fade out. On the next screen, it will ask you to select the language for installation. You can use a USB mouse or keyboard to select the required language. Now hit the Next button to continue.

If you are not sure, you can stick to the system default language.

Step 4. Select the Keyboard layout of your choice and hit the Next button to move further:

If you are not sure, you can stick to a commonly used English(US) keyboard.

Step 5. The next step will ask to select a profile from three options: 1. Mobile 2. Computer 3. Tiling. We are selecting the Computer (Standard Enlightenment) profile:

Step 6. Now it will display different title sizes to select from. We have chosen the default highlighted 1.0 title size. You can select as per your choice:

Step 7. After the above window, the configuration process will ask you to select a behavior for window focus. If you select the first option, a window is focused only when you click on it. With the second option, a window is focused whenever the mouse enters or hovers over it. We are sticking with the second option, which is checked by default.

Step 8. In this part, you can choose how to bind the mouse actions (move, resize, open) to the keyboard keys (Shift, Ctrl, Alt, Win, AltGr). The default option is the Alt key. We simply press the Next button without checking any option to use the default setting (Alt key).

Step 9. If the next screen says ‘Connman network service not found’, just skip the message and hit the Next button:

Step 10. On the next screen, we leave the compositing feature enabled.

Step 11. Here it will ask for automatic checking of new versions, updates, etc. Simply mark the checkbox (already checked by default) and continue.

Step 12. Enable the taskbar and hit the Next button:

Lastly, press the Next button to launch and explore the new desktop environment.

Conclusion

Congratulations, you have successfully installed the Enlightenment desktop on a Fedora 30 workstation. Enjoy its numerous features and customize them as you like. While following this guide, you may have noticed that installing from source code is a bit more involved than installing from the repository. If you are a Linux beginner, we recommend using the first method.

How to Install and Configure Git on Fedora?

Git is one of the most popular distributed version control systems (DVCS) among programmers. It lets you manage the incremental changes you make to your code and easily revert to an earlier version. Multiple developers can work simultaneously on the same project, and team members can see the changes made to a project, the messages associated with those changes, their collaborators, the project timeline, work progress, and more.

Benefits of Using Git

Git is an open-source tool that is free for anyone to use. Almost all operations are performed locally, so there is no need to propagate every change to a central server. A project can be edited locally and later pushed to a server, where every contributor can see and track the changes. Unlike a centralized VCS, Git has no single point of failure.

Because Git has a distributed architecture, everyone can get the latest snapshot of the work as well as the entire repository contents and history. If the server goes down for some reason, a copy from any client can be used as a backup to restore the server.

To store and identify objects within its database, Git uses a cryptographic hash function known as SHA-1. Before storing any data, Git checksums it and uses this checksum to refer to it.
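For example, you can compute the SHA-1 identifier Git would assign to a piece of data, or display the hash of the latest commit (these commands assume you are inside an existing repository):

$ echo "hello" | git hash-object --stdin
$ git rev-parse HEAD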

Git is very easy to install and does not require high-end hardware on the client side. Many online hosting services, such as GitHub, can host your Git project online for remote access. You can keep an entire backup of a repository on your local computer. Changes made by a contributor become part of the repository after a commit operation.

The commit operation takes a snapshot of the current state of the repository or database. After working on our project locally, we can publish local commits to our remote Git repository using the push command.
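A typical local-then-remote workflow therefore looks like this (the remote name origin and branch name main are common defaults, not requirements):

$ git add file.txt
$ git commit -m "Describe the change"
$ git push origin main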

What Will We Cover?

In this guide, we will see how we can install and configure Git on Fedora 33 OS. We will install Git from the official repository on Fedora, as well as from the source code downloaded from the Git official website. Let’s get started with the Git installation process.

Method 1. Installing Git from Fedora Repositories Using dnf/yum

This is a very simple method of installing Git. You just need to run the commands below:

Step 1. Update the available system packages with the following command:

$ sudo dnf -y update

Step 2. Now install git with the below command:

$ sudo dnf -y install git

After the above command finishes, use the following command to check the installed version of Git:

$ git --version

That’s all! As you can see, Git already comes installed on Fedora 33, but if it is not, you can install it with the above command.

In case you want to uninstall Git, simply run the command below:

$ sudo dnf -y remove git

Method 2. Building Git from source code on Fedora

Git can also be installed on Fedora from the source code available on the Git website. To install it from source code, follow the procedure below:

Step 1. Git requires several packages to be installed before we can install it from source code. Run the below command to install these dependencies:

$ sudo dnf install dh-autoreconf curl-devel expat-devel gettext-devel openssl-devel perl-devel zlib-devel

Step 2. Once we have all the required dependencies in place, we can move on to download the source code. Run the following command to download the compressed tarball of Git source code:

$ wget https://www.kernel.org/pub/software/scm/git/git-2.30.1.tar.gz

Alternatively, you can also visit this link and manually download the file to your system. This is shown here:

Step 3. Extract the downloaded tar file with the below command:

$ tar -zxf git-2.30.1.tar.gz

Step 4. Now move to the extracted folder on the command line window:

$ cd git-2.30.1

Step 5. Run the make command:

$ make configure

Step 6. Run the config script:

$ ./configure --prefix=/usr

Step 7. Run the make all command:

$ make all

Step 8. Run the make install command:

$ sudo make install

Now, Git is installed on your system. Check the version:

$ git --version

Configuring Git settings on Fedora

After installing Git, we need to add our username and email address to the Git configuration. This enables us to commit our code properly, as Git records this information with every commit we make.

Note: The Git username is not the same as that for GitHub.

To set these details, run the following commands:

$ git config --global user.name "your-username"
$ git config --global user.email "your@emailID"

Here, replace “your-username” with a username of your choice and “your@emailID” with your email ID. The --global option makes this information apply to every repository on your system. If you want to use different information for a particular project, simply omit the --global option while inside that project.

Let’s add a sample username and email as:

User-name = linuxhint
User-email = mail@me.com

Run the following command to check if these settings worked correctly:

$ git config --list

This is shown below:

Conclusion

Congratulations, you have now successfully installed Git on your Fedora OS. If you have followed this tutorial, you will have noticed that Method 1 is very straightforward: you only need to run a simple command to get Git on your system. Method 2, by contrast, is the long route for installing Git and is recommended only for advanced users and system administrators. The benefit of using this method is that you get the latest available version. For example, in Method 1 the version of Git installed from the official repository is 2.28.0, whereas in Method 2 we have version 2.30.1.

How to configure a static IP address on Fedora?

IP address configuration is one of the routine tasks system administrators perform on a system. An IP address identifies a device on a network. There are basically two types of IP addresses: 1) public and 2) private. IP addresses can also be divided into IPv4 and IPv6.

By default, Fedora uses a DHCP-provided IP address when it is connected to a DHCP server. The methods below can be used to configure static IP addressing as well as other networking options such as VLANs, bonds, bridges, and teams.

What will we cover?

In this guide, we will see two methods for setting a static IP on Fedora 33 workstation. Although this guide is performed on Fedora 33, it should also work on other Fedora versions. Let’s get started with this process.

Before you start

Please note that we have assumed that you have

  1. A basic understanding of IPv4 addressing and other computer networking basics
  2. Knowledge of the Linux command-line interface
  3. Root access on the system or a user with root privileges
  4. Fedora 33 OS installed on your system

Method 1. Using nmcli command-line utility for setting a static IP address on Fedora 33

Nmcli, or NetworkManager Command Line Interface, is a command-line utility for managing network connections. Both users and scripts use the nmcli utility to control NetworkManager. For example, you can add, edit, remove, activate, or deactivate network connections. We can also use it to display the status of a network device.

The syntax of a nmcli command is as follows:

nmcli [OPTIONS] OBJECT { COMMAND | help }

Step 1. To check the overall status of NetworkManager, use the command:

$ nmcli general status

You can also use the below command to see a terse output about the connection state:

$ nmcli -t -f STATE general

As you can see, it is showing a connected state for now. If you turn off the wired connection, it will change to a disconnected state. This is shown in below picture:

Step 2. Now, after connecting to a network, we can see the active connections on our system using:

$ nmcli con show -a

You can also use the command below to see active and inactive interfaces:

$ nmcli dev status

As you can see, right now, only one connection is active on device enp0s3. To see the current network configuration for enp0s3, use the command:

$ ifconfig enp0s3

You can also use the ip command:

$ ip addr | grep enp0s3

Please note that our current IP is 10.0.2.15; we need to set it to 10.0.2.27.

Step 3. To change the IP of enp0s3 to a static IP, use the following command format:

$ sudo nmcli connection modify network_uuid IPv4.address new_static_IP/24

Where network_uuid is as obtained in Step 2. ‘new_static_IP’ is the new IP we want to assign statically. If our new IP address is 10.0.2.27, then the command will be:

$ sudo nmcli connection modify f02789f7-9d84-3870-ac06-8e4edbd1ecd9 IPv4.address 10.0.2.27/24

If you are not comfortable using the network UUID, you can also use the connection name (Wired connection 1), as shown below:

$ sudo nmcli connection modify 'Wired connection 1' IPv4.address 10.0.2.27/24

NOTE: To avoid IP conflict, do not use an already assigned IP.

Step 4. Now configure the default gateway for our IP with the command:

$ sudo nmcli connection modify 'Wired connection 1' IPv4.gateway 10.0.2.11

Step 5. Now set the network DNS address using:

$ sudo nmcli connection modify 'Wired connection 1' IPv4.dns 8.8.8.8

Step 6. Now we need to change the IP addressing scheme from DHCP to static:

$ sudo nmcli connection modify 'Wired connection 1' IPv4.method manual

Step 7. Now turn off and then turn on the connection to apply changes:

$ sudo nmcli connection down 'Wired connection 1'
$ sudo nmcli connection up 'Wired connection 1'

All the above steps are shown in the below picture:

Now check the gateway and IP again with the commands:

$ route -n
$ ip addr | grep enp0s3

You can see the Gateway and IP addresses are both changed to the values we have set in the above steps.

Method 2. Using a graphical method for setting a static IP address on Fedora 33

This is a very straightforward way to set a static IP address on Fedora 33 OS; follow the steps below:

Step 1. On the Gnome desktop, go to the activities tab and search for Settings and launch it:

Step 2. In the left panel, you will see the network tab. Inside the network tab, click on the Settings icon as shown below:

Step 3. A new window will open, displaying the already configured IP addresses, Gateway, DNS as shown below:

Step 4. In the above window, select the IPv4 option from the top bar:

Step 5. Inside the IPv4 method segment, select the radio button corresponding to the manual option:

Step 6. When you select the manual method, it will open some text boxes for filling the IP addresses, DNS, Routes, and other information related to network configuration, as shown in the above image. We are adding the following details:

IP addresses: 10.0.1.27
Netmask: 255.255.255.0
Gateway: 10.0.1.0
DNS: 8.8.8.8

We are leaving the Route segment row to be set automatically. See the reference picture below:

Step 7. Now we only need to stop and then restart the network connection using the connection switch in the main Network Tab as shown below:

  1. Switch Off
  2. Switch On

Step 8. Now we will verify if the new IP address, DNS, and Gateway are assigned properly. Go to the main Network Tab and click the settings icon as depicted in the picture below:

Step 9. Notice that the IP address, gateway, and DNS have all changed to the new values we selected in the above steps:

Conclusion

That’s all for now; we have successfully set a static IP address on a Fedora 33 workstation. We have seen both the command-line and graphical methods. The CLI method is the only way to set a static IP address on non-GUI or headless servers, while the graphical method is more convenient for desktop users and novice Linux users.

How to Install Spotify in Fedora Linux

Spotify is a popular audio and video streaming service used by millions of people. Spotify is available for download on smartphones, tablets, and desktops for Windows, Mac, and Linux. Though Spotify works on Linux, the application is not as actively supported there as it is on Windows and Mac. You can also enjoy Spotify on wearable gadgets; for example, if you have a Samsung smartwatch, you can listen to and control Spotify using only the watch. You need only install the app on your smartphone from the Play Store to start listening to tracks on Spotify.

The free version of the application provides access to limited audio streaming services with advertisements. The premium service offers many features, including the ability to download media, ad-free browsing, better sound quality, and more. There are also other plans offered to specific individuals and groups. Spotify also supports various devices, such as Wireless Speakers, Wearables, Smart TVs, and Streamers.

This guide shows you how to install Spotify in Fedora Linux using three different methods of installation.

Method 1: Install Spotify Using Snap in Fedora

Snap is the easiest way to install Spotify and many other popular Linux applications. Snap applications are packaged with all the required dependencies. In Linux, you can simply search for and install the application from the Snap Store.

First, install Snap in Fedora. To do so, open the terminal by hitting the shortcut Ctrl+Alt+T and issue the following command:

$ sudo dnf install snapd

To verify whether the Snap’s path has been properly updated, either log out and log in again or restart the system.

Next, create a symbolic link to enable classic support, as shown below:

$ sudo ln -s /var/lib/snapd/snap /snap

Now that Snap has been installed on your system, install Spotify with the following command:

$ snap install spotify

This process is shown below:

If you obtain the following error after entering the command above, simply log in and log out:

error: too early for operation, device not yet seeded or device model not acknowledged.


Then, reboot your system and run the following command again:

$ snap install spotify

And that is all! Now you can see how easy it is to use Snap to install Spotify.


You can launch Spotify from the system menu or directly from the terminal.


If you installed Spotify from the Snap store and want to remove it, issue the following command:

# snap remove spotify --purge

Method 2: Install Spotify Using the Fedora RPM Fusion Repository

RPM Fusion provides third-party software as precompiled RPMs for Fedora, Red Hat Enterprise Linux, and their clones. This software is not officially supported by the Fedora Project due to legal issues.

As in the previous method, first, you will install and enable the RPM Fusion repository before installing Spotify. Enter the following command to enable the non-free RPM Fusion repositories:

# dnf install https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

‘lpf-spotify-client’ requires the non-free version of RPM Fusion repositories.

To install the free RPM Fusion repository as well, issue the following command:

# dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm

If you prefer to use the graphical method to install the RPM Fusion repo, do so by downloading the file from the official RPM Fusion website, as shown below:

Now that the RPM Fusion repository has been installed, install the Spotify application using the following command:

$ sudo dnf install lpf-spotify-client


Once the above command has been executed successfully, launch the “lpf-spotify-client” in the Applications list from the following path:

Activities -> "lpf spotify-client"

The first time, it will display a window saying: “You must be a member of the pkg-build group to run lpf (log out and in again to mute this dialog). OK to add group pkg-build to your current user muhammadsadaan?”

Enter Yes and log out and log in again. Then, launch the “lpf-spotify-client” from the Applications list.


You will be prompted to accept the EULA terms for using the Spotify client in Linux. Simply read and accept the terms and click OK.


A new window will open that will show the dependencies and sources that are required to install the Spotify client.


After the above process has been completed, you will see the following window appear:

Now, you can launch Spotify from the Applications list, as shown below:


If the gnome shortcut does not work for you because of the following error:

GPU process isn't usable

Then, launch Spotify from the command-line using the following command:

# spotify --no-zygote

Method 3: Install Spotify Using Flatpak in Fedora

Flatpak, formerly known as xdg-app, is an app bundling framework focused on improving the Linux desktop experience by simplifying the installation process for many Linux applications. Applications created using Flatpak for one Linux distro can be distributed to others.

To install Flatpak in Fedora, issue the following command:

# dnf install flatpak -y

Flatpak comes preinstalled with GNOME on Fedora 33. Once Flatpak has been installed, use the following command to install Spotify:

# flatpak install -y --from https://flathub.org/repo/appstream/com.spotify.Client.flatpakref

To run Spotify, issue the following command.

# flatpak run com.spotify.Client

Again, if the gnome shortcut or the above command does not work for you due to the following error:

GPU process isn't usable

Then, launch Spotify from the command-line using the command below:

# flatpak run com.spotify.Client --no-zygote

This process is shown below:

Conclusion

That concludes today’s guide on installing the Spotify application. In this tutorial, you learned three methods for installing Spotify on your Fedora Linux system. If you are an absolute beginner in the Linux operating system, then you should follow Method 1 to install Spotify on your system. This is a very straightforward and easy-to-use method for most beginners. However, any of the above methods will work perfectly well for installing the Spotify application.

Install Adobe Reader on Fedora Linux

Adobe Acrobat Reader DC, or simply Adobe Reader, is popular software for viewing, printing, and commenting on documents. It can also sign and annotate Portable Document Format (PDF) files, which it is primarily built to handle. The premium version, Adobe Acrobat Pro DC, has more features than Adobe Acrobat Reader DC; for example, you can create PDFs, convert them to other formats, and edit and protect them.

Adobe now also provides online document cloud services for Adobe Acrobat Reader for managing your work from anywhere and from any device.

Adobe Inc. develops the Adobe Acrobat family. Adobe Reader is available for direct download on Windows and macOS, and it can be installed on Android and iOS as well. It is available in multiple languages. However, Adobe no longer provides a direct download option for Linux systems as it used to. In this guide, we will see some workarounds for installing Adobe Reader on the Fedora operating system.

What we will cover

This guide will show you two different ways to install Adobe Acrobat Reader on Fedora 33 OS. So let’s get started with this HowTo.

Method 1. Installing Adobe Acrobat Reader using Snap repository for Fedora

Snap is the easiest way to install Adobe Acrobat Reader, as it is for many other popular Linux applications. Snap applications are packaged with all required dependencies; you only need to find and install them from the Snap Store. First, we need to install snap on Fedora. Open a terminal (Ctrl+Alt+T) and type the command below:

$ sudo dnf install snapd

Or

# dnf install snapd

To confirm if snap’s path is properly updated, you can either log out and log in again or restart the system. If you did not log out and log in again, you might get the error:

error: too early for operation, device not yet seeded or device model not acknowledged

Now create a symbolic link as shown below to enable classic support:

# ln -s /var/lib/snapd/snap /snap

Now that snap is installed on our system, we can install Adobe Acrobat Reader with the command below:

# snap install acrordrdc

This process may take some time to download various files like snapd, core18, acrordrdc, etc. Open System monitor on your Fedora OS and go to the Resources tab. Here you can see the downloaded data at the bottom left side of the System monitor as shown here:

Once the above process is completed, it will display the following message on the terminal window:

Now run the below command to start Adobe Acrobat Reader:

# acrordrdc

Hold on for some time as it will initialize and download various files for wine, like winetricks and others:

During the installation, it will ask for the language of installation for Adobe Acrobat. Simply select English or any other language you want and click install to continue. See the below screenshot for reference:

Once you click install, it will start downloading the AcroRdrDCxxx.exe file as shown below:

It will later ask whether to open Adobe Acrobat Reader in Protected Mode. This feature prevents attacks by sandboxing application processes. You can select “Always Open with Protected Mode Disabled,” which will help Adobe Acrobat Reader run smoothly with your system configuration:

Once you enter OK, it will launch the Adobe Acrobat Reader main window. A new window will also prompt up, asking you to accept the Adobe Acrobat Reader Distribution License Agreement. This is shown in the screenshot below:

Some text might not be visible, as in the above picture of the Adobe license window. This may be because of missing fonts for Wine. Accept the license agreement to continue.

Now let us check if we can open a PDF file with this installed Adobe Acrobat Reader. Go to the ‘File’ menu in the top bar and hit ‘Open’ in the submenu. Now select the ‘Welcome.pdf’ file from the list in the new window:

You can see the file is successfully opened as shown here:

To uninstall Adobe Acrobat Reader installed from snap repository, use the following command:

# snap remove acrordrdc

Method 2. Installing Adobe Acrobat Reader on Fedora Using Tarball

Step 1. Download the tar file of Adobe Reader using the following command:

# wget ftp://ftp.adobe.com/pub/adobe/reader/unix/9.x/9.5.5/enu/AdbeRdr9.5.5-1_i486linux_enu.tar.bz2

Step 2. Now extract this file using the tar command as shown below:

# tar -xf AdbeRdr9.5.5-1_i486linux_enu.tar.bz2

Step 3. Now enter the extracted folder with the command:

# cd AdobeReader

Step 4. Once you are inside the extracted folder, run the below install script to install Adobe Reader:

# ./INSTALL

Or

$ sudo ./INSTALL

It will print some text on the terminal and ask you to enter the installation directory. You can choose the default (/opt) or enter a new one. We are pressing the enter key to select the default directory.

This installation requires 136 MB of free disk space.
Enter installation directory for Adobe Reader 9.5.5 [/opt]

After the above script finishes, Adobe Reader is installed on your system. At this point, we need some further configuration to launch Adobe Reader.

Step 5. Now run the command below to try to launch Adobe Reader:

$ /opt/Adobe/Reader9/bin/acroread

Note: If you were logged in as root (#) or using the sudo keyword earlier, you will need to switch back to a normal user account and remove ‘sudo’ from the above command. If you continue to use the root account or ‘sudo’, you will get the error below:

Adobe Reader does not need to be run as a privileged user. Please remove ‘sudo’ from the beginning of the command.

When you run the above command, it might show errors like the one below:

To remove these errors, we need to install some packages with the following command:

$ sudo dnf install libgdk_pixbuf_xlib-2.0.so.0 libxml2.so.2 https://download-ib01.fedoraproject.org/pub/fedora/linux/updates/33/Everything/x86_64/Packages/g/gtk2-2.24.33-1.fc33.i686.rpm -y

When all the dependencies are installed, again run the below command:

$ /opt/Adobe/Reader9/bin/acroread

It will ask to select the language of installation and to accept the Adobe License agreement:

Now accept this license agreement to launch the Adobe Acrobat Reader as shown here:

Now we can open any file from the ‘File’ menu at the top bar as shown here:

Conclusion

This finishes today’s guide on installing Adobe Acrobat Reader on Fedora 33. In this tutorial, we learned two ways of installing Adobe Reader on a Fedora Linux system. If you followed the guide closely, you will have noticed that although Method 1 is easier than Method 2, Adobe Reader is more stable if you use Method 2. You might need to install more packages to use Adobe Reader smoothly. Also, in Method 2 we can easily browse local files, whereas in Method 1 this is not as easy because we are confined inside the Wine environment.

We recommend using native Linux applications for managing PDF files. Adobe stopped supporting Linux a long time ago, so you may waste a lot of time finding dependencies and resolving conflicts between them.

How to install and configure Apache Tomcat on Fedora Linux

Apache Tomcat is one of the most widely used web application servers in the world. It is an open-source project of the Apache Software Foundation, written in Java, that implements the Java Servlet and JavaServer Pages (JSP) specifications.

Earlier, Tomcat required a high level of expertise to configure and administer, and only advanced users and developers could work it out. With Tomcat’s GUI installer, administering the server as a system service has become just a matter of a few commands.

What will we cover

This tutorial will show you how to install Apache Tomcat and use it to deploy a basic JSP program. Tomcat requires the JRE (Java Runtime Environment) to run Java web applications. If you are developing a Java application, you will need a full JDK installed; for this guide, however, we will use only the JRE.

Prerequisites

You need to be familiar with Java and basic Linux commands to follow this tutorial. We assume that you have already installed the JRE (Java Runtime Environment) on your system. You also need root privileges to install Apache Tomcat.

Downloading Tomcat

1. To download Apache Tomcat, visit the Apache Tomcat home page, where you will see the different available versions. Alternatively, you can use the wget command to get the file. For this guide, we are using Tomcat 9.

# wget https://mirrors.estointernet.in/apache/tomcat/tomcat-9/v9.0.43/bin/apache-tomcat-9.0.43.tar.gz

2. If you prefer, you can download Tomcat from the homepage. This is shown below:

Extracting The Binary Archive

1. Once the archive binary file is downloaded, you need to copy it to the directory where you want to install the Tomcat server and extract it there. For example, we will extract the Tomcat tar file into /opt/tomcat. For this, we first need to create a directory ‘tomcat’ inside /opt. Use the following commands to create the directory and extract the archive:

# mkdir /opt/tomcat
# tar xzf apache-tomcat-9.0.43.tar.gz -C /opt/tomcat

Creating a user and group for Tomcat

We will create a non-root user and group for running the Apache Tomcat server. Use the command below for creating the user and group.

# useradd -r tomcat

The above command will also add a ‘tomcat’ group.

Now we will change the ownership of the tomcat directory to the Tomcat user with the command:

# chown -R tomcat:tomcat /opt/tomcat

Setting Environment Variables

Tomcat requires certain environment variables to be set for running the startup scripts. Let’s see those variables:

a. CATALINA_HOME: This environment variable points to the root directory of Tomcat’s “binary” distribution. In our case, this root directory is /opt/tomcat/apache-tomcat-9.0.43.

b. JRE_HOME or JAVA_HOME: These environment variables specify the location of the Java Runtime Environment and the JDK, respectively. If you specify both JRE_HOME and JAVA_HOME, JRE_HOME is used by default.

To set these variables, open the following file:

# vi /etc/profile

Now insert the following lines at the end of this file:

export JRE_HOME=/usr/java/jre1.8.0_281-amd64
export CATALINA_HOME=/opt/tomcat/apache-tomcat-9.0.43

Now save the file and run the below command to apply these changes:

# . /etc/profile

To check whether these variables are correctly set, verify that the output of the commands below matches the values you set for JRE_HOME and CATALINA_HOME:

# echo $JRE_HOME
# echo $CATALINA_HOME

See the below pictures for reference:

Creating Tomcat service

Now we will create a simple systemd unit file to define our Tomcat service. Create the service with the following instructions:

1. Create a file tomcat.service:

# vim /etc/systemd/system/tomcat.service

Now put the following content inside it:

[Unit]
Description=Apache Tomcat Server
After=syslog.target network.target

[Service]
Type=forking
User=tomcat
Group=tomcat

Environment=CATALINA_PID=/opt/tomcat/apache-tomcat-9.0.43/temp/tomcat.pid
Environment=CATALINA_HOME=/opt/tomcat/apache-tomcat-9.0.43
Environment=CATALINA_BASE=/opt/tomcat/apache-tomcat-9.0.43

ExecStart=/opt/tomcat/apache-tomcat-9.0.43/bin/catalina.sh start
ExecStop=/opt/tomcat/apache-tomcat-9.0.43/bin/catalina.sh stop

RestartSec=10
Restart=always
[Install]
WantedBy=multi-user.target

Note: Please replace the bolded text with the path of your Tomcat installation.

Now save the file and reload the systemd configuration with the following command to apply the changes:

# systemctl daemon-reload

We are now ready to use the tomcat service. Start the service and enable it so that it persists across reboots:

# systemctl start tomcat.service
# systemctl enable tomcat.service

Check the status of the service; it should show active (running):

# systemctl status tomcat.service

All the above steps are shown below:

Accessing Tomcat in Browser

Now we are ready to test whether our Tomcat server is correctly installed. To check, open your web browser and browse to one of the following addresses:

http://localhost:8080
or
http://system_IP_addr:8080 (To see your system IP, use the ip addr command.)
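You can also check from the terminal; an HTTP 200 response indicates the server is up:

# curl -I http://localhost:8080/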

You should see the default homepage of Apache Tomcat. The following screenshot shows the Tomcat homepage:

Deploying a simple JSP application

Now we will deploy a basic JSP application with a Tomcat server.

1. Create a basic JSP application called ‘test.jsp’ inside the directory “/opt/tomcat/apache-tomcat-9.0.43/webapps/ROOT/”:

# nano /opt/tomcat/apache-tomcat-9.0.43/webapps/ROOT/test.jsp

Note: Again, replace the bolded text with the path of your Tomcat installation.

2. Put the following content inside it:

<html>
<head><title> JSP Page</title></head>
<body>
This is a JSP Page from LinuxHint!<br/>
<%
out.println("Your System IP address is: " + request.getRemoteAddr());
%>
</body>
</html>

3. Now again, open the web browser and browse the following address:

http://localhost:8080/test.jsp

This time you should see the following web page:

Conclusion

This tutorial showed how to install Apache Tomcat from an archive binary file on Fedora Linux. We also learned how to deploy a simple JSP application with Tomcat.

How to Install and configure Apache httpd on Fedora Linux

The Apache web server is one of the most widely used web servers in the world. It is very easy to configure, is open-source software, and is maintained by the Apache Software Foundation. Apache supports numerous features, many of which are implemented as compiled modules that expand the core functionality.

httpd is the name of the Apache web server package on Red Hat-based distros, while it is called apache2 on Debian-based distros. The name depends on the OS you use; for example, in RHEL 6.2 it is called httpd, and in Ubuntu it is called apache2.

In Fedora Linux, the httpd package provides the Apache webserver application.

What will we cover

In this tutorial, we will see how to install the Apache web server from source as well as from the Fedora repository.

It is recommended that you first read this post in full and then apply it to your system. This will help make sure that you configure the Apache web server correctly.

Prerequisites

  1. Fedora Operating System installed
  2. User account with root access
  3. Internet connectivity to download various files.

Method 1. Installing from source code

Step 1. Open a web browser and go to the Apache download page. At the time of writing, the latest stable version of the Apache HTTP Server (httpd) is 2.4.46. Download the file as shown below:

Another way to get the file is using the wget command. Open the terminal and run the following command:

# wget https://mirrors.estointernet.in/apache//httpd/httpd-2.4.46.tar.gz

This is shown below:

The benefit of using the source code is that you always get the latest available version of the software.

Step 2. Once we have the source file, we can use the ‘gzip’ and ‘tar’ commands to extract it. The exact name of the file depends on the version you downloaded; in our case, it is httpd-2.4.46.tar.gz.

# gzip -d httpd-2.4.46.tar.gz

# tar xvf httpd-2.4.46.tar

After running the above commands, you can see the extracted folder, as shown here:

Step 3. Now go to the extracted directory with the command:

# cd httpd-2.4.46

Step 4. We now need to run the configure script to configure Apache. It is located in the root directory of the Apache source tree, i.e., the current directory. Before running this script, decide where you want to install Apache.

To install the Apache server in the default location, simply run the script:

# ./configure

If you want to install apache in a directory other than the default, use the following syntax:

# ./configure --prefix=/path/of/installation

After ‘--prefix=’, enter the installation path. In our case, we will install Apache inside the /opt/httpd directory. For this, follow the instructions below:

1. Create a directory inside /opt as shown below:

# mkdir /opt/httpd

2. Run the script as shown below:

# ./configure --prefix=/opt/httpd

The configure script will take some time to run and verify the features on your system. It will also prepare Makefiles to compile the apache web server.

Notes on several errors you may encounter when running the ./configure script:

1. You may get the following error “configure: error: APR not found”:

To fix this error, you need to download apr-*.tar.gz from here.

Now extract these archives inside the ‘srclib’ directory of the Apache httpd source folder. To extract the files, use the commands:

# tar xvf apr-util-1.6.1.tar.gz

# tar xvf apr-1.7.0.tar.gz

Now rename these directories by removing the version number, as shown here:

# mv apr-util-1.6.1 apr-util

# mv apr-1.7.0 apr

2. If the error is “configure: error: pcre-config for libpcre not found”, then you just need to install the PCRE development package as shown below:

# dnf install pcre-devel -y

Now run the configure script again as before. At the end, it will print a summary, as shown here:

Step 5. To build the several components that comprise the Apache web server, use the following command:

# make

This command may take significant time to run, as it compiles the whole code base. The build time largely depends on your system hardware and on the number of modules enabled.

If you get an error like “fatal error: expat.h: No such file or directory”, you will need to download expat from here. Now extract the file inside some directory. We are using /opt/httpd for extraction.

# tar xvjf expat-2.2.10.tar.bz2 -C /opt/httpd

Now go to the extracted directory and run the following command one by one to configure expat:

# cd /opt/httpd/expat-2.2.10

# ./configure

# make

# make install

Now again run the configure script by specifying the path of expat installation:

# ./configure --prefix=/opt/httpd --with-expat=/opt/httpd/expat-2.2.10

Step 6. Once the make command finishes, we are ready to install the packages. Run the command:

# make install

Step 7. To customize your Apache server, edit the httpd.conf file located at:

# nano PREFIX/conf/httpd.conf

Where PREFIX is the Apache installation path. In our case, it is /opt/httpd, so we use:

# nano /opt/httpd/conf/httpd.conf

Inside this file, change the ServerName directive to the IP address of your system.
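For example, the directive might look like this (the IP address below is a placeholder; use your own system's address):

ServerName 192.168.1.100:80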

Step 8. Apache is now ready to use; we only need to start the service from the directory where it is installed. For example, if you have installed Apache inside /opt/httpd, run the command:

# /opt/httpd/bin/apachectl -k start
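To quickly verify that the server is up, you can request the default page from the same machine (a simple check, assuming curl is installed):

# curl http://localhost/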

Method 2. Installing from Fedora Repository

Installing Apache httpd from the Fedora repository is quite easy; just follow the below steps:

Step 1. Open a terminal (e.g., with Ctrl+Alt+F2) as the root user or at least as a user with superuser privileges.

Step 2. Now use the following command to install apache:

# dnf install httpd

Step 3. Start and check the status of the apache service with the command:

# systemctl start httpd.service

# systemctl status httpd.service

It should show an active (running) status.
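Optionally, you can also enable the service to start automatically at boot (an extra step not covered above):

# systemctl enable httpd.service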

Step 4. Open a web browser and enter your system's IP address. It should show the default Apache test page:

Conclusion

Congratulations! You have successfully configured the Apache web server. In this guide, we have learned how to install Apache both from source and from the Fedora repository.

]]>
How to Install Oracle JRE on Fedora https://linuxhint.com/install-oracle-jre-fedora/ Sat, 06 Feb 2021 18:44:49 +0000 https://linuxhint.com/?p=89080

Java is one of the most used programming languages. Due to its object-oriented nature, it is preferred by many developers. Java can be used to develop mobile, desktop, and web-based applications. Java programs can run on many platforms with the help of the JVM. The JRE, or Java Runtime Environment, bundles the JVM together with the class libraries and resources that Java code needs for execution. The JDK is only needed for developing Java applications.

What’s new in Java SE Release 8 for Linux

  • Support for a configuration file, in addition to command-line options, when installing from the CLI. The configuration-file-based installation offers more options than the plain CLI-based installation.
  • Commands like java, javap, javac and javadoc can be used by users on the command line.
  • Java SE Release 8 users can now also verify which particular RPM package offers Java files.

What we will Cover

This post will explore Oracle JRE, and we will see how to install Oracle JRE on Fedora Linux using i) an archive binary file ii) an RPM binary file.

We also see how to uninstall JRE in both cases.

Oracle has different versions of JRE for Linux platforms based on system architecture. It is very important to download and install the version specific to your system. The following table shows different versions of Oracle JRE and the system architecture they are built for:

JRE file: Target system architecture
jre-8u281-linux-x64.tar.gz: 64-bit Linux (archive binary)
jre-8u281-linux-i586.tar.gz: 32-bit Linux (archive binary)
jre-8u281-linux-x64.rpm: 64-bit RPM-based Linux
jre-8u281-linux-i586.rpm: 32-bit RPM-based Linux

Note: The above file names may change over time, as they depend on the JRE update version number.

For this guide, we will be using

  1. “jre-8u281-linux-x64.tar.gz” which is actually an archive binary file.
  2. “jre-8u281-linux-x64.rpm” which is an RPM binary file.

So let’s get started with the installation of Oracle JRE.

Method 1. (a) Installation using archive binary file

Step 1. Open a web browser and go to Oracle JRE download page and download the archive binary file. This is shown below:

Review and accept the Oracle license agreement. It will now redirect you to the login page before downloading the file. You will need to create a new account with Oracle; if you already have an account, you can log in directly.

Step 2. Once the file is downloaded, we can continue. Besides the root user, any other user can install the archive binary in any location they have write access to, but installing in a system location requires root. We will go to the download directory, create a new directory named ‘lh-dir’, and move the archive binary into this folder.

# mkdir lh-dir

# mv  jre-8u281-linux-x64.tar.gz lh-dir/

This is shown in the screenshot below:

You can also use any other location where you would like to install the JRE.

Step 3. Now we will unpack the downloaded archive binary in this new directory.

# tar zxvf jre-8u281-linux-x64.tar.gz

Sample Output:


Step 4. Now if you want, you can remove the archive binary (.tar.gz) file as below:

# rm  jre-8u281-linux-x64.tar.gz

This will help us to save disk space.


Step 5. To start using the JRE from anywhere on the system, we will register our Java installation with update-alternatives, which places a symlink in the /usr/bin directory. The /usr/bin directory contains the executable commands on the system.

# update-alternatives --install "/usr/bin/java" "java" "/root/Downloads/lh-dir/jre1.8.0_281/bin/java" 1

Note: Please do not forget to change the name of the directory ‘lh-dir’ to the one you have created.


Step 6. Once we have specified the java path, we can use the java command from anywhere on the system. Let’s check the java version from the documents folder.

# cd /root/Documents

# java -version

The following screenshot demonstrates this:


To check which java executable is being used, run the following command:

# which java

It will produce output like

/usr/bin/java
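If you have registered more than one Java installation with update-alternatives, you can switch between them interactively (an optional step, only relevant when multiple alternatives exist):

# update-alternatives --config java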

(b) Uninstalling Oracle JRE

In case you would like to remove the Oracle JRE from your system, you will need to follow the steps below:

Step 1. Remove the alternatives link by running the following command:

# update-alternatives --remove "java" "/root/Downloads/lh-dir/jre1.8.0_281/bin/java"

Please do not forget to change the java file’s location in the above command with your system’s one.

Step 2. Verify if the Oracle JRE has been removed with the below command:

# java -version

It should say: bash: /usr/bin/java: No such file or directory

Method 2. (a) Installation using the RPM binary file

Step 1. Now again go to the Oracle JRE download page and this time download the 64-bit rpm file as shown below:

Note: Before installing the RPM file, make sure you have removed any old JDK/JRE installation packages.

Step 2. After you have downloaded the file, open a terminal and get root access. Go to the folder containing the rpm file. Now run the following command:

# rpm -ivh jre-8u281-linux-x64.rpm

The above command will install the JRE rpm file, as shown below:


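Alternatively, the local RPM can be installed with dnf, which resolves any required dependencies automatically (an equivalent approach, not part of the original steps):

# dnf install ./jre-8u281-linux-x64.rpm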
Step 3. Now again check the version of java from any directory, it will show the following output:

(b) Uninstalling Oracle JRE

Step 1. First, check the installed package of JRE from the following command:

# rpm -qa | grep jre

It will show the corresponding jre package:


Step 2. Now uninstall the JRE package with the following command:

# rpm -e jre1.8-1.8.0_281-fcs.x86_64


Step 3. Now again check the version of java, this time it should show:

bash: /usr/bin/java: No such file or directory

Conclusion

In this guide, we have learned how to install Oracle JRE on Fedora Linux and how to uninstall it from the system. This guide was successfully tested on Fedora 33. Since Method 1 installs the JRE from a .tar.gz file, its steps remain the same for all 64-bit Linux distributions. The same steps also apply to 32-bit Linux; the only change is to use the 32-bit version of the JRE.

Method 2 is comparatively easy for installing and removing Oracle JRE. The same method should also work on 32-bit Linux by installing the 32-bit version of JRE.

]]>
How to Upgrade Fedora Linux? https://linuxhint.com/upgrade-to-fedora-linux/ Sat, 06 Feb 2021 18:41:58 +0000 https://linuxhint.com/?p=89096

Fedora is a Linux distribution that is sponsored by Red Hat. The best thing is that it is free and open source. It is available in editions for desktop, server, and IoT systems, and comes with a choice of desktop environments such as KDE Plasma, Xfce, LXQt, etc.

What will we cover?

In this guide, we will cover how to upgrade Fedora 32 to Fedora 33. We will see three different ways of upgrading Fedora:

  1. Upgrade using Software Center
  2. DNF system upgrade plugin
  3. Upgrade using package manager with dnf only

Things to Do Before Starting

We need to do certain things before starting the process for a smooth upgrade experience.

The first thing is that you should always back up your data before attempting an upgrade. This is highly recommended for any production system. If you are experimenting with a virtual machine, then you don’t have to worry. Second, you should have a root account or at least a user account with root access privileges. This is necessary because you cannot run the upgrade commands without superuser rights.

Method 1. Upgrade using Software Center (recommended for the Fedora Workstation release)

This is the most recommended way to upgrade Fedora Workstation, and it is also the easiest way for beginners. Since the Fedora 23 Workstation edition, a notification appears whenever a new stable release is introduced. Click the notification or go to Fedora’s graphical software center, and you will be presented with a simple update window, as shown below:

When you hit the download button, all the files required for upgrade will be automatically downloaded. When the download is completed, it will ask for a reboot to install the upgraded files. After the reboot, you will be able to see your new release.

Method 2. Using the DNF system upgrade plugin

This is the officially recommended upgrade method for all Fedora installations, except for the Fedora Workstation. It uses dnf-plugin-system-upgrade when performing a system upgrade. This is a command-line method, as it requires running a few commands. Let’s dive in to see how it works.

Step 1. First, update your Fedora system with the command:

# dnf upgrade --refresh

This will install all the necessary updates to the system before the upgrade. The actual download size may vary from system to system.

This may take a considerable time to download and install all the updates depending on your internet connection speed and system hardware.

Step 2. Once the installation of updates is finished, do a system reboot.

Step 3. After rebooting the system, open a terminal and install the plugin: dnf-plugin-system-upgrade. To do this use the command below:

# dnf install dnf-plugin-system-upgrade

Step 4. Now, we will use the dnf plugin to download the release update packages. Run the below-given command:

# dnf system-upgrade download --refresh --releasever=33

When you run the above command, it will ask to run the “dnf upgrade --refresh” command to ensure that the system is up to date. Press ‘y’ and hit Enter, so it can download any new updates.

The releasever argument is used to specify the version of Fedora we want to install. Here we have specified version number 33, which is the latest available version right now. To upgrade to a branched release, we would use 34, or we can use rawhide to upgrade to a Rawhide version.

Once the update process is completed, you can download the upgrades as shown below:

As you can see, this version update is about 1.3 GB in size, so it may take a long time to download and install all these updates. Wait for the process to complete.

During the upgrade process, it will import a gpg key and ask you to verify it, just press ‘y’ here:

The installation process is almost complete; what remains is to run the command:

# dnf system-upgrade reboot

Note: Please do not run any other command besides “dnf system-upgrade reboot”, otherwise you may need to restart the whole process.

The system will now restart to apply the downloaded system upgrades, as shown below:

After the upgrade process is completed, you should see a new login screen for Fedora 33 OS, as shown here:

We can check the Fedora version with the command:

# cat /etc/os-release
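Typical output contains lines like the following (exact values depend on the edition or spin you are running):

NAME=Fedora

VERSION_ID=33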

Since we were using the Fedora 32 Xfce spin, we were upgraded to Fedora 33 Xfce. The same applies if you upgrade from the GNOME edition: you will land on the GNOME edition of Fedora 33.

Method 3. Upgrade using package manager with dnf only (without using the DNF system upgrade plugin)

The last method uses DNF directly, which is not recommended by Fedora. While upgrading this way, you may encounter dependency issues. For any such issue, you can refer to the reference pages and other posts related to the installation guide. This method is more involved and error-prone and should only be used by experienced system administrators.

Step 1. Open a terminal and login as a root user and run the command below:

# systemctl isolate multi-user.target

Step 2. At this point, we have to update the packages of our current Fedora OS with the following command:

# dnf upgrade

Step 3. When upgrading across three or more releases, or from a version of Fedora older than Fedora 20, you may need to import and install the package signing key. It is not required when upgrading across two releases or fewer from Fedora 20 or later.

So, if it is required to import the key, do run the following command:

# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-23-x86_64

Do not forget to replace “23” with your target release like 32 or 33 for the latest Fedora. Also, replace “x86_64” with your system architecture.

Step 4. Clean all the cache of dnf by running:

# dnf clean all

Step 5. Start the upgrade process with the command:

# dnf --releasever=<target_release_number> --setopt=deltarpm=false distro-sync
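For example, to upgrade to Fedora 33 (the target release used earlier in this guide), the command would be:

# dnf --releasever=33 --setopt=deltarpm=false distro-sync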

Step 6. Install new packages for the new version with:

# dnf groupupdate 'Minimal Install'

Other groups like GNOME Desktop, Administration Tools can also be updated as shown here:

# dnf groupupdate "GNOME Desktop"

# dnf groupupdate "Administration Tools"

Step 7. Install the bootloader for your boot-device with the command:

# /usr/sbin/grub2-install BOOTDEVICE

The boot device is usually /dev/sda or /dev/sdb, depending on your hard disk. If you are using a virtual machine, it might be something like /dev/vda.

Step 8. Now, clean up the system by deleting unnecessary cache files and other leftover files. These files often reside in the following directories (a cleanup sketch follows the list):

  1. /var/cache/dnf
  2. /var/lib/mock
  3. /var/cache/mock
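As a rough cleanup sketch (double-check the paths on your own system before deleting anything; the mock directories only exist if mock is installed):

# dnf clean all

# rm -rf /var/lib/mock /var/cache/mock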

Conclusion

In this guide, we have seen how to upgrade Fedora Linux in three different ways and learned the main differences between these upgrade methods. This guide was successfully tested by upgrading Fedora 32 to Fedora 33. If you liked this how-to guide, please share it with others.

]]>
How to Install MySQL on Fedora https://linuxhint.com/install-mysql-fedora/ Sat, 06 Feb 2021 18:38:43 +0000 https://linuxhint.com/?p=89109

MySQL is a database system that provides database services for storing and managing data. It is one of the popular open-source databases.

MySQL comes with the following commercial products:

  1. MySQL Standard Edition
  2. MySQL Enterprise Edition
  3. MySQL Cluster Carrier Grade Edition

All these editions come with a price tag and are mostly aimed at commercial use. For this guide, we will use the MySQL Community Edition, which is freely available under the GPL license.

What will we cover here

In this guide, we will go through the process of installing MySQL Community Edition on Fedora Linux. We will install MySQL from the official MySQL Yum repository using the YUM/DNF utility. Let’s get started with the installation process.

Step 1. First, we need to add the official Yum repository that MySQL provides for Fedora Linux. We will download the repository package with the wget tool using the command:

# wget https://dev.mysql.com/get/mysql80-community-release-fc33-1.noarch.rpm

Please remember that the download link may change over time; if the above link does not work, copy the link manually from the official website.

Another way to get the Yum repository is to download this file directly from the MySQL website, as shown here:

Step 2. Once the file download is complete, we can install it with the following command:

# yum localinstall mysql80-community-release-fc33-1.noarch.rpm

Note: We can also use the dnf command instead of yum.

When you run the above command, it will add the MySQL Yum repository to your system’s repository list. Also, enter ‘y’ when it asks to verify the package’s integrity with the downloaded GnuPG key.

Step 3. Now we will verify whether the MySQL repository has been added to our system’s repository list:

# yum repolist

The output of the above command will show you all the repositories configured on our system under YUM.

Step 4. Start the installation of MySQL community release with the following command:

# dnf install mysql-community-server

Step 5. Once the MySQL server is installed, we can start it with the command:

# service mysqld start

or

# systemctl start mysqld.service

Note: If the MySQL service takes too long to start, stop the above command by pressing ‘Ctrl+C’. Then run the ‘dnf update’ command and start the MySQL service again.

Step 6. Check the status of the MySQL service by running the command:

# service mysqld status

It should show an active running status for MySQL service.
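Optionally, you can also enable the MySQL service to start automatically at boot (an extra step, not part of the original guide):

# systemctl enable mysqld.service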

Besides the service status, we can also check the installed MySQL version with the command:

# mysql --version

The above command shows that we have installed the latest version of MySQL available in the yum repository.

Step 7. Now that MySQL is installed and running, we need to secure it. But before that, we need the temporary root password that MySQL created when the service first started. This temporary password is required during the configuration of the MySQL server.

To get this password, open a new terminal and run the below command:

# cat /var/log/mysqld.log | grep 'temporary password'

The password will be printed on your terminal.

Step 8. Now for securing the MySQL server, we need to change certain settings. Run the below command to enter the MySQL secure installation:

# mysql_secure_installation

It will ask for the temporary password we obtained in step 7. Enter it here. It will then prompt you to change the password for the root user. Make sure you enter a strong password satisfying all the policy requirements; otherwise, you will get an error about the password policy. This is shown below:

Once you have entered the correct password, you will see some instructions and questions on the screen like:

Securing the MySQL server deployment.

Enter a password for user root: [Enter the Temporary Password here]

The existing password for the user account root has expired. Please set a new password.

New password: [New password here]

Re-enter new password: [Retype the password]

The ‘validate_password’ component is installed on the server.

The subsequent steps will run with the existing configuration of the component.

Using the existing password for root.

Estimated strength of the password: 100

Change the password for root? ((Press y|Y for Yes, any other key for No) : [You can change MySQL root password here]

… skipping.

By default, a MySQL installation has an anonymous user, allowing anyone to log into MySQL without having to have a user account created for them. This is intended only for testing, and to make the installation go a bit smoother. You should remove them before moving into a production environment.

Remove anonymous users? (Press y|Y for Yes, any other key for No) : [Type ‘y’ to remove the anonymous user]

Success.

Normally, root should only be allowed to connect from ‘localhost’. This ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? (Press y|Y for Yes, any other key for No) : [Deny root login by entering ‘y’]

Success.

By default, MySQL comes with a database named ‘test’ that anyone can access. This is also intended only for testing and should be removed before moving into a production environment.

Remove test database and access to it? (Press y|Y for Yes, any other key for No) : [Press ‘y’ here]

– Dropping test database…

Success.

– Removing privileges on test database…

Success.

Reloading the privilege tables will ensure that all changes made so far will take effect immediately.

Reload privilege tables now? (Press y|Y for Yes, any other key for No) : [Reload the privilege tables to apply changes by pressing ‘y’ here]

Success.

All done! 

Step 9. Once the above steps are completed, we are all set to log in to the MySQL database server. Use the password you created during the MySQL secure installation in step 8:

# mysql -u root -p

You will see an output similar to this:

Enter password: [Enter MySQL root Password here]

Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 8
Server version: 8.0.23 MySQL Community Server - GPL
Copyright (c) 2000, 2021, Oracle and/or its affiliates. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type ‘help;’ or ‘\h’ for help. Type ‘\c’ to clear the current input statement.

mysql>

Conclusion

That’s all; we have managed to set up a working MySQL database server. What you can do next is to:

  1. Create new users and grant different privileges to them.
  2. Create databases and tables and then create a join between tables of different databases.
  3. Define a trigger that is automatically invoked with a response to operations like insert, update or delete.
]]>
How to Install LAMP in Fedora Linux https://linuxhint.com/install-lamp-fedora-linux/ Tue, 02 Feb 2021 19:09:54 +0000 https://linuxhint.com/?p=88752

The LAMP server is one of the most commonly used sets of open-source applications for building web applications. LAMP is a stable and powerful server stack and, at the same time, is very easy to use and set up. LAMP is an acronym for the four components comprising it: Linux, Apache, MySQL, and PHP. Similar counterparts exist for Windows and macOS, namely WAMP and MAMP.

Prerequisites:

Before proceeding to install the LAMP server in Fedora OS, make sure that you fulfill the following prerequisites:

  1. Have Fedora OS installed on your system. In this article, we are using Fedora 32 OS.
  2. Have root privileges access to the system you are working on.
  3. Have good Internet connectivity for downloading the various packages.

This guide shows you how to install the three components of the LAMP server. Later, you will learn how to make a basic LAMP application to check whether the installation is working as expected.

The following sections show the installation process for installing the LAMP server in Fedora OS:

Installing Apache

To install the Apache, or httpd, web server, run the following command:

# dnf install httpd -y

Next, enable the Apache service to automatically start at next system bootup:

# systemctl enable httpd.service

Now, start the service and check the status with the following commands:

# systemctl start httpd

# systemctl status httpd

Allow the HTTP and HTTPS services from the firewall. This is necessary if your firewall is blocking access to these services:

# firewall-cmd --permanent --add-service=http

# firewall-cmd --permanent --add-service=https

# firewall-cmd --reload

The process of installing Apache is now finished. Next, we will continue with the installation of the MariaDB database.

Installing MariaDB

MariaDB is a fork of the original MySQL database.

To install the MariaDB database in Fedora, issue the following command:

# dnf install mariadb-server -y

Once the installation is completed, we will enable and start the mariaDB service, as we did for the Apache server:

# systemctl enable mariadb

# systemctl start mariadb

# systemctl status mariadb

To finish configuring and securing the MariaDB server, we need to tweak certain settings. Run the command below to begin the secure installation of the MariaDB server:

#  mysql_secure_installation

When you run the above command, a set of questions will appear on the screen, such as:

  1. Enter current password for root (enter for none): [press Enter]

Here, simply press Enter, as there is no default password the first time that you configure MariaDB.

  1. Switch to unix_socket authentication [Y/n] n

From MariaDB 10.4, a new authentication method based on unix_socket has been added. In this guide, we will stick with the conventional MariaDB password. Continue by typing N/n.

  1. Change the root password? [Y/n] n

Note that we are already the root user when installing MariaDB 10.4, so we automatically have password-less, root-like access. Continue by typing N/n.

  1. Remove anonymous users? [Y/n] y

Here, we will remove the anonymous user. The anonymous user allows anyone to log in to the database without an account. Removing the anonymous user is necessary for a production environment, as this account is only meant for testing purposes. Continue by typing Y/y.

  1. Disallow root login remotely? [Y/n] y

Next, deny access for root login from remote address to improve security. Continue by typing Y/y.

  1. Remove test database and access to it? [Y/n] y

The test database is a default database that can be accessed by anyone. Like the anonymous user, the test database is only meant for testing purposes and should be removed before moving to a production environment. Type Y/y here, as well.

  1. Reload privilege tables now? [Y/n] y

Press Y/y to apply all the above changes immediately.

Now, the installation and configuration of MariaDB is complete. We will now move on to install PHP.

Installing PHP

PHP is one of the most widely used scripting languages for application development. To install PHP in Fedora 32 OS, we will run the following command:

# dnf install php php-common

Development with PHP will likely require the installation of several application-specific PHP modules, as shown below:

# dnf install php-mysqlnd php-gd php-mbstring

Some of these modules could already be installed with PHP; in our case, php-mbstring was installed alongside PHP.

A note about these modules:

php-mysqlnd – The MySQL Native Driver plugin, or mysqlnd, is required by PHP for working with the MariaDB/MySQL database.

php-gd – Required by PHP for working with and handling various image file (GIF, PNG, JPEG, etc.) operations.

php-mbstring – This module provides PHP with multibyte string handling capability.
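After installing PHP and its modules, it is usually a good idea to restart Apache so the new modules are picked up (and to restart php-fpm as well, if it was pulled in as a dependency):

# systemctl restart httpd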

Testing the LAMP Server Configuration

After installing PHP, we are now all set to test our configuration. We will create a test project to check whether all the components of our LAMP setup are working properly.

Follow the steps below to do so:

Log in to the MariaDB database, as shown below:

# mysql

For MariaDB 10.4, we do not need to specify the password to log in as a system-wide root user.

As we have disallowed remote root login in MariaDB during the secure installation, we will create a new user for the application to connect with. In MariaDB, run the following commands to create a new user:

CREATE USER 'myuser'@'localhost' IDENTIFIED BY '123';

GRANT ALL ON *.* TO 'myuser'@'localhost';

flush privileges;

Go to the Apache document root directory (/var/www/html by default on Fedora) and create a file with any name; for example, we will use “test.php.”

Put the following code inside the new file and save it:

<html>

   <head>

     <title>LAMP Application</title>

   </head>

   <body>

      <?php

     $stmt = new mysqli("localhost", "myuser", "123");

 

     if($stmt->connect_error) {

        die('Error in Connection ->'.$stmt->connect_error);

      }

 

      echo 'Connection successful: You are all set to go.';

 

      ?>

   </body>

</html>

Open a web browser and navigate to the following address:

http://localhost/test.php

or

http://<Apache_System_IP>/test.php

If you have correctly followed the steps provided in the procedure above, you should now be able to see the “Connection successful” message, as shown below:

Conclusion

Congratulations! You have successfully built a LAMP environment and deployed a basic working LAMP application. In this guide, you learned how to install a LAMP server in Fedora OS, as well as the method for deploying a basic application using the LAMP server. If you found this guide useful, then please share it with others.

]]>
Configure OpenStack Network Service- Step By Step Guide https://linuxhint.com/configure-openstack-network-service/ Tue, 26 Jan 2021 04:05:35 +0000 https://linuxhint.com/?p=87645 OpenStack is an open-source cloud platform that provides infrastructure-as-a-service (IaaS) for private, public, and hybrid cloud computing. OpenStack Foundation manages and develops the OpenStack project. The OpenStack provides a wide range of services for processing, storage, and networking inside a data center.

OpenStack has full capability to deploy virtual machines (VMs) and handle various tasks required for managing a cloud environment. With its horizontal scaling feature, it can spin up more instances as required.

One of the important features of OpenStack is that it is open-source software. MicroStack is a tool for installing an OpenStack environment in a very easy way. If you have previously gone through the manual steps of installing OpenStack, you know the real pain of customizing and configuring the various installation steps. With MicroStack, it is a simple two- or three-step process. In this guide, we have used the MicroStack-based variant of OpenStack. You can use any other way to install OpenStack, but with MicroStack, things become very simple.

Let us review some of the major components of OpenStack here:

  1. Nova: Manages various aspects of compute instances on demand. It is the compute engine of OpenStack for managing and deploying VMs.
  2. Neutron: Provides OpenStack networking services. It helps in establishing a communication path between various OpenStack instances.
  3. Swift: Provides object and file storage services inside an OpenStack environment.
  4. Horizon: It is a web-based graphical dashboard interface of OpenStack for managing OpenStack’s different operations.
  5. Keystone: It is an identity service for authentication, access control, authorization, and various other services.
  6. Glance: It is an image service of OpenStack for managing virtual machine images. These images can be used as a template for launching new VMs.
  7. Heat: It is basically an orchestration tool of OpenStack for launching multiple composite cloud applications using an orchestration template like HOT (Heat Orchestration Template).

The installation process of OpenStack is very resource and time-consuming. Before we get our hands dirty in configuring the OpenStack service, we assume that you have already installed OpenStack on your system or inside a VM. If not, you can follow our previous guide for installation. In this guide, we have installed the OpenStack using the Microstack from the snap repository.

Configuration of Our Machine:

Operating System: Ubuntu 20.04
RAM: 16 GB
Hard Disk: 160 GB
OpenStack Variant: Microstack.

In this tutorial, we will see how we can configure networking services in OpenStack. To simplify things, we have provided snapshots of various stages of configuration. So let’s jump right into it.

Step 1. Log in to the OpenStack dashboard with the admin account. Once you are logged in, you need to create a new project. Follow the below path:

Identity -> Projects -> Create Project

Step 2. Now that our project has been created with the name “MyProject1”, we have to go to the path:

Identity -> Users -> Create User

And create a new user.

Here we have to give our user a name (“LHuser” in our case) and an optional description for this user. Create a password for this user.
In the primary project menu, select our project (MyProject1). Now finish this step by clicking the “Create User” button.

Step 3. Now we will configure the OpenStack network. First, log out from the admin account and login with the newly created user LHuser. Now navigate to the path:

Project -> Networks → Create Network

I) Internal Network

a) First, we will create an internal network. Our Specification for the internal network is as follows:

Network Name: my_internal_nw
Subnet Name: my_subnet
Network Address: 192.168.2.0/24
Gateway IP: 192.168.2.10
IP Version: IPv4

Also, remember to check the “Enable Admin State.”

b) subnet

c) subnet details
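If you prefer the command line, roughly equivalent commands using the OpenStack client look like this (assuming the microstack.openstack alias provided by a MicroStack install; adjust the client name if you installed OpenStack another way):

$ microstack.openstack network create my_internal_nw

$ microstack.openstack subnet create my_subnet --network my_internal_nw --subnet-range 192.168.2.0/24 --gateway 192.168.2.10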

II) External Network
The steps for creating the external network are the same as those for the internal network. The only difference is that the network configuration here depends on the br-ex interface created during the OpenStack installation, so use the same network address and gateway IP as those of the br-ex interface. In our case, the specifications are as follows:

Network Name: my_external_nw
Subnet Name: my_subnet_2
Network Address: 10.20.20.0/24
Gateway IP: 10.20.20.1
IP Version: IPv4
Also, check the “Enable Admin State.”

2) Subnet

3. Subnet details

Step 4. After finishing the process of creating the networks, log out of the new user account and log in again with the admin account. On the OpenStack dashboard, go to:

1. Admin -> System-> Networks

Select the network named “my_external_nw” and click “Edit Network” on the right, in the row corresponding to this network.

2. A new window will pop up. Here, simply mark this network as an external network. Click the “Save Changes” button to apply the settings.

Step 5. Now, log out from the admin user and log in with the new user.
Step 6. We have to create a router to establish a communication path between the two networks. Go to

Project -> Network -> Routers

And click the “create router” button.

Step 7. It will ask for the router details. Fill them in, select “my_external_nw” as the external network, and click the “Create Router” button.

Step 8. After the above step, select the router from the router name column, go to the Interfaces tab and click on the “Add Interface” button.

Step 9. A new prompt window will appear. In the subnet drop-down, select the internal subnet “my_subnet.” Leave the IP Address field empty. Now click the Submit button to complete this step.
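The equivalent router setup from the command line would look roughly like this (again assuming the microstack.openstack client; the router name my_router is only an example):

$ microstack.openstack router create my_router

$ microstack.openstack router set my_router --external-gateway my_external_nw

$ microstack.openstack router add subnet my_router my_subnet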

Step 10. Now, as all the steps are finished for configuring the network, we will verify OpenStack network settings. Follow the path:

Project -> Network -> Network Topology

A network map as shown below should appear:

That’s all, folks. We have successfully completed a basic network configuration on OpenStack. Try adding some flavor to this configuration by creating more networks and building communication paths between multiple VMs inside OpenStack.

]]>
Advanced Network Configuration in Debian 10 (Buster) https://linuxhint.com/advanced-network-configuration-debian-10/ Mon, 25 Jan 2021 18:13:22 +0000 https://linuxhint.com/?p=87611

In this guide, we will see various ways to perform common network configuration tasks on a Debian system. Although this guide targets Debian, most operations should also work on other Debian-based systems like Ubuntu, as well as other Linux operating systems.

 1. To print the IP address of a specific interface or device, use the below command:

$ ip addr show enp0s8


Here, enp0s8 is the name of an interface or device. The naming convention may vary depending upon the naming mechanism used.

2. The ip command can also be used to show network performance statistics, as follows:

$ ip -s  link show enp0s8


The above command output reveals the number of packets transmitted and received, packets dropped, and packets with errors. This information can be used to troubleshoot network issues such as low memory, connectivity problems, packet congestion, etc.

3. Using nmcli or Network Manager Command Line Interface tool to create a DHCP network connection

$ sudo nmcli con add con-name "MyCon1" type ethernet ifname enp0s8


The above command will create a new connection named “MyCon1” on the device enp0s8. Let us see some details about this command:

  • The configuration of this connection will be based on DHCP. The type of this connection is ethernet. Other types of network connection can be wifi, Bluetooth, vlan, bond, team, bridge, etc.
  • The con-name argument defines the name of the connection.
  • The ifname option specifies the name of the interface or the device assigned for this connection.

4. To create a static connection using nmcli, we will need to specify the IP address and the gateway as the argument

$ sudo nmcli con add con-name "MyCon2" type ethernet ifname eth1 ip4 192.168.2.10/24 gw4 192.168.2.1

To activate the connection, use the following command:

$ sudo nmcli con up "MyCon2"

To verify the new connection, run:

$ nmcli con show --active

$ ip addr show enp0s3

5. Configuring the network with Network Interfaces File

The /etc/network/interfaces file contains the definitions of various interface configurations. We can add configuration details to create a new connection. Let us see some manual configuration:

I. Adding a static IP address:

1. Open the /etc/network/interfaces file with sudo privileges:

$ sudo nano /etc/network/interfaces

Now  add the following lines:

auto  enp0s3

iface enp0s3 inet static

address  192.168.1.63

netmask 255.255.255.0

gateway 192.168.1.1

You can add this configuration to the /etc/network/interfaces file or add it to a new file under the /etc/network/interfaces.d directory.

After modifying the above file, let’s restart the networking service for changes to take effect:

$ sudo systemctl restart networking

Now we will reload this interface by running the command ifdown followed by ifup:

$ sudo ifdown enp0s3

$ sudo ifup enp0s3


The ifup and ifdown commands are used to manage the interfaces defined in this file. These tools are very helpful while configuring the network from the command-line interface. These commands can be found in /sbin/ifup and /sbin/ifdown.

II. Adding a DHCP Address:

The dhcp IP address is automatically assigned from the IP address pool of the DHCP server.

To configure a DHCP address, enter the following line to /etc/network/interfaces file and save the file:

iface enp0s3 inet dhcp


Now restart the networking service and again run the command ifdown and ifup as above:

$ sudo systemctl restart networking

$ sudo ifdown enp0s3

$ sudo ifup enp0s3

To verify the above network configuration, use the following ‘ip’ command to see if the interfaces are shown with their respective ip addresses:

$ ip a | grep 'enp0s3'

Note: A DHCP-assigned IP is generally fine for clients, but servers usually use a static IP address.

6. Setting Hostname with the “Sysctl” command

Linux provides a sysctl utility to display and set the hostname as shown below:

i) Displaying the hostname:

$ sudo sysctl kernel.hostname

kernel.hostname = debian


ii) Setting the hostname

$ sudo sysctl kernel.hostname=linuxhint

Now run the exec bash command to reload the shell and verify the new hostname:

$ exec bash

Now to make this hostname permanent, we will have to edit the /etc/hosts and /etc/hostname files, so open the files and put the new hostname there:

$ sudo nano /etc/hosts

$ sudo nano /etc/hostname

Now from this point, you should see your new hostname every time you open a new terminal.
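On systemd-based Debian systems, the hostnamectl tool is an alternative that updates /etc/hostname for you (an optional shortcut, assuming systemd is in use):

$ sudo hostnamectl set-hostname linuxhint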

7. DNS configuration

DNS, or Domain Name Service, is a naming system that is used to translate domain names into network addresses (IPv4 or IPv6). The DNS service has much more capability than simply translating domain names. The DNS service can work both on the internet and on a private network.

We will configure a client to use a specific DNS server. In the example below, we will configure the client to use the DNS server at 8.8.8.8. Open the file /etc/resolv.conf and make the following changes:

$ sudo nano /etc/resolv.conf

Go to the line containing the string “nameserver” and add the IP address of the DNS server (8.8.8.8), as shown below:

nameserver 8.8.8.8
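To confirm that name resolution works with the new server, try resolving a test domain (the domain below is only an example):

$ getent hosts debian.org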

8. Using Bonding on Debian 10

Bonding is a method in which two or more interfaces are merged to make a new logical interface. This bonded interface makes the network more reliable. In case a link fails, the other link automatically carries all the network traffic. This increases network availability as well. You can try bonding your wireless interface with the cable interface. If, for some reason, the Ethernet cable is unplugged or not working, the network traffic will automatically start flowing over the wireless interface.

Tip: We can use bonding to add multiple network interfaces (NICs) with the same IP address.

To check if your Linux kernel supports bonding, use the following command:

$ sudo grep -i bonding /boot/config-$(uname -r)

An output like “CONFIG_BONDING=m” shows that bonding is enabled as a kernel module.

Let us see how to apply bonding on two Ethernet interfaces, “eth1” and “eth2”, on a Debian system. Follow the steps below:

Step 1. Install the ifenslave package to configure bonding:

$ sudo apt install ifenslave

Step 2.  Now bring down the interface before configuring it:

$ sudo ifdown enp0s3

Note: Before proceeding, make sure that the interface you are modifying should not be in use; otherwise, it will break your network connectivity.

Step 3. Create a new bonding configuration and call it “bond1”. To do this, open the default network configuration file:

$ sudo nano /etc/network/interfaces

Now add the following lines:

auto bond1

iface bond1 inet static

address 192.168.1.200

netmask 255.255.255.0

gateway 192.168.1.1

slaves eth1 eth2

bond-mode 1

bond-miimon 100

bond_downdelay 200

bond_updelay 200

Restart the networking service

$ sudo systemctl restart networking

Linux supports different bond modes: balance-rr (mode=0), active-backup (mode=1), balance-xor (mode=2), broadcast (mode=3), 802.3ad (mode=4), balance-tlb (mode=5), balance-alb (mode=6). In this example, we are using mode 1, or active-backup, as the bond mode.

Step 4. Bring the new bonded interface (bond1) up with command ifup. Now check if it works:

$ sudo ifup bond1

To check if the bind interface is created, run the following command:

$ ip a | grep 'bond1'

or

$ ifconfig bond1
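You can also inspect the bonding driver's view of the new interface, including the active slave and link status (assuming the bond was created as above):

$ cat /proc/net/bonding/bond1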

9. Configuring bridging on Debian

Bridging is the most common way to connect two different networks. A bridge (hardware) device is used when connecting two different networks of an organization, usually located at different sites. A Linux system also has the capability to create a bridge between two interfaces on different networks. This way, we can pass traffic between them.

Let us create a bridge between two different interfaces, “eth0” and “eth1”, on a Debian system.

Step 1. Install the “brctl” tool to configure bridging on the Debian system:

$ sudo apt install bridge-utils

Step 2. Run the following command to get a list of all the network interfaces available on your system:

$  ifconfig -a

Step 3. Create a new bridge interface using the brctl tool:

$ sudo brctl addbr br1

This creates a new virtual bridge interface, br1, that will link eth0 and eth1.

Step 4. Now add both the interfaces to this virtual interface.

$ sudo brctl addif br1 eth0 eth1
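To verify that both interfaces were attached to the bridge, list the bridges and their ports:

$ sudo brctl show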

Step 5. To make this configuration permanent, we will add the new interface details to the file /etc/network/interfaces.

i) For setting a DHCP address, use the following details

# The loopback network interface

auto lo

iface lo inet loopback

# Set up interfaces manually, avoiding conflicts with, e.g., network manager

iface eth0 inet manual

iface eth1 inet manual

# Bridge setup

iface br1 inet dhcp

bridge_ports eth0 eth1

Now run the below command to bring the interface up:

$ sudo ifup br1

ii) For setting a static IP address, use the following details

# The loopback network interface

 auto lo br1

 iface lo inet loopback


 # Set up interfaces manually, avoiding conflicts with, e.g., network manager

 iface eth0 inet manual


 iface eth1 inet manual


 # Bridge setup

 iface br1 inet static

    bridge_ports eth0 eth1

        address 192.168.1.2

        broadcast 192.168.1.255

        netmask 255.255.255.0

        gateway 192.168.1.1

Now run the below command to bring the interface up:

$ sudo ifup br1

If the network does not work after rebooting, try removing /etc/network/interfaces.d/setup file to fix the issue.

10. Configuring Networking from Command-Line Tools

i) Adding an additional IP address to a network card:

Step 1. Run the following command to list all the available interfaces with their IP address:

$ sudo ip addr

or

$ sudo ifconfig

While running “ifconfig,” you may encounter the error “ifconfig: command not found”. To fix this error, we need to install the “net-tools” package:

$ sudo apt install net-tools -y

Step 2. From the output of the above command, you can select the interface on which you want to add an extra IP address. Let us add an extra IP address (10.0.2.65) to the interface enp0s3.

$ sudo ip addr add 10.0.2.65/24 dev enp0s3

Step 3. Verify if the IP has been added to this interface:

$ ip a | grep "enp0s3"

You should see here the new and old IP address in the output.


Step 4. To make this IP address permanent, put the following lines in the  /etc/network/interfaces file:

# The network interface enp0s3 is dhcp enabled

auto enp0s3

iface enp0s3 inet dhcp

iface enp0s3 inet static

address  10.0.2.65/24

Step 5. Now save the file and bring down the interface and then again bring up the interface to apply the changes:

$ sudo ifdown enp0s3

$ sudo ifup enp0s3

Now verify the connectivity of the interface with the ping command:

$ sudo ping  10.0.2.65

If everything goes right, you should see a ping coming from the new IP address.

ii) Changing the mac address of an interface.

Step 1. Run the below command and select the interface whose MAC address you want to change:

$ ip link show

It will show you all the interfaces with their mac address, state, and other information.

Step 2. Let us change the MAC address of the “eth0” interface. First, bring it down:

Note: Before proceeding, make sure that the interface you are modifying should not be in use. Otherwise, it will break your network connectivity.

$ sudo ip link set dev eth0 down

Step 3. Now enter the new mac address as below:

$ sudo ip link set dev eth0 address <new_mac_address>

Step 4.  Now bring up the interface again:

$ sudo ip link set dev eth0 up

That’s all for configuring the new MAC address; you can now verify it:

$ ip addr

The output of the above command should show the new MAC address. The macchanger tool can also be used to change the MAC address from the command line.

iii) Enable and disable interfaces.

Besides ifup and ifdown tools, the ifconfig command can also be used to bring up and bring down an interface.

a) To bring down an interface:

$ ifconfig enp0s3 down

b) To bring up an interface:

$ ifconfig enp0s3 up

iv) Remove an IP address from a network interface.

To delete an IP from the network interface, use the below command:

$ sudo ip addr del 'your IP address' dev  enp0s3

Replace ‘your IP address’ with your IP address, e.g., the following command will delete the IP 192.168.2.2

$ sudo ip addr del 192.168.2.2/16 dev  enp0s3

If you have multiple IP addresses for an interface, you can delete all as shown below:

$ sudo ip addr flush dev  enp0s3

v) Set the Default Gateway

The route or ip command can be used to set a Default Gateway:

$ sudo route add default gw  10.0.2.20

or

$ sudo ip route add default via 10.0.2.20 dev enp0s3
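To verify the new default route, print the routing table:

$ ip route show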

In this guide, we have seen how to modify and configure various network settings on a Debian 10 OS. If you like this guide, please share it with others.

]]>
Automatically Build Docker Images in Debian 10 (Buster) https://linuxhint.com/automatically-build-docker-images-in-debian-10-buster/ Mon, 25 Jan 2021 18:10:41 +0000 https://linuxhint.com/?p=87633

Docker is an in-demand technology these days, as many big companies use it to streamline their workflows. It is used for building, packaging, and deploying applications on top of container technology. Docker can run resource-intensive applications with minimal overhead. Hypervisor-based virtualization requires a lot of resources because it installs an entire operating system, whereas Docker uses very lightweight and scalable containers to run applications.

Docker can be installed on Linux, Mac, and Windows. Although it runs natively on Linux, it requires Hyper-V to be enabled on Windows.

Docker also has a Docker Hub, a cloud-based service where we can find images from verified publishers, and we can also publish and share our own custom images.  Once we have pulled an image from Docker Hub, we can create numerous containers from the very same image.

Features of Docker:

  1. It is open-source software.
  2. Provides Platform as a Service for running application in a virtual environment.
  3. It is very easy to understand and use the Docker technology.
  4. Docker applications can be easily moved and run on any system with Docker installed on it.
  5. Migration of docker containers is very fast from cloud environment to localhost and vice versa.

Docker can read and execute the instructions inside a Dockerfile and automatically build the specified image. In this guide, we will see how to automatically build a Docker image using a Dockerfile on the Debian 10 (Buster) operating system. We will deploy the Nginx web server and create a custom Docker image.

Prerequisites:

  1. Access to “sudo” privileges.
  2. Basic Knowledge of Docker commands.

Before we start our journey, let’s quickly review some important concepts and requirements which are necessary to understand this guide. The first thing is that you should have Docker installed on your system. If you have not already, you can follow this guide to install docker. You can also use the official guide available on the Docker website for installing Docker on Debian 10.

  1. Dockerfile: This file describes the whole configuration we want to have in our  Docker container. It is a set of instructions that defines how to build an image.
  2. Docker Image: It is actually the template image we can use to build our custom container. We can say a docker image is an immutable file or a read-only image.
  3. Docker Container: In very simple words, a Docker container is an instance of our docker image. We can say the Docker image is a base image, and we create a custom container on the top of a Docker image by adding a writable layer on this image.  We can use a single Docker image to create multiple Docker containers.

I hope this review is enough for us to get started with Docker. So let’s dive in to see how to build images using Dockerfile automatically.

Step 1: The very first step in building an image starts with a docker file. So let’s first create a working directory, and inside it, we will make a Dockerfile.

$ mkdir mydock1 #  This creates a new directory.

$ nano Dockerfile # This is our dockerfile.

We can use any text editor besides nano like vi or vim.

Step 2. Add the following content to the Dockerfile and save it.

FROM ubuntu

MAINTAINER linuxhint

RUN apt-get update \

    && apt-get install -y nginx \

    && apt-get clean \

    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \

    && echo "daemon off;" >> /etc/nginx/nginx.conf

EXPOSE 80

CMD service nginx start

Step 3. Now, as we have our Dockerfile ready, it’s time to build The image. Just use the following command:

$ sudo docker build -t webserver-image:v1 .

Syntax:

sudo docker build -t name:tag /path/to/directory/of/dockerfile

Note: Always run the docker command with root user or “sudo” privileges to avoid the error: “Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker”

In the above command, webserver-image is the name of our Docker image. You can use a custom name here. v1 is the tag for our image.

If everything goes right, we should see the following output:

Sending build context to Docker daemon  2.048kB

Step 1/5 : FROM ubuntu

—> f643c72bc252

Step 2/5 : MAINTAINER linuxhint

—> Using cache

—> 1edea6faff0d

Step 3/5 : RUN apt-get update     && apt-get install -y nginx     && apt-get clean     && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*     && echo “daemon off;” >> /etc/nginx/nginx.conf

—> Using cache

—> 81398a98cf92

Step 4/5 : EXPOSE 80

—> Using cache

—> 2f49ffec5ca2

Step 5/5 : CMD service nginx start

—> Using cache

—> 855796a41bd6

Successfully built 855796a41bd6

Successfully tagged webserver-image:v1


Step 4. When we have a number of images, we can use the below command to look for a specific image:

$ sudo docker images

Step 5. Now we will run our docker image to see if it is working as expected:

$ sudo docker run -d -p 80:80 webserver-image:v1

After a successful run, it will print a long container ID, as shown below:

Step 6. If everything goes right, we should be able to see the Nginx web server's default page served from inside the Docker container. Run the below command to check it:

$ curl 'ip_address'

Please keep in mind that the IP address used here is the Docker bridge (docker0) IP address on our host operating system. To find the exact IP address required here, run the following command on the host:

$ ip a | grep ^docker

The output of the above command contains the IP address we have to use here.

The above curl command will display the index.html content of the nginx web server.

Another simple and straightforward way is to pass the Docker IP directly as the curl argument, as shown below:
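Because the container's port 80 is published to the host with -p 80:80, you can also test directly against localhost (a quick extra check, assuming the default setup above):

$ curl http://localhost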

Step 7. If you want, you can check which port and processes are running inside our docker container. Run the below command:

$ sudo docker ps

This completes our guide on automatically building Docker images on Debian 10 (Buster). We have seen how to construct Docker images from a Dockerfile instead of manually configuring each image.

Although this guide was performed on Debian 10, it should also work on other Debian-based distros like Ubuntu, Linux Mint, etc. Please do not forget to share this guide with others. Also, subscribe to our blog to get the latest updates and how-tos on Linux.

]]>
Difference Between ARM64, ARMel, and ARMhf https://linuxhint.com/about-arm64-armel-armhf/ Mon, 25 Jan 2021 17:56:09 +0000 https://linuxhint.com/?p=87640

Most of us, while looking to buy a new smartphone, tablet, or other electronic gadget, see the term “ARM vXXX” processor in the specifications list. But we hardly ever bother to find out what an ARM processor actually is. So, in this guide, we will briefly explore ARM processors.

What is ARM anyway?

ARM, or Advanced RISC Machines (originally Acorn RISC Machine), is one of the world’s most used processor core families. The ARM processor became one of the first commercial RISC processors in 1985. The first release was a 32-bit RISC machine with 26-bit addressing. With its second release in 1987, ARM version 2 introduced co-processor support. Over time, ARM processors have evolved considerably. The ARM company provides paid licenses to anyone who wants to manufacture CPUs or SoC products based on its architecture. ARM Holdings, based in Cambridge, UK, runs this licensing business end to end. Apple, Qualcomm, Texas Instruments, Nvidia, Samsung, etc., are some of the ARM family’s notable licensees.

ARM processors are mostly used in mobile devices and embedded systems. They are small in size and have low power consumption, yet they provide high performance. One point to consider is binary compatibility: software compiled for ARM cannot run natively on non-ARM devices, much like two people speaking different languages cannot understand each other.

Features of ARM Processor

  1. Based on RISC, or Reduced Instruction Set Computing.
  2. Fixed-size and uniform instruction set.
  3. Multi-stage instruction pipeline support.
  4. Supports a wide frequency range.
  5. Execution of Java byte-code.
  6. Optimized for battery usage in mobile devices.

In a broad sense, the ARM architecture has three types of profiles:

A-profile or Application profile

R-profile or Real-time profile

M-profile or Micro-controller profile

Why ARM is Used by Tech Giants

For a long time, ARM was considered the processor for mobile devices, with x86/x64 as the target processor for desktops and servers. But with the evolution of technology, ARM processors are now also being used in tablets and PCs. For example, Windows 10 could earlier only run on x86 and x64 based processors, but recent Windows 10 desktop builds can run on processors based on the ARM64 architecture. Microsoft has assured application compatibility so that x86 and x64 based applications run smoothly on ARM64 based PCs. Although ARM32 and ARM64 based applications execute directly, x86 based applications require emulation to run.

Some Windows versions, like Windows 8, require an x86 or x64 processor, whereas Windows RT needs an ARM processor. Although x86/x64 processors are very fast compared to ARM processors, they consume significantly more energy. Therefore, they are best suited for servers and desktop computers. The ARM processor, on the other hand, is relatively slower but requires less energy to run. This makes it more suitable for mobile devices running Android, iOS, etc.

Apple has announced that it is moving its Mac series from Intel to SoC and SiP processors based on the ARM architecture. According to Apple, ARM processors will deliver performance combined with long battery life. Apple Silicon chips are the first Apple-designed ARM-based chips to be used in the recent MacBook Air, MacBook Pro, and Mac mini.

The  Three Debian ARM ports: Debian/armel, Debian/armhf, and Debian/arm64

Debian/armhf is an abbreviation of “ARM hard float” and represents a port of Debian. The Debian armhf port was started to take advantage of the floating-point unit (FPU) on modern 32-bit ARM boards.

Floating-point hardware is particularly suited to applications with critical accuracy requirements, such as computing and digital signal processing (DSP) based applications. An ARMv7 CPU with version 3 of the ARM vector floating-point specification (VFPv3) is the minimum requirement for the Debian armhf port.

It is primarily used for mobile devices (smartphones, tablets) and embedded devices.

Various platforms are known to be supported by Debian/armhf:

  1. Freescale MX53 Quick Start Board: The i.MX53 Quick Start Board has a 1 GHz Arm Cortex-A8 Processor. It is an open-source platform for development.
  2. NVIDIA Jetson TK1: It is a developer board with a 32-bit ARM Cortex-A15 CPU.
  3. SolidRun Cubox-i4Pro: The Cubox-i series is a tiny compute platform. Cubox-i4Pro features an ARM Cortex A9 processor.

Other supported platforms include the Wandboard, Seagate Personal Cloud and Seagate NAS, SolidRun Cubox-i2eX, etc. The EfikaMX platform was supported up to Debian 7, but from Debian 8 onward, its support has been dropped.

Debian/armel, or the ARM EABI (Embedded ABI) port of Debian, is aimed at older 32-bit ARM processors without a hardware floating-point unit (FPU). ARM EABI or armel is supposed to work with ARM architecture versions 4T, 5T, and above, but with the Debian 10 (Buster) release, ARMv4T support has been removed.

According to Oracle, the armel to armhf transition is still in progress, so there may be some incompatibilities between them. To check whether your system is running armhf or armel, run the below command in your Linux terminal:

$ readelf -A /proc/self/exe | grep Tag_ABI_VFP_args

If the above command returns a Tag_ABI_VFP_args tag, it is an armhf system, whereas a blank output indicates an armel system. For example, a Raspbian distribution will return a “Tag_ABI_VFP_args: VFP registers” tag, as it is an armhf distribution. On the other hand, a soft-float Debian Wheezy distribution will give a blank output, indicating it is an armel distro.
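
If you need to run this check often, it can be wrapped in a small shell snippet. A minimal sketch based on the readelf command above (only meaningful on a 32-bit ARM system; no extra tools are assumed):

if readelf -A /proc/self/exe | grep -q Tag_ABI_VFP_args; then
    echo "This system is armhf"
else
    echo "This system is armel"
fi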

The following list contains the various platforms supported by Debian/armel:

  1. Kirkwood and Orion5x SoC from Marvell with an ARM CPU.
  2. Versatile platform with QEMU emulator.

Debian/arm64 targets 64-bit ARM processors and requires at least the ARMv8 architecture. 64-bit processing provides enhanced computing capability, achieved through the larger memory addressing capacity of the 64-bit architecture. ARM64 hardware was first launched with the iPhone 5S in 2013. The GNU triplet name for ARM64 is aarch64-linux-gnu. A good thing about ARM64 is that it is compatible with its 32-bit predecessor, which helps in running ARMv7 binaries or software without any modification on the ARMv8 architecture.

Debian released an ARM64 port for the first time in its Debian 8 (Jessie) operating system. The list of various platforms supported by Debian/arm64 is given below:

  1. Applied Micro (APM) Mustang/X-Gene: It was the first known ARMv8 platform, with an 8-core CPU.
  2. ARM Juno Development Platform: According to ARM, the Juno Arm Development Platform is an open and vendor-neutral ARMv8 development platform with a 6-core ARMv8-A CPU.

Examples of devices using the ARM64 architecture include the Raspberry Pi 2, Raspberry Pi 3, Microsoft HoloLens 2, DragonBoard, several IoT devices, modern laptops and desktops, smartphones, etc.

Checking the processor type of your board.

To check the processor type on an Ubuntu machine, just use the following command:

$ dpkg --print-architecture

For a detailed list of the various features of your CPU, use the following command:

$ cat /proc/cpuinfo

Another command that you can use to see the processor architecture of your system is given below:

$ uname -a
]]>
cPanel Tutorial https://linuxhint.com/cpanel-tutorial/ Fri, 15 Jan 2021 20:37:49 +0000 https://linuxhint.com/?p=85794 cPanel is one of the most widely used web hosting control panels. It has a vast number of utilities and tools for website and server management. For example, you can manage and publish your websites, create email and FTP accounts, install applications like WordPress, and secure your website with SSL certificates.

cPanel is based on the Linux operating system, and it currently supports CentOS 7, CloudLinux 6 and 7, and Red Hat Enterprise Linux version 7. Amazon Linux 1 was previously supported but has now been abandoned.

cPanel requires a fresh server for installation. This is because it runs many different services on specific ports; installing it on a clean system avoids port conflicts with previously installed services.

Ports Used By cPanel

cPanel runs several services for website hosting and server management. Some of these require a specific port to be open to function correctly, so you should allow them through your firewall. A brief list of services and the ports they listen on is given below:

cPanel Ports and Services
Service Ports
cPanel 2082
cPanel SSL 2083
WHM 2086
WHM SSL 2087
FTP 21
SSH 22
SMTP 25, 26, 465
DNS 53
HTTPD 80, 443
Webmail 2095

Ports Modification in cPanel

cPanel provides many services running on different ports, and sometimes it is necessary to change the default port of a service, for example because of port conflicts or security concerns. Whatever the reason, we will show how to modify the port number of specific cPanel services like Apache (HTTPD), SSH, and SMTP. Changing some port numbers may require you to contact your hosting provider, whereas certain port numbers, such as the cPanel port, can no longer be changed.

Note: Before adding any new port, configure the firewall to allow traffic on the new port. Also, check that no other service is already using it.

Changing Apache Port Number on a cPanel Server.

Step 1: Log in to your WHM account and go to Tweak Settings as follows:

Home >> Server Configuration >> Tweak Settings

Now go to the “System” menu and change both the Apache HTTP (80) and SSL HTTPS (443) port numbers.

Changing SSH Port Number on a cPanel Server.

Step 1: Log in to your server via SSH as the root user.

Step 2: Once you are logged in, look for the sshd_config file and open it with any text editor like nano or vi.

# vi /etc/ssh/sshd_config

Tip: It is always a good idea to back up a file before modifying it.

Step 3: Now, look for a line in the sshd_config file similar to “#Port 22”. Here 22 is the default port on which the sshd daemon listens for connections. Uncomment this line by removing the ‘#’ symbol at the start of the line. Now insert any new privileged port number between 1 and 1023. Privileged ports are those that are accessible only by the root user.

# Port 22 changed to Port 69

Step 4: Now restart SSH service using the following command:

# service sshd restart
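
Before closing your current session, it is worth confirming that sshd is now listening on the new port. A quick check, assuming the example port 69 chosen above:

# ss -tlnp | grep sshd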

In case you have misconfigured the file, you can fix the original SSH configuration file by browsing the following link in a web browser:

https://example.com:2087/scripts2/doautofixer?autofix=safesshrestart

This script will try to assign an additional SSH configuration file for port 23. Now you can access and modify the original SSH config file.

Changing SMTP Port Number on a cPanel Server.

Some providers block access to port 25 for sending mail, but this port is required for communicating with users on other mail services. To change the SMTP port, navigate through:

Login to WHM > Service Configuration > Service Manager. Inside “Exim Mail Server (on another port),” change the port number to your desired value.

Even though cPanel offers the option to change the Exim SMTP port, doing so is of little use. This is because it breaks communication, as other mail servers are not configured to work with non-standard ports. The solution for this is to use a “smart host” or a third-party service option in cPanel.

Using Let’s Encrypt with cPanel

Let’s Encrypt is a free and widely used certificate authority that provides TLS certificates. cPanel has made it very easy to install and manage the SSL certificates provided by Let’s Encrypt. To use the Let’s Encrypt SSL service, you need to install the cPanel Let’s Encrypt plugin. The Auto SSL feature of cPanel and the Let’s Encrypt plugin for cPanel fetch the certificates provided by Let’s Encrypt™. Follow the steps below to install the Let’s Encrypt plugin:

  1. Log in to your server with the root user credential.
  2. Now run the following command to install the plugin:
    /usr/local/cpanel/scripts/install_lets_encrypt_autossl_provider

    If you want to uninstall the plugin, simply run the below command:

    /scripts/uninstall_lets_encrypt_autossl_provider
  3. Now activate the Let’s Encrypt provider in WHM. To do this, log in to WHM and go to the “Manage Auto SSL” page under “SSL/TLS.” The path is shown below:
    WHM > Home > SSL/TLS > Manage Auto SSL.
  4. Now, in the Providers tab, select the Let’s Encrypt option; after accepting the terms of service, save the settings. From now on, Auto SSL will use Let’s Encrypt when replacing a certificate. After Auto SSL has been enabled in WHM, it’s time to add the certificates to your account. Follow the steps below to accomplish this:
    1. Log in to your WHM account.
    2. Under the Manage Auto SSL path, select the Manage Users tab.
    3. Inside the Manage Users tab, you can configure which individual cPanel users can use Auto SSL.
    4. Select the required domain and click “install” to add the Certificate.
    5. After the installation is complete, click the link “Return to SSL Manager” at the bottom of the page.

Let’s Encrypt for Shared Hosting

If you are on a shared hosting plan, then to install the Let’s Encrypt Free SSL certificate follow the below steps:

  1. Go to some website that offers free SSL services like SSLFORFREE or ZEROSSL.
  2. Complete the Free SSL Certificate Wizard by entering your domain name and accept the terms of service.
  3. Next, it will ask you to verify your domain ownership. For example, some SSL service providers ask to create TXT records in the DNS server that hosts your domain. They give the details of the TXT records. Later they will query the DNS server for the TXT records.
    The other method is to download two files and upload them to your cPanel account. The upload location on the server will be inside: public_html > .well-known > acme-challenge.
  4. Now, once we have verified the ownership of the domain, it will provide you with a certificate and an account or domain key (private key). Download or copy these files somewhere. The next thing is to set up SSL for our website.
  5. log in to your cPanel account. Under the “Security” section, select the SSL/TLS option.
  6. Select the “Manage SSL sites” option under Install and Manage SSL for your site (HTTPS).
  7. Select the domain from the drop-down menu you used to register at ZeroSSl or SSLforFree website.
  8. Now, enter the contents of the domain certificate file into the certificate text box. To check whether the file also contains the CA bundle, see if it has an “-----END CERTIFICATE-----” line followed by another “-----BEGIN CERTIFICATE-----” line in the middle of the text. If this is the case, cut out the part starting from that second “-----BEGIN CERTIFICATE-----” line to the end of the text.
  9. Now paste the part you cut in Step 8 into the Certificate Authority Bundle text box.
  10. Now copy the private key, i.e., the domain key, and paste it into the “Private Key” field.
  11. Finally, click “Install Certificate” to install all the certificates.

To check if your site is running on HTTPS protocol, try accessing your site with https://yourdomain.com

Redirect HTTP to HTTPS

To redirect HTTP requests to HTTPS, open the File Manager in cPanel. Look for a file named “.htaccess”. If it is not there, check the hidden files; if it still does not exist, create a new one.

Open the file and add the following lines:

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

Now test if .htaccess is working by browsing your site with http://yourdomain.com. If it is automatically redirected to https then it is working correctly.

The drawback of using Let’s Encrypt is that the certificate needs to be renewed every 90 days. Also, it has several per-domain limits and rate limits.

cPanel Backup

cPanel provides a feature for backing up our databases, emails, files, etc. The backup can be used to keep a local copy of the data, recover data, move to a new hosting provider, or for other purposes. Backups are a necessary task for system administrators to keep their organizations safe in any data disaster. In this guide, we will see how to take different backups using cPanel.

Full Backup
Step 1: Log in to your cPanel account and click on the “Backup” utility under the “Files” section.

Step 2: It will show you three types of backup options: Full Backup, Account Backup, Partial Backup. Click the button under Full Backup, which is labeled as “Download a Full Account Backup.” It will create an archive of all the files and configuration settings of your website.

Step 3: On the next page, it will ask you for the destination for your backup archive file. You can choose to save the backup in the home directory or transfer it to another server via the FTP or SCP protocols.

You can also optionally select to receive an email for backup completion.

Step 4: Click the “Generate Backup” button to start the backup process. The process may take time depending on the size of your data. It will generate a downloadable backup file with the extension .tar.gz. The name of the file contains the time and date of the backup and the domain name.

Partial Backup
With this method, we can take a backup of only particular items, like 1) the home directory, 2) MySQL databases, 3) email forwarders, and 4) email filters. To take a partial backup, click the link given against each option below the “Partial Backups” heading.

Account Backups
The account backup option is used only when we have to download the full backup file to our local computer.

The other option, “Backup Wizard,” can also create and restore a backup. It will provide you with a step-by-step guide for managing the backup.

Managing PHP versions with cPanel

cPanel’s Software section provides utilities to configure various settings related to PHP. Below we will see how to modify some of these settings.

Changing the version

Step 1: Log in to your cPanel account and go to the Software section. Look for an application named “MultiPHP Manager.” If it is not already installed, you can install it from a software installer in cPanel, such as “Installatron Applications Installer” or whatever installer your hosting company provides.

Step 2: Now select the domain for which you want to change the version of PHP. From the right drop-down menu labeled as “PHP Version,” choose the PHP version you want to install.

Step 3: Click the apply button to confirm your selection. Please be aware that things might get broken sometimes when you change the version of PHP.  For example, you may not be able to open your WordPress admin page after changing PHP’s version. If such a thing happens, then revert to your older version of PHP.

MultiPHP INI Editor is a cPanel utility that allows users to make more significant changes to PHP settings. It has two modes of editing:

  1. Basic mode to change several PHP directives with a toggle switch. These directives include allow_url_fopen, allow_url_include, file_uploads etc.
  2. Editor mode allows adding new PHP code to your php.ini configuration file.

Configuring .htaccess file in cPanel

The .htaccess or Hypertext Access file is an essential file for manipulating various aspects of a website running on an Apache server. We can add additional functionality and control features to our site with the .htaccess file configuration. The .htaccess file usually resides in the root directory and is hidden. You can unhide it from File Manager. Every directory can have its own .htaccess file. If you cannot find the .htaccess file, you can create a new one using File Manager in cPanel.

In this guide, we will try to explore some salient features of the .htaccess file.

  1. Custom error pages: You have probably noticed that when we request a web page on the internet that is not available, we receive a “404: Not Found” error. With the .htaccess file, we can customize these error pages from plain text to nice-looking, user-friendly web pages. First, you need to design a custom error page and put it into your web server’s root document directory. If you have placed it in some other sub-directory, then specify the path of that sub-directory. Open the .htaccess file and put the following code:
    ErrorDocument 404 /PathToDirectory/Error404.html

    Where first 404 is the error number, and Error404.html is your custom error page.
    We can do the same process for other errors like bad-request, internal-server-error, etc.

  2. Redirecting HTTP requests to HTTPS: Sometimes users access a website or request a resource over HTTP when they should have been using HTTPS. In such cases, modern browsers generate an insecure connection warning. To make the connection secure, we can use the .htaccess file to automatically redirect HTTP requests to HTTPS. For this, open the .htaccess file and add the following lines:
    RewriteEngine On
    RewriteCond %{HTTPS} !=on
    RewriteRule ^(.*)$ https://%{SERVER_NAME}%{REQUEST_URI} [redirect=301]

    These directives turn on URL rewriting and redirect any HTTP request to HTTPS. For example, a request like http://yourdomain.com/index.php will be redirected to https://yourdomain.com/index.php.

  3. Blocking users from specific IP addresses: We can block users, networks, and sub-networks from accessing our server using the .htaccess file. This is shown below:
    1. To Block a specific IP address, add the following line to the .htaccess file:
      Deny from w.x.y.z
      Where w.x.y.z is any IP address you want to block.
    2. To block multiple IP addresses, specify each one with space between them.
      Deny from w.x.y.z a.b.c.d
      Where w.x.y.z and a.b.c.d are two different IP addresses.
    3. To Block a complete subnet
      Deny from w.x
      For example, w.x could be 123.162, which blocks all addresses starting with 123.162.
    4. To Block multiple subnets
      Deny from w.x a.b
    5. To Block an entire network
      Deny from w.x.0.0/24
  4. Restricting users from accessing folders and sub-folders: With .htaccess, we can prompt users for authentication when accessing a protected folder.
    1. Log in to your cPanel account.
    2. Create a directory to be protected.
    3. Create a .htaccess file and a password file in the same directory and name the password file as .htpasswd.
    4. Create an encrypted password (htpasswd entry) for the directory to be protected. You can use any online service or software to generate one for you (a command-line example is shown after this list).
    5. Open the .htpasswd in the directory and paste the encrypted password here and save the file.
    6. Open the .htaccess file and select the edit option and insert the following lines of code in the file and save the file:
      AuthName "Authorized Users Only"
      AuthType Basic
      AuthUserFile /home/cpanelusername/public_html/ProtectedFolderPath/.htpasswd
      Require valid-user

      Replace “cpanelusername” with the username of your account. In the AuthUserFile directive, give the path to the .htpasswd file in the protected directory. From now on, authorization will be required to access this folder.
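
As mentioned in step 4 above, the encrypted password can also be generated locally from the command line. A minimal sketch using the htpasswd utility (this assumes the package that provides it, such as apache2-utils, is installed and that the command is run inside the protected directory):

$ htpasswd -c .htpasswd username

The command prompts for a password and writes the hashed entry for "username" to the .htpasswd file.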

How to Install a Node.js App in cPanel

Node.js is one of the most widely used open-source, server-side programming platforms. Developers use it extensively for building cross-platform applications. Once developed, a Node.js application can be deployed on your server. To host your Node.js app using cPanel, follow the steps below:

  1. Login to your cPanel account.
  2. Head to the Software section and select the option for the “SetUp Node.js App” application.
  3. Click the Create Application button to start building your app.
  4. Select the application mode as a development mode to test the app before deploying to the production environment.
  5. In the Application root field, choose the location of the application files. This location is appended to /home/username to form the complete path to your application files. Set it to something like “myapp”.
  6. In the Application URL field, add an entry to form the public URL for your application.
  7. The Application startup file is the entry or index file of the project or application. Set the name of the startup file to app.js.

Creating the package.json file

After creating the Node.js application in cPanel, we need to create a package.json file. The package.json file contains the metadata of the Node.js project.

  1. Open File Manager in cPanel and go to the folder of your Node.js application, i.e., myapp. If you remember, the myapp folder was created in step 5 above when we worked with the first-time wizard of the node.js application.
  2. Create a file and name it package.json. Now, right-click and select the option edit.
  3. Put the following text inside it:
    {
    "name": "myapp",
    "version": "1.0.0",
    "description": "My Node.js App",
    "main": "app.js",
    "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
    },
    "author": "",
    "license": "ISC"
    }
  4. Also, create the index or entry file mentioned in step 7 of the first-time wizard above. You can put your custom code here or simply add “hello world” Node.js code.

Installing NPM (Node Package Manager) packages

NPM uses the package.json file to install all the dependencies. To run the NPM install, follow the steps below:

  1. Select the “Setup Node.js App” option in the Software section.
  2. Here you can see your application running in cPanel, with some icons in the right corner. Use these icons to stop or restart the application.
  3. Now click on the pencil icon, and a button for installing the NPM packages will appear. Just click this button to run the NPM install.
  4. Once the NPM packages are installed, we can check our application by browsing to the application’s public URL.

This completes our quick tour of cPanel and some of its features. I hope you have enjoyed this guide. Please share it with others.

]]>
How to Configure SPICE Server in Debian 10 https://linuxhint.com/configure_spice_server_debian_10/ Wed, 06 Jan 2021 03:20:46 +0000 https://linuxhint.com/?p=84557

The Simple Protocol for Independent Computing Environments, or SPICE, is a protocol used to access and control the remote desktops of virtual machines. It is based on a client-server model, where a server (the SPICE server) is installed on the host machine and runs a guest VM to be accessed over the network. The guest VM is remotely controlled by a client system running a Spice client.

QEMU, an open-source machine emulator and virtualizer, uses the SPICE server to provide remote desktop capabilities. QEMU executes the guest code directly on the host CPU, which improves performance. QEMU itself uses KVM (Kernel-based Virtual Machine), a Linux kernel module, to perform hardware virtualization.

Features of SPICE

  1. Spice supports transmission and handling of 2D graphics commands.
  2. Hardware acceleration through the GPU and CPU of the client.
  3. Uses OpenGL for video streaming, giving a smoother user experience.
  4. To retain the important aspects of an object being transferred, Spice uses lossless compression for images.
  5. For video, Spice employs lossy compression for non-critical areas. This saves a lot of bandwidth and improves Spice performance.
  6. Supports two mouse modes: server and client.
  7. Supports seamless live VM migration between servers connected to a client.

The major components of Spice model are Spice Server, Spice Client and Spice Protocol.

The SPICE server runs inside the QEMU emulator. It uses the libspice-server1 package and  other dependencies to communicate with the remote client. It also manages authentication of client connections.

The Spice client is a utility that runs on the client side. The client connects to the remote guest VM desktop via the Spice client. For this guide, we will be using the remote-viewer tool to access our guest VM. The remote-viewer tool will be installed from the virt-viewer package.

The Spice protocol is a standard protocol for building the communication path between the client and the server side.

Environment Summary

Before we start building things, it is necessary to understand the whole scenario. We are working with a host machine, a guest virtual machine, and a client machine. The host machine runs the QEMU emulator on which we will launch the guest VM. The client machine will be used to connect to the guest virtual machine. The client system can be the host system itself for simplicity, but in our case it is a different PC running Ubuntu 20.04.

 Overall Summary:

  1. Our host machine (Spice server) is Debian 10 (Buster), running the QEMU emulator. IP: 192.168.1.7
  2. The guest VM is Ubuntu 18.04, running inside the QEMU emulator on the above host machine.
  3. The client machines are an Ubuntu 20.04 PC and an Android mobile running a Spice client app called “aSPICE: Secure Spice Client”.

Prerequisites:

  1. A host machine with Debian 10 (Buster) installed.
  2. Hardware virtualization enabled on the host.
  3. Basic knowledge of virtualization on Linux operating systems.

Note: This is a long process involving multiple machines, so please be careful and run each command only on the right machine.

Steps to be performed on Host Machine i.e. Debian 10(Buster)

Step 1.  Enter the following command to execute commands with the super user’s privileges:

$ sudo su

Step 2. Update the repositories and packages on host machine i.e. Debian 10(Buster) before installing Spice Server on it:

# apt update && apt upgrade -y

Step 3. Install the following dependencies and packages required for running the Spice Server:

# apt install -y qemu-kvm libvirt-daemon-system bridge-utils virt-manager gir1.2-spiceclientgtk-3.0

Step 4. Now we have to launch a guest VM (Ubuntu 18.04) inside the Virtual Machine Manager. Follow the steps below:

# virt-manager

This will open up the Virtual Machine Manager on the host machine i.e. our Debian 10(Buster).

a) Inside Virtual Machine Manager menu select File-> New Virtual Machine.

b) Select source of install as local media and click ‘Forward’ button.

Now browse for the .iso image of the OS to install as a guest VM. As mentioned earlier, we are selecting  Ubuntu 18.04 as our guest VM:

c) In next window, select the RAM size and number of CPUs:

d) Now create a storage for your virtual machine:

e) The next window will show you the details of your machine. Keep the network selection to NAT device.

f) Activate the virtual network when prompted.


Now proceed with normal process of installing your selected guest OS.

Step 5. After installing the guest OS, go to the Virtual Machine Manager and select Virtual Machine Details  as shown below:

A new window will open up showing the details of our selected guest VM(Ubuntu 18.04).

You can change the name and other configuration of your guest VM like RAM, number of CPUs etc from here.

Step 6. Now go to the “Display Spice” option and, inside the “Address” text-box, select the option “All interfaces”. This will allow us to view our guest VM from any device on the LAN running a Spice client utility.

Click apply to save the changes.

Note: You will need to restart your guest OS for applying certain changes.

g) Now start the virtual machine from  Virtual Machine Manager main window as shown below:


This completes our host machine configuration for installing the Spice server. We have also launched a guest VM inside the QEMU emulator installed on the host machine.
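
Once the guest VM is running, you can confirm from the host that the Spice server is listening on all interfaces. A quick check, assuming the default Spice port 5900 used for the first guest (your port may differ and is shown in the same “Display Spice” dialog):

$ sudo ss -tlnp | grep 5900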

Steps to be performed on Client Machine (Ubuntu 20.04)

The client machine requires a Spice client installed on it for viewing the guest VM. Follow the steps below on the client machine (Ubuntu 20.04).

Step 1. Update the repositories and packages on client System:

$ sudo apt update && sudo apt upgrade -y

Step 2. Now install the following required packages for running spice client:

$ sudo apt install virt-viewer -y

Step 3. Now  to open the remote viewer, run the following command. The remote viewer tool is installed from the virt-viewer package.

$ sudo remote-viewer

A new small window will open up, as shown below. Enter the IP address of the host machine and the port of the Spice server.
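
Alternatively, remote-viewer accepts the connection URI directly on the command line. A quick sketch, assuming the host IP 192.168.1.7 from our setup and the default Spice port 5900 (replace both with your own values):

$ remote-viewer spice://192.168.1.7:5900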


If you have correctly followed up to this step, you should see the screen of guest VM(Ubuntu 18.04) on the client VM(Ubuntu 20.04) as here:

Spice Client for Mobile Device (Android)

We can also use a Spice client on an Android device for viewing our guest VM. Just follow the steps below:

1. Go to the Play Store and download the app “aSPICE: Secure Spice Client”.

2. Now open the app and click the PC icon at the top right to add a connection.


3. Enter the IP address of the host machine, Debian 10 (Buster), on which the guest VM is running. Save the configuration.


4. An icon representing the guest VM will appear on the main window. Now click on this icon to launch the guest VM, as shown below:


This completes today’s guide on installing the Spice server on Debian 10 (Buster). I hope you have enjoyed the guide. Please do not forget to share it with others.

]]>
OpenLDAP beginner guide https://linuxhint.com/openldap-beginner-guide/ Tue, 05 Jan 2021 15:54:51 +0000 https://linuxhint.com/?p=84466 OpenLDAP is a free and open-source implementation of LDAP (Lightweight Directory Access Protocol). Many organizations use the LDAP protocol for centralized authentication and directory access services over a network. OpenLDAP is developed by the OpenLDAP Project and overseen by the OpenLDAP Foundation.

OpenLDAP software can be downloaded from the project’s download page at http://www.openldap.org/software/download/. OpenLDAP is very similar to Microsoft’s Active Directory.

OpenLDAP consolidates the data of an entire organization into a central repository or directory. This data can be accessed from any location on the network. OpenLDAP provides support for Transport Layer Security (TLS) and the Simple Authentication and Security Layer (SASL) for data protection.

Features of OpenLDAP Server

  • Supports the Simple Authentication and Security Layer and Transport Layer Security (requires the OpenSSL libraries)
  • Supports Kerberos-based authentication services for OpenLDAP clients and servers
  • Supports IPv6
  • Supports running as a stand-alone daemon
  • Supports multiple database back-ends, viz. MDB, BDB, HDB
  • Supports LDIF (LDAP Data Interchange Format) files
  • Supports LDAPv3

In this guide, we will see how to install and configure the OpenLDAP server on Debian 10(Buster) OS.

Some LDAP Terminologies used in this guide:

  1. Entry — A single unit in an LDAP directory, identified by its unique Distinguished Name (DN).
  2. LDIF (LDAP Data Interchange Format) — An ASCII text representation of LDAP entries. Files containing the data to be imported into LDAP servers must be in LDIF format.
  3. slapd — The stand-alone LDAP server daemon.
  4. slurpd — A daemon used to synchronize changes from one LDAP server to other LDAP servers on the network. It is used when multiple LDAP servers are involved.
  5. slapcat — This command is used to pull entries from an LDAP directory and save them in an LDIF file.

Configuration of our machine :

  • Operating System: Debian 10(Buster)
  • IP Address : 10.0.12.10
  • Hostname: mydns.linuxhint.local

Steps for installing OpenLDAP Server on Debian 10(Buster)

Before proceeding with the installation, first update the repository and installed packages with the following commands:

$ sudo apt update

$ sudo apt upgrade -y

Step 1. Install the slapd package (the OpenLDAP server).

$ sudo apt-get install slapd ldap-utils -y

Enter the admin password when prompted.

Step 2. Check the status of the slapd service with the following command:

$ sudo systemctl status slapd.service

Step 3. Now configure slapd with the command given below:

$ sudo dpkg-reconfigure slapd

After running the above command, you will be prompted for several questions:

  1. Omit OpenLDAP server configuration?

    Here you have to click ‘No’.

  2. DNS domain name:

    Enter the DNS domain name used to construct the base DN (Distinguished Name) of your LDAP directory. You may enter any name that best suits your requirements. We are using mydns.linuxhint.local as our domain name, which we have already set up on our machine.

    Tip: It is suggested to use the .local TLD for the internal network of an organization. This avoids conflicts between internally used and externally used TLDs like .com, .net, etc.

    Note: We recommend noting down your DNS domain name and administrative password on a piece of paper. They will be helpful later when we configure the LDAP configuration file.

  3. Organization name:

    Here enter the name of the organization you want to use in the base DN and press enter. We are taking linuxhint.

  4. Now, you will be asked for the administrative password which you set earlier while installing in the very first step.

    When you press enter, it will again ask you to confirm the password. Just enter the same password again and enter to continue.

  5. Database backend to use:

    Select the database for the back-end as per your requirement. We are selecting MDB.

  6. Do you want the database to be removed when slapd is purged?

    Enter ‘No’ here.

  7. Move the old database?

    Enter ‘Yes’ here.

After completing the above steps, you will see the following output on the terminal window:

Backing up /etc/ldap/slapd.d in /var/backups/slapd-2.4.47+dfsg-3+deb10u4... done.

  Moving the old database directory to /var/backups:

  - directory unknown... done.

  Creating initial configuration... done.

  Creating LDAP directory... done.

To verify the configuration, run the following command:

$ sudo slapcat

It should produce an output something like below:

dn: dc=mydns,dc=linuxhint,dc=local

objectClass: top

objectClass: dcObject

objectClass: organization

o: linuxhint

dc: mydns

structuralObjectClass: organization

entryUUID: a1633568-d9ee-103a-8810-53174b74f2ee

creatorsName: cn=admin,dc=mydns,dc=linuxhint,dc=local

createTimestamp: 20201224044545Z

entryCSN: 20201224044545.729495Z#000000#000#000000

modifiersName: cn=admin,dc=mydns,dc=linuxhint,dc=local

modifyTimestamp: 20201224044545Z


dn: cn=admin,dc=mydns,dc=linuxhint,dc=local

objectClass: simpleSecurityObject

objectClass: organizationalRole

cn: admin

description: LDAP administrator

userPassword:: e1NTSEF9aTdsd1h0bjgvNHZ1ZWxtVmF0a2RGbjZmcmF5RDdtL1c=

structuralObjectClass: organizationalRole

entryUUID: a1635dd6-d9ee-103a-8811-53174b74f2ee

creatorsName: cn=admin,dc=mydns,dc=linuxhint,dc=local

createTimestamp: 20201224044545Z

entryCSN: 20201224044545.730571Z#000000#000#000000

modifiersName: cn=admin,dc=mydns,dc=linuxhint,dc=local

modifyTimestamp: 20201224044545Z

Now again, check the status of our OpenLDAP server using the below command:

$ sudo systemctl status slapd

It should show an active (running) status. If this is the case, then everything is set up correctly so far.

Step 4. Open and edit the /etc/ldap/ldap.conf to configure OpenLDAP. Enter the following command:

$ sudo nano /etc/ldap/ldap.conf

You can also use some other text editor besides nano, whichever is available in your case.

Now uncomment the lines that begin with BASE and URI by removing the “#” at the start of each line. For BASE, add the domain components of the domain name you entered while setting up the OpenLDAP server configuration. In the URI section, add the hostname or IP address of the server; the default LDAP port 389 is used unless a different port is appended. Here is the snippet of our config file after the modifications:

#
# LDAP Defaults
#

# See ldap.conf(5) for details
# This file should be world-readable but not world-writable.

BASE    dc=mydns,dc=linuxhint,dc=local
URI     ldap://mydns.linuxhint.local ldap://mydns.linuxhint.local:666

#SIZELIMIT      12
#TIMELIMIT      15
#DEREF          never

# TLS certificates (needed for GnuTLS)
TLS_CACERT      /etc/ssl/certs/ca-certificates.crt

Step 5: Now check if the ldap server is working by the following command:

$ ldapsearch -x

It should produce an output similar to the one below :

# extended LDIF

#

# LDAPv3

# base  (default) with scope subtree

# filter: (objectclass=*)

# requesting: ALL

#
# mydns.linuxhint.local

dn: dc=mydns,dc=linuxhint,dc=local

objectClass: top

objectClass: dcObject

objectClass: organization

o: linuxhint

dc: mydns


# admin, mydns.linuxhint.local

dn: cn=admin,dc=mydns,dc=linuxhint,dc=local

objectClass: simpleSecurityObject

objectClass: organizationalRole

cn: admin

description: LDAP administrator


# search result

search: 2

result: 0 Success


# numResponses: 3

# numEntries: 2

If you get a success message, as highlighted in the above output, this means that your LDAP server is correctly configured and is working properly.

That’s all for installing and configuring OpenLDAP on Debian 10 (Buster).

What you can do next is to:

  1. Create OpenLDAP user accounts (a minimal example is sketched after this list).
  2. Install phpLDAPadmin to administer your OpenLDAP server from a front-end web-based application.
  3. Try installing the OpenLDAP server on other Debian-based distros like Ubuntu, Linux Mint, Parrot OS, etc.
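
For the first item, here is a minimal sketch of adding a user entry with ldapadd. It assumes a hypothetical file named user.ldif containing an inetOrgPerson entry under our base DN (dc=mydns,dc=linuxhint,dc=local); you will be prompted for the admin password set during configuration.

Contents of user.ldif:

dn: uid=testuser,dc=mydns,dc=linuxhint,dc=local
objectClass: inetOrgPerson
cn: Test User
sn: User
uid: testuser
userPassword: testpassword

Load it into the directory:

$ ldapadd -x -D "cn=admin,dc=mydns,dc=linuxhint,dc=local" -W -f user.ldif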

Also, do not forget to share this guide with others.

]]>
Install and Configure Squid Proxy Server on Debian 10 (Buster) https://linuxhint.com/install_squid_proxy_server_debian/ Fri, 01 Jan 2021 20:43:51 +0000 https://linuxhint.com/?p=83864

Squid is one of the most widely used proxy servers for controlling internet access from the local network and securing the network from illegitimate traffic and attacks. A proxy server is placed between the clients and the internet, and all requests from the clients are routed through it. Squid works with a number of protocols, like the Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), and other network protocols.

Besides serving as a proxy server, Squid is widely used for caching frequently visited web pages from a web server. So when a user requests a page, the request first goes through the proxy server, which checks whether the requested content is already available locally. This reduces the server load and bandwidth usage and speeds up content delivery, thus improving the user’s experience.

Squid can also be used to stay anonymous while surfing the internet. Through Squid proxying, we can access content that is restricted in a particular country.

In this guide, we will see how to install and configure the Squid proxy server on Debian 10 (Buster).

Prerequisites:

  1. “sudo” access to the system upon which Squid will be installed.
  2. Basic knowledge of Debian based Linux terminal commands.
  3. Basic knowledge of using a Proxy server.

Steps For Installing squid on Debian 10(Buster)

1) First update the repository and packages on Debian 10(Buster)

$ sudo apt update

$ sudo apt upgrade -y

2) Now install the Squid package with the following command:

$ sudo apt install squid


The installation process is pretty straightforward. It will automatically install any required dependencies.

3) Now go to the main configuration file of the Squid Proxy Server located in /etc/squid/squid.conf.

$ sudo nano /etc/squid/squid.conf


Note: To stay safe, take a backup of this file before editing it.

4) To allow HTTP proxy access for everyone, go to the line containing the string “http_access deny all” and change it to “http_access allow all”. If you are using the vi or vim editor, you can jump directly to this string using a forward-slash (/) search.

Now just remove the “#” symbol at the start of this string to uncomment the line.

We will only allow localhost and our local network (LAN) devices to use Squid for more precise control. For this, we will change the squid.conf file as below:

 “http_access deny localnet” to “http_access allow localnet” 

 “http_access deny localhost” to “http_access allow localhost”.


Now restart Squid service to apply changes.

5) Now go to the line specifying the “http_port” option. It contains the port number the Squid proxy server listens on. The default port number is 3128. If needed, for example because of a port number conflict, you can change the port number to some other value, as shown below:

http_port 1256

6) You can also change the hostname of the Squid proxy server with the visible_hostname option. Also restart the Squid service each time the configuration file is modified. Use  the following command:

$ sudo systemctl restart squid

7) Configuring Squid ACL

a) Define a rule to only allow a particular IP address to connect.

Go to the line containing the string #acl localnet src and uncomment it. If the line is not there, just add a new one. Now add any IP address you want to allow to access the Squid server. This is shown below:

acl localnet src 192.168.1.4 # IP of your computer

Save the file and restart the squid server.

b)  Define a rule to open a port for connection.

To open a port, uncomment the line “#acl Safe_ports port” and add a port number you want to allow:

acl Safe_ports port 443

Save the file and restart the squid server.

c) Use Squid Proxy to block access to specific websites.

To block access to certain websites using Squid, create a new file called blocked.acl in the same location as squid.conf.

Now specify websites you want to block by stating their address starting with a dot:

.youtube.com

.yahoo.com

Now open the Squid configuration file again and add an acl line named blocked_websites that points to the “blocked.acl” file, as shown below:

acl blocked_websites dstdomain "/etc/squid/blocked.acl"

Also add a line below this as:

http_access deny blocked_websites

Save the file and restart the squid server.

Similarly, we can create a new file to store the IP addresses of allowed clients that will use the Squid proxy.

$ sudo nano /etc/squid/allowedHosts.txt

Now specify IP addresses you want to allow and save the file. Now create a new acl line in the main config file and allow access to the acl using the http_access directive. These steps are shown below:

acl allowed_ips  src "/etc/squid/allowedHosts.txt"

http_access allow allowed_ips

Save the file and restart the squid server.

Note: We can also add the IP addresses of allowed and denied clients in the main configuration file, as shown below:

acl myIP1 src 10.0.0.1

acl myIP2 src 10.0.0.2

http_access allow  myIP1

http_access allow  myIP2

d) Changing squid port

The default port of Squid is 3128, which can be changed from squid.conf to any other value as shown below:

Save the file and restart the squid server.

Configuring Client for the Squid Proxy Server

The best thing about Squid is that all the configuration is done on the server side itself. To configure a client, you just need to enter the Squid settings in the web browser’s network settings.

Let’s do a simple test of proxying with the Firefox web browser. Just go to Menu > Preferences > Network Settings > Settings.

A new window will open up. In the “Configure Proxy Access to the Internet” section, select “Manual proxy configuration”. In the text box labelled “HTTP Proxy”, put the Squid proxy server’s IP address. In the text box labelled Port, enter the port number you specified for “http_port” inside the squid.conf file.
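
If you prefer to test from a terminal instead of a browser, curl can send a request through the proxy with its -x option. A quick sketch, where squid_server_ip is a placeholder for your Squid server's IP address and 3128 is the default http_port value (substitute your own port if you changed it):

$ curl -x http://squid_server_ip:3128 -I http://www.google.com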


In the address bar of the browser, go to any website address (e.g., www.google.com). You should be able to browse that website. Now return to the Squid server and stop the service with the command:

$ sudo systemctl stop squid.service

Again, check the website by refreshing the page. This time you should see the error below:


There is a lot we can do with Squid. It has extensive documentation available on its official site, where you can learn how to configure Squid with third-party applications, configure proxy authentication, and much more. Meanwhile, try blocking a specific website or IP addresses, changing Squid’s default port, or deploying caching to speed up data transfer.

]]>
Install HAProxy to Configure Load Balancing Server on Debian 10 https://linuxhint.com/install_haproxy_configure_load_balancing_server_debian_10/ Wed, 30 Dec 2020 16:06:22 +0000 https://linuxhint.com/?p=83336

Load balancing is the most common practice of distributing incoming web traffic among multiple back-end servers. This makes the application highly available even if some of the servers go down for some reason. Load balancing increases the efficiency and reliability of a web application. The HAProxy load-balancer is used for this purpose. It is one of the most widely used load-balancers in the industry. As per the official website, HAProxy is used by leading companies like AWS, Fedora, GitHub, and many more.

HAProxy, or High Availability Proxy, provides a high-availability and proxying solution. It is written in C and works at the network and application layers of the TCP/IP model. The best thing is that it has a free community edition, and it is an open-source application. It works on Linux, FreeBSD, and Solaris operating systems. An enterprise edition is also available, but it comes with a price tag.

In this guide, we will see How to Install HAProxy and Configure the Load Balancing Server on Debian 10.

Prerequisites:

  1. “sudo” access to all the machines and basic knowledge of running commands in Linux terminal.
  2. Private IP addresses added to load-balancer and backend servers.
  3. Debian 10 Operating System installed on all machines.

Installing HAProxy on Debian 10

For our guide, we will assume the following IP address configuration:

  1. HAProxy load-balancer: IP Address: 10.0.12.10
  2. Web server1: IP Address: 10.0.12.15
  3. Web server2: IP Address: 10.0.12.16

Step 1. Update Debian System repository and packages

First, run the below commands on all systems to update software packages to the latest one.

$ sudo apt update

$ sudo apt upgrade -y

Step: 2 Install Nginx on back-end servers

Prepare your back-end servers by installing Nginx web server on each. You can also choose to install other web servers like apache.

To install Nginx, run the following commands on each back-end server in your environment:

$ sudo apt install nginx

Step: 3 After Nginx is installed on your back-end servers, start the service, as shown below:

$  sudo systemctl start nginx

TIP: We can also manage the nginx web server using the below command:

$ sudo /etc/init.d/nginx “option”

option: start reload restart status stop

Step: 4 Create custom index pages in the web folder of each Nginx web server. This will help us to distinguish which back-end server is serving the incoming requests.

On each web server, perform the following tasks:

Backup the original index file using the following command:

$ sudo cp /usr/share/nginx/html/index.html /usr/share/nginx/html/index.html.orig

Add custom text to the index.html file. We are adding the IP address of each web server.

For web server 1:

$ sudo echo "Web server 1: 10.0.12.15"  | sudo tee /usr/share/nginx/html/index.html

For web server 2:

$ sudo echo "Web server 2: 10.0.12.16"  | sudo tee /usr/share/nginx/html/index.html

You can also use vi editor if you feel more comfortable with that. This is shown below:

$ sudo vi /usr/share/nginx/html/index.html

When the file is opened, enter the text and save the file.

Open the default virtual host file in the “/etc/nginx/sites-available/” directory.

$ sudo nano /etc/nginx/sites-available/default

Now inside the server block, change the root directive from “/var/www/html” to “/usr/share/nginx/html”.

To check the Nginx configuration, run the following command:

$ sudo nginx -t

Step 5: Now restart the service using the command:

$ sudo systemctl restart nginx

You can check the status of nginx using the following command:

$ sudo systemctl status nginx

Step: 6 To install HAProxy on Debian 10 (Buster), run the following command on the load-balancer.

$ sudo apt install haproxy -y

Tip: Once HAProxy is installed, you can manage HAProxy via an init script. For this, set the “enabled” parameter to 1 in “/etc/default/haproxy” as shown below:

$ sudo vi /etc/default/haproxy

ENABLED=1

Now the following option can be used with an init script:

$ sudo service haproxy "option"

option: start reload restart status stop

Step: 7 Now configure the HAProxy load-balancer by editing the default HAProxy configuration file, i.e. “/etc/haproxy/haproxy.cfg”. To edit this file, run the following command:

$ sudo vi /etc/haproxy/haproxy.cfg

Tip: Please back up the original file so that if something goes wrong, we can restore it. To perform the backup, use the following command:

$ sudo cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.orig

Now go to the end of the file and add the following configuration:

frontend Local_Server

bind 10.0.12.10:80

mode http

default_backend webserver

backend webserver

mode http

balance roundrobin

option forwardfor

http-request set-header X-Forwarded-Port %[dst_port]

http-request add-header X-Forwarded-Proto https if { ssl_fc }

option httpchk HEAD / HTTP/1.1\r\nHost:\ localhost

server web1 10.0.12.15:80 check

server web2 10.0.12.16:80 check

Note: Do not forget to change the IP addresses in the above file to the one you have added to your web servers.

Step: 8 Verify the configuration syntax of the above file with the following command:

$ sudo haproxy -c -f /etc/haproxy/haproxy.cfg

If everything goes right, it will show an output like: “Configuration file is valid.” If you get any error in the output, recheck your configuration file and verify it again.

Step: 9 Now restart the HAProxy service to apply the  changes

$ sudo service haproxy restart

Testing The Configuration

Now it is time to see if our setup is working properly. Enter the load-balancer system’s IP in a web browser (in our case, it is 10.0.12.10) and refresh the page 2-4 times to see if the HAProxy load-balancer is working properly. You should see different IP addresses, or whatever text you entered in the index.html files, as you keep refreshing the page.

Another way to check is to take one web server offline and verify that the other web server serves the requests.
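
The same check can also be scripted from any machine on the network. A small sketch using curl in a loop against the load-balancer IP from our setup (10.0.12.10); if round-robin is working, the two custom index pages should be printed alternately:

$ for i in 1 2 3 4; do curl -s http://10.0.12.10; done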

That’s all for now! Try experimenting with HAProxy to learn more about how it works. For example, you can try:

  • Integrating a different web server besides Nginx.
  • Changing the load-balancing algorithm to something other than round-robin.
  • Configuring HAProxy health checks to determine whether a back-end server is working or not.
  • Applying sticky sessions to connect a user to the same back-end server.
  • Using HAProxy stats to get insights into the traffic on the servers (a minimal example is sketched after this list).
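
For the last item, here is a minimal sketch of a stats listener that could be appended to /etc/haproxy/haproxy.cfg; the port 8404 and the /stats URI are arbitrary choices rather than defaults, and the haproxy service must be restarted afterwards:

listen stats
bind *:8404
mode http
stats enable
stats uri /stats
stats refresh 10s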

HAProxy has extensive documentation available for both the HAProxy community edition and HAProxy enterprise version. Explore this documentation to get more insights into improving the performance and reliability of your server environment.

This guide has been successfully performed on Debian 10 (Buster). Try installing HAProxy on other Debian-based distros like Ubuntu, Linux Mint, etc. Please do not forget to share this guide with others.

]]>