How to Setup Virtualization on Synology NAS?

Synology has official support for virtualization on its NAS products. You can create and run Windows or Linux virtual machines on your Synology NAS and turn it into a software development environment.

To run virtual machines on your Synology NAS, you need to have at least 8 GB of memory installed (16 GB or more, depending on your requirements). Then, you can install and use the official Virtual Machine Manager app to create and manage your virtual machines from the Synology Web GUI.

In this article, I will show you how to set up virtualization on your Synology NAS and create a Windows 10 and an Ubuntu 20.04 LTS virtual machine on your Synology NAS. I will be using the Synology NAS model DS1821+ for the demonstrations. So, let’s get started!

Copying ISO Image to Synology NAS Share:

First, you need to copy the ISO image files (of the operating systems you want to install on your Synology NAS virtual machines) to the Synology NAS.

You can upload ISO image files from the Synology Web GUI using the File Station app. You can also connect to your Synology shares from Windows or Linux and copy the ISO image to your share.

In this article, I will copy the ISO image from my computer to a share of my Synology NAS.

To access your Synology NAS shares, you need to know the IP address of your Synology NAS. You can find it from the Synology Web GUI. As you can see, the IP address of my Synology NAS is 192.168.0.110. It will be different for you. So, make sure to replace it with yours from now on.

From the Windows 10 operating system, navigate to \\192.168.0.110 from the File Explorer app to access the Synology NAS shares.

From a Linux operating system, navigate to smb://192.168.0.110 from any file manager app to access the shares on your Synology NAS.

As you can see, the Synology NAS shares are listed on my Debian GNU/Linux operating system.

Now, copy the ISO image files to one of your Synology NAS shares.

I have copied the Windows 10, KDE Neon, and Ubuntu 20.04 LTS ISO images on my Synology NAS share share1.
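If you prefer the command line on Linux, you can also copy the ISO images to a share using the cifs-utils package. The following is only a rough sketch, assuming the share is named share1, your Synology username is synouser, and the ISO file name matches what you downloaded; replace these placeholders with your own values. You will be asked for the password of your Synology user when mounting the share.

$ sudo apt install cifs-utils
$ sudo mkdir -p /mnt/share1
$ sudo mount -t cifs //192.168.0.110/share1 /mnt/share1 -o username=synouser
$ cp ~/Downloads/ubuntu-20.04.2-desktop-amd64.iso /mnt/share1/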

Installing Virtual Machine Manager:

To create virtual machines on your Synology NAS, you need to install the Virtual Machine Manager app on your Synology NAS. The Virtual Machine Manager app is available in the Package Center of your Synology NAS.

Open the Package Center app from the Synology Web GUI, as marked in the screenshot below.

The Package Center app should be opened.

Search for the keyword virt and you should find the Virtual Machine Manager package, as you can see in the screenshot below.

Click on Install to install the Virtual Machine Manager package.

Click on Yes.

The package is now being downloaded. It may take a while to download the package.

Once the package is downloaded, you will see the following window.

You need to select a volume where you want to install the downloaded package.

Select a volume using the dropdown menu and click on Next.

Click on Apply.

The package is being installed.

The Virtual Machine Manager app should be installed.

Once the Virtual Machine Manager app is installed, start the Virtual Machine Manager app from the Main Menu of Synology Web GUI.

You have to configure the Virtual Machine Manager app for the first time.

Click on Next.

Click on Next.

Click on Yes.

Select a volume or multiple volumes where you want to store your virtual machine data and click on Next.

Click on Finish.

Virtual Machine Manager should be configured. Now, you can create and manage your virtual machines from the Virtual Machine Manager app.

Creating an Ubuntu 20.04 LTS Virtual Machine:

In this section, I am going to show you how to create an Ubuntu 20.04 LTS virtual machine on your Synology NAS.

To create a new virtual machine, click on Create from the Virtual Machine section of the Virtual Machine Manager app, as you can see in the screenshot below.

Select Linux as the operating system and click on Next.

Now, you have to select a storage volume where you want to save the virtual machine data. Select a storage volume from the list and click on Next.

Type in a name for the virtual machine, the number of CPU cores you want it to have, and the amount of memory you want to allocate to it.

I am going to call the virtual machine vm1-ubuntu20, and allocate 2 CPU cores and 4 GB of memory to it.

Once you’re done, click on Next.

Type in the amount of disk space you want to allocate to the virtual machine and click on Next.

I will allocate 20 GB of disk space to the virtual machine vm1-ubuntu20.

Click on Next.

You have to select the ISO installation image file that you will be using to install the operating system on the virtual machine from here.

To select the Ubuntu 20.04 LTS ISO image, click on Browse from the ISO file for bootup section, as marked in the screenshot below.

Select the Ubuntu 20.04 LTS ISO image from the Synology NAS share and click on Select, as marked in the screenshot below.

The Ubuntu 20.04 LTS ISO image should be selected as the ISO file for bootup, as you can see in the screenshot below.

Once the ISO image is selected, click on Next.

Select the users that you want to allow access to the virtual machine and click on Next.

The settings that will be used to create the virtual machine should be displayed. To create a virtual machine with those settings, click on Apply.

A new virtual machine vm1-ubuntu20 should be created, as you can see in the screenshot below.

To power on the virtual machine vm1-ubuntu20, select the virtual machine and click on Power on, as marked in the screenshot below.

The virtual machine vm1-ubuntu20 should be Running.

Once the virtual machine vm1-ubuntu20 is Running, select the virtual machine and click on Connect.

A new browser tab should be opened with the display of the virtual machine, as you can see in the screenshot below.

The Ubuntu 20.04 LTS installer should be loaded by the time you connect to the virtual machine. You can install Ubuntu 20.04 LTS on the virtual machine from here.

To install Ubuntu 20.04 LTS on the virtual machine, click on Install Ubuntu, as marked in the screenshot below.

Select your keyboard layout and click on Continue.

Click on Continue.

As I am installing Ubuntu 20.04 LTS on a virtual machine, I won’t manually partition the hard drive of the virtual machine. I will use automatic partitioning, just to make things easy.

So, select Erase disk and install Ubuntu and click on Install Now.


The Ubuntu 20.04 LTS installer will automatically create all the required partitions, and it will ask you whether you would like to save the changes to the disk.

Click on Continue.

Select your time zone and click on Continue.

Type in your personal information and click on Continue.

Ubuntu 20.04 LTS is being installed. It may take a while to complete.

Ubuntu 20.04 LTS is being installed.

Once Ubuntu 20.04 LTS is installed, click on Restart Now.

Press <Enter> to boot Ubuntu 20.04 LTS from the hard drive.

Ubuntu 20.04 LTS is booting from the hard drive.

After a few seconds, you should see the login window of Ubuntu 20.04 LTS. You can log in to your Ubuntu 20.04 LTS virtual machine with the username and password you’ve set during the installation.

Once you log in, you should see the Ubuntu 20.04 desktop like the screenshot below.

As you can see, I am running Ubuntu 20.04.2 LTS, which uses Linux kernel 5.8.0.

Now, you should remove the Ubuntu ISO image from the virtual machine. To do that, you need to shut down the virtual machine.

You can shut down your virtual machine with the following command:

$ sudo poweroff

Once your virtual machine is powered off, right-click (RMB) on the virtual machine and click on Edit, as marked in the screenshot below.

Navigate to the Others section.

As you can see, the Ubuntu 20.04 LTS ISO image is selected for the virtual machine.

From the ISO file for bootup dropdown menu, select Unmounted, as marked in the screenshot below.

Once you’ve selected Unmounted from the ISO file for bootup dropdown menu, click on OK.

The Ubuntu 20.04 LTS ISO image should be removed from the virtual machine.

Now, select the virtual machine and click on Power on.

Once the virtual machine is running, click on Connect.

Once you connect to the virtual machine and log in to Ubuntu 20.04 LTS, you need to install QEMU Guest Agent. QEMU Guest Agent will report usage information (network, disk, memory, CPU, etc.) to the Virtual Machine Manager app of your Synology NAS.

Open a Terminal on your Ubuntu 20.04 LTS virtual machine and run the following command to update the APT package repository cache:

$ sudo apt update

To install QEMU Guest Agent on your Ubuntu 20.04 LTS virtual machine, run the following command:

$ sudo apt install qemu-guest-agent -y

QEMU Guest Agent is being installed.

QEMU Guest Agent should be installed at this point.
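If you want to make sure the agent service is enabled and active before rebooting, you can check it with systemd. This is an optional, hedged check; qemu-guest-agent is the service name used by the Ubuntu package:

$ sudo systemctl enable --now qemu-guest-agent
$ systemctl status qemu-guest-agent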

Once QEMU Guest Agent is installed, reboot your virtual machine for the changes to take effect with the following command:

$ sudo reboot

As you can see, the Virtual Machine Manager can now show you the IP address of the vm1-ubuntu20 virtual machine. When the virtual machine vm1-ubuntu20 is running, QEMU Guest Agent sends the IP address information of the virtual machine to the Virtual Machine Manager app.

Creating a Windows 10 Virtual Machine:

In this section, I am going to show you how to create a Windows 10 virtual machine on your Synology NAS.

Before you create a Windows 10 virtual machine, you need to download the Synology Guest Tool ISO image.

To download the Synology Guest Tool ISO image, open Virtual Machine Manager and navigate to the Image Section.

Then, from the ISO File tab, click on Download Synology Guest Tool, as marked in the screenshot below.

Click on Download.

The Synology Guest Tool ISO image is being downloaded. It may take a while to complete depending on your internet connection.

Synology Guest Tool ISO image should be downloaded, as you can see in the screenshot below.

Now, navigate to the Virtual Machine section and click on Create to create a new virtual machine.

Select Microsoft Windows and click on Next.

Select a storage volume where you want to store the virtual machine data and click on Next.

Type in a name for the virtual machine, the number of CPU cores you want the virtual machine to have, and the amount of memory you want to allocate to the virtual machine.

I am going to call the virtual machine vm2-win10, then allocate 2 CPU cores and 8 GB of memory to it.

Type in the amount of disk space you want to allocate to the virtual machine and click on Next.

I will allocate 100 GB of disk space to the virtual machine vm2-win10.

Click on Next.

To select the Windows 10 ISO image, click on Browse from the ISO file for bootup section, as marked in the screenshot below.

Select the Windows 10 ISO image from the Synology NAS share and click on Select, as marked in the screenshot below.

The Windows 10 ISO image should be selected as the ISO file for bootup, as you can see in the screenshot below.

Select the Synology_VMM_Guest_Tool ISO image from the Additional ISO file dropdown menu.

Once you’ve selected the Windows 10 ISO image and the Synology VMM Guest Tool for the virtual machine, click on Next.

Select the users that you want to allow access to the virtual machine and click on Next.

The settings that will be used to create the virtual machine should be displayed. To create a virtual machine with those settings, click on Apply.

A new virtual machine vm2-win10 should be created, as you can see in the screenshot below.

To power on the virtual machine vm2-win10, select the virtual machine and click on Power on.

Once the virtual machine vm2-win10 is Running, click on Connect to connect to the display of the virtual machine.

The Windows 10 installer should start. You can install Windows 10 on the virtual machine from here.

Select the language, time and currency format, and keyboard layout, then click on Next.

Click on Install now.

The Windows setup wizard is being loaded. It may take a few seconds to complete.

Click on I don’t have a product key, as marked in the screenshot below.

Select the version of Windows 10 you want to install on the virtual machine and click on Next.

I will install Windows 10 Pro 64-bit on this virtual machine.

Check the I accept the license terms checkbox and click on Next.

Click on Custom: Install Windows only (advanced), as marked in the screenshot below.

Select the virtual hard disk and click on Next.

Windows 10 is being installed on the virtual machine. It may take a few minutes to complete.

Windows 10 is being installed on the virtual machine.

Once the installation is complete, the virtual machine should restart in 10 seconds.

Once the virtual machine starts, Windows 10 should be getting ready. It may take a while to complete.

Windows 10 is getting ready.

Once you see the following window, select your country and click on Yes.

Select your keyboard layout and click on Yes.

Click on Skip.

Windows 10 is being set up. It may take a while to complete.

Windows 10 is being set up.

Select Set up for personal use and click on Next.

Click on Offline account, as marked in the screenshot below.

Click on Limited experience, as marked in the screenshot below.

Type in your full name and click on Next.

Type in a login password and click on Next.

If you don’t want to set a login password, leave it blank and click on Next.

Click on Accept.

If you want to use Cortana, click on Accept.

If you don’t want to use Cortana, click on Not Now.

Windows 10 is being set up. It may take a few minutes to complete.

Windows 10 is being set up.

Once Windows 10 is successfully set up, it should start as you can see in the screenshot below.

Now, you have to install the Synology Guest Tool. To do that, open the File Explorer app and navigate to the SYNOLOGY_VMMTOOL CD drive, as marked in the screenshot below.

Run the Synology_VMM_Guest_Tool installer program, as marked in the screenshot below.

Click on Next.

Check the I accept the terms in the License Agreement checkbox and click on Next.

Click on Next.

Click on Install.

Click on Yes.

Synology Guest Tool is being installed on the Windows 10 virtual machine.

Once you see the following prompt, check the Always trust software from “Red Hat, Inc.” checkbox and click on Install, as marked in the screenshot below.

Once the Synology Guest Tool is installed, click on Finish.

For the changes to take effect, click on Yes to restart the Windows 10 virtual machine.

As you can see, the Virtual Machine Manager app can now show you the IP address of the vm2-win10 virtual machine. When the vm2-win10 is running, Synology Guest Tool sends the IP address information of the virtual machine to the Virtual Machine Manager app.

Now, shut down the vm2-win10 virtual machine. Then, right-click (RMB) on the vm2-win10 virtual machine and click on Edit, as shown in the screenshot below.

Navigate to the Others tab and make sure that the Unmounted option is selected in both the ISO file for bootup and the Additional ISO file dropdown menus. Once you’re done, click on OK.

Taking Snapshots of the Virtual Machine:

You can save the state of a virtual machine by taking a snapshot of it from the Virtual Machine Manager app. Before you attempt to do experiments on your virtual machines that may break the operating system or may remove important files, you can take snapshots of them. If anything breaks after the experiments, you can restore the virtual machines to their previous state (where you took the snapshot) and get the virtual machines up and running again.

To show you how to take snapshots and restore virtual machines from snapshots, I have prepared a simple example.

On my Ubuntu 20.04 LTS virtual machine vm1-ubuntu20, I have created a helloworld/ directory and a main.c file, as you can see in the screenshot below.
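If you want to reproduce this test setup, you can create the directory and file from a Terminal on the virtual machine. The names below are just the ones used in this example; the contents of main.c do not matter for the snapshot demonstration:

$ mkdir ~/helloworld
$ touch ~/helloworld/main.c
$ ls -lh ~/helloworld/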

Let’s take a snapshot of the vm1-ubuntu20 virtual machine while this directory is available in the virtual machine.

Before you take a snapshot of the virtual machine, you must power off or shut it down.

To shut down the Ubuntu 20.04 LTS virtual machine vm1-ubuntu20, run the following command:

$ sudo poweroff

The virtual machine vm1-ubuntu20 should be Powered off, as you can see in the screenshot below.

To take a snapshot of the current state of the vm1-ubuntu20 virtual machine, select the virtual machine and click on Action > Take a Snapshot, as shown in the screenshot below.

Type in a description of the snapshot and click on OK.

A snapshot of the vm1-ubuntu20 virtual machine should be taken.

To list the snapshots you have taken of a virtual machine, select the virtual machine and click on Action > Snapshot List, as shown in the screenshot below.

The snapshots you’ve taken of that virtual machine should be listed, as you can see in the screenshot below.

Restoring Virtual Machine from Snapshots:

If you have accidentally corrupted the operating system of your virtual machine or removed some important files from the virtual machine, you can restore them from snapshots.

Let’s see how to do it.

First, select the vm1-ubuntu20 virtual machine and click on Power on to power on the vm1-ubuntu20.

The vm1-ubuntu20 virtual machine should be Running. To connect to the display of the vm1-ubuntu20, select the virtual machine and click on Connect, as shown in the screenshot below.

Once you’re connected to the display of the virtual machine, open a Terminal and run the following command to delete the helloworld/ directory:

$ rm -rfv helloworld/

The helloworld/ directory is removed from the HOME directory, as you can see in the screenshot below.

Now, shut down the virtual machine with the following command:

$ sudo poweroff

As you can see, the virtual machine vm1-ubuntu20 is Powered off.

Now, let’s say, you want to get the helloworld/ directory back. As we had taken a snapshot when we had the helloworld/ directory on the virtual machine vm1-ubuntu20, we can just restore it from the snapshot we have taken.

To restore the virtual machine vm1-ubuntu20 from a snapshot, select the virtual machine and click on Action > Snapshot List, as marked in the screenshot below.

Now, select the snapshot you want to restore to and click on Action.

Then, click on Restore to this snapshot.

If you want to take a snapshot of the current state of the virtual machine before restoring from your selected snapshot, then check Take a snapshot before restoring the virtual machine checkbox, as marked in the screenshot below.

I am not going to take a snapshot of the current state of the virtual machine before restoring the virtual machine from a snapshot. So, I will leave the Take a snapshot before restoring the virtual machine checkbox unchecked.

Once you’re ready, click on OK.

Now, type in the login password of your Synology Web GUI and check the I understand my data will be permanently deleted and unrecoverable checkbox.

Once you’re done, click on Submit to confirm the restore operation.

The virtual machine vm1-ubuntu20 should be restored from a snapshot.

Now, select the virtual machine vm1-ubuntu20 and click on Power on.

Once the virtual machine vm1-ubuntu20 is running, click on Connect.

Once you’re connected to the display of the virtual machine, you should see that the helloworld/ directory is restored.

$ ls -lhR helloworld/

So, this is how you restore a virtual machine from a snapshot.

Sharing Virtual Machines:

You can share a virtual machine running on your Synology NAS with other people.

To share a virtual machine (let’s say, vm1-ubuntu20), select the virtual machine and click on Action > Create share link, as marked in the screenshot below.

The link where the virtual machine will be available should be displayed, as you can see in the screenshot below.

You can password protect the virtual machine if you want. So, when other people try to access the virtual machine using the shared link, they will be asked to type in a password.

To password protect a shared virtual machine, check Enable secure sharing checkbox and type in a share password, as shown in the screenshot below.

You can also set a validity period for the shared virtual machine. Once the validity period is over, the link will be removed automatically.

To set a validity period for the shared virtual machine, click on Validity period, as marked in the screenshot below.

You can set different validity period settings from here.

Set up start time: If you set a start time, then the link will only become accessible after that time.

Set up stop time: If you set a stop time, then the link will only be accessible until that time.

Number of allowed access: You can set the number of times the virtual machine can be accessed using the shared link. Once the virtual machine has been accessed the defined number of times using this link, the link will automatically become inaccessible.

Once you’re done setting the validity period, click on OK to confirm it.

You can also use the QR code to access the virtual machine instead of using the generated shared link.

To see the QR code that you can use to access this shared virtual machine, click on Get QR Code, as you can see in the screenshot below.

Once you’ve set up the shared link, click on Save.

A shared link for accessing the virtual machine should be created and copied to the clipboard.

To see all the shared links for a virtual machine, select the virtual machine and click on Action > Shared Links Manager, as shown in the screenshot below.

The shared links you’ve generated for the virtual machine should be listed.

You can select a shared link and click on Delete to delete a shared link, as shown in the screenshot below.

To access a virtual machine using the shared link, open the shared link with your favorite web browser.

Type in the password of that shared link and click on Enter.

You should be connected to the virtual machine, as you can see in the screenshot below.

Cloning Virtual Machine:

You can make new virtual machines from an existing virtual machine or a snapshot of an existing virtual machine using the Clone feature of the Virtual Machine Manager app.

To clone a virtual machine, make sure that the virtual machine you want to clone is Powered off.

Now, select the virtual machine you want to clone and click on Action > Clone, as marked in the screenshot below.

Type in a Name for the new cloned virtual machine.

You can create multiple cloned virtual machines as well. To do so, select the number of clones you want from the Number of copies dropdown menu.

Once you’re done, click on OK.

As you can see, a new virtual machine vm3-ubuntu20 is being cloned from the vm1-ubuntu20 virtual machine.

At this point, the vm3-ubuntu20 virtual machine is successfully cloned from the vm1-ubuntu20 virtual machine.

To power on the cloned virtual machine vm3-ubuntu20, select the virtual machine and click on Power on, as marked in the screenshot below.

Once the virtual machine vm3-ubuntu20 is Running, click on Connect.

You should be connected to the display of the virtual machine. As you can see, the cloned virtual machine is running just fine.

Exporting Virtual Machines:

You can export Synology NAS virtual machines using the Virtual Machine Manager app.

To export a virtual machine, make sure that it is Powered off.

Now, select the virtual machine you want to export and click on Action > Export, as marked in the screenshot below.

Now, you have to select a directory on your Synology NAS share where you want to export the virtual machine.

I like to export a virtual machine in its own separate directory.

So, select a directory where you want to keep all your exported virtual machine data and click on Create folder to create a new folder for the exported virtual machine.

Type in a name for the new folder and click on OK.

A new folder vm1-ubuntu20 should be created, as you can see in the screenshot below.

You can select the export mode before exporting the virtual machine.

The default mode is Regular OVA. Unless you’re going to import the virtual machine into a VMware product (i.e., VMware Player, VMware Workstation Pro, VMware Fusion, vSphere, etc.), you don’t have to change the default mode.

If you need to change the export mode, click on Mode, as marked in the screenshot below.

There are 2 export modes:

Regular OVA: This is the default export mode of the Virtual Machine Manager app. You can import virtual machines exported this way into other virtualization programs like KVM, VirtualBox, etc. But this format is not compatible with VMware virtualization programs like vSphere, VMware Workstation Pro, etc.

VMware compatible OVA: You can import virtual machines exported this way into VMware virtualization programs like vSphere, VMware Workstation, VMware Fusion, etc.
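As a quick illustration of the Regular OVA mode, here is a hedged example of importing an exported OVA file into VirtualBox on a workstation using the VBoxManage command-line tool. The file and VM names are just placeholders based on the virtual machine exported in this article; your OVA file name may differ:

$ VBoxManage import vm1-ubuntu20.ova --vsys 0 --vmname vm1-ubuntu20-imported
$ VBoxManage startvm vm1-ubuntu20-imported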

Once you’ve selected an export mode, click on OK.

Now, select the newly created folder vm1-ubuntu20 and click on Select, as marked in the screenshot below to start exporting the virtual machine.

Virtual Machine Manager app should start exporting the virtual machine vm1-ubuntu20 to your selected Synology share. It may take a while to complete.

The virtual machine vm1-ubuntu20 is being exported.

Once the virtual machine vm1-ubuntu20 is exported, a new OVA file should be created in the vm1-ubuntu20/ directory of your Synology NAS share, as you can see in the screenshot below.

Importing Virtual Machines:

You can import virtual machines using the exported OVA file into your Synology NAS from the Virtual Machine Manager app.

To import a virtual machine, click on Create > Import, as marked in the screenshot below.

Select Import from OVA files and click on Next, as marked in the screenshot below.

To select the OVA file from your Synology NAS share, select Select a file from Synology NAS and click on Browse, as marked in the screenshot below.

Select the OVA file you’ve just exported and click on Select, as marked in the screenshot below.

Once you’ve selected the OVA file, click on Next.

Select the storage volume where you want to save the imported virtual machine data and click on Next.

Type in a name for the virtual machine, the number of CPU cores you want it to have, and the amount of memory you want to allocate to it.

I am going to call the imported virtual machine vm4-ubuntu20, and allocate 2 CPU cores and 4 GB of memory to it.

Once you’re done, click on Next.

Type in the amount of disk space you want to allocate to the imported virtual machine and click on Next.

I will allocate 20 GB of disk space to the imported virtual machine vm4-ubuntu20.

Click on Next.

Click on Next.

Select the users that you want to allow access to the imported virtual machine and click on Next.

The settings that will be used to import and create the virtual machine should be displayed. To import and create a virtual machine with those settings, click on Apply.

The virtual machine vm4-ubuntu20 is being imported, as you can see in the screenshot below. It may take a few minutes to complete.

The virtual machine vm4-ubuntu20 is imported successfully, as you can see in the screenshot below.

As you can see, the imported virtual machine vm4-ubuntu20 is Running.

You can also connect to the imported virtual machine vm4-ubuntu20 and it’s working just fine.

Conclusion:

In this article, I have shown you how to install and set up the Virtual Machine Manager app on your Synology NAS. I have also shown you how to create and manage virtual machines, take snapshots of them, restore them from snapshots, as well as share, clone, export, and import them on your Synology NAS using the Virtual Machine Manager app.

Running Docker Containers on Synology NAS

Docker is a containerization platform. Docker is used to run lightweight containers on your computer.

Synology NAS has official support for Docker. Docker can be an alternative to virtual machines. If you don’t have enough memory to run virtual machines on your Synology NAS, you can run Docker containers instead. Docker containers require very little memory and few system resources to run.

In this article, I will show you how to install and use Docker on Synology NAS. So, let’s get started.

Installing Docker on Synology NAS:

Synology NAS products officially support Docker. To use Docker on your Synology NAS, you need to install the Docker app from the Synology Web GUI.

First, open the Package Center app from the Synology Web GUI.

Search for docker in Package Center. The Docker app should be listed, as you can see in the screenshot below.

Click on the Docker app.

Click on Install to install the Docker app on your Synology NAS.

Select the volume where you want to install Docker and keep the Docker data using the dropdown menu and click on Next, as marked in the screenshot below.

Click on Apply.

The Docker app is being installed. It may take a few seconds to complete.

At this point, the Docker app should be installed.

You can click on Open to open the Docker app from the Package Center app as marked in the screenshot below.

You can also open the Docker app from the Main Menu of Synology Web GUI, as marked in the screenshot below.

As you’re running the Docker app for the first time, you will see the following dialog window.

If you don’t want to see it every time you open the Docker app, check the Don’t show this again checkbox and close the dialog window as marked in the screenshot below.

The Docker app should be ready to use.

Downloading Docker Images:

You can download Docker images from the Registry tab of the Docker app. By default, the Docker images available in the Docker Hub registry are displayed. You can add other Docker registries and download Docker images from there as well. I will show you how to add your own Docker registry in a later section of this article.

To download a Docker image from the Docker Hub registry, type in your search keyword (httpd, let’s say) and click on Search as marked in the screenshot below.

The Docker images that matched the search keyword should be listed.

If you like a Docker image and would like to know more about it, click on the icon to visit that Docker image’s official web page. For example, to know more about the httpd Docker image, click on the icon as marked in the screenshot below.

A new browser tab should open the Docker Hub page of the httpd Docker image, as you can see in the screenshot below. You can find all the information you need about the httpd Docker image on this page.

If you like a Docker image and you would like to download it, select it and click on Download as marked in the screenshot below.

Select the tag of the Docker image you want to download from the dropdown menu and click on Select as marked in the screenshot below.

As you can see, 1 new image is being downloaded.

Navigate to the Image section to see the download progress.

As you can see, the httpd:latest Docker image is being downloaded.

While the Docker image is being downloaded, the disk icon will animate.

Once the download is complete, the disk icon animation should stop.

I have downloaded another Docker image, php:latest, as you can see in the screenshot below.

The size of the Docker images you’ve downloaded should be displayed in the Image section, as you can see in the screenshot below.
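If you have SSH access enabled on your Synology NAS, you can most likely also pull and list images with the standard Docker command-line client that comes with the Docker package. This is a hedged alternative to the Registry tab; the exact privileges required may differ between DSM versions:

$ sudo docker pull httpd:latest
$ sudo docker images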

Managing Docker Images:

You can manage your downloaded Docker images from the Image section of the Docker app.

You can export a Docker image from the Docker app to your Synology NAS shares.

To export a Docker image (php:latest, let’s say), select the Docker image and click on Export as marked in the screenshot below.

Select a folder (docker-images/, let’s say) from one of your Synology NAS shares (share2, let’s say) where you would like to export the Docker image and click on Select, as marked in the screenshot below.

As you can see, the Docker image php:latest is being exported. It may take a few seconds to complete.

Once the Docker image is exported, you should find a new archive file (php(latest).syno.tar in my case) in the folder where you’ve exported the Docker image, as you can see in the screenshot below.

Now, let’s remove the php:latest Docker image and import it back.

To remove a Docker image, select the Docker image you want to remove and click on Delete, as shown in the screenshot below.

To confirm the removal operation, click on Delete as marked in the screenshot below.

The php:latest Docker image should be removed, as you can see in the screenshot below.

To import the php:latest Docker image from the exported Docker image file, click on Add > Add From File as marked in the screenshot below.

Select the Docker image file you’ve just exported and click on Select as marked in the screenshot below.

The php:latest Docker image should be imported, as you can see in the screenshot below.

Managing Docker Registries:

By default, the official Docker registry Docker Hub is used on the Docker app. So, you can search for and download all the Docker images available on Docker Hub. That is more than enough for most people. But, if you do need to add third-party Docker registries or your own Docker registries, you can do it as well.

To manage Docker registries, click on Settings from the Registry section as marked in the screenshot below.

By default, you will have the following Docker registries: the Docker Hub registry and the Aliyun Hub registry.

To use the Aliyun Hub registry instead of the Docker Hub registry, select it and click on Use as marked in the screenshot below.

The Aliyun Hub registry should be activated, as you can see in the screenshot below.

To add a new Docker registry, click on Add as marked in the screenshot below.

Type in the information of the Docker registry you want to add and click on Confirm.

A new Docker registry should be added, as you can see in the screenshot below.

You can edit a Docker registry you’ve added recently as well.

To edit a Docker registry, select it and click on Edit as marked in the screenshot below.

Now, make the necessary changes and click on Confirm to save the changes.

To remove a Docker registry, select it and click on Delete as marked in the screenshot below.

The selected Docker registry should be removed.

Creating Docker Containers:

To create a Docker container, navigate to the Image section of the Docker app. Then, select the Docker image you want to use to create the container and click on Launch as marked in the screenshot below.

Type in a name for the container in the Container Name section as marked in the screenshot below.

I will call it http-server-1.

If you want to run the container as root (with superuser privileges), check the Execute container using the high privilege checkbox as marked in the screenshot below.

You can limit the CPU and memory usage of the container as well.

To limit resources, check the Enable resource limitation checkbox and set the CPU Priority and Memory Limit as you need.

To configure some advanced settings for the container, click on Advanced Settings as marked in the screenshot below.

If you want to start the container automatically when your Synology NAS boots, check the Enable auto-restart checkbox as marked in the screenshot below.

To create a shortcut of this container on the Synology Web GUI desktop, check the Create shortcut on desktop checkbox and configure it as needed.

To add volumes to the container, click on the Volume tab of the Advanced Settings window, as shown in the screenshot below.

If you visit the Docker Hub page of the Docker image you’re using, you should know the volumes you need to create for your container.

For example, I am using the httpd Docker image to create a container. In the Docker Hub page of the httpd Docker image, you can see that I need to create a volume for the container that binds to the folder /usr/local/apache2/htdocs of the container.

To add a new volume to the container, click on Add Folder as marked in the screenshot below.

You will be asked to select a folder that you want to bind to your container.

When you install the Docker app on your Synology NAS, it will create a new share docker on the volume where you have installed the Docker app. My advice would be to keep your volumes and other files related to your container in their separate folder in the docker share.

To create a new folder in the docker share, select the docker share and click on Create folder as marked in the screenshot below.

Type in the name of your container (http-server-1 in my case) and click on OK.

To create a new folder inside the http-server-1/ folder, select it and click on Create folder.

Type in a folder name and click on OK. The folder name should resemble the path where you want to mount the folder in your container.

In my case, it’s htdocs as I want to mount it in the /usr/local/apache2/htdocs directory of the container.

Once the folder is created, select it and click on Select as marked in the screenshot below.

Now, you have to type in the path where you want to mount the folder you’ve selected.

In this case, it is the /usr/local/apache2/htdocs directory. Just type in the mount path, and you’re good to go.

You can configure the network of the container from the Network tab of the Advanced Settings window.

By default, the Docker containers will use a private IP address range that is not accessible from your home network. So, you will have to use port forwarding to access the services running on your Docker containers.

But, if you want to access the Docker containers from your home network directly without port forwarding, check the Use the same network as Docker host checkbox as marked in the screenshot below.

In the Port Settings tab of the Advanced Settings window, you can configure port forwarding for the Docker container.

Depending on the Docker image you’re using the create the container, you may already have some default port forwarding rules.

I have a default port forwarding rule that forwards the container TCP port 80 to the Synology NAS.

I will forward the container TCP port 80 to the TCP port 8888 on my Synology NAS. So, the Local Port number will be 8888, and the Container Port number will be 80, and the Type will be TCP.

If you want to add a new port forwarding rule, click on the + icon as marked in the screenshot below.

An empty port forwarding rule should be added, as you can see in the screenshot below.

Type in the Local Port, the Container Port, and select the Type from the dropdown menu as needed. Once you’re done, the port forwarding rule should be added.

If you want to remove a port forwarding rule, select it and click on the icon as marked in the screenshot below.

The port forwarding rule should be removed.
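For reference, the volume and port settings configured above correspond roughly to the following Docker CLI command. This is only a hedged sketch of what the Docker app does behind the scenes (assuming SSH access and that the docker share lives on volume1); you do not need to run it yourself:

$ sudo docker run -d --name http-server-1 \
    -v /volume1/docker/http-server-1/htdocs:/usr/local/apache2/htdocs \
    -p 8888:80 httpd:latest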

In the Environment tab of the Advanced Settings window, you can configure the environment variables of the container and the command that the container will run when it starts.

Depending on the Docker image you’re using to create the container, you may already have some environment variables, as shown in the screenshot below.

If you need to add a new environment variable, click on the + icon as marked in the screenshot below.

An empty environment variable entry should be added, as you can see in the screenshot below.

Type in the environment variable name and the value. Once you’re done, it should be added.

If you want to remove an environment variable, select it and click on the icon as marked in the screenshot below.

Your selected environment variable should be removed.

To set the command that you want to run when your Docker container starts, type in the command in the Command section, as marked in the screenshot below.

Once you’re done configuring some advanced settings for the container, click on Apply.

Click on Next.

The settings that will be used to create the container http-server-1 should be displayed. To create a container with these settings, click on Apply.

A new container http-server-1 should be created.

You can find all the Docker containers you’ve created in the Container tab of the Docker app. You can manage your containers from here.

The running containers should also be displayed in the Overview tab of the Docker app, as you can see in the screenshot below.

Using Docker Containers:

You can see the CPU and memory/RAM usage information and the container runtime of all the Docker containers you’ve created from the Container section of the Docker app.

As you can see, the http-server-1 container that I’ve created earlier has been Running for 12 minutes. It’s using 11 MB of memory/RAM and barely uses any CPU resources.

Let’s create an index.html file in the htdocs/ volume of the container.

Once the index.html file is created in the volume of the container, you should be able to access it from the HTTP server that is running in the container.

I have forwarded the container TCP port 80 to port 8888 on my Synology NAS. So, I can access the HTTP server running in the Docker container http-server-1 from a web browser using the URL http://192.168.0.110:8888, as you can see in the screenshot below.

Here, 192.168.0.110 is the IP address of my Synology NAS. It will be different for you. So, make sure to replace it with yours.
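If you prefer the command line, you can create the test file and check the HTTP server with a hedged sketch like the one below. It assumes SSH access to the NAS and that the docker share sits on volume1; the first command is run on the NAS over SSH, and the curl command from any computer on your network that has curl installed. The HTML content is just an example:

$ echo '<h1>Hello from http-server-1</h1>' | sudo tee /volume1/docker/http-server-1/htdocs/index.html
$ curl http://192.168.0.110:8888/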

To find more information about a Docker container, select it and click on Details as marked in the screenshot below.

A new window should be opened.

In the Overview tab, you can see the container’s CPU and RAM usage information, the environment variables added to the container, the configured port forwarding rules of the container, and some container runtime information.

You can Start, Stop, Restart and Force stop a container from the Overview tab as well.

In the Process tab, you can find the following information about all the running processes of the container:

Process Identifier: The process ID of the running process.

Execution Command: The command that is used to start the process.

CPU Usage: The percentage of CPU the process is using.

Memory Size: The amount of RAM/memory the process is using.

In the Log tab, you can find the logs of the running processes on your container. The logs are grouped by date nicely, as you can see in the screenshot below.

From the Terminal tab, you can start a shell and administer your container from the command line. You can also run any command and see its output.

To access the shell of the container, click on Create as marked in the screenshot below.

A new shell terminal should be created, as you can see in the screenshot below. You can run any command you want in this shell terminal and administer your container from the command-line.

You can create as many shell terminals as you need.

You can also run other commands from here.

To do that, click on Create > Launch with command as marked in the screenshot below.

Now, type in a command that you want to run and click on OK.

The command should run on the container, and the output should be displayed, as you can see in the screenshot below.
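The Terminal tab is roughly equivalent to running docker exec against the container over SSH. Here is a hedged example; the shell available inside a container depends on its image, and the httpd image is Debian-based, so bash should be present:

$ sudo docker exec -it http-server-1 bash
$ sudo docker exec http-server-1 ls /usr/local/apache2/htdocs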

You can rename or delete a terminal from the Terminal tab as well.

To rename a terminal, select it and click on Rename.

Type in a new terminal name and click on OK.

The terminal should be renamed.

To remove a terminal, select it and click on Delete.

The terminal should be removed.

You can start and stop a container using the toggle button as marked in the screenshot below.

When a container is Running, the toggle button will be blue.

To stop a running container, click on the toggle button.

The container should be Stopped, as you can see in the screenshot below.

When the container is Stopped, the toggle button should be gray.

When a container is Stopped, you can edit the configuration of the container.

To edit the container configuration, select the container and click on Edit.

You should see the same configuration window as you have seen while creating the container. You should be familiar with all the options as I have explained them earlier in this article.

From the General Settings tab, you can change the container name, configure container privileges, configure resource limits, configure container startup settings, and create a desktop shortcut.

From the Volume tab, you can manage the container volumes.

From the Port Settings tab, you can manage the port forwarding rules of your container.

From the Environment tab, you can manage the container environment variables.

Once you’re done with configuring the container, click on Apply as marked in the screenshot below.

Once you’ve configured the container, click on the toggle button to start the container.

The container should be running, as you can see in the screenshot below.

You can select a container and click on Action to Start, Stop, Restart, and Force Stop your container, as you can see in the screenshot below.

Cloning Docker Containers:

You can clone the configuration of an existing Docker container to create a new Docker container.

To clone a Docker container, select it and click on Settings > Duplicate settings as marked in the screenshot below.

Type in a name for the cloned container and click on Apply.

I will call it http-server-2.

A new container http-server-2 should be created, as you can see in the screenshot below.

While the http-server-2 container is Stopped, select it and click on Edit.

Change the local port to 8889 from the Port Settings tab and click on Apply as marked in the screenshot below. The http-server-2 container configuration should be updated.

Click on the toggle button of the http-server-2 container as marked in the screenshot below to start the container.

The http-server-2 container should be running, as you can see in the screenshot below.

As you can see, I can access the HTTP server running on both the http-server-1 and http-server-2 containers.

Exporting Docker Containers:

You can export Docker containers on your Synology NAS shares and import them later using the Docker app.

To export a Docker container, select it and click on Settings > Export as marked in the screenshot below.

Select the export type from the Type section.

Export container settings: This option will only export the configuration options of the container in a plain text file. The configuration file can be later used to rebuild the container. This option will not save any filesystem changes you’ve made in the container. So, all of your container data will be lost when you import the container back.

Export container contents and settings: This option will export the container configuration and contents on your Synology NAS share. The filesystem changes of the container will be kept. The exported file will be a lot bigger than the first option.

Once you’ve selected an export type, select Export to Synology NAS and click on Select a folder from the Destination section as marked in the screenshot below.

Select a folder where you want to export the container and click on Select.

Click on Export.

As you can see, the container is being exported. It may take a while to complete.

Once the container is exported, a new archive file should be generated in the folder where you’ve exported the container, as shown in the screenshot below.
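For comparison, the plain Docker CLI offers similar operations over SSH, although the archive formats are not the same as the Docker app’s .syno.tar export. This is only a hedged illustration of the underlying ideas:

$ sudo docker export http-server-1 > http-server-1-fs.tar
$ sudo docker commit http-server-1 http-server-1:backup
$ sudo docker save http-server-1:backup > http-server-1-image.tar

The first command saves only the container filesystem, while the last two turn the container into an image and then save that image to a file.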

Importing Docker Containers:

In this section, I will remove the Docker container I’ve exported earlier and import it back.

Before you can remove a Docker container, you have to stop the container if it’s running.

To stop the http-server-1 container, click on the toggle button of the container from the Container section of the Docker app, as marked in the screenshot below.

The container should be stopped. Now, select the container and click on Action > Delete as marked in the screenshot below.

Click on Delete.

The http-server-1 container should be removed.

To import the container using the exported container file, click on Settings > Import as marked in the screenshot below.

Select the exported container file and click on Select as marked in the screenshot below.

Type in a Container Name and click on OK.

NOTE: Importing a container this way will also create a new container image. If you want to use that image to create a container later, it’s good to give the image a meaningful name and tag name.

You can set the new image name in the Repository textbox and image tag in the Tag textbox of the import Settings window.

As you can see, the container http-server-1 is imported successfully.

A new container image is also created, as you can see in the screenshot below.

Docker Networks:

You can manage Docker networks from the Network tab of the Docker app.

By default, Docker creates a bridge network interface and a host network interface that you can use to get network connectivity in your Docker containers.

To know more about a network interface, click on the down-arrow icon as marked in the screenshot below.

As you can see in the screenshot below, information about the bridge and the host network is displayed.

As you can see, the bridge network interface uses the bridge driver, and the host network interface is using the host driver.

The bridge network interface configures a random IP subnet (172.17.0.0/16 in my case) that is not accessible from your home/office network. You can only access the services running inside the containers connected to the bridge network using port forwarding.

The host network interface will use your home/office network’s DHCP server to assign IP addresses to the containers. So, the containers using the host network will be accessible from your home/office network directly. You won’t need to configure port forwarding.

Currently, 2 containers (http-server-1 and http-server-2) are using the bridge network interface, as shown in the screenshot below.
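If SSH is enabled, you can see the same details from the command line with the docker network commands; a hedged example:

$ sudo docker network ls
$ sudo docker network inspect bridge

The inspect command shows the subnet of the bridge network and the containers attached to it.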

Checking Docker Logs:

You can find the logs of your Docker app from the Log section, as you can see in the screenshot below. The log information will help you find problems with the Docker instance running on your Synology NAS.

Conclusion:

In this article, I have shown you how to install the Docker app and use Docker on Synology NAS. I have also shown you how to download Docker images from the Docker Hub registry, manage Docker images, manage Docker registries, create and use Docker containers, clone Docker containers, export Docker containers, import Docker containers, check the Docker network interfaces, and check Docker logs using the Docker app on your Synology NAS. This article should help you get started with Docker on your Synology NAS.

How to Setup Synology NAS?

Synology specializes in Network Attached Storage (NAS) devices and software. Synology NAS devices are easy to use and configure. The built-in DSM (DiskStation Manager) web app allows you to access and configure the NAS from a web browser. The DSM web app is one of the best NAS management tools out there, and it is what differentiates the Synology NAS from its competitors.

Synology NAS devices have a lot of useful features like:

  1. File Storage and Sharing
    You can store important files on your Synology NAS. It supports different file-sharing services like SMB, AFP, NFS, FTP, WebDAV, etc., so you can access your files from Windows, Mac, Linux, iPhone, and Android smartphones.
  2. Multi-User and Quota Support
    The Synology NAS supports multiple users and user-based disk quotas. This allows multiple users to access the NAS and use a specific amount of its disk space.
  3. Mobile App Support
    You can manage the Synology NAS and access all the files, photos, audio, videos, etc. from your mobile devices. Synology also has many official apps in the Apple App Store and Google Play Store, for example, DS file, DS photo, DS audio, Synology Moments, Synology Photos, Synology Drive, etc.
  4. File Syncing
    You can use Synology Drive to sync your files to the Synology NAS, or you may also use tools like Rsync (see the rsync sketch after this list). You can use Cloud Sync to sync the files from the NAS to public cloud providers like Amazon Drive, Microsoft Azure, DropBox, OpenStack, etc.
  5. Data Backup
    Synology has Active Backup for Business to help you backup your PCs, servers, virtual machines, and so on. It also has Active Backup for Microsoft 365 and Active Backup for G Suite that lets you backup your Microsoft Office 365 data and Google apps (Drive, Mail, Contacts, Calendar) data, respectively.
  6. NAS Protection
    • Synology NAS supports the Btrfs filesystem. So, you can take snapshots of your filesystem very easily and recover from disaster with a few clicks using the Synology Snapshot Replication app. The Synology High Availability app allows you to connect two Synology NAS devices in a high-availability cluster. In this setup, one Synology NAS will be Active, which serves files, and the other one will be Passive, which will take over for the Active NAS and keep serving files in case the Active NAS fails. This ensures data safety and service uptime.
    • Synology Hyper Backup can back up critical files on a public cloud service like Google Drive, Amazon Drive, Dropbox, Microsoft Azure, etc., and recover them if needed from these public cloud services.
  7. Virtualization and Docker Support
    The Virtual Machine Manager app will let you create virtual machines on your Synology NAS.
  8. Productivity Apps
    Synology DSM also has a lot of web apps like Synology Drive, Synology MailPlus, Synology Contacts, Synology Chat, Synology Calendar, Synology Office, and Note Station to help you be productive.

    • Synology Drive allows you to manage and synchronize files.
    • Synology MailPlus allows you to build an efficient and secured business mail server.
    • Synology Contacts allows you to keep all your phonebook contacts centralized on the Synology NAS.
    • Synology Chat allows you to communicate with the Synology NAS users and share files. It is a great collaboration tool for the Synology NAS.
    • Synology Calendar allows Synology users to create, manage, and synchronize events. It also schedules meetings with other Synology users. You can sync calendar events with CalDAV clients like Google Calendar, Apple Calendar, Thunderbird, etc. as well.
    • Synology Office is a complete office suite for Synology NAS. It contains Document (alternative for Microsoft Word), Spreadsheet (alternative for Microsoft Excel), and Slides (alternative for Microsoft PowerPoint) web apps. You can use the Synology Office web app from DSM for free.
    • Note Station is a note-taking app. You can use it to take notes and sync them on your Synology NAS.
  9. Multimedia Apps
    Synology DSM has the following web apps for entertainment:

    • Audio Station is an audio player that lets you play audio files stored on the Synology NAS.
    • Video Station is a video player that lets you play video files stored on the Synology NAS.
    • Photo Station is a photo manager built for professional photographers.
    • Synology Moments is a photo and video organizer app for the Synology NAS. You can organize your photos and videos with this app very easily.
  10. Cloud Services
    • Synology QuickConnect allows you to connect to your Synology NAS from the internet. You don’t need a public IP address or port forwarding configured on your router to access it.
      If your public IP address changes frequently, you can use Synology DDNS to access your Synology NAS using a domain name.
    • Synology Cloud is a paid service from Synology that lets you sync important files from your Synology NAS to Synology’s cloud storage service.
  11. Data Security
    Synology NAS has a lot of security features like AES 256-bit encryption for files, secure key management, account protection, firewall, HTTP 2.0 support, IP auto-block, multiple SSL certificates support, Let's Encrypt integration, and many more.
  12. Easy Management
    Synology NAS can be easily managed from a web browser using the Web Assistant web UI of the DSM (DiskStation Manager) operating system.
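Here is the rsync sketch referenced in the File Syncing item above. It is only a hedged example for pushing a local folder to a Synology share over SSH; it assumes that the SSH and rsync services are enabled on the NAS, that the share sits on volume1, and that the user name and paths are placeholders you should replace:

$ rsync -avz --progress ~/Documents/ synouser@192.168.0.110:/volume1/share1/Documents/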

Synology has provided us with the DS1821+ model of the NAS for review. In this article, I am going to show you how to set up the Synology DS1821+ NAS. So, let’s get started!

What’s in the Box?

The Synology DS1821+ NAS model comes in a simple box. Nothing fancy.

In the box, you will get the following components:

  1. The Synology NAS
  2. 2 X RJ-45 patch cables
  3. 2 X drive tray keys
  4. A power cable
  5. Some screws for installing 2.5-inch HDDs/SSDs

You may have some other things in the box depending on the Synology NAS model you’ve bought.

Pulling Out Drive Trays from the Synology NAS

You must install at least 1 HDD/SSD to get the Synology NAS working. So, installing an HDD/SSD on your Synology NAS is the first thing you should do after unboxing it.

In this section, I am going to show you how to pull out the drive tray from the Synology NAS.

Let’s say, you want to install an HDD/SSD into the 1st drive bay.

First, you should see whether the drive tray is locked or not.

If the drive tray lock points 45 degrees to the right, it means the drive tray is unlocked.

If the drive tray lock points down, as shown in the image below, then the drive tray is locked.

If the drive tray is locked, you need to unlock it using the key that is provided with the Synology NAS. Simply insert the key into the drive tray lock and turn it counter-clockwise, as shown in the image below:

The drive should be unlocked.

Now, remove the key from the drive tray.

Once the drive tray is unlocked, push the bottom of the drive tray until you hear a clicking sound, and then release your finger.

The drive tray handle should be unlocked.

Hold the drive tray handle and pull the drive tray out of the drive bay.

The drive tray should be removed. Now, you can install a 2.5/3.5-inch HDD/SSD on the drive tray.

Installing a 3.5-inch HDD on the Drive Tray

To install a 3.5-inch HDD on the drive tray, you need to remove the fastening panels from the sides of the drive tray by gently pushing the fastening panels outward, as marked in the image below:

The fastening panels should be removed from the drive tray.

Insert the 3.5-inch HDD into the drive tray, as shown in the image below:

Insert the fastening panels into the drive tray to secure the HDD.

The 3.5-inch HDD should be installed on the drive tray.

Installing a 2.5-inch HDD/SSD on the Drive Tray

To install a 2.5-inch HDD/SSD on the drive tray, you need a PH-2 screwdriver and 4 screws. You can use the screws that came with your Synology NAS.

An image of a PH-2 screwdriver is shown below:

An image of 4 screws that came with my Synology NAS model DS1821+ is shown below:

To install a 2.5-inch HDD/SSD on the drive tray, you need to remove the fastening panels from the sides of the drive tray by gently pushing the fastening panels outward, as marked in the image below:

The fastening panels should be removed from the drive tray. As you don’t need them to install a 2.5-inch HDD/SSD on the drive tray, keep them somewhere safe.

In this article, I will install a Samsung 860 EVO 500GB 2.5-inch SATA SSD on the drive tray. You can use any 2.5-inch HDD/SSD as you like.

To install a 2.5-inch HDD/SSD on the drive tray, place the HDD/SSD in it in a way that aligns the screw holes of the HDD/SSD with those of the drive tray, as marked in the image below:

Once the 2.5-inch HDD/SSD is placed on the drive tray, it should look as shown in the image below:

Now, hold the 2.5-inch HDD/SSD firmly and flip the drive tray. Make sure that the screw holes of the HDD/SSD match with the screw holes of the drive tray.

Put 4 screws in the 4 aligned screw holes of the drive tray, as marked in the image below:

Once you tighten the screws with a PH-2 screwdriver, the 2.5-inch HDD/SSD should be installed on the drive tray.

Inserting Drive Trays on the Synology NAS

Once you have installed a 2.5/3.5-inch HDD/SSD on the drive tray, put it back into the drive bay.

Gently push the drive tray all the way into the drive bay.

Then, push the bottom of the locking handle of the drive tray till you hear a clicking sound.

Once you hear a clicking sound, release your finger from the drive tray lock.

The drive tray handle should be locked.

If you want, you can lock the drive tray using the key provided with your Synology NAS by inserting the key into the drive tray lock and turning it clockwise, as shown in the image below:

Now, remove the key from the drive tray.

The drive tray should be locked.

You may have some extra screws left over, as well as the fastening panels you removed from the drive trays. Make sure to keep them in a safe place so you can use them when needed.

Powering on the Synology NAS

Once you have installed one or more HDDs/SSDs in your NAS, you have to connect the power cable (1) to the power socket and an ethernet cable to an RJ-45 port (2) of your NAS. These ports are located on the rear (back side) of the NAS.

Connect the ethernet cable into the first RJ-45 port of your NAS, as shown in the image below:

Connect the power cable into the power socket of your NAS, as shown in the image below:

To power on the NAS, press the power button on the front side of your NAS, as shown in the image below:

The power button should start blinking.

Once your NAS is ready, you should hear a beep, and the STATUS LED, the LAN1 LED, and the power button LED should stop blinking.

Accessing Synology NAS for the First Time

Once the NAS is powered on and running, you need to install the DSM operating system on it before you can configure it. To do that, navigate to http://find.synology.com from your favorite web browser.

Once the page loads, it should search your network (LAN) to find the NAS.

Once the webpage finds the NAS, it will display it like the screenshot below.

My NAS model is DS1821+ and the IP address of the NAS is 192.168.0.110. The IP address of your NAS will be different. So make sure to adjust it from now on.

To connect to your NAS, click on Connect.

Check the I have read and agreed to the terms of the EULA checkbox and click on Next, as shown in the screenshot below:

Click on Continue.

You should see the Web Assistant setup page. You can install the DSM operating system on your NAS from here.

Setting Up Synology NAS using Web Assistant

To set up your NAS, click on Set Up.

You will be asked to install the DSM operating system on your NAS. To do that, click on Install Now.

To install the DSM operating system on your Synology NAS, you need to format all the HDDs/SSDs that you have installed. Formatting the drives will remove all the existing data from the drive.

To format all the drives, check the I understand that all data on these hard disks will be removed checkbox and click on OK.

Synology Web Assistant should start formatting the HDDs/SSDs installed on your NAS. It may take a while to complete.

Once the HDDs/SSDs are formatted, Synology Web Assistant should start downloading the DSM operating system from the internet. It may take a while to complete depending on the speed of your internet connection.

Once the DSM operating system is downloaded, Synology Web Assistant will install it on a small partition of each of the HDDs/SSDs on your NAS.

The DSM operating system is being installed on the NAS.

Once the installation is complete, Synology Web Assistant will show you a 10-minute timer. It will restart the NAS and automatically connect to it again. Don’t close the web browser.

Once your NAS is ready, you should see the following page.

You need to create a new admin user from here. Type in a server name, username, and password for your NAS and click on Next.

NOTE: Remember the username and password you’ve set here as you will need them to log in to the NAS.

We will talk about QuickConnect in another article. For now, click on Skip this step, as shown in the screenshot below:

Click on Yes to confirm skipping QuickConnect configuration.

Click on Go.

You should be taken to the Synology Web GUI (Graphical User Interface). Click on Got It.

If you want to send device analytics data to Synology, click on Yes. Click on No thanks! if otherwise.

Click on the highlighted section (Tip 1) where it shows you the location of the Main Menu.

Click on the highlighted section (Tip 2) where it shows you the Package Center app.

Click on the highlighted section (Tip 3) where it shows you the Control Panel app.

The Synology Web GUI should be ready. You can configure your NAS from here.

Synology Supported RAIDs

Synology supports different RAID configurations. In this section, I am going to explain all its supported RAID configurations. This will help you create Synology storage pools and volumes later in this article.

You can use the Synology RAID Calculator to get an overview of the RAID you want to set up, estimate the useable disk space, or compare different RAID configurations. I will use screenshots from the Synology RAID Calculator while explaining different RAID configurations so that you can understand how much disk space you can use in a certain RAID configuration.

The Synology supported RAID configurations are:

  1. SHR
    Synology Hybrid RAID or SHR is developed by Synology to make RAID configuration easier for less technical users. You can start an SHR RAID with only a single HDD/SSD and add more HDDs/SSDs later. This is a big advantage of the SHR RAID. If you use 1 HDD/SSD in an SHR RAID, then you can use the full HDD/SSD capacity for storing data. If you add 2 or more HDDs/SSDs to an SHR RAID, then it can provide 1 drive fault tolerance. For fault tolerance, it will use 1 drive worth of disk space. So, you can use the disk space of all the HDDs/SSDs but one. As you can see, if you use 1 X 1 TB HDD in SHR RAID, you can use the full capacity of the HDD.
    And if you use 3 X 1 TB HDD in SHR RAID, you can use only 2 TB (2 X 1 TB HDD) for data and 1 TB (1 X 1 TB HDD) for protection, so that the RAID can survive 1 drive failure.
  2. SHR-2
    The SHR-2 RAID is the same as the SHR RAID. The only difference is that SHR-2 provides 2 drive fault tolerance, while SHR only provides 1 drive fault tolerance. You need at least 4 HDDs/SSDs to create an SHR-2 RAID. The SHR-2 RAID can survive 2 drive failures at the same time, but you get 2 drives’ worth less disk space for data. For example, if you use 5 X 1 TB HDDs in an SHR-2 RAID, you can use only 3 TB (3 X 1 TB HDD) of disk space for storing data. The other 2 TB (2 X 1 TB HDD) will be used for protection so that the SHR-2 RAID can survive 2 drive failures at the same time. Compare that to the SHR RAID, where you can use 4 TB (4 X 1 TB HDD) of disk space for storing data and 1 TB (1 X 1 TB HDD) will be used for protection, but it can survive only 1 drive failure at a time. So, in SHR-2 RAID, you get more fail-safety but less disk space than in SHR RAID.
  3. Basic
    If you want to use traditional-style disk partitioning instead of a RAID configuration on your Synology NAS, then the Basic configuration is for you. You can use only a single HDD/SSD to create a Basic storage pool and create as many volumes (you can think of volumes as partitions) as you want in that storage pool. In Basic mode, you can’t use more than 1 HDD/SSD, and you won’t have any fail-safety. However, you can use the full capacity of your HDD/SSD for storing data.
  4. JBOD
    In a JBOD array, you can add one or more HDDs/SSDs, and you can use the capacity of all of them. A JBOD array won’t provide you with any fail-safety. So, if one of the HDDs/SSDs of the JBOD array fails, the entire JBOD array will fail as well, and you will lose all your important data.
  5. RAID-0
    In a RAID-0 configuration, you can add two or more HDDs/SSDs, and you can use the full capacity of all of them. The data you write to a RAID-0 array will be divided into blocks, and these blocks will be spread across the HDDs/SSDs in the array. RAID-0 will increase the read/write performance of the filesystem as well. However, it comes with no safety measures. So, if any of the HDDs/SSDs in a RAID-0 array fails, the entire array will fail and leave your data inaccessible. An example of a RAID-0 array is shown in the screenshot below. I have added 3 X 1 TB HDDs in the RAID-0 configuration. As you can see, 3 TB (3 X 1 TB HDD) of disk space is available for storing data.
  6. RAID-1
    In the RAID-1 configuration, the data you write to one HDD/SSD will be written to all the other HDDs/SSDs added to the RAID-1 array. You need at least 2 HDDs/SSDs to create a RAID in the RAID-1 configuration. RAID-1 provides the best fail-safety. A RAID-1 array can survive (number of HDDs/SSDs – 1) disk failures. So, as long as 1 HDD/SSD is okay, you won’t lose your data, and the RAID-1 array can be rebuilt. If all the HDDs/SSDs in the RAID-1 array are of the same size, then you can use the disk space of 1 of those HDDs/SSDs. If the disks are of different sizes, then you can use the disk space of the smallest of the HDDs/SSDs; the rest of the disk space will be wasted. For example, if you add 4 X 1 TB HDD in the RAID-1 configuration, you can only use 1 TB (1 X 1 TB HDD) of disk space for storing data. 3 TB (3 X 1 TB HDD) will be used for protection. So, even if 3 drives fail at the same time, you will still have all your data.

    If you add 1 smaller-capacity HDD (1 X 500 GB HDD), then you can only use 500 GB (1 X 500 GB) from the RAID-1 array for storing your precious data. 500 GB from each of the 1 TB HDDs will be used for protection, and the remaining 500 GB of each will be unused or wasted.
  7. RAID-10
    RAID-10 is a hybrid of RAID-0 and RAID-1. To configure a RAID-10 array, you need at least 4 HDDs/SSDs, and the total number of drives must be even (4, 6, 8, etc.). RAID-10 has the read/write performance of RAID-0 and the data protection level of RAID-1. In RAID-10, every 2 HDDs/SSDs form a RAID-1 group. Then, all the RAID-1 groups are combined to form a RAID-0 array. In RAID-10, 1 HDD/SSD from each of the RAID-1 groups can fail. So, 1 drive from each pair of drives (in a RAID-1 group) can fail, and the RAID will still function. But if 2 of the HDDs/SSDs from the same RAID-1 group fail, the RAID will be inaccessible, and you will lose all your precious files. You can use half the storage capacity of all the HDDs/SSDs added to a RAID-10 array. For example, if you add 6 X 1 TB HDD in the RAID-10 configuration, 3 RAID-1 groups will be created. Each of the RAID-1 groups will have 2 X 1 TB HDD. Then, the 3 RAID-1 groups will be used to create a RAID-0 array. So, you can use 1 TB (1 X 1 TB HDD) of disk space from each of the RAID-1 groups for storing data, and the other 1 TB (1 X 1 TB HDD) will be used for data protection.
  8. RAID-5
    To create a RAID in the RAID-5 configuration, you need at least 3 HDDs/SSDs. RAID-5 provides 1 drive fail-safety. Parity data is calculated and distributed across the HDDs/SSDs in such a way that if any one of the HDDs/SSDs fails, the data of that failed HDD/SSD can still be recovered using the existing data and the remaining parity data. RAID-5 uses 1 HDD/SSD worth of disk space for storing the parity data. So, you can use the disk space of all the HDDs/SSDs added to the RAID-5 array but one. For example, if 6 X 1 TB HDD is configured in RAID-5, you can use 5 TB of disk space to store your important files. 1 TB will be used to store the parity data.
  9. RAID-6
    To create a RAID in the RAID-6 configuration, you need at least 4 HDDs/SSDs. RAID-6 provides 2 drive fail-safety. Two different sets of parity data are calculated and distributed across the HDDs/SSDs in such a way that if any two of the HDDs/SSDs fail, the data of those failed HDDs/SSDs can still be recovered using the existing data and the remaining parity data. RAID-6 uses 2 HDDs/SSDs worth of disk space to store the parity data. So, you can use the disk space of all the HDDs/SSDs added to the RAID-6 array but two. For example, if 6 X 1 TB HDD is configured in RAID-6, you can use 4 TB of disk space to store your important files. 2 TB will be used to store the parity data. A quick way to sanity-check these capacity figures is shown right after this list.
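If you want to sanity-check the usable-capacity figures above without opening the Synology RAID Calculator, the arithmetic is simple enough to do in any Linux shell. This is only a small sketch (not a Synology tool), assuming N equally sized drives of SIZE_TB terabytes each:

$ N=6; SIZE_TB=1

$ echo "RAID-5 usable: $(( (N - 1) * SIZE_TB )) TB"   # one drive's worth of space holds parity

$ echo "RAID-6 usable: $(( (N - 2) * SIZE_TB )) TB"   # two drives' worth of space hold parity

$ echo "RAID-10 usable: $(( N / 2 * SIZE_TB )) TB"    # half the drives mirror the other half

With 6 X 1 TB drives, this prints 5 TB, 4 TB, and 3 TB, respectively, matching the examples above.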

Creating a Storage Pool

The first thing you want to do to set up your NAS is to create a storage pool. You can create it using the Storage Manager app.

First, open the Storage Manager app from the Main Menu of Synology Web GUI, as shown in the screenshot below:

Storage Manager app should be opened.

Navigate to the Storage Pool section and click on Create, as shown in the screenshot below:

Now, you have to select the type of storage pool you want to create:

  • Better performance: This option will allow you to create a single volume/partition on the storage pool.
  • Higher flexibility: This option will let you create every type of Synology-supported storage pool.

To create storage pools that are optimized for performance only, select Better performance and click on Next.

Type in a pool description (optional).

Select the type of RAID you want to set up for the storage pool from the RAID type dropdown menu, as shown in the screenshot below:

The minimum number of HDDs/SSDs you need for this RAID should be displayed in the Minimum number of drives per RAID section. A short description of how your selected RAID will work, its advantages, and disadvantages will be displayed in the Description section, as you can see in the screenshot below:

Once you’ve set up the RAID settings of your storage pool, click on Next.

The unused HDDs/SSDs installed on your NAS should be displayed.

Select the HDDs/SSDs that you want to use for this storage pool and click on Next.

The existing data of the HDDs/SSDs you have selected must be removed to add them to your storage pool. To confirm the erase operation, click on OK.

You will be asked whether you want to check the HDDs/SSDs (for bad sectors or errors) you are adding to the storage pool.

If you want to check the HDDs/SSDs for bad sectors and other problems, select Yes and click on Next.

Otherwise, select No and click on Next.

The settings to be used to create the storage pool should be displayed. To create a storage pool with these settings, click on Apply.

A storage pool is being created. It may take a few seconds to complete.

Click on OK.

As you can see, a storage pool is created. The HDDs/SSDs you have selected are also added to the storage pool.

By default, a lot of information about the storage pool is displayed in the Storage Pool section of the Storage Manager. To hide this information, click on the arrow up icon, as marked in the screenshot below:

The storage pool information should be hidden.

Let’s create another storage pool.

In the same way, navigate to the Storage Pool section and click on Create, as shown in the screenshot below:

Now, select Higher flexibility storage pool type and click on Next.

Type in a pool description (optional).

As you can see, you have more RAID types available in the Higher flexibility section.

Select the RAID type you want from the RAID type drop-down menu, as marked in the screenshot below:

The minimum number of HDDs/SSDs you need for this type of RAID should be displayed in the Minimum number of drives per RAID section. A short description of how your selected RAID will work, its advantages, and disadvantages will be displayed in the Description section, as you can see in the screenshot below:

Once you’re done setting up the RAID type, click on Next.

Select the HDDs/SSDs you want to add to the storage pool and click on Next.

The existing data of the HDDs/SSDs you have selected must be removed to add them to your storage pool. To confirm the erase operation, click on OK.

You will be asked whether you want to check the HDDs/SSDs (for bad sectors or errors) you are adding to the storage pool.

If you want to check the HDDs/SSDs for bad sectors and other problems, select Yes and click on Next.

Otherwise, select No and click on Next.

The settings to be used to create the storage pool should be displayed. To create a storage pool with these settings, click on Apply.

A storage pool is being created. It may take a few seconds to complete.

Click on OK.

As you can see, a new storage pool is created.

To see more information about the newly created storage pool, click on the arrow down icon at the right side of the storage pool, as marked in the screenshot below:

A lot of information about the selected storage pool should be displayed, as you can see in the screenshot below:

Creating a Volume

Once you’ve created the necessary storage pools, you can create as many volumes as you want on each of these storage pools. Volumes are like partitions (of a storage device) for the Synology NAS storage pools.

To create a volume, navigate to the Volume section of Storage Manager and click on Create.

Select Custom and click on Next.

Select Choose an existing storage pool and click on Next.

From here, you have to select a storage pool where you want to create the volume.

Select a storage pool where you want to create the volume from the Storage pool drop-down menu, as shown in the screenshot below:

Once you’ve selected a storage pool, you should see more information about it.

Once you’ve selected a storage pool where you want to create the volume, click on Next.

Select the filesystem you want to format the volume with and click on Next.

You should see the following page.

Type in a description for the new volume (optional).

If you’ve selected a storage pool that supports multiple volumes, then you will be allowed to allocate the maximum space available on it or a portion of the available space from it.

Once you’re done, click on Next.

The settings to be used to create the volume should be displayed. To create a volume with these settings, click on Apply.

A new volume is being created. It may take a while to complete.

A new volume should be created, as you can see in the screenshot below:

Creating a Share

Once you’ve created the necessary volumes, you need to create shares on these volumes to be able to store files on the NAS and access them remotely.

To create a share, open the Control Panel app from the Main Menu of Synology Web GUI, as marked in the screenshot below:

The Control Panel app should be opened.

Click on Shared Folder, as marked in the screenshot below:

Click on Create.

Click on Create.

Type in the share name, a short description (optional), and select a volume from the Location dropdown menu, as marked in the screenshot below:

Once you’re done, click on Next.

If you want to encrypt your share, you can check the Encrypt this shared folder checkbox and type in an encryption key.

If you don’t want to encrypt the share, you don’t have to do anything here.

Once you’re done with this step, click on Next.

You can configure some advanced settings for the share from here.

If you want to perform checksums on the files you store on this share to make sure not a single bit is flipped in any way, check the Enable data checksum for advanced data integrity checkbox.

If you enable data checksum, then you can also check the Enable file compression checkbox to automatically compress the files you store on this share.

You can enable quota for this share as well by checking the Enable shared folder quota checkbox and typing in the amount of disk space (in GB) you want this share to use from your selected volume.

Once you’re done, click on Next.

The settings to be used to create the share should be displayed. To create a share with these settings, click on Apply.

Now, you have to set the necessary permissions for the users you want to give access to this share.

Once you’re done, click on OK.

A new share should be created.

Accessing the Share from Windows 10

You can access the share you’ve created on your Synology NAS very easily from a Windows 10 computer.

If you go to the Network section of the Explorer app, the Synology NAS should show up. You can access the share you’ve created on your Synology NAS from here, as you can see in the screenshot below:

You can also use the IP address of your Synology NAS to access the shares you’ve created on the NAS.

You can find the IP address of your Synology NAS in the Synology Web UI, as you can see in the screenshot below. In my case, the IP address is 192.168.0.110. It will be different for you. So make sure to replace it with yours from now on.

To connect to your Synology NAS shares using the IP address 192.168.0.110, open the Explorer app and navigate to the \\192.168.0.110 location, as shown in the screenshot below:

Type in the username and password of your Synology NAS and click on OK.

The shares that are accessible to the user you’ve logged in with should be displayed.

As you can see, I can access the share share1.

I can also copy files to the share with pretty good speeds.

The file is copied to the share share1, as you can see in the screenshot below:

You can also access the share from the Synology Web GUI using the File Station app. As you can see, the file I have copied to the share is accessible from the File Station app.

Accessing the Share from Linux

You can access the share you’ve created on your Synology NAS from Linux as well. You need to have Samba installed on your Linux distribution. Luckily, most of the desktop Linux distributions have Samba preinstalled. So, you probably won’t have to do anything to access the shares from Linux. You just need to know the IP address of your Synology NAS to connect to the shares on Synology NAS from Linux.

You can find the IP address of your Synology NAS in the Synology Web UI, as you can see in the screenshot below. In my case, the IP address is 192.168.0.110. It will be different for you. So, make sure to replace it with yours from now on.

Open the File Manager app and navigate to the location smb://192.168.0.110 and click on Connect.

Type in the username and password of your Synology NAS and click on Connect.

The shares that the user you’ve logged in as has access to should be listed.

As you can see, I can access the share1 share.

I can also copy files to the share, as you can see in the screenshot below:

I have copied the /etc directory to my share share1, as you can see in the screenshot below:

The /etc directory I have copied to the share1 share is also displayed in the File Station app, as you can see in the screenshot below:
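If you prefer the command line over a graphical file manager, you can also mount the share directly with the CIFS/SMB kernel client. This is only a minimal sketch, assuming a Debian/Ubuntu-based distribution (the package name may differ on other distributions), the share name share1, the IP address 192.168.0.110 used in this article, and a placeholder username:

$ sudo apt install cifs-utils

$ sudo mkdir -p /mnt/share1

$ sudo mount -t cifs //192.168.0.110/share1 /mnt/share1 -o username=your-username

You will be prompted for the password of your Synology user, and the share will then be available under /mnt/share1.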

Conclusion

In this article, I have shown you how to set up the Synology NAS model DS1821+, as well as how to install 2.5/3.5-inch HDDs/SSDs on the drive trays of the Synology NAS. How to power on the Synology NAS and install the DSM operating system on it is also covered here. Finally, you have learned how to create storage pools, volumes, and shares on the Synology NAS and access the shares from Windows and Linux operating systems.

]]>
How to Setup Synology Link Aggregation https://linuxhint.com/setup-synology-link-aggregation/ Wed, 24 Feb 2021 17:05:26 +0000 https://linuxhint.com/?p=91399 Link Aggregation is a method of combining multiple network interfaces into a single network interface. Link aggregation is used to increase the bandwidth of the network and to provide network fault tolerance.

In Synology NAS, link aggregation is called Bond. Your Synology NAS may have multiple network interfaces. You can bond multiple network interfaces to increase your Synology NAS bandwidth or to configure fault tolerance.

For example, you can bond 2×1 Gbps network interfaces to create a 2 Gbps network interface. Or, you can bond 2×1 Gbps network interfaces to create a single 1 Gbps fault-tolerant network interface. The fault-tolerant bonded network interface will use the same IP address no matter which physical network interface is used. So, if one fails for some reason, the other one will still work, and you will be able to connect to your Synology NAS without changing the IP address you use to connect to your Synology NAS.

In this article, I will show you how to create a bond network interface using multiple physical network interfaces on your Synology NAS to increase the network bandwidth and provide fault tolerance. So, let’s get started.

Installing Network Cables in the Synology NAS:

Before installing network cables in the RJ-45 ports of your Synology NAS, you should shut down your Synology NAS from the Synology Web GUI.

To shut down your Synology NAS from the Synology Web GUI, click on the User icon and click on Shutdown, as shown in the screenshot below.

To confirm the shutdown operation, click on Yes.

Your Synology NAS should be powered off within a few minutes.

Once your Synology NAS is powered off, all the LEDs should be off, as shown in the image below.

In the rear of your Synology NAS, you may have a single ethernet cable connected in the RJ-45 port of your NAS, as I have.

I have 3 extra unused ethernet ports on my NAS. You may have more or fewer unused ethernet ports on your NAS.

Connect ethernet cables on the unused RJ-45 ports of your NAS.

I have connected 3 more ethernet cables, as you can see in the screenshot below.

Once you’re done, press the power button to turn on your NAS.

Once all the LEDs of your NAS are on, you should be able to connect to your Synology Web GUI and configure network bonds.

Visit the Synology Web GUI from a web browser and go to Network from the Control Panel app as marked in the screenshot below.

In the Network Interface tab, you should see that all the network interfaces you’re using are Connected.

You can click on the down arrow button as marked in the screenshot below to find more information about the Connected network interfaces.

As you can see, all the network interfaces have their IP addresses and network bandwidth.

Creating a Load Balancing Network Bond:

If you want to connect multiple network interfaces to increase your NAS’s download/upload speed, you have to create a load balancing network bond.

To create a load balancing network bond interface, click on Create > Create Bond as marked in the screenshot below.

Now, select either the Balance-SLB or Balance-TCP options as marked in the screenshot below.

Balance-SLB: If you want to bond network interfaces from different network switches to increase your Synology NAS download/upload speed, select the Balance-SLB option.

Balance-TCP: If you have a switch that supports link aggregation, then configure your switch ports for link aggregation first and use this option to configure link aggregation for your Synology NAS.

I will select Balance-SLB as my switch does not support link aggregation.

Once you’re done selecting an option, click on Next.

Now, select the physical network interfaces that you want to add to your bond network and click on Next.

Set the IP address of your network bond manually. I will set it to 192.168.0.120.

If your switch/router supports Jumbo Frame, then you can set the MTU value manually.

My switch supports Jumbo Frame. So, I will set the MTU value to 9000.

If you want all the internet traffic to go through this network bond, check the Set as the default gateway checkbox as marked in the screenshot below.

Once you’re done setting up the network bond, click on Apply.

Click on Yes.

The network bond is being created. It may take a few seconds to complete.

Once the network bond is created, you can find the network bond in the Network Interfaces tab of Control Panel > Network.

As you can see, a network bond Bond 1 is created. It combines the physical ethernet interfaces LAN 3 and LAN 4. The IP address assigned to the network bond Bond 1 is 192.168.0.120. Also, notice that the network speed is 2000 Mbps (2 Gbps), 2 times the network speed of LAN 3 and LAN 4, which is 1000 Mbps (1 Gbps) each.

So, bonding 2 physical network interfaces in Balance mode increased the network bandwidth of the NAS. Now, you should be able to transfer files from your computer faster or access your NAS from multiple computers at the same time without any network performance penalty.
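If you enabled Jumbo Frames on the bond, you can verify that 9000-byte frames actually make it through end to end from a Linux computer on the same network. This is just a quick check, assuming the bond IP address 192.168.0.120 set above and that your computer’s network interface and your switch are also configured for an MTU of 9000 (8972 bytes of ICMP payload plus 28 bytes of headers add up to exactly 9000 bytes):

$ ping -M do -s 8972 192.168.0.120

If the pings go through without a "message too long" error, Jumbo Frames are working along the whole path.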

Creating a Fault-Tolerant Network Bond:

If you want to connect multiple network interfaces for fault tolerance, you must create an Active/Backup network bond.

To create an Active/Backup network bond interface, click on Create > Create Bond as marked in the screenshot below.

Select Active/Backup Mode and click on Next.

Now, select the physical network interfaces that you want to add to your bond network and click on Next.

In the same way as before, configure your network bond and click on Apply.

I will set a static IP address 192.168.0.110 for this network and enable Jumbo Frame. You can configure it any way you like.

Click on Yes.

The network bond is being created. It may take a few seconds to complete.

Once the network bond is created, you can find the network bond in the Network Interfaces tab of Control Panel > Network.

As you can see, a network bond Bond 2 is created. It combines the physical ethernet interfaces LAN 1 and LAN 2. The IP address assigned to the network bond Bond 2 is 192.168.0.110. Also, notice that the network speed is 1000 Mbps (1 Gbps), the same as the network speed of LAN 1 and LAN 2, which is 1000 Mbps (1 Gbps) each.

So, bonding 2 physical network interfaces in Active/Backup mode does not increase the NAS’s network bandwidth. But, it provides fault tolerance. If LAN 1 physical network interface stops working for some reason, you will still be able to access your NAS using the IP address 192.168.0.110 as long as the physical network interface LAN 2 is okay. Similarly, if LAN 2 physical network interface stops working, you will be able to access your NAS using the same IP address as long as LAN 1 is working.
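A simple way to verify the failover behavior is to keep a continuous ping running against the bond’s IP address from another computer while you unplug one of the bonded ethernet cables. This is just a rough check, assuming the IP address 192.168.0.110 assigned to Bond 2 above:

$ ping 192.168.0.110

You should see at most a few lost or delayed replies when the active interface goes down, after which the replies continue over the remaining interface.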

Resetting the Synology NAS Network:

At times, the network bond configuration you want may not work and leave your Synology NAS inaccessible. In that case, you need to reset your Synology NAS network configuration and configure the network bond again.

To reset your Synology NAS network configuration, press and hold the button in the rear of your NAS as marked in the image below till you hear a beep. Once you hear a beep, the network configuration should reset, and you should be able to connect to your NAS as you did before you configured the network bond.

Conclusion:

In this article, I have discussed what link aggregation is and how it will help you configure your Synology NAS network. I have shown you how to configure link aggregation in Balance and Active/Backup mode on your Synology NAS. I have also shown you how to reset your NAS’s network configuration in case of network misconfiguration.

]]>
Upgrade Memory of Synology NAS https://linuxhint.com/upgrade-synology-nas-memory/ Tue, 23 Feb 2021 17:21:13 +0000 https://linuxhint.com/?p=91068 Synology NAS comes preinstalled with 2 GB or 4 GB of memory, depending on the model you’ve bought. Depending on the applications you want to run on your Synology NAS, you may need more memory. For example, to run virtual machines on your Synology NAS, 2 GB or 4 GB memory is not enough.

In this article, I will show you how to upgrade the memory of your Synology NAS. So, let’s get started.

Checking Installed Memory Before Upgrade:

Before you upgrade the memory of your NAS, you can check the amount of memory you’ve already installed on your NAS.

To check the memory you’ve already installed on your NAS, open the Control Panel app from the Main Menu of the Synology Web GUI, as shown in the screenshot below.

Click on Info Center, as shown in the screenshot below.

As you can see, 4096 MB or 4 GB of memory is installed on my Synology NAS model DS1821+.
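If you have SSH access enabled on your Synology NAS (Control Panel > Terminal & SNMP), you can also check the installed memory from the command line. This is only a minimal sketch, assuming the IP address 192.168.0.110 used in this article and a placeholder username:

$ ssh your-admin-user@192.168.0.110

$ grep MemTotal /proc/meminfo

The MemTotal value is reported in kilobytes, so 4 GB of memory shows up as roughly 4,000,000 kB.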

Shutting Down the NAS:

Before you can open up the NAS to remove the old memory and install new modules, you must shut down the NAS and unplug the power cable and the RJ-45 cables from the NAS. This is to ensure that there is no power on any of the components of the Synology NAS and to avoid any short circuits as a result. Short circuits are harmful to electronic components and may permanently damage your NAS.

You can shut down the NAS from the Synology Web GUI. To do that, click on the user icon from the Synology Web GUI panel’s top-right corner and click on Shutdown, as shown in the screenshot below.

Click on Yes to confirm the shutdown operation.

The Synology NAS should be powered off in a few minutes.

Once the NAS is powered off, all the status LEDs and the power button LED should be off.

Now, remove the power cable from the rear/back of the NAS.

Also, remove all the RJ-45 cables from the rear/back of the NAS.

Once all the cables are removed from your Synology NAS, you can open it up and upgrade the memory of your Synology NAS.

Upgrading Memory of the NAS:

You can buy Synology-compatible memory from Synology and install it on your NAS. Make sure to buy the correct type (e.g., DDR3, DDR4) of memory in the correct form factor (e.g., UDIMM, RDIMM, SO-DIMM) for your Synology NAS. You can read the product specification page or product datasheet of your NAS model to see what type of memory you need to buy for your NAS.

The Synology NAS model I have is DS1821+, and it supports DDR4 SO-DIMM ECC memory. Synology has sent me 2 of the 16 GB DDR4 SO-DIMM 2666 MHz memory modules for review. In this article, I will use them to upgrade the memory of the Synology NAS model DS1821+.

To upgrade the memory, you need a PH-2 screwdriver.

First, flip your NAS, and you should find a cover, as you can see in the screenshot below.

Remove the screws from the cover using a PH-2 screwdriver.

Once the screws are removed, pull the cover off your NAS.

You should see the memory module(s) that are already installed on your Synology NAS.

To remove the memory module that is already installed, gently push the levers on both sides of the memory module outward at the same time.

The memory module should be released from the slot.

Now, hold the edges of the memory module and gently pull it out of the slot.

The memory module should be removed from the slot.

Now, I am going to install 2 of the 16 GB DDR4 SODIMM memory modules on the memory slots of the NAS.

Align the empty slot’s notch with the gap between the gold connectors of the memory module and gently push the memory module into the slot.

Once the memory module is in the slot, gently push the memory module downward till you hear a click. The memory module should be locked in the slot.

The memory module should be installed on the slot.

Similarly, align the notch of the other empty slot with the gap between the gold connectors of the memory module and gently push the memory module into the slot.

Once the memory module is in the slot, gently push the memory module downward till you hear a click. The memory module should be locked in the slot.

Both the memory modules should be installed at this point.

Now, put the cover on the memory slots.

Tighten the screws in the cover using a PH-2 screwdriver.

Now, connect the power cable and the RJ-45 cables to your NAS.

Once all the cables are plugged in, press the power button to power on the NAS.

After a few minutes, the NAS should be ready to connect.

Checking Installed Memory After Upgrade:

Once you have upgraded the memory of your NAS, visit the Synology Web GUI from your favorite web browser.

Go to Info Center from the Control Panel app.

As you can see, the memory is upgraded successfully from 4 GB to 32 GB.

Conclusion:

In this article, I have shown you how to check the amount of memory you have already installed on your Synology NAS from the Synology Web GUI. I have also shown you how to access the memory slots of your Synology NAS model DS1821+. I have shown you how to remove memory modules from your Synology NAS model DS1821+ and install new memory modules as well.

]]>
Ways to Determine the File System Type in Linux https://linuxhint.com/determine-file-system-type-linux/ Mon, 08 Feb 2021 03:10:30 +0000 https://linuxhint.com/?p=89217 In computing, a filesystem is a layout or format used to store files in a storage device. A filesystem is used to logically divide a storage device to keep different files organized nicely in the storage device to be searched, accessed, modified, removed, etc. easily from the storage device.

There are many filesystems available today. Different filesystems have different structures, logics, features, flexibility, security, etc. Some of the most common filesystems are Ext4, Btrfs, XFS, ZFS, NTFS, FAT32, etc.

There are times when a Linux system administrator will need to determine the filesystem type to simply mount the filesystem or to diagnose problems with the filesystem. Different filesystems have different tools for diagnosing problems, checking for errors and fixing them, etc. So, you have to know the filesystem a storage device is using to determine the maintenance tool/tools to use.

In this article, I will show you different ways you can determine the filesystem type in Linux. So, let’s get started.

Way 1: Using the df Command-Line Tool

The df command-line program is preinstalled on almost every Linux distribution you will find. You can use the df command-line program to find the filesystem type of all the mounted storage devices and partitions.

To find the filesystem type of all the mounted storage devices and partitions of your computer, run the df command as follows:

$ df -Th

The df command will show you the following information:
Filesystem: The storage device name or partition name that is currently mounted.

Mounted on: The directory where the storage device/partition (Filesystem) is mounted.

Type: The filesystem type of the mounted storage device/partition.

Size: The size of the mounted storage device/partition.

Used: The disk space that is used from the mounted storage device/partition.

Use%: The percentage of disk space that is used from the mounted storage device/partition.

Avail: The amount of free disk space of the mounted storage device/partition.

On Ubuntu, the df command will show you many loop devices as you can see in the screenshot below.

You can hide the loop devices with the -x option of the df command as follows:

$ df -Th -x squashfs

You can also hide the tmpfs devices from the output of the df command.

To hide the tmpfs devices from the output of the df command as well, run the df command with the -x option as follows:

$ df -Th -x squashfs -x tmpfs

Now, the output looks much cleaner. If you want, you can remove the udev devices from the df command’s output.

To remove the udev devices from the output of the df command as well, run the df command as follows:

$ df -Th -x squashfs -x tmpfs -x devtmpfs

Only the physical storage devices and partitions will be displayed in the output of the df command. The output looks much nicer than before as well.

Way 2: Using the lsblk Command

The lsblk command-line program is preinstalled on almost every Linux distribution you will find. You can use the lsblk command-line program to find the Filesystem type of all (mounted and unmounted) the storage devices and partitions of your computer.

To find the filesystem type of all (mounted and unmounted) the storage devices and partitions of your computer, run the lsblk command as follows:

$ lsblk -f

The lsblk command will show you the following information:
NAME: The storage device name or partition name of a storage device.

MOUNTPOINT: The directory where the storage device/partition (Filesystem) is mounted (if mounted).

FSTYPE: The filesystem type of the storage device/partition.

LABEL: The filesystem label of the storage device/partition.

UUID: The UUID (Universally Unique IDentifier) of the filesystem of the storage device/partition.

FSUSE%: The percentage of disk space that is used from the storage device/partition.

FSAVAIL: The amount of free disk space of the storage device/partition

Just as before, you can hide the loop devices from the output of the lsblk command.

To hide the loop devices from the output of the lsblk command, run the lsblk command with the -e7 option as follows:

$ lsblk -f -e7

As you can see, all the loop devices are removed from the output of the lsblk command. The output looks a lot cleaner than before.

Way 3: Using the blkid Command

The blkid command-line program is preinstalled on almost every Linux distribution you will find. You can use the blkid command-line program to find the Filesystem type of all (mounted and unmounted) the storage devices and partitions of your computer.

To find the filesystem type of all (mounted and unmounted) the storage devices and partitions of your computer, run the blkid command as follows:

$ blkid

The blkid command will show you the following information:
NAME: The name of the storage device or partition, e.g., /dev/sda1, /dev/sda5.

UUID: The UUID (Universally Unique IDentifier) of the filesystem of the storage device/partition.

TYPE: The filesystem type of the storage device/partition.

PARTUUID: The UUID (Universally Unique IDentifier) of the partition.

You can also hide the loop devices from the output of the blkid command as before.

To hide the loop devices from the output of the blkid command, run the blkid command as follows:

$ blkid | grep -v 'TYPE="squashfs"'

As you can see, the loop devices are not displayed in the output of the blkid command. The output looks much nicer than before.

Way 4: Using the file Command

The file command-line program is preinstalled on almost every Linux distribution you will find. You can use the file command-line program to identify the type of a file on Linux. As every device is considered a file in Linux, you can use the file command-line program to determine the filesystem type of a storage device or partition in Linux.

For example, to determine the filesystem type of the partition sda1, you can run the file command as follows:

$ sudo file -sL /dev/sda1

If you read the file command’s output, you can see that the sda1 partition is using the FAT32 filesystem.

In the same way, you can find the filesystem type of the sda5 partition with the file command as follows:

$ sudo file -sL /dev/sda5

As you can see, the partition sda5 is using the EXT4 filesystem.

Way 5: Using the mount Command and /etc/mtab File

The /etc/mtab file contains an entry for all the mounted storage devices and partitions of your computer. You can read this file to find the filesystem type of your storage devices and partitions. The mount command-line program also prints the contents of the /etc/mtab file. So, you can use the mount command-line program as well to find the same data.

You can read the contents of the /etc/mtab file with the following command:

$ cat /etc/mtab

As you can see, there is a lot of mount information in the /etc/mtab file.

You can find the same information with the mount command as you can see in the screenshot below.

$ mount

As the /etc/mtab file or the mount command’s output has many mount entries, it’s hard to interpret it. You can use the grep command to filter the output and find what you need very easily.

For example, to find the filesystem type of the sda1 partition using either the mount command or /etc/mtab file, run one of the following commands:

$ cat /etc/mtab | grep /dev/sda1

Or,

$ mount | grep /dev/sda1

As you can see, the filesystem type of the sda1 partition is FAT32 (vfat).

In the same way, to find the filesystem type of the sda5 partition using either the mount command or /etc/mtab file, run one of the following commands:

$ cat /etc/mtab | grep /dev/sda5

Or,

$ mount | grep /dev/sda5

As you can see, the filesystem type of the sda5 partition is EXT4.

Way 6: Using the /etc/fstab File

The /etc/fstab file keeps an entry for each of the storage devices or partitions that is to be mounted automatically at boot time. So, you can read this file to find the filesystem type of your desired storage device or partition.

Suppose your computer is not configured to mount a storage device or partition at boot time automatically. In that case, it’s very likely that there won’t be any entry for that storage device or partition in the /etc/fstab file, so you won’t find any information about it there. You will have to use the other methods described in this article to find the filesystem type of that storage device or partition.

You can read the contents of the /etc/fstab file with the following command:

$ cat /etc/fstab

The contents of the /etc/fstab file.

You can see that the storage device or partition with the UUID 3f962401-ba93-46cb-ad87-64ed6cf55a5f uses the EXT4 filesystem.

The storage device or partition that has the UUID dd55-ae26 is using the vfat/FAT32 filesystem.

The lines starting with a # in the /etc/fstab file are comments. These lines don’t serve any functional purpose; they are used for documentation only.

If you want, you can hide them using the grep command as follows:

$ grep -v '^#' /etc/fstab

As you can see, the comments are gone, and the output looks a lot cleaner than before.

The /etc/fstab file uses UUIDs instead of storage device or partition names by default. You can use the blkid command to convert a UUID to the corresponding storage device or partition name.

For example, to convert the UUID 3f962401-ba93-46cb-ad87-64ed6cf55a5f to the name of the storage device or partition, run the blkid command as follows:

$ blkid -U 3f962401-ba93-46cb-ad87-64ed6cf55a5f

As you can see, the partition sda5 has the UUID 3f962401-ba93-46cb-ad87-64ed6cf55a5f.

In the same way, you can find the storage device or partition name that has the UUID DD55-AE26 as follows:

$ blkid -U DD55-AE26

As you can see, the partition sda1 has the UUID DD55-AE26.

Conclusion:

This article has shown you different ways to determine the filesystem type of a storage device/partition in Linux. I have shown you how to use the df, lsblk, blkid, file, and mount command to determine the filesystem type of the Linux storage devices and partitions. I have also shown you how to determine the filesystem type of the storage devices and partitions of your Linux system by reading the /etc/mtab and /etc/fstab files.

]]>
Kaisen Linux – A Dedicated System Rescue Linux Distribution https://linuxhint.com/download-kaisen-linux/ Mon, 01 Feb 2021 12:21:30 +0000 https://linuxhint.com/?p=88407 Kaisen Linux is an operating system developed for IT professionals to diagnose and deal with the faults/failures of an installed operating system. Kaisen Linux provides all the necessary tools for diagnosing and fixing an installed operating system, recovering lost data, fixing boot problems, formatting disks, and many more.

Kaisen Linux is a rolling Linux distribution which is based on Debian GNU/Linux testing. As Kaisen Linux is a rolling release, you can always use up to date software/tools on Kaisen Linux.

One of the best things about Kaisen Linux is that you can boot it from a USB thumb drive and do what you need to do to rescue a broken system without needing to install Kaisen Linux on your computer. Kaisen Linux also provides all the necessary drivers (Wi-Fi, video, sound, Bluetooth, etc.) so that you don’t have to install anything after booting it from the USB thumb drive. Kaisen Linux can also boot on both BIOS and UEFI hardware.

You can load the entire Kaisen Linux into your computer’s RAM/memory while you boot Kaisen Linux from the USB thumb drive. You can remove the USB thumb drive from your computer once Kaisen Linux has booted in Live mode. This feature will save a USB port on your computer if you’re short on USB ports.

In this article, I will show you how to download Kaisen Linux and make bootable USB thumb drives of Kaisen Linux from Windows and Linux operating systems. I will also show you how to boot Kaisen Linux from the USB thumb drive and install Kaisen Linux on your computer. So, let’s get started.

Downloading Kaisen Linux:

You can download Kaisen Linux from the official website of Kaisen Linux.

Visit the official website of Kaisen Linux from a web browser and click on DOWNLOADS once the page loads.

You can download different flavors of Kaisen Linux from here.

Kaisen Linux ISO image is available with the following desktop environments:

  • MATE Desktop Environment
  • KDE Plasma 5 Desktop Environment
  • XFCE 4 Desktop Environment
  • LXDE Desktop Environment

Click on the desktop environment icon you like, and a Download link should appear. Click on it to download the ISO image of Kaisen Linux with your desired desktop environment.

Your browser should prompt you to save the Kaisen Linux ISO image. Click on Save.

Your browser should start downloading the ISO image of Kaisen Linux. It may take quite a while to complete.

Making a Bootable USB Thumb Drive of Kaisen Linux on Windows:

Once the Kaisen Linux ISO image is downloaded, you can make a bootable USB thumb drive of Kaisen Linux.

On Windows 10, you can use Rufus to make a bootable USB thumb drive easily.

Rufus is a free program that you can download from the official website of Rufus.

Visit the official website of Rufus from your favorite web browser. Once the page loads, scroll down a little bit and click on the Rufus Portable link as marked in the screenshot below.

Your browser should prompt you to save the Rufus Portable executable. Click on Save.

Rufus should be downloaded. It’s a small piece of software (about 1 MB in size).

Once Rufus is downloaded, plug in the USB thumb drive on your computer and run Rufus.

You will see the following prompt if you’re starting Rufus for the first time. Click on No.

Rufus should start.

Make sure your USB thumb drive is selected in the Device section. Then, click on SELECT to select the Kaisen Linux ISO image that you’ve downloaded.

A file picker should be opened. Select the Kaisen Linux ISO image that you’ve downloaded and click on Open.

The Kaisen Linux ISO image should be selected.

If you want to save the changes you make to the Live Kaisen Linux OS, set the persistence partition size in the Persistent partition size section, as shown in the screenshot below.

You can either drag the slider or type in the amount of disk space (in GB) to set the persistence partition size.

I will not create a persistence partition in this article. I just wanted to show you how to do it. That’s all.

To start flashing the USB thumb drive with the Kaisen Linux ISO file, click on START.

Click on Yes.

Select Write in ISO Image mode (Recommended) and click on OK.

Click on OK.

Rufus should start copying all the required files from the ISO image to the USB thumb drive. It may take a while to complete.

Once your USB thumb drive is READY, click on CLOSE.

Now, right-click (RMB) on your USB thumb drive and click on Eject to safely remove the USB thumb drive from your computer. Your USB thumb drive should be ready to use.

Making a Bootable USB Thumb Drive of Kaisen Linux on Linux:

You can also make a bootable USB thumb drive of Kaisen Linux from any Linux distribution. You don’t have to download any extra software to do so. Every Linux distribution already has the dd command-line program that you can use to make bootable USB thumb drives from an ISO image.

Let’s say you have downloaded the Kaisen Linux ISO image (kaisenlinuxrolling1.5-amd64-LXDE.iso) in the ~/Downloads directory of your computer.

$ ls -lh ~/Downloads
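Optionally, before flashing the image, you can verify that the download is not corrupted by computing its checksum and comparing it against the checksum published on the Kaisen Linux download page (if one is provided there):

$ sha256sum ~/Downloads/kaisenlinuxrolling1.5-amd64-LXDE.iso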

To flash the Kaisen Linux ISO image to the USB thumb drive, you need to know the USB thumb drive’s device name.

To do that, run the following command once before and after you plug in the USB thumb drive on your computer.

$ sudo lsblk -e7

You should see a new device if you compare the output.

In my case, the new device name is sdc, as you can see in the screenshot below. It may be different for you. So, make sure to replace it with yours from now on.

To flash the Kaisen Linux ISO image to the USB thumb drive sdc, run the following command:

$ sudo dd if=~/Downloads/kaisenlinuxrolling1.5-amd64-LXDE.iso of=/dev/sdc bs=1M status=progress

The Kaisen Linux ISO image is being flashed on to the USB thumb drive. It may take a while to complete.

At this point, the Kaisen Linux ISO image should be flashed on to the USB thumb drive.

Now, eject the USB thumb drive sdc with the following command:

$ sudo eject /dev/sdc

Enabling Persistence on Kaisen Linux Bootable USB Thumb Drive from Linux:

On Windows, you have used Rufus to create a bootable USB thumb drive of Kaisen Linux. It was really easy to add persistence support from Rufus. But, on Linux, you have to create a persistence partition manually to enable persistence.

First, plug in the bootable USB thumb drive of Kaisen Linux, which you’ve created earlier on your Linux computer.

Then, unmount all the mounted partitions of your USB thumb drive with the following command:

$ sudo umount /dev/sdc{1,2}

As you can see, the Kaisen Linux bootable USB thumb drive currently has 2 partitions (sdc1 and sdc2).

$ sudo fdisk -l /dev/sdc

The sdc1 partition is the main partition. The sdc2 partition is a partition within the sdc1 partition.

Notice that the sdc1 partition ends in sector number 7234655. So, if you want to create a new partition, it will have to start from sector number 7234655 + 1 = 7234656.

Also, notice that the Kaisen Linux bootable USB thumb drive (in my case) is 29.43 GiB in size. In total, the USB thumb drive has 61702144 sectors.
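If you want to double-check the total sector count reported by fdisk, you can read it directly with the blockdev command, which prints the size of the device in 512-byte sectors. This assumes the same device name sdc used in this article:

$ sudo blockdev --getsz /dev/sdc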

So, the new partition will have to end in sector number 61702144 – 1 = 61702143 if you want to use all the free space of your USB thumb drive for persistence.

NOTE: If you want to create a smaller partition for persistence, you can do so. You don’t have to use all the free space of your USB thumb drive like I am doing in this article.

So, if you want to use all the free disk space of the USB thumb drive, the new persistence partition will have to,

  • Start from sector number 7234656
  • End in the sector number 61702143

NOTE: These numbers will change for you, as you will be using a different USB thumb drive than mine. Also, the Kaisen Linux ISO file may be a different size by the time you read this article. So, always make sure to do the necessary calculations and adjust the numbers as required.
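
If you would rather let the shell do the arithmetic for you, here is a minimal sketch that computes the same first and last sector numbers from sysfs. It assumes that your USB thumb drive is sdc (as above) and that fdisk reports 512-byte sectors, which is the usual case for USB thumb drives.

$ START=$(cat /sys/block/sdc/sdc1/start)   # first sector of sdc1
$ SIZE=$(cat /sys/block/sdc/sdc1/size)     # number of sectors in sdc1
$ TOTAL=$(cat /sys/block/sdc/size)         # total sectors of the USB thumb drive
$ echo "First sector: $((START + SIZE)), last sector: $((TOTAL - 1))"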

Now, open the Kaisen Linux bootable USB thumb drive sdc with the fdisk partitioning program as follows:

$ sudo fdisk /dev/sdc

fdisk should be opened.

To create a new partition, press n, and press <Enter>.

We want to create a primary partition. So, press p and then press <Enter>.

We want to create a 3rd partition. So, press 3 and then press <Enter>.

Type in 7234656 as the first sector number and press <Enter>.

Type in 61702143 as the last sector number and press <Enter>.

A new partition should be created.

The start sector of the new partition may contain the signature of an old partition. If it does, fdisk will show you the following prompt.

If you see this prompt, press Y and then press <Enter>, as you want to remove the old signature and create a new filesystem there.

The existing partition signature should be removed.

Press p and then press <Enter> to list all the existing partitions of your USB thumb drive.

As you can see, a new partition sdc3 is created. The start sector is 7234656, and the end sector is 61702143, just like we wanted.

To write the changes to the partition table of your USB thumb drive, press w and then press <Enter>.

Now, create an EXT4 filesystem on the newly created partition sdc3 of your USB thumb drive and add the label persistence to the EXT4 filesystem as follows:

$ sudo mkfs.ext4 -L persistence /dev/sdc3

An EXT4 filesystem with the label persistence should be created on the sdc3 partition of your USB thumb drive, as you can see in the screenshot below.

Mount the partition sdc3 in the /mnt directory as follows:

$ sudo mount /dev/sdc3 /mnt

Create a new file persistence.conf on the filesystem root of the sdc3 partition with the content ‘/ union’ as follows:

$ echo "/ union" | sudo tee /mnt/persistence.conf

A new file persistence.conf with the content ‘/ union’ should be created in the sdc3 partition, as you can see in the screenshot below.

$ ls -lh /mnt

$ cat /mnt/persistence.conf

Now, unmount the newly created partition sdc3 as follows:

$ sudo umount /dev/sdc3

Finally, eject the Kaisen Linux bootable USB thumb drive sdc from your computer with the following command:

$ sudo eject /dev/sdc

Booting Kaisen Linux from the USB Thumb Drive:

To boot Kaisen Linux from the USB thumb drive, plug the USB thumb drive into your computer and enter your computer’s BIOS. Usually, you keep pressing the <Esc> or <Delete> key just after powering on the computer to enter the BIOS.

From the BIOS of your computer, boot from the USB device (the one you’ve flashed with the Kaisen Linux ISO).

You should see the Kaisen Linux GRUB menu as in the screenshot below.

Select Kaisen Linux Rolling LXDE 1.5 Live (English) and press <Enter> to start the Kaisen Linux in Live Mode and use the English language.

From here, you can select how you want Kaisen Linux Live to run on your computer.

default: Start Kaisen Linux without any special options.

failsafe: Start Kaisen Linux in repair mode. This mode disables certain features (e.g., multi-threading and direct access to RAM) to facilitate the detection of different problems.

forensic: Start Kaisen Linux with some security features. This mode deactivates some USB equipment, NVIDIA and AMD GPUs, a swap partition, etc.

persistence: Start Kaisen Linux with persistence enabled. By default, the changes you make to Kaisen Linux in Live mode are erased when you power off or restart your computer. Persistence mode allows you to store the changes on your USB thumb drive. You need to set up your USB thumb drive in a very specific way to enable persistence.

encrypted persistence: This mode is the same as persistence mode. The only difference is that the persistence partition will be encrypted with cryptsetup and LUKS.

loaded to RAM, default: Copy the contents of the USB thumb drive to the RAM and boot Kaisen Linux Live in default mode. This mode allows you to remove the USB thumb drive from the computer once Kaisen Linux boots.

loaded to RAM, failsafe: Copy the contents of the USB thumb drive to the RAM and boot Kaisen Linux Live in failsafe mode. This mode allows you to remove the USB thumb drive from the computer once Kaisen Linux boots.

loaded to RAM, forensic: Copy the contents of the USB thumb drive to the RAM and boot Kaisen Linux Live in forensic mode. This mode allows you to remove the USB thumb drive from the computer once Kaisen Linux boots.

Once you select an option, Kaisen Linux Live should be loading. It may take a few seconds to complete.

Kaisen Linux should start in Live mode.

Kaisen Linux comes with many pre-installed software packages and tools to help you rescue a broken system, recover data, and much more.

Installing Kaisen Linux:

You can install Kaisen Linux on your computer and use it as a regular Linux distribution if you want.

To install Kaisen Linux, boot Kaisen Linux from the USB thumb drive and select Kaisen Linux Rolling LXDE 1.5 Install from the Kaisen Linux GRUB menu as marked in the screenshot below.

Select Kaisen Linux Graphical Install and press <Enter>.

The Kaisen Linux Graphical Installer should start. You can install Kaisen Linux on your computer from here.

First, select your language and click on Continue.

Select your location and click on Continue.

Select your keyboard layout and click on Continue.

Type in a hostname or computer name and click on Continue.

Type in your full name and click on Continue.

Type in your username or login name and click on Continue.

Type in a password and click on Continue.

Select your timezone and click on Continue.

Now, you have to partition your HDD/SSD.

You can select Guided – use entire disk and click on Continue to let Kaisen Linux use the entire HDD/SSD and create the required partitions automatically.

If you want to manually partition your HDD/SSD, select Manual and click on Continue.

In this article, I will show you how to do manual partitioning for installing Kaisen Linux.

Once you select the Manual partitioning method, you will be asked to select an HDD/SSD which you want to partition.

Select the HDD/SSD you want to partition manually and click on Continue.

If you’re using a new HDD/SSD, you most likely won’t have a partition table. In this case, you will see the following prompt.

Select Yes and click on Continue to create a new partition table on your HDD/SSD.

Once the partition table is created, you can create as many partitions as you want.

To install Kaisen Linux, we need at least 2 partitions.

  • A 256 MB Reserved BIOS boot area partition or EFI System Partition for keeping the bootloader files.
  • A root (/) partition for keeping all the system files and data.

Let’s create the boot partition first.

To create a new partition, select the FREE SPACE and click on Continue to create a new partition.

Select Create a new partition and click on Continue.

As you’re creating a boot partition, type in 256 MB as the new partition size and click on Continue.

Select Beginning and click on Continue.

Select Use as and click on Continue.

Now, if you’re using a UEFI-compatible motherboard (most likely you are), select EFI System Partition and click on Continue.

If you’re trying to install Kaisen Linux on a very old computer that only supports BIOS, then select Reserved BIOS boot area and click on Continue.

Then, select Done setting up the partition and click on Continue.

The boot partition should be created.

To create the root (/) partition, select the FREE SPACE and click on Continue.

Select Create a new partition and click on Continue.

Type in the size of the root partition you want and click on Continue.

If you want to allocate all the available free space to the root (/) partition, you can use the keyword max instead of a specific partition size as well.

Make sure that the Mount point is set to /.

Then, select Done setting up the partition, and click on Continue.

The root (/) partition should be created.

Now, select Finish partitioning and write changes to disk and click on Continue.

If you haven’t created a Swap partition, you will see the following prompt asking you to go back and create one.

I am not going to create a Swap partition. So, I will select No and click on Continue.

To save the changes to the partition table, select Yes and click on Continue.

The Kaisen Linux installer should start installing Kaisen Linux on your HDD/SSD. It may take a while to complete.

Kaisen Linux is being installed.

Once the installation is complete, your computer should reboot.

Once you boot from the HDD/SSD where you’ve installed Kaisen Linux, you should see the following GRUB menu.

Select Kaisen GNU/Linux and press <Enter> to boot Kaisen Linux.

Kaisen Linux is being loaded from the HDD/SSD. It may take a few seconds.

Kaisen Linux login window should be displayed.

You can use the login username and password you’ve set during the installation to log in to Kaisen Linux.

Kaisen Linux is running from the HDD/SSD.

Conclusion:

In this article, I have shown you how to download Kaisen Linux and make a bootable USB thumb drive of Kaisen Linux from Windows and Linux operating systems. I have shown you how to boot Kaisen Linux from the USB thumb drive and install Kaisen Linux on your computer as well.

References:

[1] Live informations – Kaisen Linux – https://docs.kaisen-linux.org/index.php/Live_informations

]]>
Using Snap Package Manager on Ubuntu https://linuxhint.com/using-snap-package-manager-on-ubuntu/ Sat, 30 Jan 2021 19:51:07 +0000 https://linuxhint.com/?p=88315 Snap is a tool used to bundle an app and its required dependencies so that it works on different Linux distributions without any modification.

Snap apps are hosted in the Snap Store. At the time of this writing, there are thousands of open-source and proprietary apps available in the snap store.

In this article, I am going to show you how to use the Snap package manager on Ubuntu. So, let’s get started!

Searching for Snap Packages

To install a Snap package, you need to know the package’s name and whether it is available in the Snap package repository or not. To find this information, you can search the Snap package repository for your desired software/app from the command-line very easily.

For example, to search for the JetBrains PyCharm IDE, search for the packages that match the pycharm keyword with the following command:

$ sudo snap find pycharm

The Snap packages that matched the pycharm keyword should be listed.

You should find the name of the Snap package, the version that is going to be installed by default, the name of the publisher, and its summary.

Knowing More About a Snap Package

Before you install a Snap package, you may want to know more about it.

To know more about, let’s say, the Snap package pycharm-community, run the following command:

$ sudo snap info pycharm-community

A lot of information about the pycharm-community Snap package should be displayed.

In the top section, you have the name, a summary, the publisher name, the Snap Store URL, the official page of the software/app it installs, the license, the description, and the ID of the Snap package.

In the bottom section, you have a list of all the available channels (you can think of them as versions of the software/app) that you can install. The latest/stable channel should be the default for all the Snap software/apps. If you want to install an older version of the software/app, you can specify the required channel during the installation of the Snap package.

Installing a Snap Package

To install the latest stable version of the PyCharm Community software/app, you can install the pycharm-community Snap package as follows:

$ sudo snap install pycharm-community

If you want to install a specific version of the software/app from the Snap Store, you can specify the channel to use during installation with the --channel command-line option as follows:

$ sudo snap install pycharm-community --channel latest/stable

Some Snap Store software/apps will show you the following error message. This is because Snap software/apps use sandboxes for an extra layer of security. Sandboxing a Snap software/app will not let the software/app modify the filesystem outside the sandbox (its specified installation directory).

Some software/apps will need to modify the filesystem (e.g., a text editor or IDE). So, you can’t use the sandbox feature of Snap for these software/apps. To install the Snap Store software/apps that need to modify the filesystem, you must use the --classic command-line option during installation.

You can install a Snap Store software/app (e.g., pycharm-community) that does not use the sandboxing feature of Snap as follows:

$ sudo snap install pycharm-community --channel latest/stable --classic

The Snap software/app is being downloaded from the Snap Store, and it may take a while to complete.

At this point, the Snap package should be installed.

Once the PyCharm Community snap package is installed, you should be able to find it in the Application Menu of Ubuntu. You can run it just like any other app.
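
If the snap also provides a command-line launcher of the same name (the pycharm-community snap does, as far as I can tell), you can start it from a terminal as well:

$ pycharm-community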

Listing Installed Snap Packages

You can list all the Snap Store packages that are installed on your Ubuntu machine with the following command:

$ sudo snap list

All the Snap Store packages that are installed on your computer should be listed.

Upgrading a Snap Package

Upgrading a Snap Store package is easy.

To demonstrate the process, I have installed the webstorm (JetBrains WebStorm IDE) Snap Store package on my Ubuntu machine, as you can see in the screenshot below.

$ sudo snap list

You can upgrade the webstorm Snap Store app with the following command:

$ sudo snap refresh webstorm

You can also upgrade or downgrade the app to a specific channel.

For example, I have the 2019.3/stable channel of the webstorm Snap Store app installed on my Ubuntu machine. Let’s say you want to upgrade it to the latest/stable channel.

$ sudo snap info webstorm

To upgrade the webstorm Snap store app to the latest/stable channel, you can run the following command:

$ sudo snap refresh webstorm --channel latest/stable

The webstorm Snap Store app is being updated to the latest/stable channel.

The webstorm app is upgraded to 2020.3.1, as you can see in the screenshot below.

Webstorm Snap app is upgraded to the latest/stable version 2020.3.1.

Disable and Enable Snap Apps

In a traditional package management system, you can only install, uninstall, or upgrade a package.

One big advantage of Snap Store apps is that you can disable an app if you no longer need it. When you disable a Snap Store app, it will still be available on your computer, but the Snap daemon won’t load the app. You can enable the app whenever you need it.

I think this is a very good solution. If you don’t need an app all the time, you can keep it disabled and enable it only when you need it. This may save a lot of memory on your computer.

Right now, the WebStorm IDE is installed on my Ubuntu machine from the Snap Store. So, I can now access it from the Application Menu of my computer.

To disable the webstorm Snap Store app, run the following command:

$ sudo snap disable webstorm

The webstorm Snap Store app should be disabled.

As you can see, the disabled option is added to the webstorm Snap Store app.

$ sudo snap list

Now, you won’t find the WebStorm IDE app on the Application Menu of your computer.

To enable the webstorm Snap Store app again, run the following command:

$ sudo snap enable webstorm

The webstorm Snap Store app should be enabled.

The disabled option is removed from the webstorm Snap Store app once it is enabled.

Once you have enabled the webstorm Snap Store app, the WebStorm IDE should be available in the Application Menu of your computer again.

Uninstalling a Snap Package

If you don’t like a Snap Store app that you have installed, you can uninstall it easily.

For example, to remove the webstorm Snap Store app, run the following command:

$ sudo snap remove webstorm

The Snap Store app webstorm should be removed.

You can then see that the Snap Store app webstorm is not on the list anymore.

$ sudo snap list

Conclusion

In this article, I have shown you how to search for Snap Store packages and find more information about a Snap Store package. I have shown you how to install, upgrade, enable/disable, and uninstall a Snap Store package. This article should help you get started with Snap package manager on Ubuntu.

]]>
Useful Mount Options of the Btrfs Filesystem https://linuxhint.com/btrfs-filesystem-mount-options/ Sat, 30 Jan 2021 18:26:58 +0000 https://linuxhint.com/?p=88304

Like any other filesystem, the Btrfs filesystem also has a lot of mount options that you can use to configure the Btrfs filesystem’s behavior while mounting the filesystem.

This article will show you how to mount a Btrfs filesystem with your desired mount options. I will explain some of the useful Btrfs mount options as well. So, let’s get started.

Abbreviations

ACL – Access Control List
RAID – Redundant Array of Independent/Inexpensive Disks
UUID – Universally Unique Identifier

Where to Put Btrfs Mount Options

You can mount a Btrfs filesystem using the mount command-line program or the /etc/fstab file at boot time. You can configure the behavior of the Btrfs filesystem using mount options. In this section, I am going to show you how to mount a Btrfs filesystem using different mount options:

  1. from the command-line.
  2. using the /etc/fstab file.

From the command-line, you can mount a Btrfs filesystem (created on the sdb storage device) on the /data directory with the mount options option1, option2, option3, etc. as follows:

$ sudo mount -o option1,option2,option3,… /dev/sdb /data

To mount the same Btrfs filesystem at boot time using the /etc/fstab file, you need to find the UUID of the Btrfs filesystem.

You can find the UUID of the Btrfs filesystem with the following command:

$ sudo blkid --match-token TYPE=btrfs

As you can see, the UUID of the Btrfs filesystem created on the sdb storage device is c69a889a-8fd2-4571-bd97-a3c2e4543b6b.

Open the /etc/fstab file with the following command:

$ sudo nano /etc/fstab

To automatically mount the Btrfs filesystem that has the UUID c69a889a-8fd2-4571-bd97-a3c2e4543b6b on the /data directory with the mount options option1,option2,option3, etc., add the following line at the end of the /etc/fstab file.

UUID=c69a889a-8fd2-4571-bd97-a3c2e4543b6b            /data    btrfs     option1,option2,option3,…        0          0

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/fstab file.

Your Btrfs filesystem should be mounted with your desired mount options.

Important Btrfs Mount Options

In this section, I am going to explain some of the important Btrfs mount options. So, let’s get started.

The most important Btrfs mount options are:

1. acl and noacl
ACL manages user and group permissions for the files/directories of the Btrfs filesystem.

The acl Btrfs mount option enables ACL. To disable ACL, you can use the noacl mount option.

By default, ACL is enabled. So, the Btrfs filesystem uses the acl mount option by default.
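
For example, to mount the Btrfs filesystem from the earlier example (created on the sdb storage device) on the /data directory with ACL disabled, you could run a command like this:

$ sudo mount -o noacl /dev/sdb /data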

2. autodefrag and noautodefrag
Defragmenting a Btrfs filesystem will improve the filesystem’s performance by reducing data fragmentation.

The autodefrag mount option enables automatic defragmentation of the Btrfs filesystem.

The noautodefrag mount option disables automatic defragmentation of the Btrfs filesystem.

By default, automatic defragmentation is disabled. So, the Btrfs filesystem uses the noautodefrag mount option by default.

3. compress and compress-force
Controls the filesystem-level data compression of the Btrfs filesystem.

The compress option compresses only the files that are worth compressing (if compressing the file saves disk space).

The compress-force option compresses every file of the Btrfs filesystem even if compressing the file increases its size.

The Btrfs filesystem supports many compression algorithms, and each compression algorithm has a different set of compression levels.

The compression algorithms supported by Btrfs are: lzo, zlib (levels 1 to 9), and zstd (levels 1 to 15).

You can specify what compression algorithm to use for the Btrfs filesystem with one of the following mount options:

  • compress=algorithm:level
  • compress-force=algorithm:level

For more information, check my article How to Enable Btrfs Filesystem Compression.
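
As a quick illustration, a command that mounts the example Btrfs filesystem with zstd compression at level 3 could look like this (the device and mount point are the same ones used earlier in this article):

$ sudo mount -o compress=zstd:3 /dev/sdb /data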

4. subvol and subvolid
These mount options are used to separately mount a specific subvolume of a Btrfs filesystem.

The subvol mount option is used to mount the subvolume of a Btrfs filesystem using its relative path.

The subvolid mount option is used to mount the subvolume of a Btrfs filesystem using the ID of the subvolume.

For more information, check my article How to Create and Mount Btrfs Subvolumes.
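
For example, assuming the example Btrfs filesystem on the sdb storage device contains a subvolume named myvol (a hypothetical name), you could mount just that subvolume on the /data directory like this:

$ sudo mount -o subvol=myvol /dev/sdb /data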

5. device
The device mount option is used in multi-device Btrfs filesystems or Btrfs RAID.

In some cases, the operating system may fail to detect the storage devices used in a multi-device Btrfs filesystem or Btrfs RAID. In such cases, you can use the device mount option to specify the devices that you want to use for the Btrfs multi-device filesystem or RAID.

You can use the device mount option multiple times to load different storage devices for the Btrfs multi-device filesystem or RAID.

You can use the device name (i.e., sdb, sdc) or UUID, UUID_SUB, or PARTUUID of the storage device with the device mount option to identify the storage device.

For example,

  • device=/dev/sdb
  • device=/dev/sdb,device=/dev/sdc
  • device=UUID_SUB=490a263d-eb9a-4558-931e-998d4d080c5d
  • device=UUID_SUB=490a263d-eb9a-4558-931e-998d4d080c5d,device=UUID_SUB=f7ce4875-0874-436a-b47d-3edef66d3424

6. degraded
The degraded mount option allows a Btrfs RAID to be mounted with fewer storage devices than the RAID profile requires.

For example, the raid1 profile requires 2 storage devices to be present. If one of the storage devices is unavailable for any reason, you can use the degraded mount option to mount the RAID even though only 1 out of the 2 storage devices is available.
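
For instance, if one device of a two-device raid1 Btrfs filesystem is missing, you could mount the filesystem from a surviving device (let’s say, sdb) in degraded mode like this:

$ sudo mount -o degraded /dev/sdb /data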

7. commit
The commit mount option is used to set the interval (in seconds) within which the data will be written to the storage device.

The default is set to 30 seconds.

To set the commit interval to, let’s say, 15 seconds, you can use the mount option commit=15.
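
So, a full mount command with a 15-second commit interval (using the same example device and mount point) could be:

$ sudo mount -o commit=15 /dev/sdb /data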

8. ssd and nossd
The ssd mount option tells the Btrfs filesystem that the filesystem is using an SSD storage device, and the Btrfs filesystem does the necessary SSD optimization.

The nossd mount option disables SSD optimization.

The Btrfs filesystem automatically detects whether an SSD is used for the Btrfs filesystem. If an SSD is used, the ssd mount option is enabled. Otherwise, the nossd mount option is enabled.

9. ssd_spread and nossd_spread
The ssd_spread mount option tries to allocate big continuous chunks of unused space from the SSD. This feature improves the performance of low-end (cheap) SSDs.

The nossd_spread mount option disables the ssd_spread feature.

The Btrfs filesystem automatically detects whether an SSD is used for the Btrfs filesystem. If an SSD is used, the ssd_spread mount option is enabled. Otherwise, the nossd_spread mount option is enabled.

10. discard and nodiscard
If you’re using an SSD that supports asynchronous queued TRIM (SATA rev3.1), then the discard mount option will enable the discarding of freed file blocks. This will improve the performance of the SSD.

If the SSD does not support asynchronous queued TRIM, then the discard mount option will degrade the SSD’s performance. In that case, the nodiscard mount option should be used.

By default, the nodiscard mount option is used.

11. norecovery
If the norecovery mount option is used, the Btrfs filesystem will not try to perform the data recovery operation at mount time.

12. usebackuproot and nousebackuproot
If the usebackuproot mount option is used, the Btrfs filesystem will try to recover any bad/corrupted tree root at mount time. The Btrfs filesystem may store multiple tree roots in the filesystem. The usebackuproot mount option will scan for a good tree root and use the first good one it finds.

The nousebackuproot mount option will not check or recover bad/corrupted tree roots at mount time. This is the default behavior of the Btrfs filesystem.

13. space_cache, space_cache=version, nospace_cache, and clear_cache
The space_cache mount option is used to control the free space cache. Free space cache is used to improve the performance of reading the block group free space of the Btrfs filesystem into memory (RAM).

The Btrfs filesystem supports 2 versions of the free space cache: v1 (the default) and v2.

The v2 free space caching mechanism improves the performance of big filesystems (multiple terabytes in size).

You can use the mount option space_cache=v1 to set the v1 of the free space cache and the mount option space_cache=v2 to set the v2 of the free space cache.

The clear_cache mount option is used to clear the free space cache.

Once the v2 free space cache has been created, the cache must be cleared if you want to go back to the v1 free space cache.

So, to use the v1 free space cache after the v2 free space cache is created, the clear_cache and space_cache=v1 mount options must be combined: clear_cache,space_cache=v1

The nospace_cache mount option is used to disable free space caching.

To disable free space caching after the v1 or v2 cache is created, the nospace_cache and clear_cache mount options must be combined: clear_cache,nospace_cache
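
For example, to disable free space caching on the example Btrfs filesystem and clear any existing cache at the same time, you could mount it like this:

$ sudo mount -o clear_cache,nospace_cache /dev/sdb /data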

14. skip_balance
By default, interrupted/paused balance operation of a multi-device Btrfs filesystem or Btrfs RAID will be automatically resumed once the Btrfs filesystem is mounted. To disable automatic resuming of interrupted/paused balance operation on a multi-device Btrfs filesystem or Btrfs RAID, you can use the skip_balance mount option.

15. datacow and nodatacow
The datacow mount option enables the Copy-on-Write (CoW) feature of the Btrfs filesystem. It is the default behavior.

If you want to disable the Copy-on-Write (CoW) feature of the Btrfs filesystem for the newly created files, mount the Btrfs filesystem with the nodatacow mount option.

16. datasum and nodatasum
The datasum mount option enables data checksumming for newly created files of the Btrfs filesystem. This is the default behavior.

If you don’t want the Btrfs filesystem to checksum the data for newly created files, mount the Btrfs filesystem with the nodatasum mount option.

Conclusion

This article has shown you how to mount a Btrfs filesystem with your desired mount options. I have explained some of the useful Btrfs mount options as well.

References

[1] The Btrfs Mount Options Manpage – man 5 btrfs

]]>
How to Encrypt a Btrfs Filesystem? https://linuxhint.com/encrypt-a-btrfs-filesystem/ Sat, 30 Jan 2021 16:31:52 +0000 https://linuxhint.com/?p=88201

Filesystem-level encryption is still not available in Btrfs. But you can use a third-party encryption tool like dm-crypt to encrypt the entire storage devices used for your Btrfs filesystem.

In this article, I am going to show you how to encrypt the storage devices added to a Btrfs filesystem with dm-crypt. So, let’s get started.

Abbreviations

  • LUKS – Linux Unified Key Setup
  • HDD – Hard Disk Drive
  • SSD – Solid-State Drive

Prerequisites

To follow this article:

  • You must be running either Fedora 33 Workstation or Ubuntu 20.04 LTS Linux distribution on your computer.
  • You must have a free HDD/SSD on your computer.

As you can see, I have an HDD sdb on my Ubuntu 20.04 LTS machine. I will encrypt it and format it with the Btrfs filesystem.

$ sudo lsblk -e7

Installing Required Packages on Ubuntu 20.04 LTS

To encrypt storage devices and format them with the Btrfs filesystem, you need to have the btrfs-progs and cryptsetup packages installed on your Ubuntu 20.04 LTS machine. Luckily, these packages are available in the official package repository of Ubuntu 20.04 LTS.

First, update the APT package repository cache with the following command:

$ sudo apt update


To install btrfs-progs and cryptsetup, run the following command:

$ sudo apt install btrfs-progs cryptsetup --install-suggests


To confirm the installation, press Y and then press <Enter>.


The btrfs-progs and cryptsetup packages and their dependencies are being installed.


The btrfs-progs and cryptsetup packages should be installed at this point.

Installing Required Packages on Fedora 33

To encrypt storage devices and format them with the Btrfs filesystem, you need to have the btrfs-progs and cryptsetup packages installed on your Fedora 33 Workstation machine. Luckily, these packages are available in the official package repository of Fedora 33 Workstation.

First, update the DNF package repository cache with the following command:

$ sudo dnf makecache


To install btrfs-progs and cryptsetup, run the following command:

$ sudo dnf install btrfs-progs cryptsetup -y


Fedora 33 Workstation uses the Btrfs filesystem by default. So, it’s more likely that you will have these packages installed already, as you can see in the screenshot below. If for some reason, they are not installed, they will be installed.

Generating an Encryption Key

Before you can encrypt your storage devices with cryptsetup, you need to generate a 64-byte-long random key.

You can generate your encryption key and store it in the /etc/cryptkey file with the following command:

$ sudo dd if=/dev/urandom of=/etc/cryptkey bs=64 count=1


A new encryption key should be generated and stored in the /etc/cryptkey file.


The encryption key file /etc/cryptkey can be read by everyone by default, as you can see in the screenshot below. This is a security risk. We want only the root user to be able to read/write to the /etc/cryptkey file.

$ ls -lh /etc/cryptkey


To allow only the root user to read/write to the /etc/cryptkey file, change the file permissions as follows:

$ sudo chmod -v 600 /etc/cryptkey


As you can see, only the root user has read/write (rw) permission to the /etc/cryptkey file. So, no one else can see what’s in the /etc/cryptkey file.

$ ls -lh /etc/cryptkey

Encrypting the Storage Devices with dm-crypt

Now that you have generated an encryption key, you can encrypt your storage device (let’s say, sdb) with the LUKS v2 (version 2) disk encryption technology as follows:

$ sudo cryptsetup -v --type luks2 luksFormat /dev/sdb /etc/cryptkey

cryptsetup will prompt you to confirm the encryption operation.

NOTE: All the data on your HDD/SSD will be removed. So, make sure to move all of your important data elsewhere before you attempt to encrypt your HDD/SSD.


To confirm the disk encryption operation, type in YES (in uppercase) and press <Enter>. It may take a while to complete.


At this point, the storage device /dev/sdb should be encrypted with the encryption key /etc/cryptkey.
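
If you want to double-check that the LUKS 2 header has been written to the storage device, you can inspect it with the luksDump subcommand of cryptsetup (this step is optional):

$ sudo cryptsetup luksDump /dev/sdb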

Opening Encrypted Storage Devices

Once you’ve encrypted a storage device with cryptsetup, you need to open it with the cryptsetup tool to be able to use it.

You can open the encrypted storage device sdb and map it to your computer as a data storage device as follows:

$ sudo cryptsetup open --key-file=/etc/cryptkey --type luks2 /dev/sdb data


Now, the decrypted storage device will be available in the path /dev/mapper/data. You have to create your desired filesystem in the /dev/mapper/data device and mount the /dev/mapper/data device instead of /dev/sdb from now on.

Creating a Btrfs Filesystem on Encrypted Devices

To create a Btrfs filesystem on the decrypted storage device /dev/mapper/data with the label data, run the following command:

$ sudo mkfs.btrfs -L data /dev/mapper/data


A Btrfs filesystem should be created on the /dev/mapper/data storage device, which is decrypted from the storage device /dev/sdb (encrypted with LUKS 2).

Mounting Encrypted Btrfs Filesystem

You can mount the Btrfs filesystem you have created earlier as well.

Let’s say, you want to mount the Btrfs filesystem you’ve created earlier in the /data directory.

So, create the /data directory as follows:

$ sudo mkdir -v /data


To mount the Btrfs filesystem created on the /dev/mapper/data storage device in the /data directory, run the following command:

$ sudo mount /dev/mapper/data /data


As you can see, the Btrfs filesystem created on the encrypted storage device sdb is mounted in the /data directory.

$ sudo btrfs filesystem show /data

Automatically Mounting Encrypted Btrfs Filesystem at Boot-Time

You can mount the encrypted Btrfs filesystem at boot time as well.

To mount the encrypted Btrfs filesystem at boot time, you need to:

  • decrypt the storage device /dev/sdb at boot time using the /etc/cryptkey encryption key file
  • mount the decrypted storage device /dev/mapper/data to the /data directory

First, find the UUID of the sdb encrypted storage device with the following command:

$ sudo blkid /dev/sdb


As you can see, the UUID of the sdb encrypted storage device is 1c66b0de-b2a3-4d28-81c5-81950434f972. It will be different for you. So, make sure to change it with yours from now on.


To automatically decrypt the sdb storage device at boot time, you have to add an entry for it on the /etc/crypttab file.

Open the /etc/crypttab file with the nano text editor as follows:

$ sudo nano /etc/crypttab


Add the following line at the end of the /etc/crypttab file if you’re using an HDD.

data UUID=1c66b0de-b2a3-4d28-81c5-81950434f972 /etc/cryptkey luks,noearly

Add the following line at the end of the /etc/crypttab file if you’re using an SSD.

data UUID=1c66b0de-b2a3-4d28-81c5-81950434f972 /etc/cryptkey luks,noearly,discard

Once you’re done, press <Ctrl> + X, followed by Y, and <Enter> to save the /etc/crypttab file.


Now, find the UUID of the decrypted /dev/mapper/data storage device with the following command:

$ sudo blkid /dev/mapper/data


As you can see, the UUID of the /dev/mapper/data decrypted storage device is dafd9d61-bdc9-446a-8b0c-aa209bfab98d. It will be different for you. So, make sure to change it with yours from now on.


To automatically mount the decrypted storage device /dev/mapper/data in the /data directory at boot time, you have to add an entry for it on the /etc/fstab file.

Open the /etc/fstab file with the nano text editor as follows:

$ sudo nano /etc/fstab


Now, add the following line at the end of the /etc/fstab file:

UUID=dafd9d61-bdc9-446a-8b0c-aa209bfab98d /data btrfs defaults 0 0

Once you’re done, press <Ctrl> + X, followed by Y, and <Enter> to save the /etc/fstab file.


Finally, reboot your computer for the changes to take effect.

$ sudo reboot


The encrypted storage device sdb is decrypted into a data storage device, and the data storage device is mounted in the /data directory.

$ sudo lsblk -e7


As you can see, the Btrfs filesystem, which was created on the decrypted /dev/mapper/data storage device is mounted in the /data directory.

$ sudo btrfs filesystem show /data

Conclusion

In this article, I have shown you how to encrypt a storage device using the LUKS 2 encryption technology with cryptsetup. You have also learned how to open (decrypt) the encrypted storage device and format it with the Btrfs filesystem, as well as how to automatically decrypt and mount the encrypted storage device at boot time. This article should help you get started with Btrfs filesystem encryption.

]]>
How to Use Btrfs Balance? https://linuxhint.com/how-to-use-btrfs-balance/ Sun, 24 Jan 2021 15:22:36 +0000 https://linuxhint.com/?p=87342 The Btrfs filesystem has built-in multi-device support, so you can create different levels of RAID using it.

Once you’ve created a Btrfs RAID, you can add more storage devices to the RAID to expand the RAID. But, once you have added more storage devices to the RAID, Btrfs won’t spread the existing data/metadata/system-data to the new storage devices automatically. So, you may not get the desired throughput (read/write speed) out of the RAID, and it may not be able to populate the new storage devices with the required redundant data. So, the RAID array may fail to survive the desired number of drive failures.

To solve these problems, the Btrfs filesystem provides a built-in balancing tool. The Btrfs balance utility will spread the data/metadata/system-data of the existing storage devices of the RAID to the newly added storage devices.

In this article, I am going to show you how to use the Btrfs balance utility to spread the data/metadata/system-data of the existing storage devices of the RAID to the newly added storage devices. So, let’s get started!

Abbreviations

RAID – Redundant Array of Inexpensive/Independent Disks
MB – Megabyte
GB – Gigabyte

Prerequisites

To follow this article, you need to have a working Btrfs RAID or multi-device setup.

I have created a Btrfs RAID in RAID-0 configuration using 4 storage devices sdb, sdc, sdd, and sde.

As you can see, the Btrfs filesystem allocated 1 GB of disk space for data, 256 MB of disk space for metadata, and 4 MB of disk space for system-data from each of the storage devices in the RAID.

About 18.75 GB out of 20 GB is still unallocated on each of the storage devices of the RAID.

$ sudo btrfs filesystem usage /data

Writing a Script to Generate Random Files

To show you how the Btrfs balance utility works, we need to generate some random files to fill up the Btrfs filesystem. Let’s create a shell script that does just that.

Create a new shell script genfiles.sh in the /usr/local/bin/ directory as follows:

$ sudo nano /usr/local/bin/genfiles.sh

Type in the following lines of codes in the genfiles.sh shell script.

#!/bin/bash
while true
do
    FILENAME=$(uuidgen)
    echo "[Creating] $FILENAME"
    dd if=/dev/random of=$FILENAME bs=1M count=256 status=progress
    echo "[Created] $FILENAME"
done

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the genfiles.sh shell script.

The genfiles.sh shell script runs an infinite while loop.

while true
do
    # other codes
done

The following line generates a UUID using the uuidgen command and stores the UUID in the FILENAME variable.
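
FILENAME=$(uuidgen)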

The following line prints a message on the console before the file FILENAME is generated.
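
echo "[Creating] $FILENAME"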

The following line generates a new random file FILENAME using the dd command. The file will be 256 MB in size.
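
dd if=/dev/random of=$FILENAME bs=1M count=256 status=progress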

The following line prints a message on the console after the file FILENAME is generated.
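
echo "[Created] $FILENAME"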

Add execute permission to the genfiles.sh shell script as follows:

$ sudo chmod +x /usr/local/bin/genfiles.sh

The genfiles.sh shell script should now be accessible like any other command.

$ which genfiles.sh

Generating Random Files in the Btrfs Filesystem

We want to generate random files in the Btrfs RAID. Let’s say, the Btrfs RAID is mounted on the /data directory.

Navigate to the /data directory where the Btrfs RAID is mounted as follows:

$ cd /data

As you can see, there are no files available in my Btrfs RAID at the moment.

$ ls -lh

To generate some random files in the current working directory (/data directory in this case), run the genfiles.sh shell script as follows:

$ sudo genfiles.sh

The genfiles.sh shell script should start generating random files in the /data directory.

The genfiles.sh script is generating random files. Let the script run for a couple of minutes, so it fills up about 2-3 GB of disk space of the Btrfs RAID.

When you want to stop the genfiles.sh shell script, press <Ctrl> + C.

As you can see, some random files are generated in the Btrfs RAID.

$ ls -lh


As you can see, the Btrfs RAID allocated 2 GB from each of the storage devices added to the RAID. Previously the Btrfs RAID allocated 1 GB from each of the storage devices added to the RAID.

The unallocated disk space has been reduced from 18.75 GB to 17.75 GB in all the storage devices of the RAID.

$ sudo btrfs filesystem usage /data

Adding Another Storage Device to the Btrfs RAID

To show you how to balance a Btrfs RAID after adding a new storage device, you have to add a new storage device to it.

I have added a new HDD sdf to my computer, which I want to add to the Btrfs RAID mounted on the /data directory. Let’s see how to do it.

$ sudo lsblk -e7

Navigate to a different directory (i.e., HOME directory) from the /data directory as follows:

$ cd

To add the storage device sdf to the Btrfs RAID mounted on the /data directory, run the following command:

$ sudo btrfs device add /dev/sdf /data

As you can see, the storage device sdf is added to the Btrfs RAID. The RAID size has increased from 80 GB to 100 GB.

$ sudo btrfs filesystem usage /data

Balancing the Btrfs RAID

As you can see, the newly added storage device (sdf) of the RAID (mounted on the /data directory) has 20 GB unallocated, and the other storage devices (sdb, sdc, sdd, and sde) have 17.75 GB unallocated.

$ sudo btrfs filesystem usage /data

The data, metadata, and system-data are only available on the existing storage devices of the RAID, not on the newly added storage device.

To spread out the data, metadata, and system-data on all the storage devices of the RAID (including the newly added storage device) mounted on the /data directory, run the following command:

$ sudo btrfs balance start --full-balance /data

It may take a while to spread out the data, metadata, and system-data on all the storage devices of the RAID if it contains a lot of data.
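
If you want to check the progress of a running balance operation, you can run the following command from another terminal (using the same /data mount point as in this example):

$ sudo btrfs balance status /data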

Once the storage devices of the RAID are properly balanced, you should see the following message.

As you can see, after the balance operation is completed, the newly added storage device has an equal amount of unallocated disk space as the other storage devices of the RAID.

After the balance operation, the newly added storage device (sdf) of the RAID has the same amount of disk space allocated for data, metadata, and system-data as the other storage devices of the RAID.

Conclusion

In this article, I have discussed the purpose of the Btrfs balance utility, as well as how to balance a Btrfs RAID or multi-device filesystem after adding new storage devices to the RAID or multi-device filesystem.

]]>
How to Use Btrfs Scrub? https://linuxhint.com/how-to-use-btrfs-scrub/ Sun, 24 Jan 2021 15:07:21 +0000 https://linuxhint.com/?p=87408 The Btrfs filesystem is a multi-device filesystem that has built-in support for RAID. In a multi-device Btrfs filesystem or RAID, the data/metadata blocks may be stored in one or more storage devices. The Btrfs scrub tool will read all the data/metadata blocks from all the storage devices added to a Btrfs filesystem or RAID and find all the corrupted data/metadata blocks. Once the corrupted data/metadata blocks are found, the Btrfs scrub tool will automatically repair those corrupted data/metadata blocks if possible.

In a multi-device Btrfs filesystem or Btrfs RAID, depending on the filesystem configuration, there may be multiple copies of the data/metadata blocks stored in different locations of the storage devices added to the Btrfs filesystem. When the Btrfs scrub tool finds a corrupted data/metadata block, it searches all the storage devices added to the Btrfs filesystem for duplicate copies of that data/metadata block. Once a duplicate copy of that data/metadata block is found, the corrupted data/metadata block is overwritten with the correct data/metadata block. This is how the Btrfs scrub tool repairs corrupted data/metadata blocks in a multi-device Btrfs filesystem or Btrfs RAID.

In this article, I am going to show you how to use the Btrfs scrub tool to find and repair corrupted data/metadata blocks in a multi-device Btrfs filesystem or Btrfs RAID. So, let’s get started.

Abbreviations

RAID – Redundant Array of Inexpensive/Independent Disks
GB – Gigabyte

Prerequisites

To follow this article, you need to have a working multi-device Btrfs filesystem or a Btrfs RAID.

I have created a Btrfs RAID in RAID-1 configuration (mounted on the /data directory) using 4 storage devices sdb, sdc, sdd, and sde, as you can see in the screenshot below. I will be using this Btrfs RAID for the Btrfs scrub demonstration in this article.

$ sudo btrfs filesystem usage /data

If you need any assistance on installing the Btrfs filesystem on Ubuntu, check my article Install and Use Btrfs on Ubuntu 20.04 LTS.

If you need any assistance on installing the Btrfs filesystem on Fedora, check my article Install and Use Btrfs on Fedora 33.

If you need any assistance in creating a Btrfs RAID, check my article How to Setup Btrfs RAID.

Generating Dummy Files on the Btrfs Filesystem

To show you how the Btrfs scrub tool works, we need to generate some random files to fill up the Btrfs filesystem. Let’s create a shell script that does just that.

Create a new shell script genfiles.sh in the /usr/local/bin/ directory as follows:

$ sudo nano /usr/local/bin/genfiles.sh

Type in the following lines of codes in the genfiles.sh shell script.

#!/bin/bash
while true
do
    FILENAME=$(uuidgen)
    echo "[Creating] $FILENAME"
    dd if=/dev/random of=$FILENAME bs=1M count=256 status=progress
    echo "[Created] $FILENAME"
done

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the genfiles.sh shell script.

The genfiles.sh shell script runs an infinite while loop.

while true
do
    # other codes
done

The following line generates a UUID using the uuidgen command and stores the UUID in the FILENAME variable.
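
FILENAME=$(uuidgen)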

The following line prints a message on the console before the file FILENAME is generated.
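
echo "[Creating] $FILENAME"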

The following line generates a new random file FILENAME using the dd command. The file will be 256 MB in size.
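
dd if=/dev/random of=$FILENAME bs=1M count=256 status=progress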

The following line prints a message on the console after the file FILENAME is generated.
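
echo "[Created] $FILENAME"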

Add execute permission to the genfiles.sh shell script as follows:

$ sudo chmod +x /usr/local/bin/genfiles.sh

The genfiles.sh shell script should now be accessible like any other command.

$ which genfiles.sh

Let’s generate some random files in the Btrfs RAID mounted in the /data directory.

Navigate to the /data directory where the Btrfs RAID is mounted as follows:

$ cd /data

As you can see, there are no files available in my Btrfs RAID at the moment.

$ ls -lh

To generate some random files in the current working directory (/data directory in this case), run the genfiles.sh shell script as follows:

$ sudo genfiles.sh

The genfiles.sh shell script should start generating random files in the /data directory.

The genfiles.sh script is generating random files. Let the script run for a couple of minutes, so it fills up about 2-3 GB of disk space of the Btrfs RAID.

When you want to stop the genfiles.sh shell script, press <Ctrl> + C.

As you can see, some random files are generated in the Btrfs RAID.

$ ls -lh

I have generated about 13 GB of random files in the Btrfs RAID mounted in the /data directory, as you can see in the screenshot below.

$ sudo du -sh /data

Working with the Btrfs Scrub Tool

In this section, I am going to show you how to use the Btrfs scrub tool. Let’s get started.

You can start the scrub process on the Btrfs filesystem mounted on the /data directory with the following command:

$ sudo btrfs scrub start /data

A Btrfs scrub process should be started on the Btrfs filesystem mounted on the /data directory.

You can see the status of the Btrfs scrub process running on the Btrfs filesystem mounted on the /data directory as follows:

$ sudo btrfs scrub status /data

As you can see, the Btrfs scrub process is still running.

Scrubbing a Btrfs filesystem or Btrfs RAID that has a lot of files will take a long time to complete.

Once the Btrfs scrub process is complete, the status should be changed to finished, as you can see in the screenshot below.

$ sudo btrfs scrub status /data

You can also see the Btrfs scrub status for each of the storage devices added to the Btrfs filesystem (mounted in the /data directory) separately as follows:

$ sudo btrfs scrub status -d /data

As I have mentioned, the Btrfs scrub process takes a long time to complete on a big Btrfs filesystem. One big advantage of the Btrfs scrub tool is that the scrub process can be paused and resumed at any time.

Let’s see how to pause and resume a Btrfs scrub process.

First, start a new Btrfs scrub process on the Btrfs filesystem mounted in the /data directory as follows:

$ sudo btrfs scrub start /data

To cancel or pause the Btrfs scrub process that is currently running on the Btrfs filesystem mounted on the /data directory, run the following command:

$ sudo btrfs scrub cancel /data

The running Btrfs scrub process should be canceled or paused.

As you can see, the Btrfs scrub status is aborted. So, the Btrfs scrub process is not running anymore.

$ sudo btrfs scrub status /data

To resume the Btrfs scrub process that you’ve canceled or paused, run the following command:

$ sudo btrfs scrub resume /data

The Btrfs scrub process should be resumed.

As you can see, the Btrfs scrub status is now running. So, the Btrfs scrub process is resumed.

$ sudo btrfs scrub status /data

After the Btrfs scrub process is complete, the Btrfs scrub status should be changed to finished.

$ sudo btrfs scrub status /data

Conclusion

In this article, I have shown you how to work with the Btrfs scrub tool to find and fix corrupted data/metadata blocks of a Btrfs multi-device filesystem or RAID. I have shown you how to cancel/pause and resume a Btrfs scrub process once it’s started as well.

]]>
How to Set Up Btrfs RAID https://linuxhint.com/set-up-btrfs-raid/ Mon, 18 Jan 2021 16:09:35 +0000 https://linuxhint.com/?p=86014 Btrfs is a modern Copy-on-Write (CoW) filesystem with built-in RAID support. So, you do not need any third-party tools to create software RAIDs on a Btrfs filesystem.

The Btrfs filesystem keeps the filesystem metadata and data separately. You can use different RAID levels for the data and metadata at the same time. This is a major advantage of the Btrfs filesystem.

This article shows you how to set up Btrfs RAIDs in the RAID-0, RAID-1, RAID-1C3, RAID-1C4, RAID-10, RAID-5, and RAID-6 configurations.

Abbreviations

  • Btrfs – B-tree Filesystem
  • RAID – Redundant Array of Inexpensive Disks/Redundant Array of Independent Disks
  • GB – Gigabyte
  • TB – Terabyte
  • HDD – Hard Disk Drive
  • SSD – Solid-State Drive

Prerequisites

To try out the examples included in this article:

  • You must have the Btrfs filesystem installed on your computer.
  • You will need at least four same-capacity HDDs/SSDs to try out the different RAID configurations.

In my Ubuntu machine, I have added four HDDs (sdb, sdc, sdd, sde). Each of them is 20 GB in size.

$ sudo lsblk -e7

Note: Your HDDs/SSDs may have different names than mine. So, be sure to replace them with yours from now on.


For assistance with installing the Btrfs filesystem in Ubuntu, check out the article Install and Use Btrfs on Ubuntu 20.04 LTS.

For assistance with installing the Btrfs filesystem in Fedora, check out the article Install and Use Btrfs on Fedora 33.

Btrfs Profiles

A Btrfs profile is used to tell the Btrfs filesystem how many copies of the data/metadata to keep and what RAID levels to use for the data/metadata. The Btrfs filesystem contains many profiles. Understanding them will help you to configure a Btrfs RAID just the way you want.

The available Btrfs profiles are as follows:

single: If the single profile is used for the data/metadata, only one copy of the data/metadata will be stored in the filesystem, even if you add multiple storage devices to the filesystem. So, 100% of the disk space of each of the storage devices added to the filesystem can be utilized.

dup: If the dup profile is used for the data/metadata, each of the storage devices added to the filesystem will keep two copies of the data/metadata. So, 50% of the disk space of each of the storage devices added to the filesystem can be utilized.

raid0: In the raid0 profile, the data/metadata will be split evenly across all the storage devices added to the filesystem. In this setup, there will be no redundant (duplicate) data/metadata. So, 100% of the disk space of each of the storage devices added to the filesystem can be used. If any one of the storage devices fails, the entire filesystem will be corrupted. You will need at least two storage devices to set up the Btrfs filesystem in the raid0 profile.

raid1: In the raid1 profile, two copies of the data/metadata will be stored in the storage devices added to the filesystem. In this setup, the RAID array can survive one drive failure. But, you can use only 50% of the total disk space. You will need at least two storage devices to set up the Btrfs filesystem in the raid1 profile.

raid1c3: In the raid1c3 profile, three copies of the data/metadata will be stored in the storage devices added to the filesystem. In this setup, the RAID array can survive two drive failures, but you can use only 33% of the total disk space. You will need at least three storage devices to set up the Btrfs filesystem in the raid1c3 profile.

raid1c4: In the raid1c4 profile, four copies of the data/metadata will be stored in the storage devices added to the filesystem. In this setup, the RAID array can survive three drive failures, but you can use only 25% of the total disk space. You will need at least four storage devices to set up the Btrfs filesystem in the raid1c4 profile.

raid10: In the raid10 profile, two copies of the data/metadata will be stored in the storage devices added to the filesystem, as in the raid1 profile. Also, the data/metadata will be split across the storage devices, as in the raid0 profile.

The raid10 profile is a hybrid of the raid1 and raid0 profiles. Some of the storage devices form raid1 arrays and some of these raid1 arrays are used to form a raid0 array. In a raid10 setup, the filesystem can survive a single drive failure in each of the raid1 arrays.

You can use 50% of the total disk space in the raid10 configuration. You will need at least four storage devices to set up the Btrfs filesystem in the raid10 profile.

raid5: In the raid5 profile, one copy of the data/metadata will be split across the storage devices. A single parity will be calculated and distributed among the storage devices of the RAID array.

In a raid5 configuration, the filesystem can survive a single drive failure. If a drive fails, you can add a new drive to the filesystem and the lost data will be calculated from the distributed parity of the running drives.

You can use 100x(N-1)/N % of the total disk space in the raid5 configuration. Here, N is the number of storage devices added to the filesystem. You will need at least three storage devices to set up the Btrfs filesystem in the raid5 profile.
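
For example, with the four 20 GB HDDs used in this article (N = 4), about 100x(4-1)/4 = 75% of the total 80 GB, i.e., roughly 60 GB, would be usable in the raid5 configuration.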

raid6: In the raid6 profile, one copy of the data/metadata will be split across the storage devices. Two parities will be calculated and distributed among the storage devices of the RAID array.

In a raid6 configuration, the filesystem can survive two drive failures at once. If a drive fails, you can add a new drive to the filesystem, and the lost data will be calculated from the two distributed parities of the running drives.

You can use 100x(N-2)/N % of the total disk space in the raid6 configuration. Here, N is the number of storage devices added to the filesystem. You will need at least four storage devices to set up the Btrfs filesystem in the raid6 profile.
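
Similarly, with the same four 20 GB HDDs (N = 4), about 100x(4-2)/4 = 50% of the total 80 GB, i.e., roughly 40 GB, would be usable in the raid6 configuration.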

Creating a Mount Point

You need to create a directory to mount the Btrfs filesystem that you will create in the next sections of this article.

To create the directory/mount point /data, run the following command:

$ sudo mkdir -v /data

Setting Up RAID-0

In this section, you will learn how to set up a Btrfs RAID in the RAID-0 configuration using four HDDs (sdb, sdc, sdd, and sde). The HDDs are 20 GB in size.

$ sudo lsblk -e7

To create a Btrfs RAID in the RAID-0 configuration using four HDDs (sdb, sdc, sdd, and sde) run the following command:

$ sudo mkfs.btrfs -L data -d raid0 -m raid0 -f /dev/sdb /dev/sdc /dev/sdd /dev/sde

Here,

  • The -L option is used to set the filesystem label data.
  • The -d option is used to set the RAID profile raid0 for the filesystem data.
  • The -m option is used to set the RAID profile raid0 for the filesystem metadata.
  • The -f option is used to force the creation of the Btrfs filesystem, even if any of the HDDs have an existing filesystem.

The Btrfs filesystem data in the RAID-0 configuration should now be created, as you can see in the screenshot below.

You can mount the Btrfs RAID using any HDD/SSD you used to create the RAID.

For example, I used the HDDs sdb, sdc, sdd, and sde to create the Btrfs RAID in the RAID-0 configuration.

So, I can mount the Btrfs filesystem data in the /data directory using the HDD sdb, as follows:

$ sudo mount /dev/sdb /data

As you can see, the Btrfs RAID is mounted in the /data directory.

$ sudo df -h /data

To find the filesystem usage information of the data Btrfs filesystem mounted in the /data directory, run the following command:

$ sudo btrfs filesystem usage /data

As you can see,

The RAID size (Device size) is 80 GB (4×20 GB per HDD).

About 78.98 GB (Free (estimated)) of 80 GB of disk space can be used in the RAID-0 configuration.

Only one copy of the data (Data ratio) and one copy of the metadata (Metadata ratio) will be stored in the Btrfs filesystem in the RAID-0 configuration.

As the Btrfs RAID is working, you can unmount it from the /data directory, as follows:

$ sudo umount /data

Setting Up RAID-1

In this section, you will learn how to set up a Btrfs RAID in the RAID-1 configuration using four HDDs (sdb, sdc, sdd, and sde). The HDDs are 20 GB in size.

$ sudo lsblk -e7

To create a Btrfs RAID in the RAID-1 configuration using four HDDs (sdb, sdc, sdd, and sde), run the following command:

$ sudo mkfs.btrfs -L data -d raid1 -m raid1 -f /dev/sdb /dev/sdc /dev/sdd /dev/sde

Here,

  • The -L option is used to set the filesystem label data.
  • The -d option is used to set the RAID profile raid1 for the filesystem data.
  • The -m option is used to set the RAID profile raid1 for the filesystem metadata.
  • The -f option is used to force the creation of the Btrfs filesystem, even if any of the HDDs have an existing filesystem.

The Btrfs filesystem data in the RAID-1 configuration should now be created, as you can see in the screenshot below.

You can mount the Btrfs RAID using any HDD/SSD you used to create the RAID.

For example, I used the HDDs sdb, sdc, sdd, and sde to create the Btrfs RAID in the RAID-1 configuration.

I can mount the Btrfs filesystem data in the /data directory using the HDD sdb, as follows:

$ sudo mount /dev/sdb /data

As you can see, the Btrfs RAID is mounted in the /data directory.

$ sudo df -h /data

To find the filesystem usage information of the data Btrfs filesystem mounted in the /data directory, run the following command:

$ sudo btrfs filesystem usage /data

As you can see,

The RAID size (Device size) is 80 GB (4×20 GB per HDD).

About 38.99 GB (Free (estimated)) of 80 GB of disk space can be used in the RAID-1 configuration.

In the RAID-1 configuration, two copies of the data (Data ratio) and two copies of the metadata (Metadata ratio) will be stored in the Btrfs filesystem.

As the Btrfs RAID is working, you can unmount it from the /data directory, as follows:

$ sudo umount /data

Setting Up RAID-1C3

In this section, you will learn how to set up a Btrfs RAID in the RAID-1C3 configuration using four HDDs (sdb, sdc, sdd, and sde). The HDDs are 20 GB in size.

$ sudo lsblk -e7

To create a Btrfs RAID in the RAID-1C3 configuration using the four HDDs sdb, sdc, sdd, and sde, run the following command:

$ sudo mkfs.btrfs -L data -d raid1c3 -m raid1c3 -f /dev/sdb /dev/sdc /dev/sdd /dev/sde

Here,

  • The -L option is used to set the filesystem label data.
  • The -d option is used to set the RAID profile raid1c3 for the filesystem data.
  • The -m option is used to set the RAID profile raid1c3 for the filesystem metadata.
  • The -f option is used to force the creation of the Btrfs filesystem, even if any of the HDDs have an existing filesystem.

The Btrfs filesystem data in the RAID-1C3 configuration should now be created, as you can see in the screenshot below.

You can mount the Btrfs RAID using any HDD/SSD you used to create the RAID.

For example, I used the HDDs sdb, sdc, sdd, and sde to create the Btrfs RAID in the RAID-1C3 configuration.

So, I can mount the Btrfs filesystem data in the /data directory using the HDD sdb, as follows:

$ sudo mount /dev/sdb /data

As you can see, the Btrfs RAID is mounted in the /data directory.

$ sudo df -h /data

To find the filesystem usage information of the data Btrfs filesystem mounted in the /data directory, run the following command:

$ sudo btrfs filesystem usage /data

As you can see,

The RAID size (Device size) is 80 GB (4×20 GB per HDD).

About 25.66 GB (Free (estimated)) of 80 GB of disk space can be used in the RAID-1C3 configuration.

In the RAID-1C3 configuration, three copies of the data (Data ratio) and three copies of the metadata (Metadata ratio) will be stored in the Btrfs filesystem.

As the Btrfs RAID is working, you can unmount it from the /data directory, as follows:

$ sudo umount /data

Setting Up RAID-1C4

In this section, you will learn how to set up a Btrfs RAID in the RAID-1C4 configuration using the four HDDs sdb, sdc, sdd, and sde. The HDDs are 20 GB in size.

$ sudo lsblk -e7

To create a Btrfs RAID in the RAID-1C4 configuration using the four HDDs sdb, sdc, sdd, and sde, run the following command:

$ sudo mkfs.btrfs -L data -d raid1c4 -m raid1c4 -f /dev/sdb /dev/sdc /dev/sdd /dev/sde

Here,

  • The -L option is used to set the filesystem label data.
  • The -d option is used to set the RAID profile raid1c4 for the filesystem data.
  • The -m option is used to set the RAID profile raid1c4 for the filesystem metadata.
  • The -f option is used to force the creation of the Btrfs filesystem, even if any of the HDDs have an existing filesystem.

The Btrfs filesystem data in the RAID-1C4 configuration should now be created, as you can see in the screenshot below.

You can mount the Btrfs RAID using any HDD/SSD you used to create the RAID.

For example, I used the HDDs sdb, sdc, sdd, and sde to create the Btrfs RAID in the RAID-1C4 configuration.

So, I can mount the Btrfs filesystem data in the /data directory using the HDD sdb, as follows:

$ sudo mount /dev/sdb /data

As you can see, the Btrfs RAID is mounted in the /data directory.

$ sudo df -h /data

To find the filesystem usage information of the data Btrfs filesystem mounted in the /data directory, run the following command:

$ sudo btrfs filesystem usage /data

As you can see,

The RAID size (Device size) is 80 GB (4×20 GB per HDD).

About 18.99 GB (Free (estimated)) of 80 GB of disk space can be used in the RAID-1C4 configuration.

In the RAID-1C4 configuration, four copies of the data (Data ratio) and four copies of the metadata (Metadata ratio) will be stored in the Btrfs filesystem.

As the Btrfs RAID is working, you can unmount it from the /data directory, as follows:

$ sudo umount /data

Setting Up RAID-10

In this section, you will learn how to set up a Btrfs RAID in the RAID-10 configuration using the four HDDs sdb, sdc, sdd, and sde. The HDDs are 20 GB in size.

$ sudo lsblk -e7

To create a Btrfs RAID in the RAID-10 configuration using the four HDDs sdb, sdc, sdd, and sde, run the following command:

$ sudo mkfs.btrfs -L data -d raid10 -m raid10 -f /dev/sdb /dev/sdc /dev/sdd /dev/sde

Here,

  • The -L option is used to set the filesystem label data.
  • The -d option is used to set the RAID profile raid10 for the filesystem data.
  • The -m option is used to set the RAID profile raid10 for the filesystem metadata.
  • The -f option is used to force the creation of the Btrfs filesystem, even if any of the HDDs have an existing filesystem.

The Btrfs filesystem data in the RAID-10 configuration should now be created, as you can see in the screenshot below.

You can mount the Btrfs RAID using any HDD/SSD you used to create the RAID.

For example, I used the HDDs sdb, sdc, sdd, and sde to create the Btrfs RAID in the RAID-10 configuration.

So, I can mount the Btrfs filesystem data in the /data directory using the HDD sdb, as follows:

$ sudo mount /dev/sdb /data

As you can see, the Btrfs RAID is mounted in the /data directory.

$ sudo df -h /data

To find the filesystem usage information of the data Btrfs filesystem mounted in the /data directory, run the following command:

$ sudo btrfs filesystem usage /data

As you can see,

The RAID size (Device size) is 80 GB (4×20 GB per HDD).

About 39.48 GB (Free (estimated)) of 80 GB of disk space can be used in the RAID-10 configuration.

In the RAID-10 configuration, two copies of the data (Data ratio) and two copies of the metadata (Metadata ratio) will be stored in the Btrfs filesystem.

As the Btrfs RAID is working, you can unmount it from the /data directory, as follows:

$ sudo umount /data

Setting Up RAID-5

In this section, you will learn how to set up a Btrfs RAID in the RAID-5 configuration using the four HDDs sdb, sdc, sdd, and sde. The HDDs are 20 GB in size.

$ sudo lsblk -e7

To create a Btrfs RAID in the RAID-5 configuration using the four HDDs sdb, sdc, sdd, and sde, run the following command:

$ sudo mkfs.btrfs -L data -d raid5 -m raid5 -f /dev/sdb /dev/sdc /dev/sdd /dev/sde

Here,

  • The -L option is used to set the filesystem label data.
  • The -d option is used to set the RAID profile raid5 for the filesystem data.
  • The -m option is used to set the RAID profile raid5 for the filesystem metadata.
  • The -f option is used to force the creation of the Btrfs filesystem, even if any of the HDDs have an existing filesystem.

The Btrfs filesystem data in the RAID-5 configuration should now be created, as you can see in the screenshot below.

You can mount the Btrfs RAID using any HDD/SSD you used to create the RAID.

For example, I used the HDDs sdb, sdc, sdd, and sde to create the Btrfs RAID in the RAID-5 configuration.

So, I can mount the Btrfs filesystem data in the /data directory using the HDD sdb, as follows:

$ sudo mount /dev/sdb /data

As you can see, the Btrfs RAID is mounted in the /data directory.

$ sudo df -h /data

To find the filesystem usage information of the data Btrfs filesystem mounted in the /data directory, run the following command:

$ sudo btrfs filesystem usage /data

As you can see,

The RAID size (Device size) is 80 GB (4×20 GB per HDD).

About 59.24 GB (Free (estimated)) of 80 GB of disk space can be used in the RAID-5 configuration.

In the RAID-5 configuration, 1.33 copies of the data (Data ratio) and 1.33 copies of the metadata (Metadata ratio) will be stored in the Btrfs filesystem.

As the Btrfs RAID is working, you can unmount it from the /data directory, as follows:

$ sudo umount /data

Setting Up RAID-6

In this section, you will learn how to set up a Btrfs RAID in the RAID-6 configuration using the four HDDs sdb, sdc, sdd, and sde. The HDDs are 20 GB in size.

$ sudo lsblk -e7

To create a Btrfs RAID in the RAID-6 configuration using the four HDDs sdb, sdc, sdd, and sde, run the following command:

$ sudo mkfs.btrfs -L data -d raid6 -m raid6 -f /dev/sdb /dev/sdc /dev/sdd /dev/sde

Here,

  • The -L option is used to set the filesystem label data.
  • The -d option is used to set the RAID profile raid6 for the filesystem data.
  • The -m option is used to set the RAID profile raid6 for the filesystem metadata.
  • The -f option is used to force the creation of the Btrfs filesystem, even if any of the HDDs have an existing filesystem.

The Btrfs filesystem data in the RAID-6 configuration should now be created, as you can see in the screenshot below.

You can mount the Btrfs RAID using any HDD/SSD you used to create the RAID.

For example, I used the HDDs sdb, sdc, sdd, and sde to create the Btrfs RAID in the RAID-6 configuration.

So, I can mount the Btrfs filesystem data in the /data directory using the HDD sdb, as follows:

$ sudo mount /dev/sdb /data

As you can see, the Btrfs RAID is mounted in the /data directory.

$ sudo df -h /data

To find the filesystem usage information of the data Btrfs filesystem mounted in the /data directory, run the following command:

$ sudo btrfs filesystem usage /data

As you can see,

The RAID size (Device size) is 80 GB (4×20 GB per HDD).

About 39.48 GB (Free (estimated)) of 80 GB of disk space can be used in the RAID-6 configuration.

In the RAID-6 configuration, two copies of the data (Data ratio) and two copies of the metadata (Metadata ratio) will be stored in the Btrfs filesystem.

As the Btrfs RAID is working, you can unmount it from the /data directory, as follows:

$ sudo umount /data

Problems with Btrfs RAID-5 and RAID-6

The built-in Btrfs RAID-5 and RAID-6 configurations are still experimental. These configurations are very unstable and you should not use them in production.

To prevent data corruption, the Ubuntu operating system did not implement RAID-5 and RAID-6 for the Btrfs filesystem. So, you will not be able to create a Btrfs RAID in the RAID-5 and RAID-6 configurations using the built-in RAID feature of the Btrfs filesystem on Ubuntu. That is why I have shown you how to create a Btrfs RAID in the RAID-5 and RAID-6 configurations in Fedora 33, instead of Ubuntu 20.04 LTS.

Mounting a Btrfs RAID Automatically on Boot

To mount a Btrfs RAID automatically at boot time using the /etc/fstab file, you will need to know the UUID of the Btrfs filesystem.

You can find the UUID of a Btrfs filesystem with the following command:

$ sudo blkid --match-token TYPE=btrfs

As you can see, all the storage devices that were added to the Btrfs filesystem for the RAID share the same filesystem UUID.

In my case, it is c69a889a-8fd2-4571-bd97-a3c2e4543b6b. It will be different for you. So, be sure to replace this UUID with yours from now on.

Now, open the /etc/fstab file with the nano text editor, as follows:

$ sudo nano /etc/fstab

Add the following line to the end of the /etc/fstab file.

UUID=c69a889a-8fd2-4571-bd97-a3c2e4543b6b  /data   btrfs   defaults    0   0

Once you are finished, press <Ctrl> + X followed by Y and <Enter> to save the /etc/fstab file.

For the changes to take effect, restart your computer, as follows:

$ sudo reboot
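
Alternatively, if you do not want to reboot, you can test the new /etc/fstab entry right away. Assuming the Btrfs filesystem is currently unmounted, the following commands will mount everything listed in /etc/fstab and let you confirm the result:

$ sudo mount -a

$ df -h /data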

As you can see, the Btrfs RAID is correctly mounted in the /data directory.

$ df -h /data

As you can see, the Btrfs RAID mounted in the /data directory is working just fine.

$ sudo btrfs filesystem usage /data

Conclusion

This article explained various Btrfs RAID profiles in detail. The article showed you how to set up a Btrfs RAID in the RAID-0, RAID-1, RAID-1C3, RAID-1C4, RAID-10, RAID-5, and RAID-6 configurations. You also learned about some of the problems with the Btrfs RAID-5 and RAID-6 configurations, as well as how to mount the Btrfs RAID automatically at boot time.

How to Defragment a Btrfs Filesystem https://linuxhint.com/defragment-btrfs-filesystem/ Fri, 15 Jan 2021 20:40:50 +0000 https://linuxhint.com/?p=85810

Btrfs is an extent-based Copy-on-Write (CoW) filesystem. Large files are stored in multiple data extents. When these large files are modified, the extents to be modified are copied to new, empty extents in another location of the storage device and are modified there. Then, the extents of the large file are re-linked to point to the updated extents. The old extents are not removed immediately.

This is how the Copy-on-Write (CoW) feature of the Btrfs filesystem causes fragmentation. Fragmentation means that the data extents of large files are scattered around the entire storage device instead of being stored contiguously. So, the performance (read/write speed) of the filesystem may be reduced.

To solve this problem, it is necessary to defragment the Btrfs filesystem every once in a while. This article shows you how to defragment the Btrfs filesystem.

Abbreviations

The abbreviations (short forms) used in this article are as follows:

  • CoW – Copy-on-Write
  • Btrfs – B-tree Filesystem
  • HDD – Hard Disk Drive
  • SSD – Solid-State Drive
  • GB – Gigabyte
  • VM – Virtual Machine

Prerequisites

To try out the examples included in this article:

  • You must have the Btrfs filesystem installed on your computer.
  • You must have a spare HDD/SSD (of any size) or at least 1 free HDD/SSD partition (of any size).

I have a 20 GB HDD sdb on my Ubuntu machine. I will create a Btrfs filesystem on the HDD sdb.

$ sudo lsblk -e7

Note: Your HDD/SSD will likely have a different name than mine, and so will the partitions. So, be sure to replace them with yours from now on.

You can create a Btrfs filesystem on your HDD/SSD (without partitioning) if you have a spare HDD/SSD. You can also create a partition on your HDD/SSD and create a Btrfs filesystem there.

For assistance with installing the Btrfs filesystem in Ubuntu, check out the article Install and Use Btrfs on Ubuntu 20.04 LTS.

For assistance with installing the Btrfs filesystem in Fedora, check out the article Install and Use Btrfs on Fedora 33.

Creating a Btrfs Filesystem

You can create a Btrfs filesystem on your HDD/SSD (unpartitioned) or on your HDD/SSD partition.

To create a Btrfs filesystem on the sdb HDD (entire HDD, no partitions) and give it the filesystem label data, run the following command:

$ sudo mkfs.btrfs -L data /dev/sdb

A Btrfs filesystem should now be created on the sdb HDD.

Create the directory /data to mount the Btrfs filesystem you have just created with the following command:

$ sudo mkdir -v /data

To mount the Btrfs filesystem created on the sdb HDD on the /data directory, run the following command:

$ sudo mount /dev/sdb /data

The Btrfs filesystem should now be mounted, as you can see in the screenshot below:

$ df -h /data

Defragmenting a Btrfs Filesystem Manually

As you can see in the screenshot below, I have copied two files on the Btrfs filesystem mounted on the /data directory to demonstrate the process of Btrfs filesystem defragmentation:

$ ls -lh /data

You can defragment a single file or an entire subvolume/directory recursively.

To defragment the single file /data/ubuntu-20.04.1-live-server-amd64.iso, we will run the following command:

$ sudo btrfs filesystem defragment -vf /data/ubuntu-20.04.1-live-server-amd64.iso

The file /data/ubuntu-20.04.1-live-server-amd64.iso should be defragmented.

To defragment every file or directory of the /data directory recursively, run the following command:

$ sudo btrfs filesystem defragment -rvf /data

As you can see, all the files of the /data directory are defragmented.

In the same way, if you had the subvolume /data/osimages, then you could defragment all the files of the /data/osimages subvolume recursively with the following command:

$ sudo btrfs filesystem defragment -rvf /data/osimages
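
If you want a rough idea of how fragmented a file is before and after defragmentation, the filefrag tool (from the e2fsprogs package, usually preinstalled) reports how many extents a file occupies. The extent count it reports on Btrfs is approximate, but it is still useful for before/after comparison:

$ sudo filefrag /data/ubuntu-20.04.1-live-server-amd64.iso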

Compressing a Btrfs Filesystem While Defragmenting

The Btrfs filesystem allows you to compress files while you defragment them.

To defragment all the files in the /data directory and compress them with the ZLIB compression algorithm at the same time, run the defragment command with the -czlib option, as follows:

$ sudo btrfs filesystem defragment -rvf -czlib /data

To defragment all the files in the /data directory and compress them with the ZSTD compression algorithm at the same time, run the defragment command with the -czstd option, as follows:

$ sudo btrfs filesystem defragment -rvf -czstd /data

To defragment all the files in the /data directory and compress them with the LZO compression algorithm at the same time, run the defragment command with the -clzo option, as follows:

$ sudo btrfs filesystem defragment -rvf -clzo /data

The files in the /data directory should be defragmented and compressed at the same time.

In the same way, you can defragment and compress the files of a Btrfs subvolume, as well.
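
If you would like to check how much space the compression actually saved, the compsize tool reports the compression ratio of a Btrfs path. It is usually not installed by default; on Ubuntu the package is, to the best of my knowledge, called btrfs-compsize:

$ sudo apt install btrfs-compsize

$ sudo compsize /data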

Defragmenting a Btrfs Filesystem Automatically

You can enable automatic defragmentation on your Btrfs filesystem at mount time. This feature of the Btrfs filesystem will defragment all the files of your Btrfs filesystem automatically.

To mount the Btrfs filesystem created on the sdb HDD in the /data directory with automatic defragmentation enabled at boot time, you must add an entry for the Btrfs filesystem in the /etc/fstab file.

First, find the UUID of the Btrfs filesystem created on the sdb HDD, as follows:

$ sudo blkid /dev/sdb

As you can see, the UUID of the Btrfs filesystem created on the sdb HDD is 60afc092-e0fa-4b65-81fd-5dfd7dd884de.

It will be different for you. So, be sure to replace it with yours from now on.

Open the /etc/fstab file with the nano text editor, as follows:

$ sudo nano /etc/fstab

Add the following line to the end of the /etc/fstab file:

UUID=60afc092-e0fa-4b65-81fd-5dfd7dd884de /data     btrfs      autodefrag         0              0

Once you are done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/fstab file.

For the changes to take effect, reboot your computer with the following command:

$ sudo reboot

As you can see, the Btrfs filesystem created on the sdb HDD is mounted on the /data directory with auto defragmentation enabled.
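
You can also confirm from the command line that the autodefrag mount option is active, for example with findmnt:

$ findmnt -o TARGET,SOURCE,FSTYPE,OPTIONS /data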

Problems with Defragmenting a Btrfs Filesystem

Though it may seem that defragmentation improves filesystem performance, there are some problems with defragmenting a Btrfs filesystem.

As Btrfs is a Copy-on-Write (CoW) filesystem, to understand the problems with Btrfs filesystem defragmentation, you must understand the Copy-on-Write feature of the Btrfs filesystem.

Suppose, you have a large file (file1) that uses 100 extents (you can think of extents as file blocks) of a Btrfs filesystem. If you create another copy of that large file (file2) in the same Btrfs filesystem, you will see that no additional disk space is used. That is because the files are identical, and the 100 extents of each file are the same. So, the Btrfs filesystem uses the same extents for both files.

Figure 1: file1 and file2 are identical and sharing the same Btrfs filesystem extents to save disk space
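
You can reproduce this behavior with a reflink copy. The file names below are just the hypothetical file1 and file2 from this example; the interesting part is that the copy completes instantly and the free space reported by df barely changes, because both files share the same extents:

$ cp --reflink=always /data/file1 /data/file2

$ df -h /data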

Now, say, you have modified one of the copies of the large file (file2). The modification needs to change 10 of the 100 extents. The Btrfs filesystem will copy the required 10 extents to another unused location of the filesystem (say, extents e101 to e110) and change them there. Once the changes are written to the disk, the Btrfs filesystem will re-link the extents so that the changes are reflected in the large file. The process is illustrated in the figure below:

Figure 2: 10 extents are changed in file2. So, the extents are re-linked in the Btrfs filesystem.

From figures 1 and 2, you can understand how Copy-on-Write (CoW) works and how the Btrfs filesystem uses Copy-on-Write (CoW) to save disk space.

Now that you know how the Copy-on-Write (CoW) feature of the Btrfs filesystem works, you will understand the problems with defragmenting a Btrfs filesystem.

  1. Defragmenting files moves Btrfs data extents and attempts to align them one after another. This breaks the Copy-on-Write links between the copies of a file, which creates redundant data extents and increases the disk usage of a Btrfs filesystem, disk space that was previously saved by sharing data extents between identical (or nearly identical) copies of the file.
  2. If a Btrfs subvolume has multiple snapshots, defragmenting the subvolume will break the Copy-on-Write links between the subvolume and the snapshots. This will also increase the disk usage of the Btrfs filesystem.
  3. If you are using the Btrfs filesystem for large databases or virtual machine images (for storing VM data/disks), defragmenting the filesystem will also negatively impact the performance of the filesystem.

Conclusion

In this article, you learned how to defragment a single file and the files in a directory/subvolume recursively of a Btrfs filesystem. You also learned how to enable automatic defragmentation on a Btrfs filesystem at mount time. Finally, the article discussed some of the problems with defragmenting a Btrfs filesystem.

How to Run Google Chrome OS from a USB Drive https://linuxhint.com/run_google_chrome_os/ Wed, 13 Jan 2021 06:49:02 +0000 https://linuxhint.com/?p=85461 Google Chrome OS is based on the open-source Chromium OS. It is a browser-based operating system. You will only have the Google Chrome web browser installed on it. You can install Chrome web apps or extensions from the Chrome Web Store and add more functionality to the operating system. Sadly, Google Chrome OS is not publicly available for download, and only the source code of Chromium OS is publicly available. So, you can’t run Google Chrome OS or Chromium OS directly on your computer.

Luckily, a few Chromium OS-based operating systems are available that you can download and install on your computer. The most popular one is Neverware’s CloudReady OS.

This article will show you how to make a Live bootable USB thumb drive of Neverware’s CloudReady OS and run it from the USB thumb drive. So, let’s get started.

Abbreviations

The abbreviations (short forms) used in this article are:

  • OS – Operating System
  • USB – Universal Serial Bus
  • BIOS – Basic Input/Output System

Downloading CloudReady OS

You can download CloudReady OS from the official website of Neverware.

First, visit the official website of Neverware from your favorite web browser.


Once the page loads, click on CLOUD READY EDITIONS > HOME as marked in the screenshot below.


Click on INSTALL THE HOME EDITION as marked in the screenshot below.


You should see the CloudReady system requirements in the What you need section of the web page.

At the time of this writing, you need an 8 GB or higher capacity USB thumb drive and a computer to flash the CloudReady image to the USB thumb drive.


Scroll down a little bit and click on DOWNLOAD 64-BIT IMAGE as marked in the screenshot below.


Your browser should start downloading the CloudReady OS image. It’s a big file. So, it may take a while to complete.

Creating a CloudReady OS Bootable USB Thumb Drive on Windows

You can create a CloudReady OS bootable USB thumb drive on Windows using the official CloudReady USB Maker.

From the page you downloaded the CloudReady OS image, click on DOWNLOAD USB MAKER as marked in the screenshot below.


Your browser should start downloading the CloudReady USB Maker.


Once CloudReady USB Maker is downloaded, run it.

Click on Yes.


Click on Next.


Once you see this window, plug the USB thumb drive into your computer.


Click on Next.


Select your USB thumb drive from the list and click on Next.


The CloudReady USB Maker is extracting the CloudReady OS image. It may take a while to complete.


Once the CloudReady OS image is extracted, the CloudReady USB Maker should start flashing the CloudReady image to the USB thumb drive. It may take a while to complete.


Once your USB thumb drive is flashed, click on Finish.


Finally, eject the USB thumb drive from your computer, and your USB thumb drive should be ready.

Creating a CloudReady OS Bootable USB Thumb Drive on Linux

You can create a CloudReady OS bootable USB thumb drive on Linux using the dd command-line tool.

First, navigate to the ~/Downloads directory as follows:

$ cd ~/Downloads

You should find the CloudReady OS image cloudready-free-85.4.0-64bit.zip here.

$ ls -lh


The CloudReady OS image is ZIP compressed. It would be best if you unzipped it.

To unzip the CloudReady OS image cloudready-free-85.4.0-64bit.zip, run the following command:

$ unzip cloudready-free-85.4.0-64bit.zip


The CloudReady OS image ZIP file is being extracted. It may take a while to complete.


At this point, the CloudReady OS image should be extracted.


Once the CloudReady OS image zip file is extracted, you should find a new file cloudready-free-85.4.0-64bit.bin in the ~/Downloads directory.

$ ls -lh


Now, insert the USB thumb drive on your computer and find the device name of your USB thumb drive as follows:

$ sudo lsblk -e7


As you can see, I am using a 32 GB USB thumb drive, and its name is sdb. It will be different for you. So, make sure to replace it with yours from now on.


To flash the USB thumb drive sdb with the CloudReady OS image cloudready-free-85.4.0-64bit.bin, run the following command:

$ sudo dd if=cloudready-free-85.4.0-64bit.bin of=/dev/sdb bs=4M status=progress


The CloudReady OS image cloudready-free-85.4.0-64bit.bin is being written to the USB thumb drive sdb. It may take a while to complete.


At this point, the CloudReady OS image cloudready-free-85.4.0-64bit.bin should be written to the USB thumb drive sdb.


Finally, eject the USB thumb drive sdb with the following command:

$ sudo eject /dev/sdb

Booting CloudReady OS from USB Thumb Drive

Now, insert the USB thumb drive on your computer, go to your computer’s BIOS, and boot from the USB thumb drive.

Once you’ve booted from the USB thumb drive, CloudReady should start in Live mode.

Initial Configuration of CloudReady OS

As you’re running CloudReady for the first time, you have to do some initial configuration.

Click on Let’s go.


You can configure the network from here if you need. Once you’re done, click on Next.


Click on CONTINUE.


Sign in to your Google account from here.


Once you’ve logged in to your Google account, you should see the following window.

Click on Get started.


You should see the CloudReady welcome screen. Close it.


CloudReady should be ready to be used. Have fun.

Conclusion

CloudReady OS is based on the open-source Chromium OS, which Google Chrome OS is also based on. In this article, I have shown you how to make a Live bootable USB thumb drive of CloudReady OS on Windows and Linux operating systems. Now, you should be able to run CloudReady OS from a USB thumb drive.

How to Backup Btrfs Snapshots to External Drives https://linuxhint.com/back_up_btrfs_snapshots_external_drives/ Sun, 10 Jan 2021 14:18:48 +0000 https://linuxhint.com/?p=85110

By default, you can store the snapshots you take of your Btrfs subvolumes in the same Btrfs filesystem, but it is not possible to store the snapshots of one Btrfs filesystem directly to another Btrfs filesystem. However, the Btrfs filesystem provides you with the necessary tools to back up snapshots of one Btrfs filesystem to another Btrfs filesystem. This article shows you how to back up Btrfs snapshots to an external Btrfs filesystem on an external drive.

Prerequisites

To try out the examples included in this article, you must fulfill the following prerequisites:

  • Have the Btrfs filesystem installed on your computer.
  • Have a hard disk or SSD with at least 2 free partitions (of any size).

I have the 20 GB hard disk, sdb, on my Ubuntu machine. I have created two partitions, sdb1 and sdb2, on this hard disk.

$ sudo lsblk -e7

Note: Your hard disk or SSD will have a different name than mine, and so will the partitions. So, be sure to replace these names with yours from now on.


I will create Btrfs filesystems on the sdb1 and the sdb2 partitions. The snapshots created on the Btrfs filesystem (sdb1) will be backed up to the Btrfs filesystem created on the sdb2 partition. The Btrfs filesystem created on the sdb2 partition will act as the external drive. You may use a USB thumb drive or an external hard drive, as well; just be sure to format it with the Btrfs filesystem.

For assistance with installing the Btrfs filesystem in Ubuntu, check out my article Install and Use Btrfs on Ubuntu 20.04 LTS.

For assistance with installing the Btrfs filesystem in Fedora, check out my article Install and Use Btrfs on Fedora 33.

Creating Required Btrfs Filesystems

I will format both the sdb1 and sdb2 partitions as Btrfs. I will use the sdb1 partition for storing the data and Btrfs snapshots. I will use the sdb2 partition for backing up the snapshots of the Btrfs filesystem created on the sdb1 partition.

To create a Btrfs filesystem on the sdb1 partition and give it the filesystem label data, run the following command:

$ sudo mkfs.btrfs -L data /dev/sdb1

A Btrfs filesystem should now be created on the sdb1 partition.

To create a Btrfs filesystem on the sdb2 partition and give it the filesystem label snapshots, run the following command:

$ sudo mkfs.btrfs -L snapshots /dev/sdb2

A Btrfs filesystem should now be created on the sdb2 partition.

Create the directories /data and /snapshots for mounting the sdb1 and sdb2 partitions, respectively, as follows:

$ sudo mkdir -v /{data,snapshots}

Mount the Btrfs filesystem you have created on the sdb1 partition on the /data directory, as follows:

$ sudo mount /dev/sdb1 /data

In the same way, mount the Btrfs filesystem you have created on the sdb2 partition on the /snapshots directory, as follows:

$ sudo mount /dev/sdb2 /snapshots

As you can see in the screenshot below, both the Btrfs filesystems (sdb1 and sdb2 partitions) have been mounted correctly.

$ df -h -t btrfs

Taking Snapshots of a Btrfs Filesystem

In this section, we will create the dummy project web1 on the /data/projects/web1 Btrfs subvolume. We will take a snapshot of that subvolume in this section, as well as some other snapshots in later sections of this article.

First, create the new directory /data/projects, as follows:

$ sudo mkdir -v /data/projects

Next, create the new subvolume web1 in the /data/projects directory, as follows:

$ sudo btrfs subvolume create /data/projects/web1

Finally, create the new file index.html in the /data/projects/web1 subvolume with the nano text editor, as follows:

$ sudo nano /data/projects/web1/index.html

Type in the following lines of code in the index.html file:

<!DOCTYPE html>
<html>
<head>
        <title>Demo Website</title>
        <link rel="stylesheet" href="style.css"/>
</head>
<body>
        <h1>Hello World</h1>
</body>
</html>


Once you are done, press <Ctrl> + X followed by Y and <Enter> to save the index.html file.

In the same way, create the new file style.css in the /data/projects/web1 subvolume as follows:

$ sudo nano /data/projects/web1/style.css


Type the following lines of code in the style.css file:

h1 {
        color: green;
}

Once you are done, press <Ctrl> + X followed by Y and <Enter> to save the style.css file.

Now, the /data/projects/web1 subvolume contains the index.html and style.css file.

$ ls -lh /data/projects/web1

We will keep all the snapshots of this Btrfs filesystem in the /data/.snapshots directory.

First, create the /data/.snapshots directory with the following command:

$ sudo mkdir -v /data/.snapshots

Next, create the read-only snapshot /data/.snapshots/web1-2020-12-30 of the /data/projects/web1 subvolume with the following command:

$ sudo btrfs subvolume snapshot -r /data/projects/web1 /data/.snapshots/web1-2020-12-30

As you can see, the new snapshot /data/.snapshots/web1-2020-12-30 has been created.

$ sudo btrfs subvolume list /data

Backing up Snapshots to External Drive

To back up the snapshot /data/.snapshots/web1-2020-12-30 to another Btrfs filesystem (external drive sdb2, in this case) mounted on the /snapshots directory, run the following command:

$ sudo btrfs send /data/.snapshots/web1-2020-12-30 | sudo btrfs receive /snapshots

The snapshot /data/.snapshots/web1-2020-12-30 should be backed up to the external Btrfs filesystem (sdb2) mounted on the /snapshots directory.

As you can see, the new subvolume web1-2020-12-30 has been created on the external Btrfs filesystem.

$ sudo btrfs subvolume list /snapshots

The snapshot web1-2020-12-30 should have the same files/directories as the /data/.snapshots/web1-2020-12-30 snapshot.

$ tree -a /snapshots

You can obtain more information about the backed-up snapshot /snapshots/web1-2020-12-30 as follows:

$ sudo btrfs subvolume show /snapshots/web1-2020-12-30
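
As a side note, if your backup drive is not formatted as Btrfs (for example, an ext4 or NTFS disk), you cannot use btrfs receive on it directly. You can, however, store the snapshot as a stream file with the -f option and replay it into a Btrfs filesystem later. The /mnt/backup path below is just a hypothetical mount point for such a drive:

$ sudo btrfs send -f /mnt/backup/web1-2020-12-30.btrfs /data/.snapshots/web1-2020-12-30

$ sudo btrfs receive -f /mnt/backup/web1-2020-12-30.btrfs /data/.snapshots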

Incremental Back-up of Snapshots to External Drive

If there are a lot of files in the snapshots to back up to an external drive, then incremental backups will help you speed up the back-up operation. In this case, Btrfs will only update the files that have changed since the last snapshot and copy new files that were not available in the last snapshot.

In this section, I will show you how to perform incremental back-ups of Btrfs snapshots to external Btrfs filesystems.

First, open the index.html file from the /data/projects/web1 subvolume, as follows:

$ sudo nano /data/projects/web1/index.html

Make any changes you want to the index.html file. Once you are done, press <Ctrl> + X followed by Y and <Enter> to save the index.html file.

Take a new read-only snapshot of the /data/projects/web1 subvolume, as follows:

$ sudo btrfs subvolume snapshot -r /data/projects/web1 /data/.snapshots/web1-2020-12-31

As you can see, the new snapshot /data/.snapshots/web1-2020-12-31 of the /data/projects/web1 subvolume has been created.

$ sudo btrfs subvolume list /data

Now, we are ready to take an incremental backup.

To take an incremental backup, you will need a common snapshot of both the source and the destination (external drive) Btrfs filesystems. The common snapshot is usually the latest snapshot of a Btrfs subvolume. When you take a new snapshot on the source Btrfs filesystem, the new snapshot is compared with the latest snapshot (available on both the source and the destination Btrfs filesystem) of the source Btrfs filesystem. Btrfs will calculate the difference and send only the required data to the destination Btrfs filesystem (the external drive).

For example, to take an incremental backup of the /data/.snapshots/web1-2020-12-31 snapshot, you must specify the parent snapshot (the latest snapshot available on both the source and destination Btrfs filesystems), /data/.snapshots/web1-2020-12-30, as well.

An incremental backup of the /data/.snapshots/web1-2020-12-31 snapshot can be taken to an external Btrfs filesystem, as follows:

$ sudo btrfs send -p /data/.snapshots/web1-2020-12-30 /data/.snapshots/web1-2020-12-31 | sudo btrfs receive /snapshots

An incremental backup of the /data/.snapshots/web1-2020-12-31 snapshot should be taken.

As you can see, the web1-2020-12-31 snapshot has been backed up to the external Btrfs filesystem mounted on the /snapshots directory.

$ sudo btrfs subvolume list /snapshots

As you can see in the screenshot below, the changes you have made to the index.html file are available in the web1-2020-12-31 snapshot that has been backed up to the external Btrfs filesystem.

$ cat /snapshots/web1-2020-12-31/index.html

In the same way, you may take as many incremental backups of your snapshots as you want.

I will show you how to do an incremental backup one more time. I will not take the time to explain it again. Instead, I will just show you the process for clarity.

Open the index.html file from the /data/projects/web1 subvolume, as follows:

$ sudo nano /data/projects/web1/index.html

Make any changes you want to the index.html file. Once you are done, press <Ctrl> + X followed by Y and <Enter> to save the index.html file.

Take a new read-only snapshot of the /data/projects/web1 subvolume, as follows:

$ sudo btrfs subvolume snapshot -r /data/projects/web1 /data/.snapshots/web1-2020-12-31_2

Take an incremental backup of the /data/.snapshots/web1-2020-12-31_2 snapshot to an external Btrfs filesystem, as follows:

$ sudo btrfs send -p /data/.snapshots/web1-2020-12-31 /data/.snapshots/web1-2020-12-31_2 | sudo btrfs receive /snapshots

Note: Now, the parent snapshot to which the /data/.snapshots/web1-2020-12-31_2 snapshot will be compared is /data/.snapshots/web1-2020-12-31.

As you can see, the web1-2020-12-31_2 snapshot has been backed up to the external Btrfs filesystem mounted on the /snapshots directory.

$ sudo btrfs subvolume list /snapshots

As you can see in the screenshot below, the recent changes made to the index.html file are available on the web1-2020-12-31_2 snapshot backed up to the external Btrfs filesystem.

$ cat /snapshots/web1-2020-12-31_2/index.html

Keeping Things Clean

If you back up your Btrfs snapshots frequently, you will end up with a lot of snapshots, and it may become difficult to manage them. Luckily, you can remove any snapshot from the Btrfs filesystem.

If you are using a large enough external drive for keeping backups of the Btrfs snapshots, then you can keep a few snapshots on your Btrfs filesystem and back up all the snapshots on your external drive.

If you are using a smaller external drive, then you can selectively keep only the most important snapshots backed up on the external drive.

To perform backups of your Btrfs snapshots, you need to keep at least the latest snapshot on both the source (/data/.snapshots) and the destination (/snapshots – external drive) Btrfs filesystems. So, feel free to remove any snapshots other than the latest snapshot on both ends.

For example, in this case, the latest snapshot is web1-2020-12-31_2. So, to perform incremental backups, this snapshot must be kept on the source and the destination (external drive) Btrfs filesystems.

Suppose, you want to remove the /data/.snapshots/web1-2020-12-30 snapshot.

To do this, run the following command:

$ sudo btrfs subvolume delete /data/.snapshots/web1-2020-12-30

The Btrfs snapshot /data/.snapshots/web1-2020-12-30 should now be removed.

In the same way, you can remove the /data/.snapshots/web1-2020-12-31 snapshot, as follows:

$ sudo btrfs subvolume delete /data/.snapshots/web1-2020-12-31

Now, only the latest snapshot, /data/.snapshots/web1-2020-12-31_2, is available on the Btrfs filesystem, mounted on the /data directory. The other snapshots are backed up on the external drive, mounted on the /snapshots directory.

$ sudo btrfs subvolume list /data

$ sudo btrfs subvolume list /snapshots

Restoring Snapshots from External Drive

If you have backed up your snapshots on the external drive, you can restore them at any time from the external drive.

For example, I have removed the web1-2020-12-30 snapshot from my Btrfs filesystem, mounted on the /data directory. But, this snapshot is backed up on the external drive, mounted on the /snapshots directory. Let us restore this snapshot.

$ sudo btrfs subvolume list /snapshots

To restore the web1-2020-12-30 snapshot from the external drive, run the following command:

$ sudo btrfs send /snapshots/web1-2020-12-30 | sudo btrfs receive /data/.snapshots

The snapshot web1-2020-12-30 should be restored on the Btrfs filesystem mounted on the /data directory.

As you can see, the web1-2020-12-30 snapshot is restored on the Btrfs filesystem mounted on the /data directory.

$ sudo btrfs subvolume list /data

And, as you can see, the contents of the index.html file from the web1-2020-12-30 snapshot are intact. This is the first version of the index.html file from before.

$ cat /data/.snapshots/web1-2020-12-30/index.html
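
One thing to keep in mind: subvolumes created by btrfs receive are read-only. If you need a writable copy of the restored data, take a regular (writable) snapshot of the restored snapshot; the destination path below is just an example:

$ sudo btrfs subvolume snapshot /data/.snapshots/web1-2020-12-30 /data/projects/web1-restored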

Conclusion

In this article, you learned how to back up snapshots of your Btrfs filesystem to an external drive. You also learned how to take incremental backups of your Btrfs snapshots to an external drive. Finally, you learned how to remove existing snapshots from a Brtfs filesystem and restore snapshots from the external drive, as well.

How to Use Btrfs Snapshots https://linuxhint.com/use-btrfs-snapshots/ Sat, 09 Jan 2021 11:12:01 +0000 https://linuxhint.com/?p=85021 The Btrfs filesystem has built-in filesystem-level snapshot support. You can create a subvolume in your Btrfs filesystem and take snapshots of the files/directories in that subvolume. Taking a snapshot of a subvolume will save the state of the files/directories in that subvolume. You can recover any files/directories of the subvolume from the snapshot in case you need it.

The snapshot feature of the Btrfs filesystem uses the Copy-on-Write (CoW) principle. So, it does not take much disk space, and you can take snapshots of a subvolume instantly.

The Btrfs filesystem supports 2 types of snapshots.

  1. Writable snapshots: If you take a writable snapshot, you can modify that snapshot’s files/directories later. This is the default snapshot type of the Btrfs filesystem.
  2. Read-only snapshots: If you take a read-only snapshot, you can’t modify that snapshot’s files/directories later.

This article will show you how to take writable and read-only snapshots of your Btrfs filesystem subvolumes. I will also show you how to update a writable snapshot and recover files from a snapshot. I will show you how to remove a snapshot as well. So, let’s get started.

Prerequisites

To try out the examples of this article,

  • You must have the Btrfs filesystem installed on your computer.
  • You need to have a hard disk or SSD with at least 1 free partition (of any size).

I have a 20 GB hard disk sdb on my Ubuntu machine. I have created 2 partitions sdb1 and sdb2 on this hard disk. I will use the partition sdb1 in this article.

$ sudo lsblk -e7

Your hard disk or SSD may have a different name than mine, and so will the partitions. So, make sure to replace them with yours from now on.

If you need any assistance on installing the Btrfs filesystem on Ubuntu, check my article Install and Use Btrfs on Ubuntu 20.04 LTS.

If you need any assistance on installing the Btrfs filesystem on Fedora, check my article Install and Use Btrfs on Fedora 33.

Creating a Btrfs Filesystem

To experiment with Btrfs subvolumes, you need to create a Btrfs filesystem.

To create a Btrfs filesystem with the label data on the sdb1 partition, run the following command:

$ sudo mkfs.btrfs -L data /dev/sdb1

A Btrfs filesystem should be created.

Create a directory /data with the following command:

$ sudo mkdir -v /data

To mount the Btrfs filesystem created on the sdb1 partition in the /data directory, run the following command:

$ sudo mount /dev/sdb1 /data

The Btrfs filesystem should be mounted as you can see in the screenshot below.

$ df -h /data

Preparing the Btrfs Filesystem for Snapshots

In Btrfs, you can take snapshots of Btrfs subvolumes only. The main root of a Btrfs filesystem is also a subvolume. So, you can take the backup of the entire Btrfs filesystem as well as specific subvolumes.

This section will create a Btrfs subvolume /data/projects/web1 and create the necessary files for the next sections of this article below. I will also create a directory where you can keep your snapshots. In the next sections, I will show you how to take snapshots (writable and read-only), update a writable snapshot, and recover files from the snapshot. So, let’s get started.

First, create a new directory /data/projects as follows:

$ sudo mkdir -v /data/projects

Create a new subvolume web1 in the /data/projects directory as follows:

$ sudo btrfs subvolume create /data/projects/web1

Create a new file index.html in the /data/projects/web1 subvolume as follows:

$ sudo nano /data/projects/web1/index.html

Type in the following lines of code in the index.html file.

<!DOCTYPE html>
<html>
<head>
    <title>Demo Website</title>
    <link rel="stylesheet" href="style.css"/>
</head>
<body>
    <h1>Hello World 4</h1>
</body>
</html>

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the index.html file.

Create a new file style.css in the /data/projects/web1 subvolume as follows:

$ sudo nano /data/projects/web1/style.css

Type in the following lines of code in the style.css file.

h1 {
color: green;
}

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the style.css file.

Now, the /data/projects/web1 subvolume has the index.html and style.css file.

$ ls -lh /data/projects/web1

I want to keep all the snapshots of this Btrfs filesystem in the /data/.snapshots directory.

Create the /data/.snapshots directory with the following command:

$ sudo mkdir -v /data/.snapshots

Taking Snapshots of a Subvolume

To take a snapshot of the /data/projects/web1 subvolume into the /data/.snapshots/web1-2020-12-25 directory (will be created automatically), run the following command:

$ sudo btrfs subvolume snapshot /data/projects/web1 /data/.snapshots/web1-2020-12-25

A snapshot of the /data/projects/web1 directory should be created on the /data/.snapshots/web1-2020-12-25 directory.

As you can see in the screenshot below, a new subvolume .snapshots/web1-2020-12-25 is created. A snapshot is actually a subvolume.

$ sudo btrfs subvolume list /data

You can see more information about the snapshot you’ve created in the /data/.snapshots/web1-2020-12-25 directory as follows:

$ sudo btrfs subvolume show /data/.snapshots/web1-2020-12-25

As you can see, all the files that are in the /data/projects/web1 subvolume are in the /data/.snapshots/web1-2020-12-25 snapshot.

$ tree -a /data

Recovering Files from Snapshots

In this section, I am going to show you how to recover files from the Btrfs snapshots.

First, I am going to show you how to recover a single file from the snapshot.

Open the /data/projects/web1/index.html file with the nano text editor as follows:

$ sudo nano /data/projects/web1/index.html

Make any changes you want.

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the file.

As you can see, the main index.html file is different from the index.html file in the snapshot.

$ cat /data/projects/web1/index.html
$ cat /data/.snapshots/web1-2020-12-25/index.html

The changes we have made to the main index.html file are unwanted, and we want to recover the index.html file from the snapshot.

You can restore the index.html file from the snapshot as follows:

$ sudo cp -v /data/.snapshots/web1-2020-12-25/index.html /data/projects/web1/index.html

As you can see, the index.html file is restored from the snapshot.

$ cat /data/projects/web1/index.html
$ cat /data/.snapshots/web1-2020-12-25/index.html

Now, let’s see how to recover all the files/directories from the snapshot.

Remove all the files from the /data/projects/web1 snapshot as follows:

$ sudo rm -rv /data/projects/web1/*

To recover all the files/directories from the snapshot, run the following command:

$ sudo rsync -avz /data/.snapshots/web1-2020-12-25/ /data/projects/web1/

As you can see, the files/directories are restored from the snapshot.

$ ls -lh /data/projects/web1

Finally, let’s see how to recover files/directories from the snapshot in mirror mode. In mirror mode, the subvolume’s files/directories will be the same as in the snapshot. If there are any files/directories in the subvolume that are not available in the snapshot, they will be removed.

Let’s create a new file in the subvolume to differentiate the file tree from the snapshot.

Create a README.txt file in the /data/projects/web1 subvolume as follows:

$ echo "hello world 5" | sudo tee /data/projects/web1/README.txt

As you can see, the file tree of the /data/projects/web1 subvolume is different from the /data/.snapshots/web1-2020-12-25 snapshot.

$ tree -a /data

To restore the files/directories from the /data/.snapshots/web1-2020-12-25 snapshot to the /data/projects/web1 subvolume in mirror mode, run the following command:

$ sudo rsync -avz --delete /data/.snapshots/web1-2020-12-25/ /data/projects/web1/

All the files/directories of the /data/projects/web1 subvolume should be restored (in mirror mode) from the /data/.snapshots/web1-2020-12-25 snapshot.

The file tree of the /data/projects/web1 subvolume and the /data/.snapshots/web1-2020-12-25 snapshot should be the same.

As you can see, the index.html file and style.css file contents are the same in the /data/projects/web1 subvolume and the /data/.snapshots/web1-2020-12-25 snapshot.

Contents of the index.html and style.css file in the /data/projects/web1 subvolume.

$ cat /data/projects/web1/index.html
$ cat /data/projects/web1/style.css

Contents of the index.html and style.css file in the /data/.snapshots/web1-2020-12-25 snapshot.

$ cat /data/.snapshots/web1-2020-12-25/index.html
$ cat /data/.snapshots/web1-2020-12-25/style.css

Updating a Snapshot

By default, the Btrfs filesystem takes writable snapshots. A Btrfs snapshot is just like a subvolume. So, you can modify/update the files/directories of a writable snapshot.

Let’s update the index.html file in the /data/projects/web1 subvolume.

First, open the index.html file from the /data/projects/web1 subvolume with the nano text editor as follows:

$ sudo nano /data/projects/web1/index.html

Make any changes you want. Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the index.html file.

As you can see, the index.html file of the /data/projects/web1 subvolume is different from the /data/.snapshots/web1-2020-12-25 snapshot.

$ cat /data/projects/web1/index.html
$ cat /data/.snapshots/web1-2020-12-25/index.html

You want to keep the index.html file of the /data/projects/web1 subvolume.

To update the index.html file in the /data/.snapshots/web1-2020-12-25 snapshot, run the following command:

$ sudo cp -v /data/projects/web1/index.html /data/.snapshots/web1-2020-12-25/index.html

As you can see, the index.html file of the /data/.snapshots/web1-2020-12-25 snapshot is updated.

Updating a snapshot is as easy as copying new files to the snapshot.

Taking Read-Only Snapshots of a Subvolume

At times, you don’t want the snapshots you’ve taken to be updated in any way. In that case, you can create read-only snapshots.

For example, to create a read-only snapshot /data/.snapshots/web1-2020-12-26 of the /data/projects/web1 subvolume, run the following command:

$ sudo btrfs subvolume snapshot -r /data/projects/web1 /data/.snapshots/web1-2020-12-26

As you can see, a new subvolume .snapshots/web1-2020-12-26 is created.

$ sudo btrfs subvolume list /data

As you can see, the snapshot /data/.snapshots/web1-2020-12-26 is read-only.

$ sudo btrfs subvolume show /data/.snapshots/web1-2020-12-26

Let’s update the index.html file from the /data/projects/web1 subvolume.

To do that, open the index.html file from the /data/projects/web1 subvolume with the nano text editor as follows:

$ sudo nano /data/projects/web1/index.html

Make any changes you want. Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the changes.

As you can see, the index.html in the /data/projects/web1 subvolume is different from the /data/.snapshots/web1-2020-12-26 snapshot.

$ cat /data/projects/web1/index.html
$ cat /data/.snapshots/web1-2020-12-26/index.html

Let’s try to update the index.html file in the /data/.snapshots/web1-2020-12-26 snapshot.

$ sudo cp -v /data/projects/web1/index.html /data/.snapshots/web1-2020-12-26/index.html

As you can see, you can’t update the index.html file of the /data/.snapshots/web1-2020-12-26 snapshot because the snapshot is read-only.
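
If you ever do need to modify a read-only snapshot, you can check and flip its read-only property. Keep in mind that this defeats the purpose of a read-only snapshot, so use it sparingly:

$ sudo btrfs property get -ts /data/.snapshots/web1-2020-12-26 ro

$ sudo btrfs property set -ts /data/.snapshots/web1-2020-12-26 ro false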

Removing a Snapshot

I have told you earlier that a Btrfs snapshot is like a subvolume. So, you can remove a Btrfs snapshot just like you remove a Btrfs subvolume, using the same command.

This is what the file tree of the Btrfs filesystem mounted on the /data directory looks like at the moment.

$ tree -a /data

Let’s remove the .snapshots/web1-2020-12-25 snapshot.

$ sudo btrfs subvolume list /data

To remove the /data/.snapshots/web1-2020-12-25 snapshot, run the following command:

$ sudo btrfs subvolume delete /data/.snapshots/web1-2020-12-25

As you can see, the snapshot .snapshots/web1-2020-12-25 is no more.

$ sudo btrfs subvolume list /data

As you can see, the files/directories of the /data/.snapshots/web1-2020-12-25 snapshot are removed as well.

$ tree -a /data

Conclusion

This article has shown you how to take writable and read-only snapshots of your Btrfs filesystem subvolumes. I have also shown you how to update a writable snapshot and recover files from a snapshot. I have shown you how to remove a Btrfs snapshot as well. This article should help you get started with the Btrfs snapshot feature.

How to Create and Mount Btrfs Subvolumes https://linuxhint.com/create-mount-btrfs-subvolumes/ Fri, 08 Jan 2021 20:50:48 +0000 https://linuxhint.com/?p=84971 A Btrfs subvolume works just like a directory, but it has its own file tree, so you can mount Btrfs subvolumes separately. You also need to create subvolumes to take snapshots of your important data.

This article will show you how to create and delete Btrfs subvolumes, mount Btrfs subvolumes, and automatically mount Btrfs subvolumes using the /etc/fstab file. So, let’s get started.

Prerequisites

To try out the examples of this article,

  • You must have the Btrfs filesystem installed on your computer.
  • You need to have a hard disk or SSD with at least 1 free partition (of any size).

I have a 20 GB hard disk sdb on my Ubuntu machine. I have created 2 partitions sdb1 and sdb2 on this hard disk. I will use the partition sdb1 in this article.

$ sudo lsblk -e7

Your hard disk or SSD may have a different name than mine, and so will the partitions. So, make sure to replace them with yours from now on.

If you need any assistance on installing the Btrfs filesystem on Ubuntu, check my article Install and Use Btrfs on Ubuntu 20.04 LTS.

If you need any assistance on installing the Btrfs filesystem on Fedora, check my article Install and Use Btrfs on Fedora 33.

Creating a Btrfs Filesystem

To experiment with Btrfs subvolumes, you need to create a Btrfs filesystem.

To create a Btrfs filesystem with the label data on the sdb1 partition, run the following command:

$ sudo mkfs.btrfs -L data /dev/sdb1

A Btrfs filesystem should be created.

Create a directory /data with the following command:

$ sudo mkdir -v /data

To mount the Btrfs filesystem created on the sdb1 partition in the /data directory, run the following command:

$ sudo mount /dev/sdb1 /data

The Btrfs filesystem should be mounted as you can see in the screenshot below.

$ df -h /data

Creating Btrfs Subvolumes

A Btrfs subvolume is just like a directory in your Btrfs filesystem. So, you need to specify a directory path to create a Btrfs subvolume in that directory path. The path must point to a Btrfs filesystem where you want to create the subvolume.

For example, to create a Btrfs subvolume in the path /data/photos (the Btrfs filesystem is mounted in the /data directory), run the following command:

$ sudo btrfs subvolume create /data/photos

A Btrfs subvolume /data/photos should be created.

Let’s create some more Btrfs subvolumes.

Create a Btrfs subvolume /data/videos with the following command:

$ sudo btrfs subvolume create /data/videos

Create a Btrfs subvolume /data/documents with the following command:

$ sudo btrfs subvolume create /data/documents

Create a Btrfs subvolume /data/projects with the following command:

$ sudo btrfs subvolume create /data/projects

As you can see, a new directory is automatically created for each of the subvolumes.

You can list all the subvolumes of your Btrfs filesystem (mounted on the /data directory) as follows:

$ sudo btrfs subvolume list /data

As you can see, all the subvolumes we have created are listed.

You can find a lot of information about a Btrfs subvolume (let’s say /data/projects) like the subvolume name, the subvolume UUID, the subvolume ID etc. as follows:

$ sudo btrfs subvolume show /data/projects

Let’s create some dummy files in each of the Btrfs subvolumes. Once we mount the Btrfs subvolumes separately, the files in each of the subvolumes should be there.

To create some dummy files in the /data/projects subvolume, run the following command:

$ sudo touch /data/projects/file{1..3}

To create some dummy files in the /data/photos subvolume, run the following command:

$ sudo touch /data/photos/file{4..6}

To create some dummy files in the /data/videos subvolume, run the following command:

$ sudo touch /data/videos/file{7..8}

To create some dummy files in the /data/documents subvolume, run the following command:

$ sudo touch /data/documents/file{9..10}

Right now, this is how the Btrfs filesystem mounted on the /data directory looks.

$ tree /data

Mounting Btrfs Subvolumes

To mount a Btrfs subvolume, you need to know either its name or its ID.

You can find the name or the ID of all the Btrfs subvolumes created on the Btrfs filesystem mounted on the /data directory as follows:

$ sudo btrfs subvolume list /data

Let’s mount the projects Btrfs subvolume. The projects Btrfs subvolume has the ID 261.

I will mount the Btrfs subvolume projects in the /tmp/projects directory to show you how to mount a Btrfs subvolume.

Create a directory /tmp/projects as follows:

$ sudo mkdir -v /tmp/projects

You can mount the projects Btrfs subvolume (which is available in the Btrfs filesystem created on the sdb1 partition) using its name projects in the /tmp/projects directory as follows:

$ sudo mount /dev/sdb1 -o subvol=projects /tmp/projects

The projects subvolume should be mounted on the /tmp/projects directory as you can see in the screenshot below.

$ sudo btrfs subvolume show /tmp/projects

You can also see that the Btrfs filesystem (the projects subvolume) is mounted on the /tmp/projects directory.

$ df -h -t btrfs

All the files you have created in the projects subvolume are also available in the /tmp/projects directory as you can see in the screenshot below.

$ tree /tmp/projects

Now, let’s see how to mount a Btrfs subvolume using its ID.

Before that, umount the projects subvolume from the /tmp/projects directory as follows:

$ sudo umount /tmp/projects

You can mount the projects Btrfs subvolume (which is available in the Btrfs filesystem created on the sdb1 partition) using its ID 261 in the /tmp/projects directory as follows:

$ sudo mount /dev/sdb1 -o subvolid=261 /tmp/projects

The projects subvolume should be mounted on the /tmp/projects directory as you can see in the screenshot below.

$ sudo btrfs subvolume show /tmp/projects

You can also see that the Btrfs filesystem (the projects subvolume) is mounted on the /tmp/projects directory.

$ df -h -t btrfs

All the files you have created in the projects subvolume are also available in the /tmp/projects directory as you can see in the screenshot below.

$ tree /tmp/projects

Removing Btrfs Subvolumes

In this section, I am going to show you how to remove a Btrfs subvolume.

Let’s create a Btrfs subvolume test on the Btrfs filesystem mounted on the /data directory as follows:

$ sudo btrfs subvolume create /data/test

As you can see, the test subvolume is created on the Btrfs filesystem mounted on the /data directory.

$ sudo btrfs subvolume list /data

To remove the test Btrfs subvolume, run the following command:

$ sudo btrfs subvolume delete /data/test

NOTE: If you delete a Btrfs subvolume, all the files/directories in that subvolume will also be removed.

As you can see, the Btrfs subvolume test is removed.

$ sudo btrfs subvolume list /data

Automatically Mount Btrfs Subvolumes at Boot Time

In this section, I will show you how to automatically mount, at boot time, the Btrfs subvolumes of the Btrfs filesystem created on the sdb1 partition (currently mounted on the /data directory).

First, unmount the Btrfs filesystem, which is mounted on the /data directory as follows:

$ sudo umount /data

I want to mount the Btrfs subvolumes in their respective directories. Let’s create some directories where we can mount the Btrfs subvolumes.

To create the directories documents, projects, photos, and videos, run the following command:

$ sudo mkdir -pv /data/{documents,projects,photos,videos}

Find the UUID of the Btrfs filesystem on the sdb1 partition as follows:

$ sudo blkid  /dev/sdb1

As you can see, the UUID of the Btrfs filesystem is 0b56138b-6124-4ec4-a7a3-7c503516a65c.

Now, edit the /etc/fstab file with the nano text editor as follows:

$ sudo nano /etc/fstab

Type in the following lines in the /etc/fstab file:

# Mount the Btrfs subvolumes to their respective directories
UUID=0b56138b-6124-4ec4-a7a3-7c503516a65c   /data/projects    btrfs   subvol=projects    0   0
UUID=0b56138b-6124-4ec4-a7a3-7c503516a65c   /data/documents   btrfs   subvol=documents   0   0
UUID=0b56138b-6124-4ec4-a7a3-7c503516a65c   /data/photos      btrfs   subvol=photos      0   0
UUID=0b56138b-6124-4ec4-a7a3-7c503516a65c   /data/videos      btrfs   subvol=videos      0   0

NOTE: Make changes as required.

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/fstab file.
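
If you want to verify the new /etc/fstab entries before rebooting, you can optionally ask mount to process every entry that is not mounted yet and then check the result:

$ sudo mount -a
$ df -h -t btrfs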

For the changes to take effect, reboot your computer with the following command:

$ sudo reboot

Once your computer boots, the Btrfs subvolumes should be mounted on their respective directories as you can see in the screenshot below.

$ df -h -t btrfs

Conclusion

In this article, I have shown you how to create and delete Btrfs subvolumes, mount Btrfs subvolumes, and automatically mount Btrfs subvolumes using the /etc/fstab file. This article should help you get started with the subvolume feature of the Btrfs filesystem.

]]>
How to Save Disk Space using Btrfs Deduplication https://linuxhint.com/save-disk-space-btrfs-deduplication/ Tue, 29 Dec 2020 22:14:20 +0000 https://linuxhint.com/?p=83592 Deduplication is a software feature that is used to remove duplicate data blocks (redundant data blocks) from a filesystem to save disk spaces. The Btrfs filesystem is a modern Copy-on-Write (CoW) filesystem that supports deduplication.

If you need to keep a lot of redundant data (e.g., file backups or databases) on your computer, then the Copy-on-Write (CoW) and deduplication features of the Btrfs filesystem can save a huge amount of disk space.

In this article, I will show you how to save disk space using the Btrfs deduplication feature. So, let’s get started.

Prerequisites:

To try out the examples of this article,

  • You must have the Btrfs filesystem installed on your computer.
  • You need to have a hard disk or SSD with at least 1 free partition (of any size).

I have a 20 GB hard disk sdb on my Ubuntu machine. I have created 2 partitions, sdb1 and sdb2, on this hard disk. I will use the partition sdb1 in this article.

$ sudo lsblk -e7

Your hard disk or SSD may have a different name than mine, and so will the partitions, so make sure to replace them with yours from now on.

If you need any assistance on installing the Btrfs filesystem on Ubuntu, check my article Install and Use Btrfs on Ubuntu 20.04 LTS.

If you need any assistance on installing the Btrfs filesystem on Fedora, check my article Install and Use Btrfs on Fedora 33.

Creating a Btrfs Filesystem:

To experiment with Btrfs deduplication, you need to create a Btrfs filesystem.

To create a Btrfs filesystem with the label data on the sdb1 partition, run the following command:

$ sudo mkfs.btrfs -L data /dev/sdb1

Mount a Btrfs Filesystem:

Create a directory /data with the following command:

$ sudo mkdir -v /data

To mount the Btrfs filesystem created on the sdb1 partition on the /data directory, run the following command:

$ sudo mount /dev/sdb1 /data

The Btrfs filesystem should be mounted, as you can see in the screenshot below.

$ df -h /data

Installing Deduplication Tools on Ubuntu 20.04 LTS:

To deduplicate a Btrfs filesystem, you need to install the duperemove program on your computer.

If you’re using Ubuntu 20.04 LTS, then you can install duperemove from the official package repository of Ubuntu.

First, update the APT package repository cache with the following command:

$ sudo apt update

Install the duperemove package with the following command:

$ sudo apt install duperemove -y

The duperemove package should be installed.

Installing Deduplication Tools on Fedora 33:

To deduplicate a Btrfs filesystem, you need to install the duperemove program on your computer.

If you’re using Fedora 33, then you can install duperemove from the official package repository of Fedora.

First, update the DNF package repository cache with the following command:

$ sudo dnf makecache

Install the duperemove package with the following command:

$ sudo dnf install duperemove

To confirm the installation, press Y and then press <Enter>.

The duperemove package should be installed.

Testing Deduplication on a Btrfs Filesystem:

In this section, I am going to do a simple test to show you how the deduplication feature of the Btrfs filesystem removes redundant data from the filesystem and saves disk space.

As you can see,

  1. I have copied a file QGIS-OSGeo4W-3.14.0-1-Setup-x86_64.exe to the /data directory. The file is 407 MB in size.
  2. The file stored in the /data directory is 407 MB in size.
  3. The file alone consumed about 412 MB of disk space on the Btrfs filesystem mounted on the /data directory.

As you can see,

  1. I have copied the same file to the /data directory and renamed it to QGIS-OSGeo4W-3.14.0-1-Setup-x86_64.2.exe.
  2. The files stored in the /data directory are now 814 MB in size in total.
  3. The files consumed about 820 MB of disk space from the Btrfs filesystem mounted on the /data directory.

To perform the deduplication operation on the Btrfs filesystem mounted on the /data directory, run the following command:

$ sudo duperemove -dr /data

The redundant data blocks from the Btrfs filesystem mounted on the /data directory should be removed.

As you can see,

  1. I have the files QGIS-OSGeo4W-3.14.0-1-Setup-x86_64.exe and QGIS-OSGeo4W-3.14.0-1-Setup-x86_64.2.exe in the /data directory.
  2. The files stored in the /data directory are still 814 MB in size in total.
  3. The files now consume only about 412 MB of disk space on the Btrfs filesystem mounted on the /data directory.

The duperemove program removed the redundant (duplicate) data blocks from the Btrfs filesystem mounted on the /data directory and saved a lot of disk space.
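
If you want to confirm the savings yourself, one way (just a suggestion) is to compare the per-file block usage reported by du with the space the Btrfs filesystem has actually allocated; du counts each file's blocks separately, while btrfs filesystem df shows the shared, deduplicated allocation:

$ sudo du -sh /data
$ sudo btrfs filesystem df /data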

Automatically Mounting a Btrfs Filesystem on Boot:

To automatically mount the Btrfs filesystem you have created at boot time, you need to know the UUID of the Btrfs filesystem.

You can find the UUID of the Btrfs filesystem mounted on the /data directory with the following command:

$ sudo btrfs filesystem show /data

As you can see, the UUID of the Btrfs filesystem that I want to mount at boot time is e39ac376-90dd-4c39-84d2-e77abb5e3059. It will be different for you. So, make sure to replace it with yours from now on.

Open the /etc/fstab file with the nano text editor as follows:

$ sudo nano /etc/fstab

Type in the following line at the end of the /etc/fstab file:

UUID=e39ac376-90dd-4c39-84d2-e77abb5e3059    /data    btrfs    defaults   0   0

NOTE: Replace the UUID of the Btrfs filesystem with yours. Also, change the mount options as you like.

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/fstab file.

For the changes to take effect, reboot your computer with the following command:

$ sudo reboot

Once your computer boots, the Btrfs filesystem should be mounted in the /data directory, as you can see in the screenshot below.

$ df -h /data

Automatically Perform Deduplication using Cron Job:

To remove redundant data from the Btrfs filesystem, you have to run the duperemove command every once in a while.

You can automatically run the duperemove command hourly, daily, weekly, monthly, yearly, or at boot time using a cron job.

First, find the full path of the duperemove command with the following command:

$ which duperemove

As you can see, the full path of the duperemove command is /usr/bin/duperemove. Remember the path as you will need it later.

To edit the crontab file, run the following command:

$ sudo crontab -e

Select a text editor you like and press <Enter>.

I will use the nano text editor. So, I will type in 1 and press <Enter>.

The crontab file should be opened.

To run the duperemove command on the /data directory every hour, add the following line at the end of the crontab file.

@hourly /usr/bin/duperemove -dr /data >> /var/log/duperemove.log

To run the duperemove command on the /data directory every day, add the following line at the end of the crontab file.

@daily /usr/bin/duperemove -dr /data >> /var/log/duperemove.log

To run the duperemove command on the /data directory every week, add the following line at the end of the crontab file.

@weekly /usr/bin/duperemove -dr /data >> /var/log/duperemove.log

To run the duperemove command on the /data directory every month, add the following line at the end of the crontab file.

@monthly /usr/bin/duperemove -dr /data >> /var/log/duperemove.log

To run the duperemove command on the /data directory every year, add the following line at the end of the crontab file.

@yearly /usr/bin/duperemove -dr /data >> /var/log/duperemove.log

To run the duperemove command on the /data directory at boot time, add the following line at the end of the crontab file.

@reboot /usr/bin/duperemove -dr /data >> /var/log/duperemove.log

NOTE: I will run the duperemove command at boot time in this article.

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the crontab file.

A new cron job should be installed.

For the changes to take effect, reboot your computer with the following command:

$ sudo reboot

As the duperemove command runs in the background, the output of the command will be stored in the /var/log/duperemove.log file.

$ sudo ls -lh /var/log/duperemove*

As you can see, the /var/log/duperemove.log file contains the duperemove log data. It means the cron job is working just fine.
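
You can also take a quick look at the most recent lines of the log to see what the last run did, for example:

$ sudo tail -n 20 /var/log/duperemove.log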

Conclusion:

In this article, I have shown you how to install the duperemove Btrfs deduplication tool on Ubuntu 20.04 LTS and Fedora 33. I have also shown you how to perform Btrfs deduplication using the duperemove tool and run the duperemove tool automatically using a cron job.

]]>
Install and Use Btrfs on Fedora 33 https://linuxhint.com/install-and-use-btrfs-on-fedora33/ Mon, 28 Dec 2020 13:59:24 +0000 https://linuxhint.com/?p=83171 Btrfs (B-Tree Filesystem) is a modern copy-on-write (CoW) filesystem for Linux. It aims to implement many advanced filesystem features while focusing on fault tolerance, repair, and easy administration. The Btrfs filesystem is designed to support the requirements of high-performance, high-capacity storage servers.

If you want to learn more about the Btrfs filesystem, check my article Introduction to Btrfs Filesystem.

In this article, I am going to show you how to install Btrfs on Fedora 33 and use it. So, let’s get started.

Installing Btrfs Filesystem

The Btrfs filesystem package is available in the official package repository of Fedora 33. So, you can easily install it on your Fedora 33 operating system.

First, update the DNF package manager cache with the following command:

$ sudo dnf makecache

To install the Btrfs filesystem on Fedora 33, run the following command:

$ sudo dnf install btrfs-progs -y

Fedora 33 uses the Btrfs filesystem by default. So, it should be installed on your Fedora 33 operating system already.

Partitioning the Disk

You don’t have to partition your HDD/SSD to create a Btrfs filesystem; you can create it directly on a bare, unpartitioned HDD/SSD. Still, you may want to partition your HDD/SSD before formatting it with the Btrfs filesystem.
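
For example, if you wanted to put a Btrfs filesystem directly on a whole, unpartitioned disk, a single mkfs.btrfs command on the disk device would do it. The disk name sdc below is only a hypothetical example, and this step is not required for this article:

$ sudo mkfs.btrfs -L data /dev/sdc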

You can list all the storage devices and partitions of your computer with the following command:

$ sudo lsblk

I have an HDD sdb on my computer, as you can see in the screenshot below. I will be partitioning the HDD sdb and formatting the partitions with the Btrfs filesystem for the demonstration in this article.

To partition the HDD sdb, I will use the cfdisk partitioning tool.

You can open the HDD sdb with the cfdisk partitioning tool as follows:

$ sudo cfdisk /dev/sdb

Select gpt and press <Enter>.

To create a new partition, select Free space, select [ New ], and press <Enter>.

Type in the size of the partition you want to create. I will create a 10 GB partition. So, I will type in 10G.

You can use the following symbols to create partitions of different sizes/units:

  • M – partition size in megabytes
  • G – partition size in gigabytes
  • T – partition size in terabytes
  • S – the number of sectors you want to use for the partition

Once you’re done, press <Enter>.

A new partition (sdb1 in my case) should be created.

Let’s create another partition.

To do that, select the Free space, select [ New ], and press <Enter>.

Type in the size of the partition and press <Enter>.

A new partition (sdb2 in my case) should be created.

To write the changes to the disk, select [ Write ] and press <Enter>.

To confirm the changes, type in yes and press <Enter>.

The partition table should be saved to the disk.

To quit cfdisk program, select [ Quit ] and press <Enter>.

Formatting a Disk with Btrfs Filesystem

In this section, I am going to show you how to format a partition with the Btrfs filesystem.

I have created 2 partitions sdb1 and sdb2 in the earlier section of this article. I will format the partition sdb1 with the Btrfs filesystem for the demonstration.

$ sudo lsblk

To format the partition sdb1 with the Btrfs filesystem, run the following command:

$ sudo mkfs.btrfs -L data /dev/sdb1

NOTE: Here, the -L flag sets the label of the partition. In this case, the partition label is data.

The partition sdb1 should be formatted with the Btrfs filesystem.

Mounting a Btrfs Filesystem:

To mount a Btrfs filesystem, you need to create a directory (mount point) where you can mount the Btrfs filesystem.

To create a directory/mount point /data, run the following command:

$ sudo mkdir -v /data

Once the /data mount point is created, you can mount the sdb1 Btrfs filesystem on the /data mount point with the following command:

$ sudo mount /dev/sdb1 /data

The Btrfs partition sdb1 should be mounted in the /data mount point as you can see in the screenshot below.

$ df -h

Checking Btrfs Filesystem Usage Information

Checking the usage information of your Btrfs filesystem is very important. There are many ways to check the usage information of your Btrfs filesystem. Let’s see some of them.

You can use the following command to see the usage information of all the Btrfs filesystems on your computer:

$ sudo btrfs filesystem show

As you can see, the usage information of the fedora_localhost-live Btrfs filesystem (where the Fedora 33 operating system is installed) and the data Btrfs filesystem that we have created are listed.

You should find the following usage information here:

  • The label of each of the Btrfs filesystems on your computer.
  • The UUID of each of the Btrfs filesystems on your computer.
  • The total number of devices added to each of the Btrfs filesystems on your computer.
  • The disk usage information of each of the storage devices added to each of the Btrfs filesystems on your computer.

To find disk usage information about a specific Btrfs filesystem mounted on a specific directory path (/data let’s say), run the following command:

$ sudo btrfs filesystem usage /data

As you can see, a lot of disk usage information about the Btrfs partition mounted on the /data mount point is displayed.

On the top, you should find the total disk size of the Btrfs filesystem.

You should also find the amount of disk space the Btrfs filesystem has allocated (reserved for storing data) and the amount of disk space that is used from the allocated/reserved disk space.

You should also find the amount of disk space the Btrfs filesystem did not allocate (did not reserve for storing data) yet and the estimated amount of disk space (allocated and unallocated) that is still available for storing new data.

On the bottom, you should find the following information:

  • The total amount of disk space allocated for data and used for data from all the storage devices added to the Btrfs filesystem.
  • The amount of disk space allocated for data in each of the storage devices added to the Btrfs filesystem.
  • The total amount of disk space allocated and used for metadata from all the storage devices added to the Btrfs filesystem.
  • The amount of disk space allocated for metadata in each of the storage devices added to the Btrfs filesystem.
  • The total amount of disk space allocated and used for the Btrfs system data from all the storage devices added to the Btrfs filesystem.
  • The amount of disk space allocated for the Btrfs system data in each of the storage devices added to the Btrfs filesystem.
  • The amount of unallocated disk space in each of the storage devices added to the Btrfs filesystem.

On the bottom, you should also find:

  • The method (i.e., single, DUP) that is used to allocate disk space for the data, metadata, and system data.

Here:

  • For single-mode allocation, the Btrfs filesystem will keep only one instance of the allocation. There won’t be any duplicates.
  • For DUP mode allocation, the Btrfs filesystem will allocate the disk space in different parts of the filesystem for the same purpose. So, multiple copies (usually two) of the same data will be kept on the filesystem.
  • Usually, the data is allocated in a single mode. The metadata and the system data are allocated in DUP mode.
  • In single mode, notice that the Btrfs filesystem can use all the allocated disk space.
  • In DUP mode, notice that the Btrfs filesystem can use half the disk space from the total allocated disk space.
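
If you want to choose these allocation methods yourself, mkfs.btrfs lets you set the data and metadata profiles when you create the filesystem. The command below is only a sketch (re-creating the filesystem erases it, so don't run this on a partition you are already using); -d sets the data profile and -m sets the metadata profile:

$ sudo mkfs.btrfs -L data -d single -m dup /dev/sdb1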

To see the summary of the disk space allocated and used for the data, metadata, and system of a Btrfs filesystem mounted in the /data directory, run the following command:

$ sudo btrfs filesystem df /data

You can also list the disk usage information of each of the files and directories of the Btrfs filesystem mounted on the /data directory as follows:

$ sudo btrfs filesystem du /data

In the end, the disk usage summary of all the files and directories of the /data btrfs filesystem should be displayed.

To only see the disk usage summary of the files and directories of the Btrfs filesystem mounted on the /data directory, run the following command:

$ sudo btrfs filesystem du -s /data

Adding More Storage Devices to a Btrfs Filesystem

If you need more disk space on your Btrfs filesystem, you can add more storage devices or partitions to the Btrfs filesystem to expand the disk space of the filesystem.

For example, to add the partition sdb2 on the Btrfs filesystem mounted on the /data directory, run the following command:

$ sudo btrfs device add /dev/sdb2 /data

As you can see, the new partition sdb2 is added to the Btrfs filesystem mounted on the /data directory.

$ sudo btrfs device usage /data

As you can see, the size of the Btrfs filesystem mounted on the /data directory has increased.

$ df -h
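
The data that was already on the filesystem still lives on the original device. If you want Btrfs to redistribute the existing data and metadata across all the devices, you can optionally run a balance operation (it can take a while on a large filesystem):

$ sudo btrfs balance start /data
$ sudo btrfs balance status /data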

Mounting a Btrfs Filesystem at Boot Time:

Once you have set up a Btrfs filesystem, you probably don’t want to mount it manually every time you boot your computer; instead, you want it to be mounted automatically. Let’s see how to do that.

First, find the UUID of the Btrfs filesystem mounted on the /data directory as follows:

$ sudo btrfs filesystem show /data

In my case, the UUID of the Btrfs filesystem is 7732d03-b934-4826-9e8f-d7de4971fb15.

It will be different for you. So, make sure to replace it with yours from now on.

Open the /etc/fstab file with the nano text editor as follows:

$ sudo nano /etc/fstab

At the end of the /etc/fstab file, type in the following line.

UUID=7732d03-b934-4826-9e8f-d7de4971fb15        /data   btrfs   defaults    0 0

Once you’re done, press <Ctrl> + X, followed by Y, and <Enter> to save the /etc/fstab file.

For the changes to take effect, reboot your computer with the following command:

$ sudo reboot

Once your computer boots, you should see that the Btrfs filesystem is correctly mounted in the /data directory at boot time, as you can see in the screenshot below.

$ df -h

Conclusion

In this article, I have shown you how to install and use the Btrfs filesystem on Fedora 33. This article should help you get started with the Btrfs filesystem on Fedora 33.

]]>
How to Enable Btrfs Filesystem Compression https://linuxhint.com/enable-btrfs-filesystem-compression/ Mon, 28 Dec 2020 13:44:20 +0000 https://linuxhint.com/?p=83431 The Btrfs filesystem supports filesystem-level data compression. It means that the filesystem data will be compressed automatically as new data is written to the filesystem. When you access the files stored in your Btrfs filesystem, those files’ data will be automatically decompressed.

This feature of the filesystem will save you a lot of disk space, as well as the time you would otherwise spend compressing your files manually.

In this article, I am going to show you how to enable the Btrfs filesystem-level compression on a Btrfs filesystem. So, let’s get started.

Prerequisites:

To try out the examples of this article,

  • You must have the Btrfs filesystem installed on your computer.
  • You need to have a hard disk or SSD with at least 1 free partition (of any size).

I have a 20 GB hard disk sdb on my Ubuntu machine. I have created 2 partitions sdb1 and sdb2 on this hard disk. I will use the partition sdb1 in this article.

$ sudo lsblk -e7

Your hard disk or SSD may have a different name than mine, and so will the partitions, so make sure to replace them with yours from now on.

If you need any assistance installing the Btrfs filesystem on Ubuntu, check my article Install and Use Btrfs on Ubuntu 20.04 LTS.

If you need any assistance installing the Btrfs filesystem on Fedora, check my article Install and Use Btrfs on Fedora 33.

Btrfs Compression Algorithms:

At the time of this writing, the Btrfs filesystem supports the following compression algorithms:

i) LZO: LZO is a lossless, real-time block compression algorithm. LZO divides the data into blocks and compresses/decompresses the data block by block in real time. It favors compression and decompression speed over compression ratio.

ii) ZLIB: ZLIB is a library used for data compression. It uses the DEFLATE data compression algorithm. The DEFLATE data compression algorithm is a combination of the LZ77 and Huffman coding algorithms. The Btrfs filesystem supports the ZLIB data compression algorithm.

You can also specify the level of compression you want. The level can be any number from 1 to 9. A higher level indicates a higher compression ratio. So, level 9 will save more disk space than level 1 (level 9 has a higher compression ratio than level 1). Unless you specify a ZLIB level of compression to use, the Btrfs filesystem will use the ZLIB compression level 3 by default.

iii) ZSTD: ZSTD or Zstandard is a high-performance lossless data compression algorithm. It was developed at Facebook by Yann Collet. Its compression ratio is comparable to the DEFLATE algorithm that is used in ZLIB, but it’s faster. The Btrfs filesystem supports the ZSTD data compression algorithm.

You can also specify the level of compression you want. The level can be any number from 1 to 15. A higher level indicates a higher compression ratio. So, level 15 will save more disk space than level 1 (level 15 has a higher compression ratio than level 1). Unless you specify a ZSTD level of compression to use, the Btrfs filesystem will use the ZSTD compression level 3 by default.

Creating a Btrfs Filesystem:

To experiment with Btrfs filesystem-level data compression, you need to create a Btrfs filesystem.

To create a Btrfs filesystem with the label data on the sdb1 partition, run the following command:

$ sudo mkfs.btrfs -L data /dev/sdb1

Mount a Btrfs Filesystem with Compression Enabled:

To enable Btrfs filesystem-level compression, you have to mount the Btrfs filesystem you have created on the sdb1 partition with either the compress or compress-force mount option.

i) compress mount option: The compress mount option simply enables Btrfs filesystem-level compression. The Btrfs filesystem checks whether compressing a file would make it larger than the original (uncompressed) file; if compression would make the file bigger, the Btrfs filesystem does not compress that file.

ii) compress-force mount option: Unlike the compress mount option, if the Btrfs filesystem is mounted using the compress-force mount option, then every file on the Btrfs filesystem will be compressed even when compression makes the file bigger.

Create a directory /data with the following command:

$ sudo mkdir -v /data

To enable LZO compression, mount the Btrfs filesystem that you’ve created earlier in the /data directory with the following command:

$ sudo mount -o compress=lzo /dev/sdb1 /data

To enable force LZO compression, mount the Btrfs filesystem that you’ve created earlier in the /data directory as follows:

$ sudo mount -o compress-force=lzo /dev/sdb1 /data

In the same way, you can mount the Btrfs filesystem in the /data directory as follows to enable ZLIB compression:

$ sudo mount -o compress=zlib /dev/sdb1 /data

To set a ZLIB compression level (let’s say, level 7), you can mount the Btrfs filesystem in the /data directory as follows:

$ sudo mount -o compress=zlib:7 /dev/sdb1 /data

To enable ZSTD compression, mount the Btrfs filesystem in the /data directory as follows:

$ sudo mount -o compress=zstd /dev/sdb1 /data

To set a ZSTD compression level (let’s say, level 10), you can mount the Btrfs filesystem in the /data directory as follows:

$ sudo mount -o compress=zstd:10 /dev/sdb1 /data

The Btrfs filesystem that you’ve created on the sdb1 partition should be mounted in the /data directory as you can see in the screenshot below.

$ df -h /data

Testing Btrfs Filesystem Compression:

To test whether the Btrfs filesystem compresses the files that are on the Btrfs filesystem, I will mount the Btrfs filesystem on the /data directory with the compress-force option. I will use the highest compression level of the ZSTD compression algorithm for the demonstration.

First, unmount the Btrfs filesystem that you may have mounted on the /data directory as follows:

$ sudo umount /data

Mount the Btrfs filesystem with the highest compression level (level 15) of the ZSTD compression algorithm in the /data directory as follows:

$ sudo mount -o compress-force=zstd:15 /dev/sdb1 /data

I have copied about 717 MB of data to the Btrfs filesystem mounted on the /data directory. As you can see, only 661 MB of disk space is used on the Btrfs filesystem even though the data stored in the filesystem is 717 MB in size. So, the Btrfs filesystem-level compression is working.
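
If you want a more detailed report of how well your data compressed, the third-party compsize tool can show the compressed and uncompressed sizes per compression type. Assuming the package is named btrfs-compsize on Ubuntu and compsize on Fedora, you could install it and run it on the mount point:

$ sudo apt install btrfs-compsize
$ sudo dnf install compsize
$ sudo compsize /data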

Mounting a Compression Enabled Btrfs Filesystem on Boot:

If you want to mount the Btrfs filesystem automatically at boot time with compression enabled (which you most likely do), then this section is for you.

First, find the UUID of the Btrfs filesystem which you want to enable compression and mount automatically at boot time as follows:

$ sudo btrfs filesystem show /data

As you can see, the UUID of the Btrfs filesystem is a8e75a9d-a6f6-4c6e-be41-c10bc1077aa2 in my case. It will be different for you. So, make sure to replace it with yours from now on.

Open the /etc/fstab file with the nano text editor as follows:

$ sudo nano /etc/fstab

Type in the following line at the end of the /etc/fstab file:

UUID=a8e75a9d-a6f6-4c6e-be41-c10bc1077aa2 /data btrfs compress=lzo 0 0

NOTE: Replace the UUID of the Btrfs filesystem with yours. Also, change the mount option and compression algorithm as you like.

Once you’re done, press <Ctrl> + X followed by Y and <Enter> to save the /etc/fstab file.

For the changes to take effect, reboot your computer with the following command:

$ sudo reboot

Once your computer boots, the Btrfs filesystem should be mounted in the /data directory as you can see in the screenshot below.

$ df -h /data

Conclusion:

In this article, I have discussed the compression algorithms supported by the Btrfs filesystem: LZO, ZLIB, and ZSTD. I have also shown you how to enable filesystem-level compression in a Btrfs filesystem.

]]>
Resize a Btrfs Filesystem https://linuxhint.com/resize_a_btrfs_filesystem/ Sun, 27 Dec 2020 17:02:27 +0000 https://linuxhint.com/?p=83067

The Btrfs filesystem can be resized online (when the filesystem is mounted), but if you want to resize a partition that is added to a Btrfs filesystem, you will have to do it offline (when the filesystem is not mounted). You can grow/expand or shrink a Btrfs filesystem online and grow/expand or shrink a Btrfs partition offline.

The Btrfs filesystem is a multi-device filesystem. If you have multiple devices added to your Btrfs filesystem, then you need to resize specific storage devices attached to the filesystem to resize the filesystem itself. Otherwise, you can directly resize the filesystem (as by default, the only attached storage device will be selected when you perform the resize operation).

In this article, I am going to show you how to grow/expand and shrink a Btrfs filesystem online and a Btrfs partition offline. I will also show how to resize a Btrfs filesystem that has multiple devices attached to it. So, let’s get started.

Prerequisites

To try out the examples of this article:

  • You must have the Btrfs filesystem installed on your computer.
  • You need to have a hard disk or SSD with at least 2 free partitions (of any size).

I have a 20 GB hard disk sdb on my Ubuntu machine. I have created 2 partitions, sdb1 and sdb2, on this hard disk. The partitions sdb1 and sdb2 are each 10 GB in size.

$ sudo lsblk -e7


Your hard disk or SSD may have a different name than mine, and so will the partitions, so make sure to replace them with yours from now on.

If you need any assistance installing the Btrfs filesystem on Ubuntu, check my article Install and Use Btrfs on Ubuntu 20.04 LTS.

If you need any assistance installing the Btrfs filesystem on Fedora, check my article Install and Use Btrfs on Fedora 33.

Creating a Btrfs Filesystem

To experiment with resizing a Btrfs filesystem, we need to create a Btrfs filesystem. So, let’s create a Btrfs filesystem data on the partition sdb1.

To create a Btrfs filesystem with the label data on the sdb1 partition, run the following command:

$ sudo mkfs.btrfs -L data /dev/sdb1

Create a directory /data with the following command:

$ sudo mkdir -v /data


Mount the Btrfs partition sdb1 (that you have created earlier) on the /data directory with the following command:

$ sudo mount /dev/sdb1 /data


As you can see, the Btrfs filesystem data mounted on the /data directory has only one storage device (sdb1) added to it, and the ID of the storage device is 1.

$ sudo btrfs device usage /data


The size of the filesystem is 10 GB (Device size). Out of 10 GB of disk space, 9.48 GB is not used (Unallocated), 8 MB is allocated for storing data (Data, single), 512 MB is allocated for the filesystem metadata (Metadata, DUP), and 16 MB is allocated for system data (System, Dup).

The entire disk space of the partition sdb1 is in the Btrfs filesystem pool (can be used). So, 0 bytes are outside of the filesystem pool (Device slack).


The Btrfs filesystem mounted on the /data directory is 10 GB in size.

$ df -h /data

Resize a Btrfs Filesystem

You can resize the Btrfs filesystem data that you have created earlier and mounted it on the /data directory online (when it’s mounted).

For example, to shrink the Btrfs filesystem mounted on the /data directory, let’s say, by 1 GB, run the following command:

$ sudo btrfs filesystem resize -1G /data

As shown in the illustration, the Btrfs filesystem removed 1 GB of disk space from the filesystem pool. You can use the slack space (Device slack) to grow/expand the Btrfs filesystem later.

$ sudo btrfs device usage /data


Based on the image below, you can see that the Btrfs filesystem mounted on the /data directory is 9 GB in size. It was previously 10 GB.

$ df -h /data


To grow/expand the Btrfs filesystem mounted on the /data directory, let’s say, by 256 MB, run the following command:

$ sudo btrfs filesystem resize +256M /data


You can see from the picture below that 256 MB of disk space is removed from the Device slack and added to the Btrfs filesystem pool.

$ sudo btrfs device usage /data


As you can see, the Btrfs filesystem mounted on the /data directory is now 256 MB larger than before.

$ df -h /data


To grow/expand the Btrfs filesystem mounted on the /data directory to the maximum available disk space (in Device slack), run the following command:

$ sudo btrfs filesystem resize max /data


The illustration below shows that all the available disk space from the Device slack is added to the Btrfs filesystem pool. So, the Device slack is now 0 bytes in size.

$ sudo btrfs device usage /data


The Btrfs filesystem mounted on the /data directory is now 10 GB in size.

$ df -h /data

Resize a Btrfs Partition

You can resize a partition that is added to a Btrfs filesystem offline (when the Btrfs filesystem is not mounted).

WARNING: Be careful when you resize a partition that is added to a Btrfs filesystem as you may lose important data from the partition. Always take a backup before resizing.
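
One simple precaution (optional, but cheap) is to save a copy of the current partition table before changing it, so you can restore the exact layout if something goes wrong. For example, sfdisk can dump the partition table of the sdb disk to a file:

$ sudo sfdisk -d /dev/sdb > sdb-partition-table.backup

If you ever need to roll back, you can restore that layout with sudo sfdisk /dev/sdb < sdb-partition-table.backup. Note that this only backs up the partition table, not the data on the partitions.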

As you can see, the Btrfs filesystem we have created in this article has one disk partition (sdb1) added to it. The partition is 10 GB in size.

$ sudo btrfs device usage /data


As shown in the image below, the size of the partition sdb1 is 10 GB.

$ df -h /data


Before you resize the partition, unmount the Btrfs filesystem from the /data directory with the following command:

$ sudo umount /data


The name of the disk that contains the partition sdb1 is sdb.

$ sudo lsblk -e7


Open the disk sdb with a disk partitioning program like fdisk as follows:

$ sudo fdisk /dev/sdb

Type in p and press <Enter> to list all the existing partitions of the storage device sdb.

As you can see below, I have two partitions, sdb1 and sdb2, in the disk sdb. Let’s resize the first partition (sdb1).


To resize a partition, you have to remove the partition, then add it again. So, you have to remember the start sector number of the partition.

For example, the start sector number of the first partition, sdb1, is 2048, as you can see in the screenshot below.


To remove a partition, type in d and press <Enter>.


To remove the first partition (sdb1), type in 1, and press <Enter>. The partition sdb1 should be removed.


To recreate the same partition, type in n and press <Enter>.


Type in 1 as the partition number and press <Enter>.


Type in 2048 as the first sector number and press <Enter>.


I want to demonstrate the process of shrinking the partition. So, I am going to create a smaller partition than before.

Type in +9G (to create a 9 GB partition) and press <Enter>.


We would want to keep the partition signature, so type in N and press <Enter>.


The partition should be created.


To save the changes, type in w and press <Enter>.


Now, mount the Btrfs filesystem on the /data directory as follows:

$ sudo mount /dev/sdb1 /data


Resize the Btrfs filesystem that is mounted on the /data directory for the changes to take effect.

$ sudo btrfs filesystem resize max /data


You can see from the image below that the size of the sdb1 partition that is added to the Btrfs filesystem is reduced to 9 GB (from 10 GB).


You can confirm the partition size change with the df command as well.

$ df -h /data


We can grow/expand a partition that is added to the Btrfs filesystem the same way. Let’s see how to do that.

Unmount the Btrfs filesystem that is mounted on the /data directory as follows:

$ sudo umount /data


Open the disk sdb with a disk partitioning program like fdisk as follows:

$ sudo fdisk /dev/sdb


Now, the first partition sdb1 is 9 GB in size.


The start sector number of the first partition, sdb1, is 2048.


To remove the first partition, type in d and press <Enter>.


Type in 1 as the partition number and press <Enter>. The first partition sdb1 should be removed.


To recreate the first partition sdb1, type in n and press <Enter>.


Type in 1 as the partition number and press <Enter>.


Type in 2048 as the first sector number and press <Enter>.


I will increase the partition size by 500 MB. So, the new partition size should be 9.5 GB.

Type in +9.5G and press <Enter>.


As we would want to keep the filesystem signature, let’s type in N and press <Enter>.


The first partition, sdb1, should be recreated, and its size increased.


To save the changes, type in w and press <Enter>.


Mount the Btrfs partition sdb1 to the /data directory as follows:

$ sudo mount /dev/sdb1 /data


Resize the Btrfs filesystem that is mounted on the /data directory for the changes to take effect.

$ sudo btrfs filesystem resize max /data


As you can see, the partition (sdb1) size has increased to 9.5 GB (from 9 GB).

$ sudo btrfs device usage /data


You can confirm the partition size with the df command as well.

$ df -h /data

Resize a Multi-device Btrfs Filesystem

Btrfs is a multi-device filesystem. It means you can add multiple storage devices or partitions to a single Btrfs filesystem. In this section, I am going to show you how to resize a Btrfs filesystem that has multiple storage devices or partitions added to it. So, let’s get started.

Right now, the Btrfs filesystem that is mounted on the /data directory is 10 GB in size.

$ df -h /data


The partition sdb1 (which has the ID 1) is the only partition added to the Btrfs filesystem.

$ sudo btrfs device usage /data


You can add another partition (let’s say, sdb2) to the Btrfs filesystem, which is mounted on the /data directory with the following command:

$ sudo btrfs device add /dev/sdb2 /data


The newly added partition, sdb2, of the Btrfs filesystem, which is mounted on the /data directory has the ID 2, as you can see in the screenshot below.

$ sudo btrfs device usage /data


As you can see, the size of the Btrfs filesystem, which is mounted on the /data partition, has increased. The disk space of the sdb2 partition is added to the Btrfs filesystem.

$ df -h /data


To resize a Btrfs filesystem that has multiple storage devices added to it, you have to specify which partition of the Btrfs filesystem you want to resize. To specify the partition to resize in a Btrfs filesystem, you have to use the partition ID.

$ sudo btrfs device usage /data


For example, to shrink the partition with ID 1 of the Btrfs filesystem mounted on the /data directory by 2 GB, you can run the following command:

$ sudo btrfs filesystem resize 1:-2G /data


The 2 GB of disk space is removed from the partition sdb1 of the Btrfs filesystem mounted on the /data directory.

$ sudo btrfs device usage /data


As you can see in the illustration, the Btrfs filesystem is resized (shrank) to 18 GB from 20 GB.

$ df -h /data


In the same way, you can shrink the Btrfs filesystem partition sdb2 using the partition ID 2.

$ sudo btrfs device usage /data


To shrink the partition with ID 2 of the Btrfs filesystem mounted on the /data directory by 1 GB, you can run the following command:

$ sudo btrfs filesystem resize 2:-1G /data


You can see that 1 GB of disk space is removed from the partition sdb2 of the Btrfs filesystem mounted on the /data directory.

$ sudo btrfs device usage /data


The Btrfs filesystem is resized (shrank) to 17 GB from 18 GB, as shown in the image below.

$ df -h /data


To expand the partition with ID 1 of the Btrfs filesystem mounted on the /data directory by 1 GB, you can run the following command:

$ sudo btrfs filesystem resize 1:+1G /data


As you can see, 1 GB of disk space from the partition sdb1 is added to the Btrfs filesystem pool.

$ sudo btrfs device usage /data


Now, the Btrfs filesystem is resized (expanded) to 18 GB from 17 GB.

$ df -h /data


To expand the partition with ID 2 of the Btrfs filesystem mounted on the /data directory by 1 GB, you can run the following command:

$ sudo btrfs filesystem resize 2:+1G /data


You can see that 1 GB of disk space from the partition sdb2 is added to the Btrfs filesystem pool.

$ sudo btrfs device usage /data


The Btrfs filesystem is now resized (expanded) to 19 GB from 18 GB.

$ df -h /data

Conclusion

In this article, I have shown you how to resize a Btrfs filesystem and the partitions added to a Btrfs filesystem, as well as how to shrink and grow/expand them.

]]>
The Comparison of Btrfs vs Ext4 Filesystems https://linuxhint.com/btrfs-vs-ext4-filesystems-comparison/ Fri, 25 Dec 2020 06:47:02 +0000 https://linuxhint.com/?p=82107 There are many filesystems out there for Linux. The most common ones are Ext4, Btrfs, XFS, ZFS, and so on. Each of the filesystems has its use cases, pros, and cons. You may have a hard time deciding which filesystem to use.

In this article, I will compare the Ext4 and the Btrfs filesystems. So, if you’re having a hard time deciding whether to use the Ext4 filesystem or the Btrfs filesystem, this article should help you decide.

Introduction to the Ext4 and the Btrfs Filesystems:

Ext4 Filesystem: Ext4 is the fourth version of the Ext (Extended) filesystem. It is a successor to the Ext3 filesystem. The first version of the Ext filesystem was released in 1992 for the Linux operating system to overcome the limitations of the Minix filesystem. The Ext4 filesystem was released in 2008. Ext4 is a journaled filesystem.

Btrfs Filesystem: Btrfs or the B-Tree filesystem is a modern Copy-on-Write (CoW) filesystem. It is new compared to the Ext filesystem. It was designed for the Linux operating systems at Oracle Corporation in 2007. In November 2013, the Btrfs filesystem was declared stable for the Linux kernel.

Feature Comparisons of the Ext4 and Btrfs Filesystems:

The Ext4 and Btrfs filesystems were designed to solve different types of problems, so the design goals of the Ext4 filesystem differ from those of the Btrfs filesystem. Still, they are both filesystems and have enough in common that we can compare them.

i. Maximum Partition Size: The Ext4 filesystem supports partition sizes up to 1 EiB.

The Btrfs filesystem supports partition sizes up to 16 EiB.

ii. Maximum File Size: The Ext4 filesystem supports file sizes up to 16 TiB (for standard 4 KiB block size).

The Btrfs filesystem supports file sizes up to 16 EiB.

iii. Maximum Filename Length: The Ext4 filesystem supports up to 255 characters (255 bytes) long file names.

The Btrfs filesystem also supports up to 255 characters (255 bytes) long file names.

iv. Allowed Characters in Directory and Filenames: The Ext4 filesystem allows any characters except the / and NULL (\0) characters in directory and file names. The Btrfs filesystem has the same restriction.

NOTE: You can’t create a file or directory with the name . or .. in either the Ext4 or the Btrfs filesystem.

v. Maximum Path Length: The Ext4 filesystem does not have any limits on the length of the path of a file or directory. So, you can create very, very deep directory structures and keep your files there.

The same is true for the Btrfs filesystem.

vi. Max Number of Files: You can create at most 2^32 (= 4,294,967,296, or about 4 billion) files in an Ext4 filesystem.

You can create at most 2^64 (= 18,446,744,073,709,551,616, or about 18 quintillion) files in a Btrfs filesystem.

vii. inode Allocation Method: An inode is a filesystem data structure that is used to describe a file or a directory. So, a directory or a file requires 1 inode. 2 directories or 2 files will require 2 inodes.

In the Ext4 filesystem, you define the number of inodes the filesystem can support when you create the filesystem. You can’t change it after the filesystem is created. If you create too many small files, you may have free disk space left on your filesystem, but you won’t be able to create new files/directories unless you have free inodes. This is a big limitation of the Ext4 filesystem.

In the Btrfs filesystem, the inode allocation is flexible. The filesystem can add as many inodes as needed. So, you will never run out of inodes.
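
You can see this difference with the df -i command, which reports inode usage for a mount point (the path / below is only an example). On an Ext4 filesystem, df -i shows a fixed inode total chosen at creation time; on a Btrfs filesystem, the total is typically reported as 0 because inodes are created on demand:

$ df -i /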

viii. Checksum/ECC Support: The Ext4 filesystem does not keep checksums of the data stored on the filesystem.

The Btrfs filesystem keeps crc32c checksum of the data stored on the filesystem. So, in case of any data corruption, the Btrfs filesystem can detect it and recover the corrupted file.
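
In practice, you ask Btrfs to verify those checksums with a scrub, which reads all the data and metadata and reports checksum errors (and repairs them where a redundant copy exists, such as DUP metadata or RAID1 data). Assuming a Btrfs filesystem mounted on /data, the commands would look like this:

$ sudo btrfs scrub start /data
$ sudo btrfs scrub status /data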

ix. Journal and Copy-on-Write Support: The Ext4 filesystem is a journaling filesystem. It does not have any Copy-on-Write (CoW) support.

The Btrfs filesystem is a Copy-on-Write (CoW) filesystem, and it does not have any journal support.

x. Filesystem Snapshot: The Ext4 filesystem can’t take snapshots of the filesystem.

The Btrfs filesystem can take snapshots. You can take read-only snapshots and writable snapshots.

NOTE: Filesystem snapshot is an important feature. Using this feature, you can take a snapshot of your filesystem before trying out anything risky. If things do not go as planned, you can go back to an early state where everything worked. This is a built-in feature of the Btrfs filesystem. You don’t need any 3rd-party tools/software to do that on a Btrfs filesystem.

xi. Filesystem-level Encryption: The Ext4 filesystem has experimental support for filesystem-level encryption.

The Btrfs filesystem does not have any support for filesystem-level encryption.

xii. Filesystem-level Deduplication: The Ext4 filesystem does not have deduplication support.

The Btrfs filesystem supports deduplication on the filesystem-level. You don’t need any 3rd-party tools/software for that.

NOTE: Deduplication is a technique that eliminates duplicate copies of data from the filesystem and keeps only one copy (the unique data) on the filesystem. This technique is used to save disk space.

xiii. Multiple Devices Support: The Btrfs filesystem supports multiple devices and has built-in RAID support. The Btrfs filesystem has a built-in logical volume manager (LVM) that is used to add multiple storage devices or partitions to a single Btrfs filesystem. A single Btrfs filesystem can span over multiple disks and partitions.

The Ext4 filesystem does not support multiple devices. You can’t span a single Ext4 filesystem over multiple disks or partitions. To combine multiple storage devices and partitions in an Ext4 filesystem, you have to use 3rd-party logical volume managers like LVM 2. To set up RAID, you have to use 3rd-party tools like DM-RAID or MDADM.
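
As an illustration of the built-in multi-device and RAID support, a single mkfs.btrfs command can create a mirrored (RAID1) Btrfs filesystem spanning two devices; the device names below are only placeholders:

$ sudo mkfs.btrfs -L data -d raid1 -m raid1 /dev/sdb1 /dev/sdb2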

xiv. Filesystem-level Compression: The Ext4 filesystem does not have built-in filesystem-level compression support.

The Btrfs filesystem has built-in filesystem-level compression support. It can compress a single directory or a single file or the entire filesystem to save disk space.
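
For example, you can enable compression for just one directory by setting the compression property on it; newly written files in that directory are then compressed, and existing files can be recompressed with a defragment pass. The path /data/projects below is only a hypothetical example:

$ sudo btrfs property set /data/projects compression zstd
$ sudo btrfs filesystem defragment -r -czstd /data/projects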

xv. Offline Filesystem Resize Capabilities: The Ext4 filesystem has support for offline filesystem growing (increase filesystem size) and shrinking (decrease filesystem size).

The Btrfs filesystem also supports offline filesystem growing and shrinking.

xvi. Online Filesystem Resize Capabilities: The Ext4 filesystem has support for online growing (increase filesystem size when mounted). But it has no support for online filesystem shrinking (decrease filesystem size when mounted).

You can grow (increase filesystem size) and shrink (decrease filesystem size) Btrfs filesystems online (when mounted).

xvii. Sparse files: The sparse file feature saves disk space when a file contains long runs of zero bytes (holes); the filesystem does not allocate blocks for the holes. Both the Ext4 and the Btrfs filesystems support sparse files.
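
You can see the effect of a sparse file yourself by creating a file with a large hole and comparing its apparent size with the disk space it actually uses. truncate, ls, and du are standard tools; the file name sparse.img is just an example:

$ truncate -s 1G sparse.img
$ ls -lh sparse.img
$ du -h sparse.img

ls reports the apparent size (1 GB), while du reports the blocks actually allocated, which is close to zero because the hole is never written to disk.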

xviii. Block sub-allocation: The Ext4 filesystem does not support block sub-allocation.

The Btrfs filesystem supports block sub-allocation.

NOTE: When a filesystem stores a large file, the file is broken into blocks, and the blocks are stored on the filesystem. The last block of the file usually does not fill the entire block; this last block is called the tail block. In the same way, when a lot of small files are stored, they do not fill their entire blocks, so a lot of disk space is wasted. Block sub-allocation is a method of storing parts of another file in the tail block (the last block of a file that did not fill the entire block) to save disk space.

xix. Tail packing: The Ext4 filesystem does not support tail packing.

The Btrfs filesystem supports tail packing.

NOTE: Tail packing is a part of block sub-allocation. As I have already discussed, small files do not occupy an entire file block. So, to efficiently store small files (e.g., program source code files) in the filesystem, the tail block of a small file is used to store other small files. Tail packing improves filesystem performance and saves a lot of disk space in a filesystem where lots of small files are stored.

xx. Extent-based Filesystem: Both the Ext4 and the Btrfs filesystems are extent-based filesystems.

NOTE: An extent is a contiguous area of the storage device which is reserved for a file in a filesystem. Extent-based filesystems store large files in a contiguous storage area. This improves filesystem performance and increases storage efficiency.

xxi. Variable file block size: The Ext4 filesystem supports fixed block size. The block size is set before the filesystem is created. Once the filesystem is created, you can’t change the block size.

The Btrfs filesystem supports variable block size. The filesystem can determine the best possible block size to store a file on the filesystem based on the size of the file. This feature can save a lot of disk space.

xxii. Allocate-on-flush: Both the Ext4 and the Btrfs filesystems support allocate-on-flush.

NOTE: The filesystem allocates some buffer space in the memory of the computer. When there are disk write requests, the filesystem does not write the data blocks directly on the storage device. Instead, the filesystem stores the data blocks in the buffer memory. When the buffer memory is full, the filesystem writes all the pending data blocks to the storage device at once. This reduces CPU usage, speeds up disk writes and reduces disk fragmentation.

xxiii. TRIM support: Both the Ext4 and the Btrfs filesystems support TRIM. It is a very important feature for SSD storage devices.

NOTE: When you remove a file from an SSD, the TRIM command notifies the SSD storage device of the pages (file blocks) that are no longer needed. The SSD erases the unnecessary pages (file blocks) from the flash storage and prepares the pages (file blocks) for storing new data. Without TRIM support, the SSD write speed would get slower as the SSD is filled with new data.

Advantages of Ext4 over Btrfs:

The Ext4 filesystem is a very old filesystem. It has been used on the Linux operating system for a very long time. Because of that, the Ext4 filesystem is very stable. The Ext4 filesystem is still the default filesystem in many popular Linux distributions (e.g., Ubuntu and Debian). If you just need to store some data as an ordinary Linux user, you can keep your eyes closed and use the Ext4 filesystem. The Ext4 filesystem has journaling support, so your files should be safe even when there’s a power failure. It’s a good filesystem for everyday use.

Advantages of Btrfs over Ext4:

The Btrfs filesystem is a modern Copy-on-Write (CoW) filesystem that was designed for high-capacity and high-performance storage servers. So, it has a lot of advanced features that the Ext4 filesystem does not have. The Ext4 filesystem was designed to be a simple local filesystem.

The main features of the Btrfs filesystem that are useful to everyday Linux users are:

  1. Built-in Filesystem-level snapshots.
  2. Multiple device support.
  3. Built-in RAID support.
  4. Flexible inode allocation.
  5. Optimizations for storing smaller files (sparse files, block sub-allocation, tail packing, variable block size).
  6. Built-in filesystem-level compression support.

These are the filesystem features for which you may choose to use the Btrfs filesystem over the Ext4 filesystem.
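As a rough illustration of the multiple device, RAID, and compression features listed above (the device names and mount point are only examples, and the first command destroys any data on those devices):

$ sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc    # one Btrfs filesystem mirrored across two disks
$ sudo mount -o compress=zstd /dev/sdb /mnt              # mount it with transparent zstd compression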

Conclusion:

In this article, I have compared the Btrfs and Ext4 filesystems and their main features. This article should help you decide between the Btrfs and the Ext4 filesystem.

References:

  1. ext4 – Wikipedia – https://en.wikipedia.org/wiki/Ext4
  2. Btrfs – Wikipedia – https://en.wikipedia.org/wiki/Btrfs
  3. kernel/git/torvalds/linux.git – Linux kernel source tree – https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4204617d142c0887e45fda2562cb5c58097b918e
  4. Comparison of file systems – Wikipedia – https://en.wikipedia.org/wiki/Comparison_of_file_systems
  5. Data deduplication – Wikipedia – https://en.wikipedia.org/wiki/Data_deduplication
  6. Sparse file – Wikipedia – https://en.wikipedia.org/wiki/Sparse_file
  7. Block suballocation – Wikipedia – https://en.wikipedia.org/wiki/Block_suballocation
  8. Extent (file systems) – Wikipedia – https://en.wikipedia.org/wiki/Extent_(file_systems)
  9. Allocate-on-flush – Wikipedia – https://en.wikipedia.org/wiki/Allocate-on-flush
  10. Trim (computing) – Wikipedia – https://en.wikipedia.org/wiki/Trim_(computing)
Comparison Between Btrfs and XFS Filesystems https://linuxhint.com/comparison-between-btrfs-and-xfs-filesystems/ Thu, 24 Dec 2020 19:44:38 +0000 https://linuxhint.com/?p=82941 There are many filesystems available for use with Linux. The most common Linux filesystems include Ext4, Btrfs, XFS, and ZFS. Every filesystem has its unique use cases, pros, and cons. Due to the variety of options available, you may have a hard time deciding which filesystem to use. To help you with your choice, this article compares the XFS and the Btrfs filesystems. If you are having a hard time deciding whether to use the XFS filesystem or the Btrfs filesystem, then this article should help. Let us begin!

Introduction to XFS and Btrfs Filesystems

XFS Filesystem: XFS is a high-performance 64-bit journaling filesystem. It was originally developed by Silicon Graphics, Inc. in 1993 for the IRIX operating system and was later ported to the Linux kernel in 2001.

Btrfs Filesystem: Btrfs or the B-Tree filesystem is a modern Copy-on-Write (CoW) filesystem. It is new compared to the Ext filesystem. Btrfs was originally designed for the Linux operating system by Oracle Corporation in 2007. In November 2013, the Btrfs filesystem was declared stable in the Linux kernel.

Feature Comparison

The XFS and Btrfs filesystems were designed to solve different types of problems. Though the design goal of the XFS filesystem was different from that of the Btrfs filesystem, because they are both filesystems, we may compare them in depth.

  • Maximum Partition Size: The XFS filesystem supports partition sizes of up to 1 byte less than 8 EiB (8 EiB – 1 byte).
  • The Btrfs filesystem supports partition sizes of up to 16 EiB.
  • Maximum File Size: The XFS filesystem supports file sizes of up to 1 byte less than 8 EiB (8 EiB – 1 byte).
  • The Btrfs filesystem supports file sizes of up to 16 EiB.
  • Maximum Filename Length: The XFS filesystem supports filenames up to 255 characters (255 bytes) in length.
  • The Btrfs filesystem also supports filenames up to 255 characters (255 bytes) in length.
  • Allowed Characters in Directory and Filenames: The XFS filesystem allows any characters except the / and NULL (\0) characters in directory and file names.
  • The Btrfs filesystem also allows any characters except the / and NULL (\0) characters in directory and file names.

NOTE: You cannot create a file or directory with the name . and .. in either of the XFS or Btrfs filesystems.

  • Maximum Path Length: The XFS filesystem does not have any limits to the length of the path of a file or directory. So, you can create deep directory structures and keep your files in these structures.
  • The same is true for the Btrfs filesystem.
  • Max Number of Files: You can create a maximum of 2^64 (= 18,446,744,073,709,551,616 ~= 18 quintillion) files in an XFS filesystem.
  • The same is true for the Btrfs filesystem.
  • Inode Allocation Method: An inode is a filesystem data structure used to describe a file or a directory. So, a single directory or file requires one inode, two directories or files will require two inodes, and so on.
  • In the Ext4 filesystem, you define the number of inodes the filesystem can support when creating the filesystem. You cannot change this after the filesystem has been created. If you create too many small files, you may have free disk space left on your filesystem, but you will not be able to create new files/directories unless you have free inodes. This is a major limitation to the Ext4 filesystem.
  • Unlike the Ext4 filesystem, inode allocation is flexible in the XFS filesystem. So, the filesystem can add as many inodes as needed and you will never run out of inodes.
  • The above is also true for the Btrfs filesystem.
  • Checksum/ECC Support: The Btrfs filesystem keeps CRC32C checksums of both the data and the metadata stored in the filesystem. So, in the case of data corruption, the Btrfs filesystem can detect the corruption and, if a redundant copy is available (for example, in a RAID1 or DUP profile), repair the corrupted data or metadata.
  • The XFS filesystem only keeps the CRC32 checksum of the metadata. It does not keep a checksum of the data stored in the filesystem, unlike the Btrfs filesystem.
  • Journal and Copy-on-Write Support: The XFS filesystem is a journaling filesystem. It does not have Copy-on-Write (CoW) support.
  • The Btrfs filesystem is a Copy-on-Write (CoW) filesystem and it does not have journal support.
  • Filesystem Snapshot: The XFS filesystem cannot take snapshots of the filesystem.
  • The Btrfs filesystem can take snapshots of the filesystem. With Btrfs, you may take read-only snapshots and writable snapshots of the filesystem.

NOTE: The filesystem snapshot is an important feature. You may take a snapshot of your filesystem using this feature before attempting any risky actions. If things do not go as planned, a snapshot allows you to go back to an earlier state in which everything in the system worked. This is a built-in feature of the Btrfs filesystem. You do not need any third-party tools or software to generate a snapshot of a Btrfs filesystem.
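For example, assuming /home is a Btrfs subvolume (the paths are only examples), you can take a read-only snapshot before a risky change and list it afterwards:

$ sudo btrfs subvolume snapshot -r /home /home/snapshot-before-upgrade    # read-only snapshot of /home
$ sudo btrfs subvolume list /home                                         # list subvolumes and snapshots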

  • Filesystem-level Encryption: The Btrfs filesystem does not support filesystem-level encryption.
  • The same is true for the XFS filesystem.
  • Filesystem-level Deduplication: The Btrfs filesystem supports deduplication at the filesystem level. You do not need any third-party tools or software to use this feature.
  • The XFS filesystem also has deduplication support, but the deduplication feature of the XFS filesystem is still experimental.

NOTE: Deduplication is a technique for eliminating duplicate copies of data from the filesystem and keeping only one copy of the data (unique data) in the filesystem. This technique is used to save disk space.

  • Multiple Devices Support: The Btrfs filesystem supports multiple devices and includes built-in RAID support. The Btrfs filesystem has a built-in logical volume manager (LVM) for adding multiple storage devices or partitions to a single Btrfs filesystem. A single Btrfs filesystem can span over multiple disks and partitions.
  • The XFS filesystem does not support multiple devices, meaning that you cannot span a single XFS filesystem over multiple disks or partitions. To combine multiple storage devices and partitions in an XFS filesystem, you must use third-party logical volume managers, such as LVM 2. To set up RAID, you must use third-party tools such as dm-raid or mdadm.
  • The XFS filesystem was designed to execute I/O (input/output) operations in parallel. If you span the XFS filesystem over multiple devices using LVM 2 or a different logical volume manager, the filesystem performance will be increased.
  • Filesystem-level Compression: The XFS filesystem does not include built-in filesystem-level compression support.
  • The Btrfs filesystem includes built-in filesystem-level compression support. This feature allows you to compress a single directory, a single file, or the entire filesystem to save disk space.
  • Offline Filesystem Resize Capabilities: You cannot grow (increase filesystem size) or shrink (decrease filesystem size) an XFS filesystem while the filesystem is not mounted.
  • You can grow (increase filesystem size) or shrink (decrease filesystem size) a Btrfs filesystem while the filesystem is not mounted.
  • Online Filesystem Resize Capabilities: You can grow (increase filesystem size) an XFS filesystem while the filesystem is mounted, but you cannot shrink (decrease filesystem size) an XFS filesystem while the filesystem is mounted.
  • You can grow (increase filesystem size) or shrink (decrease filesystem size) a Btrfs filesystem while the filesystem is mounted.
  • Sparse files: The sparse file feature saves disk space for files that contain long runs of zero bytes (holes): those regions are not physically allocated on the storage device. The XFS and the Btrfs filesystems both support sparse files.
  • Block sub-allocation: The Btrfs filesystem supports block sub-allocation.
  • The XFS filesystem does not support block sub-allocation.

NOTE: When a filesystem stores large files in a filesystem, the large file is broken into blocks, and the blocks are stored in the filesystem. The last block of the file, called the tail block, does not occupy the entire block. When many small files are stored, they do not occupy the entire block, and a lot of disk space is wasted. Block sub-allocation allows you to store parts of another file block in the tail block (the last block of another file that did not occupy the entire block) to save disk space.

  • Tail packing: The Btrfs filesystem supports tail packing.
  • The XFS filesystem does not support tail packing.

NOTE: Tail packing is a part of block sub-allocation. As previously discussed, small files do not occupy an entire file block. To efficiently store small files (e.g., program source codes) in the filesystem, the tail block of a small file is used to store other small files. Tail packing improves filesystem performance and saves disk space in filesystems in which many small files (e.g., program source codes) are stored.

  • Extent-based Filesystem: Both the XFS and Btrfs filesystems are extent-based filesystems.

NOTE: An extent is a contiguous area of the storage device reserved for a file in a filesystem. Extent-based filesystems store large files in a contiguous storage area. This improves filesystem performance and increases storage efficiency.

  • Variable file block size: The XFS filesystem uses a fixed block size. The block size is set before the filesystem is created; once the filesystem is created, you cannot change the block size.
  • The Btrfs filesystem supports variable block size. The filesystem can determine the best possible block size to store a file on the filesystem based on the size of the file. This feature can save a lot of disk space.
  • Allocate-on-flush: Both the XFS and Btrfs filesystems support allocate-on-flush.

NOTE: The filesystem allocates some buffer space in the system memory. When there are disk write requests, the filesystem does not write the data blocks directly on the storage device. Instead, the filesystem stores the data blocks in the buffer memory. When the buffer memory is full, the filesystem writes all the pending data blocks to the storage device at once. This reduces CPU usage, speeds up disk writes, and reduces disk fragmentation.

  • TRIM support: Both the XFS and Btrfs filesystems support TRIM, which is a very important feature for SSD storage devices.

NOTE: When you remove a file from an SSD, the TRIM command notifies the SSD storage device of the pages (file blocks) that are no longer needed. The SSD erases the unnecessary pages (file blocks) from the flash storage and prepares the pages (file blocks) for storing new data. Without TRIM support, the SSD write speed would become progressively slower as the SSD fills with new data.

Advantages of XFS over Btrfs

XFS is a stable 64-bit journaling filesystem for high-capacity storage devices.

You may use the XFS filesystem for the following reasons:

  • Parallel I/O (Input/Output) Support

The XFS filesystem supports parallel I/O and can provide multiple data streams for files because, by design, it splits the filesystem into independent allocation groups that can be read from and written to in parallel.
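The number of allocation groups is chosen automatically when the filesystem is created, and it can also be set explicitly; in the sketch below the device name, mount point, and allocation group count are only examples:

$ sudo mkfs.xfs -d agcount=8 /dev/sdb1    # create an XFS filesystem with 8 allocation groups
$ sudo mount /dev/sdb1 /mnt
$ xfs_info /mnt                           # the agcount= field confirms the number of allocation groups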

  • Large partition support

The XFS filesystem supports partition sizes of up to 8 EiB (up to 8 EiB – 1 byte).

  • Large file support

The XFS filesystem supports file sizes of up to 8 EiB (up to 8 EiB – 1 byte).

  • Journaling Support

Journaling ensures data consistency in the filesystem in the event of a power outage or system crash. In the event of a power outage or system crash, the data stored in the journal will be recovered and applied to the filesystem.

  • Direct I/O

This is an important feature of the XFS filesystem. It is essential for applications that require high read/write speed to storage devices. Direct I/O bypasses the kernel page cache and transfers data directly between the application’s buffers and the storage device (typically using DMA, Direct Memory Access), so the full I/O bandwidth of the storage device can be utilized.
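A quick way to try direct I/O is dd with the direct flags, which makes dd open the file with O_DIRECT and bypass the page cache (the file path is only an example):

$ sudo dd if=/dev/zero of=/mnt/xfs/testfile bs=1M count=1024 oflag=direct    # write 1 GiB with O_DIRECT
$ sudo dd if=/mnt/xfs/testfile of=/dev/null bs=1M iflag=direct               # read it back with O_DIRECT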

  • Guaranteed-rate I/O

The XFS filesystem can reserve the bandwidth of the storage device for certain applications. This feature is ideal for real-time applications (e.g., video streaming).

Disadvantages of the XFS Filesystem

There are some disadvantages to the XFS filesystem.

Disadvantages of the XFS filesystem include the following:

  • No Built-in LVM Support

Compared to the Btrfs filesystem, the XFS filesystem does not have a built-in logical volume manager. So, you will have to use LVM 2 for logical volume management.
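A minimal sketch of putting an XFS filesystem on top of LVM 2, assuming two spare disks /dev/sdb and /dev/sdc (the device, volume group, and logical volume names are only examples, and the commands destroy any existing data on those disks):

$ sudo pvcreate /dev/sdb /dev/sdc                 # initialize the disks as LVM physical volumes
$ sudo vgcreate data_vg /dev/sdb /dev/sdc         # combine them into one volume group
$ sudo lvcreate -n data_lv -l 80%FREE data_vg     # create a logical volume, leaving room for snapshots
$ sudo mkfs.xfs /dev/data_vg/data_lv              # format the logical volume with XFS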

  • No Built-in RAID Support

Compared to the Btrfs filesystem, the XFS filesystem does not have built-in RAID support. So, you will have to use dm-raid or mdadm to configure RAID.
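Similarly, a software RAID 1 mirror can be built with mdadm and then formatted with XFS (the device names are only examples, and the commands destroy any existing data on those disks):

$ sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc    # build a RAID 1 mirror
$ sudo mkfs.xfs /dev/md0                                                        # format the array with XFS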

  • No Snapshot Support

The XFS filesystem does not have a filesystem snapshot feature, unlike the Btrfs filesystem.
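If you need snapshots with XFS, a common workaround is to keep the filesystem on an LVM 2 logical volume and snapshot the volume underneath it; the volume names below reuse the example names from the LVM sketch above and are only examples:

$ sudo lvcreate -s -n data_snap -L 10G /dev/data_vg/data_lv    # LVM snapshot of the volume holding the XFS filesystem
$ sudo mount -o nouuid,ro /dev/data_vg/data_snap /mnt/snap     # XFS needs the nouuid option to mount a snapshot copy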

  • Journaling Cannot Be Disabled

As with some other journaling filesystems, you cannot disable the journaling feature of the XFS filesystem. Journaling is not good for USB flash drives: if you use the XFS filesystem on a USB flash drive, the lifetime of the drive will be reduced due to the journaling overhead.

Advantages of Btrfs over XFS

The Btrfs filesystem is a modern Copy-on-Write (CoW) filesystem designed for high-capacity and high-performance storage servers. XFS is also a high-performance 64-bit journaling filesystem that is also capable of parallel I/O operations. The XFS filesystem contains many important features, including Direct I/O, Guaranteed-rate I/O, and more. Compared to the XFS filesystem, however, the Btrfs filesystem has many advantages.

The advantages of the Btrfs filesystem over the XFS filesystem include the following:

i) Built-in Filesystem-level snapshots.

ii) Multiple device support.

iii) Built-in RAID support.

iv) Flexible inode allocation.

v) Optimizations for storing smaller files (sparse files, block sub-allocation, tail packing, variable block size).

vi) Built-in filesystem-level compression support.

These are the filesystem features that may cause you to choose the Btrfs filesystem over the XFS filesystem.

Conclusion

This article compared the Btrfs and XFS filesystems, including a comparison of the most important features of each filesystem. This article should help you to decide between the Btrfs and XFS filesystems. Choose whichever system works best for you, according to your unique needs and preferences.

References:

  1. XFS – Wikipedia – https://en.wikipedia.org/wiki/XFS
  2. Comparison of file systems – Wikipedia – https://en.wikipedia.org/wiki/Comparison_of_file_systems
  3. XFS – ArchWiki – https://wiki.archlinux.org/index.php/XFS