Review of Synology Active Backup

Synology Active Backup is a tool for backing up physical machines, virtual machines, and remote servers. This article will review Synology Active Backup and relevant tools.

Included in this evaluation of Synology’s new DS920+ NAS are some of the Active Backup tools in DiskStation Manager (DSM), Synology’s operating system, which is accessed through a web browser.

Windows VM Backups

I have been a daily Linux user for many years, and over time my Windows virtual machine (VM) has become my main interface with the Windows operating system for the times I require it. I use the VM, which runs on VMware Workstation Player, to evaluate Windows software, to use tools that do not run well on Linux, and to see what’s new with Windows.

Virtual machine files contain all the virtualized storage a virtual machine runs on, and are therefore quite large. These files can, of course, be included in full system backups (which I described previously), but unless you are running disk cloning jobs with Clonezilla, it often makes more sense to separate VM backups from the backups taken to protect the host systems they run on.

There are several options for achieving this. For one, you can simply use Windows’ native backup and restore functionality. Alternatively, you can open a file browser, navigate to the folder containing the VM, and copy the images it contains onto your backup target. Or you can use Synology Active Backup to pipe an image of the machine onto a NAS.
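If you go the manual-copy route from a Linux host, a minimal sketch might look like the following. The VM directory and the NAS mount point are assumptions; adjust both to match your setup, and shut the VM down first so the disk images are in a consistent state.

```bash
#!/usr/bin/env bash
# Hedged sketch: copy a VMware Workstation Player VM folder to a NAS share
# that is already mounted locally. Both paths are placeholders.
VM_DIR="$HOME/vmware/Windows10"      # assumed location of the VM files
NAS_MOUNT="/mnt/nas/vm-backups"      # assumed NFS/SMB mount of the NAS share

rsync -avh --progress "$VM_DIR/" "$NAS_MOUNT/Windows10/"
```

The trailing slash on the source tells rsync to copy the folder’s contents rather than nesting an extra directory level on the target.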

In contrast to Linux backup jobs, getting the Synology option up and running is plain sailing. Opening up DSM from a browser on the Windows VM, I simply downloaded and installed the backup agent for 64-bit machines.

I then told the program where to find the NAS on my LAN.

I hit “Backup,” and the process essentially ran itself. About fifteen minutes later, I had a nice backup image visible in the Active Backup folder on the NAS.

G Suite

The second feature I wanted to try out in Active Backup was backing up my G Suite account. Synology offers a separate Active Backup product, Active Backup for G Suite, for this purpose.

Getting the G Suite Active Backup tool set up took some legwork. Synology’s documentation lists 15 different steps for the process. I spent some time in the Google Developer Console making sure that the user account under which the NAS would run its backup job had the required permissions.

After completing the setup process, I had a direct backup from my G Suite onto my NAS. I could choose to configure the backup to run manually, to pull down my data continuously, or to do so on a set schedule.

The scheduling options were somewhat illogical, in my view. For instance, I was not able to run the backup every X days, or even twice a month. The options were essentially the same as those offered by cron, which has the same limitations.
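To illustrate the limitation: standard cron fields cannot reliably express “every X days”, because a step value in the day-of-month field resets at each month boundary. A minimal sketch, with a hypothetical script path:

```bash
# crontab -e
# Fires at 03:00 on days 1, 11, 21, and 31 of each month, then starts over,
# so the gap between the last run of one month and the first run of the next
# is not ten days. The script path is a placeholder.
0 3 */10 * * /usr/local/bin/gsuite-backup.sh
```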

In terms of what is backed up, the API configuration process gives you a very good idea. The tool captures users’ Google Drive, contacts, and calendar data, among other things, but it stops short of the total user-data export that I prefer to take through Google Takeout.

Despite a couple of limitations, the Synology program is a very affordable, onsite means of taking exactly the same backup that many G Suite cloud-to-cloud providers charge big licensing bucks to perform. Adding multiple user accounts to the backup tool is simple (just repeat the setup process for each account). Using Synology’s Cloud Backup tool, it is also easy to replicate the G Suite backups onto another cloud if required, giving you the offsite copy needed to satisfy the 3-2-1 rule that is the foundation of well-thought-out backup approaches.

Overall, with some reservations about what is not included in the G Suite tool, this is a backup tool that I definitely plan on using and would recommend.

Synology DS920+ can be purchased from Benda, KSP, or Amazon. Contact Synology for more information.

3-2-1: A Common-Sense Approach For Backing Up Ubuntu — And Keeping It In Good Order

Whether you’re an Ubuntu newbie, an Arch veteran, or dabbling in the abstruse world of Gentoo, backups are a topic that you should give at least occasional thought to.

Because even if you’re sticking to Long Term Support (LTS) releases, Linux installations are often fundamentally more at risk than Windows machines of suddenly and spectacularly going out of business.

Why, in so many cases, is this so?

  • Hardware compatibility, including for essential components like GPUs, remains a significant challenge, with many vendors still not supporting Linux and leaving it to the community to create workarounds;
  • Open source’s financial model doesn’t incentivize, much less require, thorough QA processes;
  • And for those keeping up with bleeding-edge releases, fundamental changes to package management tools have a nasty habit of occasionally bricking the system by opening up an irreparable Pandora’s box of dependency errors. Repairing these, even when possible, can mean going down days-long rabbit holes. What might seem like a good learning experience for a first-time user can become a deal-breaking frustration for a veteran user on the verge of jumping ship to Windows.

And Linux’s stability issue has enraged plenty of users. Browse the many user-in-distress threads on AskUbuntu.com and you’ll come across frustrated posters who have tried everything and ultimately concluded that the only way forward is to install from scratch.

While doing this can initially be a learning process of sorts, encouraging users to periodically rethink how to make their system leaner and streamline the recovery process, after a while it becomes nothing more than a big, time-draining nuisance. Sooner or later, even the most advanced power users will begin to crave stability.

I’ve been using Linux as my day-to-day OS for more than 10 years and have gone through my fair share of unwanted clean installations. So many, in fact, that I promised myself that my most recent re-installation would be the last. Since then, I’ve developed the following methodology, and it has kept my Lubuntu system running as well as the day I installed it, without a re-installation since. Here’s what I do.

Considerations: What Do You Need To Back Up?

Before deciding upon a backup strategy, you need to figure out some fundamentals:

  • What do you need to back up? Do you need to back up the full partition/volume or just the home user directory?
  • Will an incremental backup strategy suffice for your use case? Or do you need to take full backups?
  • Does the backup need to be encrypted?
  • How easy do you need the restore process to be?

My backup system is based on a mixture of methodologies.

I use Timeshift as my primary backup system, which takes incremental snapshots. And I keep a full disk backup on site that excludes directories that do not contain user data. Relative to the system root, these are (a minimal rsync sketch of such a job follows the list):

  • /dev
  • /proc
  • /sys
  • /tmp
  • /run
  • /mnt
  • /media
  • /lost+found

Finally, I keep two more backups. One is a (real) full-system, partition-to-image backup taken with a Clonezilla live USB; Clonezilla packages a series of low-level tools for replicating installations. The second is an offsite full system backup that I upload to AWS S3 about once a year, whenever I have a fast data uplink at my disposal.

Backup Tools Options

These days, the selection of tools you can use is large.

It includes:

  • Well-known CLIs such as rsync, which can be scripted and called as a cron job or run manually (a sample crontab entry follows this list)
  • Programs like Déjà Dup, Duplicity, and Bacula that provide friendlier front ends for creating and automating backup plans to local or off-site destination servers, including those operated by common cloud providers
  • And tools that interface with paid cloud backup services such as CrashPlan, SpiderOak One, and CloudBerry. This last category includes services that provide affordable cloud storage themselves, so the offering is end to end.
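As a hedged example of the first option, an rsync wrapper script can be scheduled with cron. The script path, log location, and schedule below are placeholders:

```bash
# crontab -e
# Run a (hypothetical) rsync wrapper script nightly at 02:30 and append its
# output to a log file in the home directory.
30 2 * * * /home/daniel/bin/rsync-backup.sh >> /home/daniel/rsync-backup.log 2>&1
```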

The 3-2-1 Rule

I’m going to give a quick overview of the tools I’m currently using on my main machine.

Although I’ve written some Bash scripts to copy essential config files into my main cloud storage, which I use for day-to-day files, the essential component of my backup plan simply backs up the entire machine, including virtual machines and system files that, in more nuanced approaches, would be left out or backed up separately.

Its central premise is adherence to the 3-2-1 backup rule. This approach should keep your data — including your main OS — safe in almost any failure scenario.

The Rule states that you should keep:

  • 3 copies of your data. I always say that this is a bit of a misnomer, because it really means you should keep your primary data source plus two backups; I would simply refer to this as “two backups.”
  • These two backup copies should be kept on different storage media. Let’s bring this back to simple home computing terms. You could write a simple rsync script that (incrementally) copies your main SSD onto another attached storage medium, say an HDD attached to the next SATA port on your motherboard. But what happens if your computer catches fire or your house is robbed? You would be left without your primary data source and without a backup. Instead, you could back up your primary disk to a Network Attached Storage (NAS) device or simply use Clonezilla to write it to an external hard drive.
  • One of the two backup copies should be stored offsite. Offsite backups are vital because, in the event of a catastrophic natural event such as flooding, your entire house could be destroyed. Less dramatically, a major power surge could fry all connected electronics in a house, or all those on a particular circuit (this is why keeping one of the onsite backups disconnected from a power supply makes sense; a simple external HDD/SSD is a good example). Technically, “offsite” is anywhere in a remote location, so you could use Clonezilla to write an image of your operating system to your work PC, or a drive attached to it, over the internet. These days, cloud storage is cheap enough that even full drive images can be stored affordably. For that reason, I back up my system in full, once a year, to an Amazon S3 bucket. Using AWS also gives you massive additional redundancy.

My Backup Implementation

My approach to backups is based on a few simple policies:

  • I want to keep things as simple as possible;
  • I want to give myself the most redundancy that I can reasonably achieve;
  • I want to, at a minimum, follow the 3-2-1 rule.

So this is what I do.

  • I keep an additional drive in my desktop that is used solely to house Timeshift restore points. Because I dedicate a whole disk to it, I have quite a lot of room to play around with. I keep a daily, a weekly, and a monthly snapshot. So far, Timeshift is all I have needed to roll the system back a few days to a point before something, like a new package, had an adverse impact on other parts of the system. Even if you can’t get past GRUB, Timeshift can be used from the CLI with root privileges to repair the system. It’s an amazingly versatile and useful tool (a sketch of the relevant commands follows this list). This is a first on-site copy.
  • I keep an additional drive in my desktop that is solely used for housing Clonezilla images of my main drive. Because these images would only really be useful to me in the event that Timeshift failed, I only take these once every three to six months. This is a second on-site copy.
  • Using Clonezilla, I also image my main drive onto an additional hard drive that I keep at home, external to the PC. For this drive, however, I use a device-to-device backup rather than a device-to-image backup as in the previous copy, so that it would be good to go instantly if my primary drive were bricked. If I were to recover from the internal Clonezilla backup drive, for instance, I would first need to follow a restore process. Assuming the other system components are in good working order following a hard drive failure, I would theoretically only need to connect this drive to the motherboard to begin using it. This is a third on-site copy.
  • Finally, once every six months or so, I upload a Clonezilla-generated image of my system to AWS S3. Needless to say, this is a long multipart upload and needs to be undertaken from an internet connection with a good upload link.
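For reference, here is a hedged sketch of the commands behind two of these copies: working with Timeshift from the command line, and pushing a Clonezilla image directory to S3 with the AWS CLI. The snapshot comment, image path, and bucket name are placeholders:

```bash
# Timeshift from the CLI (needs root); handy when the desktop won't boot and
# you are working from a console or live session.
sudo timeshift --create --comments "before dist-upgrade"   # take a snapshot
sudo timeshift --list                                      # list available snapshots
sudo timeshift --restore                                   # interactively choose one to roll back to

# Off-site copy: upload a Clonezilla image directory to an S3 bucket.
# `aws s3 cp` handles the multipart upload automatically for large files.
aws s3 cp /mnt/clonezilla-images/2020-desktop-img \
  s3://example-backup-bucket/clonezilla/2020-desktop-img --recursive
```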

Altogether, my system involves three on-site copies and one off-site copy of my main desktop.

Main Takeaways

  • All Linux users should have robust backup strategies in place
  • The 3-2-1 backup rule is a good yardstick for ensuring that your data is safe in virtually all circumstances.
  • I use a combination of Timeshift and Clonezilla to create my backups, although there are plenty of other options, including paid ones, on the market. For cloud storage, I use a simple AWS S3 bucket, although, again, there are integrated services that include both software and storage.