10 Best Cheap Linux Laptops to Buy on a Budget https://linuxhint.com/best_cheap_linux_laptops/ Mon, 09 Dec 2019

Compared to other operating systems, Linux offers a dedicated environment for programmers that is free and more committed to users' privacy and security. This is why Linux's popularity has increased drastically over the years.

Whether you're looking to buy a laptop with Linux pre-installed or want to run it in parallel with a Windows operating system, you've come to the right place. Read on to find out some interesting specifications of the top ten Linux laptops you can buy at the most affordable prices.

1.   Acer Aspire E 15

  • CPU: 8th generation Intel Core i3-8130u processor, i5 and i7 upgrades available
  • Graphics: NVIDIA GeForce MX150 graphics card
  • RAM: 6 GB
  • Storage: Up to 1 TB SSD space
  • Display: 15.6-inch Full HD LED Display
  • Screen Resolution: 1920 x 1080 pixels
  • Battery Life: Up to 8 hours and 45 minutes when fully charged
  • OS: Ships with Windows 10 Home; compatible with Ubuntu 16.04 and later versions.
  • Size: 15 x 10.2 x 1.2 inches
  • Availability: Globally
  • Price: US$ 379.99
  • Buy: amazon


2.   HP Chromebook 14

  • CPU: Dual-core Intel Celeron N3350 processor; also available with an AMD A4-9210 CPU
  • RAM: 4 GB
  • Graphics: Intel HD Graphics 500
  • Storage: Up to 32GB eMMC storage
  • Display: 14-inch HD Display
  • Screen Resolution: 1366 x 768 pixels; Full HD versions offer a 1920 x 1080 display as well
  • Battery Life: Up to 5 hours when fully charged
  • OS: Ships with Chrome OS; Ubuntu 16.04 and later versions can be installed alongside it.
  • Availability: Globally
  • Price: US$ 203.59. For Full HD, the price is $330.
  • Buy: amazon


3.   System76 Galago Pro

  • CPU: 8th generation Intel Core i7-8565U 1.8 GHz quad-core processor (up to 4.6 GHz turbo), also available with a Core i5
  • Graphics: Intel UHD Graphics 620
  • RAM: 8 GB DDR4 2400MHz RAM – extendable up to 32GB
  • Storage: Up to 240 GB SSD space
  • Display: 14-inch Matte Full HD Display
  • Screen Resolution: 1920 x 1080 pixels
  • Battery Life: Up to 5 hours when fully charged
  • OS: Ubuntu Linux 18.04 LTS and higher versions.
  • Weight: 2.87 lbs
  • Availability: Globally
  • Price: US$ 999.99
  • Buy: system76


4.   Pinebook Pro

  • CPU: ARM Cortex-A72 1.8 GHz 64-bit dual-core processor (part of the hexa-core Rockchip RK3399 SoC)
  • Graphics: Integrated Graphics Card
  • RAM: 4 GB LPDDR4 RAM
  • Body: Magnesium Alloy Shell
  • Storage: 64 GB eMMC, which is upgradable
  • Display: 14-inch Full HD IPS Display
  • Screen Resolution: 1920 x 1080 pixels
  • Battery Life: Up to 6 hours when fully charged
  • OS: Ubuntu 16.04 and higher versions or any other Linux Distro.
  • Weight: 2.9 lbs
  • Availability: Globally
  • Price: US$ 200
  • Buy: pine64


5.   ASUS ZenBook UX331UA

  • CPU: 8th generation Intel Core i5-8250u processor
  • Graphics: Intel HD Graphics 620
  • RAM: 8 GB
  • Storage: 256GB SSD space
  • Display: 13.3-inch Full HD Display
  • Screen Resolution: 1920 x 1080 pixels
  • Battery Life: Up to 14 hours
  • OS: In built Windows 10 home, compatible with Ubuntu 16.04 and higher versions.
  • Price: US$ 799.99
  • Buy: amazon


6.   HP Stream 14

  • CPU: AMD A4-9120E 1.5 GHz Dual-Core processor (Turbo up to 2.2GHz)
  • Graphics: AMD Radeon R3
  • RAM: 4GB Non Expandable RAM
  • Storage: 64GB
  • Display: 14-inch diagonal HD BrightView WLED-backlit display
  • Screen Resolution: 1366 x 768 pixels
  • Battery Life: Up to 14 hours
  • Weight: 3.17 lbs
  • OS: Ships with Windows 10; you can install Linux instead, but the limited storage rules out dual-booting the two.
  • Price: US$ 230
  • Buy: amazon


7.   Acer Chromebook 514

  • CPU: Intel Celeron N3350 Dual-Core processor (Turbo up to 2.4GHz)
  • Graphics: Intel HD Graphics
  • RAM: 4GB LPDDR4 RAM
  • Storage: 32GB expandable storage
  • Display: 14-inch Full HD IPS LED-backlit display (with a backlit keyboard)
  • Battery Life: Up to 12 hours
  • Weight: 3.1 lbs
  • OS: Chrome OS, with any Linux distro on dual boot
  • Price: US$ 345.43
  • Buy: amazon


8.   Acer Aspire 1 A114

  • CPU: Intel Celeron N4000 Dual-Core processor
  • Graphics: Intel HD Graphics
  • RAM: 4GB RAM
  • Storage: 64 GB eMMC
  • Display: 14-inch Full HD Widescreen LED-backlit Display
  • Screen Resolution: 1920 x 1080 pixels
  • Battery Life: Up to 6.5 hours
  • OS: Windows 10, with any Linux distro on dual boot
  • Price: US$ 229.99
  • Buy: amazon


9.   Acer Chromebook 13

  • CPU: 8th Generation Intel Core i3-8130u 2.2GHz processor
  • Graphics: Intel UHD Graphics 620
  • RAM: 8 GB LPDDR3 RAM
  • Storage: 32 GB eMMC, expandable via SD card
  • Display: 13.5-inch IPS LED Display
  • Screen Resolution: 2256 x 1504 pixels
  • Battery Life: Up to 10 hours
  • Weight: 3.3 lbs
  • OS: Chrome OS, with any Linux distro on dual boot
  • Price: US$ 699.99
  • Buy: acerrecertified


10.   ASUS VivoBook S15

  • CPU: 8th Generation Intel Core i5-8265U 1.6 GHz quad-core processor with turbo up to 3.9 GHz
  • Graphics: NVIDIA GeForce MX250 GPU with 2 GB GDDR5 graphics memory
  • RAM: 8 GB DDR4 RAM
  • Storage: 256 GB M.2 solid state drive, expandable via SD card
  • Display: 15.6-inch Full HD NanoEdge Display with a 178-degree viewing angle
  • Battery Life: Up to 6 hours, with fast charging
  • Weight: 3.97 lbs
  • OS: Windows 10 Home 64-bit with any Linux distros on dual boot
  • Price: US$ 749.99
  • Buy: amazon


Conclusion

That concludes our list of the top ten recommended Linux laptops to buy on a budget. These machines cater to the different processor, budget, and hardware requirements of users who run a Linux operating system daily. They are also recent models that either come with Linux pre-installed or offer full support for dual-booting it with Windows.

Book Review: Apache Kafka 1.0 Cookbook https://linuxhint.com/apache-kafka-cookbook/ Tue, 03 Jul 2018

Written by: Raúl Estrada
Published by: Packt Publishing
Summary: Over 100 practical recipes on using distributed enterprise messaging to handle real-time data
Publisher Link: Apache Kafka 1.0 Cookbook

This book is a cookbook, a compendium of practical recipes that are solutions to everyday problems faced in the implementation of a streaming architecture with Apache Kafka.

Target Audience For This Book

You are an IT professional who works in software development and data processing but has no experience yet with Apache Kafka. This book is for you: it won't spend much of your time on theory, but gets right into how to set up Kafka and what you can do with it to build bigger, better, and more robust systems than you ever could before learning Apache Kafka.

Getting Started

The author gets you started right away by showing you how to install, on various platforms, all the software and dependencies, including the Scala programming language and Apache ZooKeeper. You then install Apache Kafka, configure it like a real-world system in cluster mode on a single host, and start the server processes right away with the initial recipes. Brokers in Kafka are the servers themselves, and the book shows how to configure and start them.
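
As a taste of what those first recipes boil down to, here is roughly how the servers are started from a stock Kafka 1.0 tarball, each in its own terminal (paths assume you are inside the extracted Kafka directory):

$ bin/zookeeper-server-start.sh config/zookeeper.properties
$ bin/kafka-server-start.sh config/server.properties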

Creating your first topics

Early in Chapter 1, you will use the command line interface to create topics in Apache Kafka. Topics are the core abstraction used to store and read data: a linear sequence of immutable messages that can be published to and read from using a message offset. You will also learn the command line tools to list, describe, and inspect topics.
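
A sketch of what topic management looks like with the stock tooling (the topic name is arbitrary, and the localhost addresses assume the default single-host setup from the install recipes):

$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
$ bin/kafka-topics.sh --list --zookeeper localhost:2181
$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test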

Command Line Tools

A recipe shows how to use the basic command line tools to generate data and insert it into a topic, along with the various useful options for inserting data via the command line interface. The same options can be used from code rather than from the command line. The command line tools for reading from a topic are shown next; again, the same can be done either via the command line or by writing code.
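
For instance, the bundled console clients can write to and read back from the test topic above (the port is the broker default; adjust for your setup):

$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning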

Optimizing the Install

You can change the threading options for performance and the replica options for reliability. Logging options can be modified to fine-tune how you want to debug from the logs. ZooKeeper settings can also be tuned for performance and scalability. Quick recipes to get started with tuning these are shown in Chapter 1.
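
Most of these broker tweaks are edits to the config/server.properties file; a minimal sketch of the kind of settings involved (the values shown are illustrative, not recommendations):

# Threads handling network requests:
num.network.threads=3
# Threads performing disk I/O:
num.io.threads=8
# Threads fetching data from partition leaders for replication:
num.replica.fetchers=2
# How long to retain log segments, in hours:
log.retention.hours=168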

Core Content of Book

After the basics are covered the book goes into more advanced topics such as:

  • Clustering: Different recipes for common topologies of deployment
  • Message Validation: Override the Producer class and ensure all messages are valid before they are written to a topic
  • Message Enrichment: Override the Producer and add more color to the data based on geolocation or any additional context
  • Confluent Platform: Confluent is a leading Apache Kafka vendor; see what it offers as added value
  • Kafka Streams: Process data in a streaming fashion as it comes into a topic, and transform it or write new data to another topic
  • Monitoring & Management: Learn the best practices and recipes for production monitoring and management
  • Security: Ensure you have secured your Kafka install with best practice recipes
  • Integration with Open Source Projects: HDFS, ElasticSearch and other systems you can integrate Apache Kafka with

Outcomes

After going through the recipes in this book, you will no longer be a newbie: you will have deployed both simple and real-world Apache Kafka topologies, written to and read from topics, and processed data in a streaming fashion. You will have the basic skills you need to start leveraging this powerful technology in the real world. Get the book from Amazon today and start your learning path with it.

(This post contains affiliate links. It is a way for this site to earn advertising fees by advertising or linking to certain products and/or services.)

Book Review: Mastering Linux Security and Hardening https://linuxhint.com/book-review-mastering-linux-security-and-hardening/ Sat, 09 Jun 2018

Written by: Donald A. Tevault
Published by: Packt Publishing
Summary: Secure your Linux server and protect it from intruders, malware attacks, and other external threats
Official Book Link

One thing to always remember as you go through this book is that the only operating system you’ll ever see that’s totally, 100% secure will be installed on a computer that never gets turned on.

Target Audience For This Book

You are a Linux user and have been using Linux for a couple of years or a couple of decades, but have never really dug into the details of how to harden a Linux system. You might be a developer, a casual Linux user, a system administrator, a DevOps or release engineer, or some variation of the same. Now it's time to harden your system and sharpen your knowledge of security.

Setting the Context In Beginning

Why should you even care about the content in this book? How would you like a system you are responsible for to be hijacked and converted into a crypto-currency mining server, or a file server for illicit content on behalf of a criminal? Or perhaps your server will be hijacked and used in a distributed denial of service attack, bringing down important corporate or government servers. If you leave non-secure systems on the Internet, you are part of the problem, to say nothing of having your proprietary content stolen by hackers. The default settings in most Linux distributions are inherently insecure! This context is set at the beginning of the book.

Content Organization and Style

After setting the context around security and providing links to some mainstream security news websites you can subscribe to or visit to keep current on new developments in security and computing in general, the book introduces its lab environment. The labs are not super proprietary, but you will need a Linux environment; for that, VirtualBox or Cygwin is recommended, and setup instructions are provided (mostly for newbies without access to Linux systems to run the labs). If you have your own system, you can bypass VirtualBox or Cygwin and run the labs on it to save setup time; but if you are more of a newbie, definitely follow the lab setup process.

The content in this book is geared to two of the most popular Linux distribution families: Red Hat (or CentOS) and Ubuntu. These are great choices to focus on, as they are the most mainstream Linux distributions. What becomes obvious while reading is that much of Linux security hardening is distribution dependent: the kernel itself is fairly secure, but the surrounding components are what open up various potential issues. No book can cover all Linux distributions, and this one focuses on Red Hat, CentOS, and Ubuntu, although the principles are largely generic.

Most of the content in this book assumes you are familiar with using the command line interface for Linux, which is more efficient and more suitable for day-to-day Linux work; however, graphical user interface tools are showcased in the cases where they add special value.

Core Content of Book

  • Proper usage of the sudo command to restrict the need for full root access (see the sketch after this list)
  • How to prevent overly simple passwords and enforce periodic password resets by users
  • Temporarily locking suspicious or under-investigation user accounts
  • Basic firewall setup to limit traffic to specific ports and applications
  • The difference between symmetric and asymmetric encryption algorithms and their respective use cases
  • How to encrypt files, directories, and disk volumes on the system
  • Basic SSH hardening, including use cases where this is important
  • chown/chmod and the basic permission system; coverage is largely for beginners and a good review for others
  • Access Control Lists, which are more sophisticated than the basics of chown/chmod; this is for intermediate to advanced users
  • SELinux (RHEL) and AppArmor (Ubuntu): acknowledges the clunkiness of these solutions but shows how they can be used, and the use cases where they add specific value
  • The relevance of and techniques for virus and malware detection and prevention, and how this differs from Windows, which is much more vulnerable
  • Complying with official security standards and how to verify your system against them using tools
  • Snort for intrusion detection; if your system is compromised, you need a way to detect the intrusion
  • An introduction to Linux distributions designed specifically for security vulnerability work, such as Security Onion, Kali, Parrot, and BlackArch
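
To give a flavor of the sudo material above, here is a minimal sketch of the kind of restricted rule the book works toward (the user name and command list are hypothetical; always edit such rules with visudo rather than directly):

# /etc/sudoers.d/webadmin: allow one user to run only two commands as root
webadmin ALL=(root) /usr/bin/systemctl restart nginx, /usr/bin/journalctl -u nginx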

Outcomes

Get the book from Amazon today. You can start as anything from a casual to an advanced Linux user and sharpen your awareness of security hardening just by reading this one book, so it's highly recommended that everyone using Linux get a copy and work through its lessons.

(This post contains affiliate links. It is a way for this site to earn advertising fees by advertising or linking to certain products and/or services.)

How to Upgrade Kernel of Debian 9 Stretch from Source https://linuxhint.com/how-to-upgrade-kernel-of-debian-9-stretch-from-source/ Tue, 19 Dec 2017

In this article, I will show you how to upgrade the kernel of Debian 9 Stretch. I will download a kernel source from the official website of the Linux kernel and compile it on Debian 9 Stretch. Let's get started.

Checking the Installed Kernel Version:

You can check the current version of the kernel that is installed on your Debian 9 operating system with the following command:

$ uname -r

From the output of the 'uname' command, you can see that the kernel installed on my Debian 9 operating system is version 4.9.0.


Downloading the Kernel:

Go to the official website of Linux Kernel at https://www.kernel.org from any web browser of your choice. You should see the following page:

You can see that the source code of different kernel versions is listed on the website. You can download the kernel sources as compressed tar files.

There are mainline, stable, and longterm kernels that you can download. If you're upgrading the kernel of a production computer running Debian, you should download a longterm or stable release. If you're just testing something, you may download the mainline release if you want; be warned that the mainline release may have bugs. If you care about stability, always get a stable or longterm release.

In this article, I will show you how to upgrade the default Debian 9 kernel to stable 4.14.7 kernel. So click on the link as shown in the screenshot to download the source code for kernel 4.14.7

Your browser should prompt you to download the file. Click on “Save File” and click on “OK”. Your download should start.


Installing Required Tools for Building the Kernel:

Before you can compile a kernel on Debian 9, you need to install some additional packages, basically the compiler and the required dependencies.

First update the package repository cache with the following command:

$ sudo apt-get update

Now run the following command to install the required packages:

$ sudo apt-get install build-essential libncurses5-dev gcc libssl-dev bc

Just press ‘y’ and press <Enter> to continue.

Once the installation is complete, we can start the kernel compilation process.


Compiling the Kernel:

Now we can compile the kernel from source. Before you go any further, make sure you have about 18-20 GB of free space on the filesystem where you are going to compile the kernel.

First, go to the directory where you downloaded the Linux kernel source. In my case, it is the Downloads directory in my HOME directory.

Run the following command to navigate to the Downloads directory:

$ cd ~/Downloads

You can see from the output of ‘ls’ that the name of the file I downloaded is ‘linux-4.14.7.tar.xz’.

Now we have to extract the tar archive.

Run the following command to extract the tar.xz archive:

$ tar xvf linux-4.14.7.tar.xz

You can see that a new directory ‘linux-4.14.7’ was created.

Now navigate to the directory with the following command:

$ cd linux-4.14.7

Now we have to copy the boot configuration into the 'linux-4.14.7' directory. We are doing this because it's a lot of work to figure out which kernel modules to enable and which to disable to get a working system, so we simply reuse the configuration that the current kernel is using.

From the output of the following ‘ls’ command, you can see a config file marked black in the screenshot. This is the file we are interested in.

Run the following command to copy the configuration file:

$ cp -v /boot/config-4.9.0-3-amd64 .config

The new kernel may have a lot of features that the old kernel didn't have, so it's a good idea to run the following command to bring the old configuration file up to date against the new kernel's options. Otherwise, the build will ask a lot of questions that might not make sense to you.

$ make menuconfig

You will be presented with the following window. From here you can enable and disable specific kernel features. If you don’t know what to do here, just leave the defaults.

Once you’re done deciding what you want to install, press the <Right Arrow> key to navigate to “Save” and press <Enter>

Press <Enter>

Press <Enter>

Navigate to “Exit” and press <Enter>. You’re done.

Now run the following command to start the kernel compilation process:

$ make deb-pkg

The compilation process takes a very long time to finish.
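
If you have a multi-core CPU, you can shorten the build considerably by running parallel jobs; a common variant of the command above (assuming GNU make and enough RAM for parallel compilation):

$ make -j$(nproc) deb-pkg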

Once the compilation is complete, you should see the following window:

4 Debian package files (.deb) will be generated in the directory above the 'linux-4.14.7' directory, as you can see in the screenshot.

Now all we have to do to update the kernel is to install all the generated .deb files.

Run the following command to update the kernel:

$ sudo dpkg -i ../linux-*.deb

Now restart your computer with the following command:

$ sudo reboot

Once the computer boots, run the following command to check the kernel version:

$ uname -r

You can see from the output of the ‘uname’ command that the kernel version is updated to 4.14.7

So that's how you update the kernel of Debian 9 Stretch. Thanks for reading this article.

DigitalOcean vs AWS https://linuxhint.com/digitalocean-vs-aws/ Mon, 18 Dec 2017

DigitalOcean vs Amazon Web Services (AWS)

DigitalOcean and Amazon Web Services (AWS) are two of the most popular cloud computing services. DigitalOcean is quite popular among small businesses and indie developers, whereas AWS is mostly popular among medium-sized to enterprise-level businesses. Arguably, AWS is the leading force in cloud computing, having massive infrastructure at its disposal and a tremendous number of corporate clients, which allows it to grow in size and take the service to the next level. Lately, however, DigitalOcean has been gaining a tremendous amount of attention from various directions, and this article demonstrates what DigitalOcean has done to stand out in the cloud computing world, along with the major differences between these two competitors.


DigitalOcean

Payment Method and Sign up

DigitalOcean accepts both PayPal and debit cards as valid payment methods. If PayPal is used, $5 must be deposited to verify the account; without verifying the account, it's currently not possible to complete the sign-up even if a promo code is used. That said, the signup process is relatively easy.

Droplets

DigitalOcean is a cloud computing service that provides a wide variety of features and is known for the simplicity and elegance of its web interface. Thanks to that simplicity, it doesn't take much time to roll out a cloud compute instance: it takes less than 2 minutes to create one and have it publicly reachable with an accessible public IP address. Initially, a single IPv4 address is allocated to the instance, which is known as a droplet on the DigitalOcean platform, but it's possible to acquire an IPv6 address through the droplet settings page, unique to each droplet.

Operating Systems

DigitalOcean provides a range of operating systems, from the popular Ubuntu distro to CentOS, which is currently dominant in the server market. As seen in the following screenshot, the latest Ubuntu version, 17.10, is available to the general public, and any recent version usually shows up here a short while after release, a huge plus for clients who value the latest features over stability.

DigitalOcean operating systems

Additionally, it's possible to deploy either a container distribution or a one-click app. A container distribution is a minimal version of the operating system, more suitable for advanced users. Unfortunately, at the moment only 3 operating systems are available under this category: CoreOS, Fedora Atomic, and RancherOS.

One-click apps simplify cloud computing even further by providing a range of popular web apps that can be installed with just one click right into the droplet. Afterward, the credentials to access the web app are emailed to the address used during registration, and the web app is ready to use. Some of the popular web apps are Discourse, Ghost, and WordPress.

System Specification

There are 3 main hardware categories available for every droplet: Standard, High Memory, and High CPU. Each category is optimized for a purpose: Standard is for regular users, High Memory is for apps that demand a lot of memory, and High CPU is for apps that consume more CPU power. The fee is charged based on the hardware specification.

The cheapest droplet starts from just $5 per month, billed on an hourly basis at $0.007 per hour, so a droplet can be destroyed at any time without paying the whole $5. Its hardware specification is 512 MB of RAM, 1 CPU, a 20 GB SSD disk, and 1000 GB of transfer. Plans go up to 640 GB of space, 64 GB of RAM, 20 CPUs, and a 9 TB transfer allowance, priced at $640 per month, meaning $0.952 per hour. After creating a droplet, its specs can be upgraded to higher tiers as well, which increases the fee along with it.

Datacenters

There are a staggering 19 data centers across the globe in various countries, which is quite surprising for a small cloud computing provider. They are in New York, San Francisco, Amsterdam, Singapore, London, Frankfurt, Toronto, and Bangalore. Surprisingly, the fee is consistent across all the data centers despite their different political and economic backgrounds. According to DigitalOcean, a brand-new datacenter in Australia is on the horizon, which will make the service ideal for Australian residents.

Additional Options

As additional options, it provides private networking, suitable for making an intranet among droplets; backups, which automate backing up on a weekly basis; IPv6, for next-generation networks; and data monitoring, which further enhances the existing monitoring system with additional services. These services are free of charge, except backups, which are charged monthly depending on the size of the droplet.

If backups are taken only infrequently, snapshots are much more suitable, but unfortunately they are charged at $0.05 per gigabyte per month, so the more snapshots, the more it costs. By default, each droplet is automatically assigned a username and password, which are emailed to the user's email address, but it's possible to add an SSH key as well while the droplet is being created.

Object Storages

Object Storage is a brand-new service introduced recently for hosting static files in the cloud, as with Dropbox or Google Drive. The files are accessible through the standard portal and can also be linked with droplets. Object storage doesn't have multiple packages, just one, which is free of charge for the first 2 months and billed at $5 per month thereafter for 250 GB of space and 1 TB of outbound traffic. Overage fees are $0.02 per GB of storage and $0.01 per GB of transfer, with free inbound data transfer.

Images

Apart from automatic backups, droplets can be backed up manually as well. Manually taken backups are known as snapshots, and they can be used to restore a droplet in case of a malfunction. Unfortunately, it's not possible to restore snapshots or backups taken from a higher-tier package to a lower one.

Networking

The networking segment offers a range of features to enhance the networking side of the droplet. This includes advanced DNS records, which are usually managed at the domain name registrar. A floating IP keeps the droplet reachable even when it's under maintenance, which is quite ideal for highly available infrastructure. Load balancers distribute the server load across multiple droplets with ease. A basic firewall defends the droplets against intruders; it isn't meant to defend against DDoS attacks, so a professional-level firewall is still required.

API

The API makes it possible to design your own interface to interact with DigitalOcean services. This is mostly for programmers who intend to combine it with other services through a single interface. The documentation for the API can be found at https://developers.digitalocean.com/documentation/v2/.
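
A minimal sketch of what an API call looks like, assuming you have generated a personal access token in the control panel and exported it as the (hypothetical) DO_TOKEN environment variable; the endpoint is from the v2 documentation:

$ curl -H "Content-Type: application/json" -H "Authorization: Bearer $DO_TOKEN" "https://api.digitalocean.com/v2/droplets"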


Amazon Web Services (AWS)

Payment Method and Sign up

AWS makes the sign-up process slightly difficult for new users by requiring both phone verification and credit card verification. The phone verification can be done either by calling customer support or by typing the PIN shown on the screen into the phone. If the PIN entry fails twice, the sign-up process is automatically blocked for 12 hours, and the only way forward is to contact customer service to verify the account. That said, customer support responds without any delay as long as "chat" is selected as the response method.

Instances and System Specification

As instances, it provides two options: EC2 and Lightsail. EC2 is based on a pay-as-you-go model, whereas Lightsail has fixed rates and fixed hardware specifications. The lowest EC2 package starts from 0.5 GB of memory and 1 CPU core with EBS storage. EBS storage is relatively slower than SSD storage, but it's expandable up to 16 TB from the 30 GB given free of charge for a year. It's billed on an hourly, pay-as-you-go basis, priced at $0.0058 per hour. The highest tier available at Amazon is i3.16xlarge, which costs $4.992 per hour, a staggering $3,594.24 per month. No package at DigitalOcean matches this humongous one at Amazon, so it's definitely more suitable for the corporate level than for small businesses.

Lightsail is the same as EC2 except that it has a flat fee charged at the end of the month, in contrast to EC2's pay-as-you-go fee. However, even though it's not stated up front, both instance types are actually charged on an hourly basis, so both are similar to DigitalOcean in charging frequency. The upside on the EC2 side is that an instance is only charged while it's in the running state, whereas a Lightsail instance is charged whether it's running or stopped, which is quite similar to the pricing method at DigitalOcean. Since the two are almost the same, this may confuse people more than benefit them.

The lowest tier in Lightsail is $5, basically the same as the lowest tier in DigitalOcean. The highest tier available in Lightsail is the $80 package, which gives 8 GB of memory, a 2-core processor, an 80 GB SSD disk, and 5 TB of transfer. It's comparable to the $80 package at DigitalOcean, except that DigitalOcean offers more CPU power and a consistent data transfer allowance across the whole globe, which is not seen in Lightsail, as it charges more for bandwidth in the Mumbai and Sydney data centers regardless of the package.

Operating Systems

At first glimpse, it's quite obvious that AWS doesn't have as many operating systems as DigitalOcean, and the available ones are a bit older as well; for instance, on DigitalOcean the latest available Ubuntu version is 17.10, whereas on AWS it's 16.04 LTS. As a plus, however, AWS provides the Windows operating system, which is often used for hosting .NET web apps and SQL Server databases. Lightsail provides both Windows Server 2012 and 2016.

AWS operating systems

Similar to DigitalOcean, AWS has a number of one-click web apps (11), fewer than DigitalOcean (16), and they are limited to Lightsail by default; however, users can still get a tremendous number of third-party one-click apps from the AWS Marketplace, so technically AWS has thousands of one-click apps. Basically, AWS is much more complicated in terms of configuration but has more diverse options.

Static Content Hosting

Even though it's possible to host static content on DigitalOcean, there is no simple, ready-made solution like the one AWS offers. In AWS, static hosting is possible with the S3 service. Since the content is static, no server-side scripts are allowed, meaning client-side scripts, HTML, and CSS can all be served from S3. This is a huge advantage for static content developers.
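
A rough sketch of publishing a static site to S3 with the AWS CLI (the bucket name is hypothetical, and the bucket still needs a public-read policy, omitted here):

$ aws s3 mb s3://example-static-site
$ aws s3 sync ./site s3://example-static-site
$ aws s3 website s3://example-static-site --index-document index.html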

Domain Name Registration and Site Management

Surprisingly, AWS also acts as a domain name registrar, but it's slightly more expensive than other registrars like Namecheap; for instance, at AWS a .com domain is priced at $12, whereas at Namecheap it's just $10.69. DigitalOcean at the moment doesn't provide any service for registering new domains. Additionally, AWS goes much further and provides a whole separate service just for managing websites, known as Amazon Route 53. With Route 53, DNS management, traffic management, availability monitoring, and domain registration (as stated earlier) are all possible. With DigitalOcean, at the moment, it's only possible to manage DNS. Route 53 is again slightly complicated due to the way it's organized; the whole problem AWS has is a lack of organization.


Conclusion

DigitalOcean and AWS are both extraordinary cloud computing services that provide sophisticated features for building complex web applications at a decent price. DigitalOcean doesn't have as many features, but the available features are nicely organized, whereas AWS's feature organization is a whole mess; on the other hand, AWS has a tremendous number of features, too many to review in this small article.

AWS is often regarded as a corporate-level cloud computing service, but given that it has a decent price model with good features comparable to DigitalOcean's, there is no huge advantage in moving to DigitalOcean on features alone. However, DigitalOcean is much more user-friendly and newbie-friendly and has a consistent price scheme across the globe; hence, DigitalOcean is recommended for new users, whereas AWS is mostly useful for experienced users.

Tor and Onion Explained https://linuxhint.com/tor-and-onion-explained/ Mon, 18 Dec 2017

Introduction to TOR and .onion

What is TOR For?

That is the first question that comes to mind. The Onion Router (aka TOR) is a tool that allows you to stay somewhat anonymous while using the Internet. You might ask yourself: I did nothing wrong or illegal, so why do I need to stay anonymous? That's a very good question.

The Internet is global and is not subject to any one country's regulations. Even if you are not doing anything your government would consider illegal, there is still a pretty good chance your activities will upset someone. Imagine this: one day you log into your account and discover it has been hacked (through no fault of your own) and used to make posts directly opposite to (not to mention extremely offensive compared with) what you believe in. You check your email, and it is full of "hate mail" from your now-former fans. While the damage might not be irreparable, do you also want to worry about the attackers actually knowing your real-world identity and where you live? Do you want them contacting your employer, your landlord, and your real-life friends with links to the horrible things they put online while pretending to be you? Need I continue?

And that is why you would be wise to stay anonymous online and learn to use tools that facilitate it (including TOR).

How TOR Works.

The core idea behind TOR is that it channels your communication through a number (at least 3) of relays, each with its own layer of encryption. So even if a relay (except for an exit node; more on that later) gets compromised, there is no easy way to know what your final destination is or where you are coming from, because everything (except the information about the next relay) is encrypted.

In fact, each relay uses a separate layer (like an onion) of encryption. When the TOR client sends data, it is first encrypted so that only the exit node can decrypt it; some metadata is added, and the result is encrypted again with a different key. The step is repeated for every relay in the circuit. Check out this post for more details on how TOR works.

The Bad Exit

You might ask yourself: it is all well and good that TOR keeps you safe even if some of the intermediate nodes are compromised, but what happens if it is the exit node (the one that connects to your final destination)? Short answer: nothing good (for you). That's the bad news. The good news is that there are ways to mitigate the threat. The community identifies and reports bad exit nodes (they get flagged with the BadExit flag; see this up-to-date list) on a regular basis, and you can take some measures to protect yourself as well.

It is hard to go wrong with using HTTPS. Even if the exit node is controlled by the attacker, they don't actually know your IP address: TOR is designed so that each node only knows the IP address of the previous node, not the origin. One way they can figure out who you are is by analyzing the contents of your traffic and modifying it (injecting JavaScript is a fairly common tactic). Of course, you have to rely on your destination site to actually keep its TLS (check out this article for more details) up to date, and even then you might not be safe depending on the implementation. But at least using encryption will make attacks a *lot* more expensive, if not impractical, for would-be attackers. This fun interactive online tool can help you see how TOR and HTTPS fit together.

By the same token, it is also a good idea to use a VPN, preferably one that does not keep more logs than necessary (IPVanish is pretty good). This way, even if your encryption is cracked and your origin IP is tracked down, the attackers still don't have much to work with. Besides, given the state of net neutrality, it is a good idea to obscure your online activities from your ISP. Unless you like your internet access being throttled and the data about your online habits being sold to the highest bidder, of course.

Use .onion And Disable JavaScript

There are more measures you can take to stay safe. One thing you can do is check whether a website you use has a .onion service (quite a few do, including the DuckDuckGo search engine) and use that when it does. What that means: the website itself is also the exit node. This makes life a lot harder for would-be attackers, as the only way they can control the exit node is by controlling the service itself. Even then, they still won't easily know your IP address.

One way they can find out your IP address is by injecting certain JavaScript into the response. It is highly recommended that you disable JavaScript in your TOR browser for that reason. You can always enable it for a specific site if need be.

TOR Helps Everyone Stay Safe

They say: "If you have nothing to hide, you have nothing to fear." Unfortunately, the opposite is also true. Even if you did not do anything wrong, you could still be targeted by someone. Your data can also be used for questionable things (such as identity theft) without your knowledge, so why should you let everyone see it?

Besides, if you use TOR, you create more traffic for the "bad guys" to analyze, making their lives more difficult in general and thus helping everyone else stay safe. Keep calm and use open source.

Works Cited

“How HTTPS and Tor Work Together to Protect Your Anonymity and Privacy.” Electronic Frontier Foundation, 6 July 2017
“How Tor Works: Part One · Jordan Wright.” Jordan Wright, 27 Feb. 2015
“Net Neutrality.” Wikipedia, Wikimedia Foundation, 14 Dec. 2017
Project, Inc. The Tor. “Tor.” Tor Project | Privacy Online
TLS vs SSL, Linux Hint, 8 Dec. 2017

Latest Debian Version and How To Download It https://linuxhint.com/latest-debian-version-download/ Sun, 17 Dec 2017

The latest stable version of Debian is Debian 9, codenamed Stretch, released on June 17th, 2017. This version will be supported for 5 years, all the way until June 2022! Interesting tidbits about this version: MySQL has been replaced by default with the more open MariaDB project, and official Firefox and Thunderbird packages are now in Debian 9, where previously similar software shipped under different names; that naming and branding issue no longer confuses users in Debian 9. The developers also highlight the boot process of Debian 9 with UEFI, which is greatly improved in this version. Most of the additional changes are normal updates and all-around improvements of many small things that add up to the latest and best version of Debian.

Debian minor version 9.3 was released on December 9th, 2017.

Debian has Network Installation media, which contains the minimum needed to boot the system; from there, you can use apt-get after the system is up to add more packages. This is what I recommend for most cases, since you don't need to download and install such a big image.
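
For example, once the minimal system is up, pulling in extra software is a couple of commands (the package names here are just illustrations):

$ sudo apt-get update
$ sudo apt-get install vim git task-gnome-desktop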

The link to download these Network Installation media is here.

The installation guide for the Debian stable version for AMD64 in English is linked here. For other architectures and languages, you can find the installation guides here.

Here is a direct download link for the image for AMD64 arch.

If you want to download from one of the mirrors, which might offer faster download speeds, check this link.

Here are some direct links to a subset of fast mirrors for quick access from this page:

Americas, Debian Mirror Download

utexas.edu: University of Texas, USA

xfree.com.ar: XFree, Argentina

Europe, Middle East, Africa, Debian Mirror Download

uni-stuttgart.de: Official file server of the University of Stuttgart

icm.edu.pl: University of Warsaw, Poland

linux.org.tr: Linux Org, Turkey

Asia Pacific, Debian Mirror Download

xjtu.edu.cn: Xi’an Jiaotong University, China

jaist.ac.jp: Japan Advanced Institute of Science and Technology

poliwangi.ac.id: Politeknik Negeri Banyuwangi, Indonesia

overthewire.com.au: Over the Wire, Australia

References

debian.org: Main Debian Home Page
debian.org/intro/about: About Debian Page Official
wikipedia.org/wiki/Debian: Debian Wikipedia Page
debian.org/News/2017/20170617: Release Announcement for Debian Stretch
linuxhint.com/debian: Debian Articles on LinuxHint.com

Debian Package Search

SQLite Tutorial https://linuxhint.com/sqlite-tutorial/ Thu, 14 Dec 2017

Ever since man started creating systems, there have always been databases to go with them. Fast-forward to the present day, where technology has evolved and nearly everything is automated. When developing systems and applications, SQL has been the leading language developers use to create and administer databases. For SQL to run properly, it needs an engine, and that engine is responsible for handling operations and transactions throughout the database.

What is SQLite?

SQLite is a fast and simple open source SQL engine. While it is sometimes confused with full RDBMSs such as Postgres and MySQL, SQLite is different and performs at its peak under certain conditions. It is a library that implements a serverless, self-contained, zero-setup SQL database engine. Since it does not need configuration like other databases, it is easier to use and install. Unlike other databases, it is not a standalone process: you link the database into your application so that the records it stores can be accessed either dynamically or statically.

There has always been a misconception that SQLite is only for development and testing. While it is a great tool for that job, it is not limited to system testing. For instance, SQLite can handle a website receiving more than 100,000 visits per day on the lower side. The maximum size limit for an SQLite database is 140 terabytes, which is more than most applications ever reach.

Why should you use SQLite?

  • Since the system is serverless, it does not need an additional server process to function.
  • With no configuration, there is no need for setup or for an administrator to monitor it.
  • SQLite is compact: a full SQLite database fits in one cross-platform disk file, and the fully configured library is about 400 KiB, or about 250 KiB when some features are omitted.
  • It supports most of the SQL92 (SQL2) query language features, so it feels quite familiar.

Since it is written in ANSI-C, the API is easy to use and quite straightforward.


INSTALLATION

Since SQLite prides itself on surprisingly simple configuration, the installation process is quite straightforward. In this tutorial, we focus more on Linux than on other platforms. These days SQLite ships with almost all versions of the Linux operating system, so before bothering to install it, check whether it is already installed. To be sure, type this:

$ sqlite3

If properly installed, you should see the following result:

SQLite version 3.7.15.2 2013-01-09 11:53:05

Enter ".help" for instructions

Enter SQL statements terminated with a ";"

sqlite>

If not, it means SQLite has not been installed on your system. To install it, you can:

Go to the SQLite official page and download SQLite-autoconf-*.tar.gz from the section with the source code. After that, open a command line and run the following commands:

$ tar xvfz SQLite-autoconf-3071502.tar.gz

$ cd SQLite-autoconf-3071502

$ ./configure --prefix=/usr/local

$ make

$ sudo make install

You can also use the following method to install:

sudo apt-get update

sudo apt-get install sqlite3

Both these methods will do the same thing. You can confirm that installation is complete by running the first test.


Meta commands

Meta commands are mostly used for administrative operations, such as examining databases and defining output formats. The unique thing about all these commands is that they always start with a dot (.). Here are some of the more common ones that come in handy over time (see the example after this table).

Command      Description
.dump        Dumps the database, usually as SQL text
.show        Displays the current settings for various parameters
.databases   Lists database names and files
.quit        Quits the sqlite3 program
.tables      Shows a list of all current tables
.schema      Displays the schema of a table
.headers     Hides or displays the output table header
.mode        Selects a mode for the output table
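
For instance, the output-formatting commands are typically combined like this at the sqlite> prompt (the table assumes the product_x example created later in this tutorial):

sqlite> .headers on
sqlite> .mode column
sqlite> SELECT * FROM product_x;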

Standard Commands

When dealing with SQLite, there are common commands used for various activities in the database. They are referred to as standard commands since they are usually the most frequently used. They are classified into three groups according to their functions.

Data definition language

The very first group consists of the commands responsible for the storage structure and the methods of data access in the database. They are:

  • CREATE
  • DROP
  • ALTER

Data manipulation language

These are the commands mostly used to manipulate data in the database. Data manipulation includes adding, removing and modifying the data. In this section, we have:

  • INSERT
  • UPDATE
  • DELETE

Data query language

The last type of command enables users to fetch certain data from the database. Here we only have one:

  • SELECT

It is important to note that these are not the only commands SQLite supports. However, since we are at the beginner stage, we shall only be exploring these for now.


Creating a database

In SQLite3, a simple command is used to create a new database. Unlike with other RDBMSs, you do not need special privileges to do this. Remember that the database name should be unique. The following is the syntax for creating a database:

sqlite3 DatabaseName.db

A new database called linuxDB would be created as follows:

$ sqlite3 linuxDB.db
SQLite version 3.21.0 2017-10-24 00:53:05
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
SQLite>

You can confirm the creation of the database by using the .databases command.

sqlite>.databases
seq  name             file
---  ---------------  ----------------------
0    main             /home/SQLite/linuxDB.db

Creating tables

Since tables are the skeletal of the database, it is essential to know how to create them. To create a table means you have to name the table, define the columns and the data type for each column. This is the syntax for creating a table.

CREATE TABLE database_name.table_name(
column1 datatype PRIMARY KEY(one or more columns),
column2 datatype,
column3 datatype,
…..
columnN datatype
);

In action, this is what a sample table called product_x will look like. The ID is the primary key. You should always remember to specify fields that cannot be null.

sqlite> CREATE TABLE product_x(
   ID INT PRIMARY KEY     NOT NULL,
   NAME           TEXT    NOT NULL,
   UNITS          INT     NOT NULL,
   PRICE          INT,
   DISCOUNT       REAL
);

Drop table

This command is used when the developer wants to remove a table together with all its contents. You should always be careful when using this command, since once the table is deleted, all its data is lost forever. This is the syntax:

DROP TABLE database_name.table_name;

sqlite> DROP TABLE product_x;

Alter table

This command is used to edit the structure of a table without having to dump and reload the data. In SQLite, there are only two operations you can perform on a table with this command: renaming the table and adding a new column.

This is the syntax for renaming an already existing table and adding a new column respectively;

ALTER TABLE database_name.table_name RENAME TO new_table_name;
ALTER TABLE database_name.table_name ADD COLUMN column_def…;

For example, a table named product_x can be renamed to product_yz and we can add a new column to product_yz in the two lines of code below:

sqlite3> ALTER TABLE product_x
 ...> RENAME TO product_yz;

sqlite3> ALTER TABLE product_yz
 ...> ADD COLUMN manufacturer_name TEXT;

Insert query

The INSERT INTO command is used to add rows of data to a table in the database. The syntax for this is quite direct:

INSERT INTO TABLE_NAME (column1, column2,…columnN) VALUES (value1, value2,…valueN);

column1, column2,…columnN are the names of the columns of the table you want to insert data into. If you are supplying values for all columns of the table, the column list can be omitted, but it is good practice in SQLite to name the columns explicitly, as in the example below.
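
Using the product_x table from earlier, a couple of sample rows could be inserted like this (the values are made up for illustration):

sqlite> INSERT INTO product_x (ID, NAME, UNITS, PRICE, DISCOUNT)
 ...> VALUES (1, 'Keyboard', 50, 25, 0.1);
sqlite> INSERT INTO product_x VALUES (2, 'Mouse', 80, 15, 0.0);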

SELECT Query

The SELECT statement in SQLite is primarily used to fetch data from the SQLite database and return it in the form of a result set. This is the syntax for using the SELECT statement:

SELECT column1, column2, columnN FROM table_name;

In the above syntax, column1, column2, … are the respective fields of the table from which you want to fetch values. If you want to fetch all fields of the table, use the following syntax; the wildcard (*) basically means 'all'.

SELECT * FROM TABLE_NAME;

UPDATE Query

In a database, records need to change for one reason or another. Suppose a user wants to change their email address on your platform; this is exactly the command you need to make that happen. When using the UPDATE clause, we must also use the WHERE clause to select the rows to update. If not, you will find that all the rows have been updated, which would be really bad. This is the syntax for the operation:

UPDATE table_name
SET column1 = value1, column2 = value2…., columnN = valueN
WHERE [condition];

If you have any number of conditions to be met, the AND and OR operators come in very handy. Example:

sqlite> UPDATE product_x
 ...> SET UNITS = 103 WHERE ID = 6;

The AND & OR operators

These are what could be called conjunctive operators. They are used to combine several conditions in order to narrow down the selected data in an SQLite environment. These operators make it possible for a developer to make multiple comparisons of values, using different operators, in one SQLite statement.

The AND operator allows multiple conditions to be used in conjunction with the WHERE clause. When using this operator, the overall condition is regarded as true only if all the conditions are met. This is the syntax for the AND operator:

SELECT column1, column2, columnN
FROM table_name
WHERE [condition1] AND [condition2]…AND [conditionN];

On the flip side, we have the OR operator, which is also used together with the WHERE clause. Unlike the AND operator, the overall condition is true if any one of the conditions is met. The syntax is pretty simple, and a combined example follows it:

SELECT column1, column2, columnN
FROM table_name
WHERE [condition1] OR [condition2]…OR [conditionN]
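
As a concrete sketch against the product_x table (the values are made up for illustration):

sqlite> SELECT NAME, PRICE FROM product_x
 ...> WHERE UNITS > 20 AND PRICE < 30;
sqlite> SELECT NAME FROM product_x
 ...> WHERE DISCOUNT > 0.2 OR PRICE < 10;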

Sources and Additional Info

http://linuxgazette.net/109/chirico1.html
http://www.yolinux.com/TUTORIALS/SQLite.html
https://www.sitepoint.com/getting-started-sqlite3-basic-commands/
https://www.digitalocean.com/community/tutorials/how-and-when-to-use-sqlite
http://www.thegeekstuff.com/2012/09/sqlite-command-examples/?utm_source=feedburner

History of the Linux Kernel https://linuxhint.com/history-linux-kernel/ Wed, 13 Dec 2017

Even though most people have heard of Linux, they still associate it primarily with various operating system distributions built around it. In this article, we describe the history of Linux as an open source operating system kernel, which is the central component of most computer operating systems that acts as a bridge between applications and the actual data processing done at the hardware level. The history of the Linux kernel is both fascinating and educational as it can teach us a lot about the underlying motivations of Linux developers and help us understand the direction the kernel is headed.

Introduction

What started as one man’s humble idea has grown to become the most important open source project ever created. The Linux kernel currently has over 20 million lines of code, and it runs on all of the world’s 500 most powerful supercomputers. It also runs on servers, desktops, laptops, TV boxes, routers, tablets, smartphones, wearable devices, and it powers much of the rapidly growing network of connected devices known as the Internet of Things.

Over 12,000 programmers from more than 1,200 companies have contributed to the project, including Intel, Red Hat, Linaro, Samsung, SUSE, IBM, and Microsoft. In other words, the Linux kernel is hugely important, and its future is looking brighter than ever.

Creation of the Linux Kernel

But it wasn’t always like this. Not too long ago, in 1991, the Linux kernel was nothing but an announcement made by Linus Torvalds, at the time a 21-year-old computer science student at the University of Helsinki, Finland.

“I’m doing a (free) operating system (just a hobby, won’t be big and professional like GNU) for 386 (486) AT clones. This has been brewing since April, and is starting to get ready. I’d like any feedback on things people like/dislike in MINIX, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things),” Linus posted to comp.os.minix, a newsgroup on Usenet, a worldwide distributed discussion system that predates current Internet forums.

In his historic announcement, Linus mentioned two other important projects: GNU and MINIX. The latter is a Unix-like computer operating system that was initially released in 1987 by Andrew S. Tanenbaum for educational purposes. Unix-like computer operating systems are inspired by Bell Labs’ original Unix computer operating system, often emulating its features and architecture. GNU is also a Unix-like operating system, initiated by Richard Stallman and first announced in 1983, but it differs from Unix in two important aspects: it’s free, and it doesn’t contain any Unix code.

Linus had been using MINIX during the time he spent as a student at the University of Helsinki in Finland. After he had become frustrated with MINIX’s licensing model, he decided to develop his own free alternative to Unix, one that would embrace the concept of free software that had only just started to become popular at the time thanks to Richard Stallman and his GNU General Public License (GPL), which guarantees end users the freedom to run, study, share and modify the software.

Linus started by porting some essential GNU components, and it remains true to this day that many Linux distributions heavily rely on GNU. “I’ve currently ported bash (1.08) [a Unix shell and command language written by Brian Fox] and gcc (1.40) [a compiler system produced by the GNU Project supporting various programming languages], and things seem to work. This implies that I’ll get something practical within a few months, and I’d like to know what features most people would want. Any suggestions are welcome, but I won’t promise I’ll implement them.”

In September 1991, version 0.01 of the Linux kernel was released on the FTP server of FUNET, the Finnish University and Research Network, containing 10,239 lines of code. When Linus announced version 0.02 on October 5, 1991, the Linux kernel still needed MINIX to operate, but the number of volunteers from around the world who decided to contribute to the project without expecting anything in return had been steadily increasing. In December of the same year, Linux kernel 0.11 was released as the first version that could be compiled by a computer running the same kernel version. With Linux kernel 0.12, released in February 1992, Linux officially adopted the GNU General Public License (GPL).

Release of Linux kernel 1.0.0

In March 1992, Linux kernel 0.95 became the first version of the Linux kernel capable of running the X Window System, which is a windowing system for bitmap displays that offers a basic framework for a GUI environment by providing a way for windows to be drawn on a display device and interacted with using a mouse and keyboard. The massive version-jump from 0.12 to 0.95 reflected the fact that the Linux kernel had matured and evolved into a full-featured system.

To cement this notion further, Linux kernel 1.0.0 was released on March 14, 1994. It had 176,250 lines of code, and you can still study the original code and read the original release notes, which state that the Linux kernel 1.0 “has all the features you would expect in a modern fully-fledged Unix, including true multitasking, virtual memory, shared libraries, demand loading, shared copy-on-write executables, proper memory management, and TCP/IP networking.”

Modern-Day Development of Linux kernel

The Linux kernel continued to be heavily improved through the 1990s, with version 2.0 released on June 6, 1996, and version 2.2.13, released on December 18, 1999, which allowed the Linux kernel to run on enterprise-class machines thanks to IBM mainframe patches.

After the arrival of the new millennium, Linux evolved into a worldwide development project with countless contributors from around the globe. You can see the complete changelog of everything that happened from December 17, 2001 to the present day by visiting this website. According to recent estimates, "The average number of changes accepted into the kernel per hour is 7.71, which translates to 185 changes every day and nearly 1,300 per week."

Considering that Linus never intended for his pet project to become so big, the Linux kernel is a true testament to the power of open source development and the ingenuity and skill of independent developers motivated by the desire to collectively create something great.

]]>
Upgrade To Latest SQLite3 on CentOS7 https://linuxhint.com/upgrade-to-latest-sqlite3-on-centos7/ Sat, 09 Dec 2017 18:31:09 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=20679 How to Upgrade to the Latest SQLite3 on CentOS7

SQLite 3 is a lightweight, simple, file-based database system. It is used by many developers, especially on devices with low hardware specs such as microcontrollers and embedded computers, where a lightweight database system is essential. Android also makes extensive use of SQLite databases.

In CentOS 7, SQLite 3.7 is already installed. You can't remove it because many other CentOS 7 packages depend on it, and CentOS 7 doesn't provide any way to update it through the official repositories. That makes updating the version of SQLite on CentOS 7 tricky.

In this article, I will show you how to update SQLite3 on CentOS 7 safely. Let’s get started.

Downloading Latest SQLite3:

First, go to the official website of SQLite at https://sqlite.org. You should see the following page:

Now click on “Download” as shown in the screenshot below.

You should see the following page. It contains download links for the latest version of SQLite3.

Now scroll down a little bit. You should see the section "Precompiled Binaries for Linux". From here, you can download a precompiled build of the latest stable SQLite3 database. Precompiled binaries are ready to use after download, as they don't require any compilation. Click on the file as shown in the screenshot.

Your browser should prompt you to download the file. Click on “Save File” and click on “OK”. Your download should start.
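
If you prefer the command line, you can also fetch the same archive with wget; a sketch, assuming the URL from the download page (the year directory and version number may differ by the time you read this):

$ wget https://www.sqlite.org/2017/sqlite-tools-linux-x86-3210000.zip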


Upgrading SQLite3:

Now open a Terminal and go to the directory where you downloaded SQLite3. By default, it should be the ~/Downloads directory in your user's home directory.

Run the following command to go to the ~/Downloads directory:

$ cd ~/Downloads

Now let’s list the contents of the ~/Downloads directory with the following command:

$ ls

You can see that the downloaded file is there.

Now we have to extract the zip archive.

To extract the downloaded zip file, run the following command:

$ unzip sqlite-tools-linux-x86-3210000.zip

There are only 3 files inside the zip archive, as you can see: 'sqlite3', 'sqldiff' and 'sqlite3_analyzer'.

We are interested in the 'sqlite3' file only.

Now we have to locate where the preinstalled sqlite3 program is.

To do that, run the following command:

$ whereis sqlite3

You can see that the preinstalled SQLite3 program is '/usr/bin/sqlite3'. Take note of that path, as we will need it later.

You could remove the '/usr/bin/sqlite3' file and replace it with the updated one, but I don't recommend doing that, because if any problem arises, you won't be able to go back easily. Instead, I recommend renaming the file. That way you will have both versions of SQLite3 installed, with the updated one as the default. If you run into any problem, just remove the new binary and rename the old one back to 'sqlite3', and you're done.

Before I rename the file, let’s check the version of the SQLite3 currently installed.

Run the following command:

$ sqlite3 --version

You can see that the version is 3.7.17.

I will rename the installed SQLite3 binary from ‘sqlite3’ to ‘sqlite3.7’.

To do that, run the following command:

$ sudo mv -v /usr/bin/sqlite3 /usr/bin/sqlite3.7

You can see that the rename operation was successful.

Now we can copy the latest stable sqlite3 binary that we got after we unzipped the downloaded zip archive to /usr/bin/.

To do that, run the following command:

$ sudo cp -v sqlite-tools-linux-x86-3210000/sqlite3 /usr/bin/

You can see that the copy operation was successful.

Now you can check the version of the installed SQLite3 again with the following command:

$ sqlite3 --version

You can see that the version is now 3.21. That is the latest version as of this writing.

You can also use the old version if you want. The old SQLite3 can be accessed as ‘sqlite3.7’ as shown in the screenshot below.
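
If the new binary ever causes trouble, rolling back is just the reverse of the steps above; a minimal sketch, assuming the paths used in this article:

$ sudo rm -v /usr/bin/sqlite3
$ sudo mv -v /usr/bin/sqlite3.7 /usr/bin/sqlite3
$ sqlite3 --version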

So that’s how you update SQLite3 on CentOS 7. Thanks for reading this article.

]]>
TLS vs SSL https://linuxhint.com/tls-vs-ssl/ Sat, 09 Dec 2017 03:00:38 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=20654 TLS and SSL Explained

Introduction to Public Key Cryptography

Before we go into details, we should review some key concepts that are crucial to understanding the subject. Both Transport Layer Security (TLS) and Secure Sockets Layer (SSL) take advantage of public (asymmetric) key cryptography to establish a secure communication channel.

While conventional symmetric cryptography has been around since at least ancient Egypt, public key cryptography was only invented in the 1970s. It utilizes a pair of keys: if you encrypt something with one key, for all practical purposes, it can only be decrypted with the other. Discussing why this is the case would involve math that is well outside the scope of this article.

What is the Difference Between TLS and SSL?

Both TLS and SSL use public key cryptography to share a more conventional, symmetric key (a choice of multiple cipher types is available) between two hosts. This process is called the handshake. The shared key is then used to encrypt the subsequent communication. So, what is the difference?

TLS 1.0 was introduced in 1999 as the successor to SSL 3.0. Some people think of it as SSL 4.0, and that is a very reasonable way to look at it. SSL is technically proprietary to Netscape, while TLS is an Internet Engineering Task Force standard, hence the difference in name, chosen to avoid potential legal issues. You can check this article for more details.

From a more technical perspective, TLS performs the handshake slightly differently from SSL. The connection starts as "insecure" and is later "upgraded" with the STARTTLS command. The name of the command is somewhat misleading, as it can be used to start both TLS and SSL connections. Please see this for more details.

The idea behind it was to allow upgrading to secure communication over normally insecure application ports. This way, an application only has to listen on one port instead of two. It turned out to be impractical, as a lot of client applications would send user credentials in plain text before the server could even tell them that plaintext is not supported. The request would fail, of course, but the credentials would already be compromised.

Why is TLS more secure than SSL?

Computer security is an arms race. SSL 3.0 was declared obsolete in 2015 because it has unfixable security vulnerabilities. To be fair, TLS 1.0 is not much of an improvement, as an attacker can force the client application to downgrade to SSL 3.0 by interrupting the handshake. TLS 1.1+ addresses this particular issue.

The main reason why SSL 3.0 is simply not secure anymore is, largely, that it does not support ciphers strong enough to counter the increases in computational (and sometimes legal) power available to attackers. It is simply obsolete. On top of that, it does not use the ciphers it does support as well as it should. For example, it has no mechanism to check padding contents when using block ciphers, and the infamous POODLE attack (among others) exploits this.

What measures to take?

This thread gives a really good overview of the measures you can take. Let’s summarize them briefly here.

From the client perspective, it is relatively simple. All modern web browsers (such as Firefox 27+) support TLS 1.2, so making sure that your browser is up to date is a good start. In fact, most of them will warn you if a website has outdated TLS, among other things. So, if you visit a website and your browser tells you that there is a problem with connection security, do take it seriously.

On the server end, you should consider displaying a warning to your customers if they are using an outdated security protocol. Assuming you are using Apache, you can do something like this:

SSLOptions +StdEnvVars
RequestHeader set X-SSL-Protocol %{SSL_PROTOCOL}s
RequestHeader set X-SSL-Cipher %{SSL_CIPHER}s

Then, in the case of PHP for example, you can access those values using $_SERVER inside your code. If you detect an older TLS version, you can say something along the lines of "Starting 30 June 2018 we will no longer be supporting TLS 1.0, as per the PCI Security Standards Council mandate. Please upgrade your web browser". By the way, the council was founded by the major credit card companies, and any eCommerce business operating in the US needs to comply with its security standards.

It is worth mentioning that there are free third-party tools you can use to scan for SSL/TLS vulnerabilities and even generate a configuration for your server. The Mozilla SSL Configuration Generator basically generates a TLS configuration appropriate for your server; all you need to do is make some choices.

The SSL Server Test by Qualys SSL Labs lets you enter a hostname and click "Submit". It will run a plethora of tests against your server and inform you of vulnerabilities, if any.
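
If you prefer to test from the command line, the openssl client can also probe a server for a specific protocol version; a quick sketch, where example.com is a placeholder hostname (note that your local OpenSSL build may have been compiled without SSL 3.0 support at all):

$ openssl s_client -connect example.com:443 -tls1_2
$ openssl s_client -connect example.com:443 -ssl3

If the second handshake fails, the server has SSL 3.0 disabled, which is exactly what you want.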

Secure Internet Is Everyone’s Responsibility

Using adequate encryption for your digital communication has never been as important as it is today. Keep calm and use open source. Good luck.

Bibliography

History of Cryptography, Wikipedia
Public-key Cryptography, Wikipedia
SSL vs TLS vs STARTTLS, FastMail Help & Support
SamuelChristie, Explanation of How to Detect TLS 1.0 Connections And, by Way of Custom Headers, Warn the User about the Coming Change to More Modern TLS Versions
Transport Layer Security, Wikipedia

]]>
Install Oracle JDK 9 on CentOS7 https://linuxhint.com/install-oracle-jdk-9-centos7/ Thu, 07 Dec 2017 13:23:12 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=20631 JDK, or Java Development Kit, is used by Java programmers all over the world to compile and run Java code. Java and the JDK are products of Oracle and are licensed by Oracle. There is also OpenJDK, which is an open source implementation of Java. Currently, the latest version of the JDK is 9. In this article, I will show you how to install Oracle JDK 9 on CentOS 7. Let's get started.


Downloading Oracle JDK 9:

Oracle distributes the JDK (Java Development Kit) for Java developers on their official website as a compressed tar archive and as an rpm file. As CentOS uses the RPM based package manager yum, we can install JDK on CentOS 7 either from the compressed tar archive or from the rpm file using the package manager. Since we can use the package manager, the rpm file is the more convenient choice, and it is what I will download in this article.

First go to the official website of Oracle at https://www.oracle.com from any browser of your choice.

Once the page loads, hover over "Menu". A drop-down menu should pop up. Now hover over "Downloads and Trials" and click on "Developer Downloads".

You should see the following page.

Scroll down a little bit and click on “Java SE (includes JavaFX)”.

You should see the following page. Now click on “Java Platform (JDK) 9” as shown in the screenshot below.

You should see the following page, which includes the download links for JDK 9. Before you can download JDK 9, you must accept the "Oracle Binary Code" license.

Click on the "Accept License Agreement" radio button to accept the "Oracle Binary Code" license.

Once the license is accepted, you will be able to download JDK 9.

Now click on “jdk-9.0.1_linux-x64_bin.rpm” as shown in the screenshot.

NOTE: By the time you read this article, the version of JDK 9 might change, so adapt it as required for the rest of the article.

Your browser should prompt you to download the file, just click on “Save File” and then click on “OK”.

Your download should begin. As you can see, it’s a pretty big file. So it may take a while depending on your internet connection.


Installing Oracle JDK 9:

Once the download is complete, open a terminal and go to the directory where you downloaded Oracle JDK 9. By default, it is the ~/Downloads directory in your user's HOME directory.

Run the following command to change to the Downloads directory:

$ cd ~/Downloads

Run the following command to see what is in the Downloads directory:

$ ls

You can see that the downloaded file 'jdk-9.0.1_linux-x64_bin.rpm' is in the directory.

You can install JDK 9 from an rpm package in 2 ways: you can use the yum package manager to install the downloaded rpm package file, or you can use the traditional rpm program to install it. The latter is important when yum is not available; the rpm program itself should already be installed on CentOS or Fedora. I will show you both ways in this article.


Using yum package manager:

To install JDK 9 using yum package manager, run the following command from the ~/Downloads directory:

$ sudo yum install jdk-9.0.1_linux-x64_bin.rpm

Now type ‘y’ and press <Enter> to continue.

The installation should start. It may take a while to complete. You should see the following window once the installation is complete.

Now run the following command to test whether JDK 9 is working:

$ javac -version

You can see that the version is 9.0.1, so it is working correctly.

Using rpm:

You can also install the JDK 9 rpm package file using the rpm program.

To install JDK 9 using rpm, run the following command:

$ sudo rpm -i jdk-9.0.1_linux-x64_bin.rpm

Once you press <Enter>, the installation should start. On completion, you should see something like this.

You can run the following command to test again whether the installation was successful:

$ javac -version
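
You can also ask the RPM database directly to confirm that the package was registered; this should print the installed JDK package name:

$ rpm -qa | grep -i jdk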

That’s how you install Oracle JDK 9 on CentOS 7. Thanks for reading this article.

]]>
Upgrade to Linux Mint 18.3 https://linuxhint.com/upgrade-to-linux-mint-18-3/ Wed, 06 Dec 2017 07:39:33 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=20540 History and Versions of Linux Mint

Over the years, Linux Mint has become a popular Linux distribution. It was conceived by Frenchman Clement Lefebvre in 2006. The objective of Linux Mint is to create a “modern, elegant, and comfortable” version of Linux.

Linux Mint is developed by the open source community. The community tries to make the operating system easy to use. Users get multimedia support out of the box. The community regularly monitors feedback from users and tries to incorporate the suggestions into each release. The operating system provides around 30,000 packages and one of the best software managers on Linux.

The first version, Linux Mint 1.0 "Ada", was released in 2006 and was based on Kubuntu. Linux Mint 2.0 "Barbara" switched to the Ubuntu 6.10 codebase. Since 2008, Linux Mint has followed Ubuntu's release cycle, with each release generally arriving about one month after the corresponding Ubuntu release.

In 2010, Linux Mint started the Debian-based Linux Mint Debian Edition (LMDE). The LMDE distribution has a smaller user base.

The latest version of Linux Mint is 18.3, codename Sylvia. It is a long-term support (LTS) release. It will be supported until 2021.

The Editions of Linux Mint

The Ubuntu-based Linux Mint desktop comes in 4 flavors:

  • Cinnamon: It is the most developed and best looking Mint desktop. It is also the most used version of Linux Mint environment. This edition is based on Gnome 3. It is a stable and reliable environment.
  • MATE: This desktop environment is a continuation of the Gnome 2 desktop.
  • KDE: This edition has lots of configuration options, but it's geared towards advanced users who like to play around with their environment.
  • Xfce: Xfce is a lightweight version of the Linux Mint desktop environment. It’s a useful edition for older computers with less processing and memory power.

New Features and Changes in Linux Mint 18.3

The Linux Mint 18.3 "Sylvia" Cinnamon and MATE editions were released on November 27, 2017. The BETA versions of KDE and Xfce were released on December 1, 2017. Linux Mint 18.3 "Sylvia" is based on the Linux kernel 4.10 and the Ubuntu 16.04 package base.

Here are some of the new features and changes in Linux Mint 18.3:

Better Software Manager with Flatpak Support

The Linux Mint Software Manager was showing its age, so the team upgraded the code to improve the look and feel as well as the performance. Popular applications like Skype, Spotify, WhatsApp, Minecraft and Google Earth are now in the Featured Applications section. The user interface has a more modern look.

The backend was ported away from Aptdaemon. The Software Manager now runs in user mode, so no password is needed for browsing applications, and for application installation and removal, the software remembers the password for a short period of time. This makes installing and removing applications more convenient and fast.

Flatpak is now part of the default Linux Mint installation, which helps support bleeding-edge applications. Flatpaks are different from regular packages, but they are represented in the same way in the Software Manager. However, after installation, flatpaks run in their own isolated environment.

Improved Backup Tool and New Timeshift

The Backup Tool has been rewritten to make it simpler to use. Your home directory is saved into a tar archive. Also, the tool runs in user mode, so you don't need to enter your password every time.

A new tool called Timeshift is now responsible for system snapshots. Timeshift makes it easier to recover from a botched installation or a system error. It backs up operating system data only; your personal data will not be saved.

New System Reports Tool

System Reports is a new tool developed for Linux Mint 18.3. It provides crash reports and information reports. The crash report is an easy way to produce core dumps and stack traces, which will help with debugging system-level problems. The information report is kind of like an interactive release note; it provides targeted information to users about particular hardware configurations like CPUs or graphics cards.

XApps Improvements

The text editor Xed now includes a minimap. Also, the toolbar of Xreader, the PDF reader, has been improved by replacing the history buttons with navigation buttons and adding zoom buttons. The XPlayer window has a cleaner look.

Better Login Screen

The Login Screen has more options than before. You can also choose automatic login, and the user list can be hidden, which is useful for LDAP setups. Support for numlockx has been added.

Artwork Improvements

Linux Mint 18.3 adds a number of beautiful backgrounds contributed by open-source community artists.

Other Improvements

  • Spell-check and synonym support have been added for English, German, Spanish, French, Italian, Portuguese and Russian.
  • The Driver Manager detects your CPU and customizes the microcode package information.
  • The Upload Manager and Domain Blocker have been removed from the default software selection.
  • The PIA Manager now runs in user mode.

Edition Specific Features:

  • Cinnamon: It features Cinnamon 3.6 that supports GNOME Online Accounts. It supports both Synaptics and Libinput drivers out of the box. The configuration module for Cinnamon spices has been updated. The on-screen keyboard has been improved to work more smoothly. The default setting for HiDPI support on Cinnamon 3.6 is set to “Auto”. Applications can communicate their progress to the window manager which is visible in the panel window list.
  • MATE: It features MATE 1.18.
  • KDE: It features KDE Plasma 5.8.

  • Xfce: It features Xfce 4.12. It has more attractive and customizable notifications with better themes and symbolic icons. The terminal has been updated to version 0.8.0.


Upgrade Process for Linux Mint 18.3

You can upgrade from Linux Mint 18, 18.1 and 18.2 to 18.3. Only the Cinnamon and MATE editions are available for now; the Xfce and KDE editions will be released later. Here are the upgrade steps:

Backup and Prepare System

Install Timeshift with the following commands:

sudo apt update
sudo apt install timeshift

Launch and configure Timeshift to create a system snapshot. You will need the backup if something goes wrong.
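
If you prefer the terminal, recent Timeshift releases also ship a command line interface; a minimal sketch, assuming your build supports these flags:

sudo timeshift --create --comments "Before upgrading to Linux Mint 18.3"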

Also, disable the screensaver, and remember to upgrade the Cinnamon spices (if installed) from the System Settings.

Upgrade the System

Go to the Update Manager and click the Refresh button. If new versions of mintupdate and mint-upgrade-info show up, apply them.

  • Click on Edit->Upgrade to Linux Mint 18.3 Sylvia to start the process.
  • Follow the on-screen instructions. Replace configuration files when asked.
  • After a successful upgrade, reboot your computer.

]]>
Install Jenkins on Ubuntu https://linuxhint.com/install-jenkins-on-ubuntu/ Mon, 04 Dec 2017 12:25:32 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=20481 How To Install Jenkins on Ubuntu

Jenkins is an open source automation server. It is used for building software automatically, unit testing different programming projects, and much more. Jenkins can be used as a continuous integration (CI) server. It is easy to install and configure, and it supports many extensions. Many programming languages are supported by Jenkins, such as PHP, Python and C/C++.

In this article, I will show you how to install Jenkins 2.73.3 LTS on Ubuntu 17.10 Artful Aardvark. Let’s get started.

First, go to the official website of Jenkins at https://www.jenkins.io from your favorite web browser. I am using Firefox.


Now click on the red “Download” button. You should see the following page as shown in the screenshot.

Now scroll down a little bit. You should see the Jenkins 2.73.3 LTS download link for Ubuntu as shown in the screenshot. Click on Ubuntu/Debian.

You should see the following page as shown in the screenshot.

You can install Jenkins 2.73.3 in two ways. You can either add the Jenkins repository to your Ubuntu operating system and use the apt package manager to install Jenkins, or you can download the deb file of Jenkins 2.73.3 from this page and use dpkg to install it.

Using dpkg to install Jenkins is not recommended, as dependency problems may arise that you would need to fix manually. So I prefer using the apt package manager.

Again, using the package manager has its advantages. You will get notified if a new version of Jenkins is available, and if your system is configured to update packages automatically, then Jenkins will also be updated automatically.

If you don't want to update Jenkins automatically, you can arrange that with the package manager as well. I will show you how to do that at the end of the article.

I will use the apt package manager to install Jenkins in this article, because I feel it's easier and safer.


Installing Jenkins on Ubuntu 17.10:

To install Jenkins using apt package manager, first we have to add the GPG key of Jenkins. Run the following command to add the GPG key:

$ wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -

The GPG key has been added.

Now we have to add the Jenkins package repository to our Ubuntu 17.10 operating system. To do that, create a new file /etc/apt/sources.list.d/jenkins.list using the following command.

$ sudo touch /etc/apt/sources.list.d/jenkins.list

Now add the repository line to the newly created file with the following command:

$ echo "deb https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list


We can verify that the Jenkins entry was added to the jenkins.list file with the following command:

$ cat /etc/apt/sources.list.d/jenkins.list

You can see that the line was added to the jenkins.list file.

Now we have to update the package repository cache of Ubuntu 17.10 operating system with the following command:

$ sudo apt-get update

If you try to install Jenkins using apt, the latest version of Jenkins will be installed. As of this writing, that is Jenkins 2.73.3, but in the future it will be a different version.

So to install the latest version of Jenkins, run the following command:

$ sudo apt-get install jenkins

If you want to install a specific version of Jenkins, say Jenkins 2.73.3, you can specify the version when you install it. This is what I will be doing, as this is what this article is about.

Run the following command to install Jenkins 2.73.3:

$ sudo apt-get install jenkins=2.73.3

Press 'y' and then press <Enter> to continue. It may take a while to download and install everything.

On completion of the installation, you should see something like this.
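
Before opening the web interface, you can confirm that the Jenkins service actually started; on a systemd-based Ubuntu release, the following should report the service as active:

$ sudo systemctl status jenkins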

Once the installation is complete, open a web browser and go to http://localhost:8080

You should see the following page. The path of the file containing the default admin password is marked in red text on this page. We have to copy that password and paste it into the "Administrator password" field.

To find the password, just copy the location of the file and use the ‘cat’ command to read the file as shown below:

$ sudo cat /var/lib/jenkins/secrets/initialAdminPassword

The output is a hex string. Copy it and paste it on the page.

Now click on "Continue".

Now you should see the following page. If you're a Jenkins expert, you can select the plugins to install yourself. But if you're new to Jenkins, you should select "Install suggested plugins", which installs the most common plugins used by the Jenkins community. I will select "Install suggested plugins".

The plugin installation process should start. Depending on your internet connection, this process may take a while to complete.

Once the plugin installation is complete, Jenkins asks for some details so that it can create the first admin user. Fill in the details and click on the blue button that says "Save and Finish".

Now click on “Start using Jenkins”.

You should see the following page. This is the management interface of Jenkins.

NOTE: This section is optional.

If you want to keep using Jenkins 2.73.3 and you don't want your package manager to update Jenkins automatically, you can put the Jenkins package on hold. You can do this with the apt package manager from the command line.

To hold the package, run the following command:

$ sudo apt-mark hold jenkins

You can also remove the hold. If you do so, the Jenkins package will be updated along with the other packages of your Ubuntu operating system, as before.

To remove the hold, run the following command:

$ sudo apt-mark unhold jenkins
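
To verify which packages are currently on hold, and confirm whether jenkins is among them, you can list them with:

$ apt-mark showhold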

So this is how you install Jenkins 2.73.3 on Ubuntu 17.10 Artful Aardvark. Thanks for reading this article.

]]>
Ubuntu: Get a List of Installed Packages https://linuxhint.com/ubuntu-list-installed-packages-2/ Mon, 04 Dec 2017 11:31:10 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=20475 How to List Installed Packages on Ubuntu

On an Ubuntu operating system, a lot of packages come pre-installed. But you can also install new packages on top of that to enhance your experience as a user. Sometimes it is necessary to find out how many packages are installed, whether a specific package is installed or not, what version of that package is installed, what architecture the package belongs to, and so on. The good news is that you can find out all of this information.

I will show you how to find this information in this article. I will be using Ubuntu 17.10 Artful Aardvark for all the demonstrations. So let's get started.

List all the installed packages:

You can use ‘dpkg’ command line utility to list all the installed software packages of your Ubuntu operating system from the terminal.

Run the following command to get a list of all the installed packages of Ubuntu:

$ dpkg --list

From the output of the command, you can see that the first column shows the status of the installed package, the second column is the name of the package, the third is the version, the fourth is the architecture, and the fifth is the description of the package.

The two letters 'ii' here mean that the package should be installed, and it is installed. The first letter describes the desired package status; the second letter describes the current status of the package.

Find whether a specific package is installed:

Let's say you have a computer with Ubuntu installed and you want to find out whether the openssh package is installed. You can easily do that: run 'dpkg --list' like before and filter the output with 'grep' or 'egrep'.

Run the following command to find whether openssh package is installed:

$ dpkg --list | grep openssh

You can see that I have openssh-client, openssh-server and openssh-sftp-server packages installed on my Ubuntu 17.10 operating system.

Can you tell the version of these packages? Well, you can. It's 7.5p1-10.

You can also tell the architecture, which is amd64 in this case.

You can also add more conditions, like whether a specific version of a specific package is installed. Let's find out whether nano version 2.8 is installed.

Run the following command to find whether nano 2.8 is installed:

$ dpkg --list | grep nano | grep 2.8

You can see that the package was found.

You can add any number of conditions, just use more grep commands.

Find out how many packages are installed:

You can also find out how many packages are installed on your Ubuntu operating system. This is a little bit tricky, but possible. All you have to do is count the number of lines in the output of the 'dpkg --list' command and subtract the number of lines taken up by the header. That's it.

From the previous output, you can see that the header consists of 5 lines, so we have to subtract 5 from the line count.

Run the following command to find out how many packages are installed:

$ echo $((`dpkg --list | wc -l` - 5))

You can see that I have 1570 packages installed on my Ubuntu operating system right now.
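
As an alternative sanity check, dpkg-query can print just the package names, one per line, so no header arithmetic is needed; a quick sketch:

$ dpkg-query -W -f='${binary:Package}\n' | wc -l

The two counts should be close, though they may differ slightly depending on how packages in intermediate states (for example, removed but not purged) are listed.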

So that’s how you list installed packages on Ubuntu 17.10 Artful Aardvark. Thanks for reading this article.

]]>
MariaDB Tutorial https://linuxhint.com/mariadb-tutorial/ Sat, 02 Dec 2017 05:34:15 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=20432 For the longest time, the MySQL database management system (DBMS) has been the go-to choice for database-driven applications. However, when Oracle acquired MySQL, there were serious concerns in the community regarding its open source nature. This resulted in the development of MariaDB by the founders of MySQL. This tutorial focuses on introductory concepts in MariaDB which will hopefully push you to get a more in-depth understanding and master this database management system.

For a better understanding of this tutorial, prior experience with relational database management systems, MySQL, query languages and general programming concepts is advantageous.


Introduction

MariaDB is the next step in database management. It has the adaptability needed to serve both enterprise needs and smaller data processing jobs. Since there are some similarities with MySQL, you can simply uninstall MySQL (if you have it) and install MariaDB in its place. MariaDB is a relational database management system (RDBMS) and as such stores data in multiple tables. The relationships between these tables are maintained using established primary and foreign keys. Before we go any further, let's look at the most essential features of MariaDB:

  • There is a vast selection of storage engines, some of which are high-performance engines to facilitate working with other RDBMS sources.
  • The querying language in MariaDB is standard and quite popular SQL – Structured Query Language.
  • MariaDB is flexible and versatile being supported by multiple operating systems and programming languages.
  • MariaDB uses Galera cluster technology to achieve high performance and scalability through replication.
  • MariaDB supports PHP and offers a lot more commands than MySQL, which can make a difference in performance.

Installation

All the download resources you need at this point can be found on the official website of the MariaDB Foundation. There you will be given multiple options for various operating systems and architectures. Choose an appropriate one and download it.

On UNIX/LINUX

If you have a mastery of Linux, you can simply download the source and do the build yourself. The safest bet here is using the packages for the various distributions. Packages are available for:

  • Ubuntu/Debian
  • CentOS/Fedora/RedHat

Also, these distros have a MariaDB package inside their repositories:

  • Slackware
  • Mageia
  • Arch Linux
  • Mint
  • openSUSE

Installation steps on Ubuntu

  1. Log in as the root user, since you need unfettered access while doing the installation.
  2. Go to the directory that has the MariaDB package – this is the directory you downloaded the package into. At this point, we shall import the GnuPG signing key using the following command:
    sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 0xcbcb082a1bb943db
  3. The next step is to add MariaDB to the file called sources.list. Use the following command after opening the file:
    sudo add-apt-repository 'deb http://ftp.osuosl.org/pub/MariaDB/repo/5.5/ubuntu precise main'
  4. Use the following command to refresh the system:
    sudo apt-get update
  5. Install with the following command:
    sudo apt-get install mariadb-server

Creating a database

Creating and deleting databases needs administrative permissions, i.e., root user privileges. There are two ways to do this:

mysqladmin binary

This is a straightforward way of creating a database. The following is the code to create a new database called LINUXHINTS.

[root@host]# mysqladmin -u root -p create LINUXHINTS
Enter password: ******

PHP Create Database Script

Here, PHP uses the mysql_query function to create a MariaDB database. This function takes two parameters and returns "true" when successful and "false" otherwise. Here are a sample code and syntax:

<html>
   <head>
      <title>Create MariaDB Database</title>
   </head>
   <body>
      <?php
         $dbhost = 'localhost:3036';
         $dbuser = 'root';
         $dbpass = 'root password';
         $conn = mysql_connect($dbhost, $dbuser, $dbpass);
         if(! $conn ) {
            die('Failed to connect: ' . mysql_error());
         }
         echo 'Connected successfully';
         $sql = 'CREATE DATABASE LINUXHINTS';
         $result = mysql_query( $sql, $conn ); 
         if(! $result ) {
            die('Failed to create the database: ' . mysql_error());
         }
         echo "Database LINUXHINTS creation successful\n";
         mysql_close($conn);
      ?>
   </body>
</html>

Drop database

This function also needs administrative privileges to execute. A query that takes two parameters and returns either true or false is executed: bool mysql_query( SQL, connection );

Here is a sample PHP code snippet for deleting a database:

<html>
   <head>
      <title>Delete  MariaDB Database</title>
   </head>

   <body>
      <?php
         $dbhost = 'localhost:3036';
         $dbuser = 'root';
         $dbpass = 'root password';
         $conn = mysql_connect($dbhost, $dbuser, $dbpass);
      
         if(! $conn ) {
            die('Could not connect: ' . mysql_error());
         }
         echo 'Connected successfully';
         
         $sql = 'DROP DATABASE LINUXHINTS';
         $retval = mysql_query( $sql, $conn );
         
         if(! $retval ){
            die('Could not delete database: ' . mysql_error());
         }

         echo "Database LINUXHINTS deleted successfully\n";
         mysql_close($conn);
      ?>
   </body>
</html>

Selecting database

Assuming you did not actually delete the database in the previous section and it is still available on your localhost/server, you must now select it to start using it. Otherwise, you will have to create it again before proceeding with the next steps.

To select the database, we employ the "USE" SQL command. Below is the syntax:

USE database_name; 

Creating tables and dropping them

Tables are the glue of an RDBMS. Before creating a table, you should already know its name, the names of its fields and their corresponding definitions. Here is the general syntax:

CREATE TABLE your_table_name (column_name column_type);

For example:

CREATE TABLE comments_tbl(
   -> comment_id INT NOT NULL AUTO_INCREMENT,
   -> comment_content VARCHAR(1000) NOT NULL,
   -> commenter_name VARCHAR(50) NOT NULL,
   -> submission_date DATE,
   -> PRIMARY KEY ( comment_id )
   -> );

To confirm that the table was created, use the "SHOW TABLES" command.
To drop a table, use the "DROP TABLE" command.

mysql> use LINUXHINTS;
Database changed
mysql> DROP TABLE comments_tbl;

Insert query

Information must first exist in a table before it can be manipulated. Hence, we must first add information using the INSERT command. Below is the syntax for insertion:

INSERT INTO table_name (field,field2,...) VALUES (value, value2,...);

For example

INSERT INTO users_tbl
(user_id, user_name, user_address, signup_date)
VALUES
(1,'John','Texas','2017-11-07 00:00:00'),
(2,'Jane','Vegas','2017-12-07 00:00:00');

Select query

Since we have inserted data into our table, we can now query it. SELECT statements are used to query data from one or more tables. SELECT statements can include UNION statements, a LIMIT clause, an ORDER BY clause, among others. This is the general syntax:

SELECT field, field2,... FROM table_name, table_name2,... WHERE...

Where clause

This clause is essentially used to filter statements such as UPDATE, SELECT, INSERT and DELETE. It specifies the criteria for the action in question. This is the general syntax:

[COMMAND] field,field2,... FROM table_name,table_name2,... WHERE [CONDITION]

Example

mysql> use LINUXHINTS;
Database changed
mysql> SELECT * from users_tbl WHERE user_address = 'Vegas';
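
The same clause drives updates as well. For instance, a hypothetical correction to one of the rows inserted earlier could look like this:

mysql> UPDATE users_tbl SET user_address = 'Dallas' WHERE user_id = 1;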

These are only the basic concepts surrounding MariaDB. However, with mastery of these commands, you can now advance your knowledge further and build a big MariaDB-driven system.


Sources

https://www.tutorialspoint.com/mariadb/
https://mariadb.org/learn/
https://www.tecmint.com/learn-mysql-mariadb-for-beginners/
https://www.techonthenet.com/mariadb/index.php
https://www.javatpoint.com/mariadb-tutorial
https://mariadb.com/kb/en/library/training-tutorials/

]]>
Install Oracle JDK 9 on Ubuntu 17.10 https://linuxhint.com/install-oracle-jdk9-ubuntu/ Thu, 30 Nov 2017 12:23:07 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=20407 Install Oracle JDK 9 on Ubuntu 17.10

JDK, or the Java Development Kit, is used to develop Java applications. It is used by Java developers all over the world. There are two implementations of the JDK. One is developed by the open source community and is called OpenJDK. The other is developed by Oracle and is called simply the JDK. OpenJDK is completely free of charge, and you're free to do whatever you want with the source code; basically, it's more open in nature. The JDK provided by Oracle is licensed to Oracle and has some proprietary components. It's also free, but it's not as open in nature as OpenJDK.

In this article, I will show you how to install Oracle JDK 9 on Ubuntu 17.10 Artful Aardvark. Let’s get started.


Downloading Oracle JDK 9

Oracle provides a compressed tar file and an rpm file of Oracle JDK 9 for Linux. On CentOS/RHEL or any other RPM based Linux distribution, you can easily install Oracle JDK 9 using the rpm package file. But on other distributions such as Ubuntu, Debian, Slackware etc., you should use the compressed tar file. Since I am using Ubuntu 17.10 in this article, I will use the compressed tar file.

To download Oracle JDK 9, go to https://www.oracle.com from any web browser and click on the “Menu” and then hover over “Downloads and Trials” and then click on “All Downloads and Trials” as shown in the screenshot below:

You should see the following window. Scroll down a little bit and click on "Java for Developers".

Then click on the "Java Platform (JDK) 9" icon as shown in the screenshot:

You should see the following window. First you have to accept the license agreement. Then you will be able to download Oracle JDK 9 for Linux.

Click on “Accept License Agreement” as shown in the screenshot.

Once you accept the license agreement, click on the file that says "Linux" in the "Product / File Description" column and whose file name ends with tar.gz, as shown in the screenshot.

Now save the file. It’s a pretty big file, and may take a while to download.


Installing Oracle JDK 9

Once the download is complete, open a Terminal (Ctrl+Alt+T on Ubuntu) and go to the directory where the file is downloaded with the following command:

$ cd DIRECTORY_PATH_WHERE_YOU_DOWNLOADED_THE_FILE

Now run the following command to extract the file into the /opt directory. Note that /opt is the directory where I am installing Oracle JDK.

$ sudo tar xvzf jdk-9.0.1_linux-x64_bin.tar.gz -C /opt

You can verify that the file was extracted into /opt:

$ ls /opt

Note the name of the directory, which in my case is 'jdk-9.0.1'.

Now we have to add Oracle JDK 9 to our PATH. To do this, edit the /etc/bash.bashrc file with the following command:

$ sudo nano /etc/bash.bashrc

You should see something like this.

At the end of the file, add the following two lines, then save the file by pressing Ctrl+X, followed by 'y' and <Enter>.

export JAVA_HOME=/opt/jdk-9.0.1
export PATH=$PATH:${JAVA_HOME}/bin

Now restart your computer with the following command:

$ sudo reboot

Once your computer boots, you can run the following commands to test whether Oracle JDK 9 is in the PATH:

$ whereis java
$ javac -version

You can see that java was found in the correct directory.

The version of the java compiler is also 9.

I will just write a simple program and show you that it compiles successfully on JDK 9.
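
For example, a quick smoke test from the terminal could look like this; HelloWorld is just a hypothetical test class, not something shipped with the JDK:

$ cat > HelloWorld.java <<'EOF'
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello from JDK 9!");
    }
}
EOF
$ javac HelloWorld.java
$ java HelloWorld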

You can see that the program compiled and ran correctly.

So, that’s how you install Oracle JDK 9 on Ubuntu 17.10 Artful Aardvark. Thanks for reading this article.

]]>
GNURoot Tutorial https://linuxhint.com/gnuroot-tutorial/ Wed, 29 Nov 2017 02:51:20 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=20396 GNURoot Debian

GNU/Linux has gained a reputation the world over as an operating system that gives users a full experience and lets them customize everything according to personal interests and style. As you may already know, the Android operating system was designed on top of the Linux kernel. Having said that, Android runs on top of the Linux kernel while GNU/Linux is an independent operating system; hence, they are not entirely similar.

Android apps have limited access to hardware resources compared to regular desktop apps. Therefore, a superuser would find it hard to use an Android app instead of a desktop one. Luckily, this situation can be rectified by adding a GNU/Linux environment to the Android device. This involves installing and configuring GNURoot Debian to provide a Linux environment to work with. This can be helpful when you are on the move but only have access to an Android gadget, be it a smartphone or a tablet. It is worth noting that what we are doing in this article is not like running a full-blown Linux distro installation on Android. Instead, we are only adding a program that in turn installs a Linux sub-system. This sub-system comes with a range of toys such as apt-get and even the ability to launch a small X server. Well, let's get started.


Installation

Before we begin, note that a GNU/Linux environment can be installed on any Android device, whether it is rooted or not. However, since many users do not want to void their warranties, they do not root their devices. As such, this tutorial assumes your device is not rooted.

Basically, setting up the GNU/Linux environment involves the installation of two components, namely the GNURoot Debian app and Xserver XSDL. GNURoot's primary purpose is to create the Linux environment inside the host operating system, which in our case is Android. Usually, Linux's "chroot" functionality comes into play here, but since we do not have root privileges, the GNURoot app uses a piece of software called "proot" to accomplish this. Xserver XSDL connects to GNURoot to help with the processing of heavy graphics, which is the primary function of X servers.

How to Install

  1. Visit the Google Play Store and search for Xserver XSDL and GNURoot Debian.
  2. After the download and installation are complete, find the GNURoot app in the app drawer and run it. At this point, a "root" shell appears; it is fake, because the app installs a "counterfeit" Linux root file system, so you can safely ignore that.
  3. The next step is ensuring that you have the most recent versions of all files. As such, you have to run the apt-get update and apt-get upgrade commands, since you are now in an Ubuntu/Debian Linux environment.
    $ apt-get update
    $ apt-get upgrade

  4. The next step involves setting up an environment for handling graphics. This is done simply by running the "apt-get install lxde" command to get the graphical environment together with all the tools that come with it; alternatively, you can run the "apt-get install lxde-core" command if you are only interested in the core desktop environment.
    $ apt-get install lxde
    $ apt-get install lxde-core

  5. The next phase is creating a path to the terminal using the graphical environment. To do this, a software program called XTerm is used. You also need the Synaptic Package Manager, a graphical front end to apt-get, and Pulseaudio drivers so that you can hear audio playback. Use the following command:
    $ apt-get install xterm synaptic pulseaudio

  6. The final step is starting Xserver XSDL and downloading all the necessary fonts. After doing that, return to GNURoot and run the commands below (export is a shell builtin, so it is run directly):
    $ export DISPLAY=:0 PULSE_SERVER=tcp:127.0.0.1:4712
    $ startlxde &

  7. After that, go back to Xserver XSDL and relax as you await the LXDE desktop.

Installing Linux Apps on Android

Now that we have successfully installed Debian Linux and it is running on our device, we need Linux apps to enjoy the full Linux experience. At this stage, the Synaptic Package Manager comes into play, since it provides access to the vast repository of Linux apps that can be installed on the device.

Simply access Run from the start menu at the bottom, type "synaptic" and hit Enter. When the Synaptic Package Manager launches, all you have to do is find the name of the app you want in the list and select it for installation. When the apps finish installing, your device should be ready to go.

An important aspect to keep in mind is that we are not working with a full Linux package, so some apps will not run as smoothly as they do on a full installation, and some will not run at all. So it should not come as a shock to you, nor should you start thinking you missed a step during the installation process. The Android apps that were previously present can still be accessed as usual. Even though most apps will work just fine, those that require hardware acceleration, like some games, are likely to run into problems.

For those of us who use Linux nearly all the time and need to pull off some Linux moves with just an Android device, this app will serve you right. Wherever you are, you can quickly fire up the command prompt and use apt-get to install any command line tool you need, be it Wget, Traceroute or even SSH. If you want to enjoy Linux apps on your un-rooted Android device, then GNURoot Debian is the most straightforward method out there. Getting used to the smaller screen takes some time, but once you do, you can get the hang of things and actually be more productive on your handheld gadget.
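
For instance, grabbing those three tools in one go from the GNURoot shell is a one-liner (on Debian, the ssh command comes from the openssh-client package):

$ apt-get update && apt-get install wget traceroute openssh-client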

Sources and Additional Info

https://www.fossmint.com/install-run-linux-on-android-device/

https://debril.org/2015/10/01/to-write-php-applications-with-android-use-gnuroot-debian/

https://www.xda-developers.com/guide-installing-and-running-a-gnulinux-environment-on-any-android-device/

https://www.techrepublic.com/article/use-gnuroot-to-install-a-gnulinux-distribution-on-your-android-device/

]]>
7 Tips to Optimize Your Docker Images https://linuxhint.com/7-tips-optimize-docker-images/ Tue, 28 Nov 2017 06:22:38 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=20390


7 Tips to Optimize Your Docker Images

#1 Design Container Images for a Single Job

Focusing a container image on a single job keeps it small and lightweight. Making a container image multi-purpose will bloat its size.

#2 Install Required Packages Only

Install the bare minimum packages needed for the single job the image will be used for.

#3 Reduce Number of Layers

Each RUN command creates a new layer. Combining commands reduces the number of layers and can reduce the image size. So, smart combinations of commands can lead to smaller images.
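
As a sketch, the two Dockerfile fragments below install the same package, but the second collapses three RUN instructions (and therefore three layers) into one; the package name is just an example:

RUN apt-get update
RUN apt-get install -y curl
RUN rm -rf /var/lib/apt/lists/*

versus the single-layer equivalent:

RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*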

#4 Avoid Storing Application data

Storing application data in the container will balloon up your images. For production environments, always use the volume feature to keep the container separate from the data.
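
For example, a named volume keeps the data outside the container's writable layer; a minimal sketch, where mydata, /var/lib/myapp and my-image are placeholder names:

docker run -d -v mydata:/var/lib/myapp my-image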

#5 Avoid Using :latest

Using specific tags ensures that you know the exact image being used from the Docker registry, so you don't get any surprises when :latest changes.

#6 Sort Multi-line Arguments

Whenever you have a multi-line argument, sort the arguments alphanumerically to improve the maintainability of the code. Haphazard arguments can lead to duplications, and they are also harder to update.

#7 Use .dockerignore

Use a .dockerignore file to exclude unnecessary files and folders that complicate the build process and bloat the image.
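
A minimal .dockerignore could look like this; the entries are common examples, adjust them to your project:

.git
node_modules
*.log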



]]>
How to Use Docker Registry https://linuxhint.com/how-to-use-docker-registry/ Sun, 26 Nov 2017 05:05:29 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=20332 Docker is a technology which allows one to create containers that encapsulate an application and all its dependencies, unlike a hypervisor, which emulates an entire operating system and the components on top of it. The advantage of this is that the encapsulated containers can then be distributed among fellow developers through a Docker registry.

Docker consists of multiple important parts: the Dockerfile, which is effectively the source code of the image; the Docker image, a template of the container that is compiled and ready to be executed; the Docker registry, the service where images are hosted; and finally the Docker container, the encapsulated environment running on top of the Docker Engine. Docker containers share the host operating system's kernel, hence their resource consumption is minimal compared to a hypervisor and similar virtual machines. This article mainly discusses the Docker registry, but the other parts are worth introducing as they are all necessary when dealing with a Docker registry.


How to Install Docker in a Nutshell?

Since this tutorial is about the Docker registry, the installation phase isn't covered thoroughly; however, the following is quite enough to get through the installation, as it uses the default way of installing Docker straight from its own repository instead of the Ubuntu repository.

sudo su
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu zesty stable"
apt-get update
apt-get install docker-ce

The commands start by getting administrator rights with the sudo su command. Then curl is used to add the Docker package's signing key to the system, to make sure apt-get can continue the installation without showing a warning message about insecure packages. Next, the path to the repository is added, from which apt-get retrieves the package files when the docker package is requested. Then the apt-get update command updates the local repository information with the latest package details; this is quite useful to make sure that when the upgrade or install command is called, it definitely uses the latest packages instead of older ones. Finally, the docker community edition package is installed on the system.


How to Use Docker Registry?

A Docker registry is a service where images are hosted. There are two types of registries: private and public. Some popular ones are Google Container Registry, Quay, AWS's container registry, and Docker Hub, which is the default registry provided by Docker themselves. Docker Hub is a community based host to which images can be uploaded, and from which images can be downloaded. The following steps demonstrate how to download an existing image from a registry, how to use it on the system, and how to upload a new image back to the registry.

How to Access a Registry?

This tutorial uses the default public registry provided by Docker themselves. However, it requires the user to register on the website. Even though registration isn't required for downloading images, it is required for uploading new images back to the registry; hence this step is recommended.

  1. Visit the following web URL
    https://hub.docker.com/
  2. Register on the website with a username/email address.
  3. Once registered, visit the following web URL to browse the available images
    https://hub.docker.com/explore/
  4. Pick one of them. This tutorial uses the PHP image for demonstration purposes; its page is located at
    https://hub.docker.com/_/php/
  5. Use the following command in a terminal window with administrator rights (by using sudo su). It downloads the php image to the system.
    docker pull php
  6. Type the following command to create and open the dockerfile that holds the build instructions.
    nano dockerfile
  7. Type the following lines as its contents. FROM pulls in the PHP 7 base files, COPY copies the files in the source directory into the destination directory inside the image, WORKDIR sets the working directory to the given path so that dependencies are searched for there when the container is running, and CMD states the file to be executed, in this case a PHP script that is run later.
    FROM php:7.0-cli
    COPY . /usr/src/myapp
    WORKDIR /usr/src/myapp
    CMD [ "php", "./donscript.php" ]

  8. Once the dockerfile is crafted, it has to be compiled with the build command. Compiling the dockerfile results in a docker image, which is assigned a name here as well.
    docker build -t donapp .
  9. If the PHP script requires a web browser to display its contents, the built-in web server shipped with PHP can be started as shown below. Note that inside a container the server has to bind to 0.0.0.0 rather than localhost, and the port has to be published with -p to be reachable from the host.
    docker run -p 8000:8000 donapp php -S 0.0.0.0:8000
  10. The script file has to be created and placed in the same directory as the dockerfile, which by default lives in the home folder on Linux. The script name should be the same as the name stated in step 7's CMD instruction (a minimal example script is sketched right after this list).
  11. Finally, the image can be executed with the following command. Once it runs, it prints the message written in the script.
    docker run donapp
  12. Alternatively, the script can be executed without building an image at all, using the following command. donscript.php at the end is the name of the script to be executed.
    docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp php:7.0-cli php donscript.php
    
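The donscript.php referenced by the Dockerfile's CMD can be any PHP file. As a minimal, purely illustrative sketch (the echoed text is arbitrary), it can be created from the same directory as the dockerfile like this:

cat > donscript.php << 'EOF'
<?php
// placeholder logic; replace with your own script
echo "Hello from inside the Docker container!\n";
EOF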

How to Search Images in Registry?

Docker provides a built-in way to search for images from the terminal window. This is useful for browsing images quickly without opening a web browser. To search for images in the registry, use the following command.

docker search <image name>

Example: docker search ubuntu
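docker search also accepts a few helpful flags in recent Docker versions; for instance, to show only official images, or to cap the number of results:

docker search --filter is-official=true ubuntu
docker search --limit 5 nginx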

How to Upload Images to The Registry?

Just as images can be downloaded, they can also be uploaded to a registry to share with the public or with coworkers, depending on the nature of the registry. A private registry suits personal use or a limited number of people, while a public registry is meant for sharing images with everyone. Either way, you have to log in to the registry before uploading images, which can be done with the following command. This step assumes that the previous steps have been followed and that there is already a Docker Hub account along with its user credentials.

  1. Type the following command along with the username of your account.
    docker login --username MYUSERNAME
    Type the password when it prompts for it.
  2. Tag the application in the following format. This tags the donapp app as dondilanga/donapp, where dondilanga is the username of the account used to upload the image.
    docker tag donapp dondilanga/donapp
  3. Now type the following command to upload the image. It appears to upload a large amount of data even though the script is quite small; the reason is that the dependencies of the executable or script are uploaded along with it, so other users can download the image and use it right away without worrying about missing dependencies.
    docker push dondilanga/donapp
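Once the push completes, anyone with access to the repository can pull the image back down by its tagged name and run it, for example from another machine:

docker pull dondilanga/donapp
docker run dondilanga/donapp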

For next steps see some of the Docker related links below:

https://linuxhint.com/how-to-create-a-docker-image/

https://linuxhint.com/networking-storage-docker/

https://linuxhint.com/optimizing-docker-images/

Docker Image vs Container https://linuxhint.com/docker-image-vs-container/ Sun, 26 Nov 2017 04:37:32 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=20329 Understanding the process Docker uses to store data through images and containers will help you better design your Docker applications. Docker images are like templates while Docker containers are the running instances created from those templates. Docker uses a layered approach to storing images and containers.

Images and Layers

A Docker image is created from multiple layers. If we take an example of a Dockerfile, every instruction is converted to a layer. Here is a simple Dockerfile:

FROM node:6.9.2
COPY server.js .
CMD node server.js

Every line in the above Dockerfile creates a layer. The FROM statement looks for the node:6.9.2 image in the local image cache; if it isn't found there, Docker downloads it from Docker Hub. That becomes the first layer. The next COPY statement adds the server.js file to the image as a second layer. The last layer records the command that runs the Node.js application. All these layers are stacked on top of each other, and each additional layer is stored as a difference from the layer before it.
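If you are curious about the layer digests behind an image, docker inspect can print them. A quick sketch (the actual digests will differ on your system):

$ docker inspect --format '{{json .RootFS.Layers}}' node:6.9.2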


Containers and Layers

Containers are created from images. When a container is created from an image, a thin read/write layer is put on top of the image (notice that image layers are immutable, while container layers are not). Any changes made to the container are written to this read/write layer during the lifetime of the container. When a container is deleted, the associated thin read/write layer is removed. This means that multiple containers can share the same image, with each container layer maintaining its own data safely on top of the Docker image.
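A quick experiment makes this read/write layer visible: a file created in one container does not show up in a second container started from the same image. The container names c1 and c2 below are arbitrary.

$ docker run --name c1 node:6.9.2 touch /tmp/only-in-c1   # write a file into c1's layer
$ docker run --name c2 node:6.9.2 ls /tmp/only-in-c1      # fails: the file is not in c2's layer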


Images and Containers

Let’s try a simple example.  You can use the docker images command to find all the images:

$ docker images

REPOSITORY          TAG      IMAGE ID      CREATED      SIZE

And the docker ps command to find containers:

$ docker ps

CONTAINER ID    IMAGE    COMMAND   CREATED    STATUS   PORTS   NAMES

This is a fresh docker installation. So there is no image or container present. You can run the docker run -it node:6.9.2 command to start a container.

$ docker run -it node:6.9.2
Unable to find image 'node:6.9.2' locally
6.9.2: Pulling from library/node

75a822cd7888: Pull complete 
57de64c72267: Pull complete 
4306be1e8943: Pull complete 
871436ab7225: Pull complete 
0110c26a367a: Pull complete 
1f04fe713f1b: Pull complete 
ac7c0b5fb553: Pull complete 
Digest: sha256:2e95be60faf429d6c97d928c762cb36f1940f4456ce4bd33fbdc34de94a5e043
Status: Downloaded newer image for node:6.9.2
>

Now if we again check the Docker images, we find:

$ docker images

REPOSITORY      TAG         IMAGE ID         CREATED           SIZE
node           6.9.2      faaadb4aaf9b     11 months ago       655MB

And if we check the containers, we find:

$ docker ps

CONTAINER ID    IMAGE   COMMAND CREATED        STATUS      PORTS  NAMES
8c48c7e03bc7 node:6.9.2 "node" 20 seconds ago Up 18 seconds     reverent_jackson

If we start another container from the same image using the command:

$ docker run -it node:6.9.2

And check again, we see:

$ docker images

REPOSITORY      TAG         IMAGE ID      CREATED            SIZE
node           6.9.2      faaadb4aaf9b  11 months ago       655MB

And

$ docker ps

CONTAINER ID    IMAGE    COMMAND CREATED        STATUS        PORTS NAMES
96e6db955276  node:6.9.2 "node"  24 seconds ago Up 23 seconds    cocky_dijkstra
8c48c7e03bc7  node:6.9.2 "node"  4 minutes ago  Up 4 minutes     reverent_jackson

The two containers with CONTAINER ID 96e6db955276 and 8c48c7e03bc7 are both running on top of the Docker image with the IMAGE ID faaadb4aaf9b. The thin read/write layers of the Docker containers are residing on top of the layer of the Docker image.

Hints:

You can remove Docker containers with the command docker rm [CONTAINER ID] and remove Docker images with the command docker rmi [IMAGE ID].
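Using the IDs from the listings above, the cleanup would look like this (running containers have to be stopped before they can be removed, and all containers using an image have to be removed before the image itself):

$ docker stop 8c48c7e03bc7 96e6db955276   # stop the two running containers
$ docker rm 8c48c7e03bc7 96e6db955276     # remove the containers
$ docker rmi faaadb4aaf9b                 # now the image can be removed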

The image node:6.9.2 we downloaded from Docker Hub is also created by combining multiple layers. You can check the layers of images using docker history [IMAGE ID].

$ docker history faaadb4aaf9b

IMAGE        CREATED         CREATED BY                            SIZE   
faaadb4aaf9b 11 months ago /bin/sh -c #(nop)  CMD ["node"]          0B                  
<missing> 11 months ago /bin/sh -c curl -SLO "https://nodejs.org/d 42.5MB              
<missing> 11 months ago /bin/sh -c #(nop)  ENV NODE_VERSION=6.9.2  0B                  
<missing> 11 months ago /bin/sh -c #(nop)  ENV NPM_CONFIG_LOGLEVEL 0B                  
<missing> 11 months ago /bin/sh -c set -ex   && for key in     955 108kB               
<missing> 11 months ago /bin/sh -c groupadd --gid 1000 node   && u 335kB               
<missing> 11 months ago /bin/sh -c apt-get update && apt-get insta 323MB               

Conclusion

A popular way of explaining images and containers is to compare an image to a class and a container to an instance of that class. The layered approach of Docker images and containers helps keep the size of images and containers small.

How to upgrade the kernel on Ubuntu 17.10 https://linuxhint.com/how-to-upgrade-the-kernel-on-ubuntu-17-10/ Thu, 23 Nov 2017 05:09:32 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=20267 Ubuntu 17.10 Artful Aardvark uses kernel 4.13.0 by default, which is pretty new, and most people will not need to update it. But if you really need something that is only available in a later version of the kernel and feel like updating, then this article is for you. In this article, I will show you how to update the default kernel 4.13.0 on Ubuntu 17.10 Artful Aardvark. Let's get started.

You can check the version of the currently installed kernel on Ubuntu 17.10 using the following command:

$ uname -r

You can update the kernel of your Ubuntu 17.10 operating system in two ways: through a graphical user interface or through the terminal. The Ubuntu Kernel Update Utility (UKUU) is used to update the kernel of Ubuntu operating systems; it has a graphical user interface version called ukuu-gtk as well as a command line utility. The problem is that UKUU GTK doesn't work well on Wayland, only on X11. So on Ubuntu 17.10, we can update the kernel only through the command line interface. I will show you how you can update the kernel from the command line with UKUU.


Installing UKUU:

Ubuntu Kernel Update Utility (UKUU) is not installed on Ubuntu by default and it’s also not available on Ubuntu software repository. So we have to install UKUU from UKUU PPA.

To add the UKUU PPA, run the following command and press <Enter>:

$ sudo add-apt-repository ppa:teejee2008/ppa

Now run the following command to update the package repository cache of your Ubuntu 17.10 operating system:

$ sudo apt-get update


Now we are ready to install UKUU. Run the following command to install it:

$ sudo apt-get install ukuu

Press ‘y’ and press <Enter> to continue. UKUU should be downloaded and installed.


Updating the Kernel with UKUU:

Once UKUU is installed, you can open up a terminal (Ctrl+Alt+T on Ubuntu) and update the kernel.

You can run the following command to see what you can do with UKUU.

$ ukuu --help

You can see that we can check for kernel updates, and install and remove kernels with UKUU.

Now run the following command to see what kernels are available right now for download.

$ ukuu --list

UKUU will download a list of kernels that are available. It may take a while. You will have to wait till it’s done.

Once the download is complete, ukuu should show you a very long list of kernels. We are interested in the first items of the list, since it is sorted in descending order, which means the latest versions are listed above the older ones. You can see that the current kernel version is 4.13.0. I want to install 4.14.1, the latest version as of this writing.

To install kernel 4.14.1 with UKUU, run the following command:

$ xhost +
$ sudo ukuu --install v4.14.1

It will take several minutes to download and install everything depending on your internet connection.

Once the download and installation is complete, reboot your computer with the following command:

$ reboot

Now run the following command to see the current kernel version:

$ uname -r

You can see that the kernel is updated to 4.14.1.  So that’s how you update the kernel of Ubuntu 17.10 Artful Aardvark. Thanks for reading this article.

How to Upgrade the Kernel on CentOS 7 https://linuxhint.com/upgrade-kernel-centos-7/ Thu, 23 Nov 2017 04:32:56 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=20253 By default, CentOS 7 uses an old version of the kernel, 3.10.x. But the old kernel doesn't support some of the newer hardware we have today, so it's necessary to update it for better hardware support. In this article I will show you how to update the kernel of CentOS 7. Let's get started.


Preparing for the Kernel Upgrade:

We must add the ELRepo repository to CentOS 7 to update its kernel. For more information, check the official ELRepo website at http://elrepo.org/tiki/tiki-index.php. First we have to add the GPG key for ELRepo. To do that, run the following command:

$ sudo rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

Now we can add the ELRepo repository on CentOS 7. To do that, run the following command:

$ sudo rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

The ELRepo kernel repository is called elrepo-kernel. There are two kernel packages in ELRepo: one is called kernel-lt and the other kernel-ml. The difference between the two is that the kernel-lt package provides the long term support version of the latest Linux kernel, while the kernel-ml package provides the mainline stable version. The kernel provided by kernel-ml is more up to date than kernel-lt. Both of these kernels are safe; you can use either of them.
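If you want to see exactly which kernel-lt and kernel-ml versions ELRepo currently offers before choosing, you can list them with yum:

$ yum --disablerepo="*" --enablerepo="elrepo-kernel" list available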

I will show you how to install both of them, but I will install kernel-ml in this article.

You can check the version of the currently installed kernel on CentOS 7 with the following command:

$ uname -r

You can see that the kernel CentOS 7 is using right now is 3.10.0. We will soon update that. Let's continue.


Installing Latest Long Term Support Kernel:

You can easily install long term support kernel or kernel-lt package provided by ELRepo on CentOS 7. At the time of this writing, the version of the kernel provided by kernel-lt package is 4.4.100.

To install kernel-lt package on CentOS 7 from ELRepo, run the following command:

$ sudo yum --enablerepo=elrepo-kernel install kernel-lt

Press ‘y’ and press <Enter> to continue.

Once the installation is complete, just restart your computer. When it boots, select the new kernel from the GRUB menu. Your CentOS 7 operating system should use the new kernel afterwards.


Installing Mainline Stable Kernel:

You can easily install mainline stable kernel or kernel-ml package provided by ELRepo on CentOS 7. At the time of this writing, the version of the kernel provided by kernel-ml package is 4.14.1.

To install kernel-ml package on CentOS 7 from ELRepo, run the following command:

$ sudo yum --enablerepo=elrepo-kernel install kernel-ml



Now press ‘y’ and then press <Enter> to continue:

It should take a while to download and install the kernel.  Once the installation is complete, run the following command to restart your computer.

$ sudo reboot

Once your computer boots, select the new kernel from the GRUB menu; the system should then be using the latest kernel that you just installed.
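If you don't want to pick the kernel by hand on every boot, you can make the first GRUB entry the default. This is a common approach on CentOS 7, assuming the newly installed kernel sits at position 0 of the menu:

$ sudo grub2-set-default 0                        # default to the first (newest) entry
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg     # regenerate the GRUB configuration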

You can check and verify that it's using the latest kernel with the following command:

$ uname -r

You can see that the kernel is updated.

So That’s how you update/upgrade the kernel of your CentOS 7 operating system. Thanks for reading this article.

Install Clipgrab on Ubuntu https://linuxhint.com/install-clipgrab-ubuntu/ Thu, 23 Nov 2017 02:43:25 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=20219 How to Install and Use ClipGrab on Ubuntu 17.10

ClipGrab is a tool for downloading videos from popular video sharing websites like YouTube, Vimeo, Facebook and many others. It can also convert the downloaded videos. It is a cross-platform application that runs on Windows, Linux and Mac. In this article, I will show you how to install and use ClipGrab on Ubuntu 17.10 Artful Aardvark. Let's get started.

Installing ClipGrab:

First go to https://clipgrab.org from your favorite web browser. I am using Firefox.

You should see the following window. This is the official website of ClipGrab.

Now click on the blue button that says "Free Download".

Your browser should prompt you to save the file. Click on “Save File” and click on “OK”. The download should start.

Once the download is complete, go to the directory where ClipGrab was downloaded. In my case, it was downloaded to my $HOME/Downloads directory.

Right click on the file and click on “Extract Here” to extract the compressed tar file.

You should see a new folder once the file is extracted.

Right click on the folder “clipgrab-3.6.6” and click on “Open in Terminal”.

A new terminal should open.

Now run the following command to copy the clipgrab executable to the /usr/bin directory. I put it there because doing so lets me run ClipGrab from the command line without specifying the full path; the /usr/bin directory is already in the operating system's PATH, so it makes everything easier.

$ sudo cp -v clipgrab /usr/bin
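To confirm that the copy worked and that the shell can now find the executable from anywhere:

$ which clipgrab   # should print /usr/bin/clipgrab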

Now we have to install the dependencies for ClipGrab. Although their website doesn't mention it, I got an error the first time I tried to run it. The error was due to the unavailability of the libQtWebKit.so.4 library file. It is easy to fix: all we have to do is install the libqtwebkit4 package on Ubuntu 17.10. To install it, run the following commands:

$ sudo apt-get update
$ sudo apt-get install libqtwebkit4

Press 'y' and press <Enter> to continue. It may take a while to get everything downloaded and installed depending on your internet connection.
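As a side note, if you ever hit a similar missing-library error with another downloaded binary, ldd shows which shared libraries it can't resolve. Run it from the directory containing the executable:

$ ldd ./clipgrab | grep "not found"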

Once libqtwebkit4 is installed, you can run ClipGrab with the following command:

$ clipgrab

You should see the following window. Can you see the warning? It basically says that avconv or ffmpeg is not installed on my computer, so I can't download 1080p videos from YouTube. If you don't care about 1080p videos, you can just click "OK" and use ClipGrab right away. But I think most people do care about 1080p videos, so I will show you how to fix this as well.

Once you press "OK", you should see ClipGrab's main application window.

Now let’s enable 1080p support. To do that, you must have FFMPEG installed.
First close ClipGrab if it’s already open. Then to install FFMPEG package on Ubuntu 17.10, run the following commands:

$ sudo apt-get update
$ sudo apt-get install ffmpeg

Press 'y' and then press <Enter> to continue the installation. It may take a while to download all these packages.

Once the download and installation is complete, you can run ClipGrab and it won’t show you that warning message again.


Using ClipGrab:

In this section, I will show you how you can use ClipGrab to download videos from YouTube.

First open ClipGrab with the following command:

$ clipgrab

Now, go to YouTube and find any video that you want to download and copy the video link.

Now on ClipGrab, click on “Downloads” tab to navigate to the Downloads tab.

Now paste the YouTube video link that you just copied into ClipGrab's textbox in the Downloads tab. You can see that the video title is detected correctly in ClipGrab.

You can change the format: just click on the Format selector and select the file format you like. I am leaving it at Original for now.

You can also change the quality of the video. Just click on the Quality selector and select the video quality that you like. I am selecting 360p to keep the file size small for this demo, so the download will be quicker.

Once everything is set up, click on “Grab this clip!” button.

It should ask you for a location where you want to save the file. Just put a good file name, select the location and click on “Save”.

The download should start. You can see how much of the file is being downloaded on the progress bar.

If you decide midway to cancel the download, just select the file from the list and click on "Cancel selected download". I am not going to do that now.

Once the download is finished, you can find the video where you saved it.

You can also right click on any download in the list and do some other operations like Pause, Resume, Restart, Cancel and many more.


Configuring ClipGrab

In this section, I will show you how to do basic configuration of ClipGrab.

If you don’t want it to ask for a file name every time you click on “Grab this clip!” button, just check “Never ask for file name”.

When you click on "Grab this clip!", it asks you to save the file in a default directory or the last used directory. If you want it to always save to a default directory, you can change this: click on the "Settings" tab and click "Browse" to select a default directory. Also uncheck "Always save at the last used path".

So that’s how you install and use ClipGrab on Ubuntu 17.10 Artful Aardvark. Thanks for reading this article.
