Types of Software Testing

The strategy for testing each software product is different. We need to consider the business goals and/or purpose of the software before developing the test strategy. For example, software that runs in an airplane and controls the engine and flight safety has a different business context than a viral video sharing platform on the internet for kids. For the airplane software, it is critical that absolutely everything is defined and verified; rapid new feature development and change is not a priority. For the viral video platform, the business needs innovation, speed, and rapid improvement, which are much more important than guaranteed validation of the system. Each context is different, and there are many different practices for software testing. Building the test strategy will consist of mixing the appropriate types of testing from the list of possible testing types, which are categorized below. In this article, we will list the different types of software testing.

Unit Testing

Unit Testing is testing done on an individual function, class, or module independently of testing a fully working piece of software. Using a framework for unit testing, the programmer can create test cases with input and expected output. Having hundreds, thousands, or tens of thousands of unit test cases for a large software project ensures that all the individual units continue to work as expected as you change the code. When changing a unit that has test cases, the test cases for that module should be studied to determine whether new test cases are needed, whether the expected output has changed, or whether the current test cases can be removed as no longer relevant. Creating a large volume of unit tests is the easiest way to achieve high test case coverage for a software code base, but it will not ensure that the final product works as expected as a whole system.
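
As a minimal sketch with no framework at all (the add function, its inputs, and the expected value here are purely illustrative), a unit test isolates one function, feeds it a known input, and compares the result to an expected output:

#!/bin/bash
# Unit under test: a single, isolated function.
add() { echo $(( $1 + $2 )); }

# One test case: known input, expected output.
result=$(add 2 3)
if [ "$result" -eq 5 ]; then
    echo "PASS: add 2 3 = 5"
else
    echo "FAIL: expected 5, got $result"
fi

A unit testing framework such as JUnit, discussed later in this article, automates exactly this pattern across thousands of cases.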

Functional Testing

Functional testing is the most common form of testing. When people refer to software testing without much detail, they often mean functional testing. Functional testing checks that the primary functions of the software work as expected. A test plan could be written to describe all the functional test cases that will be tested, which correspond to the major features and capabilities of the software. Primary functionality testing will be “happy path” testing, which does not try to break the software or use it in any challenging scenarios. This should be the absolute bare minimum of testing for any software project.

Integration Testing

After unit testing and functional testing, there may be several modules or the entire system that has not yet been tested as a whole. Or there might be components that are largely independent but occasionally used together. Any time components or modules are tested independently but not as a whole system, integration testing should be performed to validate that the components function together as a working system according to user requirements and expectations.

Stress Testing

Think about stress testing like you are testing a space shuttle or airplane. What does it mean to put your software or system under “STRESS”? Stress is nothing more than an intense load of a specific type that is most likely to break your system. This could be similar to “Load Testing” in the sense of putting your system under high concurrency with many users accessing the system, but a system can be stressed along other vectors too. For example, running firmware on a hardware component when the hardware has physically deteriorated and is operating in degraded mode. Stress is unique to each type of software and system, and stress tests should be designed with consideration of which natural or unnatural causes are most likely to stress your software or system.

Load Testing

Load testing is a specific type of stress testing, as discussed above, whereby large numbers of concurrent user connections and accesses are automated to simulate the effect of a large number of authentic users accessing your software system at the same time. The goal is to find out how many users can access your system at the same time without your software system breaking. If your system can easily handle normal traffic of 10,000 users, what will happen if your website or software goes viral and attracts 1 million users? Will this unexpected “LOAD” break your website or system? Load testing will simulate this, so you are comfortable with a future increase in users because you know your system can handle the increased load.
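
As one hedged example, the ApacheBench (ab) utility that ships with the Apache HTTP server tools can simulate many concurrent users hitting a URL; the URL and the request counts below are placeholders only:

$ ab -n 10000 -c 500 http://www.example.com/

Here -n is the total number of requests to send and -c is the number of concurrent clients, so this run simulates 500 users hammering the page at once until 10,000 requests have completed.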

Performance Testing

People can become utterly frustrated and despair when software does not meet their performance requirements. Performance, generally, means how quickly important functions can be completed. The more complex and dynamic the functions available in a system, the more important and less obvious it becomes to test their performance. Let's take a basic example: the Windows or Linux operating system. An operating system is a highly complex software product, and performance testing on it could involve the speed and timing of functions such as booting up, installing an app, searching for a file, running computations on a GPU, and/or any of the millions of other actions that can be performed. Care should be taken when selecting the performance test cases to ensure that the important and failure-prone performance features are tested.
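
As a trivial sketch of measuring one operation, the shell's time keyword reports how long a command takes; the find command here is just a stand-in for whatever operation actually matters to your users:

$ time find /usr/share/doc -name "*.gz" > /dev/null

The real, user, and sys times printed will vary from system to system; a real performance test repeats measurements like this for each important user-facing function and compares them against agreed targets.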

Scalability Testing

Testing on your laptop is good, but not really good enough when you are building a social network, an email system, or supercomputer software. When your software is meant to be deployed on 1000 servers, all functioning in unison, then the testing you do locally on one system will not uncover the bugs that occur when the software is deployed “At Scale” on hundreds of thousands of instances. In reality, your testing will likely never be able to run at the full scale before releasing to production because it would be way too expensive and not practical to build a test system with 1000 servers costing millions of dollars. Therefore, scalability testing is done on multiple servers, but usually not the full number of production servers to try and uncover some of the defects that might be found as your systems are used on bigger infrastructure.

Static Analysis Testing

Static analysis is testing that is done by inspecting the software code without actually running it. To do static analysis, you generally use a tool; there are many, and one famous tool is Coverity. Static analysis is easy to run before releasing your software and can find many quality problems in your code that can be fixed before you release. Memory errors, data type handling errors, null pointer dereferences, uninitialized variables, and many more defects can be found. Languages like C and C++ greatly benefit from static analysis because they give programmers great freedom and power, which in turn can create great bugs and mistakes that static analysis testing can find.
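
As a hedged sketch using the open-source cppcheck tool as a stand-in for a commercial analyzer like Coverity (the file name is a placeholder), running a static analysis pass is usually just a matter of pointing the tool at your source:

$ cppcheck --enable=all mymodule.c

The tool then prints warnings about issues such as uninitialized variables or out-of-bounds accesses, each with the file and line number where the suspect code lives.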

Fault Injection Testing

Some error conditions are very difficult to simulate or trigger, therefore the software can be designed to artificially inject a problem or fault into the system without the defect naturally occurring. The purpose of fault injection testing is to see how the software handles these unexpected faults. Does it gracefully respond to the situation, does it crash, or does it produce unexpected and unpredictable problematic results? For example, let's say we have a banking system, and there is a module to transfer funds internally from ACCOUNT A to ACCOUNT B. This transfer operation is only called after the system has already verified that both accounts exist. Even so, the transfer operation has a failure case where the target or source account does not exist, and it can throw an error. Because in normal circumstances we never get this error due to pre-validation of the inputs, to verify the system behavior when the transfer fails due to a non-existent account, we inject a fake error into the system that returns a non-existent account for a transfer and test how the rest of the system responds in that case. It is very important that the fault injection code is only available in testing scenarios and not released to production, where it could create havoc.
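
As a minimal sketch of the idea (the script name, the INJECT_FAULT variable, and the error message are all hypothetical), a test-only fault hook can be toggled from the environment so the failure path runs without touching real data:

#!/bin/bash
# transfer.sh (hypothetical): transfer an amount between two accounts.
# When a test sets INJECT_FAULT=missing_account, simulate the
# "account does not exist" failure path instead of doing the transfer.
if [ "$INJECT_FAULT" = "missing_account" ]; then
    echo "ERROR: account does not exist" >&2
    exit 1
fi
echo "transferred $1 from $2 to $3"

A test would then run something like INJECT_FAULT=missing_account ./transfer.sh 100 ACCOUNT_A ACCOUNT_B and check how the calling system reacts to the failure.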

Memory Overrun Testing

When using languages like C or C++, the programmer has a great responsibility to directly address memory, and this can cause fatal bugs in software if mistakes are made. For example, if a pointer is null and dereferenced, the software will crash. If memory is allocated to an object and then a string is copied over the memory space of the object, referencing the object can cause a crash or even unspecified wrong behavior. Therefore, it’s critical to use a tool to try and catch memory access errors in software that uses languages like C or C++, which could have these potential problems. Tools that can do this type of testing include Open Source Valgrind or proprietary tools like PurifyPlus. These tools can save the day when it’s not clear why the software is crashing or misbehaving and directly point to the location in the code that has the bug. Awesome, right?
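
As a minimal sketch (./myprogram is a placeholder for your own binary), running a program under Valgrind's default memcheck tool reports leaks and invalid memory accesses along with the code locations responsible:

$ valgrind --leak-check=full ./myprogram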

Boundary Case Testing

It is easy to make errors in coding when you are on what is called a boundary. For example, a bank automated teller machine says you can withdraw a maximum of $300. So, imagine the coder wrote the following code naturally when building this requirement:

if (amt < 300) {
    startWithdrawal();
}
else {
    error("You can withdraw a maximum of $300");
}

Can you spot the error? The user who tries to withdraw exactly $300 will receive an error because $300 is not less than $300; the check should have been amt <= 300. This is a bug. Therefore, boundary testing is done around requirement boundaries to ensure that the software functions properly on both sides of the boundary as well as exactly at the boundary.

Fuzz Testing

High-speed generation of input to software can produce a huge number of possible input combinations, even if those input combinations are total nonsense and would never be supplied by a real user. This type of fuzz testing can find bugs and security vulnerabilities not found through other means because of the high volume of inputs and scenarios tested rapidly without manual test case generation.
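
A dedicated fuzzer is the usual choice, but as a crude sketch of the idea (./myparser is a placeholder for the program under test), you can already throw random bytes at a program's standard input in a loop and watch for crashes:

$ for i in $(seq 1 1000); do head -c 256 /dev/urandom | ./myparser || echo "run $i failed with status $?"; done

Any run that exits with a non-zero status, especially a crash, is a candidate bug worth turning into a permanent test case.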

Exploratory Testing

Close your eyes and visualize what the word “Explore” means. You are observing and probing a system in order to find out how it truly functions. Imagine you receive a new desk chair in mail order, and it has 28 parts all in separate plastic bags with no instructions. You must explore your new arrival to figure out how it functions and how it is put together. With this spirit, you can become an exploratory tester. You will not have a well-defined test plan of test cases. You will explore and probe your software looking for things that make you say the wonderful word: “INTERESTING!”. Upon learning, you probe further and find ways to break the software that the designers never thought of, and then deliver a report that details numerous bad assumptions, faults, and risks in the software. Learn more about this in the book called Explore It.

Penetration Testing

In the world of software security, penetration testing is one of the primary means of testing. All systems, whether biological, physical, or software, have borders and boundaries. These boundaries are meant to allow only specific messages, people, or components to enter the system. More concretely, let's consider an online banking system that uses user authentication to enter the site. If the site can be hacked and the backend entered without proper authentication, that would be a penetration, which needs to be protected against. The goal of penetration testing is to use known and experimental techniques to bypass the normal security boundary of a software system or website. Penetration testing often involves checking all the ports that are listening and trying to enter a system via an open port. Other common techniques include SQL injection and password cracking.
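
As one hedged example of the port-scanning step (the hostname is a placeholder, and you should only ever scan systems you are authorized to test), nmap can enumerate every listening TCP port on a target:

$ nmap -sT -p- target.example.com

The -sT option performs a TCP connect scan and -p- scans all ports, giving the tester a list of open services to probe further.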

Regression Testing

After you have working software that is deployed in the field, it is critical to prevent introducing bugs into functionality that was already working. The purpose of software development is to increase the capability of your product, not to introduce bugs or cause old functionality to stop working, which is called a REGRESSION. A regression is a bug or defect introduced into a capability that was previously working as expected. Nothing can ruin the reputation of your software or brand faster than introducing regression bugs into your software and having real users find these bugs after a release.

Regression testing cases and test plans should be built around the core functionality that needs to continue working to ensure that users have a good experience with your application. All of the core functions of your software that users expect to work in a certain way should have a regression test case that can be executed to prevent the functionality from breaking on a new release. This could be anywhere from 50 to 50,000 test cases that cover the core functionality of your software or application.

Source Code Bisection Testing

A bug was introduced in the software, but it is not obvious which release or commit introduced it. Imagine that there were 50 software commits from the last known time the software was working without the bug until now, when the bug is present. Rather than inspecting all 50 commits, source code bisection repeatedly tests the midpoint between a known good version and a known bad version, halving the search range each time until the single commit that introduced the bug is identified.
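
If the code history lives in git, the built-in git bisect command automates this binary search; the commit hash below is a placeholder:

$ git bisect start
$ git bisect bad               # mark the current, broken version
$ git bisect good 1a2b3c4      # mark the last commit known to work (placeholder hash)
$ git bisect good              # or "git bisect bad", after testing the commit git checks out
$ git bisect reset             # finish once git reports the first bad commit

Each good/bad answer halves the remaining range, so even 50 commits are narrowed down in roughly half a dozen test runs.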

Localization Testing

Imagine a weather application that shows the current and projected weather in your location, as well as a description of the weather conditions. The first part of localization testing is to ensure that the correct language, alphabet, and characters are displayed properly, depending on the geolocation of the user. The app in the United Kingdom should be displayed in English with Latin characters; the same app in China should be displayed in Chinese characters in the Chinese language. The more elaborate the localization testing done, the wider the range of people from different geolocations who will be able to interface with the application.

Accessibility Testing

Some of the citizens in our community have disabilities and, therefore, may have trouble using the software being created, so accessibility testing is done to ensure that populations with disabilities can still access the functionality of the system. For example, assume that 1% of the population is color blind and our software interface assumes that users can distinguish between red and green, but those color blind individuals CANNOT tell the difference. A well-designed software interface will therefore have additional cues beyond color to indicate meaning. Other scenarios besides color blindness would also be included in software accessibility testing, such as full visual blindness, deafness, and many other conditions. A good software product should be accessible to the maximum percentage of potential users.

Upgrade Testing

Simple apps on a phone, operating systems like Ubuntu, Windows, or Linux Mint, and software that runs nuclear submarines all need frequent upgrades. The process of the upgrade itself could introduce bugs and defects that would not exist on a fresh install, because the state of the environment is different and the process of introducing the new software on top of the old could have introduced bugs. Let's take a simple example: we have a laptop running Ubuntu 18.04, and we want to upgrade to Ubuntu 20.04. This is a different process than directly wiping the hard drive and installing Ubuntu 20.04. Therefore, after the upgrade, the software or any of its derivative functions might not work 100% as expected or the same as when the software was freshly installed. So, we should first consider testing the upgrade itself under many different cases and scenarios to ensure that the upgrade runs to completion. Then, we must also test the actual system post-upgrade to ensure that the software was laid down and is functioning as expected. We would not repeat all test cases of a freshly installed system, which would be a waste of time, but we will think carefully, with our knowledge of the system, about what COULD break during an upgrade and strategically add test cases for those functions.
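
As a hedged sketch of the Ubuntu example above, the upgrade itself is typically driven by do-release-upgrade, and a couple of quick post-upgrade checks confirm the system came up as expected; the checks you actually run will depend on your own system:

$ sudo do-release-upgrade       # drive the 18.04 to 20.04 upgrade itself
$ lsb_release -a                # post-upgrade: confirm the new release is reported
$ apt list --upgradable         # post-upgrade: confirm no packages were left behind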

Black Box & White Box Testing

Black box and white box are less specific test methodologies and more categorizations of types of testing. Essentially, black box testing assumes that the tester does not know anything about the inner workings of the software and builds a test plan and test cases that just look at the system from the outside to verify its function. White box testing is done by software architects who understand the internal workings of a software system and design the cases with knowledge of what could, would, should, and is likely to break. Black and white box testing are each likely to find different types of defects.

Blogs and Articles on Software Testing

Software testing is a dynamic field, and many interesting publications and articles update the community on state-of-the-art thinking about software testing. We all can benefit from this knowledge, so it is worth following a few software testing blogs and journals.

Products for Software Testing

The majority of valuable testing tasks can be automated, so it should be no surprise that using tools and products to perform the myriad tasks of software quality assurance is a good idea. Below we will list some important and highly valuable software tools for software testing that you can explore and see if they can help.

JUnit

For testing Java-based software, JUnit provides a comprehensive framework for unit and functional testing of code that is friendly to the Java environment.

Selenium

For testing web applications, Selenium provides the ability to automate interactions with web browsers, including cross-browser compatibility testing. This is a premier testing infrastructure for web testing automation.

Cucumber

Cucumber is a behavior-driven testing framework that allows business users, product managers, and developers to describe the expected functionality in natural language and then define that behavior in test cases. This makes test cases more readable and provides a clear mapping to expected user functionality.

Purify

Find memory leaks and memory corruption at run time by executing your software with the PurifyPlus instrumentation embedded. The instrumentation tracks your memory usage and points out errors in your code that are not easy to find without it.

Valgrind

An open-source tool that will execute your software and allow you to interact with it while producing a report of coding errors such as memory leaks and corruption. There is no need to recompile or add instrumentation into the compilation process, as Valgrind has the intelligence to dynamically understand your machine code and inject instrumentation seamlessly to find coding errors and help you improve your code.

Coverity

A static analysis tool that will find coding errors in your software before you even compile and run your code. Coverity can find security vulnerabilities, violations of coding conventions, as well as bugs and defects that your compiler will not find. It can find dead code, uninitialized variables, and thousands of other defect types. It's vital to clean your code with static analysis before releasing it to production.

JMeter

An open-source framework for performance testing oriented to Java-based developers, hence the J in the name. Website testing is one of the main use cases for JMeter in addition to performance testing of databases, mail systems, and many other server-based applications.

Metasploit

For security and penetration testing, Metasploit is a generic framework with thousands of features and capabilities. Use the interaction console to access pre-coded exploits and try to verify the security of your application.

Conclusion

Software's role in society continues to grow, and at the same time the world's software becomes more complex. For the world to function, we must have methods and strategies for testing and validating that the software we create performs the functions it is intended to perform. For each complex software system, a testing strategy and testing plan should be in place to continue validating its functionality as the software continues to improve and provide its function.

My Cheap Computer for Linux

If you are like me, you are always looking to buy a new computer or electronic device. There is a thrill in opening it up, getting it set up, and then using your new computer for all sorts of things. But I am cheap. I don't have money to spend on unnecessary expenditures, so I was on the lookout for how to buy or build a fully working computer running Linux on a low budget. I am not claiming to have found the best or cheapest solution, but what I got was pretty cheap (ONLY $250 USD TOTAL!!!) and worked well, with Ubuntu booting up without any extra configuration, so I am going to share the details with you in case you want to copy my approach.

The Computer Itself: by Qotom

The mini-PC is available on Amazon from a company called Qotom. Qotom, founded in 2004 and based in Shenzhen, China, is a manufacturing company that specializes in Mini-PC production.

The model I selected is the Qotom Mini PC with the following specs: Quad Core Intel Celeron 2Ghz; 2GB RAM; 64GB SSD; 2 RJ45 Ethernet Lan; 4 USB Ports; VGA and HDMI Video Ports; Headphone and mic jacks; It is a fan-less machine which promises to be super quiet. The price was $181 USD plus shipping at the time of order.

Let’s have a look at the unboxing photos from my system:

Qotom Mini PC Unopened Box

Now let’s open the box and see what’s inside. Nice packaging from the first lift of the box as shown below:

Qotom Mini PC Top of Open Box

Qotom Mini PC Side View with 4 USB Ports; COM Port; Mic Jack; and Power Switch

Qotom Mini PC Side View One with 2 Ethernet ports; Headphone Jack; Power Plug; VGA and HDMI ports for video

And a view from above the Mini-PC here. Sleek design, 1 inch high, 5 by 5 inches

Qotom mini pc top view

The Monitor: by Haiway

When I received the Qotom mini-pc I realized: “I don’t have a monitor!”. So to complete my system I located a really cool little device from Haiway available on Amazon.

The model I selected was the HAIWAY 10.1 inch Security Monitor, 1024×600 Resolution Small TV Portable Monitor with Remote Control with Built-in Dual Speakers HDMI VGA BNC Input for only $69. Let’s have a look at the pictures, starting with the unboxing step by step:

Haiway 10 inch monitor in the box unopened

Remove the lid now:

Haiway 10 inch monitor opening the top of box

Now let's have a look at the product once removed from the box:

Haiway 10 inch monitor just removed from box

Turn on power on Haiway 10 inch monitor

Back of the Haiway 10 inch monitor

This monitor came with lots of handy cables. Some I will use, such as the HDMI video cable to connect to the MiniPC; some I won't use, like the car charger. Here are photos of the cables:

Haiway peripherals that came with the monitor

Ok, enough already, let me show you the whole thing booted up. I used a spare keyboard and mouse via the USB ports on the MiniPC, so let's assume you have a keyboard and mouse around that you can use; you will want them, I think.

Qotom MiniPC with Haiway 10 inch mini Monitor, all working!

A few more notes that are quite interesting.

Ubuntu on the Qotom MiniPC

This product comes with Ubuntu pre-installed and fully functioning. That was so awesome: no installation of an OS, and no wondering if the PC is Linux compatible; it's all working upon opening the box. Be sure to note the user name and password for the Qotom: the user name is oem, and the password is oem123. Sudo was working for this user out of the box, so the first thing I did was change the password for the oem user and the root user using sudo.

SSH, Apt-get, Internet, Firefox

Since this product came with a working installation of Ubuntu, everything was as expected. I plugged the ethernet cable into my home router and immediately had internet connectivity. Then I ran the following command to get the repos ready for software installation:

$ sudo apt-get update

Next I installed ssh with apt-get and noted the IP address of my Qotom mini-pc. I was able to ssh into the machine from a laptop on my home network using the new oem password I set.
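
The commands for that step look roughly like the following; the IP address shown is only a placeholder, so substitute the address you note on your own machine:

$ sudo apt-get install openssh-server
$ ip addr show                   # note the mini-pc's IP address here

Then, from the laptop:

$ ssh oem@192.168.1.50           # placeholder IP; use the address noted above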

Lastly I ran Firefox and was able to surf to linuxhint.com without a problem. See the pic below:

Firefox browser working successfully on Qotom and Haiway

Replicate my setup

If you want to buy the same gear, just make sure you have a USB mouse and keyboard as well as an ethernet cable, and then purchase the MiniPC with Ubuntu pre-loaded and the Haiway Mini Monitor. The computer and monitor together cost only $250 USD total at the time of writing! Awesome.

Liquid Web Hosting Services

Liquid Web provides web hosting on dedicated servers or with virtual private servers (VPS). It uses the latest technology to provide 100 percent uptime to clients and offers fast support guarantees.

Key Features

  • 24/7 monitoring
  • Fully managed dedicated servers
  • Storm platform virtual private server (VPS) hosting
  • Storm private cloud
  • Guaranteed fast support
  • Privately owned data centers
  • Dedicated monitoring team

LIQUID WEB FEATURES

Privately Owned Data Centers

Liquid Web owns all the servers it uses, which are located in four data centers. Three are located in Lansing, Michigan and the fourth is in Scottsdale, Arizona in order to provide geographical redundancy. All are staffed 24/7 by highly trained engineers and designed to be state of the art. Getting started is easy with free supported migration.

24/7 monitoring by Liquid Web’s “Sonar Monitoring” Team

A proprietary suite of tools is employed to monitor the service and the state of the system to make sure each user’s servers are functioning optimally. This allows for early detection and quick resolution because Liquid Web’s dedicated monitoring team searches for problems proactively. In many cases, situations get resolved before the client even knows there was one. Where the monitors note configuration problems on managed servers, recommendations will be sent so users don’t have to try to track down the cause of the issue on their own.

Self-managed servers are available that don’t include the team’s engagement, but monitoring servers will send an email alert as soon as any kind of problem is detected.

Fully Managed, Dedicated Servers

Liquid Web’s commitment to providing state-of-the-art systems has resulted in its next-generation servers, called Storm servers. What makes the Storm platform stand out is that the servers not only run on dedicated hardware for the client’s infrastructure, but also have the same powerful features that cloud servers do. Storm servers also feature solid state drives (SSDs). Users have the ability to specify the exact hardware they want, such as CPU processor type or RAID configuration. The servers run on either Windows or Linux.

Dedicated servers are available in three versions—a single Intel quad-core processor, and two different configurations with Intel Dual Xeon processors. Each of those is customizable to get the desired number of CPUs and the right amount of RAM and drive space.

ABOUT LIQUID WEB

Storm Platform VPS Hosting

Users who don’t need an entire dedicated server can still benefit from the Storm platform with virtual private server (VPS) hosting. VPS is much less expensive because it is actually a shared server, but the “virtual” aspect gives the user the same kind of control he or she would have on a dedicated server.

VPS on the Storm platform includes the same support guarantee, monitoring, and 100 percent network uptime that dedicated servers offer. All the great cloud benefits are included as well: free incoming bandwidth and 5TB outgoing bandwidth; cPanel; application programming interface (API) access; the Storm firewall; and users can easily upgrade, downgrade, create server images, and more.

Another feature of the Storm platform is that it allows daily billing, which is not particularly valuable to anyone with a single ongoing website. However, for businesses that set up separate websites for each product, or do a lot of development or experimenting, this feature allows them to use high-end servers and only pay for the days they use, not the whole month.

Storm VPS hosting runs on SSD technology and the smallest unit available is 1GB memory with 50GB SSD disk space. There are nine higher tiers that also include free domain registration, a standard Secure Sockets Layer (SSL) certificate (if the user wants it), and multiple CPU cores. Sizes range from 2GB memory with 100GB SSD disk space to 512GB memory with 1,800GB disk space and 24 CPU cores.  More Information on Liquid Web VPS here.

Storm Private Cloud

The Storm platform allows users to create their own private cloud network of servers, and allows them to move instances from a private cloud to a public cloud. Windows and Linux are both available for instances within the same account. Virtual instances can be configured however the user wants them to be, and they can be re-sized, moved, cloned, or destroyed. RAM, disk space, and the number of cores utilized by each instance can also be controlled.

Support is provided by a dedicated onsite team at each of Liquid Web’s data centers, and the company guarantees very short response times. Phone calls and live chat are answered within 59 seconds, and help-desk tickets have a 30-minute maximum wait time. If Liquid Web doesn’t make it in the promised time, the company credits the user 10 times the overage. The guarantee also extends to dedicated servers, and if a hardware problem is identified, the unit will be replaced within half an hour.

Liquid Web offers small VPS and dedicated server hosting, but small businesses with simple website needs can save money by going with much less expensive hosting. Resellers that build many different websites for clients, or enterprise-level businesses with multiple or complex websites, have access to the latest technology and the flexibility they need with Liquid Web.  More information on Liquid Web Private Cloud Services here.

LIQUID WEB HOSTING PLANS

Liquid Web provides you with dedicated servers to best match your needs and fulfill your requirements. These dedicated servers are built out to the fullest possible extent and, combined with the Most Helpful Humans in Hosting® support, are hard to beat.

All servers include the following features:

  • Standard DDoS Protection
  • Cloudflare CDN
  • Backup Drive
  • ServerSecure Advanced Security
  • Interworx, Plesk or cPanel Available
  • IPMI Access
  • Root Access
  • Dedicated IP Address
  • Business-grade SSD Storage
  • 100% Network and Power Uptime SLAs

They offer six different packages of dedicated servers with variations in cores, SSD primary storage, backup storage, and RAM sizes. The packages are very reasonably priced, so they will not burn a hole in your pocket.

They provide excellent remote servers that are specifically dedicated to you. Liquid Web does not share your information and activity with any other customer or service. Monthly payment plans make it easier for you to pay.

VPS Hosting With SSD

If you want to have control of your dedicated server but keep the cost down, you can do it easily from the Liquid Web platform with VPS hosting. You have full authority to resize your server if you want to accommodate spikes in traffic or any such task.

The most significant advantage of resizing the server is that you would have to pay for only the services that you have used. Such a feature makes it very economical and feasible to use, and many customers benefit from this opportunity.

Although they provide you with four different packages, they have some features which are included in all packages as a standard bundle.

  • Gigabit Bandwidth
  • Unlimited Sites with InterWorx
  • Plesk and cPanel Available
  • Dedicated IP Address
  • Cloudflare CDN
  • Server Secure Advanced Security
  • Integrated Firewall
  • DDOS Attack Protection
  • Root Access
  • Easy Scalability (upgrade or downgrade)
  • 100% Network and Power Uptime SLAs

Moreover, you can ask for assistance as helplines are available 24/7 via email, call, or chat.  More information on Liquid Web VPS Hosting here.

Cloud Dedicated Servers

The enhanced version of the dedicated servers is the cloud dedicated servers. The traditional resources of the dedicated servers are combined with the provisioning and flexibility of a cloud platform to make it more useful, functional, and efficient.

If you want to create a custom cloud dedicated server, you can contact Liquid Web for such services. Your requirements will be addressed and priced accordingly, and your custom plan created. Along with this, Liquid Web also provides four different packages.

However, as the specs differ in the packages, some features are the same and standard for all the cloud dedicated servers:

  • Standard DDoS Protection
  • CloudFlare CDN
  • ServerSecure Advanced Security
  • Gigabit Uplink
  • Interworx, Plesk, and cPanel Available
  • Root Access
  • Dedicated IP Address
  • 100% Network and Power Uptime SLAs

More information can be found on Liquid Web Cloud Dedicated Servers here.

VMware Private Cloud Hosting

With the help of VMware and NetApp, the benefits of a traditional public cloud are combined with the power, security, and performance of a single-tenant private environment.

The private cloud is the best solution if you are new to the field and confused, if you have been hosting for a while and are looking for secure elasticity, or if you are just not getting the support you want. Liquid Web provides you with the best solution and services for private hosting.

The single-tenant environment keeps you and your information secure with firewalls, v-Centers, and other features to make the process a lot more private for you.

There are three packages available, with the best of features, and that is the main reason for the high-end prices. However, some features are included as standard so that your basic needs are met easily:

  • Fully Managed with 24/7/365 On-Site Support
  • Single-Tenant Solution on Dedicated Hardware
  • Firewall and Load Balancer Included
  • Scalable and Fully Customizable
  • High-Performance NetApp SAN Storage
  • Scheduled-Snapshot Backups for VMs
  • Free Standard DDoS Protection

More information on Liquid Web VMware Private Cloud Services here.

Managed WordPress Hosting

If you are involved with WordPress sites, there is no solution more suitable for you than this one. The boosted website speeds save you time and give you a better working experience.

They create a copy of your website, and every night they test the plug-in updates so that they do not waste your time and you do not have to bear the hectic work. Furthermore, you are provided with up-to-the-minute information, and the white glove services make the experience much smoother and easier.

Liquid Web provides you with 24/7 support through various platforms. Moreover, there are no overage fees, metered pageviews, or traffic limits.

More information on Liquid Web WordPress Hosting can be found here.

Some of the features included are:

  • No Pageview/Traffic Limits
  • Full Server Access
  • Automatic Daily Backups
  • Automatic SSL
  • Amazing Speed
  • 24 Hour Support
  • Developer Tools

All these services prove Liquid Web as the best managers for WordPress sites as they understand the system and work zealously to provide you with all the functions and features that they promise.

Managed WooCommerce Hosting

This is a platform built specifically for eCommerce stores of all sizes. They understand that these stores do not work in the way WordPress sites do, so they have created an intelligent and efficient platform that is able to improve the load time of your store radically.

The query loads have been reduced by 95% with the help of different data tables, which increase your capacity by 75% without even upgrading. The revenue lost from abandoned carts is catered to with the help of Jilt, which is the best in the field.

They also provide performance tests to make sure your satisfactory standards are met, and you are happy and content with the results. Some of the notable features include:

  • 2-10x Faster Speed
  • Migration Support
  • A Total Solution
  • 24/7 Support
  • Thousands of Themes
  • Custom Look and Feel
  • Mobile-Optimized
  • Responsive Design
  • Custom Fields
  • Comprehensive Designs Included
  • Product Catalogues
  • Product Variations

The management of eCommerce sites can not be done better than the way Liquid Web does it.

Server Clusters at Liquid Web

It does not matter if you are scaling an eCommerce business or developing some sort of social media application. Liquid Web can still provide you with spectacular services that will entirely transform your work experience and make it run as smooth as butter and as fast as a supercar.

There are four different packages whose main difference is the number of nodes. The smallest has two nodes, followed by three-, four-, and six-node packages. However, there are some other features that are added on as you keep upgrading.

Here are some generic features of the system:

App Server:

  • 4 cores @ 3.5 GHz
  • 16GB RAM
  • 2 x 240GB SSD

Database Server:

  • 6 cores @ 4 GHz
  • 32GB RAM
  • 2 x 240GB SSD – OS
  • 4 x 240GB SSD – Database

Private Switch:

  • 8-48 Port Gigabit Switches
  • Shared Load Balancer
  • SSL Termination
  • HTTP Compression
  • 1 Gbps throughput
  • 100,000 concurrent sessions
  • 2-10 servers
  • 1-10 virtual IPs

Firewall:

  • Cisco ASA 5506-X
  • 750 Mbps Throughput
  • 20,000 concurrent connections
  • 100 Mbps Max VPN Throughput
  • 50 IPSEC VPN User Sessions

Private VPS Parent

If you seek to create your own cloud environment, within which you have the control to resize, move, shape, or destroy any number of virtual instances, then you should choose the Private VPS Parent service provided by Liquid Web.

The best feature is that you can create your virtual instances and customize them according to your requirements and preferences. For example, you can resize the RAM, adjust the disk size, and configure the number of cores.

Whether you are creating a VPS Parent environment for your business or a client at the capacity of a reseller, it is the best fit. You have complete control and the ability to create an entire cloud infrastructure, which is solely controlled by you.

Liquid Web offers you four comprehensive packages to best match your needs. These have been designed by keeping in mind the factors of affordability as well as functionality. One can not get a better service provider in such fields.

High Performance

If you are running any kind of site or application, you will be aware of the problems that traffic brings with it. Often, the site suffers latency or downtime errors, which reduce performance.

However, Liquid Web has figured out a solution to this. They have proposed the use of a multi-server setup combined with a load balancer. This divides the load between multiple machines even under the heaviest traffic and keeps the site working at peak performance.

The incoming traffic on your site is encountered by the load balancer, which intelligently distributes it to different machines.

As time passes, if your website or app gains more traffic and becomes more popular, you also have the option to add more servers to cater for the increased traffic and the load it brings.

Hence, the efficiency of the site is promoted in a rather intelligent manner.

High Availability

If you want to run a successful business online, you need to take into consideration two major factors: server uptime and site availability. You might face a loss as a result of a downtime error. If you are generating revenue from a web presence, the reliability assurance is paramount.

High Availability (HA) mainly promotes a hardware environment, in which a floating IP address is assigned, a second physical server is added, and data replication keeps them in sync.

This produces an environment where, if one server goes down, the others keep the system going, thereby assuring your site's availability.

The cloud infrastructure works far more efficiently than you can imagine. Customers report being more than satisfied with what they got, and the results are fantastic.

If you are also a part of such a community where you correspondingly generate revenue, you should certainly give it a shot.

High Availability Database Hosting

The reliability of a website is based upon, and relative to, that of the databases which empower it. High Availability (HA) databases ensure that such problems are avoided: they keep your site enabled and insulate your business by making sure your databases are always available.

Downtime errors often prove costly, and avoiding them is the primary goal of an HA database. It replicates the data across nodes so that it remains available on the other nodes when one cannot be accessed for any reason.

This increases reliability as well as availability for your site and database. Not only does it reduce your chance of risks and losses, but it also improves your credibility and reputation on the Web, which would surely enhance your rankings and promote your business.

There are two packages available in this section. One is the business class configuration, and the other is the enterprise-class configuration. The first is designed for small to medium scale businesses while the latter is specified for large corporations. Learn more here about Liquid Web High Availability Database Hosting.

HIPAA Compliant Hosting

The HIPAA (Health Insurance Portability and Accountability Act of 1996) binds all businesses that handle or deal with electronic health information to comply with certain safeguards.

Liquid Web has found solutions for them as well. They have created an environment in which there is no loss or compromise of health information of the kind that could lead to a destroyed reputation, if not legal penalties.

Liquid WEB is HITECH Certified and confirmed by a third-party audit.

These are some of the features that Liquid Web uses to help you out:

  • Wholly owned Core Data Centers
  • Fully Managed Servers
  • Locked Server Cabinets Included
  • Business Associate Agreement (BAA) Available
  • 24/7/365 On-Site Support
  • Offsite Backup Available
  • High Availability Infrastructure
  • Extensive Administrative, Physical & Technical Safeguards

There are two packages available in this service, depending on the number of servers you are using. The first is specified for single-server HIPAA hosting, while the other is made for multi-server HIPAA hosting.

Learn more about Liquid Web HIPAA Compliant Hosting.

Add-on Features for Added Versatility

By supporting a series of add-on features, customers can design their Storm-based web presences to fit highly individual needs, though beware that some may add to the total cost of the solution. A few examples include:

  • Storm Block Storage – Users can create and destroy storage, adding up to 15TB in a matter of seconds.
  • Storm Object Storage – An API call provides direct access to secure, readily available storage of whatever size is needed. It is highly redundant, providing three copies, and is distributed across the system. Pricing for storage is on a per-gigabyte basis, so costs are easy to understand and plan for. The APIs used are Amazon S3-compliant.
  • Storm Load Balancer – Viral campaigns and popular events won’t crash your website because Storm Load Balancers distribute traffic to several servers, supporting an infrastructure that is fault-tolerant. If traffic to your website increases, web servers can be easily and quickly added to the pool being managed by the load balancer.
  • Virtual Private Network (VPN) – You may be able to build a secure network in your office, but when you add employees connecting from mobile devices and working at remote locations, you need a secure network that encompasses it all. VPN is available with both dedicated and cloud servers.

Frequently Asked Questions

  1. What uptime guarantee does Liquid Web provide?

Liquid Web guarantees 100% network and power uptime. This is above the 99.99% industry norm.

  2. Does Liquid Web charge for site migration?

Generally speaking, no. All Linux migrations are free, but some more complicated migrations may not be fully free.

  3. Does Liquid Web offer a free trial?

Yes, they give new hosting customers a free 30-day trial.

Conclusion

Overall, Liquid Web is a full-featured web host with a fairly mature history. It has been in business since 1997 and has been recognized by the INC 5000 every year from 2007 to 2014 in the Fastest Growing Companies category. They have over 1000 onsite staff members serving 32,000 customers in 150 countries. For customers looking for a hosting provider with some history, a stable platform, and eager support, Liquid Web is worth a look.

VIM Exit and Save, for Beginners

Syntax to Exit and Save from VIM Editor

Press the esc key to ensure you are in command mode.  Then press the colon key to enter command line mode.  Command line mode will show a colon at the bottom left of the screen where you can enter command lines.  Then type ‘wq’ and press ‘Enter’ to save and exit.  ‘w’ is short for write, indicating that the file contents should be saved to disk.  ‘q’ is short for quit, which exits the vim editor.

:wq

The screenshot below is of entering the ‘wq’ command in command line mode to instruct vim to exit the program after saving the contents, aka write and quit.

Understanding VIM Modes and How to use Them

Vim’s primary modes of operation are Insert Mode, Command Mode, Command Line Mode, and Visual Mode.  The editor is in one of these modes at any time, and interacting with the vim editor using keystrokes will take different actions depending on which mode the editor is currently in.  If you try to enter ‘wq’ to save and exit while in insert mode, it will not in fact save the contents you are working on nor exit the vim editor.  Therefore, you need to be aware of which mode you are in when using vim.

Insert Mode: in this mode you can actually type content into a file.  You are editing, not instructing the vim editor what to do other than what text to enter into the file contents.  You can enter Insert Mode from Command Mode by hitting the ‘i’ key, which stands for insert.  You can also hit the ‘a’ key, which also enters insert mode but first moves the cursor past the current position, which is why ‘a’ in this case stands for append.

Command Mode: this is the mode where keystrokes are used to instruct vim what to do and how to function but not for entering new text or editing text in a file.  To enter Command Mode hit the ‘esc’ key on the keyboard.

Command Line Mode: a little mini command line prompt appears at the bottom left of the screen and allows you to enter commands such as those shown in this tutorial.  You can use ‘w’, which stands for writing the file, and ‘q’, which stands for quitting the editor.  These can be combined into a single instruction so that you exit the editor and save the file at the same time.

Visual Mode: visual mode allows you to highlight text and then use that selected text in copy and paste or other common operations on selected text.  All of this can be done from the keyboard, without a mouse, unlike GUI-based editors such as Notepad or Visual Studio.
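
As a quick sketch of a common visual mode workflow (copying a few lines and pasting them elsewhere), the keystrokes look like this:

esc    (make sure you are in command mode first)
v      (enter visual mode at the cursor)
3j     (extend the selection down three lines)
y      (yank, i.e. copy, the selection)
p      (move the cursor to the destination, then paste the copied text after it)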

How to Save Your Contents but not Exit the Editor

If you want to save your work in the file you are editing but not exit the vim editor just use the ‘w’ command to write the file and omit the ‘q’ command as shown below.

:w

How to Exit the Editor But Not Issue a Save

If you want to exit the editor only, and not issue a save command, then just issue the ‘q’ command in command line mode as shown below.  This command assumes you have not made any changes to the content; otherwise, you will get an error as shown in the second screenshot below.

:q

No error above.

Error above because the exit command was issued while there were unsaved changes.

How to Exit the Editor Without Saving Changes

If you have made changes but DO NOT want to save them to disk, and want to revert to the content as it was at the time of the last save, issue the ‘q’ command with the ‘!’ modifier in command line mode as shown below.

:q!

Conclusion

All of the information above may seem like too much complexity just to exit the editor and save the contents of the file you are working on, but there are reasons for it and everything is logical.  The vim editor can be operated almost entirely from the keyboard without use of the mouse, which is usually more efficient, both ergonomically and speed-wise, for programmers and experienced IT professionals.  Therefore all of the instructions must be specified with different key combinations and not the use of a graphical mouse pointer.  In order to accommodate rapid programming and usage of VIM, the different modes of operation were introduced so that the same keys can have different actions depending on the current mode.  So when you want to do a quick, simple command to exit the editor while at the same time saving the contents of the file, it’s just a few keystrokes in the editor, not browsing through menus with a clumsy mouse.  Learn the tricks of VIM and you will be on the path to being an elite coder or systems administrator.

Bash Shell Expansions: Brace Expansion, Parameter Expansion and more

In this article we will cover all the basic features of Bash shell expansion.  Some of the most complex and interesting expansions are Brace Expansion and Parameter Expansion, which have many powerful features and options that are only mastered over time by Bash programmers and Linux devops folks.  Word Splitting is also quite interesting and sometimes overlooked.  Filename expansion, Arithmetic Expansion, and Variable Substitution are well known.  We will cover numerous topics and show examples of the commands and the most useful syntaxes for each.  So let’s get started.

Environment

In order to test all of the bash shell expansion features, we need to make sure we are running a recent version of bash.  The tests in this article were run on Ubuntu 19.10; the system information is shown below.

$ uname -a
Linux instance-1 5.3.0-1014-gcp #15-Ubuntu SMP Tue Mar 3 04:14:57
UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

The bash version for these tests is bash version 5, which is quite recent.  Older versions of bash are missing a bunch of features.

$ bash --version
GNU bash, version 5.0.3(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2019 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>

Command Substitution

Command substitution allows the running of one or multiple commands, capturing the outputs and actions from those commands, and including them in another command, all in one line or in fewer lines than running all the commands separately.  Command Substitution has two syntaxes; the more popular syntax is the backtick syntax, where the command to be executed is enclosed in two backquotes, or backticks.  The other syntax, which is equally powerful, encloses commands in $() syntax, and the output can be used from that new expansion.  Let’s look at a number of examples of Command Substitution below.

Simple command substitution using $() syntax to run the date command.

$ echo $(date)
Wed Mar 18 01:42:46 UTC 2020

Simple command substitution using backtick syntax to run the date command.

$ echo `date`
Wed Mar 18 01:43:17 UTC 2020

Using the stdin operator at the beginning of command substitution syntax is a fancy way to read the text of a file into a variable and use it in a command on the shell as below.

$ echo "hello world" > mytext
$ echo $(< mytext)
hello world

Read a file into a variable to be used in a command using the cat command and Command Substitution.

$ echo "hello world" > mytext
$ echo $(cat mytext)
hello world

Same as above, read a file and use it in Command Substitution using backticks and cat command.

$ echo "hello world" > mytext
$ echo `cat mytext`
hello world

Combine embedded Command Substitution with another Command Substitution using both $() and backticks together

$ echo `echo $(date) |cut -d " " -f 1` > myfile
$ cat myfile
Wed

Embedded Command Substitution inside another one using two $() syntax operations

$ echo "today is $(echo $(date) |cut -d " " -f 1)" > myfile
$ cat myfile
today is Wed

Use output from a command as arguments to another command, with the backtick syntax.  We will get a list of files by running cat on a file which contains one filename per line, and then pass that into the rm command, which will remove each file.

$ touch one; touch two
$ echo one > myfiles; echo two >> myfiles
$ rm `cat myfiles`

Same as above but with $() syntax, pass command output from cat into rm command to delete files.

$ touch one; touch two
$ echo one > myfiles; echo two >> myfiles
$ rm $(cat myfiles)

Store the output from a cat command into a variable and then loop through the variable so you can more clearly see what is happening.

$ touch one; touch two
$ echo one > myfiles; echo two >> myfiles
$ MYFILES=$(cat myfiles)
$ for f in $MYFILES; do echo $f; rm $f; done
one
two

Same as above but use the backtick syntax to run the cat command and store the output in a variable, and then loop through the files in the variable.

$ touch one; touch two
$ echo one > myfiles; echo two >> myfiles
$ MYFILES=`cat myfiles`
$ for f in $MYFILES; do echo $f; rm $f; done
one
two

Use Command Substitution with the stdin operator to read a file line by line into a variable, and then loop through the variable afterwards.

$ touch one; touch two
$ echo one > myfiles; echo two >> myfiles
$ MYFILES=$(< myfiles)
$ for f in $MYFILES; do echo $f; rm $f; done
one
two

Process Substitution

Process Substitution is a documented feature of bash; it’s quite cryptic in my opinion.  In fact I have not found many good use cases to recommend for it.  One example is included here for completeness, where we use Process Substitution to get the output of a command and then use it in another command.  We will print the list of files in reverse order with the sort command in this example, after fetching the files with the ls command.

$ touch one.txt; touch two.txt; touch three.txt
$ sort -r <(ls *txt)
two.txt
three.txt
one.txt

Variable Substitution

Variable Substitution is what you can consider basic usage of variables: substituting the value of the variable when it is referenced.  It’s fairly intuitive; a few examples are provided below.

Simple variable assignment and usage where we put a string into variable X and then print it to stdout

$ X=12345
$ echo $X
12345

Check if a variable is assigned something or null, in this case it’s assigned so we print it to stdout

$ X=12345
$ if [ -z "$X" ]; then echo "X is null"; else echo $X; fi
12345

Check if a variable is assigned something or null, in this case it’s not set so we print “is null” instead of the value.

$ unset X
$ if [ -z "$X" ]; then echo "X is null"; else echo $X; fi
X is null

Brace Expansion

Brace Expansion is a super powerful feature of bash that allows you to write more compact scripts and commands.  It has many different features and options described below.  Whatever you put inside the curly braces is expanded into a longer, more verbose form depending on the pattern you enter.  Let’s look at a number of examples for Brace Expansion.

Each item in the comma-separated list inside the braces is expanded, so a single echo command prints 3 versions of the word below, separated by spaces.

$ echo {a,m,p}_warehouse
a_warehouse m_warehouse p_warehouse

A command substitution inside the expansion is executed once per expanded item.  To prove this we use the date and sleep commands to validate that the date command is run once for each item in the Brace Expansion.

$ echo {a,m,p}_$(date; sleep 1)
a_Sun Mar 22 18:56:45 UTC 2020 m_Sun Mar 22 18:56:46 UTC
2020 p_Sun Mar 22 18:56:47 UTC 2020

Expansions using numbers with .. will cause sequential numbers to be expanded in a numerical sequence

$ echo {1..8}_warehouse
1_warehouse 2_warehouse 3_warehouse 4_warehouse 5_warehouse 6_warehouse 7_warehouse
8_warehouse

Reverse order brace expansion with sequence of numbers

$ echo {8..1}_warehouse
8_warehouse 7_warehouse 6_warehouse 5_warehouse 4_warehouse 3_warehouse 2_warehouse
1_warehouse

Using an optional increment value to specify the numerical increments of brace expansion

$ echo {1..9..3}_warehouse
1_warehouse 4_warehouse 7_warehouse

Lexicographical brace expansion will iterate through letters in the alphabet in the order of the locale

$ echo {a..e}_warehouse
a_warehouse b_warehouse c_warehouse d_warehouse e_warehouse

Reverse order lexicographical brace expansion

$ echo {e..a}_warehouse
e_warehouse d_warehouse c_warehouse b_warehouse a_warehouse

Lexicographical brace expansion with an increment specified will iterate through a list of characters from the begin point to the end point but skip characters according to the increment value

$ echo {a..z..5}_warehouse
a_warehouse f_warehouse k_warehouse p_warehouse u_warehouse z_warehouse

Multiplicative brace expansion with 2 brace expansions in one command

$ echo {a..e}{1..5}_warehouse
a1_warehouse a2_warehouse a3_warehouse a4_warehouse a5_warehouse b1_warehouse
 b2_warehouse b3_warehouse b4_warehouse b5_warehouse c1_warehouse c2_warehouse
 c3_warehouse c4_warehouse c5_warehouse d1_warehouse d2_warehouse d3_warehouse
 d4_warehouse d5_warehouse e1_warehouse e2_warehouse e3_warehouse e4_warehouse
 e5_warehouse

Brace expansion can be used to repeat the same root two times in a command.  This creates a foo.tgz tar file from a directory named foo.  It’s a handy syntax when you are using it inside another loop and want the base of the word to be reused multiple times.  This example shows it with tar, but it can also be used with mv and cp, as shown below.

$ mkdir foo
$ touch foo/foo{a..e}
$ tar czvf foo{.tgz,}
foo/
foo/foob
foo/fooc
foo/fooa
foo/food
foo/fooe
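
The same root trick is a quick way to make a backup copy of a file without typing the name twice; a small sketch using cp on the foo.tgz file we just created:

$ cp foo.tgz{,.bak}
$ ls foo.tgz*
foo.tgz  foo.tgz.bak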

Parameter Expansion

Parameter expansion is also a nice compact syntax with many capabilities, such as: allowing scripts to set default values for unset variables or options, substring operations, search and replace substitutions, and other use cases.  Examples are below.
 
Check for null and use the parameter if not null or the default value.  In this case X is not null so it will be used

$ X=1
$ echo ${X:-2}
1

Check for null and use the parameter if not null or the default value.  In this case X is null so the default value will be used

$ unset X
$ echo ${X:-2}
2

Check if the variable is null, and if it is null, set it and echo it.  X is assigned 2 and then $X is printed.  The ${X:=2} syntax can both set the variable and use it in the command.

$ unset X
$ if [ -z "$X" ]; then echo NULL; fi
NULL
$ echo ${X:=2}
2
$ if [ -z "$X" ]; then echo NULL; else echo $X; fi
2

Substring expansion will extract a certain number of characters from the string, starting from an offset point

$ X="Hello World"
$ echo ${X:0:7}
Hello W

Change the offset to the second character and print 7 characters of substring

$ X="Hello World"
$ echo ${X:1:7}
ello Wo

Substring from beginning of string but cut off last 2 characters

$ X="Hello World"
$ echo ${X:0:-2}
Hello Wor

Get a string length with this version of parameter expansion

$ X="Hello World"
$ echo ${#X}
11

Search and replace within a variable.  In this example replace the first lowercase o with uppercase O

$ X="Hello World"
$ echo ${X/o/O}
HellO World

Search and replace within a variable but with all matches replaced because of the double slash before the search pattern.

$ X="Hello World"
$ echo ${X//o/O}
HellO WOrld

Patterns starting with # mean the match must start at the beginning of the string in order to be substituted

$ X="Hello World"
$ echo ${X/#H/J}
Jello World

Example where searching for match at beginning of string, but failing because match is later in the string

$ X="Hello World"
$ echo ${X/#W/J}
Hello World

Patterns beginning with % will only match at the end of string like in this example.

$ X="Hello World"
$ echo ${X/%d/d Today}
Hello World Today

Example for end of string match which fails because the match is in the beginning of string.

$ X="Hello World"
$ echo ${X/%H/Today}
Hello World

Use shopt with nocasematch to do case insensitive replacement.

$ shopt -s nocasematch
$ X="Hello World"
$ echo ${X/hello/Welcome}
Welcome World

Turn off shopt with nocasematch to do case sensitive replacement.

$ shopt -u nocasematch
$ X="Hello World"
$ echo ${X/hello/Welcome}
Hello World

Search for environment variables that match a pattern.

$ MY_A=1
$ MY_B=2
$ MY_C=3
$ echo ${!MY*}
MY_A MY_B MY_C

Get a list of matching variables and then loop through each variable and print its value

$ MY_A=1
$ MY_B=2
$ MY_C=3
$ variables=${!MY*}
$ for i in $variables; do echo $i; echo "${!i}"; done
MY_A
1
MY_B
2
MY_C
3

Make a string all uppercase

$ X="Hello World"
$ echo ${X^^}
HELLO WORLD

Make a string all lowercase

$ X="Hello World"
$ echo ${X,,}
hello world

Make first character of a string uppercase

$ X="george washington"
$ echo ${X^}
George washington

Make first character of a string lowercase

$ X=BOB
$ echo ${X,}
bOB

Positional Parameters

Positional Parameters are normally thought of as command line parameters; how to use them is shown with examples below.

Parameter $0 is the script name that is running and then $1, $2, $3 etc are command line parameters passed to a script.

$ cat script.sh
echo $0
echo $1
echo $2
echo $3
$ bash ./script.sh apple banana carrot
./script.sh
apple
banana
carrot

Parameter $* is a single variable with all the command line arguments concatenated.

$ cat script.sh
echo $1
echo $2
echo $*
$ bash ./script.sh apple banana
apple
banana
apple banana

Parameter $# is the number of positional parameters passed to a script; in the case below there are 2 arguments passed.

$ cat script.sh
echo $1
echo $2
echo $*
echo $#
$ bash ./script.sh apple banana
apple
banana
apple banana
2
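
Putting these together, here is a small sketch (the script name and messages are just placeholders) of a script that checks $# before using the arguments:

$ cat script.sh
if [ $# -lt 2 ]; then
  echo "usage: $0 first second"
  exit 1
fi
echo "got $# arguments: $*"
$ bash ./script.sh apple banana
got 2 arguments: apple banana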

Tilde Expansion

Tilde expansion is commonly seen with usernames and home directories, examples are shown below.

Tilde Expansion to get the HOME directory of the current user, using just tilde without the username.

$ echo $USER
root
$ cd ~/
$ pwd
/root

Refer to a specific user’s home directory, not the current user with Tilde and the username

$ cd ~linuxhint
$ pwd
/home/linuxhint

Arithmetic Substitution

Arithmetic Substitution allows bash to do mathematical operations in the shell or in a script.  Examples of common usages are shown below.

Simple Arithmetic Substitution with $ and double parentheses

$ echo $((2 + 3))
5

The post-increment operator will update the value by one after the current command completes; note there is an equivalent post-decrement operator not shown here.

$ X=2
$ echo $((X++))
2
$ echo $X
3

The pre-increment operator will update the value by one just before the current command; note there is an equivalent pre-decrement operator not shown here.

$ X=2
$ echo $((++X))
3
$ echo $X
3

The exponent operator will raise a number to a power

$ echo $((5**2))
25

Left bitwise shift; in this case shift the bits of the decimal number 8 to the left which essentially multiplies it by 2

$ echo $((8<<1))
16

Right bitwise shift; in this case shift the bits of the decimal number 8 to the right which essentially divides the number by 2

$ echo $((8>>1))
4

The bitwise AND operator will compare the numbers bit by bit, and the result will have a bit set only where both inputs have that bit set.

$ echo $((4 & 5))
4

The bitwise OR operator will compare the numbers bit by bit, and the result will have a bit set where either of the inputs has that bit set.

$ echo $((4 | 9))
13

The arithmetic equality operator will return 1 if the two sides are equal and 0 if they are not

$ echo $((4 == 4))
1

The arithmetic inequality operator will return 1 if the two sides are not equal and 0 if they are equal

$ echo $((4 != 4))
0

The conditional (ternary) operator will evaluate the first expression; if it is true, the result is the second argument, and if it is false, the result is the third.  In this case 5 equals 4+1, so the first condition is true and 9 is returned.  5 does not equal 4+2, so in the second echo 7 is returned.

$ echo $(( 5==4+1 ? 9 : 7 ))
9
$ echo $(( 5==4+2 ? 9 : 7 ))
7

You can use hexadecimal numbers in arithmetic expansions, in this case 0xa is equivalent to 10 and 10+7 = 17.

$ echo $(( 0xa + 7 ))
17

Word Splitting

Using the IFS environment variable to register a delimiter, and using the read and readarray commands we can parse strings into an array of tokens and then count the tokens and operate on them.  Examples are shown below.
 
Use IFS parameter as delimiter, read tokens into an array split by IFS which is set to a space character, and then print out the tokens one by one

$ text="Hello World"
$ IFS=' '
$ read -a tokens <<< "$text"
$ echo "There are ${#tokens[*]} words in the text."

There are 2 words in the text.

$ for i in "${tokens[@]}"; do echo $i; done
Hello
World

Use readarray without IFS and specify the delimiter in the readarray command itself.  Note this is just an example where we split a directory path based on the slash delimiter.  In this case the result includes the empty string before the first slash, which would need to be adjusted in real usage, but we are just showing how to call readarray to split a string into tokens in an array with a delimiter.

$ path="/home/linuxhint/usr/local/bin"
$ readarray -d / -t tokens <<< "$path"
echo "There are ${#tokens[*]} words in the text."

There are 6 words in the text.

$ for i in "${tokens[@]}"; do echo $i; done
 
home
linuxhint
usr
local
bin

Filename Expansion

When wanting to refer to a list of files or directories in the filesystem, a bash command or bash script can use Filename Expansion to generate a list of files and directories from simple commands.  Examples are shown below.

The * character is a wildcard that expands to match any number of characters together with the rest of the pattern.  Here we pick up all files ending in .txt and pass them into the du command for checking disk size.

$ touch a.txt b.txt c.txt
$ echo "Hello World" > content.txt
$ du *.txt
0          a.txt
0          b.txt
0          c.txt
4          content.txt

The ? character will match only a single character, not an unlimited number of characters, and therefore in this example will only pick up filenames with a single character followed by .txt.

$ touch a.txt b.txt c.txt
$ echo "Hello World" > content.txt
$ du ?.txt
0          a.txt
0          b.txt
0          c.txt

Characters in brackets expand to match any of the characters.  In this example a.txt and c.txt are picked up by the expansion

$ touch a.txt b.txt c.txt
$ echo "Hello World" > content.txt
$ du [ac].txt
0          a.txt
0          c.txt

Characters in brackets can also specify a range of characters, and we see here that all files in the a to c range followed by the .txt suffix are picked up

$ touch a.txt b.txt c.txt
$ echo "Hello World" > content.txt
$ du [a-c].txt
0          a.txt
0          b.txt
0          c.txt

Conclusion

We have covered many types of shell expansions in this article, and I hope the simple examples can serve as a cookbook for what is possible in bash to make you more productive with shell expansions.  As further references I recommend reading the full Bash Manual, and also the many good articles on NixCraft website about bash scripting including Shell Expansions.  We have other articles that may be of interest to you on LinuxHint including: 30 Bash Script Examples, Bash Lowercase Uppercase Strings, Bash Pattern Matching, and Bash Split String Examples.  Also we have a popular free 3 hour course on Bash Programming you can find on YouTube. ]]> What is vm.min_free_kbytes and how to tune it? https://linuxhint.com/vm_min_free_kbytes_sysctl/ Mon, 16 Mar 2020 17:16:47 +0000 https://linuxhint.com/?p=56576 What is vm.min_free_kbytes sysctl tunable for linux kernel and what value should it be set to?  We will study this parameter and how it impacts a running linux system in this article.  We will test its impact on the OS page cache and on mallocs and what the system free command shows when this parameter is set.  We will make some educated guesses on ideal values for this tunable and we will show how to set vm.min_free_kbytes permanently to survive reboots.  So let’s go.

How vm.min_free_kbytes works

Memory allocations may be needed by the system in order to ensure proper functioning of the system itself.  If the kernel allows all memory to be allocated it might struggle when needing memory for regular operations to keep the OS running smoothly.  That is why the kernel provides the tunable vm.min_free_kbytes.  The tunable will force the kernel’s memory manager to keep at least X amount of free memory.   Here is the official definition from the linux kernel documentation: “This is used to force the Linux VM to keep a minimum number of kilobytes free.  The VM uses this number to compute a watermark[WMARK_MIN] value for each lowmem zone in the system. Each lowmem zone gets a number of reserved free pages based proportionally on its size. Some minimal amount of memory is needed to satisfy PF_MEMALLOC allocations; if you set this to lower than 1024KB, your system will become subtly broken, and prone to deadlock under high loads. Setting this too high will OOM your machine instantly.“

Validating vm.min_free_kbytes Works

In order to test that the setting of min_free_kbytes is working as designed, I have created a linux virtual instance with only 3.75 GB of RAM.  Use the free command below to analyze the system:

# free -m

Looking at the free memory output above, we used the -m flag to have the values printed in MB.  The total memory is 3.5 to 3.75 GB.  121 MB of memory is used, 3.3 GB of memory is free, and 251 MB is used by the buffer cache.  3.3 GB of memory is available.

Now we are going to change the value of vm.min_free_kbytes and see what the impact is on the system memory.  We will echo the new value to the proc virtual filesystem to change the kernel parameter value as per below:

# echo 1500000 > /proc/sys/vm/min_free_kbytes
# sysctl vm.min_free_kbytes

You can see that the parameter was changed to 1.5 GB approximately and has taken effect.  Now let’s use the free command again to see any changes recognized by the system.

# free -m

The free memory and the buffer cache are unchanged by the command, but the amount of memory displayed as available has been reduced from 3327 to 1222 MB, which roughly matches the 1.5 GB of minimum free memory now reserved by the parameter change.

Now let’s create a 2GB data file and then see what reading that file into the buffer cache does to the values.  Here is how to create a 2GB data file in 2 lines of bash script below.  The script will generate a 35MB random file using the dd command and then copy it 70 times into a new data_file output:

# dd if=/dev/random of=/root/d1.txt count=1000000
# for i in `seq 1 70`; do echo $i; cat /root/d1.txt >> /root/data_file; done

Let’s read the file and ignore the contents by reading and redirecting the file to /dev/null as per below:

# cat data_file > /dev/null

Ok, what has happened to our system memory with this set of maneuvers, let’s check it now:

# free -m

Analyzing the results above, we still have 1.8 GB of free memory, so the kernel has protected a large chunk of memory as reserved because of our min_free_kbytes setting.  The buffer cache has used 1691 MB, which is less than the total size of our data file, which is 2.3 GB.  Apparently the entire data_file could not be stored in cache due to the lack of available memory to use for the buffer cache.  We can validate that the entire file is not stored in cache by timing repeated attempts to read the file. If it was cached, it would take a fraction of a second to read the file.  Let's try it.

# time cat data_file > /dev/null
# time cat data_file > /dev/null

The file read took almost 20 seconds, which implies it's almost certainly not all cached.

As one final validation let’s reduce the vm.min_free_kbytes to allow the page cache to have more room to operate and we can expect to see the cache working and the file read getting much faster.

# echo 67584 > /proc/sys/vm/min_free_kbytes
# time cat data_file > /dev/null
# time cat data_file > /dev/null

With the extra memory available for caching the file read time dropped from 20 seconds before to .364 seconds with it all in cache.

I am curious to do another experiment.  What happens with malloc calls to allocate memory from a C program in the face of this really high vm.min_free_kbytes setting?  Will the malloc fail?  Will the system die?  First reset the vm.min_free_kbytes setting to the really high value to resume our experiments:

# echo 1500000 > /proc/sys/vm/min_free_kbytes

Let's look again at our free memory with the free command:
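
# free -m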

Theoretically we have 1.9 GB free and 515 MB available.  Let's use a stress test program called stress-ng in order to use some memory and see where we fail.  We will use the vm tester and try to allocate 1 GB of memory.  Since we have only reserved 1.5 GB on a 3.75 GB system, I guess this should work.

# stress-ng --vm 1 --vm-bytes 1G --timeout 60s
stress-ng: info:  [17537] dispatching hogs: 1 vm
stress-ng: info:  [17537] cache allocate: default cache size: 46080K
stress-ng: info:  [17537] successful run completed in 60.09s (1 min, 0.09 secs)
# stress-ng --vm 2 --vm-bytes 1G --timeout 60s
# stress-ng --vm 3 --vm-bytes 1G --timeout 60s

Let's try it again with more workers; we can try 1, 2, 3, and 4 workers, and at some point it should fail.  In my test it passed with 1 and 2 workers but failed with 3 workers.

Let’s reset the vm.min_free_kbytes to a low number and see if that helps us run 3 memory stressors with 1GB each on a 3.75GB system.

# echo 67584 > /proc/sys/vm/min_free_kbytes
# stress-ng --vm 3 --vm-bytes 1G --timeout 60s

This time it ran successfully without error; I tried it two times without problems.  So I can conclude there is a behavioral difference: more memory is available for malloc when the vm.min_free_kbytes value is set to a lower value.

Default setting for vm.min_free_kbytes

The default value for the setting on my system is 67584, which is about 1.8% of the RAM on the system, or 64 MB. For safety reasons on a heavily thrashed system I would tend to increase it a bit, perhaps to 128 MB, to allow for more reserved free memory; however, for average usage the default value seems sensible enough.  The official documentation warns about making the value too high.  Setting it to 5 or 10% of the system RAM is probably not the intended usage of the setting, and is too high.

Setting vm.min_free_kbytes to survive reboots

In order to ensure the setting survives reboots and is not restored to the default value when rebooting, make the sysctl setting persistent by putting the desired new value in the /etc/sysctl.conf file.
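
As a sketch, assuming you want to keep the default value of 67584 (substitute your own chosen value), the persistent entry and a reload of the configuration would look like this:

# echo "vm.min_free_kbytes = 67584" >> /etc/sysctl.conf
# sysctl -p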

Conclusion

We have seen that the vm.min_free_kbytes linux kernel tunable can be modified to reserve memory on the system in order to ensure the system is more stable, especially during heavy usage and heavy memory allocations.  The default setting might be a little too low, especially on high memory systems, and increasing it should be considered carefully.  We have seen that the memory reserved by this tunable prevents the OS cache from using all the memory and also prevents some malloc operations from using all the memory.

How to Clear Cache on Linux https://linuxhint.com/clear_cache_linux/ Sat, 14 Mar 2020 11:14:54 +0000 https://linuxhint.com/?p=56477 The linux file system cache (Page Cache) is used to make IO operations faster.  Under certain circumstances an administrator or developer might want to manually clear the cache.  In this article we will explain how the Linux File System cache works.  Then we will demonstrate how to monitor the cache usage and how to clear the cache.  We will do some simple performance experiments to verify the cache is working as expected and that the cache flush and clear procedure is also working as expected.

How Linux File System Cache Works

The kernel reserves a certain amount of system memory for caching the file system disk accesses in order to make overall performance faster.  The cache in linux is called the Page Cache. The size of the page cache is configurable with generous defaults enabled to cache large amounts of disk blocks.  The max size of the cache and the policies of when to evict data from the cache are adjustable with kernel parameters.  The linux cache approach is called a write-back cache.  This means if data is written to disk it is written to memory into the cache and marked as dirty in the cache until it is synchronized to disk.  The kernel maintains internal data structures to optimize which data to evict from cache when more space is needed in the cache.

During Linux read system calls, the kernel will check if the data requested is stored in blocks of data in the cache, that would be a successful cache hit and the data will be returned from the cache without doing any IO to the disk system.  For a cache miss the data will be fetched from IO system and the cache updated based on the caching policies as this same data is likely to be requested again.

When certain thresholds of memory usage are reached, background tasks will start writing dirty data to disk to free up the memory cache.  These can have an impact on the performance of memory and CPU intensive applications and require tuning by administrators and/or developers.

Using Free command to view Cache Usage

We can use the free command from the command line in order to analyze the system memory and the amount of memory allocated to caching.  See command below:

# free -m

What we see from the free command above is that there is 7.5 GB of RAM on this system.  Of this only 209 MB is used and 6.5 GB is free.  667 MB is used in the buffer cache.  Now let's try to increase that number by running a command to generate a file of 1 gigabyte and reading the file.  The command below will generate approximately 100MB of random data and then append 10 copies of the file together into one large_file.

# dd if=/dev/random of=/root/data_file count=1400000
# for i in `seq 1 10`; do echo $i; cat data_file >> large_file; done

Now we will make sure to read this 1 Gig file and then check the free command again:

# cat large_file > /dev/null
# free -m

We can see the buffer cache usage has gone up from 667 to 1735 megabytes, roughly a 1 gigabyte increase in the usage of the buffer cache.

Proc Sys VM Drop Caches Command

The linux kernel provides an interface to drop the cache; let's try out these commands and see the impact on the free output.

# echo 1 > /proc/sys/vm/drop_caches
# free -m
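
Note that echo 1 drops only the page cache.  According to the kernel documentation, writing 2 drops reclaimable slab objects such as dentries and inodes, and writing 3 drops both; it is also reasonable to run sync first so that dirty pages are written out before dropping.  A sketch of the more aggressive variant:

# sync
# echo 3 > /proc/sys/vm/drop_caches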

We can see above that the majority of the buffer cache allocation was freed with this command.

Experimental Verification that Drop Caches Works

Can we do a performance validation of using the cache to read the file? Let’s read the file and write it back to /dev/null in order to test how long it takes to read the file from disk.  We will time it with the time command.  We do this command immediately after clearing the cache with the commands above.
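
Concretely, the sequence is a cache drop followed by a timed read of the file, along the lines of:

# echo 1 > /proc/sys/vm/drop_caches
# time cat large_file > /dev/null
# time cat large_file > /dev/null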

It took 8.4 seconds to read the file.  Let’s read it again now that the file should be in the filesystem cache and see how long it takes now.

Boom!  It took only .2 seconds compared to 8.4 seconds to read it when the file was not cached.  To verify let’s repeat this again by first clearing the cache and then reading the file 2 times.

It worked perfectly as expected.  8.5 seconds for the non-cached read and .2 seconds for the cached read.

Conclusion

The page cache is automatically enabled on Linux systems and will transparently make IO faster by storing recently used data in the cache.  If you want to manually clear the cache that can be done easily by sending an echo command to the /proc filesystem indicating to the kernel to drop the cache and free the memory used for the cache.  The instructions for running the command were shown above in this article and the experimental validation of the cache behavior before and after flushing were also shown.

About lspci Command on Linux https://linuxhint.com/lspci_command/ Fri, 13 Mar 2020 18:41:20 +0000 https://linuxhint.com/?p=56442 The lspci command is a utility on linux systems used to find out information about the PCI buses and devices connected to the PCI subsystem. You can understand the meaning of the command by considering the word lspci in two parts.  The first part, ls, is the standard utility used on linux for listing information about the files in the filesystem.  The second part is pci, so naturally the lspci command will list information about the PCI subsystem the same way that ls lists information about the file system.

In this article we will explain the basics of PCI, PCIe and the lspci command to display information on your system.

What is PCI?

PCI, or Peripheral Component Interconnect, is an interface for adding additional hardware components to a computer system.  PCIe, or PCI Express, is the updated standard that is used today.  For example, let's say you want to add an Ethernet card to your computer so that it can access the internet and exchange data.  The card needs a protocol to communicate with the rest of the internal system, and PCI can be the standard interface used to add this card to your system.  You still need a driver for this card in order for the kernel to use it; however, PCI is the slot, bus, and interface that will be used to add the hardware into the system in a standard way.  Creation of a PCI linux driver follows some standard interfaces; you can see documentation for creating a PCI linux driver here.  You can see from the struct below the standard methods that must be implemented, such as probe, remove, suspend, and resume.

struct pci_driver {
        struct list_head node;
        const char *name;
        const struct pci_device_id *id_table;
        int (*probe)(struct pci_dev *dev, const struct pci_device_id *id);
        void (*remove)(struct pci_dev *dev);
        int (*suspend)(struct pci_dev *dev, pm_message_t state);
        int (*resume)(struct pci_dev *dev);
        void (*shutdown)(struct pci_dev *dev);
        int (*sriov_configure)(struct pci_dev *dev, int num_vfs);
        const struct pci_error_handlers *err_handler;
        const struct attribute_group **groups;
        struct device_driver driver;
        struct pci_dynids dynids;
};

PCI Speeds and Uses

PCIe 3.0 can transfer data at up to about 1 GB/sec per lane.  Different devices can use more than one lane, so it's possible for individual devices to reach multiple gigabytes per second of data transfer.  These numbers are always improving as new versions of the specification and new hardware come out, so always check for the latest and fastest you can find.  Types of components and gadgets that you can buy that plug into a PCI interface include: WIFI adapters, Bluetooth adapters, NVMe solid state storage cards, graphics cards, and more.

Exploring the lspci Command

I have created a Ubuntu 19.04 instance on Google cloud and will now run the lspci command and see what happens.

What you see is one line per device with a numerical code and a verbal description of the device.  There are actually 5 fields displayed in this output per line: Slot, Class, Vendor, Device, and Revision.

So breaking down the first line what we have:

SLOT: 00:00.0
Class: Host bridge
Vendor: Intel Corporation
Device: 440FX – 82441FX PMC
Revision: 02

And looking at Slot 00:04.0 that is our Ethernet controller, which appears to be a virtual device as part of the virtual magic of Google’s cloud deployment.

To get more detailed, verbose information about each PCI slot, run the following command:

# lspci -vmm

This command will break down each line into its component fields and allow you to analyze each device with more descriptive labels.

You can also try the -v option for more verbose output

# lspci -v

And use double v or triple v for even more verbose output:

# lspci -vvv

Or try the -mm option for script readable output format.

# lspci -mm

In order to see which kernel driver is being used for each device, run the -k option:
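
# lspci -k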

Many of my devices are using virtio-pci driver.

Lastly you can even see a hexadecimal dump of “the standard part of the configuration space” for each PCI device.  You would need to be a real kernel hacker to figure out how to use that information.  The -x option is what gives you the dump output.

# lspci -x

Conclusion

The lspci command is a standard Linux command that can be used to list information about the PCI connected devices on your system.  This can be useful for knowing what hardware peripherals you have.  It's also super useful for developers, device driver creators, and low level system folks to query information about the devices, the drivers, and the system.  Enjoy using lspci.

Bash Until Loops https://linuxhint.com/bash_until_loops/ Tue, 06 Aug 2019 11:19:42 +0000 https://linuxhint.com/?p=44739 There are several types of loops that can be used in bash scripts: for loops, while loops, and until loops.

Conceptually the for loop should be used to loop through a series of items, such as looping through each item in an array or each file in a directory, etc. The while loop should be used as long as a certain condition is true, such as while a counter is less than a maximum value or the ping time to a server is lower than a threshold, or forever if you loop while TRUE or while 1.

The until loop is similar to the while loop but with reverse logic. Instead of looping while a condition is true you are assuming the condition is false and looping until it becomes true. They are reverse of each other in logical expression. Choosing the correct loop between a while loop and until loop allows your program to be more readable and understandable by others or yourself when you come back to the code sometime later.

Some typical examples of reasons to use an until loop could be: loop until the user enters ‘exit’; loop until the data generated is greater than the requested data volume; or loop until a number of files that match your search are found.

The basic syntax of UNTIL loop looks like this:

until [ CONDITION ]; do
  LINES OF CODE
  MORE LINES OF CODE
done

Now let's take some examples. The first example will multiply by a factor of two until reaching a size threshold of 1000:

#!/bin/bash
NUM=1
until [ "$NUM" -gt 1000 ]; do
  echo $NUM
  let NUM=NUM*2
done

The second example will continue to ping a URL until the response time is greater than 1000 milliseconds:

#!/bin/bash
MILLISECONDS=0

# we will ping until it becomes slower than 1000 milliseconds
until [ $MILLISECONDS -gt 1000 ]
do
  # run the ping and extract the line that has the ping time, which ends in time=XXXX ms  
  OUTPUT=`ping -c 1 google.com | grep time | awk -F= '{ print $NF }'`
  echo "Ping time: $OUTPUT"

  # extract the number of milliseconds from the string as an integer
  MILLISECONDS=`echo $OUTPUT | awk '{ print $1 }' | awk -F. '{ print $1 }' `
  echo "Number of ms = $MILLISECONDS"

  sleep 1
done

echo "ping time exceeded 1000 milliseconds"

The third example will take a file and will combine the file with itself until it exceeds 1024 KB (about 1 MB) in size as reported by du:

#!/bin/bash
FILENAME=`basename "$0"`
echo $FILENAME
TMP_FILE="./tmp1"
TARGET_FILE="./target"
cat $FILENAME > $TARGET_FILE
FILESIZE=0

# increase file size until du reports more than 1024 KB (about 1 MB)
until [ $FILESIZE -gt 1024 ]
do
  # add this file to target file content
  cp $TARGET_FILE $TMP_FILE
  cat $TMP_FILE >> $TARGET_FILE

  FILESIZE=`du $TARGET_FILE | awk '{ print $1 }'`
  echo "Filesize: $FILESIZE"

  sleep 1
done

echo "new filesize reached target of 1KB"

The fourth example will ask the user for input of their name until they type exit to quit the program:

#!/bin/bash
RESPONSE="FOO"

# keep prompting until the user types exit
until [ "$RESPONSE" = "exit" ]
do
  echo -n "Enter your name or 'exit' to quit this program: "
  read RESPONSE
  if [ "$RESPONSE" != "exit" ]; then
    echo "Hello $RESPONSE"
  fi
done

echo "Thank you for playing this game"

CONCLUSION

The key point is to use the UNTIL loop to make your code more clear when the condition is expected to start out false and you want to stop your looping action when the condition becomes true.  In other words, continue looping UNTIL some point in time.  With this perspective I hope your bash scripts can be more clear and that you have learned something from this article. Thank you. ]]> CentOS8 Release Date and Features https://linuxhint.com/centos8-release-date-and-features/ Sun, 12 May 2019 19:35:34 +0000 https://linuxhint.com/?p=39995

CentOS Logo


Red Hat released RHEL version 8.0 on May 7, 2019, so lots of folks are looking for where the equivalent build of CentOS is. Well, long story short, looking at the history it takes about a month to spin out a production release of CentOS after RHEL is released. Red Hat released RHEL7 on June 10, 2014 and CentOS7 was released officially on July 7, 2014, almost a month later. So you should expect, roughly, to see CentOS 8 released in the month of June 2019.

Once CentOS8 is released you can download it from the official project download site.

If you are clamoring to track the blow by blow status of the release progress, keep an eye on the project status page for the creation of CentOS 8.

Now the question: you have waited your time, so what will you get? Well, first of all you get a version that is supported longer than CentOS7. The fact of technology is that you always have to upgrade from time to time to stay on a working and supported version. So at some point, regardless of the features provided in CentOS8, you will want to upgrade in order to be on the latest version if you like your CentOS systems.

Some interesting highlights from the new release that caught my eye:

  • Java 8 and Java 11 are both supported and native to the platform
  • Python 3 is the default Python version, but it is not installed out of the box; you have to install it yourself with yum
  • New Composer Tool for creating system images
  • New Stratis storage management tool
  • Session Recording which can be used for recording all activity of users who connect to a system via ssh. This could be interesting from a security point of view. This uses SSSD, system security services daemon
  • XFS Copy on Write Data Extents which can help with performance and disk usage by not copying large files until data is changed in the copy
  • System Wide Cryptographic Policies, which make it easier to create and manage configurations of all the necessary security protocols that are available such as network, disk, dns encryption, and kerberos etc.
  • The Virtual Data Optimizer which provides native storage de-duplication by the linux kernel, awesome!
  • Encrypted Root FileSystem for enhanced security

In summary it looks like RHEL8, and the CentOS 8 to follow, is a feature packed release, especially for security, performance, containerization, and virtualization. So it's not just about staying up to date; there is real value in this release and it should be well received by the user community. Keep your eyes and ears open in the month of June for the drop of CentOS 8.

journalctl tail and cheatsheet https://linuxhint.com/journalctl-tail-and-cheatsheet/ Sun, 14 Apr 2019 20:25:59 +0000 https://linuxhint.com/?p=38742 journalctl is a fancy new utility in linux distributions, such as Ubuntu, Debian, CentOS and others, that wraps and abstracts the system log into a command line interface tool, making it easier to find what you are looking for. The data is structured and indexed so it's not like you are searching plain text files using grep; you have much more advanced searching and finding capabilities. You can use the journalctl command to print all the system logs, you can query it with a finer grained query, and sometimes you just want to TAIL the system logs to watch the system live as it operates. The --follow flag is used for the tail operation.

TL;DR : run journalctl -f

-f is the short option for --follow. You can think of running journalctl -f as doing a tail operation on the system log.

journalctl cheatsheet

-a or --all

Show all characters, even long and unprintable lines and characters

-f or --follow

Like a tail operation for viewing live updates

-e or --pager-end

Jump to the end of the log

-n or --lines=

Show the most recent n number of log lines

-o or --output=

Customizable output formatting. See the man page for formatting options. Some examples include journalctl -o verbose to show all fields, journalctl -o cat to show compact terse output, and journalctl -o json for JSON formatted output.

-x or --catalog

Explain the output fields based on metadata in the program

-q or --quiet

Suppress warnings or info messages

-m or --merge

Merge local and remote entries based on time

--list-boots

Print out the boot IDs, which can later be used for filtering on a specific boot

-b [ID][±offset]

Filter only based on the specified boot

-k or --dmesg

Filter only kernel messages

-g or --grep

Filter based on perl-compatible regular expressions for specific text

--case-sensitive[=BOOLEAN]

Make the pattern matching from --grep case sensitive (true) or case insensitive (false)

-S, --since=, -U, --until=

Search based on a date. "2019-07-04 13:19:17", "00:00:00", "yesterday", "today", "tomorrow", and "now" are valid formats. For complete time and date specification, see systemd.time(7)

--system

Show system messages only

--user

Show user messages only

--disk-usage

Shows space used by this log system
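
As a quick sketch combining the options above, you could tail only kernel messages live, or search yesterday's logs for a pattern:

$ journalctl -k -f
$ journalctl --since yesterday -g "error"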

The journalctl system takes system logging to the next level. To see all the options be sure to read the man page. I hope this cheat sheet helps you get started with some quick options.

Learn Bash Programming https://linuxhint.com/learn-bash-programming/ Sat, 23 Mar 2019 01:04:50 +0000 https://linuxhint.com/?p=37888 Hi there. Are you new to the world of linux and trying to get around the shell? Do you want to become more effective hacking around in the terminal? Do you want to start scripting and automating repeated jobs? Do you want to learn bash programming and become a GURU?

Ok Great! Let’s do it.

Start with learning some of the basic commands that you can run from the shell. Some commands you will want to learn include: date, ls in order to look at files and directories, rm to remove files, mkdir to create a new directory, whereis to find a program or utility’s path that you are looking for, chmod to set permissions, chown to set ownership, perform commands on multiple targets with wildcards, and then find files you are looking for with find command.

Manipulating strings and numbers will be a common task. Compare strings to each other, force strings to lower case and upper case, learn proper escaping of strings, use string encoding when needed, convert hexadecimal to decimal format, use globbing on strings to find expected patterns, perform arithmetic operations, loop through a list of strings in a script, and return a string as the result of a function.

Now if you are ready to get dirty with data, let's learn some more advanced commands such as cut, grep, awk, uniq, and tr to manipulate streams of data. These articles show multiple examples of how these commands can shift and sift through what you are looking for in files.

If you are going to want to make your scripts professional and fancy, you will need to parse the command line arguments of the script. Getopts can help with parsing, you can create professional menu options with the select command, and you can wait for user input and then read it from the user with the read command. You can also make your scripts look more professional by playing with colors in the terminal.

Now in your scripts you want to do some actual coding and programming to make complex logic in the script so you will need some programming concepts such as storing command results in variables, conditional statements like if and else and the case command. Loops in bash allow you to iterate through large jobs of actions. You can do for loops and while loops in bash. Arrays are commonly used in programming languages to store sequences of element data. Arrays can also be used in bash. You can even make bash into a more strongly typed programming environment using the declare command.

Any professional bash person should also be familiar with environment variables, bash history and bash alias to setup and use the shell more effectively. Be sure to have that knowledge.

Reading from files is critical in bash jobs. Some of the tasks you might want to learn include reading a file line by line or using the head and tail commands to read just the beginning or end of a file. If you have JSON data in a file and want to parse it you can use the jq command for that.

Interacting with websites and web resources you can use the curl command, or the mail command to send an email from the shell.

Timing and dynamic interaction of scripts with real world events can be tricky. There are numerous tricks, techniques and commands in bash to help you automate event handling. For example there is the yes command, so you don't have to type 'yes' but can have it programmatically respond to commands that ask for confirmation. To pause or sleep in a script, master the sleep command, or the subtly different wait command. Run multiple commands in one bash line using pipes, AND, and OR operators to sequence and combine tasks. Keep shell sessions open even if you close the window with the screen command or the nohup command.

If you want to see a variety of typical scripts in action, check out 30 examples of bash scripts.

Or if you want to get fancy, look at these techniques to impress your boss or colleagues: iterate over sequences generated on the shell, learn about HEREDOC, tput, printf and shell expansions to create awesome outputs with bash scripts, or use inotify and rsync to create a live backup system using only a while loop in a bash script.

Finally don't forget to add comments to your bash scripts! It helps others to read your scripts and it helps you when you come back to them after some time for sure! ]]> CentOS Update https://linuxhint.com/centos-update/ Sat, 22 Dec 2018 22:01:36 +0000 https://linuxhint.com/?p=34374 Keeping your packages up to date is important to prevent running into known and already fixed bugs as well as patching any security vulnerabilities that might have been found by the distribution and package maintainers. It's not hard to do, so let's get right to it.

The first command you want to know is yum check-update. If you are not familiar with yum, read our primer on yum first and then come back here. The check-update command will print out a list of any packages for which an update is available. For scripting purposes it will also return an exit value of 100 if updates are required, 0 if no updates are required or 1 if an error occurred.

Here is an example of how to check for updates in CentOS:

yum check-update > /dev/null
RC=$?
if [ $RC -eq 100 ]; then
   echo "Updates are needed"
elif [ $RC -eq 0 ]; then
   echo "No updates are needed"
else
   echo "An error occurred in the package update check, try again"
fi

yum check update centos

And here is an example of printing out the updates as needed:

yum check-update > ./output
RC=$?
if [ $RC -eq 100 ]; then
    cat ./output
fi

We can also check updates for a single package with yum update and NOT specifying Y, for yes, when asked. If you do press Y, for yes, the update will proceed for the specified package. For example I will do a check on the package vim-minimal now:

yum update vim-minimal

If you want to proceed and update all packages, then go ahead and run yum update and do not provide any package names. It will find all out of date packages and update them all after you confirm Y for yes at the prompt.

# yum update

yum update

After the update is complete you can re-run the check script above and expect to see nothing to update.

yum check-update > /dev/null
RC=$?
if [ $RC -eq 100 ]; then
   echo "Updates are needed"
elif [ $RC -eq 0 ]; then
   echo "No updates are needed"
else
   echo "An error occurred in the package update check, try again"
fi

CentOS no update needed

Conclusion

It's important to keep your CentOS system up to date. You can use the above methodology to help.

How to Install PostgreSQL on Debian https://linuxhint.com/install-postgresql-on-debian/ Wed, 19 Dec 2018 17:28:04 +0000 https://linuxhint.com/?p=34032 Debian is one of the most successful and independent linux operating system distributions and PostgreSQL is the same for relational database management systems (RDBMS). Both are independent of large corporate control and will allow you to have a free and powerful user experience, hosting a server with a relational database running on it securely.

In this article I will demonstrate how to install PostgreSQL on Debian. We will use the latest stable versions of both Postgres and Debian at the time of this article, and I expect the process to not vary widely for several years, making this tutorial still accurate. We will use the native repo of Debian and not any custom process to have a fully Debian experience. The current Debian version is 9.6 and the current PostgreSQL version is 9.6, released in 2016. Yes that is old, but we are going with the stable versions natively provided by Debian. Also, it's just a complete coincidence that both Debian and PostgreSQL have the same version number at this time; please don't read anything into that other than pure coincidence. This will ensure the most stability, which is recommended for mission critical usage. I will start with a fresh install of Debian on Digital Ocean in order to ensure the system is clean and the process is reproducible.

Prepare the System

Firstly, let's do a quick apt-get update and apt-get upgrade to ensure that all of the packages already installed on the system are up to date.

$  apt-get update
$  apt-get upgrade

Install PostgreSQL

There are numerous PostgreSQL packages you can see by running apt-cache search. The package we want to install is called just postgresql. We will install it with apt-get install.

$ apt-cache search postgres
$ apt-get install postgresql

Run dpkg to verify the install was completed and PostgreSQL related packages are installed:


$ dpkg -l | grep postgre

Starting and Stopping PostgreSQL

On the Debian platform, there is a convenience service to manage PostgreSQL, so we will not be running initdb or starting and stopping the database using native commands like pg_ctl. Check the help message for the PostgreSQL service by running the command:

$ service postgresql



Before we begin trying to start and stop the processes, let's verify the configuration files. On Debian the configuration files are installed via the postgresql-common package under the location /etc/postgresql.

PostgreSQL Configuration Files

The postgresql.conf is the main database configuration file, where you can set custom settings for your installation. The pg_hba.conf is the access configuration file. These are started with sane and secure defaults for a Debian server. Notice the pg_hba.conf is configured for local access only, so you will need to update this file according to the documentation when you want to grant access to users or applications to connect to the database remotely.

Ok, let's practice stopping and starting the database with the provided service. With the service postgresql command you can provide the arguments stop, start, and status in order to control the server.

service postgresql start
service postgresql stop
service postgresql status

Connecting to the Database

By default PostgreSQL is installed in a fairly secure fashion. A linux user named postgres is created by default and this user has local access to connect to the database without any extra configuration steps. Even root can not login to the database by default. Let’s try to connect to the database as root user.

Root access denied

So rather, let us change linux user to the postgres user id, and then we can access the system.

$ su - postgres
$ psql -l
$ psql postgres


Login as linux user: postgres

To verify the system is working, let us create a database from the command line with the createdb utility. We will then update the pg_hba.conf, restart the database and connect to this database.

As user postgres, create the new database:

$ createdb linuxhint

As user root, modify the pg_hba.conf to add the authentication for the new database.

Updated pg_hba.conf as root
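
As an illustrative sketch (the exact entry depends on your security requirements), a line like the following would allow a hypothetical linuxhint database user to connect locally to the linuxhint database with password authentication:

local   linuxhint   linuxhint   md5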

Then also as user root, reload the configuration files.

$ service postgresql reload

Finally change back to the postgres user and test the new database. We will test by logging into the linuxhint database, creating a table, adding 2 rows, and then querying the table, as shown below.

create test table
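
A rough equivalent from the shell, using psql -c to run each statement against the linuxhint database (the table and column names are just placeholders):

$ psql linuxhint -c "CREATE TABLE test (id int, name varchar(20));"
$ psql linuxhint -c "INSERT INTO test VALUES (1, 'linux'), (2, 'hint');"
$ psql linuxhint -c "SELECT * FROM test;"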

Conclusion

What you will finally want to do is design your database schema and configure your pg_hba.conf in order to allow external access to your database, and then you are off to the races. ]]> Install MySQL on CentOS 7.5 https://linuxhint.com/install-mysql-on-centos-75/ Sat, 24 Nov 2018 04:02:26 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=32925 In this tutorial I will show you how to install MySQL Database on the CentOS 7.5 operating system using the yum commands and the built in utilities of CentOS. It can be installed from source, or in other ways, but we will do it using CentOS native commands.

Firstly let’s make sure to update our system before starting in case of any out of date dependencies.

[root@centos7-linuxhint ~]# yum update

There are numerous packages available on CentOS related to MySQL without having to add the EPEL extra package repository. To see a list of them all try this command for yum search:

[root@centos7-linuxhint ~]# yum search mysql

Looking down the list, you can see there is no option for mysql itself, just various related packages. The reason is that Oracle purchased MySQL when they bought Sun Microsystems, and the founders of MySQL restarted the pure open source initiative under a new name, MariaDB; it's still basically MySQL, but fully open source.

So let's install the mariadb package.

Installing Mariadb Server

Run the yum install command for mariadb as such:

[root@centos7-linuxhint ~]# yum install mariadb
[root@centos7-linuxhint ~]# yum install mariadb-server

Run the following command to check which files were actually installed:

[root@centos7-linuxhint ~]# rpm -ql mariadb
[root@centos7-linuxhint ~]# rpm -ql mariadb-server

Using the MariaDB Service Controller

MySQL and MariaDB come with native utilities to initialize a database as well as to start and stop a database. mysql_install_db and mysqladmin are two primary utilities. However, given we are focused on the CentOS linux distribution, let's look at the service file that comes with the RPM files and can be used for a native CentOS experience.

The service comes in a script file and also can be run with typical commands such as the following:

[root@centos7-linuxhint ~]# ls -lart /usr/lib/systemd/system/mariadb.service
[root@centos7-linuxhint ~]# service mariadb status

You can now start the mariadb service with the service script as shown:

[root@centos7-linuxhint ~]# service mariadb start

To verify the service is running, let's connect to the DB with the mysql command line utility and run some basic commands once we are connected:

[root@centos7-linuxhint ~]# mysql
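
If you prefer to stay on the shell, a couple of basic checks can also be run non-interactively with the -e flag; a quick sketch:

[root@centos7-linuxhint ~]# mysql -e "SHOW DATABASES;"
[root@centos7-linuxhint ~]# mysql -e "SELECT VERSION();"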

Conclusion

That’s all it takes to get started installing MySQL and using it on CentOS, but of course to be an advanced user you will want to learn a lot more. For more info check the links below:

  • MySQL LinuxHint
  • MySQL Tutorial
  • MySQL Cookbook (amazon) ]]> Install Redis on CentOS 7.5 https://linuxhint.com/install-redis-on-centos-7-5/ Fri, 23 Nov 2018 23:52:43 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=32862 Redis is a quick, database-like server that can be used as an in-memory cache or data store. It's very popular in the context of scalable websites because it can store data in memory and be sharded to store large volumes of data and provide lightning fast results to users on the world wide web. Today we will look at how to install Redis on CentOS 7.5 and get started with its usage.

    Update Yum

    First start by updating your system to keep other packages up to date with yum update.

    Extra Packages for Enterprise Linux(EPEL)

    Redis server is not in the default repository on a standard CentOS7 install, so we need to install the EPEL package to get access to more packages.

    [root@centos7-linuxhint ~]# yum install epel-release

    After installing epel, you need to run yum update again.

    [root@centos7-linuxhint ~]# yum update

    Install Redis Server Package

    Now that the EPEL has been added a simple yum install command will install the redis server software.

    [root@centos7-linuxhint ~]# yum -y install redis

    After installation you will have the redis-server and redis-cli commands on your system, and you can also see that a redis service has been installed.

    Start the Redis Server

    Even though technically you can start a redis server using the inbuilt commands, let's use the service that is provided with CentOS to start, stop, and check the status of the redis server on the system.

    [root@centos7-linuxhint ~]# service redis start

    It should be running now, check it with status command:

    Storing and Retrieving Data

    Ok, now that Redis is running, let's start with a trivial example, store a key and value pair, and then see how to query it. We will use redis-cli with default options, which will connect to a server on localhost and the default redis port. Also note that in the real world, you should consider setting up proper security for your Redis instances.

    We will use the set and get commands in order to store a key value pair in the server. Here is a screen shot of an example:
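
    A minimal equivalent from the shell, using redis-cli one-shot commands (the key and value here are arbitrary):

    [root@centos7-linuxhint ~]# redis-cli set mykey "Hello LinuxHint"
    OK
    [root@centos7-linuxhint ~]# redis-cli get mykey
    "Hello LinuxHint"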

    You can also use the inline help to get a list of all the possible commands and the help text with them. Enter interactive mode from the redis-cli and then type help as shown below:

    Redis: More information

    For more information check out the following links below:

    lsb_release Command on Ubuntu https://linuxhint.com/lsb_release_ubuntu/ Thu, 05 Jul 2018 01:22:07 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=27826 The lsb_release command is a helpful utility to find out information about your Linux installation. I will test drive it in this article on my new Ubuntu 18.04 LTS release install.

    Why do we care about the lsb_release command? I was sitting there on my Ubuntu system trying to remember if I had already upgraded it or not and which version of Ubuntu I had. It was harder than I expected to find the version of Ubuntu I was running until I found lsb_release. Here is the command I used:

    :~$ lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:    Ubuntu 18.04 LTS
    Release:    18.04
    Codename:   bionic

    And a screenshot of the same:

    lsb_release -a on Ubuntu 18.04


    lsb_release -sc is a handy and popular command line option. It will show you the Codename only in brief. ‘s’ is for short output format and ‘c’ is for codename. See the code and screenshot below:

    :~$ lsb_release -sc
    bionic
    lsb_release -sc on Ubuntu 18.04


    lsb_release -d is good for a verbose description of the release version you have based on the number. See below:

    :~$ lsb_release -d
    Description:    Ubuntu 18.04 LTS
    lsb_release -d on Ubuntu 18.04


    No LSB modules are available.

    If you get the above error message from lsb_release -v or lsb_release with no arguments, you are missing the lsb-core package.

    Error message when missing lsb-core package

    Go ahead and install lsb-core as such:

    :~$ sudo apt-get install lsb-core

    Now try the lsb_release command with no arguments and see that the error message "No LSB modules are available" is replaced with real output:
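
    For example, running lsb_release -v (which is also the default behavior when no arguments are given) should now print an LSB Version line instead of the error. The exact list of modules and their version strings depends on which LSB packages are installed, so the output below is only a sketch:

    :~$ lsb_release -v
    LSB Version:    core-9.20170808ubuntu1-noarch:printing-9.20170808ubuntu1-noarch:security-9.20170808ubuntu1-noarch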

    lsb_release after installing lsb-core

    Parsing and understanding the output of lsb_release -v (the Linux Standard Base version) is not easy, but the purpose of the LSB is to provide compatibility between Linux distributions built off of the same base components. That seems like an admirable goal; however, in this author's experience, software is rarely compatible between different Linux distributions, and packages are usually built for each major distribution anyway, so the compatibility layer is not often needed in practice.

    That being said, the lsb_release tool itself is quick, is available right from the command line, and helped me find the info I was looking for: basic information about the version of the Linux distribution currently being run.

    Ubuntu 18.04, Bionic Beaver, is Officially Released https://linuxhint.com/ubuntu-18-04-officially-released/ Sat, 28 Apr 2018 15:13:06 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=25653 The time has come for the official release of Ubuntu 18.04, which was promised for the 4th month of 2018 as per the Ubuntu numbering scheme. Lots of things have been written about the release, so in this article I just want to point you to all the great resources already out there.

    1. LinuxHint: Ubuntu 18.04 LTS New Features
    2. Linux Journal: Gmail Redesign, New Cinnamon 3.8 Desktop and More
    3. Das Bityard: A Lengthy, Pedantic Review of Ubuntu 18.04 LTS
    4. QuidsUp: Video Review of 18.04 Ubuntu
    5. OMG Ubuntu: Ubuntu 18.04 Flavours
    6. LinuxHint: How to Upgrade to Ubuntu 18.04
    7. LinuxHint: Ubuntu 18.04 LTS Minimal Install Guide
    8. LinuxHint: Run Ubuntu 18.04 From USB Stick
    9. OMG Ubuntu: 11 Things To Do After Installing Ubuntu 18.04 LTS
    10. LinuxHint: Upgrade Kernel on Ubuntu 18.04
    11. LinuxHint: Install Oracle JDK 10 on Ubuntu 18.04

    With these links you should be able to see all the great new features of Ubuntu 18.04 and also how to get started using it. Enjoy!

    Coverity Scan Service Hacked! https://linuxhint.com/coverity_oss_system_hacked_2018/ Tue, 20 Mar 2018 09:23:29 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=23974

    The Register is reporting that the Coverity community server used by open source projects was hacked and was being used for cryptocurrency mining. OMG, what is the world coming to?

    Synopsys, the company which owns Coverity, was quoted by theregister.co.uk as saying: “The service was down for about four weeks. We took the service down immediately upon discovering the unauthorized access. We engaged a leading computer forensics company to independently assist in the investigation, and kept the service down until we completed the investigation. The investigation reported no evidence that database files or artifacts uploaded by the open source community users of the Coverity Scan service were accessed.”

    The system is apparently now restored, and the company is working with security consultants. This, coming on the heels of the widely publicized Scarlett Johansson PostgreSQL cryptocurrency hack, has us all wondering how to shut down these types of attacks.

    Security Vulnerability Hidden in Scarlett Johansson Image https://linuxhint.com/security-vulnerability-hidden-in-scarlett-johansson-image/ Sat, 17 Mar 2018 04:50:13 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=23761

    There is a new security vulnerability, found by the community, hidden in an image of the famous bombshell actress Scarlett Johansson. To make this more strange and intriguing, the image file of Scarlett contains embedded code which can be used to start Monero cryptocurrency mining!

    The attack targets PostgreSQL databases: the attacker inserts a stored function into the database and then calls that function to execute exploit code. The code runs in the system shell via the stored function and begins reconnaissance on the victim system, checking what type of GPU is installed and whether it can be used for crypto mining. Once a suitable GPU is identified, the attacker starts cryptomining on the victim system and credits the profits to their own account.
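
    For readers wondering how database access can reach the operating system at all, here is a minimal, harmless sketch of the general mechanism, not the actual exploit code; the table name is just an illustration. A PostgreSQL superuser can run a server-side program through COPY ... FROM PROGRAM, which is enough to perform the kind of GPU reconnaissance described above:

    -- Demonstration only, on a test server; requires superuser privileges.
    CREATE TABLE probe_output(line text);
    COPY probe_output FROM PROGRAM 'lspci | grep -i vga';
    SELECT * FROM probe_output;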

    Imperva Notes In Their Report:

    After logging into the database, the attacker continued to create different payloads, implement evasion techniques through embedded binaries in a downloaded image, extract payloads to disk and trigger remote code execution of these payloads. Like so many attacks we’ve witnessed lately, it ended up with the attacker utilizing the server’s resources for cryptomining Monero.

    The security company Imperva, which was first to identify this vulnerability, has written a detailed report about it.

    The above image and quotes are provided by Imperva; please see the full report.

    Linus Torvalds Slams AMD CPU flaw security report https://linuxhint.com/linus-torvalds-slams-amd-cpu-flaw-security-report-2018-03/ Thu, 15 Mar 2018 20:33:03 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=23722

    thumbnail courtesy of theinquirer.net

    The Spectre and Meltdown security vulnerabilities have woken the industry up to potential security flaws in hardware that can be exploited to compromise the integrity of a computer's native, role-based security controls.

    Now a new report has indicated potential vulnerabilities in AMD CPUs, but Linus Torvalds has jumped into the discussion and shot the report down as not technically sound.

    Linus said on Google+:

    “I thought the whole industry was corrupt before, but it’s getting ridiculous.”

    Read more details about the report and Linus's argument from The Inquirer, who has written it up nicely: Linus Torvalds casts shade on CTS Labs’ AMD CPU flaw security report

    Memcached DDOS Vulnerabilities Impacting The Internet March 2018 https://linuxhint.com/memcached-ddos-vulnerabilities-march-2018/ Mon, 12 Mar 2018 21:45:57 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=23693 Apparently, Memcached has recently become exploitable, and the exploit is floating around the Internet. The vulnerabilities in Memcached have caused downtime at GitHub in recent days, and many other sites are vulnerable. Check out this link to the story at theregister.co.uk, and a few other links below on the same topic.

    “Attacks tapering, as experts argue over ‘kill switch’: DDoS attacks taking advantage of ill-advised use of memcached have begun to decline, either because sysadmins are securing the process, or because people are using a potentially-troublesome ‘kill switch’.” (from “Cavalry riding to the rescue of DDOS-deluged memcached users”, theregister.co.uk)

    thumbnail courtesy of theregister.co.uk

    Wired’s In-Depth Coverage of the Incidents

    Coverage by Security Firm Corero, How to Shut It Down, and How Bad is this Vulnerability?

    Bank Info Security Coverage of the Event

    Ubuntu 18.04 LTS Beta 1 Released for All Ubuntu Derivatives https://linuxhint.com/ubuntu-18-04-lts-derivatives-beta-1-released/ Sun, 11 Mar 2018 19:39:38 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=23625

    OMG Ubuntu has pointed out that betas are now available for all major Ubuntu derivative platforms! Get started with those new betas today. Xubuntu with Xfce is great for folks with old systems and low system resources!

    “Ahoy, Beavers! The first beta builds of the Ubuntu 18.04 release cycle have been released and are available to download. All of Ubuntu’s official family of flavors have opted to take part in this bout of Bionic Beaver beta testing, including: Xubuntu, with Xfce 4.12; Ubuntu Budgie, with Budgie 10.4; Kubuntu, with Plasma 5.12; Ubuntu MATE, with MATE.” (from “Ubuntu 18.04 LTS Beta 1 Released for Participating Flavors”)

    thumbnail courtesy of omgubuntu.co.uk

    How to Install PostgreSQL on Ubuntu Linux: The Easy Way https://linuxhint.com/install-postgresql-ubuntu-easy/ Thu, 28 Dec 2017 02:51:28 +0000 https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=20977 PostgreSQL is a top-ranked open source relational database management system that originated at the University of California, Berkeley and saw its first release under the PostgreSQL name in 1996. It is now developed by the PostgreSQL Global Development Group and is licensed under the PostgreSQL License, a permissive license similar to the MIT License.

    In this article I will show you how to install and set up PostgreSQL the easy way on Ubuntu Linux. “The Easy Way” implies that we will use the version of PostgreSQL that comes with the Ubuntu distribution and not get picky about specifying a different version. So let’s get started.

    Run apt-get to install the postgresql package for Ubuntu as such:

    sudo apt-get install postgresql

    After the command completes, the PostgreSQL software will be installed and configured to an initial running and usable state. To verify what has been done, try the following commands:

    ps -ef | grep postgres

    sudo su - postgres
    pwd
    psql -l


    Now check the output from the ps command that was run earlier and note the location of the config_file. In my example the following argument was added on the command line:

    -c config_file=/etc/postgresql/9.6/main/postgresql.conf

    Let’s open the postgresql.conf configuration file to see what we can learn. The following interesting entries are specified, which help us understand how PostgreSQL was installed on this system:

    data_directory = '/var/lib/postgresql/9.6/main' # use data in another directory
    # (change requires restart)
    hba_file = '/etc/postgresql/9.6/main/pg_hba.conf' # host-based authentication file
    # (change requires restart)
    port = 5432 # (change requires restart)

    From the above we can see some critical settings. The data_directory is where the data we insert into the database is actually stored; we should not need to touch that at all. The hba_file is where we update access permissions for new connections to the database, and it is certainly something we will want to modify when we set up more robust security. By default, passwords are used, but LDAP or Kerberos are probably desired in a more secure setting. The port is set to 5432, which is the standard port. We could change it to a different port to be slightly more obscure, but I don’t think that really helps much against sophisticated attackers.
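
    For reference, each non-comment line in pg_hba.conf maps a connection type, database, user, and client address to an authentication method. As a rough sketch (your file will already ship with similar defaults), a rule that allows password logins from the local machine looks like this:

    # TYPE  DATABASE  USER  ADDRESS       METHOD
    host    all       all   127.0.0.1/32  md5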

    Before making any other configuration changes, let’s run some simple queries and look at the output to get a feel for what is set up.

    $ psql postgres
    postgres=# SELECT * FROM pg_user;
    postgres=# SELECT * FROM pg_database;

    Next, let us create a new user, other than the postgres superuser, that can log in to the database. Use the following command:

    createuser -EPd sysadmin

    ‘E’ means store the password for this user encrypted, ‘P’ means prompt now for the new user’s password, and ‘d’ means allow the new user to create databases in the system. Now you can exit out of the Linux user ‘postgres’, and from a regular user’s command prompt connect to the database:

    psql -U sysadmin -h127.0.0.1 postgres

    To make this easier to use we can set a few environment variables as shown below:

    export PGUSER=sysadmin
    export PGHOST=127.0.0.1
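
    PGUSER and PGHOST are standard libpq environment variables, so once they are exported the same connection can be made without repeating the options:

    psql postgres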

    And finally before getting started, let us create a new database that we can use for our data with the createdb command:

    createdb mywebstore

    The command above will create a new database in the system called ‘mywebstore’, which can be used for storing your user data. And with that, we have installed and set up PostgreSQL on Ubuntu “The Easy Way”.
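
    As a quick sanity check, you could connect to the new database and create a first table; the table and column names below are only an illustration and not part of the original setup:

    psql mywebstore
    mywebstore=> CREATE TABLE products (id serial PRIMARY KEY, name text NOT NULL, price numeric(10,2));
    mywebstore=> INSERT INTO products (name, price) VALUES ('widget', 9.99);
    mywebstore=> SELECT * FROM products;
    mywebstore=> \q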

    References

    PostgreSQL Wikipedia Page
    PostgreSQL Project Home Page
    PostgreSQL Official Documentation
    PostgreSQL License