Mats Tage Axelsson – Linux Hint (https://linuxhint.com) – Exploring and Mastering the Linux Ecosystem

Arduino With Python Tutorial for Beginners https://linuxhint.com/arduino-with-python-beginners-tutorial/ Sun, 07 Mar 2021 22:09:25 +0000

When you start playing with Arduino boards, the standard programming language is the one Arduino provides. This language is extremely useful for getting started and can even be used in real projects. People who have used it for a while, though, notice a few limitations. You might also already be used to programming in Python. For this reason, developers have created MicroPython.

With MicroPython, you have all the basics of Python, with limitations imposed by the hardware you ultimately run it on. This article will not discuss these limitations. Hopefully, you have a clear picture of what a microcontroller can do. Most likely, you will find that it can do much more than you imagined before you started.

Some solutions

There is a multitude of ways to start programming an Arduino using Python. Before you start, think about whether you want to prepare a new Arduino program or talk to a running board. There are several libraries that create new Arduino programs, bypassing the standard programming system that Arduino supplies.

You have boards that already run Micropython; you can find these on their respective home pages.

You may want to create a Python program that talks to a standard microcontroller from your computer; for that, there are a few interface libraries for Python. On the board side, the well-known options are MicroPython and CircuitPython; these are ready-made distributions for running Python on special boards. You can compile them for other boards if you have the skills.
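For the "talk to a standard microcontroller" route, the usual mechanism is a serial connection. The sketch below separates the message-building step into a small testable helper; the one-line ASCII protocol, the port name, and the board-side firmware that would understand it are all assumptions for illustration, not an established standard.

```python
# Hypothetical one-line ASCII protocol for controlling a pin over serial.
# The framing ("SET <pin> <0|1>"), the port name, and the board-side
# firmware that would parse it are illustrative assumptions.
def frame_command(pin: int, state: bool) -> bytes:
    """Encode a 'set pin' command as one ASCII line."""
    return f"SET {pin} {1 if state else 0}\n".encode()

print(frame_command(13, True))   # b'SET 13 1\n'

# With the third-party pyserial package installed, sending it would look like:
#   import serial
#   with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as port:
#       port.write(frame_command(13, True))
```

Keeping the framing logic separate from the I/O makes it easy to unit-test on the desktop before a board is even connected.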

The Mu editor for MicroPython

A simple editor to use is Mu. This editor detects your board on the serial port, if you have one connected. If you do not, you can start working with regular Python instead. To choose, change the mode in the top left corner. Standard Python works fine for getting used to the editor.

This editor has a few IDE features, like code completion and highlighting, and you can start a REPL. These features all work even when connected directly to the board. To install the editor, look in your distribution's repository:

$ sudo apt install micropython mu-editor mu-editor-doc

These are all the tools you need with a board that already has Micropython on it. One simple code you can try is the common blinking of the LED on the board. To get to the hardware, like an LED, you need to import the library.

from pyb import LED
import time

led = LED(1)   # LED numbering varies between boards
state = False

while True:
    time.sleep(0.5)
    if not state:
        led.on()
        state = True
    else:
        led.off()
        state = False

Use the code above to try your new board. Note that the 'pyb' module varies from board to board; Adafruit boards use the 'machine' module instead. Take the time to learn your board's names and values from its documentation.

REPL – Read, Evaluate, Print, Loop

When using MicroPython, or any Python, you have a REPL available. This is a great way to test short snippets of code. In this case, you can use it to discover what modules are available. The help() function does a great job of guiding you through the basics of what you have available.

When you run help() without parameters, it gives you a list of options. After that, it is interactive: type in what you need to ask about, and it returns guidance on using it.

Use the REPL to find out which libraries the board supports. It is a slightly harder way to learn, but you get into the habit of using the built-in documentation. To truly learn, work through a few tutorials and then build something of your own on top of them.
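The same exploration habit works in standard Python, so you can practice it on your desktop; on a MicroPython board the module list will differ, and there help('modules') gives the full inventory.

```python
import time

# dir() lists the names a module exposes; it works in both CPython and
# MicroPython's REPL, so it is a quick way to discover what a board offers.
names = dir(time)
print('sleep' in names)   # True
```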

Boards running Micropython

The easiest way to start programming for Arduino using Python is to buy a board ready for it. The boards that exist on the market are impressive and come from many suppliers. The main libraries are CircuitPython and Micropython.

An impressive line of boards come from Adafruit, called Circuit Playground. These boards are round, which is odd. More importantly, they have 10 Neopixels onboard, and that is just the visual part. Several sensors are on the board, also included are two push buttons and a slide switch. The input/output pins are made for using alligator clips while still being available as capacitive touch buttons.

Seeed Studio also has a range of boards supporting CircuitPython, ranging from very small to very capable. The WiPy 2.0 is a tiny board that is ready to go, though the antenna kit is useful to have. The board sports an ESP32-based WiFi module, one RGB LED, and a reset switch. You get much less hardware, but the size is 42 mm x 20 mm x 3.5 mm, and you still have many pins on the board.

Simple projects to get you started

After you have made your blink program, you are certain to want to try something harder. Make sure you have something compelling that is challenging but solvable. Here are some suggestions.

Make a program that flashes one light at a steady pace. At the same time, make a button turn another lamp on and off. You will quickly see the limitations of a blocking delay like time.sleep()!
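One common way past a blocking delay is to poll a clock and act only when an interval has elapsed, so the button can be checked on every pass of the loop. This sketch isolates the timing decision in a testable function; the led, button, and lamp objects it would drive are hypothetical and shown only in comments.

```python
BLINK_INTERVAL = 0.5  # seconds between LED toggles

def should_toggle(now: float, last: float) -> bool:
    """Return True when the blink interval has elapsed since the last toggle."""
    return now - last >= BLINK_INTERVAL

# Sketch of a board-side main loop (led, button, and lamp are hypothetical
# board objects; time.monotonic() stands in for a MicroPython ticks function):
#   last = time.monotonic()
#   while True:
#       if should_toggle(time.monotonic(), last):
#           led.toggle()
#           last = time.monotonic()
#       lamp.value(button.value())   # the button is checked on every pass
```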

Make a MIDI controller.

Make a simple alarm system using an infrared sensor and some NeoPixels.

Conclusion

The best way to get started with MicroPython is to get a decent board that already supports MicroPython or CircuitPython and start trying out your ideas. Since the idea is to control other things, look for a package, or a kit, that contains a few sensors and a display or two.

Happy Hacking.

How to Write Gentoo Ebuilds https://linuxhint.com/write-gentoo-ebuilds/ Sun, 21 Feb 2021 19:58:28 +0000

If you do not have the Gentoo package that you desire, fear not! You can build your own! To do this, you will need some experience compiling software with the familiar Linux tools: make, gcc, and others. To create a Gentoo package, 'emake' is used to control and tune the process. Using these tools, you can create very slim packages that run quickly and reliably.

Ebuild Structure

To create your own ebuild, you must start with a correct *.ebuild file; it is the heart of the whole package. The ebuild file depends on many other files, much like make does. In fact, in most cases, your ebuild will depend on make, though that is your choice. The following is the tree of neovim:
/mnt/SW/projects/System/Gentoo/gentoo/app-editors/neovim
├── files
│ ├── neovim-0.4.3-gcc-10-fix.patch
│ ├── neovim-0.4.4-cmake_luaversion_patch
│ ├── neovim-0.4.4-cmake-release-type.patch
│ └── sysinit.vim
├── Manifest
├── metadata.xml
├── neovim-0.4.4-r100.ebuild
└── neovim-9999.ebuild

So, what do you use these files for in your application? The *.ebuild file is the obvious file. This file contains the SRC_URI, which directly points to the code. Other information in the file includes the description, the website, and further information necessary for compiling the package.

The Manifest file contains the hash that uniquely identifies the code.
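For reference, each Manifest line names one distributed file together with its size and checksums. The entry below only illustrates the shape of such a line; the size and hash fields are placeholders, not real values:

```
DIST neovim-0.4.4.tar.gz <size-in-bytes> BLAKE2B <hash> SHA512 <hash>
```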

The metadata.xml file contains the maintainer’s name and email address, the project name, and a few flags for compiling. The remote identity is also located in this file, like the GitHub repository for the upstream. The files directory contains any patches you may need and any special settings that you require. The above example shows a file with appropriate settings according to the Gentoo maintainers.

Inside the Ebuild File

The values inside the file are easy to understand, for the most part. The Description and Homepage are there to help users and developers. The EAPI number indicates which version of the ebuild API the file is written against. You also have the License, which is quite clear; match the License to the code for which you are building the ebuild.

Trickier is SLOT, which is used when you need to have several versions installed at once. SLOT points this build at the version series it supports. Most software uses the value 0, allowing only one version at a time.

KEYWORDS indicates the platforms on which your source code can compile. Common ones are amd64, x86, and possibly arm64; a full list is available on your Gentoo system. Note that if you want to contribute, you must set a tilde (~) in front of the architecture. This marks the code as untested, so make sure that the code is well tested before you remove the symbol. Preferably, have many users try the code before removing the tilde.

The IUSE variable lists the USE flags the package understands; these are the optional features you can toggle at compile time.

You also have the dependency variables, which come in three different types. The RDEPEND values are the dependencies needed while running the code, the BDEPEND values are needed on the build machine, and DEPEND covers what must be present at compile time. The package that you are trying to add to Gentoo will contain a file describing the necessary dependencies.

For simple packages, you do not need anything else. However, the specific package that you are working on will probably have some things that must be done before compiling the code. If this does not match what Gentoo developers have expected, you can set up your own.

Functions

In the ebuild file, the installer uses certain functions for each stage of the process. For example, patches are applied by the src_prepare() function before the build is configured.

The src_configure() function uses econf, which wraps ./configure; helpers such as 'use_enable' translate USE flags into configure switches, and you can pass further arguments to ./configure through econf. Unpacking the sources is normally handled with the unpack command. As you can see, these functions are named according to their make-style equivalents, and many times they pass arguments along.

The src_install() function performs the same function that make install would do in a C/C++ build. However, it does contain many options that you can look up in the reference document.
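As a sketch of what that typically looks like, assuming a plain Makefile-based upstream (emake, dodoc, and the ${D} image directory are standard ebuild facilities, but the README.md file name is a placeholder):

```
src_install() {
    # Install into the image directory ${D}, never directly into /
    emake DESTDIR="${D}" install
    # Install documentation with the dodoc helper
    dodoc README.md
}
```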

Most functions are there for when you have special case software. You will probably start digging through these functions when you try implementing your first package.

Example: SimulIDE Package File

Here, we present a file that was created for the SimulIDE package. The package requires a Qt5 development environment, so you will need to add that in your ebuild file. In the following image, you can see the RDEPEND values reflecting this idea. The libraries are already contained within the Gentoo repositories, which makes it easy to point to.

# Copyright 2021 Mats Tage Axelsson
# Distributed under the terms of the GNU General Public License v3

EAPI=7

DESCRIPTION="SimulIDE simulates your circuit designs, it includes Arduino emulation."
HOMEPAGE="https://www.simulide.com/p/home.html"
SRC_URI="https://mailfence.com/pub/docs/santigoro/web/SimulIDE_0.4.14/simulide_0.4.14-SR4_Sources.tar.gz"

LICENSE="GPL-3"
SLOT="0"
KEYWORDS="~x86 ~amd64"

RDEPEND="dev-qt/qtsvg
         dev-qt/qtxml
         dev-qt/qtscript
         dev-qt/qtwidgets
         dev-qt/qtconcurrent
         dev-qt/qtserialport
         dev-qt/qtmultimedia"
DEPEND="${RDEPEND}
  dev-libs/libelf
  dev-embedded/avr-libc"

src_prepare() {
  unpack simulide_0.4.14-SR4_Sources.tar.gz
}

src_configure() {
  econf --with-popt
}

In the src_prepare() function, you can see that the package is unpacked before use.

Overlay

When you have trimmed away and cleaned up all your mistakes, you may want to share your package. Overlays are extra package trees that sit on top of the main Gentoo tree, letting you use experimental software on your main install, and the layman command was created to manage them.

Conclusion

Creating new packages for Gentoo is an undertaking that may stretch your abilities. Even so, if you have built many packages with make and the gcc suite of tools before, you should be able to pick this process up rather quickly. Also, be sure to contribute back to the community as much as you can.

Gentoo vs. Ubuntu Linux Comparison https://linuxhint.com/gentoo-vs-ubuntu-linux-comparison/ Thu, 18 Feb 2021 16:14:17 +0000

Habit is the enemy of change. If you have been using Linux for a while, you may have grown used to your current distribution. If your situation and computing needs change, you should think your choice over. Even if they do not, you might consider learning a new system for the sake of understanding. Knowledge is a very light burden to bear.

Gentoo in short

The Gentoo system is aimed at savvier users and, as such, is inconvenient to start with. For example, the installer is command-line only, and you compile the software that you install. However, there are some exceptions to this.

When you choose Gentoo, you must be prepared for the command line. You also get no default desktop. If you want a little more convenience, use a derivative; you can find a comparison of derivatives in this article.

While using a command-line installer seems to be inconvenient, it is advantageous once you get used to it. The Gentoo command line package manager, in particular, has a surprising number of features, including news! With this, you can read the latest news about Gentoo from the command line.

If you have a problem finding your application, you can add support for Flatpak with a package. If you want to use AppImage, you will need the required libfuse, which Gentoo delivers as sys-fs/fuse.

Ubuntu in short

Ubuntu is a polished derivative of Debian, and the most popular distribution, delivered by Canonical. Its packaging also makes it easy for others to build derivatives. You will always find references to Ubuntu when dealing with Linux.

You are presented with a great graphical installer when you choose Ubuntu. Before you start, remember to choose your favorite desktop environment: the default is GNOME; if you want KDE, you need to choose Kubuntu. If you have other preferences, a variety of presets is available. If you start with the wrong one, you will end up with many unnecessary packages.

Adding software can be done in many ways. The standard way is, of course, their own repositories, which use Debian-format packages. Secondly, they have chosen and designed snap as the secondary default. Aside from that, you can also use Flatpak and AppImage.

Basic Differences in Philosophy

Ubuntu uses the Debian package system, aimed mainly at binary packages. Though source code is available for most packages, installing binaries is fast, and hopping between desktop environments is less risky.

Gentoo aims to deliver the source code and let the installer compile it for the platform you are using or plan to use. Gentoo tries, and usually succeeds, to create a system that is highly optimized for your particular machine. You can pick the specific CPU model if you wish, and with the USE variable you can restrict the binary to support only your specific desktop. Either system can, in principle, compile any package and install it. Nevertheless, the two philosophies lead to very different default behavior, which has led to many flame wars.
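The USE variable mentioned above lives in /etc/portage/make.conf. A hedged example follows; the exact flags depend entirely on your hardware and desktop choice, so treat these values as placeholders:

```
# /etc/portage/make.conf (illustrative values)
COMMON_FLAGS="-march=native -O2 -pipe"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
# Enable X and Wayland support, drop KDE- and GNOME-specific extras
USE="X wayland -kde -gnome"
```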

Package Differences

The packages in Gentoo contain links to the upstream sources and seldom include any source code themselves. The bulk of the package system helps you set the compilation options and handle patches.

Packages in Ubuntu, in contrast, contain the entire binary or source code. Dependencies are handled by both systems, though in Gentoo you can use a mechanism called slots to have several versions installed. In Ubuntu, you must jump through hoops to get several versions, though for applications you can use an AppImage.

Usage and Install Differences

The main objective of Gentoo is to optimize the system on every installation. As mentioned before, this leads to a long and often slow installation procedure. Proponents claim that it results in a faster and more stable system, which is probably true as long as you take the time to do the job properly at install time. For small systems, it is a good idea to run your installation with a distributed compiler, like distcc; this speeds up compilation while still producing an optimized system. For many users, this is a nuisance. Nonetheless, there are special distributions and procedures for the popular Raspberry Pi, including an optimized Stage3 file, a basic system that can run on the target machines. The documented procedure shows you how to download the image and the Stage3 file. On top of that, you get a short introduction to cross-compiling with the Gentoo package 'sys-devel/crossdev'. That package supports ARMv6, so you can install on the original Raspberry Pi and, crucially, the Pi Zero W.
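Enabling distcc for Portage is also done in /etc/portage/make.conf, plus a host list for the compile helpers. The values below are illustrative; job counts and host addresses depend on your network:

```
# /etc/portage/make.conf (illustrative values)
FEATURES="distcc"
MAKEOPTS="-j8"   # roughly the total of local plus remote CPU cores

# /etc/distcc/hosts would then list the helper machines, for example:
#   localhost 192.168.1.10
```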

For Ubuntu, you can choose and install images from their website; they are ready-made systems aimed at being servers or desktops of your choice. You can also use the source packages and compile things yourself, which is less complicated than doing the same in Gentoo.

On larger systems, like a laptop, you must measure performance or have exceptional demands to notice a significant difference. Many lovers of Gentoo choose a suitable derivative and stick with that. When they feel that they need an improvement, they recompile the relevant parts of the system.

Who is the winner?

This is a very subjective question, and the answer may differ even for the same user under different circumstances. Ubuntu wins on ease of getting started, breadth of packages, and convenience. Gentoo has the advantage of delivering the best performance, just as its goal states. You must decide what your priorities are and go for it. The number one consideration is which distribution matches the needs of your system. Many times, Gentoo will win, but only after you have put in the effort for the right reasons.

Conclusion

For many users, choosing Gentoo is a giant leap, and many never take it. That can be a mistake, though, if you have demanding reasons for how you use your computer or system of computers.

How to upgrade Gentoo kernel https://linuxhint.com/upgrade-gentoo-kernel/ Mon, 08 Feb 2021 14:41:29 +0000

Gentoo is a rolling release, meaning that new updates are available at regular intervals, but there are no major releases. The idea behind this is never to have incompatible parts of the system because they belong to different major releases; you upgrade as you need to. In many other distributions, new kernels come with a new release. In Gentoo, you get a new kernel when it has been tested. You can, of course, take the latest kernel out there and run that, with the caveat that you may be quite lonely on the forums if you have problems.

Existing kernel

You may not want to build your own kernel right away. For the first boot, a standard kernel may do; this way, you get the system running, and you can boot it at any time and set everything up correctly. Compiling a kernel can also take time, so using an existing kernel can be useful. To do this, copy the kernel and the modules from the install CD to the correct locations. On the newer CDs, the files are in the boot directory and are usually called 'gentoo', so you should find them easily. One caveat is that you must make sure you take both the kernel and the '.igz' initramfs file. Use the file command to identify them.

$ file /mnt/cdrom/boot/*
/mnt/cdrom/boot/EFI:               directory
/mnt/cdrom/boot/gentoo:            Linux kernel x86 boot executable bzImage,
version 5.4.80-gentoo-r1-x86_64 (root@catalyst) #1 SMP Sun Jan 17 23:41:47 UTC
2021, RO-rootFS, swap_dev 0x3, Normal VGA
/mnt/cdrom/boot/gentoo-config:     Linux make config build file, ASCII text
/mnt/cdrom/boot/gentoo.igz:        XZ compressed data
/mnt/cdrom/boot/System-gentoo.map: ASCII text

As you can see, the files are clearly marked with this method so you know which one to use. Next, you need to copy modules. The modules are in your lib/modules directory, one per kernel you run.

$ cp -R /lib/modules/5.8.0-generic /mnt/gentoo/lib/modules

For the directory, you can also use ‘uname -r’ to get the name.

Install tools

Gentoo comes with tools for many advanced tasks. When compiling a kernel, you usually use 'make config', which you can also use inside Gentoo. However, you also have a Gentoo tool: genkernel. It can compile your kernel automatically with given standard settings. Be aware that you can also install a kernel just by using the emerge packaging tool; you need to pick a kernel package that suits your platform. You can see a few choices below.

$ emerge --ask sys-kernel/installkernel-gentoo

$ emerge --ask sys-kernel/installkernel-systemd-boot

One of the tools to compile your kernel, after installing the sources, is 'genkernel'.

$ genkernel

The genkernel tool runs all the scripts you need to upgrade the kernel after downloading new sources.

Using Source Code

This requires more compile power, but it is one of the reasons you chose Gentoo. In fact, most documentation assumes you want to compile your own kernel and treats binary kernels as an alternative. The big change here happened in September 2020, when the Gentoo developers released pre-built kernels. You have many kernel packages to choose from, but the procedure is the same for all of them; you can explore the rest after you are done. Here, you pick the newest kernel sources from Gentoo.

$ emerge --ask --update --deep --with-bdeps=y --newuse sys-kernel/gentoo-sources

This implies that you are choosing to upgrade only the kernel; a full system upgrade will often pull in the sources for a new kernel anyway. After this, you may have several kernel source trees installed, so select the one you want to use.

$ eselect kernel list
$ eselect kernel set 3

The system has now changed the link to /usr/src/linux. All tools will use that symbolic link. You should then copy the old config file, so most of your new kernel has the same values. The old file is available in many places; one is in your running system.

$ zcat /proc/config.gz > /usr/src/linux/.config

Now, you can start the kernel configuration. You do this with any of the standard targets: 'make config', 'make menuconfig', and so on. However, Gentoo has an ace up its sleeve: genkernel! This tool takes all the steps and does the whole process for you. Mind you, to optimise, you need to add a few options.

$ genkernel --oldconfig --menuconfig

You can run without any parameters, but then you have no choices about your kernel configuration. This procedure is enough for creating and installing a new kernel. Setting the parameters is a big challenge.

Using pre-built kernels

Are you sure you want to compile your own kernel? You have several choices for getting a binary kernel. If you set the value below, installing debian-sources will install the binary kernel, saving you the hassle of compiling your own.

$ echo "sys-kernel/debian-sources binary" >> /etc/portage/package.use

$ emerge debian-sources

You can also get the newest stable kernel directly from the Gentoo developers. To install or upgrade it, run emerge:

$ emerge --ask sys-kernel/gentoo-kernel-bin

More kernels, including the distribution kernels, are also available.

Using unsupported Source Code

You may have your own changes to the kernel code. To handle this situation, you want to turn off the automatic handling of that code. To make sure that Portage knows which dependencies need to be handled, you tell it that you installed the package yourself and that it should not be updated automatically.

The file you fill in to inform Portage is /etc/portage/profile/package.provided:

#Marking gentoo-sources-4.9.16 as manually installed
sys-kernel/gentoo-sources-4.9.16

This way, you can use any code and do what you want without having the scripts change things around unexpectedly.

Intel Micro Code

When you have finished compiling, you should make sure you have the microcode for your processor. These packages are for the Intel processor.

$ emerge intel-microcode iucode_tool

Skip this if you have an AMD processor.

Grub

You must update GRUB the way you do on other distributions, with a twist.

$ grub-install --efi-directory=/boot /dev/vda

The EFI options are needed when you do not mount your boot partition at the default '/boot/efi'. Special for Gentoo-based systems is this little helper, which will set up all your boot-related configuration.

$ ego boot update

Check that it found the kernel and the initramfs; the command lists all successes and failures. Make sure it all works.

Removing sources

Since you install the sources with the package manager, you can also use the package manager to remove them and clean the tree between compiles.

To clean your tree:

$ emerge --ask --depclean gentoo-sources

To remove a certain kernel:

$ emerge --ask --noreplace gentoo-sources:5.4.83

Be careful if you are about to remove the kernel of the current stable branch!

Other choices

A fairly recent addition to Gentoo is the "distribution kernels". There are three available; once you have chosen one, the system will upgrade the kernel during regular system upgrades.

Conclusion

Gentoo was built for the tinkerer from the beginning, which makes it a powerful tool for optimisation. Nowadays, you can let the distribution handle the kernel for you. You will miss out on the fine-tuning, but you can dig into that at any time by adding the sources with the standard packages. All in all, Gentoo is becoming accessible to more people without sacrificing tweaking capability. Way to go, Gentoo!

Best Gentoo Linux Derivatives https://linuxhint.com/best-gentoo-linux-derivatives/ Sun, 07 Feb 2021 18:27:23 +0000

Getting started with Gentoo requires some knowledge of Linux's inner workings. This can be time-consuming and frustrating, especially if you have never done it or have relied on automated install methods for a long time. With that said, it is worthwhile finding out more about your system. You could discover many interesting points that help your private computing or even your career. Many corporations use the Gentoo base to create an internal distribution; one example is Chromium OS, and many others are specialized versions for internal needs.

Why derivatives?

When the designers made Gentoo, they decided to give the user full control. This is great, but it means that you have to do a lot of heavy lifting. The settings and tweaks are not the most obvious until you have read up on processors and many other parts of the system.

If you pick one of the derivatives, you can cut the learning curve and still have the advantage of tweaking your system for your hardware. When people create derivatives, they have a special need. When this need matches yours, you have a derivative distribution where most work has already been done. You can, of course, still tweak, and hopefully, contribute back to the community.

Calculate Linux

Calculate Linux comes in many flavors, including Desktop, Server, and cloud editions. The Desktop comes in editions for Cinnamon, KDE, LXQt, MATE, and Xfce. You can also get Scratch, which has only the X server, or go the other way and get the Xfce Edition Scientific. As you can see, there is a great choice of desktops, and since it is Gentoo-compatible, you can also switch desktops afterwards.

Using the Gentoo Portage system is complex and needs a lot of practice to master. You may end up with a very fast machine, but setting it up will not be trivial. Calculate has a graphical setup feature that shows all options, and you can choose any edition you want from that installer. You need to know which partitions you want, whether you want to use existing ones, and so on. Once you have made your choices, you wait for the software to install and compile. The installer is reminiscent of the old days, when nothing was assumed and you had to know what you were doing. With that said, if you know these things, the install is all done for you, and updates are handled automatically.

Calculate also comes as a server and as a cloud instance using LXC, and you can create a server to handle all the users on your network. The server is an LDAP server specially set up for this distribution; you can also use it for other operating systems. The glory of open protocols!

Pentoo Linux

As you may guess from the name, Pentoo Linux is a specialised distribution for penetration testing. You are supposed to put it on a USB stick, and the design even lets you save your changes back to the stick. This is not advanced, but few people use a USB stick that way. When installed, it comes with the Xfce4 desktop to stay lean. Other tools of note are an OpenCL cracking library and a kernel prepared for attacking WiFi connections.

http://www.pentoo.ch/download/

Sabayon Linux

This distribution looks like the others when it comes to included packages: you get a full set of office tools and the browsers you may need, plus several packages for maintaining the system and software. The ISO comes with a nice installer, which needs a lot of memory to use; the install option from GRUB is a much faster alternative, though you get fewer options than from the live environment. You can let the installer partition the disk for you or roll your own. Tested packages you can pick up include Kodi for playing videos, many server choices, and a home desktop system. As with many Gentoo distributions, you also have the option to run a cloud edition, available as Docker, LXD/LXC, and Vagrant images.

Funtoo

This is the one you must consider. Why? Because its founder also created Gentoo! That does not mean it meets your needs, but it does mean it is fully compatible with Gentoo. To install it, you use the Gentoo install ISO and download the stage3 file that suits your system and needs, so you are stuck with the same install procedure as Gentoo. The only difference is that you can get many different stage3 files, and you can choose a desktop environment in this step.

Again, as with the other distributions, you also have cloud versions. LXD is the maintainer's favourite, and you also have Docker images; the way to start and use them is well documented on their website. This distribution has the advantage of many well-tested versions that you can download as stage3 files. The other distributions have great installers that save you more hassle; not that you would choose Gentoo if avoiding hassle were the goal.

Conclusion

Choosing a derivative Gentoo distribution will help you get started with Gentoo and its package management system more quickly. This is great if you have just a few things you want to tweak, and it still leaves you the option to perfect your settings in the future. In general, shop around for a distribution that matches your needs, and look for a community around that distribution with which you have much in common. For active computing, you need an active community.

Gentoo Linux Installation Tutorial https://linuxhint.com/gentoo-linux-installation/ Sat, 30 Jan 2021 16:21:13 +0000

The installation procedure for Gentoo involves more steps than other distributions. This is intentional, so that you can control the steps more precisely. Using this strategy, you can get started with less than 4 GiB of disk and as little as 256 MiB of memory (512 MiB if you want to use the LiveDVD). You also have the opportunity to tweak your system to be as efficient as you can make it. Your first try will be slower if you are not well versed in Linux and all the intricate details, but you can end up with a very lean system.

The media choices

Where do you start? As long as you have regular hardware, and often even odd hardware, you should use the minimal installation CD. This method is also known as the Stage3 method. If all goes well, you will never need to bother with Stage1 and Stage2, but they are there for extreme install situations.

  • Minimal installation CD
  • The occasional Gentoo LiveDVD
  • Tarballs for installing exotic hardware or situations.

The tarballs

You can download compressed files that have a file system with files for the init system and basic packages. Pick one that suits your needs. If you are uncertain, take the ‘systemd’ one. This is the most common.

The other stage files are for advanced users. Developers mostly use the Stage1 and Stage2 files; if you do need them, you already know most of Gentoo.

First Boot

Download the minimal CD and burn it to a USB stick. You should consider attaching the ISO file to a virtual machine and practicing from there! The files are on the Gentoo site.

When the minimal CD boots, it gives you 15 seconds to choose a kernel. The intention is to handle situations where the framebuffer does not work or some other odd boot problem occurs. If you do nothing, the system falls back to booting from the internal disk. If you have problems, you need to specify kernel parameters like those below.

boot: gentoo scandelay

This takes the ‘gentoo’ kernel and sends the ‘scandelay’ option. Other options are a long list that you should investigate before you start, though this is not needed on most hardware.

You can also add users at this stage. These users only exist in the install environment, so this is seldom useful.

Network

To get started, you can do everything on the console, but working from a terminal on another machine has its advantages. If you want to do this, start sshd and set a password for the root user. Start by checking your IP address.

$ ip a

Then start sshd:

$ /etc/init.d/sshd start

Then set the password for the root user, or create a new temporary user.

$ passwd

You get a long printout that suggests a safe password, which is handy if you are low on energy or imagination. Now that you have both, you can ssh into your install system. One warning: every time you start over from the CD, the ssh host key is re-created! The key saved on your other system needs to be erased.
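Erasing that stale key is a one-liner with ssh-keygen; a small sketch, demonstrated on a scratch file instead of your real ~/.ssh/known_hosts (the IP address is an example):

```shell
# The install CD generates a fresh host key on every boot, so the key
# your workstation saved for that IP goes stale and ssh refuses to
# connect. Drop the old entry; demonstrated on a scratch file here.
kh=$(mktemp)
echo '192.168.1.50 ssh-ed25519 AAAAC3Nza_example_key' > "$kh"
ssh-keygen -R 192.168.1.50 -f "$kh"
# for real use: ssh-keygen -R <install-system-ip>
```

In real use, you drop the '-f' flag so ssh-keygen edits ~/.ssh/known_hosts directly.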

Preparing the disks

The disks are prepared as with many other distributions. To make it easier to follow the rest of the documentation, name your partitions as per the Gentoo standard. For a system that you will actually use, you should have a plan here; preferably one with sizes for your '/home' as well as the '/'. The necessary parts are the '/' and the '/boot'. For UEFI, you should set aside 350 megabytes of the disk for the boot files. Use your favorite partition editor. For the UEFI partition, use 'mkfs.vfat -F 32 /dev/sda1', and for the main partition, use 'mkfs.ext4 /dev/sda2'.

Mounting the main disks

You should have at least one boot partition of 350 MiB and one that will host your system. A swap partition is also good to have. You can mount the root partition with the standard command.

$ mount /dev/sda2 /mnt/gentoo

There is no reason to mount the ‘boot’ disk until you enter the chroot environment later. You can also mount user disks or partitions, but only if you are making the final system.

Downloading the tarballs

You can download the tarballs before you start or during the install. Alternatively, the install environment has the 'links' browser, so you can do it from the terminal. Download the files to the Gentoo disk.

$ cd /mnt/gentoo
$ links www.gentoo.org/downloads

Once you have the files on your disk, unpack them with the tar command.

$ tar -xvf stage3-amd64-systemd-20210120T214504Z.tar.xz
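Note that plain '-xvf' does not preserve everything; the Gentoo handbook's fuller invocation is 'tar xpvf stage3-*.tar.xz --xattrs-include='*.*' --numeric-owner', since a stage3 contains setuid files and system-account ownership. A sketch of what '-p' and '--numeric-owner' do, on a small scratch archive:

```shell
# -p restores the recorded permission bits; --numeric-owner keeps raw
# UID/GID numbers instead of remapping names through /etc/passwd.
mkdir -p /tmp/stage-src
echo data > /tmp/stage-src/secret
chmod 600 /tmp/stage-src/secret
tar -C /tmp -cf /tmp/mini-stage.tar stage-src
mkdir -p /tmp/stage-dst
tar -C /tmp/stage-dst -xpf /tmp/mini-stage.tar --numeric-owner
stat -c '%a' /tmp/stage-dst/stage-src/secret   # prints 600
```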

Install base system

You actually have a very basic and generic system already; that is what the Stage3 file is all about. For example, your make.conf file is there with standard settings. It needs to have a mirror, though; set one using the mirrorselect tool.

$ mirrorselect -i -o >> /mnt/gentoo/etc/portage/make.conf

It adds a value like GENTOO_MIRRORS="http://ftp.ntua.gr/pub/linux/gentoo/ https://mirror.wheel.sk/gentoo", with your chosen mirrors, naturally. You also have an automatic option where you can specify protocol or speed constraints. There is also the deep option, where the tool actually downloads a 100 KiB file to measure speed.

You also need an ebuild repository. This repository keeps track of what is available for upgrades. You can change this, which you will do when you look for a derivative of Gentoo. You can take the default from the newly created directory structure.

$ cd /mnt/gentoo
$ mkdir --parents etc/portage/repos.conf
$ cp usr/share/portage/config/repos.conf etc/portage/repos.conf/gentoo.conf

Usually, you do not change this. The case where you do need to change it is when you have your own mirror. The commands below are directly from the gentoo.org website. They set up the environment for installing.

$ cp --dereference /etc/resolv.conf /mnt/gentoo/etc
$ mount --types proc /proc /mnt/gentoo/proc
$ mount --rbind /sys /mnt/gentoo/sys
$ mount --make-rslave /mnt/gentoo/sys
$ mount --rbind /dev /mnt/gentoo/dev
$ mount --make-rslave /mnt/gentoo/dev

Now, you are prepared to move into the environment you are creating.

$ chroot /mnt/gentoo /bin/bash
$ source /etc/profile
$ export PS1="(chroot) ${PS1}"

Inside the environment, you also need to mount the boot partition.

$ mount /dev/sda1 /boot

Which partition this is should be clear from the earlier steps. On a UEFI install, you have created the ESP, where you store all boot information. Next, you download the repository information into the directory defined by your repos.conf file.

$ emerge-webrsync

Here you see the first mention of emerge. This command handles all your upgrades and installations. The next vital command you need to know about is eselect. With eselect, you read the latest news about Portage:

$ eselect news read

Or choose your profile:

$ eselect profile list
$ eselect profile set 3

You choose the number from the list or use the entire name you see in the list. Now you MUST update the @world set to ensure the system is updated according to your profile, not the stage3 you used.

$ emerge --ask --verbose --update --deep --newuse @world

The most powerful variable in Gentoo! The USE variable sets what support is compiled into your programs. Used correctly, you can make your system much leaner than with other methods. You can, for example, stop supporting KDE if you are certain you will not run it. Gentoo will then compile all programs without that support, making the binaries smaller. If you later decide to switch to KDE, you have to change the flag and re-compile all affected applications.

USE="-kde gnome qt5 alsa"

All flags have default values, so what you put in USE is a change from the defaults. The first time you build, it is probably better just to get the system running.
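As a sketch, these lines would sit in /etc/portage/make.conf; the flag and value choices here are purely illustrative, not a recommendation:

```
# /etc/portage/make.conf -- the values here are illustrative only
COMMON_FLAGS="-O2 -pipe"
# build without KDE support, with GNOME, Qt5 and ALSA support
USE="-kde gnome qt5 alsa"
# parallel build jobs; match your CPU count
MAKEOPTS="-j4"
```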

Time zone and locales

Next, set the time zone. Fill in the values in the files.

$ ls /usr/share/zoneinfo
$ echo "Europe/Athens" > /etc/timezone

Use emerge to configure the time correctly.

$ emerge --config sys-libs/timezone-data

Add data to your /etc/locale.gen file:

en_GB ISO-8859-1

en_GB.UTF-8 UTF-8

Configure using the file you just created.

$ locale-gen

This is the standard for all installs in the Linux world. This part is Gentoo-specific:

$ eselect locale list
$ eselect locale set 3

$ env-update && source /etc/profile && export PS1="(chroot) ${PS1}"

Automatic kernel configuration

Before you run the script, you must add your boot partition in the /etc/fstab file.

/dev/sda1      /boot    vfat    defaults      0  2

In Gentoo, you have the freedom to compile your own kernel for each machine. A better way to start is to get a binary kernel that suits your needs. When you feel ready to get into the complexities of kernel compilation, do that on your running system. To pick a kernel, run emerge as always:

$ emerge --ask sys-kernel/gentoo-kernel-bin

The emerge command will install your kernel and set everything up!

Time to configure the system

Create the networking files.

/etc/conf.d/net

config_eth0="dhcp"

modules="ifconfig"

/etc/conf.d/hostname

hostname="Gentoo"

$ emerge --ask net-misc/dhcpcd

This installs the dhcpcd program for handling DHCP. The default for Gentoo is DHCP.

With systemd, you enable networking for the interface as a service.

$ systemctl --now enable net@enp1s0.service

Before you can boot into the new system, you need to have your boot loader installed. Here is how you choose GRUB2.

$ emerge --ask sys-boot/grub:2

$ grub-install /dev/sda --efi-directory=/boot

$ grub-mkconfig -o /boot/grub/grub.cfg

Now, you need to update your /etc/fstab file for the live system.

/etc/fstab

/dev/sda1               /boot           vfat            noauto,noatime  1 2

/dev/sda2               /               ext4            noatime         0 1

The ‘/dev/sda’ numbers will differ depending on your partitioning scheme. You can also use unique UUID numbers; you can figure those out using the ‘blkid’ command.
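A sketch of the same fstab written with UUIDs, assuming a FAT-formatted ESP; the UUID values below are made-up placeholders, to be replaced with what ‘blkid’ prints for your partitions:

```
# /etc/fstab with UUIDs -- these UUID values are made-up placeholders;
# substitute the output of 'blkid /dev/sda1 /dev/sda2'
UUID=A1B2-C3D4                              /boot  vfat  noauto,noatime  1 2
UUID=0f3a9e42-7c1d-4b8e-9a2f-5d6e7f8a9b0c   /      ext4  noatime         0 1
```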

Restart into your new system

You should now be able to boot into your live system. Remove the install media and try it out. If you have missed anything, you can always start over with the install media. Most problems are in the details, and since you already have all the files downloaded, you can redo only part of the installation. In the installed system, you still have 'emerge' available, and that is the tool you use for your upgrades and for tweaking the system.

Conclusion

Gentoo does not have its own installer, which means you need to prepare for what you want to achieve. You can do this by choosing the sizes of your disks and investigating any specific needs of your system. If you want to get started quickly, you should choose a derivative and tweak it from there until you feel certain that you can handle all the details of a Linux system's initial setup.

]]>
GPT vs. MBR Booting https://linuxhint.com/gpt-vs-mbr-booting/ Sun, 24 Jan 2021 05:15:01 +0000 https://linuxhint.com/?p=87343 Most of the time, we let our computers' boot just happen, but sometimes we need to control it. One of those times is when you want to dual boot. The way your disk is organized affects what you need to do and think about. The way computers have traditionally booted is by using the Master Boot Record (MBR). That is the old way, but you will still see partitioning software give you the option to use it. GPT means GUID Partition Table; it was introduced to address BIOS limitations, one being the size of disk it can address. To use GPT for booting, you must have a UEFI-based computer. In 2021, you do! Just watch out for decades-old hardware if you are a tinkerer. Note that you can still keep using MBR if you wish to do so.

The standards in your start-up

Let’s make sure we know which standard does what:

BIOS checks your hardware before it looks for the disk and the MBR. The MBR is a section at the physical beginning of the disk, and only at that beginning. So BIOS looks for the MBR, which in turn points to the operating system.

UEFI does the same job as the BIOS, but instead of pointing to a specific address on the disk, it searches for your ESP (EFI System Partition). The ESP is the partition where you have all the files that run your boot manager. You can point to any *.efi file; these files are executable and most commonly run GRUB.

The interesting part is that UEFI can also point to your MBR partitioned disk. This was necessary since many systems only had those disks and needed to stick with them for a few generations. This means that you can still choose to partition your disk using MBR. You will also have no problems doing this unless your disk is over 2.2 terabytes.

Using GPT on your disk has many advantages, though, and the added complexity is very small. A final detail that you can add to your disk is the PMBR. The PMBR will act as the MBR when the hardware cannot handle it. It is only a backward compatibility issue.

How do I use this?

This is interesting to know when you install a new distribution. Most distributions have built-in partitioning, but some do not. When you have finished the installation process, you may still need to partition new disks; hence, you should know the difference between the partitioning standards. If you have no particular demands, you should use GPT and whatever standard the distribution suggests.

Reasons to choose GPT over MBR

MBR is the simplest way to partition your drive, but don't make that your reason for choosing it! Even compatibility is usually not a reason, since your partitioning software will create the PMBR mentioned earlier. You would be forced to have at least a PMBR on any USB drive that you plan to use on really old hardware. For any hard disk that you install in a machine with UEFI, you should use GPT. The reasons are many. Your disk's size is not your main concern; instead, you have many features that speak for GPT.

One feature is that you can have as many partitions as your OS allows. The initial limitation is usually 128 partitions, but the standard allows many more. If you need more partitions, you have probably chosen the wrong strategy and should think again. The second feature you should appreciate is that the table is in two places on the disk. On an MBR disk, you have the table on the first sector and nowhere else! Using GPT, you have the table in two places: the beginning and the end of the disk. On top of that, it is really simple to make a backup copy of the ESP to external media. GPT also uses a CRC to check that the partition table is healthy. This can give you ample warning that one of the copies is corrupted; in that case, the system uses the second copy and boots as usual. If this is your situation, start gdisk with '/dev/sdX', type 'v' to verify your disk, and then 'w' to write. You will end up with both tables in a good state. WARNING: If you have physical problems with the disk, you may end up with an un-bootable disk. Keep backups!

Moving from MBR to GPT

Since you most likely want to use GPT, there is a way to move from MBR to GPT. You can usually achieve this without re-writing the entire disk, though you should keep backups!

The earlier mentioned 'gdisk' utility can do it for you. It is even simpler to use 'cgdisk', where the partitions are listed with options at the bottom. It looks and works almost the same as 'cfdisk'. When you start 'cgdisk' on an MBR disk, you get warnings that it is an MBR disk and that 'gdisk' will convert it. This happens in memory, and you can back out at any time. When you have verified that the changes are good, cross your fingers and write to disk. If you have a decent and healthy disk, you should end up with a GPT disk. This can fail, since some programs that create MBR disks do not align partitions correctly, and 'gdisk' will not recover your disk.

Conclusion

In a current system, using MBR is usually unnecessary. If you have very old hardware, you may have some use for it, but with hardware newer than 2007, you are close to guaranteed to have support for GPT. With GPT being more robust and secure, you should use it except in extremely rare cases. Have fun with your portable media, and if you can still keep a BIOS machine running: kudos! That is an achievement in itself!

]]>
How Do I Change UEFI Settings? https://linuxhint.com/change-uefi-settings/ Tue, 19 Jan 2021 20:00:24 +0000 https://linuxhint.com/?p=86490

When you are using Linux, of any distribution, you sometimes need to look at settings for the UEFI. The reasons vary; you may have a dual-boot system and cannot find the other boot option, maybe you want to have it boot securely, or, in some cases, you want to turn secure boot off so you can boot anything.

For secure boot, you need to use the mokutil command. This manages the keys that are available on the system.

Tools

efibootmgr

The most obvious and simplest tool to grasp is efibootmgr. Using it, you can work with the different points where you want the boot to continue. With UEFI, it is much more flexible to create options for how you boot. With this small, nifty tool, you can change, add, and remove boot entries. The boot entries point the boot process to where it needs to go.

The efibootmgr is available for most distributions as a binary, so install it with your distribution's package manager. Once it is installed, you need to run it as root. As you should understand, you may render your system impossible to boot, so be careful. If you run the command without parameters, you get a simple list of the current entries.

$ sudo efibootmgr

The list in the picture is very short; dual-boot systems will have many more entries. Since your system probably has more entries, you may want to choose another entry to start. This is done easily enough.

$ sudo efibootmgr -n 000C

This is intended for experiments; the '-n' means 'set BootNext'. This sets what will boot the next time you reboot; it does not change what boots first from then on. If you have added something new, you should do this to try it out. If the boot goes through the way you wished, make it permanent.

$ sudo efibootmgr -o 000C,000B

The above command changes the permanent boot order. You do not have to type all the zeros; 'C,B' would also have worked. To create a boot entry:

$ sudo efibootmgr -c

Running the command without more switches assumes that you have your ESP on /dev/sda1 and that it is mounted at /boot/efi. You can also set up the boot to be on another disk. Below is an example.

$ sudo efibootmgr -c -l \\EFI\\refind\\refind_x64.efi -L rEFInd -d /dev/sdc

The '-c' switch creates the entry and activates it as the first boot entry. The '-l' parameter sets where the loader file is; this is relative to the ESP partition, usually mounted at '/boot/efi'. The '-L' parameter sets the label. The '-d' parameter points to the drive you want to use; the default is /dev/sda. Did it go well? If not, you can deactivate and activate the boot entry using '-A' and '-a', respectively.

$ sudo efibootmgr -A -b C
$ sudo efibootmgr -a -b C

The parameter points to Boot000C; as you can see, you can drop the leading zeros in the entry number. If you have many disks, the output looks a little more complex. Use the verbose option to see whether the entries are on many disks.

$ efibootmgr -v

BootNext: 000C

BootCurrent: 000B

Timeout: 0 seconds

BootOrder: 0001,0000,000B,000C

Boot0000* rEFInd Boot Manager   HD(2,GPT,439e77ad-82ea-464d-801d-3d5a3d4b7cd4,0xfa000,0x96000)/File(\EFI\refind\refind_x64.efi)

Boot0001* rEFInd        HD(1,GPT,c85dcbd6-880b-f74d-8dac-0504f1dd291e,0x800,0xaf000)/File(\EFI\refind\refind_x64.efi)

Boot000B* ubuntu        HD(2,GPT,439e77ad-82ea-464d-801d-3d5a3d4b7cd4,0xfa000,0x96000)/File(\EFI\UBUNTU\GRUBX64.EFI)

Boot000C* UEFI OS       HD(2,GPT,439e77ad-82ea-464d-801d-3d5a3d4b7cd4,0xfa000,0x96000)/File(\EFI\BOOT\BOOTX64.EFI)

The interesting part here is that you have the partition first, then the UUID, and finally the path on that disk. It is a bit tricky to remember the values, but it makes for a more robust system. A removable disk may not get the same letter after 'sd' the next time you boot.

EFI Tools

The EFI tools are a collection of tools that you can use to figure out what is already defined. The efi-readvar tool can show you everything you have access to. The printout is academic, since all you see are the keys. To manipulate the list, you use efi-updatevar. This requires jumping through many hoops, and when done incorrectly, you can brick your system. With that said, if you have a specific need, you can use the efivars file system. It is mounted read-only by default because of the risk of bricking the system. The steps to get access to the variables are detailed in the link below.

https://realmacmods.com/macbook-2011-radeon-gpu-disable/

That page is about the MacBook Pro that cannot boot without using the discrete GPU, which makes graphical boot impossible when you want to install Linux. Making further changes to the UEFI variables is dangerous not just for your disk contents; it can also leave the system unable to even attempt a boot.

If you know what you are looking for, you can use the efibootdump command. This requires more in-depth knowledge of your system, though.

Conclusion

Changing your UEFI variables is possible; however, you should make sure you know exactly what you are doing if you change anything other than the boot order. The boot order will make you reboot a few times until you understand any mistakes you may have made. If you are interested in speeding up your boot and making it more dynamic, consider rEFInd!

]]>
Sfdisk Tutorials https://linuxhint.com/sfdisk-tutorials/ Tue, 12 Jan 2021 18:36:25 +0000 https://linuxhint.com/?p=85369 Partitioning is vital for system administration. This is the reason partitioning software comes in so many variants. fdisk and cfdisk are made to be interactive. With parted, you can create everything with commands. Those are the most commonly used ones; sfdisk is not very common. It does have many features, and you can use it in scripts to a much higher degree. For a long time, sfdisk lagged behind on supporting GPT; since version 2.26, it does support GPT.

UEFI

This program still defaults to MBR, so you have to explicitly state that you are using GPT.

Backing Up

Before you start working with your disk, you have to back up any important data to other media! That should be a given from the start. To make sure that you can restore your current state or implement it on another disk, you can dump the table.

$ sfdisk --dump /dev/sda > sda-tables.txt

The result goes, as text, straight to standard output. With the command above, it ends up in a file that is easy to read. You can also use this to put everything back on the disk. This is what it looks like.

label: gpt
label-id: C9247CFD-5AF7-4AB1-9F62-CDDDFCC12982
device: /dev/sda
unit: sectors
first-lba: 34
last-lba: 976773134
sector-size: 512
/dev/sda1 : start= 2048, size= 1021952, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B
/dev/sda2 : start= 1024000, size= 614400, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B,
name="EFI system partition"
/dev/sda3 : start= 1638400, size= 126093312, type=E6D6D379-F507-44C2-A23C-238F2A3DF928
/dev/sda9 : start= 623642624, size= 353130496, type=0FC63DAF-8483-4772-8E79-3D69D8477DE4
/dev/sda10 : start= 127731712, size= 303263744, type=E6D6D379-F507-44C2-A23C-238F2A3DF928
/dev/sda11 : start= 430995456, size= 192647168, type=0FC63DAF-8483-4772-8E79-3D69D8477DE4

To bring it back or put it on a new disk, you restore by piping the dump into your next invocation of sfdisk.

$ sudo sfdisk /dev/sda < sda-tables.txt

Incidentally, this is a nice example of a correctly formatted command file for partitioning a disk. All parts are optional, which makes it possible to have another disk connected to the same machine and partition it the same way. When you remove the '/dev/sdaX' device prefixes from the above file, you can partition any disk with it.
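One way to do that removal with sed, shown on an abridged sample of the dump above; the resulting file contains only generic 'start=, size=, type=' lines of the kind sfdisk accepts for any target disk:

```shell
# Strip the device column (and the device: header) from a saved dump
# so the same script can be piped into sfdisk for any disk.
cat > sda-tables.txt <<'EOF'
label: gpt
device: /dev/sda
unit: sectors
/dev/sda1 : start= 2048, size= 1021952, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B
/dev/sda2 : start= 1024000, size= 614400, type=0FC63DAF-8483-4772-8E79-3D69D8477DE4
EOF
sed -E -e '/^device:/d' -e 's|^/dev/[a-z]+[0-9]+ : ||' sda-tables.txt > portable.txt
cat portable.txt
# later, on any machine: sfdisk /dev/sdX < portable.txt
```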

Interactive Mode

In interactive mode, you need to know what you are doing. The opening display is sparse; you will see the type of disk and its size. When the disk is empty, you will NOT see an old layout, which makes it disconcerting to get started. Nevertheless, the procedure is strict. You have four values to put in for each partition, separated by commas; for default values, you just put the comma.

Before you start, create a label. This example is for a GPT partition.

$ echo "label: gpt" | sudo sfdisk /dev/sdc

This is the way sfdisk is designed to run, but let's start with interactive mode.

A simple partitioning run:

$ sfdisk /dev/sdc
...
# The prompt changes to '>>>'
>>> ,350M, U
>>> ,10G, L
>>> ,,S
>>> write
# The result shows up. Confirm!

The data is written on the disk, and you can start formatting your partitions. As simple as this is, it is also error-prone. Using scripts is the main idea of sfdisk. Let’s go through the options and then the scripting language.

Setting disk label and partition labels

You can also use sfdisk with one command at a time. To do this, you use parameters with double dashes. Many of these settings can also be put in script files. Setting the disk label can be done in two ways; you saw one earlier in this tutorial.

$ sfdisk --label gpt /dev/sdc

This sets your disk to become a GPT disk. You have the option to stay with dos, or, more advisable, use the LegacyBIOSBootable attribute on GPT when you use hardware that does not support GPT. This is rare, so most likely you will use this flag only for a memory stick that you want to be able to boot even on old hardware.

You can also set labels for each partition. See the commands below.

$ sfdisk --part-label /dev/sdc 1 boot
$ sfdisk --part-label /dev/sdc 2 home

Note the difference between part-label and disk-label. The part-label only gives a supporting label for other software to use. The disk-label makes the whole disk either gpt or dos.

Creating scripts

If you have chosen to use sfdisk, you probably have a reason to do so. One of those reasons may be that you want to make many identical disks. Using sfdisk, you can partition an entire disk with one command. Another reason may be that you want to make a new disk with the same scheme as the first one. The simple way to create a script is the dump command from earlier.

$ sfdisk --dump /dev/sdc

Using the output as a guide makes it easier to get started; just remember to check the documentation before doing anything rash. You can, for example, edit the file from before by removing the device names. In the example, the dump came from '/dev/sda'. If you remove that part, you still have a valid file.

start= 1024000, size= 614400, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B,
name="EFI system partition"

All other parts of the file are also acceptable as commands. Copy in or take away what you want and pipe it into sfdisk.

$ sfdisk /dev/sdc < Disk-tables.txt

Be careful! All commands run immediately. sfdisk will, however, show you the resulting table and ask you to confirm before destroying the data on the disk you are working with.

Conclusion

This has been a short foray into the strengths and weaknesses of using the sfdisk program for your new disks. As you can see from this cursory examination, you can use this program for many things, especially if you have planned ahead what you need to do. There is also a great degree of freedom in adding sfdisk to scripts.

]]>
Neomutt Beginner Tutorial https://linuxhint.com/neomutt_beginner_tutorial/ Mon, 04 Jan 2021 20:35:51 +0000 https://linuxhint.com/?p=84169

“All email clients suck; this one just sucks less!” That is a quote from the lead developer. Checking email requires a web-based front-end or a separate mail client. These require graphical environments, with few exceptions. Neomutt is one such exception. With Neomutt, you can check your email on the command line in a separate application. You may be limited when it comes to web-based email, but there are workarounds for that inconvenience, too.

The basic ideas

To get going, you need to understand the basic concepts. The main one is the views you have for each job: listing the emails and reading each email.

Index

Neomutt starts with the index screen. This shows all emails, listed the way you want. Whether you see read emails or not is something you can set yourself. The default behaviour is to show all emails, even those marked as deleted; you move them to trash later. You choose an email with the arrow keys, vim keys, or whatever you specify yourself in the index screen. To open an email, hit Enter, and it appears in the pager screen.

Pager (Showing the Emails)

The pager is where your emails are displayed. In its basic form, you will only see text-based emails. To see HTML, you need to designate your web browser and have Neomutt call it up. Neomutt makes a temporary file that contains the email rendered as a web page.

Sidebar

The sidebar lists all the mailboxes you have available; there can be many! You can choose to have it, not have it, or toggle it with a key binding. Most users will have a key binding, like 'B', for example.

Navigation

You navigate your inbox with the arrow keys and scroll down emails with the space bar. To delete an email, you use 'D'. All these bindings are common, and you can set them yourself in the configuration file. The important part is that you distinguish between the index and the pager. When you configure Neomutt, the settings apply according to which view you use; most of them will be for both views.

Binding keys

In Neomutt, you will work with the keyboard exclusively. It is a terminal-based application, after all. For this reason, you will want to bind different keys to the functions you use the most. Earlier, you read about the standard bindings; if you want to change something, you need to bind the keys yourself. To bind capital 'B' to toggle the sidebar, for example, use the code below.

bind index,pager B sidebar-toggle-visible

The list will end up fairly long, so sourcing a separate file for the key bindings is a good idea. The format is pretty simple: a capital letter means exactly that. To express Ctrl-x, you put '\cx'.
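A sketch of what such a separate bindings file could look like; the key choices are illustrative, and the file would be pulled in with a line like 'source ~/.config/neomutt/bindings' in your main config:

```
# ~/.config/neomutt/bindings -- key choices are illustrative
bind index,pager B   sidebar-toggle-visible
bind index,pager \cn sidebar-next
bind index,pager \cp sidebar-prev
bind index,pager \co sidebar-open
bind index G last-entry
bind index g first-entry
```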

Connecting an account

The first thing you need to do is connect an account. You can do this with one command at a time, but you would need up to twenty commands to get to your mailbox, which is not what you want to do daily. It can be helpful when you try to set up a new account, though. In ordinary use, you want the account opened when you start Neomutt. This requires a configuration file in which you set all the values for the account.

# Imap settings

set imap_user = "me@mydomain.com"

set imap_pass = ""

 

# Smtp settings

set smtp_url = "smtps://srv.some-hosting.com"

set smtp_pass = ""

 

# Remote folders

set folder = "imaps://srv.some-hosting.com"

set spoolfile = "+INBOX"

set postponed = "+/Drafts"

set record = "+/Sent Mail"

set trash = "+/Trash"

 

account-hook $folder 'set imap_pass=""'

The parameters are pretty simple to understand; you may have different passwords for IMAP and SMTP, though that is rare. What can be confusing is the folder value. This configuration is for IMAP; the folder you are setting is on the remote server. You can use a local store for your emails, but that is another setup. The password is empty in this case; Neomutt will then ask for your password every time you start it. If you set the password, Neomutt will collect it from this config file. It is good practice to encrypt the file where the password is!
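One common pattern for that encryption, sketched here with example paths: keep the password-setting lines in a gpg-encrypted file and let Neomutt decrypt it on startup through a pipe.

```
# In your main neomuttrc; ~/.config/neomutt/passwords.gpg is an example
# path and contains lines such as: set imap_pass = "secret"
source "gpg --batch -q --decrypt ~/.config/neomutt/passwords.gpg |"
```

The trailing pipe tells Neomutt to run the command and read its output as configuration, so the clear-text password never sits on disk.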

Web contents

When an email is written in HTML, you cannot read it with Neomutt by default. You can access the mail in your default browser, though. On most systems, when you open such an email, the pager shows that it cannot display HTML. When you press 'v', as it says in the pager, your default browser opens it. This is determined by the ~/.mailcap file. In the file, you put 'text/html', a semicolon, and the browser you will use. On Debian-based systems, it calls '/usr/bin/sensible-browser'. To set this value system-wide, you change it in '/etc/alternatives/x-www-browser' and '/etc/alternatives/gnome-www-browser'.

$ sudo update-alternatives --config x-www-browser

$ sudo update-alternatives --config gnome-www-browser

$ xdg-settings set default-web-browser brave-browser.desktop

Note that the last command is for your use only, in case you do not have root access on your system. You can also set a different web browser just for mail by pointing the mailcap entry directly at a browser.
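A sketch of such mailcap entries; the browser command is an example, and the second line (rendering inline with w3m) assumes w3m is installed and 'auto_view text/html' is set in your Neomutt config:

```
# ~/.mailcap -- example entries, adjust the browser to taste
text/html; firefox %s; nametemplate=%s.html
text/html; w3m -I %{charset} -T text/html; copiousoutput; nametemplate=%s.html
```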

Conclusion

The Neomutt package is very versatile, but the configuration is confusing and needs more well-explained tutorials and examples than you have seen here. With your mailbox overflowing with HTML emails, you may think twice about switching to a text-based mail pager. Consider, though, that you can use it as a filter. Most commercial emails are HTML only; are your other emails in plain text?

]]>
NixOS Development Environments https://linuxhint.com/nixos-development-environment/ Sun, 03 Jan 2021 19:45:59 +0000 https://linuxhint.com/?p=84085 When developing and running software, you many times need a very particular set of libraries in your environment. You achieve this with virtual environments, containers, and other tricks. However, you do not necessarily need all that. One common case is Python programming, where you must choose between the two big versions of the language. This has caused many headaches for users and developers alike. You can avoid all this if your libraries are designated specifically for one run. This may sound impossible or unnecessary, but it is very convenient for rare use cases and development.

Revision Hell

Anyone who reads this will be familiar with the Python issue of using different versions of the language. That is just one glaring example where even plain users are affected, usually due to brilliant old projects whose software is no longer maintained. In many other situations, you also need close control over what is running and which libraries are available. Programming in C and C++ uses libraries that often need to be the exact version at compile time. Otherwise, you end up rewriting parts of the software you never intended to touch. Many developers use a container with all the libraries inside it, while all other work happens on the host computer.

The Nix Fix

How does Nix take care of this problem? Well, it keeps all files in a store, with hashes identifying the exact version. The environment you are going to use then links to the libraries and executables it needs for your current situation. On a running system, you can therefore use many versions of an application, and even of libraries. When you want to develop, you create a configuration file that covers the needs of your current project.

Configuration Files

When you have NixOS installed, configuration.nix controls the environment for the whole computer. With that said, you can also control it in every instance of your shell. Whether you run NixOS or any other distribution, you can use a separate nix file, called default.nix by default. You can use it to give a directory structure a particular environment. The workflow is to create the default nix file to reflect what you want your environment to support, then change into the directory and run nix-build, followed by nix-shell. You can also use any name for the file if you specify it on the command line.

$ cd MyProject/
$ nix-build # Once, when you have changed something.
$ nix-shell default.nix

The parameter to nix-shell is implied, but if you want several files in one directory, you can name the file explicitly. With the correct values set, your environment is the same every time you start nix-shell. If you move the nix file, you get the same environment anywhere! The big question becomes: what do I put in the nix files?

The files use the Nix expression language, which is almost a full programming language.

A few examples

Below are a few examples that can help you. There are many more things you can do to tweak your environment. This is a long and exciting journey, but it would probably slow you down at the beginning. Before you get there, use other people’s code. This list is short, so look for more ideas across the web.

Python

When you want to create a Python project, you would normally use virtual environments. With Nix, this is not necessary. Instead, you can create a shell.nix file that declares which version you want. The simplest way to do this is to use python38Full.

{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
    # nativeBuildInputs is usually what you want -- tools you need to run
    nativeBuildInputs = [ pkgs.buildPackages.python38Full];
}

This compiles an environment with all the parts of Python 3.8 that come with NixOS. If you want to minimize your environment, you can choose particular parts. You can also fetch your source code remotely with the fetch functions.
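As a sketch of a more selective environment, you can use withPackages to pull in only what the project needs (requests and numpy here are just example package names from nixpkgs):

```
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  # pick only the Python packages this project actually needs
  nativeBuildInputs = [
    (pkgs.python38.withPackages (ps: [ ps.requests ps.numpy ]))
  ];
}
```

Entering nix-shell in this directory then gives you a python with exactly those two libraries on its path.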

Python Flask

An example from web development is Flask. It is a very powerful package for making web pages, even really complex ones, without much effort. Since Flask is a popular framework, there is a ready nixpkgs package for it. The file that controls the build is called default.nix.

{ pkgs ? import <nixpkgs> {} }:

pkgs.python38Packages.buildPythonApplication {
    pname = "NixApp";
    src = ./.;
    version = "0.1";
    propagatedBuildInputs = [ pkgs.python38Packages.flask ];
}

As you can see, there are packages in nixpkgs that cover Flask. If you want to use something else, you add it inside the square brackets. This goes for all packages included in the nixpkgs repository. If a package does not exist, use a fetcher.

Python Development

If you want to start a Python development environment, you add the packages you need, pinned to the revisions you require.

with import <nixpkgs> {};
with pkgs.python37Packages;

stdenv.mkDerivation {
    name = "python-devel";
    req = ./requirements.txt;
    builder = "${bash}/bin/bash";
    setup = ./setup_venv.sh;
    buildInputs = [
        python37Full
        python37Packages.pip
    ];
    system = builtins.currentSystem;
    shellHook = ''
    SOURCE_DATE_EPOCH=$(date +%s)
    '';
}

In the shellHook, between the double apostrophes (''), you can put any scripts you like. Again, think about what might already exist; there are many smart people out there already developing with NixOS.
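As an illustration, a shellHook that creates and activates a virtualenv the first time you enter the shell (the .venv directory name is just a convention, not anything nix requires):

```
shellHook = ''
  # create the virtualenv on first entry, then activate it
  if [ ! -d .venv ]; then
    python -m venv .venv
  fi
  source .venv/bin/activate
'';
```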

JavaScript

The standard way to use JavaScript, or more precisely nodejs, is the nix script below. Name it shell.nix, place it in your project directory, and start it with the nix-shell command.

with import <nixpkgs> {};

stdenv.mkDerivation {
    name = "node";
    buildInputs = [
    nodejs
    ];
    shellHook = ''
        export PATH="$PWD/node_modules/.bin/:$PATH"
    '';
}

This is the simplest possible trick, although much more is available. You can see how to add a script that you would otherwise run manually. Use this carefully, and look for complete alternatives before resorting to it.

Jupyter

The script below initializes an environment in a directory where you can run Jupyter. The other packages are for statistics and machine learning; you can remove and add packages according to your needs.

with import <nixpkgs> {};

(
    python38.withPackages (ps: with ps; [ geopandas ipython jupyter
    jupyterlab matplotlib numpy pandas seaborn toolz ])
).env

Configurations

For your IDE, editor, or anything, really, you can also bake in your settings. For developers, vim and Emacs will be the first candidates for this specialization. Vim has its own set of plugins available as nixpkgs.

Fetchers

The basis for packages in NixOS is files that point to the sources and declare what is needed to compile them. You can use this if you are missing a package: as long as you can find the source, you can use a fetcher to install it. The standard fetcher is named fetchurl, and it is typically used to fetch tarballs.

{ stdenv, fetchurl }:

stdenv.mkDerivation {
    name = "hello";
    src = fetchurl {
        url = "http://www.example.org/hello.tar.gz";
        sha256 = "1111111111111111111111111111111111111111111111111111";
    };
}

You can use it the way it is in the above code. You also have fetchgit and fetchers for other version control systems. On top of this, the major git services are covered by fetchFromGitHub, fetchFromGitLab, and more. With all these fetchers, you should be able to find any package you want for NixOS.
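A sketch of the GitHub variant; owner, repo, rev, and the sha256 placeholder are all hypothetical values you substitute for your own project:

```
{ stdenv, fetchFromGitHub }:

stdenv.mkDerivation {
  name = "hello";
  src = fetchFromGitHub {
    owner = "example-user";   # hypothetical owner
    repo = "hello";           # hypothetical repository
    rev = "v1.0";             # tag or commit hash
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };
}
```

As with fetchurl, you can start with a placeholder hash and let the failing build report the correct one.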

Conclusion

Using NixOS requires a bit more effort than other distributions. Having said that, if you want to develop software, the effort is worth it. You will keep your main system cleaner and can hop between projects without creating troublesome conflicts between environments.

]]>
LXC Network Configuration https://linuxhint.com/lxc-network-configuration/ Mon, 14 Dec 2020 06:25:52 +0000 https://linuxhint.com/?p=81450 When you start a Linux container, you may want to use network functions. The question becomes: are you trying to network with the host, the wide internet, another container, or maybe all local containers? The good thing is that there are solutions for them all!

Profiles

To get this right, you need to configure your container. The base configuration is already on your system if you use a regular distribution. You can configure it further with commands, but most people will use YAML files. The base usually looks like the one below; the file resides in /etc/lxc/default.conf.

lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx

Each container follows the settings in the default profile and the file mentioned above. You can print the default profile as shown below. For further configuration, it is best to make new profiles. Each profile contains some configuration detail, in our case networking. You can change any setting in your container with a profile, which makes even more sense when you want to run a container both locally and on a platform.

$ lxc profile show default
config: {}
description: Default LXD profile
devices:
 eth0:
   name: eth0
   network: lxdbr0
   type: nic
 root:
   path: /
   pool: ros
   type: disk
name: default
used_by:
- /1.0/instances/guiapps
- /1.0/instances/ff

The resulting output is a YAML file. All your profiles will be in the same format. With LXC itself, you can create, remove, and edit your profile. You can see in the file that the default uses the lxdbr0 network and type nic. Now, create a new profile using the following:

$ lxc profile create nicnet

Before any container is running, edit the profile:

$ lxc profile edit nicnet

You use YAML format in the files that create these profiles. Note that the name “eth0” is the internal container name. The “parent” is what you have on your system, and you check it yourself using:

$ ip a

The printout will vary depending on what you have had before. You should also know that you can do the bridging from outside of the container with the brctl tools.

Using it in your container

Once you have created a profile, you want to add it to your container. This is done with the same ‘lxc’ set of programs. First, make sure you have a container; in this example, the container is named ‘ff’:

$ lxc profile add ff nicnet

The change takes effect when you restart networking in the container. The easiest and safest approach is to add profiles only to stopped containers.
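A typical sequence, assuming the container ‘ff’ and the profile ‘nicnet’ from above:

```
$ lxc stop ff
$ lxc profile add ff nicnet
$ lxc start ff
```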

Routed

A bridged connection is one where your container receives its own MAC address on the same Ethernet interface as your host; this is what you set up earlier in this post. With a few more tricks, you can have your router assign a separate IP address to the container. Be aware, though, that with macvlan you may run into trouble on Wi-Fi: WPA/WPA2 will not accept two MAC addresses on one association, so macvlan over Wi-Fi will break.

The earlier example relies on the bridge that lxc creates itself, without the brctl tools. This hands the container an address from the host, not from the router. You can get the address from the router if you wish, but again, only over a wired connection or an insecure Wi-Fi.

When you have made sure that you have a network connection on your host, you can connect it to your container: change the parent value and set the nictype to macvlan.

config: {}
description: Setting for the network interface
devices:
 eth0:
   name: eth0
   nictype: macvlan
   parent: enp3s0
   type: nic
name: Route
used_by:
- /1.0/instances/guiapps
- /1.0/instances/ff

You will have to make sure the parent value matches your own configuration, so check it on each machine rather than hard-coding it blindly. After this is done, you can start your container and find it in your router’s list of hosts. Well, they are interfaces, to be technical about it.

Figure 1: The container now shows up in your router

Mobile Profiles

An interesting part of Linux containers is that you can grab your configurations and dump them into YAML files. To create such a file, you run the show option of LXC and pipe the output into a file. The output follows the YAML standard, and you can use these files to configure containers elsewhere.

$ lxc profile show Route > Route.yml

To use this for a new container, you set the values from the file. Ordinarily, you would set one value at a time, but here you already have a file.

$ lxc profile create newroute
$ lxc profile set newroute user.network.config - < Route.yml

You can see that you must put the values into the namespace 'user.network.config'. This is important to know when you want to add other values unrelated to networking.

Conclusion

Networking with your containers has many options, which can be confusing, but with some research and testing of your own, you can get it to work the way you want. The best part is that with profiles you can try one thing at a time. You will never mess up your current container; just remove the profile that did not work and add the old one back. This technique works for everything in a container.

]]>
Catkin ROS https://linuxhint.com/catkin-ros-beginner-tutorial/ Sun, 13 Dec 2020 14:03:14 +0000 https://linuxhint.com/?p=81226 When using the Robot Operating System, you will at some point want to develop your own software. For ROS1, which is in maintenance until 2025, you will use Catkin to compile your projects. Even if you do not plan to program yourself, you may have to compile other people’s software, so learning the basics is useful in any case. If you have already moved to ROS2, you will use colcon for the same task.

What is Catkin?

This tool is developed for ROS, the Robot Operating System, by the team building the ROS tools. It has a multitude of tools to build your robotics project, and using it is necessary if you develop robots with ROS. You should be aware that there have been several generations of these tools over the last few years, which means you need to pick the newest! Catkin is installed with the full ROS noetic distribution; all you need to take care of is the configuration, which means setting the correct environment for running Catkin.

Setting up the directory/environment

Create a workspace directory with a src/ sub-directory inside it, e.g. MyRob/src/. The examples here are from the beginner tutorial.

Running catkin_make creates CMakeLists.txt in the src directory. It points to the other files that make up a project.

Next, you want to create your packages.

Go into the src directory and create your package:

$ catkin_create_pkg beginner_tutorials std_msgs ropy roscpp

Note the mistake in the command: ‘ropy’ should be ‘rospy’. The tool creates everything as if nothing were wrong. You can find the result in the files with grep.

ubuntu@noetic:~/catkin_ws/src/beginner_tutorials$ grep -r ropy .
./CMakeLists.txt:  ropy
./CMakeLists.txt:#  CATKIN_DEPENDS ropy roscpp std_msgs
./package.xml:  ropy
./package.xml:  ropy
./package.xml:  ropy

The next compile will fail. You now have two choices: edit the files or remove the entire directory. The create script is usually fast, so the easiest route is to remove the directory and re-run the create command. Once you have cleared those mistakes, you continue by building the package. Either way, when you have fixed it, go to the workspace root and run catkin_make again. As you move on with any project, you will always go back up to the workspace root to build the whole project. This just makes sure that everything exists correctly; there are clever tricks so you don’t have to recompile the whole project every time.
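The removal route sketched above looks like this, assuming the workspace from the example and with the dependency now spelled rospy:

```
$ cd ~/catkin_ws/src
$ rm -rf beginner_tutorials
$ catkin_create_pkg beginner_tutorials std_msgs rospy roscpp
```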

$ cd ~/catkin_ws/
$ catkin_make

If it succeeds this time, you have just created your first package. Remember to fix up your package.xml file. You should probably set your name correctly and the license. There are more settings, they are all easy to understand.

A small project

Now, do it again in a new directory and create your own project. Or better yet, for practice: pick up a project from GitHub, see where it goes, and then start changing it to your liking. To do this, create a catkin workspace with a src directory and copy the source code directories into it. From the example above, you need two steps: clone the repository and run catkin_make.

$ cd src/
$ git clone https://github.com/crkaushik93/Go-Chase-It-RSEND-Project-2.git
$ cd ..
$ catkin_make

To make a shallower tree, you may move each sub-directory up one step, but this is optional. The last command will search the src/ directory and find all code.

Installing

You do not usually install packages on the development system. However, running the install target creates an install environment next to the development environment; the catkin_make command creates both for you. As you move on, you should source one for development and the other for testing. A direct install creates your directories, including the scripts to initialize the environments.

$ catkin_make install


You will not have the files installed on your system, only in the project directory. This is great because all you need to do is run the setup and start testing.

$ source devel/setup.bash

Or…

$ source install/setup.bash

The first is for you to run testing and find out what mistakes you have embedded in your code.

Only ROS?

So, is this only valid for ROS1? Yes, catkin is aimed only at the ROS1 libraries. One thing to note, though, is that most of the job is done by CMake, so you will be able to translate many of the practices to other projects that use CMake. You only need to do more work, since Catkin has simplified many tasks for you. For ROS2, many things are similar, but the solutions are more refined and give you more control over how much you compile each time. You can also program against both levels of ROS; there is a bridge between the two!

Conclusion

Catkin is a very strong and versatile set of tools that makes your work much simpler and lets you get through the grind of developing your robotics code. The practices, though, are an excellent way to learn more about programming. So even if your robot project is only for your closest circle and bragging rights, you can benefit from the knowledge in other projects.

]]>
Linux Parted Command Line Examples https://linuxhint.com/linux-parted-command-line-examples/ Tue, 08 Dec 2020 12:15:45 +0000 https://linuxhint.com/?p=80221 There are many partitioning tools available; most of them present an interface in the form of a list. With hotkeys and some tinkering, you can get a disk partitioned pretty quickly. However, fdisk is not meant to be used inside scripts, and sfdisk is meant for scripting; your opinion on which is best may vary. Here, you will see how to run parted.

You run parted from the command line, but in one of two modes: command line and interactive. In interactive mode, you get a new shell with only parted commands, while on the command line you enter a complete command each time. There is also an -s option, so you can run many commands in one go.

Check Before

Before you begin, you should make sure that the disk is the one you think it is. Use the list option for this. Note that parted will only show the disks your user has access to, so you may have to be root to find your new shiny disk. Also, it shows all disks.

$ parted -l

The list, if you have a new disk, should look something like this:

Model: ATA QEMU HARDDISK (scsi)
Disk /dev/sda: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table:
Disk Flags:
Number Start End Size File system Name Flags
Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0
has been opened read-only.
Model: QEMU QEMU DVD-ROM (scsi)
Disk /dev/sr0: 599MB
Sector size (logical/physical): 2048B/2048B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
2 19.7MB 116MB 96.5MB primary esp

Notice that there are two disks, the CD and the new hard disk. Observant readers will notice that I am using a virtual machine to run these commands. If you want to print only your disk, you need to use the format below:

$ parted /dev/sda -- print
Model: ATA QEMU HARDDISK (scsi)
Disk /dev/sda: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
3 1049kB 537MB 536MB fat32 ESP boot, esp
1 537MB 19.3GB 18.8GB ext4 primary
2 19.3GB 21.5GB 2147MB primary

Labels

The label, in parted, designates the type of partition table you want to use. Make sure you have booted a UEFI machine if you choose ‘gpt’; your system will not boot if you get this wrong! To check what you have, list the firmware directory. Confusingly, when you later format the partitions with mkfs, you can also set labels, but that is a different concept.

$ ls /sys/firmware

If it contains an efi line, you are good to go!

acpi dmi efi memmap qemu_fw_cfg

If you see what is below, you have to choose msdos. I am excluding Macs here because I have no experience using them.

acpi dmi memmap qemu_fw_cfg
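Instead of eyeballing the listing, you can test for the efi directory directly; a small shell sketch:

```shell
# Print UEFI if the firmware exposes EFI variables, BIOS otherwise
if [ -d /sys/firmware/efi ]; then
    echo UEFI
else
    echo BIOS
fi
```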

Now that you are sure that you have a UEFI implementation on your machine, you can set the label.

$ parted /dev/sda -- mklabel gpt

For the second case, msdos, you do the same but with another parameter.

$ parted /dev/sda -- mklabel msdos

Now, you can start creating partitions!

Partitions

For the UEFI case, you need to put some space for the boot or ESP partition. This is where you can put all the booting stuff that UEFI/EFI supports. For this reason, you must leave space in front of the main partition. In this first command, we also add space for a swap partition. Look at the command below:

$ parted /dev/sda -- mkpart primary 512MiB -8GiB

This command starts the partition at 512MiB and ends it 8GiB before the end of the disk; notice the ‘-’ in front of the second value. For the second case, msdos, you do the same but start closer to the beginning of the disk, since the MBR only occupies up to 1MiB, including the backup.

$ parted /dev/sda -- mkpart primary 1MiB -8GiB

In both cases, the partition will fill all the space between the given start and end points.

$ parted -l

Use this to see what is happening to your disk. Do it between every step until you are confident about what happens.

On the rest of the disk, put your swap partition.

$ parted /dev/sda -- mkpart primary linux-swap -8GiB 100%

Notice that the procedure does not need to know the size of the disk, as long as it is well over 8 gigabytes. Obviously, you should size the swap partition based on the amount of RAM in your case. In a virtual machine, you should probably stay at a maximum of 2GiB.

Finally, for the UEFI case only, create the UEFI System partition.

$ parted /dev/sda -- mkpart ESP fat32 1MiB 512MiB

As you see in this command, you can set the file system for a partition when you create it. You can also set it after you have created it.

Filling the Disk

You can fill the disk with parted without knowing its total size. There are many ways to do this; you saw one earlier, where 100% reaches the end of the disk. Other units for positioning are s for sectors, % for percentage, and chs for the combined cylinder, head, and sector. The best part is that you can be wrong about where to start: parted will prompt you with the closest possible solution, and you can answer Yes.


Figure 1: Parted will give you a suggestion when you are wrong.
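For example, a partition covering the middle of the disk by percentage (the device name and bounds are illustrative):

```
$ parted /dev/sda -- mkpart primary ext4 20% 80%
```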

Setting Flags

In the UEFI case, you want to make sure the ESP is set to be just that by running parted.

$ parted /dev/sda – set 3 esp on

You set all flags this way.

Removing a Partition

Made a mistake? Changed strategy? You can remove partitions one by one. Change the partition number to choose the correct one.

$ parted /dev/sda -- rm 1

With the ‘--’ switch, nothing more is asked; the partition is simply removed.

Rescue

You can also rescue your old disk using the rescue parameter. This works even when you have removed a partition by mistake.

$ parted /dev/sda -- rescue 1MiB 20GiB

The action is slow, but it can help you recover from problems. When parted finds something, it will prompt you for action.

Conclusion

Parted is a very powerful way to partition your disk. You can choose to run a command at a time or open a shell.

]]>
How to Install Steam on NixOS? https://linuxhint.com/how-to-instal-steam-on-nixos/ Tue, 24 Nov 2020 17:34:47 +0000 https://linuxhint.com/?p=78116 When installing things on NixOS, you need a package in the right format from the nixos.org package collection. Steam is available, but some quirks may trip you up when you try to install it. You will hear more about this here.

In particular, Steam is a non-free software package, so you must enable that option. You will also need to handle the ‘glXChooseVisual failed’ problem. The process works one way on NixOS and another way on other distributions; with just the Nix package manager, it is more complex.

What is Steam?

Most people who come here already know this, but let’s cover it anyway. Steam is a platform and market for games and gamers. It started as a way to update games from Valve, and for the first part of its life that was all it did. As the company added more games, it also added them to the platform. With so many games available, they rebuilt it into a marketplace and community platform. You can now buy and play games and stay in touch with fellow gamers on the platform. Given all this, of course you want it installed on your NixOS system.

Installing the Main Executable

There have been some problems with Steam on NixOS in the past. The problems were solved but still require some extra actions compared to other packages.

One issue is that this is not free software. Second, the packages use 32-bit GLX, something that is not clearly reflected in the packages. These two issues need to be addressed in the setup of the package manager: the Nix or NixOS configuration (.nix) file. The actual solution was to set the driSupport32Bit value to true. There were a few other fixes, but thanks to a new module from Maciej Krüger, you can now just enable the module with the code below.

programs.steam.enable = true;
nixpkgs.config.allowUnfree = true;

This is a module that has solved several problems with some quirks of the Steam software. Once you have this set correctly, you can run the install. If you are interested, the below code is from the commit that adds the module to make it happen.

{ config, lib, pkgs, ... }:

with lib;

let
  cfg = config.programs.steam;
in {
  options.programs.steam.enable = mkEnableOption "steam";

  config = mkIf cfg.enable {
    hardware.opengl = { # this fixes the "glXChooseVisual failed" bug,
    # context: https://github.com/NixOS/nixpkgs/issues/47932
      enable = true;
      driSupport32Bit = true;
    };

    # optionally enable 32bit pulseaudio support if pulseaudio is enabled
    hardware.pulseaudio.support32Bit = config.hardware.pulseaudio.enable;

    hardware.steam-hardware.enable = true;

    environment.systemPackages = [ pkgs.steam ];
  };

  meta.maintainers = with maintainers; [ mkg20001 ];
}

As you can see in the code, it activates the support for 32-bit direct rendering and audio. It also adds the package ‘pkgs.steam’, which is the main Steam package. With the earlier configuration, you should get the whole system up and running at the next rebuild switch. Once you have saved your configuration, run:

$ nixos-rebuild switch

For most of you, this will allow the install to go forward. You now need to make sure you have enough disk space for the install; the games you install will need disk space, too. Among other things, the module sets the value that used to be the manual fix:

hardware.opengl.driSupport32Bit = true;

If things go wrong, use:

$ strace steam

There are many other optional packages to install if you have any special needs or desires.

nixpkgs.steam-run (steam-run)

Why do you need steam-run? Steam-run makes it possible to run a game with NixOS libraries instead of the ones Steam provides. This may work better for games that expect a regular Linux system underneath. Some games need patching to run in the Steam environment, because only the Steam-provided games are built for its closed environment. To use it, add steam-run or steam-run-native to your configuration file.

environment.systemPackages = with pkgs; [
...
steam-run-native
];

You can also use steam-run directly as a command, like this:

$ steam-run ./start-game.sh

This will run the game in a Steam environment directly.

Missing Dependencies

Some games may need dependencies that NixOS does not automatically provide. To fix this, you can add them to the configuration file under systemPackages.

environment.systemPackages = with pkgs; [
  ...
  (steam.override { extraPkgs = pkgs: [ mono gtk3 gtk3-x11 libgdiplus zlib ];
nativeOnly = true; }).run
  (steam.override { withPrimus = true; extraPkgs = pkgs: [ bumblebee glxinfo ];
nativeOnly = true; }).run
  (steam.override { withJava = true; })
 ];

The above code adds dependencies for several cases; pick the ones you need, of course. You can also look for other dependencies that may be missing. However, you will be on your own if you do, so be prepared to start Steam from the terminal and trace it when you ask for help on the support forums.

Other Useful Packages

You also have some special packages that may help you with some issues.

nixpkgs.steamcmd (steamcmd)

This package adds Steam command-line tools. You can use this for installing software and running your own servers; some tasks can be automated.

You also have many other packages available. To use them, you add them to your packages and rebuild. The currently available ones are below:

nixpkgs.kodiPlugins.steam-launcher (kodi-plugin-steam-launcher)

Launch Steam in Big Picture Mode from Kodi

nixpkgs.pidgin-opensteamworks (pidgin-opensteamworks)

Plugin for Pidgin 2.x, which implements Steam Friends/Steam IM compatibility

nixpkgs.bitlbee-steam (bitlbee-steam)

Steam protocol plugin for BitlBee

nixpkgs.eidolon (eidolon-1.4.6)

A single TUI-based registry for drm-free, wine, and steam games on Linux, accessed through a rofi launch menu

nixpkgs.kodiPlugins.steam-controller (kodi-plugin-peripheral.steamcontroller)

Binary addon for the steam controller

nixpkgs.matterbridge (matterbridge-1.18.0)

The simple bridge among Mattermost, IRC, XMPP, Gitter, Slack, Discord, Telegram, Rocket.Chat, Hipchat(via XMPP), Matrix, and Steam

nixpkgs.steamcontroller (steamcontroller)

A standalone Steam controller driver

nixpkgs.sc-controller (sc-controller-0.4.7)

User-mode driver and GUI for Steam controller and other controllers

Conclusion

Steam presents a small problem because a large part of the platform and some games still require 32-bit libraries, and you need to enable them. Hopefully, you have got your answer here; if not, ask on the forums! NixOS is extremely versatile, but getting to grips with the Nix language is a chore. When you switch, make sure you have some fundamental understanding of the language to avoid long, winding searches for solutions. With enough grasp of the Nix language, you should be able to come up with many solutions yourself.

]]>
How to use NixOS Package Manager? https://linuxhint.com/how-to-use-nixos-package-manager/ Sun, 22 Nov 2020 10:45:52 +0000 https://linuxhint.com/?p=77755 The NixOS package manager, Nix, is a system of its own. You can use it under any Linux distribution.

What does NixOS Package Manager do?

Most package managers use a file that contains the executable or the source code. They then calculate what it needs on the system and make sure that it exists. In Nix, things work very similarly. The big difference is that Nix creates all the files, compiling them if necessary, and puts them in one place: the nix store. Your first question may be, “Will the files not have the same name?” The system avoids this by having one directory per version AND naming all files with a hash. To make the application “feel at home”, all dependencies are then linked into their correct directories using ordinary symlinks. A profile keeps track of which version each user runs.
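You can see the effect yourself: binaries in your profile are symlinks into the store. A hypothetical transcript (the hash and version below are placeholders, not real values):

```
$ readlink ~/.nix-profile/bin/firefox
/nix/store/<hash>-firefox-82.0.2/bin/firefox
```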

NixOS User Installs

With this system, you can have different versions installed in each user’s directory. If several users have the same version, the administrator can let Nix re-link the binaries so only one copy exists at a time, which saves disk space. You can also create a specific environment for each version of a package. This is especially useful when you want to test a new version or develop software.

Installing for common distribution

For most common platforms, you can install Nix, the package manager, with a simple script, available on the Nix website. The script needs root access, so if you are very security conscious, read it before you use it. If you want to avoid using root in the script, just create the /nix directory on your system beforehand.

$ sh <(curl -L https://nixos.org/nix/install)

If you have no root access, or are just super cautious, you can run Nix as a user-only package manager.

$ sh <(curl -L https://nixos.org/nix/install) --no-daemon

These binaries work well for most, if not all, distributions. The supported platforms are x86_64-linux, i686-linux, aarch64-linux, and x86_64-darwin, which cover almost all systems in use. If you use any other platform, you can probably build your own from the source code. When the installation is done, you will have a bunch of new commands.

Adding your first program to NixOS

To install software and set when it can be used, you have nix-env. The install option (-i) is the one you will use most often; it takes a package name as an argument.

$ nix-env -i firefox

This looks the same as in other distributions, and so does the query option. The install will take some time, though, because the software must be compiled unless a pre-built version exists in the Nix cache, and reaching the cache is not always fast either. There is one difference you should take note of: you can pick a version! If you want a specific version, you must first find out which ones are available, using a regular expression.

$ nix-env -qa 'firefox.*'

You will receive a list of all the available packages. You can install it the same way but using the value in the list.

$ nix-env --install 'firefox-78.4.0esr' --preserve-installed

This can fail if you already have a version installed. The ‘--preserve-installed’ option keeps the existing version in place. You may end up with two versions of the same priority, which you can fix by setting the priority explicitly.

$ nix-env --set-flag priority 2 'firefox-82.0.2'

The priorities now decide which version you run the next time you start Firefox; to switch, set the priority accordingly. You can also start a shell to choose a binary. This is a developer’s option, and the command is nix-shell.

Updating NixOS

Once you have a collection of software, you want to stay updated. As always, you use the same command with an argument, but you must also keep the channel updated. The command for that is nix-channel.

$ nix-channel --update

This downloads the current versions of all packages available in the channel. After that, you can start upgrading your software with nix-env.

$ nix-env --upgrade

An upgrade like this replaces your old version of the software; in this case, the old Firefox will be replaced with the newest version. You may not want this, usually for development reasons.

Removing applications from NixOS

Removing applications is equally simple, with a small caveat: the uninstall command does not actually remove any files.

$ nix-env --uninstall 'firefox-78.4.0esr'

This command removes the links to the current build of this version of Firefox; all the files stay on disk. Having these versions available lets you do a rollback, which means going back to using the old version. This can be useful if you have tried the newest version and it has unforeseen problems.

$ nix-env --rollback

You roll back an entire generation, which means all the programs that were upgraded since the previous generation. The option effectively lists the generations and then switches to the previous one (nix-env --list-generations shows the list on its own). All installed packages exist in a generation on disk.

NixOS Roll-back and Cleaning up

The rollback function means that a lot of disk space is taken up by old versions, and you need to clean this up. After a period of your own choosing, you can delete these old generations to save disk space.

$ nix-env --delete-generations old

With this command, you delete all generations except the current one. With more complex parameters, you can go back and forth in the list and keep the specific generation that worked best for you. Unless you have many testing or development projects that need many versions around, you should schedule the removal of old generations.

A simple script to keep your generations clean comes with the Nix package manager install.

$ nix-collect-garbage

You should also set up the collector to run automatically using systemd or other systems.
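On NixOS itself, the collector can be scheduled declaratively. A sketch using the nix.gc options (verify the option names and values against the current NixOS manual):

```nix
# Sketch for /etc/nixos/configuration.nix; check names in the NixOS manual.
nix.gc = {
  automatic = true;                     # run nix-collect-garbage on a schedule
  dates = "weekly";                     # systemd calendar expression
  options = "--delete-older-than 30d";  # keep roughly a month of generations
};
```

On other distributions, a plain systemd timer or cron job invoking nix-collect-garbage achieves the same result.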

Conclusion

Nix package manager is a powerful system that can get you running complex development environments on your machine. You can also use it to keep your software tidy and have a simple way to recover on a new machine, should the catastrophe of a disk crash occur.

]]>
A Review of NixOS https://linuxhint.com/nixos-review/ Tue, 17 Nov 2020 11:23:19 +0000 https://linuxhint.com/?p=77085 Most reviews go over desktop tools and default tools, but such reviews are not very useful for describing NixOS, as the power of NixOS lies elsewhere. People who choose NixOS must be willing to do their own partitioning, and you will not be doing them any favours by telling them the default desktop manager can suit their needs.

With that said, if you can follow the NixOS manual, you will be fine. You can choose a default desktop environment if you want, but make sure you are comfortable with the command line and can edit a text file for configuration tasks.

A powerful configuration

The ability to configure NixOS is both an advantage and a challenge. Traditional package managers place packages into the established LSB file structure. In NixOS, the installer puts the files in the store, in a directory prefixed with a hash. This convention may sound complicated, but it enables many features.

When you install a program, the package manager prepares a directory with all files and adds links to the positions where they should be placed. It also copies the dependencies in the same directory and links them in the structure. To track which programs need which dependencies, a profile is used. With the store and the profiles, you can have many different combinations of packages.

You can also switch over with a few commands, and rolling back is super easy: just pick the old generation at the next reboot. If you are playing around with configurations, you will end up with many generations. However, you can use nix-collect-garbage -d to remove the old generations (although you must then run the nixos-rebuild command to clean up the boot menu!).

Handling revisions

In the Nix Store, where all your software is stored, you have one file for every executable. At first glance, this convention appears no different from those adopted by other systems; however, there is a big difference: Every time you upgrade, a new binary is added and then linked to your profile, which can very quickly lead to wasted disk space.

To address this issue, there is another garbage collection option, which uses the same program as for the entire system. If you need old revisions for only a short test period, you can set up a systemd timer to run the collector at a regular interval. Furthermore, you can save disk space with the ‘nix-store --optimise’ command, which finds identical files in the store and replaces the duplicates with links to one file.
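The effect of ‘nix-store --optimise’ can be illustrated with plain hard links. This is a toy demonstration in a temporary directory with made-up file names, not the real command:

```shell
# Toy demonstration of deduplication via hard links (not the real nix-store command).
dir=$(mktemp -d)
printf 'identical contents' > "$dir/pkg-a-lib.so"
printf 'identical contents' > "$dir/pkg-b-lib.so"
# Replace the duplicate with a hard link, so only one copy remains on disk:
ln -f "$dir/pkg-a-lib.so" "$dir/pkg-b-lib.so"
stat -c %h "$dir/pkg-a-lib.so"   # prints: 2  (two names, one file on disk)
```

The store keeps both names, but the bytes exist only once.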

Setting up development environments

At first, it seems hard to develop software with this system. In fact, you can start a shell with a specific development environment each time. When you pick an environment, nix-shell installs what you need, so you can spin up an environment for some odd language you rarely use, or create a file that collects everything you normally need.
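A minimal sketch of such a file, assuming the conventional pkgs.mkShell helper (the package choices are examples only):

```nix
# default.nix — a minimal development shell; package choices are examples.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  buildInputs = [
    pkgs.python3   # interpreter for the project
    pkgs.git       # version control inside the shell
  ];
  shellHook = ''
    echo "entered the project shell"
  '';
}
```

Running nix-shell in the directory containing this file drops you into a shell where these packages are on the PATH, without installing them into your profile.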

Docker and other clouds

NixOS is an operating system, and Nix is a package manager. The two work together to provide a straightforward and reproducible configuration process. In other words, if you create a full configuration file that covers all your needs, then you can use that for your next machine.

The installation procedure starts by detecting hardware. In the second step, you define your environment and system packages using the configuration.nix file. Once you have the correct content in the file, the installer will recreate the same system when you use it on a second machine.

This functionality is useful because, for regular systems, a new disk needs only the file to rebuild your system (in addition to your user file backup, of course). Furthermore, for cloud computing, you have an even bigger advantage: While the files you need to write for a docker image are really long, the corresponding file for NixOS is short and easy to move between systems. In addition, you can use the import function to create special nix files for your odd configurations and import them into your config.
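A sketch of how the import mechanism might look in configuration.nix (hardware-configuration.nix is generated by the installer; my-services.nix is a hypothetical file holding your own odd configurations):

```nix
# configuration.nix — splitting the system definition across files.
{ config, pkgs, ... }:

{
  imports = [
    ./hardware-configuration.nix
    ./my-services.nix
  ];
}
```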

Appimage, snap and flatpak

While NixOS has many brilliant ways to run your applications and separate them from each other, a lot of software is delivered in other ways. Appimages and Flatpak are easy to use to distribute packages. Fortunately, NixOS has packages for handling these formats, and you can install these packages to run your favourite AppImages and Flatpaks. You can define the packages in your configuration.nix file and have them available when you need them.
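A sketch of what this might look like in configuration.nix; services.flatpak and the appimage-run package exist in NixOS/nixpkgs, but verify the details against the current manual:

```nix
# Sketch for configuration.nix: Flatpak support plus an AppImage runner.
services.flatpak.enable = true;
environment.systemPackages = [ pkgs.appimage-run ];
```

With appimage-run installed, an AppImage is started as `appimage-run ./Some.AppImage` rather than executed directly.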

Conclusion

NixOS seems intimidating because it has no graphical installer and you need to create a configuration file. However, that configuration file sets the same values an installer would ask for. To back up a NixOS system, not including the user files, only a single file is needed; with this file, the system recreates the packages and settings. Furthermore, NixOS provides a built-in method for running a shell in a specific environment, using the same type of file: in default.nix, you can define all your libraries and dependencies and then run nix-shell in that directory.

This system has a lot of potential. Try it out: You can start with your own distribution and the nix package manager.

]]>
How to Install NixOS https://linuxhint.com/install-nix-os/ Sun, 15 Nov 2020 08:31:44 +0000 https://linuxhint.com/?p=76823 In the Linux world, there are many distributions, and these distributions usually differ in terms of package manager, environment, and packages. Once installed, you can find files in specific places in the file structure. Directories like /usr, /usr/local and /bin are used to store different files, and this standard makes it possible for an experienced Linux user to know where files are located and to run scripts that use these files over many distributions. To find out more, look up the LSB project.

While applications expect the above standard, under NixOS the files are not where they would be in another system. The developers of NixOS and GNU Guix have strong opinions about this standard, and they have come up with clever ways to stay compatible with it.

A different system

Your software storage system affects functionality in a way that is much deeper than it seems at first glance. For the software to find the files it needs, NixOS uses symlinks. Each application has its own directory that contains the executable and links to the libraries that run it.

With this organisation, you can have different files and versions installed at the same time. By default, all packages and their dependencies are compiled during installation; however, since that takes a lot of time and processing power at every install, there are caches of pre-built packages.

Downloading

With NixOS, there is always more than one way to do something. Like other distributions, NixOS offers an ISO to put on a USB stick, and you have choices regarding how you want to install it. Before we discuss this topic in more detail, it is important to understand two slightly confusing parts of this process.

First, Nix is different from NixOS, and you must understand the difference between Nix, the package manager, and NixOS, which configures your system. You can download the Nix package manager and use it on your current system. With it, you can keep many versions of applications on your system without them interfering with each other.

Second, with NixOS, while you cannot declare the partitioning scheme in the configuration, everything else can be kept in one file. Most users leave the automatically created hardware configuration file alone. When you first start out, you can keep your packages declared in the main file, but over time, you will probably create separate files that you import into your configuration file.

Partitioning

Before installation, you must partition your drives. In other distributions, there are defaults you can accept; however, with NixOS, you must do your own partitioning. Partitioning is not very complex, but you can run into trouble when you have to set your configuration for the partitioning scheme you choose. It is important to understand that the instructions and scripts expect your file systems to be labelled correctly.

The standard manual shows the partitioning commands. Note that the commands differ for a UEFI and an MBR disk, and setting the wrong values will cause many problems. The manual suggests using the values provided below for the initial installation, but it is really easy to test new values.

Standard partitions:
MBR:

parted /dev/sda -- mklabel msdos
parted /dev/sda -- mkpart primary 1MiB -8GiB
parted /dev/sda -- mkpart primary linux-swap -8GiB 100%

UEFI:

parted /dev/sda -- mklabel gpt
parted /dev/sda -- mkpart primary 512MiB -8GiB
parted /dev/sda -- mkpart primary linux-swap -8GiB 100%
parted /dev/sda -- mkpart ESP fat32 1MiB 512MiB
parted /dev/sda -- set 3 esp on

Formatting and mounting the partitions in MBR:

mkfs.ext4 -L nixos /dev/sda1
mkswap -L swap /dev/sda2
mount /dev/disk/by-label/nixos /mnt

Formatting and mounting the partitions in UEFI:

mkfs.ext4 -L nixos /dev/sda1
mkfs.fat -F 32 -n boot /dev/sda3
mkswap -L swap /dev/sda2
mount /dev/disk/by-label/nixos /mnt
mkdir -p /mnt/boot
mount /dev/disk/by-label/boot /mnt/boot

The next section will show you how to create your configuration file.

The Config File

Once you have your disks set up, you can start the configuration process. With NixOS, you configure first and then install. The following instructions assume that you have booted using the ISO, but you could boot with chroot.

With nixos-generate-config, the system generates a standard configuration file.

$ nixos-generate-config --root /mnt

This command creates two files: /mnt/etc/nixos/hardware-configuration.nix (you do not change this file) and /mnt/etc/nixos/configuration.nix. You can edit the second file in your favourite editor.

Usually, the options do not change depending on the method used to boot. You can use grub or another boot configuration. There are many options, but here are some standards.

Add this line for MBR only:

boot.loader.grub.device = "/dev/sda";

Add these lines for UEFI only:

boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;

Change the config files as little as possible to start. These values are all in the original file: just un-comment them and make changes to fit your needs.

networking.hostName = "nixos";

users.users.nixos = {
  isNormalUser = true;
  extraGroups = [ "wheel" ];
};

environment.systemPackages = with pkgs; [
  wget vim
];

services.openssh.enable = true;

Add the packages you want as standard packages; they all go inside the square brackets alongside wget and vim. You can install more packages once the system is running.

Building

Once your configuration file is correct, you can run the install.

$ nixos-install

Next, the installer will ask for a root password that will be used on the real system. All programs will be compiled or downloaded from cache.nixos.org and then installed in the Nix store on your computer. Then you can reboot, and you should get a login prompt.

$ reboot

Now, log in as root and set a password for your user. The user you defined in the configuration file will also have a home directory.

New Config

Once you have completed the above steps, you can play around with the configuration file. When you change something, try it out without installing it as follows:

$ nixos-rebuild test

Once you have new values that work well, run the rebuild command:

$ nixos-rebuild switch

Now, you will see if you have set the boot values correctly. It is important to note that the changes to the configuration are reversible. You can simply reboot and choose an older version, which is called a generation, and try again. However, each configuration does require disk space, so make sure you are familiar with the garbage collection function.

Conclusion

NixOS requires a few manual steps to set up, but you can return to a running system much quicker than with other systems. Furthermore, version control is easier if you need many versions of the same application or if you need many versions of the same libraries. At first glance, there may seem to be many limitations, but these limitations can be overcome with the more advanced parts of the system.

]]>
How to Plan a Simple Robot Using Linux https://linuxhint.com/simple_robot_linux/ Mon, 09 Nov 2020 09:11:54 +0000 https://linuxhint.com/?p=76354 Once you have ROS installed, you might want to build a robot. A good way to succeed in this project is to plan what you want to do. In this case, ROS comes to the rescue. With ROS, you can set up what you have built and visualize the whole thing. When working with robots, there will be many scenarios that you may need to consider. The robot must interact with the environment, such as avoiding the sofa and finding its way back from the kitchen. The robot should also have arms and legs if your needs require it. You can simulate all of this using ROS, and for the coding part, you can also simulate the internals of your system.

How Do You Build a ROS Robot?

For the system to work well, and for you to be able to follow what the device will do in certain situations, you need standard definitions for each part. In ROS, these components are nodes, services, and topics. In short, you create one node for each major need. For example, motion is one node, vision is another node, and planning is a third node. Nodes expose services, which handle requests and responses between nodes, while a topic broadcasts values to many other nodes. Getting to grips with these terms and how you should use them is the first key to mastering ROS2 development.

Emulate Navigation with turtlesim

When starting out in ROS, you will probably buy a robot that walks or rolls around in your house. To do this, the robot needs to have a view of the area where it is navigating. To do this, you can use a map-like application to test your robot’s behavior. The designers behind the Turtlebot have come up with an application, called turtlesim, that can do this for you. As with all other parts of ROS2, you can start these tools with a sub-command from the command line. You then have activities for different functions. The first part is to start the window where you can see the simulation, and this is called a node.

$ ros2 run turtlesim turtlesim_node

A window will appear with a turtle in the center. To control the turtle with your keyboard, you must run a second command that stays open and listens for key presses. This is a second node that communicates with the first one.

$ ros2 run turtlesim turtle_teleop_key

Now, you can move the turtle around and see how it moves. You may also get errors, such as when the turtle hits the wall; these errors show up in the terminal where turtlesim_node is running. This is the simplest use of the simulation module. You can also make the turtle draw given shapes (a square is provided) and add more turtles. To add more turtles, you can use the rqt command.

Define Services with rqt

The rqt program provides services for the simulation. The q stands for Qt, which is for handling the interface. In this example, you spawn a new turtle.

$ rqt

The rqt interface is a long list of services for the simulation you are running. To create a new turtle, pick the ‘spawn’ drop-down menu, give the turtle a new name, and click ‘call.’ You will immediately see a new turtle next to the first. If you click the ‘spawn’ drop-down menu, you will also see a new bunch of entries related to the newly-spawned turtle.

You can also remap commands to run the new turtle. The command to do so is as follows:

$ ros2 run turtlesim turtle_teleop_key --ros-args --remap turtle1/cmd_vel:=turtle2/cmd_vel

Replace ‘turtle2’ with the name you chose earlier.

Advanced Viewing with Rviz

For more advanced and 3D viewing, use rviz. This package simulates all the nodes in your design.

$ ros2 run rviz2 rviz2

In the graphical interface, you have three panels, with the view in the center. You can build environments using the ‘Displays’ panel. You can add walls, wind forces and other physical properties. This is also where you add your robots.

Be aware that before you get to this point, you will need to understand how to use the URDF format. The URDF format defines a robot, allowing you to set the body, arms, legs, and, above all, collision zones. The collision zones are there so the simulation can decide whether the robot has collided.

Learning about creating a robot in the URDF format is a large project, so use an existing open-source code to experiment with the emulators.
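To give a feel for the format, here is a minimal hand-written URDF sketch. The names (simple_bot, base_link, wheel) and dimensions are made up for illustration; real robot descriptions are much larger:

```xml
<!-- Minimal URDF sketch: a box body joined to one wheel. -->
<robot name="simple_bot">
  <link name="base_link">
    <visual>
      <geometry><box size="0.3 0.2 0.1"/></geometry>
    </visual>
    <collision>
      <geometry><box size="0.3 0.2 0.1"/></geometry>
    </collision>
  </link>
  <link name="wheel">
    <visual>
      <geometry><cylinder radius="0.05" length="0.02"/></geometry>
    </visual>
  </link>
  <joint name="base_to_wheel" type="continuous">
    <parent link="base_link"/>
    <child link="wheel"/>
    <origin xyz="0 0.12 -0.05"/>
    <axis xyz="0 1 0"/>
  </joint>
</robot>
```

Note the separate visual and collision geometries on the body: the collision element is what the simulator uses to decide whether the robot has hit something.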

Simulate Physics with Gazebo

In Gazebo, you can simulate the physics of the environment surrounding your robot. Gazebo is a complementary program that works well together with rviz. With Gazebo, you can see what is actually happening; with rviz, you keep track of what the robot detects. When your software detects a wall that is not there, Gazebo will show empty space, while rviz will show where in your code the wall was created.

Conclusion

Simulating your robot and its environments is necessary to find bugs and provide needed improvements in the operation of your robot before you put it out in the wild. This is a tedious process that continues long after you begin testing the bot, in both controlled environments and real life. With adequate knowledge of the infrastructure of your robot’s internal systems, you can make sense of what you have done right and wrong. Learn quickly to appreciate all the faults you find, as they can make your system more robust in the long run.

]]>
Install Linux in Android without Root https://linuxhint.com/install_linux_android_without_root/ Thu, 05 Nov 2020 19:37:40 +0000 https://linuxhint.com/?p=75672

Using Linux on an Android phone can be useful when you need to use command-line tools. It is also useful for running entire desktops on your phone. A common issue is that you need to root your phone to get a running system, but there are now many systems available for running your favorite distro and desktop on your mobile device. Some of these systems are available for free and even open-source versions exist.

Before choosing how to get started, consider what you are aiming for. Are you looking for a few specific applications, a command-line, or a full desktop? Your choice is important since you can use your phone for many different things. This article shows you how to install entire distributions, as well as a CLI-Launcher, on your mobile device.

Overview

To install Linux on Android without ruining your standard phone by rooting it, you will need the Proot program. This program makes it possible to run applications as if they were in a different root file system. The launchers and installation applications for Android use Proot to install a distribution or application on your phone. When you just want one or two specific applications to run on Linux, you can use the installer for one application at a time. You can also choose to install an entire distribution. You can decide what best fits your needs and choose your tool from there. Remember that you are faking a file system here, so from a security point of view, you are on your own.

The process is simple thanks to the applications that developers have made for us. You can install the helper or the install app from F-Droid or the Play Store, and choose how deep into the rabbit hole you want to go. Most of these applications are available in both stores. Finding the application on APK pure or similar is also an option.

How to Use

The process of installing a distribution is similar across the different options; in principle, all you need to do is pick the options available within the application itself. Most options include setting up a VNC or an SSH process so that you can reach the system from other computers.

Linux CLI Launcher

If you are a fan of the command-line, this one is for you! The CLI Launcher comes as an application, which you can pick up from the Google Play store or many of the APK download sites. The launcher gives you most Linux commands, as well as a way to launch your applications. You can type the name of the application and tap on the list below to start the application.

This application is not only for staying true to your keyboard-centered view of computing. You may have power-consuming jobs that you want to offload from your main system, or, vice versa, low-power jobs that you want to keep running alongside it.

GNURoot

GNURoot is a solution to run Proot and the setup for installing Linux applications and distributions. Using this, you can install many distributions and applications at once. This application is a tool that can be used to install any root file system on your mobile device.


In practice, you will download GNURoot first, and then your distribution separately. With GNURoot, you have many distributions to choose from, including Debian, Gentoo, and Aboriginal Linux. You even have GNU Octave available. All these distributions start in the terminal by default. To get a graphical user interface, find the Xserver XSDL application and install it. When this step is done, you can install all the X components in your fake root install. After you run the X server on the local machine, you will have the desktop on your mobile. You can also run the X desktop on your laptop; this way, you have some applications that are separate from your regular system, which may be useful if you have other demanding jobs in your main system.

WheezyX

WheezyX is a Debian Wheezy root file system that you can install using the GNURoot application. However, Wheezy is old, so you will want to update to a newer release by changing the /etc/apt/sources.list file. This entails upgrading the entire image to Buster, which can cause problems.

deb http://ftp.debian.org/debian/ buster main contrib non-free
deb-src http://ftp.debian.org/debian/ buster main contrib non-free

UserLand

With UserLand, you get similar functions, but they are neatly listed on the start screen. You do not have many options, though they are all available right there in the application: several distributions and a few applications. This application is extremely simple to use; it fetches all the files, unpacks them, and calls the X server, VNC server, or XSDL server. When you pick one option and allow it to install, you will need to choose how to display the running environment. Depending on what you choose, UserLand will direct you to the Play Store to download the tool for this purpose. Once the appropriate tool is downloaded, the application will start it every time you start the session.


An important warning here is that your new root file system will be updated in this process. Make sure that you have the disk space to accommodate what you are installing. The amount of storage space you need depends on your choice of applications, but 10 GB is a good start. If you are short, you may end up with a long install that then crashes anyway due to lack of space.

https://github.com/CypherpunkArmory/UserLAnd

Conclusion

Getting this process started takes much more than a single application. You will need some skill with the command line and enough disk space to handle it. Your patience may also be tested, since the first time you run it, you will have to wait for the initial download and, after that, the additional upgrades.

]]> The Best Robotics Distros https://linuxhint.com/best_robotics_distros/ Sun, 01 Nov 2020 08:18:42 +0000 https://linuxhint.com/?p=75006 For robotics development, there are many collections available to choose from. Out of habit, Linux users look for distributions to find the perfect solution for their project. While there are distributions, you can miss out if you do not also look at common libraries that help you with certain tasks. The tools you are already using are also a consideration to take seriously.

Since working with robots is a development effort, most tools you will need are development tool-kits. The Open Source Robotics Foundation (OSRF) has a great web page with resources. They support and maintain the Robot Operating System (ROS). This is a vast collection of tools that you can install either on your existing system or as a container.

The reasoning behind a distribution

When you start experimenting with robots, you will discover many things that you did not expect at the outset. Do you realise how much interpreting images matters to robotics development? In a distribution, you have all the tools that you will need before you know that you need them.

This makes it faster to get started and avoids complications as projects evolve. You will occupy some disk space unnecessarily, but the total size of the system is not big for a modern computer. As you progress in your project, make sure you know what to put on the robot and what to leave out; your disk space constraints are much stricter there.

ROS – The biggest and obvious choice

As mentioned earlier, ROS has a vast library of functions. These range from hardware control, messaging between subsystems to vision libraries, and simulators. The project is well supported by the OSRF. They are in turn supported by many industry leaders, and their commercial subsidiary supports the same companies in their efforts.

Despite the vast range of choices and the high level of sophistication, a new user can get started using some standard components. Thanks to the cooperative methods of the OSRF, many robots have been built using ROS. Many are consumer products that you can buy for a reasonable amount to start your project for a specific task. Currently available products move on wheels, on legs, and on wings as flying drones. You can even find water dwellers, including submarines.

This project will last you through to industrial use if you aim to go there. There is a bit of a learning curve getting started but you have a lot to learn about robotics anyway. In fact, when you plan your first project, you will most certainly miss many features you need just for basic use.

Mobile Robotics Programming Toolkit

As you will see when you start with robots, much of the job is programming. This toolkit helps you with SLAM (Simultaneous Localisation and Mapping) and other path-planning tools. Many of the tools have to do with vision. An interesting piece of this toolkit is the support for the Kinect hardware, for which the libfreenect libraries are the underlying ones. To add the toolkit to your Ubuntu install, you can pick up the PPA and install it with apt. Compiling your own requires gcc-7 or newer, or clang-4 or newer. For more information, and for what you need to do when you are using ROS, go to their GitHub page.

YARP

Named ‘Yet Another Robot Platform’, YARP is based on the idea of reusing as much of the existing tools as possible. It is a collection of C++ libraries that defines communication protocols for all levels of robotics projects. YARP has three components: YARPos, YARPsig, and YARPdev. They are all concerned with how to send data between the components of your project. The YARPos component creates interfaces towards the OS you are running. This makes it easy to switch the OS or hardware of one component while retaining the same YARP streams of data, which is required for running the other parts of the system. YARPsig handles signal processing tasks; it interfaces with OpenCV and similar libraries but does not do the processing itself. YARPdev provides interfaces to all manner of devices you need: think cameras, microphones, motor drivers, and more. YARP makes up the interface to the overall system, and you will also use it to configure your devices. YARP helps you plan all other software so that you can use what already exists out there. In fact, you also have the option to run some components under ROS while others run YARP. There are many options available, and you can transition between the two gradually.

Conclusion

When you start out with robotics, you will need many software parts. Each controller and embedded computer has different needs and systems. Every camera has a new driver. All the data needs to move around between subsystems and components, and it gets really complicated. To get started faster, you need a system that coordinates everything. You do this by having a coordinating function for messages and data. These are organised as ‘topics’, ‘nodes’, and ‘services’. The reason for these different functions is that sometimes you want to execute commands, sometimes you want to make data available to many other parts of the system, and sometimes you want to broadcast data to the system in general.

This is where you need a distribution or a platform that keeps all this coordinated. ROS is the system that works with most, if not all, hardware and types of projects. Remember to understand what you are trying to achieve with your design. This is particularly important when you are learning: if you have a goal, then every step has a reason and an association. That is the basis of learning; to see the connection between reason and action.

]]>
HeliOS for Arduino https://linuxhint.com/linux_on_arduino/ Sun, 01 Nov 2020 07:09:26 +0000 https://linuxhint.com/?p=74966 The microcontrollers of an Arduino use a single program to control all the switches, LEDs and other parts of the system. The first program learned by an Arduino user is typically the ‘Blink’ program, which uses the delay function to turn an LED on and off in an even pattern. This simple program can be extended to do many things, but it cannot include multitasking.

For more advanced projects, you need to change values and read data in real time, which is not possible with the standard delay function in Arduino. Therefore, a different solution is needed. Luckily, HeliOS can help.

The Limitations of Arduino

As mentioned in the introduction, the standard language of an Arduino can be applied in many ways. However, there is a problem: the Arduino cannot multitask. For example, you cannot set three different LEDs to blink at independent intervals. This task cannot be carried out because, if you use delay, the LED with the longest delay will block the blinking of the other LEDs while waiting to switch states.

Standard polling is also troublesome, as checking the state of a button requires an action to be taken. In a standard Arduino, you have to set up a function to poll the state of a switch or any other state.

There are solutions for addressing these issues (e.g., hardware interrupts, the millis function, the FreeRTOS implementation), but these solutions also have limitations. To overcome the issues of these solutions, Mannie Peterson invented HeliOS. HeliOS is small and efficient, and it can even run on 8-bit controllers.

Consider the code below, which is unreliable at best because the delay statement will prevent the button from being checked.

int buttonPin = 2;    // the number of the pushbutton pin
int ledPin =  4;      // the number of the LED pin

// variables will change:
int buttonState = 0;         // variable for reading the pushbutton status

void setup() {
  // initialize the LED pin as an output:
  pinMode(ledPin, OUTPUT);
  pinMode(LED_BUILTIN, OUTPUT);
  // initialize the pushbutton pin as an input:
  pinMode(buttonPin, INPUT);
}

void loop() {
  // read the state of the pushbutton value:
  buttonState = digitalRead(buttonPin);

  // check if the pushbutton is pressed. If it is, the buttonState is HIGH:
  if (buttonState == HIGH) {
       digitalWrite(ledPin, HIGH); // turn LED on
  } else {
        digitalWrite(ledPin, LOW); // turn LED off
  }

  digitalWrite(LED_BUILTIN, HIGH);   // turn the LED on (HIGH is the voltage level)
  delay(1000);                       // wait for a second
  digitalWrite(LED_BUILTIN, LOW);    // turn the LED off by making the voltage LOW
  delay(1000);                       // wait for a second

}

When you run this code, you will see that ‘ledPin’ blinks normally. However, when you push the button, the button LED will not light up, or if it does, it will delay the blink sequence. To make this program work, you could switch to other delay methods; however, HeliOS provides an alternative.

Linux Embedded on Arduino (HeliOS)

Despite the “OS” in its name, HeliOS is not an operating system: it is a library of multitasking functions. It implements 21 function calls that can simplify complex control tasks. For real-time tasks, the system must handle external information as it is received, which means the system must be able to multitask.

Several strategies can be used to handle real-time tasks: event-driven strategies, run-time balanced strategies and task notification strategies. With HeliOS, you can employ any of these strategies with function calls.

Like FreeRTOS, HeliOS enhances the multitasking capabilities of controllers. However, developers who are planning a complex project of critical importance need to use FreeRTOS or something similar because HeliOS is intended for use by enthusiasts and hobbyists who want to explore the power of multitasking.

Installing HeliOS

As with other Arduino libraries, new libraries can be installed from within the IDE. For versions 1.3.5 and above, you can use the Library Manager.


Alternatively, you can download a zip file from the webpage, and use that file to install HeliOS.


Please note that you need to include HeliOS in your code before you can start using it.

Example

The code below can be used to make an LED blink once per second. Although we have added HeliOS code, the final effect is the same as that of the introductory tutorial.

The main difference here is that you must create a task. This task is put into a waiting state, and a timer is set to tell the task when to run. In addition, the loop contains only one statement: xHeliOSLoop(). This loop runs all the code defined in the setup() of the sketch. When you plan your code, you need to declare all pins, constants, and functions at the top, outside of the functions.

#include <HeliOS_Arduino.h>

//Used to store the state of the LED
volatile int ledState = 0;
volatile int buttonState = 0;
const int buttonPin = 2;
const int ledPin = 4;

// Define a blink task
void taskBlink(xTaskId id_) {
    if (ledState) {
        digitalWrite(LED_BUILTIN, LOW);
        ledState = 0;
    } else {
        digitalWrite(LED_BUILTIN, HIGH);
        ledState = 1;
    }
}

// Define a button read task
void buttonRead(xTaskId id_) {
 buttonState = digitalRead(buttonPin);

 // check if the pushbutton is pressed. If it is, the buttonState is HIGH:
 if (buttonState == HIGH) {
   // turn LED on:
   digitalWrite(ledPin, HIGH);
 } else {
   // turn LED off:
   digitalWrite(ledPin, LOW);
 }
}

void setup() {
 // id keeps track of tasks
 xTaskId id = 0;
 // This initialises the HeliOS data structures
 xHeliOSSetup();

 pinMode(LED_BUILTIN, OUTPUT);
 pinMode(ledPin, OUTPUT);
 // initialize the pushbutton pin as an input:
 pinMode(buttonPin, INPUT);

 // Add and then make taskBlink wait
 id = xTaskAdd("TASKBLINK", &taskBlink);
 xTaskWait(id);
 // Timer interval for 'id', in microseconds (1,000,000 us = 1 s)
 xTaskSetTimer(id, 1000000);

 id = xTaskAdd("BUTTON", &buttonRead);

 xTaskStart(id);
}

void loop() {
 // This, and only this, is always in the loop when using HeliOS
 xHeliOSLoop();
}

With this code, you can program the LED to blink at any time without having to worry about the Arduino being delayed.

Conclusion

This project is great for people who are new to Arduino, as it lets you use the regular Arduino code to handle real-time tasks. However, the method described in this article is for hobbyists and researchers only. For more serious projects, other methods are necessary.

]]>
Installing the Robotics Operating System https://linuxhint.com/install_robotics_operating_system/ Sat, 03 Oct 2020 09:43:07 +0000 https://linuxhint.com/?p=69805 When you get started with robotics, you soon need a lot of software. Whether you are a serious developer or a hobbyist, you are going to need it packaged neatly, for more than one reason. The first reason is convenience; later it becomes a necessity because your platforms will have little memory. Being efficient becomes a necessity when you start using microcontrollers. Since ROS comes in two versions and contains many modules, installation looks trivial on the surface but can quickly become complex.

What do you need, and when?

ROS2 consists of many libraries, all of which you will install while learning. When you get more advanced, you will put only the necessary parts where you need them. To start with, you will need compilers, command-line tools, and simulators. For external systems, you will want to have only the finished nodes and the communication core of the system. In the early stages, you need to try some examples and see how to simulate a robot, or even several robots, in action. Those tools are only available in the desktop install.

How does ROS help?

The libraries in ROS are meant to give you many standard functions for robotic activities. It has libraries for sensor handling, motor control, and much more. The focus is on the communication between nodes, which is a core concept of each function in the ROS framework.

You have options!

You can install the ros-base package with your favorite package manager. The problem is that you may need many versions of the ROS system for different projects. To avoid this problem, use a container; you will end up installing the same way, only inside the container. The point, in the end, is that the ROS system comes in several versions, and each can only run on certain distribution versions. Here is a short table:

Ubuntu ver.    ROS2 ver.    ROS1 ver.
18.04          Eloquent     Melodic
20.04          Foxy         Noetic

There are more versions and more dependencies; see the list on the wiki at ros.org. The point is that you have to make sure your setup supports the ROS version. ROS also uses Python to a high degree; the project tests against Python 3 (and 2.7), so you can choose. There is also a Docker image available if you are more comfortable with that. The image is named ros:foxy-ros-base-focal.

Depending on what you are working with, you may need a different amount of software, which is a second reason to use containers. As you see in the table, you may also need to choose between ROS1 and ROS2. If you use both, there is a big risk that settings will confuse your compilers and other tools. The end of life for ROS1 is 2025, so don’t start new big projects with it.

Many parts inside

The Robotics Operating System has many subsystems. You need to know which one is needed where, and when you should have it installed. As soon as you start installing, you run into the choice of how much you want to install. By default, you will use your package manager to install the entire distribution. This is called ros-desktop-full; with it, you have all that you may need. It also takes a lot of space on your drive.

ROS Core

The ROS core makes it possible to write your programs using the rclcpp and rclpy client libraries. These are the two that the ROS developers maintain; more clients exist for other languages. They use the API to create consistent behavior across platforms. Also included at this level are all the ways your robotic system will communicate.

ROS Base

The ros-base includes many tools for development but contains no GUI tools.

ROS Desktop

This contains all the different pieces of the system, including many examples. It also gives you all the GUI tools, including simulators and ways to test communication between nodes. The only extra things you will need are special drivers and some extra implementations of, e.g., the URDF parser.

Installing ROS Desktop

The simplest way to install the ROS desktop is to use apt on Ubuntu and other Debian-based distributions. On other distributions, you need to build it yourself. When you do, the result is put in a single directory structure, which means you need to initialize that environment by sourcing the setup file. To uninstall, you remove the directory structure and stop sourcing the setup file. You need to do the sourcing with the Debian packages as well.

The ROS2 packages are available from their own repository. To add it to your system, first fetch their key.

curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add -

And then add their repositories.

sudo sh -c 'echo "deb [arch=$(dpkg --print-architecture)] http://packages.ros.org/ros2/ubuntu $(lsb_release -cs) main" > /etc/apt/sources.list.d/ros2-latest.list'

When that is done, you can update and install the libraries.

sudo apt update
sudo apt install ros-foxy-desktop

With all this installed, you need to initialize the environment in the shell you are running.

source /opt/ros/foxy/setup.bash

Add this line to your .bashrc script so that all invocations of bash are ready for you to work. This is also where using a Linux container comes in handy: when you have several projects using different versions, it is better to create a container only for ‘Foxy’ and another one for each version you need.

Conclusion

While the basic installation of ROS2 is simple, you need to be precise about what you want. If you stay with one distribution version, you have no problems, but once you start using many versions, you get complications.

]]>
openSCAD cylinder https://linuxhint.com/openscad_beginner_guide-2/ Mon, 21 Sep 2020 19:04:58 +0000 https://linuxhint.com/?p=68915 When preparing this article, I wanted to find out what people have problems with in openSCAD. To my surprise, the most common question was about creating a cylinder. There is a cylinder command, and you will learn its details first. After that, you will see innovative ways to create cylinders to your liking. You can also subtract cylinders from other pieces to create more interesting things. Most readers who come here probably want to see a hollow cylinder or a tube of some kind. Keep reading; we have lots in store for you.

The cylinder command

If you use the simplest version of the cylinder command, you only need one parameter. This makes one solid uniform cylinder and nothing more. You should note that this cylinder will have the default radius and the height given by the value in the parentheses. The command has many options, though; let’s dig through them.

cylinder( r1 = 20 );
cylinder( r1 = 20, r2 = 5 );
cylinder( r1 = 20, h = 40 );
cylinder( r = 20, h = 40 );
cylinder( r1 = 20, r2 = 5, h = 40, center = true );

The first two cylinders in the code above make little sense because they have no height set; they fall back to the default height of 1 and come out as thin discs. A common mistake is forgetting a value, so the result does not look the way you intended. When you use variables, the same thing happens if a variable is undefined, in this case for the height, so check the console log when you run it.

A Cone

The third one is a cone; the reason is that the r2 value falls back to its default size. Try the fourth one and see what happens. The last one creates a cone where you have full control of the dimensions: you set the two radii and the height, and you are done. This one is simple to use for solid cones. You can also use the diameter if that suits you better.

The center = true value applies to the z axis, leaving the cone halfway up from the “ground”. The default is false, which makes the bottom of the cone end up on the “ground”, so to speak. You can also choose how close the cone’s walls are to being circular with the ‘$fn’ parameter.
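A quick way to see what $fn does is to render the same cone twice with different facet counts; a low value turns the round wall into a visible polygon (the values here are arbitrary):

```openscad
cylinder(r1 = 20, r2 = 5, h = 40, $fn = 6);      // six flat sides
translate([50, 0, 0])
    cylinder(r1 = 20, r2 = 5, h = 40, $fn = 100); // smooth-looking cone
```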

Hollow cylinder

Hey, wait a minute! This only creates solid pieces; how do I drill holes in them? You ask, thank you! I will tell you. The answer lies in difference. The command, that is. Consider the code below: it contains two cylinders, enclosed in curly brackets after the difference command.

difference(){
    cylinder(r = 30, h = 40);
    cylinder(r = 28, h = 41);
    }

Simply put, when you have several pieces, you cut away material from the first piece using all the following pieces. In this case, you cut a cylinder out of a cylinder. If you want to cut out any other shape, you can do that too. Try a cube or a sphere! Note the interesting, and sometimes devastating, effects the $fn value can have on this code.
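For instance, cutting a box-shaped slot through the solid cylinder is just a matter of listing a cube after it inside the same difference (the dimensions here are arbitrary):

```openscad
difference() {
    cylinder(r = 30, h = 40);
    // Any shape listed after the first piece is subtracted from it
    translate([0, 0, 30])
        cube(size = [70, 12, 15], center = true);
}
```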

Hollow Cone

You can also do this with a cone; just use both pairs of radius values. Since you are defining both cones, you have a lot of control over the final result. The simplest hollow cone is just two cones inside each other, with a thickness for the material.

difference() {
    cylinder( r1 = 30, r2 = 12, h = 50);
    cylinder( r1 = 25, r2 = 7, h = 45);
}

This cone is covered at the top; you can open it by simply setting the second height higher than the first. Since you have two cylinders, you can change either of the two. As an example, you can cut a straight hole through it by changing the second cylinder. You can also choose a cube, but be aware that this can cut too much material out of the cone.
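As a sketch of that straight hole, keep the outer cone and replace the inner cone with a plain cylinder (the bore radius of 7 is arbitrary):

```openscad
difference() {
    cylinder(r1 = 30, r2 = 12, h = 50);
    // A straight bore instead of an inner cone;
    // slightly taller than the cone so it cuts cleanly through both ends
    translate([0, 0, -1])
        cylinder(r = 7, h = 52);
}
```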

Pyramid

This may seem irrelevant, but it is a useful trick to keep in mind as you continue using openSCAD. All cylinders, and other elements, are an approximation of a shape. You read about the $fn parameter earlier; here you take advantage of it. With this in mind, you may think: a pyramid is a cone with four sides. Correct! Use $fn = 4 and you have a cone with four sides, meaning a pyramid.

difference() {
    cylinder(r1 = 30, r2 = 12, h = 40, $fn = 4);
    cylinder(r1 = 25, r2 = 7, h = 35, $fn = 4);
    }

The inner cylinder cuts the same shape as the outer one, until you start playing with the $fn parameter. To get familiar with the effects of this parameter, try to make a four-legged stool. How does the $fn parameter affect the result? Also, how can you cover the top or the bottom?

Combining many

To get a lot of use out of cylinders, you should learn how to combine many of them. The final result can be very complex and sometimes even useful. Putting a top on your cylinder is one option. To do this well, you must start using variables. Make it a habit to put them at the top of what you are designing; it makes it easier to create modules later.

thickn = 5;
baser = 30;
topr = 12;
height = 50;


union() {
// The bottom cone
    difference() {
        cylinder(r1 = baser, r2 = topr, h = height);
        cylinder(r1 = baser-thickn, r2 = topr - thickn, h = height + thickn);
        }
// The top ball
    translate([0, 0, height])
     difference(){
       sphere(r = topr);
       sphere(r = topr -thickn);
       translate([0, 0, -topr])
            cube(size = topr*2, center = true);
     }
}

Starting from the top, you have the variables. They are for the thickness, base radius, top radius, and height. The union statement brings the pieces together. Inside the braces, you have the cone and then the top ball. Because they are inside the union, they become one piece at the end. You can do even more when you use many cylinders at many angles.

Making a test tube

Moving on from cones, make a test tube. First, you need to consider what shapes make up a test tube. The main part is a cylinder, nothing fancy, just the regular difference between two cylinders. If you set the length as a variable, you can use that value as a reference. You need to know where the tube ends and becomes the half-sphere at the bottom. You will also use the radius of the tube to define the sphere.

tubr = 20;
tubl = 80;
thickn = 2;

   difference() {
    cylinder(r1 = tubr, r2 = tubr, h = tubl);
    cylinder(r1 = tubr - thickn, r2 = tubr - thickn, h = tubl);
   }

Try this, and you will have only a simple cylinder; to make the whole tube, you need to merge it with the half-sphere. There is no half-sphere primitive in openSCAD, so you must make it yourself. Use the difference between two spheres to create a hollow sphere, then remove a cube that cuts off half of the sphere.

difference() {
     sphere(tubr);
     sphere(tubr - thickn);
     translate([0, 0, -tubr])
         cube(size=tubr*2, center = true);
}

Now you have two separate pieces. The next step is to put them together. Here, you can use the union command. Like the difference command, union takes all the pieces in order, but in a union, the order is not as important, since it is an addition. The code will look a little ugly because we do not use modules here.

union() {
// Main Tube
difference() {
    cylinder(r1 = tubr, r2 = tubr, h = tubl);
    cylinder(r1 = tubr - thickn, r2 = tubr - thickn, h = tubl);
   }
// Bottom sphere
   translate([0, 0, tubl]) {
       difference() {
            sphere(tubr);
            sphere(tubr - thickn);
            translate([0, 0, -tubr])
                cube(size=tubr*2, center = true);
       }
     }
// Top ring
difference() {
    cylinder(r = tubr + thickn, h = thickn);
    cylinder(r = tubr, h = thickn);
            }
  }

Here we design it upside down; this is up to you. Do what is convenient for the particular case, since you can always rotate it when you use it. The top ring has sharp edges; you can remedy this by using a circle and rotate_extrude-ing it. There are other ways to do it, so explore and experiment!

rotate_extrude(convexity = 10, $fn = 100)
translate([tubr, 0, 0])
circle(r = thickn, $fn =100);

Combining Many cylinders

Once you have made a tube out of several cylinders, you may also want to connect tubes in different ways. To do this, you can use a union again. Let’s say you want one tube at a forty-five-degree angle to the other tube. To make this, you position the angled tube halfway up the large tube.
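The snippets below call a tube() module and a totlength variable that come from the linked repository rather than from this article. A minimal stand-in, assuming tube(radius, thickness, length) parameters, could look like this:

```openscad
totlength = 300;

// Hypothetical stand-in for the repository's tube() module:
// a cylinder with a concentric, thinner cylinder of the same length removed.
module tube(radius, thickness, length) {
    difference() {
        cylinder(r = radius, h = length);
        cylinder(r = radius - thickness, h = length);
    }
}
```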

union() {
    tube(50, 4, 300);
    translate([0, 0, totlength/2]) rotate([45, 0, 0]) {
    tube(50, 4, 150);
    }
}

When you try this, it looks great from the outside. When you look inside, you see that you still have both entire tubes; the short one is blocking the flow in the long tube. To remedy this, you need to erase the insides of both tubes. You can consider the whole union one piece and put the corresponding cylinders after it inside a difference.

difference() {
    union() {
        tube(50, 4, 300);
        translate([0, 0, totlength/2]) rotate([45, 0, 0]) {
        tube(50, 4, 150);
        }
    }
    cylinder(r = 50 - 4, h = totlength);
    translate([0, 0, totlength/2]) rotate([45, 0, 0]){
        cylinder(r = 50 - 4, h = totlength/2);
        }
}

As you can see, the first cylinder stretches the whole length of the tube. This erases anything inside the large tube, but the inside of the small, leaning tube also needs to be erased. The translate command moves the cylinder up halfway; it is then rotated and put inside the tube. In fact, the code is copied from above with the tube replaced by a cylinder.

Plumbing

If you want to make more tubes, you can use the module in the example above and start expanding. The code is available at https://github.com/matstage/openSCAD-Cylinders.git. At the time of writing, there are only these two, but check back often to see more. You may be able to create more exciting stuff.

Inside a block

If you are aiming to make an internal combustion engine, you need cylindrical holes in a solid piece. Below is an example, the simplest possible; for cooling channels and pistons, there is a lot more to add. That is for another day, though.

module cylinderblock(
        cylinderR = 3,
        Edge = 1,
        numCylinders = 8)
{
    difference() {
        cube([cylinderR*2 + Edge * 2,
                cylinderR*2*numCylinders+Edge*numCylinders + Edge,10]);
        for(x = [0:1:numCylinders-1])
          translate([cylinderR + Edge, cylinderR*x*2+Edge*x+ cylinderR+Edge,0])
          cylinder(r = cylinderR, h = 12);
    }
}

Here, you have a cube that grows according to the number of cylinders you want inside the block. All values in the module have defaults, so you can use it without passing any values. To use it, put the ‘use <cylinderBlock.scad>’ statement at the top of your file and then add cylinderblock(numCylinders = 8). You can pass or omit any value; when you omit one, the default is taken. In short, the inside of the module starts with the values, then creates a cube long enough to fit the cylinders, and then removes the cylinders with a for statement. Thanks to the for statement, you can make a bigger or smaller block. For more advanced modules, you can put in constraints that change the design when certain values are reached. Maybe you want to make it a V engine when there are 8 or more cylinders.
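Assuming the module above has been saved as cylinderBlock.scad next to your main file, using it looks like this:

```openscad
use <cylinderBlock.scad>

// All parameters have defaults; override only what you need.
cylinderblock(numCylinders = 8);
```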

Extruding from a flat shape

Another way to create a cylinder is to make a circle and extrude it. A solid cylinder is only two lines:

linear_extrude(15)
circle(20);

This creates a cylinder 15 units long with a radius of 20 (openSCAD has no units). You can use the diameter instead with the d parameter. Just creating a cylinder this way is not very useful, but you can use the same technique for any 2D shape, as you will see later. For a hollow cylinder, the code is a little longer.

linear_extrude(15)
difference() {
circle(20);
circle(18);
}

This is the same, but, as we have done earlier, you remove the centre circle. You can also bend it into a circle with the rotate_extrude version. This is great for making donuts; the simplest version looks like one.

rotate_extrude(angle =180, convexity =10) {
    translate([30,0,0])
    difference() {
        circle(20);
        circle(10);
        }
    }

This code creates a half-circle sweep that is hollow. One thing you should be careful with is that the translate is necessary, or you will get an error: “ERROR: all points for rotate_extrude() must have the same X coordinate sign (range is -2.09 -> 20.00)”. The numbers depend on the values in the circle. Since this creates the same shape as a cylinder, it may seem useless. It is not! The best use of this command is to make a flat shape functional somehow. The manual has a simple polygon as an example; it creates a round shape where you can run a belt. You can also twist the extrusion. The code below creates a corkscrew.

translate([-80,0,0])
linear_extrude(80, twist = 900, scale = 2.0, slices = 100)
translate([2, 0, 0])
square(10);

The example in the manual shows a polygon that can be useful. The code below can be whatever you like, but it illustrates the power of doing it this way.

translate([0, -80, 0])
rotate_extrude(angle = 275)
translate([12,3,2])
polygon(points = [[0,0], [20,17], [34,12], [25,22], [20, 30]]);

You can experiment with the shape of the polygon until you get it right for your application. If it feels a little daunting using just numbers, you can create the profile in other CAD programs and import the DXF result using the import() command.
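As a sketch, extruding an imported profile works just like extruding a built-in 2D shape; the file name here is hypothetical:

```openscad
// profile.dxf is a hypothetical file exported from another CAD program
linear_extrude(15)
    import("profile.dxf");
```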

Conclusion

Making a cylinder is simple, but it is just the start of the process. The tricky part is making something useful with it. You also need to incorporate it into your design, and maybe create pieces more complex than cylinders. Find ways and challenges for your ongoing expansion of knowledge of openSCAD. Remember to use the documentation, and lean on other software when something cannot easily be achieved with numbers alone. Something not covered in this post is that you can draw things in Inkscape and Blender and import them into openSCAD. Exporting from openSCAD to STL and other formats is well supported, and if you are really curious, check out the creations over on Thingiverse. They have a bundle of enthusiasts contributing things to their site. ]]>