Install Xfce on Manjaro Linux – by Sidratul Muntaha

Manjaro Linux is an awesome distro that brings Arch Linux to the community in a more user-friendly manner. It dramatically eases the learning curve of Arch Linux. With an intuitive and modern design, Manjaro Linux is suitable for everything from home to professional usage. For any Linux distro, the desktop environment is one of the most important parts: it is largely responsible for the user experience of that distro. Keeping that in mind, there are already tons of desktop environments available to the Linux community, for example, GNOME, Xfce, KDE Plasma, LXDE etc. As of now, Manjaro Linux is available in 4 different variants – Xfce, KDE, GNOME, and Manjaro-architect.

Why Xfce?

I personally like Xfce more than the others because of its lightweight nature and simplicity. Xfce blends the look and feel of classic computing systems with modern interfaces. It also comes with a basic yet powerful set of tools for everyday usage. Yet it consumes fewer hardware resources (only 400MB of system memory) than most other desktop environments like KDE or GNOME.

Xfce is open-source and available on almost all Linux distros. Are you a fan of Xfce? Let’s enjoy this awesome desktop environment on Manjaro Linux – an Arch-based distro that welcomes novice and new users into the world of Arch.

Xfce on Manjaro Linux

There are 2 different ways you can enjoy Xfce on Manjaro Linux.

Method 1

Get the XFCE version of Manjaro Linux ISO.

Then, install Manjaro Linux on your computer.

Method 2

If you already have Manjaro Linux installed and want to switch to Xfce, then follow this guide. Note that the installation will take about 400MB of additional HDD space.

At first, make sure that all your system components are up-to-date.

sudo pacman -Syuu

Now, it’s time to install Xfce. Run the following command –

sudo pacman -S xfce4 xfce4-goodies network-manager-applet

Optional steps

These next steps are optional, but I recommend them for the complete Xfce experience.

Run the following commands –

sudo pacman -S lightdm lightdm-gtk-greeter lightdm-gtk-greeter-settings

sudo systemctl enable lightdm.service --force

This will install and use LightDM as the default display manager for Xfce.

Manjaro Linux officially offers predefined configurations and theming for Xfce.

sudo pacman -S manjaro-xfce-settings manjaro-settings-manager

Update the current user –

/usr/bin/cp -rf /etc/skel/. ~

Now, open the “lightdm-gtk-greeter.conf” file –

sudo gedit /etc/lightdm/lightdm-gtk-greeter.conf

Replace the existing content with the following –

[greeter]
background = /usr/share/backgrounds/breath.png
font-name = Cantarell 10
xft-antialias = true
icon-theme-name = Vertex-Maia
screensaver-timeout = 60
theme-name = Vertex-Maia
cursor-theme-name = xcursor-breeze
show-clock = false
default-user-image = #avatar-default
xft-hintstyle = hintfull
position = 50%,center 50%,center
clock-format =
panel-position = bottom
indicators = ~host;~spacer;~clock;~spacer;~language;~session;~a11y;~power

After everything is complete, restart your system.

Enjoying Xfce

Voila! Your system is now using Xfce!

For more information, see the wiki page from Manjaro that helped us write this article.

Postfix Mail Queue Management – by Sumesh Prabhu

Postfix is one of the most widely used mail systems, along with Exim. In the early days, Postfix was mostly seen in custom mail server setups, but nowadays Plesk servers also ship with Postfix, rather than Qmail, as the default mail server. In this blog, we concentrate on the mail queue management commands that almost every server owner and server administrator will need at some point.

Postfix has five different queues – plus a ‘corrupt’ directory for damaged queue files – and they are listed below. All mail that Postfix handles stays on the server in one of these queues until the message leaves the server.

  1. maildrop
  2. hold
  3. incoming
  4. active
  5. deferred
  6. corrupt

You can get a detailed reference of all the above queues from this link. Postfix uses a separate directory for each of the above queues and the default directory for those are:

/var/spool/postfix/maildrop
/var/spool/postfix/hold
/var/spool/postfix/incoming
/var/spool/postfix/active
/var/spool/postfix/deferred
/var/spool/postfix/corrupt

The above is just a reference for the queue structure. Below is the actual set of commands a server owner or administrator needs to manage a Postfix mail queue. I will also show how to track down a spamming instance, so you get a more detailed idea of Postfix queue management.

Display the list of queued, deferred, and pending mails

# postqueue -p
Sample Output
[root@host1 ~]# postqueue  -p
-Queue ID- --Size-- ----Arrival Time---- -Sender/Recipient-------
C79CEC3F6BC*     526 Wed Dec  5 15:05:18  root@host1.server.com
test.test@gmail.com

In the above result, the Queue ID is C79CEC3F6BC; we need it for all further checks.

To display the mail header and contents

# postcat -q "Queue ID"
# postcat -q C79CEC3F6BC

To check the total number of mails in the queue

# postqueue -p | grep -c "^[A-Z0-9]"

To reattempt delivery of all mails in the queue

# postqueue -f

To remove all mails in the queue

# postsuper -d ALL

To remove all mails in the deferred queue

# postsuper -d ALL deferred
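
As an aside, the qshape tool that ships with Postfix (packaged separately on some distros) summarizes a queue by recipient domain and message age, which is handy for seeing where deferred mail is piling up:

# qshape deferred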

To remove a particular mail in the queue

# postsuper -d "Queue ID"
# postsuper -d C79CEC3F6BC

To remove all mails from a particular mail id

test.test@domain.com

# mailq | tail -n +2 | awk 'BEGIN { RS = "" } / test.test@domain\.com$/ { print $1 }' |
tr -d '*!' | postsuper -d -

To attempt to send one particular mail

# postqueue -i "Queue ID"
# postqueue -i C79CEC3F6BC

To clear the infected mails by user or pattern

To clear infected mails sent by a specific user or matching a specific pattern, you can use the commands below. They simply search the message content for the given pattern and remove every email that contains it.

To remove all mails which contain test.test@domain.com anywhere in the message:

# for id in `postqueue -p|grep '^[A-Z0-9]'|cut -f1 -d' '|sed 's/*//g'`; do postcat -q $id
| grep test.test@domain.com  && postsuper -d $id; done

To remove all mails that contain a particular pattern like “X-PHP-Originating-Script: 48:badmailing.php”, we can use the same script as below. When you are using a longer pattern, make sure you copy it exactly, including all spaces, and put it in double quotes.

# for id in `postqueue -p|grep '^[A-Z0-9]'|cut -f1 -d' '|sed 's/*//g'`;
do postcat -q $id | grep "X-PHP-Originating-Script: 48:badmailing.php"
&& postsuper -d $id; done
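
To identify which sender is flooding the queue in the first place, a quick sketch like the following helps (assuming the standard mailq output shown earlier, where the sender is the seventh field on each queue entry’s first line):

# mailq | awk '/^[A-Z0-9]/ {print $7}' | sort | uniq -c | sort -rn | head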

Conclusion

I hope this article helps you get more comfortable with Postfix Mail Queue Management.

Debian Package Dependencies – by Frank Hofmann

For Linux distributions such as Debian GNU/Linux, there exist more than 60,000 different software packages. All of them have a specific role. In this article we explain how the package management system reliably manages this huge number of software packages during an installation, an update, or a removal in order to keep your system working and entirely stable.

For Debian GNU/Linux, this refers to the tools apt, apt-get, aptitude, apt-cache, apt-depends, apt-rdepends, dpkg-deb and apt-mark.

Availability of software packages

As already said above, a Linux distribution consists of tons of different software packages. Software today is quite complex, which is why it is common to divide it into several single packages. These packages can be categorized by functionality or by role – binary packages, libraries, documentation, usage examples as well as language-specific collections – and each provides only a selected part of the software. There is no fixed rule for the division; it is made either by the development team of a tool, or by the package maintainer who takes care of the software package for your Linux distribution. Using aptitude, Figure 1 lists the packages that contain the translations for the different languages of the web browser Mozilla Firefox.

Figure 1: aptitude-firefox.png

This way of working makes it possible for each package to be maintained by a different developer or an entire team. Furthermore, the division into single components allows other software packages to make use of them for their own purposes, too. A required functionality can simply be reused and does not need to be reinvented.

Package Organization

The package management tools on the Debian GNU/Linux distribution constantly take care that the dependencies of the installed packages are completely met. This is especially the case when a software package is installed, updated, or deleted. Missing packages are added to the system, and installed packages are removed from the system if they are no longer required. Figure 2 demonstrates this for the removal of the package ‘mc-data’ using ‘apt-get’: removing ‘mc-data’ also triggers the automatic removal of the package ‘mc’, because ‘mc’ no longer makes sense without ‘mc-data’.

Figure 2: apt-get-remove-mc.png
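
The command behind Figure 2 is simply the removal call, run as root:

# apt-get remove mc-data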

Package marks and flags

During their work, the package management tools respect the package flags and marks that are set. These are either set automatically, or set manually by the system administrator. This behaviour especially refers to the flag ‘essential package’, which is set for packages that should not be removed. A clear warning is issued before you do that (see Figure 3).

Figure 3: apt-get-remove.png

Also, the three marks ‘automatic’, ‘manual’ and ‘hold’ are taken into account. They mark a package as automatically installed, manually installed, or not to be updated (hold the current version). A software package is either marked ‘automatic’ or ‘manual’, but not both.

Among others, the command ‘apt-mark’ handles the marks and flags using the following subcommands:

  • auto: set a package as automatically installed
  • hold: hold the current version of the package
  • manual: set a package as manually installed
  • showauto: show the automatically installed packages
  • showmanual: show the manually installed packages
  • showhold: list the packages that are on hold
  • unhold: remove the hold flag for the given package

In order to list all the manually installed packages issue this command:

$ apt-mark showmanual
abiword
abs-guide
ack-grep
acl
acpi

$

In order to hold a package version use the subcommand ‘hold’. The example below shows this for the package ‘mc’.

# apt-mark hold mc
mc set on hold
#

The subcommand ‘showhold’ lists the packages that are on hold (in our case it is the package ‘mc’, only):

# apt-mark showhold
mc
#

Using an alternative method titled ‘apt pinning’, packages are classified by priorities. Apt applies them in order to decide how to handle this software package and the versions that are available from the software repository.
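
As a hedged illustration (the package name and priority are chosen for the example only), a pin entry in /etc/apt/preferences could look like this:

Package: mc
Pin: release a=stable
Pin-Priority: 700

A priority above 1000 would even force a downgrade to the pinned version.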

Package description

Every software package comes with its own standardized package description. Among other fields, this description explicitly specifies which further package(s) it depends on. Distribution-specific tools extract this information from the package description, and then compute and visualize the dependencies for you. The next example uses the command ‘apt-cache show’ in order to display the package description of the package ‘poppler-utils’ (see Figure 4).

Figure 4: package-description-poppler-utils.png


The package description contains a section called ‘Depends’. This section lists the other software packages plus version number that the current package depends on. In Figure 4 this section is framed in red and shows that ‘poppler-utils’ depends on the packages ‘libpoppler64’, ‘libc6’, ‘libcairo2’, ‘libfreetype6’, ‘liblcms2-2’, ‘libstdc++6’ and ‘zlib1g’.

Show the package dependencies

Reading the package description is the hard way to figure out the package dependencies. Next, we will show you how to simplify this.

There are several ways to show the package dependencies on the command line. For a deb package as a local file use the command ‘dpkg-deb’ with two parameters – the file name of the package, and the keyword ‘Depends’. The example below demonstrates this for the package ‘skypeforlinux-64.deb’:

$ dpkg-deb -f Downloads/skypeforlinux-64.deb Depends
gconf-service, libasound2 (>= 1.0.16), libatk1.0-0 (>= 1.12.4), libc6 (>= 2.17),
libcairo2 (>= 1.2.4), libcups2 (>= 1.4.0), libexpat1 (>= 2.0.1),
libfreetype6 (>= 2.4.2), libgcc1 (>= 1:4.1.1), libgconf-2-4 (>= 3.2.5),
libgdk-pixbuf2.0-0 (>= 2.22.0), libglib2.0-0 (>= 2.31.8), libgtk2.0-0 (>= 2.24.0),
libnspr4 (>= 2:4.9-2~), libnss3 (>= 2:3.13.4-2~), libpango-1.0-0 (>= 1.14.0),
libpangocairo-1.0-0 (>= 1.14.0), libsecret-1-0 (>= 0.7), libv4l-0 (>= 0.5.0),
libx11-6 (>= 2:1.4.99.1), libx11-xcb1, libxcb1 (>= 1.6), libxcomposite1 (>= 1:0.3-1),
libxcursor1 (>> 1.1.2), libxdamage1 (>= 1:1.1), libxext6, libxfixes3,
libxi6 (>= 2:1.2.99.4), libxrandr2 (>= 2:1.2.99.3), libxrender1, libxss1,
libxtst6, apt-transport-https, libfontconfig1 (>= 2.11.0), libdbus-1-3 (>= 1.6.18),
libstdc++6 (>= 4.8.1)
$

In order to do the same for an installed package, use ‘apt-cache’. The first example uses the subcommand ‘show’ followed by the name of the package. The output is piped to the ‘grep’ command, which filters the line ‘Depends’:

$ apt-cache show xpdf | grep Depends
Depends: libc6 (>= 2.4), libgcc1 (>= 1:4.1.1), libpoppler46 (>= 0.26.2),
libstdc++6 (>= 4.1.1), libx11-6, libxm4 (>= 2.3.4), libxt6
$

The command ‘grep-status -F Package -s Depends xpdf’ will report the same information.

More specifically, the second example again uses ‘apt-cache’, but with the subcommand ‘depends’ instead. The subcommand is followed by the name of the package:

$ apt-cache depends xpdf
xpdf
Depends: libc6
Depends: libgcc1
Depends: libpoppler46
Depends: libstdc++6
Depends: libx11-6
Depends: libxm4
Depends: libxt6
Recommends: poppler-utils
poppler-utils:i386
Recommends: poppler-data
Recommends: gsfonts-x11
Recommends: cups-bsd
cups-bsd:i386
Conflicts:
Conflicts:
Conflicts:
Conflicts:
Replaces:
Replaces:
Replaces:
Replaces:
Conflicts: xpdf:i386
$

The list above is quite long, and can be shortened using the switch ‘-i’ (short for ‘--important’):

$ apt-cache depends -i xpdf
xpdf
Depends: libc6
Depends: libgcc1
Depends: libpoppler46
Depends: libstdc++6
Depends: libx11-6
Depends: libxm4
Depends: libxt6
$

The command ‘apt-rdepends’ does the same but with version information if specified in the description:

$ apt-rdepends xpdf
Reading package lists… Done
Building dependency tree
Reading state information… Done
xpdf
Depends: libc6 (>= 2.4)
Depends: libgcc1 (>= 1:4.1.1)
Depends: libpoppler46 (>= 0.26.2)
Depends: libstdc++6 (>= 4.1.1)
Depends: libx11-6
Depends: libxm4 (>= 2.3.4)
Depends: libxt6
libc6
Depends: libgcc1

$

The command ‘aptitude’ works with search patterns, too. For dependencies, use the pattern ‘~R’ followed by the name of the package. Figure 5 shows this for the package ‘xpdf’. The letter ‘A’ in the second column of the output of ‘aptitude’ identifies a package as being automatically installed.

Figure 5: aptitude-rdepends.png

Package dependencies can be a bit tricky, and it may help to show them graphically. Use the command ‘debtree’ (available as a package of the same name) followed by the name of the package in order to create a graphical representation of the package dependencies. The tool ‘dot’ from the Graphviz package transforms the description into an image as follows:

$ debtree xpdf | dot -Tpng > graph.png

In Figure 6 you see the created PNG image that contains the dependency graph.

Figure 6: dot.png

Show the reverse dependencies

Up to now, we have answered the question of which packages a given package requires. There is also the other way round – so-called reverse dependencies. The next examples deal with the packages that depend on a given package. Example number one uses ‘apt-cache’ with the subcommand ‘rdepends’ as follows:

$ apt-cache rdepends xpdf
xpdf
Reverse Depends:
|octave-doc
xpdf:i386
libfontconfig1:i386
|xmds-doc
xfe
wiipdf
|vim-latexsuite
python-scapy
|ruby-tioga
|python-tables-doc
|page-crunch
|octave-doc
|muttprint-manual
mozplugger
mlpost
libmlpost-ocaml-dev

$

Packages marked with a pipe symbol depend on ‘xpdf’ only as one alternative in an OR-dependency. These packages do not need to be installed on your system, but they do have to be listed in the package database.

The next example uses ‘aptitude’ to list the packages that have a hard reference to the package ‘xpdf’ (see Figure 7).

Figure 7: aptitude-search.png

Validate the installation for missing packages

‘apt-get’ offers the subcommand ‘check’ that allows you to validate the installation. If you see the following output, no packages are missing:

# apt-get check
Reading package lists… Done
Building dependency tree
Reading state information… Done
#

Conclusion

Finding package dependencies works well with the right tools. Using them properly helps you to understand why packages are installed, and which ones might be missing.

Book Review: The Go Programming Language – by Ranvir Singh

The Go Programming Language, by Alan A. A. Donovan and Brian Kernighan, is reviewed in this post. Brian Kernighan is well known as the coauthor of The C Programming Language, a book that has served as a standard text for generations of engineers. Go has often been referred to as the 21st-century C, and The Go Programming Language may very well become the standard reference text for it.

The Beginning

The book starts out strong with a tutorial chapter giving you a simple “Hello, World” program and showing off some of the advantages of using Go. The minimalism is bound to appeal to programmers who have had it with bloated libraries. You can’t declare a variable and not use it in the rest of your Go program. You can’t import a library and not use it in your code. Your program will simply not compile. You don’t have to argue about the format of your code. For example, the age-old battle between:

func main() {
}
//And
func main()
{
}

is settled by the compiler, which accepts only the former and not the latter. Other nuances are settled by tools like gofmt, which takes your Go source file and formats it in a standardized manner. So all Go programs follow the same convention, which in turn improves the readability of the code.
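
As a minimal illustration (the file name is just an example), reformatting a source file in place is a single command:

gofmt -w main.go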

The first chapter emphasizes these selling points and does a really good job of giving readers a taste of what Go is really about: a general-purpose language designed for generating static binaries with as little bloat as possible.

Brevity

Experienced programmers are tired of learning the same concepts, like for loops and if-else statements, over and over again for different languages. The first chapter sneaks in all this tedious information by encouraging the reader to write simple Unix-y programs (as was the case with The C Programming Language).

One drawback of this rapid introduction is that new readers may get baffled by the syntax. Programs rapidly start using dot operators and various object-oriented programming concepts just two or three examples into the very first chapter. This is important for maintaining the speed and brevity of the overall reading experience, and it is a very conscious choice on the part of the writers.

The book also assumes that readers are familiar with at least one programming language before they pick it up. This could be Python, JavaScript, Java, C or any other general-purpose language.

Companion Website

The book comes with a companion website. You can directly import the programs given in the book from this website and run them without having to type them in (or copy-paste from your Kindle app). You can even check out the first chapter (which, by the way, is my favorite one) for free on this website and decide whether this book is for you or not.

The authors have paid attention to the pains of a programmer trying to learn a new language. Distractions are kept to a minimum with each program’s web link mentioned on top of it. So you can fetch the code, run it, tweak it and build upon it, if you like.

A comprehensive list of errata is also maintained on this website, and you can refer to it if you think something’s amiss.

Serious Business

If you are expecting a simple guide for casual scripting, this is not the book for you. The reason is that a lot of ground is covered first, and the details are then filled in as we progress towards later chapters.

This book is for people who want to understand the constructs and the nitty-gritty details of how Go works. You will be creating GIFs, writing web servers and plotting Mandelbrot sets and much, much more, but none of it will make sense unless you have paid attention to the finer points made in the preceding chapters (with Chapter 1 being somewhat of an exception, as it is meant as an overview of the language).

The majority of the rest of the book focuses on various syntax-related details of Go, including things like control loops, variables, functions, methods, goroutines and much, much more. All of this is illustrated by making the reader work through useful programs rather than made-up idealistic scenarios.

Even if you wish to skip most chapters from the middle of the book, I would strongly suggest digging through chapter 5 for an understanding of panics, error handling and anonymous functions. However, I would strongly suggest going through all the chapters sequentially before coming to the crown jewel of Go: concurrency.

Emphasis on Concurrency

The Go language is designed from the ground up with concurrency in mind. Most modern processors are multicore and multithreaded, but programmers despise the complications they face when writing programs for such architectures. With cloud computing heading towards distributed systems, concurrent code will soon be the only well-performing code out there.

The chapter on concurrency is written to beat the fear of concurrent design out of our minds. It is complicated, yes, but not hopeless. The book does a great job of conveying how Go can help you develop the correct mindset for it.

Conclusion

The experience Kernighan gained in the early UNIX days is still very much relevant in the modern age of cloud desktops, GPUs, IoT and whatever will follow next. He and Donovan have done a great job of imparting this wisdom of application design and UNIX philosophy using a simple, modern language designed with performance in mind, and I have zero hesitation in recommending this book to anyone from a high school student to a senior software engineer with decades of experience.

Best Alternatives to Red Hat Linux – by David Morelo

The recent news of IBM’s purchase of Red Hat sent a ripple through the global open source community, sparking fear that it will eventually push either the entire Red Hat portfolio or at least some of its parts to the scrap heap.

But we’re not here to make educated guesses about the future of the beloved Linux distribution. Instead, we’re here to list the five best alternatives to Red Hat Linux that you can try right now to see what other options are out there.

Debian

If there’s one Linux distribution that has led to the creation of more derivatives than Red Hat, it’s Debian. First released in 1993, Debian is an early operating system based on the Linux kernel, and its tremendous success speaks for itself.

Debian has a unique release cycle with three distinct branches: Unstable, Testing, and Stable. As their names suggest, the branches target everyone from those who prefer bleeding-edge software to those who require impeccable stability to meet the needs of enterprise customers. In addition to the three main branches, there are also branches with archived software releases and experimental software.

What separates Debian from many other Linux distributions are the Debian Free Software Guidelines, which are used to determine whether a software license is a free software license. They were created as a reaction to Red Hat never clearly explaining its social contract with the Linux community.

Debian is currently available in 75 languages, and its development is carried out over the internet. You definitely don’t need to worry about Debian being purchased by some big tech company because that would go directly against everything the project stands for.

Ubuntu

We’ve mentioned that Debian has led to the creation of a number of derivative Linux distributions, and one of them is Ubuntu. Even though Ubuntu is perhaps best known for its polished user interface and user-friendly desktop features, it’s also an excellent alternative to Red Hat thanks to its server edition, simply called Ubuntu Server.

According to Canonical, a UK-based privately held computer software company founded and funded by South African entrepreneur Mark Shuttleworth to market commercial support and related services for Ubuntu and related projects, the server edition of Ubuntu is ready for everything from self-hosted clouds to Hadoop clusters to massive render farms with tens of thousands of nodes.

In addition to Ubuntu Desktop, Ubuntu Server, and Ubuntu Core, which is an edition for Internet of Things devices, there’s a whole host of official derivatives of Ubuntu, including Kubuntu, Lubuntu, Ubuntu Budgie, Ubuntu Kylin, Ubuntu MATE, and Edubuntu, a complete Linux-based operating system targeted at primary and secondary education, just to give a few examples.

SLES (SUSE Linux Enterprise Server)

SUSE Linux Enterprise Server, or SLES for short, is a Linux-based operating system developed by SUSE for servers, mainframes, and workstations. It was first released in 2000, and there have been seven major versions of SLES released since then.

Unlike Red Hat, which is a massive company with more than 10,000 employees spread all around the world, SUSE has “just” around 1,400 employees, who are mostly based in Germany, USA, Czech Republic, Great Britain, and Italy. But just like Red Hat, SUSE is a generous developer and sponsor of a number of open source projects, including the community-supported openSUSE Project, whose purpose is to develop the openSUSE Linux distribution.

SLES uses the RPM Package Manager (RPM) package management system, originally created for use in Red Hat Linux, and has built several applications on top of it. One such application is package manager engine ZYpp, whose source code is hosted publicly on GitHub.

If you worry that SUSE could be acquired by a large tech company just like Red Hat, we have bad news for you: it has already gone through several acquisitions. The most recent one was announced in July 2018. The current owner of SUSE, Micro Focus, announced that it would sell its SUSE business segment to EQT Partners for $2.5 billion. Fortunately, EQT Partners has promised to let SUSE further build its brand and unique corporate culture as a stand-alone business.

Arch Linux

Arch Linux is everything you want it to be thanks to the design approach of its development team, which is succinctly summarized by the acronym KISS (keep it simple, stupid).

“Relying on complex tools to manage and build your system is going to hurt the end users,” explains Aaron Griffin, the lead developer of Arch Linux. “If you try to hide the complexity of the system, you’ll end up with a more complex system. Layers of abstraction that serve to hide internals are never a good thing. Instead, the internals should be designed in a way such that they need no hiding.”

This rolling-release distribution has become infamous for its console-based installation, but seasoned Red Hat administrators have absolutely nothing to worry about. Thanks to the fantastic package manager that comes with Arch Linux, pacman, installing any of the thousands of officially maintained packages is a breeze. What’s more, Arch Linux users can also install user-produced content hosted in the Arch User Repository, or AUR.

FreeBSD

We’re aware that FreeBSD isn’t a Linux distribution, but that doesn’t take anything away from the fact that it’s a fantastic alternative to Red Hat Linux. This direct descendant of BSD was first released in 1993 and has since become the most widely used open-source BSD operating system.

The most important difference between FreeBSD and various Linux distributions is in scope. FreeBSD developers maintain all the individual parts that together make a complete operating system, whereas Linux only maintains a kernel and drivers, relying on third parties for the rest of the system software.

FreeBSD uses its own open source license, which imposes very small restrictions on the use and distribution of software. FreeBSD has a large community of dedicated users and developers, giving you one extra reason to give it a try.

Conclusion

Try one of these distributions if you want to diversify away from Red Hat.

How to Install JetBrains PHPStorm on Ubuntu – by Shahriar Shovon

PHPStorm by JetBrains is one of the best PHP IDEs. It has plenty of amazing features and a good-looking, user-friendly UI (User Interface). It has support for Git, Subversion and many other version control systems. You can work with different PHP frameworks such as Laravel, CakePHP, Zend Engine, and many more with PHPStorm. It also has a great SQL database browser. Overall, it’s one of the must-have tools if you’re a PHP developer.

In this article, I will show you how to install PHPStorm on Ubuntu. The process shown here will work on Ubuntu 16.04 LTS and later. I will be using Ubuntu 18.04 LTS for the demonstration. So, let’s get started.

Installing PHPStorm:

PHPStorm has a snap package for Ubuntu 16.04 LTS and later in the official snap package repository. So, you can install PHPStorm very easily on Ubuntu 16.04 LTS and later.  To install PHPStorm snap package on Ubuntu 16.04 LTS and later, run the following command:

$ sudo snap install phpstorm --classic

As you can see, the PHPStorm snap package is being downloaded.

At this point, PHPStorm snap package is installed.

You can also install PHPStorm manually on Ubuntu. But I recommend you use the snap package version as it has better integration with Ubuntu.

Initial Configuration of PHPStorm:

Now that PHPStorm is installed, let’s run it.

To run PHPStorm, go to the Application Menu and search for phpstorm. Then, click on the PHPStorm icon as marked in the screenshot below.

As you’re running PHPStorm for the first time, you will have to configure it. Here, select Do not import settings and click on OK.

Now, you will see the Jetbrains user agreement. If you want, you can read it.

Once you’re finished reading it, check the I confirm that I have read and accept the terms of this User Agreement checkbox and click on Continue.

Here, PHPStorm asks whether you would like to share usage statistics data with JetBrains to help them improve PHPStorm. You can click on Send Usage Statistics or Don’t send depending on your personal preference.

Now, PHPStorm will tell you to pick a theme. JetBrains IDEs have a dark theme called Darcula and a light theme. You can see how each of the themes looks here. Select the one you like.

If you don’t want to customize anything else, and leave the defaults for the rest of the settings, just click on Skip Remaining and Set Defaults.

If you want to customize PHPStorm more, click on Next: Featured plugins.

Now, you will see some common plugins. If you want, you can click on Install to install the ones you like from here. You can do it later as well.

Once you’re done, click on Start using PhpStorm.

Now, you will be asked to activate PHPStorm. PHPStorm is not free. You will have to buy a license from JetBrains in order to use PHPStorm. Once you have the license, you can activate PHPStorm from here.

If you want to try out PHPStorm before you buy it, you can. Select Evaluate for free and click on Evaluate. This should give you a 30-day trial.

As you can see, PHPStorm is starting. It’s beautiful already.

This is the dashboard of PHPStorm. From here, you can create new projects or import projects.

Creating a New Project with PHPStorm:

First, open PHPStorm and click on Create New Project.

Now, select the project type and then select the location of where the files of your new projects will be saved. Then, click on Create.

As you can see, a new project is created. Click on Close to close the Tip of the Day window.

Now, you can create new files in your project as follows. Let’s create a PHP File.

Now, type in a File name and make sure the File extension is correct. Then, click on OK.

As you can see, a new PHP file hello.php is created. Now, you can start typing in PHP code here.

As you can see, you get auto completion when you type in PHP code. It’s amazing.

Changing Fonts and Font Size:

If you don’t like the default font or the font size is too small for you, you can easily change it from the settings.

Go to File > Settings. Now, expand Editor.

Now click on Font. From the Font tab, you can change the font family, font size, line spacing etc. Once you’re done, click on OK.

As you can see, I changed the fonts to 20px Ubuntu Mono and it worked.

Managing Plugins on PHPStorm:

Plugins add new features or improve the PHPStorm IDE. PHPStorm has a rich set of plugins available for download and use.

To install plugins, go to File > Settings and then click on the Plugins section.

Here, you can search for plugins. Once you find the plugin you like, just click on Install to install the plugin.

Once you click on Install, you should see the following confirmation window. Just click on Accept.

The plugin should be installed. Now, click on Restart IDE for the changes to take effect.

Click on Restart.

As you can see, the plugin I installed is listed in the Installed tab.

To uninstall a plugin, just select the plugin and press <Delete> or right click on the plugin and select Uninstall.

You can also disable specific plugins if you want. Just select the plugin you want to disable and press <Space Bar>. If you want to enable a disabled plugin, just select it and press the <Space Bar> again. It will be enabled.

So, that’s how you install and use JetBrains PHPStorm on Ubuntu. Thanks for reading this article.

How to Install Spotify on Manjaro – by Sidratul Muntaha

Life itself is like music, right? There’s no human being without a passion for music. A gym session or a tour feels incomplete without music. Music gives our brain pleasure to a level that very few things can match. And what’s more lovable than free music on the go? I personally just can’t LOVE the awesome Spotify service enough! I can enjoy my favorite music from anywhere, on any device!

Spotify is also available as a native client on a number of devices – computers, smartphones and what not! On Linux, however, it’s a tougher call, even though Spotify is available on a wide range of Linux distros. What about Manjaro?

Manjaro is a super cool Linux distro based on Arch Linux. However, it reduces the strain of classic Arch a lot. In fact, it’s one of the finest user-friendly Arch-based distros, especially for new Linux users!

Let’s enjoy Spotify on Manjaro.

Installing Spotify

Officially, Spotify offers a “snap” package of the client. Snap packages are universal Linux apps that can run on any platform without any modification to the main program. To enjoy Spotify, we have to get the Spotify snap package.

However, for installing any snap package, you need the “snap” client (snapd) installed on your system. It isn’t available in the default software repositories of Manjaro Linux; instead, its build files are hosted on the AUR. Building and then installing an app from source is the natural process of the AUR, especially on Arch and Arch-based distros. Let’s get started.

Installing snap package

To get the source code of snapd from the AUR, we have to use Git. Start by installing Git –

sudo pacman -S git

Now, your system is ready to grab the source code from the AUR repository.

git clone https://aur.archlinux.org/snapd.git

Download complete? Good! Time to start the build process. Change the active directory –

cd snapd/

Start the build process (the -s flag pulls in the build dependencies and -i installs the built package) –

makepkg -si

Wait for the process to complete.

Once complete, you have to tell the system to enable the “snap” service.

sudo systemctl enable --now snapd.socket

A number of “snap” packages are marked as “classic”. To make sure you don’t have any problems with them, enable classic snap support by creating the following symlink –

sudo ln -s /var/lib/snapd/snap /snap

Verify that “snap” is successfully installed –

snap --version
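
The output should look something like this (the version numbers are illustrative and will differ on your system):

snap    2.36.1
snapd   2.36.1
series  16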

Installing Spotify snap

Running the following command will download and install the latest Spotify “snap” package from the Snapcraft store.

sudo snap install spotify

Wait for the process to complete.

Voila! Spotify is installed!

Using Spotify

Start Spotify from the menu –

You can also start Spotify from the terminal –

spotify

If you have an existing account, you can easily log in. Otherwise, go to Spotify and create an account.

Now, let’s have a look at the Spotify app’s settings.

The interface is pretty self-explanatory. If your system uses a proxy, you can specify the settings for the app. You can also set the proxy type (Socks4, Socks5, HTTP etc.) or choose “No Proxy”.

Enjoy your music!

Install NextCloud on Ubuntu – by Shahriar Shovon

NextCloud is free, self-hosted file-sharing software. It can be accessed from a web browser. NextCloud has apps for Android, iPhone and desktop operating systems (Windows, Mac and Linux). It is really user friendly and easy to use.

In this article, I will show you how to install NextCloud on Ubuntu. So, let’s get started.

Installing NextCloud On Ubuntu:

On Ubuntu 16.04 LTS and later, NextCloud is available as a snap package. So, it is very easy to install.

To install NextCloud snap package on Ubuntu 16.04 LTS and later, run the following command:

$ sudo snap install nextcloud

As you can see, NextCloud snap package is being installed.

NextCloud snap package is installed at this point.

Creating NextCloud Administrator User:

Now, you have to create an administrator user for managing NextCloud. To do that, you have to access NextCloud from a web browser.

First, find out the IP address of your NextCloud server with the following command:

$ ip a

As you can see, the IP address of my NextCloud server is 192.168.21.128. It will be different for you. Make sure you replace it with yours from now on.

Now, from any web browser, visit the IP address 192.168.21.128. Then, type in your administrator username and password and click on Finish setup.

As you can see, you’re logged in. As you’re using NextCloud for the first time, you are prompted to download the Next Cloud app for your desktop or smart phone. If you don’t wish to download the NextCloud app right now, just click on the x button at the top right corner.

This is the NextCloud dashboard. Now, you can manage your files from the web browser using NextCloud.

Using Dedicated Storage for NextCloud:

By default, NextCloud stores files in your root partition where the Ubuntu operating system is installed. Most of the time, this is not what you want. Using a dedicated hard drive or SSD is always better.

In this section, I will show you how to use a dedicated hard drive or SSD as a data drive for NextCloud. So, let’s get started.

Let’s say, you have a dedicated hard drive on your Ubuntu NextCloud server which is recognized as /dev/sdb. You should use the whole hard drive for NextCloud for simplicity.

First, open the hard drive /dev/sdb with fdisk as follows:

$ sudo fdisk /dev/sdb

/dev/sdb should be opened with fdisk partitioning utility. Now, press o and then press <Enter> to create a new partition table.

NOTE: This will remove all your partitions along with data from the hard drive.

As you can see, a new partition table is created. Now, press n and then press <Enter> to create a new partition.

Now, press <Enter>.

Now, press <Enter> again.

Press <Enter>.

Press <Enter>.

A new partition should be created. Now, press w and press <Enter>.

The changes should be saved.

Now, format the partition /dev/sdb1 with the following command:

$ sudo mkfs.ext4 /dev/sdb1

The partition should be formatted.

Now, run the following command to mount /dev/sdb1 partition to /mnt mount point:

$ sudo mount /dev/sdb1 /mnt

Now, copy everything (including the dot/hidden files) from the /var/snap/nextcloud/common/nextcloud/data directory to /mnt directory with the following command:

$ sudo cp -rT /var/snap/nextcloud/common/nextcloud/data /mnt

Now, unmount the /dev/sdb1 partition from the /mnt mount point with the following command:

$ sudo umount /dev/sdb1

Now, you will have to add an entry for the /dev/sdb1 in your /etc/fstab file, so it will be mounted automatically on the /var/snap/nextcloud/common/nextcloud/data mount point on system boot.

First, run the following command to find out the UUID of your /dev/sdb1 partition:

$ sudo blkid /dev/sdb1

As you can see, the UUID in my case is fa69f48a-1309-46f0-9790-99978e4ad863

It will be different for you. So, replace it with yours from now on.

Now, open the /etc/fstab file with the following command:

$ sudo nano /etc/fstab

Now, add the line as marked in the screenshot below at the end of the /etc/fstab file. Once you’re done, press <Ctrl> + x, then press y followed by <Enter> to save the file.
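
Since the screenshot is not reproduced here, the entry looks roughly like this (built from the example UUID above – substitute your own):

UUID=fa69f48a-1309-46f0-9790-99978e4ad863 /var/snap/nextcloud/common/nextcloud/data ext4 defaults 0 2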

Now, reboot your NextCloud server with the following command:

$ sudo reboot

Once your computer boots, run the following command to check whether the /dev/sdb1 partition is mounted to the correct location.

$ sudo df -h | grep nextcloud

As you can see, /dev/sdb1 is mounted in the correct location. Only 70MB of it is used.

As you can see I uploaded some files to NextCloud.

As you can see, the data is saved on the hard drive that I just mounted. Now, 826 MB is used. It was 70MB before I uploaded these new files. So, it worked.

That’s how you install NextCloud on Ubuntu. Thanks for reading this article.

How to Install JetBrains PyCharm on Ubuntu – by Shahriar Shovon

PyCharm is an awesome Python IDE from JetBrains. It has a lot of awesome features and a beautiful UI (User Interface). It is really easy to use.

In this article, I will show you how to install PyCharm on Ubuntu. The procedure shown here will work on Ubuntu 16.04 LTS and later. I will be using Ubuntu 18.04 LTS for the demonstration in this article. So, let’s get started.

Getting Ubuntu Ready for PyCharm:

Before you install PyCharm on Ubuntu, you should install some prerequisite packages. Otherwise, PyCharm won’t work correctly.

You have to install the Python interpreters that you want to use with PyCharm to run your project. You also have to install PIP for the Python interpreters that you wish to use.

If you want to use Python 2.x with PyCharm, then you can install all the required packages with the following command:

$ sudo apt install python2.7 python-pip

Now, press y and then press <Enter>.

All the required packages for working with Python 2.x in PyCharm should be installed.

If you want to use Python 3.x with PyCharm, then install all the required packages with the following command:

$ sudo apt install python3-pip python3-distutils

Now, press y and then press <Enter> to continue.

All the required packages for working with Python 3.x in PyCharm should be installed.

Installing PyCharm:

PyCharm has two versions: the Community version and the Professional version. The Community version is free to download and use; the Professional version is not. You have to purchase a license to use the Professional version. The Community version is mostly fine, but it lacks some of the advanced features of the Professional version. So, if you need those features, buy a license and install the Professional version.

On Ubuntu 16.04 LTS and later, PyCharm Community and Professional both versions are available as a snap package in the official snap package repository.

To install PyCharm Community version snap package on Ubuntu 16.04 LTS and later, run the following command:

$ sudo snap install pycharm-community --classic

To install PyCharm Professional version snap package on Ubuntu 16.04 LTS and later, run the following command:

$ sudo snap install pycharm-professional --classic

In this article, I will go with the PyCharm Community version.

As you can see, PyCharm Community version snap package is being downloaded.

PyCharm Community version is installed.

Initial Configuration of PyCharm:

Now that PyCharm is installed, you can start it from the Application Menu of Ubuntu. Just search for pycharm in the Application Menu and you should see PyCharm icon as marked in the screenshot below. Just click on it.

As you’re running PyCharm for the first time, you will have to do some initial configuration. Once you see the following window, click on Do not import settings and click on OK.

Now, you will see the JetBrains license agreement window.

Now, click on I confirm that I have read and accept the terms of this User Agreement and click on Continue to accept the license agreement.

Now, you have to select a UI theme for PyCharm. You can select either the dark theme – Darcula or the Light theme.

Once you select a theme, you can click on Skip Remaining and Set Defaults to leave everything else the default and start PyCharm.

Otherwise, click on Next: Featured plugins.

Once you click on Next: Featured plugins, PyCharm will suggest you some common plugins that you may want to install. If you want to install any plugins from here, click on Install.

Now, click on Start using PyCharm.

As you can see, PyCharm is starting.

PyCharm has started. This is the dashboard of PyCharm.

Creating a Project in PyCharm:

In this section, I will show you how to create a Python project in PyCharm. First, open PyCharm and click on Create New Project.

Now, select a location for your new project. This is where all the files of this project will be saved.

If you want, you can also change the default Python version of your project. To do that, click on the Project Interpreter section to expand it.

Here, you can see that in the Base interpreter section, Python 3.6 is selected by default. It is the latest version of Python 3 installed on my Ubuntu 18.04 LTS machine. To change the Python version, click on the Base interpreter drop down menu.

As you can see, all the Python versions installed on my Ubuntu 18.04 LTS machine are listed here. You can pick the one you need from the list. If you want any version of Python which is not listed here, just install it on your computer, and PyCharm should be able to detect it.

Once you’re happy with all the settings, click on Create.

The project should be created.

Now, to create a new Python script, right click on the project and go to New > Python File as marked in the screenshot below.

Now, type in a file name for your Python script and click on OK.

As you can see, test.py file is created and opened in the editor section of PyCharm.

I wrote a very basic Python script as you can see.
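
The screenshot is not reproduced here, but a basic script along these lines (illustrative only) matches the input/output behavior described below:

name = input("What is your name? ")
age = input("How old are you? ")
print("Hello %s, you are %s years old." % (name, age))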

Now, to run the Python script currently opened in the editor, press <Alt> + <Shift> + <F10> or go to Run > Run… as marked in the screenshot below.

As you can see, the Python script which is currently opened in the editor is shown here. Just press <Enter>.

As you can see, the script is running.

Once I type in all the inputs, I get the desired output as well.

So, that’s how you install and use PyCharm on Ubuntu. Thank you for reading this article.

How to Install KDE on Manjaro Linux – by Sidratul Muntaha

I simply can’t LOVE desktop environments enough! Desktop environments are what make Linux systems lovely and attractive! A desktop environment is basically an implementation of a cool-looking GUI rather than the classic CLI. General to moderate users are more accustomed to GUI computing, whereas experts prefer the CLI for more power over the system.

Speaking of desktop environments, KDE Plasma is one of my favorites. It has the shiniest interface along with a cool collection of handy tools of its own. However, because of the polished and shiny interface, KDE Plasma is a bit more resource hungry than the others. That said, most computers these days come with a pretty decent amount of RAM, so the additional RAM consumption shouldn’t affect your performance even the slightest bit.

Today, we’ll be enjoying KDE Plasma on another of my favorite distros – Manjaro Linux! Arch Linux is always feared as one of the difficult distros. Manjaro brings the experience of Arch Linux in the simplest possible manner to entry-level and moderate Linux users. In fact, Manjaro dramatically simplifies most of the Arch hurdles.

Getting KDE Plasma

There are 2 ways you can get KDE Plasma on Manjaro Linux – installing the KDE Plasma edition of Manjaro Linux, or installing KDE separately on a currently installed Manjaro system.

Method 1

Get the KDE Plasma version of Manjaro.

Then, make a bootable USB flash drive using Linux Live USB Creator or Rufus. Using the tools, all you have to do is select the ISO and the target USB flash drive. The tool will do the rest all by itself.

Boot from the device and run the installation of Manjaro Linux (KDE Plasma edition). Note that the tutorial is a demo using VirtualBox, but the real-life installation steps will be EXACTLY the same, so no need to worry.

Method 2

If you installed any other version of Manjaro Linux, then you have to follow these steps to enjoy the smoothness of KDE Plasma.

At first, install the core of KDE Plasma –

sudo pacman -S plasma kio-extras

For the complete experience of KDE Plasma, let’s install all the KDE applications. Note that this installation will consume a huge amount of disk space.

sudo pacman -S kde-applications

If you’re not interested in the entire package of KDE apps (literally a HUGE collection of apps), you can install a smaller one (containing only the necessary ones).

sudo pacman -S kdebase

The default display manager for KDE is SDDM. If you’re a hardcore fan of KDE, you may not have the full enjoyment of KDE without SDDM. Configure SDDM as the display manager as follows.

sudo systemctl enable sddm.service --force

After this step, restart your system.

reboot

Don’t forget to install Manjaro configurations and theming for KDE Plasma. I strongly recommend getting them as they include a number of tweaks for the newly installed KDE Plasma specifically for the Manjaro Linux environment.

sudo pacman -S manjaro-kde-settings sddm-breath-theme \
manjaro-settings-manager-knotifier manjaro-settings-manager-kcm

Now, it’s time to update the current user –

/usr/bin/cp -rf /etc/skel/. ~

After everything is configured properly, restart your system.

reboot

Enjoying KDE Plasma

Voila! KDE Plasma is now the default desktop environment of your Manjaro Linux!

How to Use fdisk in Linux – by Shahriar Shovon

fdisk is a tool for partitioning hard drives (HDDs), solid state drives (SSDs), USB thumb drives etc. The best thing about fdisk is that it is installed by default on almost every Linux distribution these days. fdisk is also very easy to use.

In this article, I will show you how to use fdisk to partition storage devices such as HDDs, SSDs, and USB thumb drives in Linux. So, let’s get started.

Finding the Correct Device Identifier:

In Linux, block devices or hard drives have unique identifiers such as sda, sdb, sdc etc. Before you start partitioning your hard drive, you must make sure that you’re partitioning the right one. Otherwise, you may lose data in the process.

You can list all the storage/block devices on your Linux computer with the following command:

$ sudo lsblk
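
The output looks something like this (illustrative – your device names and sizes will differ):

NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   20G  0 disk
└─sda1   8:1    0   20G  0 part /
sdb      8:16   1  3.8G  0 disk
└─sdb1   8:17   1  3.8G  0 part /run/media/usb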

As you can see, I have a hard drive (sda) and a USB thumb drive (sdb) attached to my computer. The lsblk command also lists the partitions. The raw storage device has the TYPE disk. So, make sure you don’t use a partition identifier instead of raw disk identifier.

As you can see, the hard drive (sda) is 20GB in size and the USB thumb drive (sdb) is 3.8GB in size.

You can access the device identifier, let’s say sdb, as /dev/sdb.

In the next section, I will show you how to open it with fdisk.

Opening Storage Devices with fdisk:

To open a storage/block device with fdisk, you first have to make sure that none of its partitions are mounted.

Let’s say, you want to open your USB thumb drive /dev/sdb with fdisk. But, it has a single partition /dev/sdb1, which is mounted somewhere on your computer.

To unmount /dev/sdb1, run the following command:

$ sudo umount /dev/sdb1

Now, open /dev/sdb with fdisk using the following command:

$ sudo fdisk /dev/sdb

As you can see, the /dev/sdb storage/block device is opened with fdisk.

In the next sections, I will show you how to use the fdisk command line interface to do common partitioning tasks.

Listing Existing Partitions with fdisk:

You can press p and then press <Enter> to list all the existing partitions of the storage/block device you opened with fdisk.

As you can see in the screenshot below, I have a single partition.

Creating a New Partition Table with fdisk:

A partition table holds information about the partition of your hard drive, SSD or USB thumb drive. DOS and GPT are the most common types of partition table.

DOS is an old partition table scheme. It is good for small size storage devices such as a USB thumb drive. In a DOS partition table, you can’t create more than 4 primary partitions.

GPT is the new partition table scheme. In GPT, you can have more than 4 primary partitions. It is good for big storage devices.

With fdisk, you can create both DOS and GPT partition table.

To create a DOS partition table, press o and then press <Enter>.

To create a GPT partition table, press g and then press <Enter>.

Creating and Removing Partitions with fdisk:

To create a new partition with fdisk, press n and then press <Enter>.

Now, enter the partition number and press <Enter>. Usually, the default partition number is okay. So, you can just leave it as it is unless you want to do something very specific.

Now, enter the sector number on your hard drive from which you want the partition to start. Usually, the default value is alright. So, just press <Enter>.

The last sector number or size is the most important here. Let’s say you want to create a partition of 100 MB; you just type in +100M here. For 1GB, type in +1G. The same way, for 100KB, +100K. For 2TB, +2T. For 2PB, +2P. Very simple. Don’t type in fractions here, only whole numbers. Otherwise, you will get an error.

As you can see, a 100MB partition is created.

If a partition containing a filesystem previously occupied the same sectors, you may see something like this. Just press y and then press <Enter> to remove the old partition signature.

As you can see, fdisk tells you that when you write the changes, the signature will be removed.

I am going to create another partition of 1GB in size.

I am going to create another 512MB partition just to show you how to remove partitions with fdisk.

Now, if you list the partitions, you should be able to see the partitions that you created. As you can see, the 100MB, 1GB and 512MB partitions that I just created are listed here.

Now, let’s say you want to delete the third partition /dev/sdb3 or the 512MB partition. To do that, press d and then press <Enter>. Now, type in the partition number and press <Enter>. In my case, it is the partition number 3.

As you can see, partition number 3 is deleted.

As you can see, the 512MB partition or the 3rd partition is no more.

To permanently save the changes to the disk, press w and then press <Enter>. The partition table should be saved.

Formatting and Mounting Partitions:

Now that you’ve created some partitions using fdisk, you can format them and start using them. To format the second partition, let’s say /dev/sdb2, to the ext4 filesystem, run the following command:

$ sudo mkfs.ext4 -L MySmallPartition /dev/sdb2

NOTE: Here, MySmallPartition is the label for the /dev/sdb2 partition. You can put anything meaningful here that describes what this partition is for.

The partition is formatted to ext4 filesystem.

Now that the partition /dev/sdb2 is formatted to ext4, you can use the mount command to mount it on your computer.  To mount the partition /dev/sdb2 to /mnt, run the following command:

$ sudo mount /dev/sdb2 /mnt

As you can see, the partition /dev/sdb2 is mounted successfully to /mnt mount point.
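
If you want to double-check, the df command will show the new mount (the exact output varies by system):

$ df -h /mnt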

So, that’s how you use fdisk to partition disks in Linux. Thanks for reading this article.

]]>
Sidratul Muntaha <![CDATA[How to Install GNOME on Manjaro Linux]]> https://linuxhint.com/?p=33253 2018-12-05T15:05:39Z 2018-12-05T06:00:55Z Manjaro Linux is one of the finest Linux distros out there that brings the experience of Arch in a very simple manner. Arch Linux is a difficult one for sure, at least for new users. Now, Manjaro takes it a step further by simplifying the entire system and making Arch more user-friendly than ever. If you’re a new or moderate Linux user, feel free to try out Arch today with Manjaro Linux! Arch is definitely more respected than the other Linux systems. That’s why I love Manjaro; it allows me to brag about running Arch Linux!

Now, Manjaro Linux comes in tons of different desktop environments such as Xfce, KDE Plasma, GNOME, MATE, Budgie, Cinnamon, LXDE, and a lot more. You may have already tried some of them by now.

GNOME is a mid-weight and powerful desktop environment that comes with a ton of its own apps, known as “GNOME Apps”. Is GNOME your favorite desktop environment? If so, then let’s enjoy GNOME on our favorite Manjaro!

Getting GNOME

There are mainly 2 ways of getting GNOME as the desktop environment on Manjaro – installing Manjaro (GNOME edition) or installing GNOME separately. Don’t worry; both will be covered.

Installing Manjaro (GNOME edition)

Get the latest ISO of Manjaro (GNOME edition).

Then, you have to install Manjaro Linux using the downloaded ISO. You can test the installation process on VirtualBox or install it directly onto your system. In either case, the steps are the same.

Installing GNOME separately

If you have the luxury, following the previous method is STRONGLY recommended. Mixing more than one desktop environment in a single system may lead to stability issues and other display glitches.

If your current system is using a desktop environment other than GNOME, then follow these steps.

At first, install the core of GNOME –

sudo pacman -S gnome

This step is optional but recommended. It will install the additional GNOME features (themes, games etc.).

sudo pacman -S gnome-extra

In the case of GNOME, the display manager is GDM. It’s also pretty beautiful and charming. GDM was already installed along with GNOME. Enable GDM by running the following command –

sudo systemctl enable gdm.service --force

Now, the Manjaro part. Manjaro Linux officially offers various tweaks and theming for GNOME. Installing the tweaks will enable better compatibility and performance on your system.

sudo pacman -S manjaro-gnome-assets manjaro-gdm-theme manjaro-settings-manager

Finally, it’s time to update the current user.

/usr/bin/cp -rf /etc/skel/. ~

Enjoying GNOME

After every single configuration is complete, restart your system.

reboot

Voila! Now, GNOME is the default desktop environment of your Manjaro!

]]> Ranvir Singh <![CDATA[REST API vs GraphQL]]> https://linuxhint.com/?p=33245 2018-12-05T14:58:33Z 2018-12-05T05:50:38Z

TL;DR version

In one of the previous posts, we discussed, in brief, what it’s like to use the GitHub API v3. This version is designed to be interfaced with like any other REST API. There are endpoints for every resource that you need to access and/or modify. There are endpoints for each user, each organization, each repository and so on. For example, each user has his/her API endpoint at https://api.github.com/users/<username>. You can try substituting your username for <username> and entering the URL in a browser to see what the API responds with.

GitHub API v4, on the other hand, uses GraphQL, where the QL stands for Query Language. GraphQL is a new way of designing your APIs. Just as many web services are offered as REST APIs, not just the ones offered by GitHub, there are many web services that allow you to interface with them via GraphQL.

The starkest difference you will notice between GraphQL and a REST API is that GraphQL can work off of a single API endpoint. In the case of GitHub API v4, this endpoint is https://api.github.com/graphql and that’s that. You don’t have to worry about appending long strings at the end of a root URI or supplying a query string parameter for extra information. You simply send a JSON-like argument to this API, asking only for the things you need, and you get a JSON payload back with exactly the information that you requested. You don’t have to deal with filtering out unwanted information, or suffer the performance overhead of large responses.

What is REST API?

Well, REST stands for Representational State Transfer and API stands for Application Programming Interface. A REST API, or a ‘RESTful’ API, has become the core design philosophy behind most modern client-server applications. The idea emerges from the need to segregate the various components of an application, like the client-side UI and the server-side logic.

The session between a client and a server is typically stateless. Once the webpage and related scripts are loaded, you can continue to interact with them, and when you perform an action (like pressing a send button), a request is sent along with all the contextual information that the web server needs to process it (like username, tokens, etc). The application transitions from one state to another, but without a constant need for a connection between the client and the server.

REST defines a set of constraints between the client and the server, and communication can only happen under those constraints. For example, REST over HTTP usually uses the CRUD model, which stands for Create, Read, Update and Delete, and HTTP methods like POST, GET, PUT and DELETE help you perform those operations and those operations alone. Old intrusion techniques like SQL injection are not a possibility with something like a tightly written REST API (although REST is not a security panacea).
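
As an illustration, against a hypothetical /users resource, the CRUD-to-HTTP mapping typically looks like this:

POST   /users      → create a new user
GET    /users/42   → read user 42
PUT    /users/42   → update user 42
DELETE /users/42   → delete user 42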

It also helps UI developers quite a lot! Since all you receive from an HTTP request is typically a stream of text (formatted as JSON, sometimes), you can easily implement a web page for browsers or an app (in your preferred language) without worrying about the server-side architecture. You read the API documentation for services like Reddit, Twitter or Facebook, and you can write extensions for them, or third-party clients, in the language of your choice, since you are guaranteed that the behaviour of the API will stay the same.

Conversely, the server doesn’t care whether the front-end is written in Go, Ruby or Python. Whether it is a browser, app or a CLI. It just ‘sees’ the request and responds appropriately.

What is GraphQL?

As with anything in the world of computers, REST APIs got larger and more complex and at the same time people wanted to implement and consume them in a faster and simpler manner. This is why Facebook came up with the idea of GraphQL, and later open sourced it. The QL in GraphQL stands for Query Language.

GraphQL allows clients to make very specific API requests, instead of making rigid API calls with predefined parameters and responses. It is much simpler, because the server then responds with exactly the data that you asked it for, with nothing in excess.

Take a look at this REST request and its corresponding response. This request is meant to view just a user’s public bio.

Request: GET https://api.github.com/users/<username>
Response:
{
"login": "octocat",
"id": 583231,
"node_id": "MDQ6VXNlcjU4MzIzMQ==",
"avatar_url": "https://avatars3.githubusercontent.com/u/583231?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/octocat",
"html_url": "https://github.com/octocat",
"followers_url": "https://api.github.com/users/octocat/followers",
"following_url": "https://api.github.com/users/octocat/following{/other_user}",
"gists_url": "https://api.github.com/users/octocat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/octocat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/octocat/subscriptions",
"organizations_url": "https://api.github.com/users/octocat/orgs",
"repos_url": "https://api.github.com/users/octocat/repos",
"events_url": "https://api.github.com/users/octocat/events{/privacy}",
"received_events_url": "https://api.github.com/users/octocat/received_events",
"type": "User",
"site_admin": false,
"name": "The Octocat",
"company": "GitHub",
"blog": "http://www.github.com/blog",
"location": "San Francisco",
"email": null,
"hireable": null,
"bio": null,
"public_repos": 8,
"public_gists": 8,
"followers": 2455,
"following": 9,
"created_at": "2011-01-25T18:44:36Z",
"updated_at": "2018-11-22T16:00:23Z"
}

I have used the username octocat, but you can replace it with the username of your choice and use cURL to make this request in the command line, or Postman if you require a GUI. While the request was simple, think about all the extra information you are getting from this response. If you were to process data from a million such users and filter out all the unnecessary data, that would not be efficient. You would be wasting bandwidth, memory and compute in getting, storing and filtering away the millions of extra key-value pairs that you will never use.
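
For example, the same request with cURL is a one-liner:

$ curl https://api.github.com/users/octocat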

Also, the structure of the response is not something you know beforehand. This JSON response is equivalent to a dictionary object in Python, or an object in JavaScript. Other endpoints will respond with JSON objects that may be composed of nested objects, nested lists within the object, or any arbitrary combination of JSON data types, and you will need to refer to the documentation to get the specifics. When you are processing the request, you need to be cognizant of this format, which changes from endpoint to endpoint.

GraphQL doesn’t rely on HTTP verbs like POST, GET, PUT and DELETE to perform CRUD operations on the server. Instead, there is only one HTTP request type and one endpoint for all CRUD-related operations. In the case of GitHub, this involves requests of type POST to the single endpoint https://api.github.com/graphql

Being a POST request, it can carry a JSON-like body of text, which will be our GraphQL operations. These operations can be of type query, if all they need to do is read some information, or of type mutation, in case data needs to be modified.

To make GraphQL API calls you can use GitHub’s GraphQL explorer. Take a look at this GraphQL query to fetch the same kind of data (a user’s public bio) as we did above using REST.

Request: POST https://api.github.com/graphql
query{
user (login: "ranvo") {
bio
}
}
 
Response:
 
{
"data": {
"user": {
"bio": "Tech and science enthusiasts. I am into all sorts of unrelated stuff from
servers to quantum physics.\r\nOccasionally, I write blog posts on the above interests."

}
}
}

As you can see, the response consists of only what you asked for: the user’s bio. You select a specific user by passing the username (in my case, it’s ranvo) and then you ask for the value of an attribute of that user, in this case bio. The API server looks up the exact information and responds with that and nothing else.
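
If you prefer the command line over the explorer, the same query can also be sent with cURL. Note that, unlike the REST example above, GitHub’s GraphQL endpoint requires authentication, so <your-token> below is a placeholder for a personal access token:

$ curl -H "Authorization: bearer <your-token>" -X POST -d '{ "query": "query { user(login: \"ranvo\") { bio } }" }' https://api.github.com/graphql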

On the flip side, GraphQL also lets you make a single request and extract information that would have taken multiple requests with a traditional REST API. Recall that all GraphQL requests are made to only one API endpoint. Take, for example, the use case where you need to ask the GitHub API server for a user’s bio and one of their SSH keys. It would require two GET requests.

REST Requests: GET https://api.github.com/users/<username>
GET https://api.github.com/users/<username>/keys
 
GraphQL request: POST https://api.github.com/graphql/
 
query{
user (login: "ranvo") {
bio
publicKeys (last:1){
edges {
node {
key
}
}
}
}
}
 
GraphQL Response:
 
{
"data": {
"user": {
"bio": "Tech and science enthusiasts. I am into all sorts of unrelated stuff from
servers to quantum physics.\r\nOccasionally, I write blog posts on the above interests."
,
"publicKeys": {
"edges": [
{
"node": {
"key": "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH31mVjRYdzeh8oD8jvaFpRuIgL65SwILyKpeGBUNGOT"
}
}
]
}
}
}
}

There are nested objects, but if you look at your request, they pretty much match it, so you know and, in some sense, shape the structure of the response you get.

Conclusion

GraphQL does come with its own learning curve, which is either very steep or not steep at all, depending on who you ask. From an objective standpoint, I can lay out the following facts for you. It is flexible, as you have seen above, and it is introspective; that is to say, you can query the GraphQL API about the API itself. Even if you are not going to build your API server using it, chances are you will have to interface with an API that allows only GraphQL.

You can learn a bit more about its technicalities here, and if you want to make GraphQL API calls from your local workstation, then use GraphiQL. ]]> Shahriar Shovon <![CDATA[Configuring Nano Text Editor with nanorc]]> https://linuxhint.com/?p=33226 2018-12-05T14:57:14Z 2018-12-05T05:38:36Z

Nano is a very lightweight command line text editor. Many Linux system administrators use Nano to do basic editing of Linux configuration files, as it is easier to work with than Vim. Vim has a bit of a learning curve that Nano doesn’t have. In this article, I will show you how to configure the Nano text editor. So, let’s get started.

Configuration File of Nano Text Editor:

You can configure Nano text editor system wide using the /etc/nanorc file.

You can also do user specific configuration of Nano text editor. In that case, you will have to create a .nanorc file in the HOME directory of the user you want to configure Nano for.

I will talk about many of the configuration options Nano has and how they work. You can use the ~/.nanorc file or the system wide /etc/nanorc file. It will work for both of them.

Using ~/.nanorc File for User Specific Configuration of Nano:

The ~/.nanorc file does not exist in your login user’s HOME directory by default. But, you can create one very easily with the following command:

$ touch ~/.nanorc

Now, you can edit the ~/.nanorc file as follows:

$ nano ~/.nanorc

~/.nanorc file should be opened with Nano text editor. Now, type in your required configuration options here.

Once you’re done, you have to save the file. To save the file, press <Ctrl> + x. Then, press y.

Now, press <Enter>. The changes to the ~/.nanorc configuration file should be saved.

Displaying Line Numbers in Nano:

Nano does not show line numbers by default. I will show you how to display line numbers using the ~/.nanorc file and the /etc/nanorc file in this section, so you can see how each of them works. From the next section on, I will use only the ~/.nanorc file, for simplicity.

Using the ~/.nanorc File:

To show line numbers, type in set linenumbers in ~/.nanorc and save it.

As you can see, the line numbers are displayed.

Using /etc/nanorc File:

To display line numbers on nano system wide, open /etc/nanorc with the following command:

$ sudo nano /etc/nanorc

The /etc/nanorc file should be opened. It should look as follows. As you can see, all the nano options are already here. Most of them are disabled (commented out using # at the beginning) and some of them are enabled.

To display line numbers, find the line as marked in the screenshot below.

Now, uncomment the set linenumbers line and save the file.

As you can see, the line numbers are now displayed.

Enabling Auto Indentation in Nano:

Auto indentation is not enabled by default in the Nano text editor. But, you can use the set autoindent option in the ~/.nanorc or /etc/nanorc file to enable auto indentation in the Nano text editor.

Enabling Mouse Navigation in Nano:

If you’re using Nano text editor in a graphical desktop environment, then you can use your mouse to navigate. To enable this feature, use the set mouse option in ~/.nanorc or /etc/nanorc file.

Enable Smooth Scrolling in Nano:

You can use the set smooth option in ~/.nanorc or /etc/nanorc file to enable smooth scrolling.

Enable Word Wrapping in Nano:

Word wrapping is a very important feature of any text editor. Luckily, Nano has the ability to do word wrapping. It is not enabled by default. To enable word wrapping in Nano text editor, use the set softwrap option in ~/.nanorc or /etc/nanorc file.

Setting Tab Size in Nano:

On Nano text editor, the default tab size is 8 characters wide. That’s too much for most people. I prefer a tab size of 4 characters wide. Anything more than that makes me very uncomfortable.

To define the tab size (let’s say 4 characters wide) in Nano text editor, use the following option in your ~/.nanorc or /etc/nanorc file.

set tabsize 4

If you want to use a tab size of 2, then use the following option in your ~/.nanorc or /etc/nanorc file.

set tabsize 2

Automatically Converting Tabs to Spaces in Nano:

Tab width can vary from system to system and from editor to editor. So, if you use tabs in your program source code, it may look very ugly if you open it with a different text editor that uses a different tab width. If you replace tabs with a specific number of spaces, then you won’t have to face this problem again.

Luckily, Nano can automatically convert tabs to spaces. It is not enabled by default. But you can enable it with the set tabstospaces option in your ~/.nanorc or /etc/nanorc file.

Changing Title Bar Color in Nano:

You can change the title bar color in Nano text editor using the following option in your ~/.nanorc or /etc/nanorc file.

set titlecolor foregroundColorCode,backgroundColorCode

Here, the supported foregroundColorCode and the backgroundColorCode are:

white, black, blue, green, red, cyan, yellow, magenta

For example, let’s say you want to set the title bar background color to yellow and the foreground/text color to red; the option to put in the ~/.nanorc or /etc/nanorc file should be:

set titlecolor red,yellow

Changing Other Colors in Nano:

You can change colors in other parts of your Nano text editor. Other than titlecolor, there are statuscolor, keycolor, functioncolor, numbercolor options in Nano. These options are used the same way as the titlecolor option shown in the earlier section of this article.

You can see what option changes colors of which part of Nano text editor below:
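
As a recap, a ~/.nanorc combining the options covered in this article might look like this:

set linenumbers
set autoindent
set mouse
set smooth
set softwrap
set tabsize 4
set tabstospaces
set titlecolor red,yellow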

Getting Help with nanorc Options:

There are many more options for configuring Nano text editor. It is out of the scope of this article to cover each and every one of them. I covered the basics. If you need something that is not available here, feel free to take a look at the manpage of nanorc.

You can read the manpage of nanorc with the following command:

$ man nanorc

The manpage of nanorc.

So, that’s how you configure Nano text editor with nanorc. Thanks for reading this article.

]]>
Ranvir Singh <![CDATA[Interfacing with GitHub API using Python 3]]> https://linuxhint.com/?p=33146 2018-12-04T16:11:47Z 2018-12-04T14:08:25Z GitHub as a web application is a huge and complex entity. Think about all the repositories, users, branches, commits, comments, SSH keys and third party apps that are a part of it. Moreover, there are multiple ways of communicating with it. There are desktop apps for GitHub, extensions for Visual Studio Code and Atom Editor, git cli, Android and iOS apps to name a few.

People at GitHub, and third party developers alike, can’t possibly manage all this complexity without a common interface. This common interface is what we call the GitHub API. Every GitHub utility like a cli, web UI, etc uses this one common interface to manage resources (resources being entities like repositories, ssh keys, etc).

In this tutorial, we will learn a few basics of how one interfaces with a web API, using GitHub API v3 and Python 3. The latest v4 of the GitHub API requires you to learn about GraphQL, which results in a steeper learning curve. So I will stick to just version three, which is still active and pretty popular.

How to talk to a web API

Web APIs are what enable you to use all the services offered by a web app, like GitHub, programmatically, using a language of your choice. For example, we are going to use Python for our use case here. Technically, you can do everything you do on GitHub using the API, but we will restrict ourselves to only reading the publicly accessible information.

Your Python program will be talking to an API just the same way as your browser talks to a website. That is to say, mostly via HTTPS requests. These requests will contain different ‘parts’, starting from the method of the request [GET, POST, PUT, DELETE], the URL itself, a query string, an HTTP header and a body or a payload. Most of these are optional. We will however need to provide a request method and the URL to which we are making the request.

What these are and how they are represented in an HTTPS request is something we will see gradually, as we start writing Python scripts to interact with GitHub.

An Example

Adding SSH keys to a newly created server is always a clumsy process. Let’s write a Python script that will retrieve your public SSH keys from GitHub and add them to the authorized_keys file on any Linux or Unix server where you run this script. If you don’t know how to generate or use SSH keys, here is an excellent article on how to do exactly that. I will assume that you have created and added your own public SSH keys to your GitHub account.

A very simple and naive Python implementation to achieve the task we described above is as shown below:

import requests
import os
 
# Getting user input
unix_user = input("Enter your Unix username: ")
github_user = input("Enter your GitHub username: ")
 
# Making sure .ssh directory exists and opening authorized_keys file
ssh_dir = '/home/'+unix_user+'/.ssh/'
if not os.path.exists(ssh_dir):
    os.makedirs(ssh_dir)
 
authorized_keys_file = open(ssh_dir+'authorized_keys','a')
 
# Sending a request to the GitHub API and storing the response in a variable named 'response'
api_root = "https://api.github.com"
request_header = {'Accept':'application/vnd.github.v3+json'}
response = requests.get(api_root+'/users/'+github_user+'/keys', headers = request_header)
 
## Processing the response and appending keys to authorized_keys file
for i in response.json():
    authorized_keys_file.write(i['key']+'\n')

Let’s ignore the Python file handling and miscellaneous details and look strictly at the request and response. First, we imported the requests module (import requests); this library allows us to make API calls very easily. This library is also one of the best examples of an open source project done right. Here’s the official site in case you want to have a closer look at the docs.

Next we set a variable api_root.

api_root = "https://api.github.com"

This is the common substring in all of the URLs to which we will be making API calls. So instead of typing “https://api.github.com” every time we need to access https://api.github.com/users or https://api.github.com/users/<username>, we just write api_root+'/users/' or api_root+'/users/<username>', as shown in the code snippet.

Next, we set the header in our HTTPS request, indicating that responses are meant for version 3 API and should be JSON formatted. GitHub would respect this header information.

1.  GET Request

So now that we have our URL and (an optional) header information stored in different variables, it’s time to make the request.

response = requests.get(api_root+'/users/'+github_user+'/keys', headers = request_header)

The request is of type ‘get’ because we are reading publicly available information from GitHub. If you were writing something under your GitHub user account, you would use POST. Similarly, other methods are meant for other functions; for example, DELETE is for deleting resources like repositories.

2.  API Endpoint

The API endpoint that we are reaching out for is:

https://api.github.com/users/<username>/keys

Each GitHub resource has its own API endpoint. Your requests for GET, PUT, DELETE, etc are then made against the endpoint you supplied. Depending on the level of access you have, GitHub will then either allow you to go through with that request or deny it.

Most organizations and users on GitHub make a huge amount of information readable and public. For example, my GitHub user account has a couple of public repositories and public SSH keys that anyone can read (even without a GitHub user account). If you want more fine-grained control of your personal account, you can generate a “Personal Access Token” to read and write privileged information stored in your personal GitHub account. If you are writing a third-party application, meant to be used by users other than you, then an OAuth token of the said user is what your application would require.

But as you can see, a lot of useful information can be accessed without creating any token.

3.  Response

The response is returned from the GitHub API server and is stored in the variable named response. The entire response could be read in several ways, as documented here. We explicitly asked for JSON-type content from GitHub, so we will process the response as though it is JSON. To do this, we call the json() method from the requests module, which will decode it into Python-native objects like dictionaries and lists.

You can see the keys being appended to the authorized_keys file in this for loop:

for i in response.json():
    authorized_keys_file.write(i['key']+'\n')

If you print the response.json() object, you will notice that it is a Python list with Python dictionaries as members. Each dictionary has a key named ‘key’ with your public SSH key as the value of that key. So you can append these values one by one to your authorized_keys file. And now you can easily SSH into your server from any computer that has any one of the private SSH keys corresponding to one of the public keys we just appended.
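
If you save the script as, say, fetch_keys.py (the filename and usernames here are just examples) and run it on the server, a session looks like this:

$ python3 fetch_keys.py
Enter your Unix username: john
Enter your GitHub username: octocat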

Exploring Further

A lot of work with APIs involves careful inspection of the API documentation itself more than writing lines of code. In case of GitHub, the documentation is one of the finest in the industry. But reading up on API docs and making API calls using Python is rather uninteresting as a standalone activity.

Before you go any further, I would recommend that you come up with one task that you would like to perform using Python on your GitHub account. Then try to implement it by reading only the official documentation provided by Python, its dependent libraries and GitHub. This will also help you adopt a healthier mindset where you understand what’s going on inside your code and improve it gradually over time.

]]>
Shahriar Shovon <![CDATA[How to Install Jetbrains DataGrip on Ubuntu]]> https://linuxhint.com/?p=33174 2018-12-04T16:06:59Z 2018-12-04T11:00:10Z DataGrip is a SQL database IDE from JetBrains. It has auto-completion support for the SQL language. It even analyzes your existing databases and helps you write queries faster. DataGrip can be used to manage your SQL databases graphically as well. You can also export your database to various formats like JSON, CSV, XML etc. It is very user-friendly and easy to use.

In this article, I will show you how to install DataGrip on Ubuntu. The procedure shown here will work on Ubuntu 16.04 LTS and later. I will use Ubuntu 18.04 LTS in this article for demonstration. So, let’s get started.

Installing DataGrip:

On Ubuntu 16.04 LTS and later, the latest version of DataGrip is available as a snap package in the official snap repository. So, you can easily install DataGrip on Ubuntu 16.04 LTS and later.

To install DataGrip snap package on Ubuntu 16.04 LTS and later, run the following command:

$ sudo snap install datagrip --classic

As you can see, DataGrip is being installed.

DataGrip is installed.

Initial Configuration of DataGrip:

Now, you can start DataGrip from the Application Menu of Ubuntu. Search for datagrip in the Application Menu and you should see the DataGrip icon. Just click on it.

As you’re running DataGrip for the first time, you will have to do some initial configuration. From this window, select Do not import settings and then click on OK.

Now, you will see the activation window. DataGrip is not free. To use DataGrip, you will have to buy it from JetBrains. Once you buy it, you will be able to use this window to activate DataGrip.

If you want to try out DataGrip before you buy it, select Evaluate for free and click on Evaluate.

DataGrip is being started.

Now, you will have to customize DataGrip. From here, select a UI theme. You can either use the Darcula dark theme from JetBrains or the Light theme, depending on your preference. Just select the one you like.

If you don’t want to customize DataGrip now and would rather leave the defaults, click on Skip Remaining and Set Defaults.

Otherwise, click on Next: Database Options.

Now, select the default SQL dialect. For example, if you mostly use MySQL, then, you should select MySQL. You may also set the default script directory for your chosen database dialect. It’s optional.

Once you’re done, click on Start using DataGrip.

DataGrip should start. You may click on Close to close the Tip of the Day.

This is the main window of DataGrip.

Connecting to a Database:

In this section, I will show you how to connect to a SQL database with DataGrip.

First, from the Database tab, click on the + icon as marked in the screenshot below.

Now, from Data Source, select the database you want to connect to. I will pick MariaDB.

As you are connecting to this type of database (MariaDB in my case) with DataGrip for the first time, you will have to download the database driver. You can click on Download as marked in the screenshot below to download the database driver.

As you can see, the required database driver files are being downloaded.

Once the driver is downloaded, fill in all the details and click on Test Connection.

If everything is alright, you should see a green Successful message as shown in the screenshot below.

Finally, click on OK.

You should be connected to your desired database.

Creating Tables with DataGrip:

You can create tables in your database graphically using DataGrip.  First, right click your database from the list and go to New > Table as marked in the screenshot below.

Now, type in your table name. To add new columns to the table, click on + icon as marked in the screenshot below.

Now, type in the column name, the type, a default value if your design requires one, and check the column attributes such as Auto Increment, Not null, Unique, Primary key, depending on your needs.

If you want to create another column, just click on the + icon again. As you can see, I created id, firstName, lastName, address, age, phone, and country columns. You can also use the remove icon to remove a column, and the Up and Down arrow icons to change the position of a column. Once you’re satisfied with your table, click on Execute.

Your table should be created.

You can double click on the table to open it in a graphical editor. From here, you can add, modify, delete table rows very easily. This is the topic of the next section of this article.

Working with Tables in DataGrip:

To add a new row, from the table editor, just click on the + icon as marked in the screenshot below.

A new blank row should show up.

Now, click on the columns and type in the values that you want for the new row. Once you’re done, click on DB upload icon as marked in the screenshot below.

As you can see, the changes are saved permanently in the database.

I added another row of dummy data just to demonstrate how delete and modify works.

To delete a row, select any column of the row you want to delete and click on the icon marked in the screenshot below.

As you can see, the row is now grayed out. To save the changes, click on the DB upload icon as marked in the screenshot below.

As you can see, the row is gone.

To edit any row, just double click on the column of the row that you want to edit and type in the new value.

Finally, click somewhere else and then click on DB upload icon for the changes to be saved.

Running SQL Statements in DataGrip:

To run SQL statements, just type in the SQL statement, move the cursor to the end of the SQL statement and press <Ctrl> + <Enter>. It will execute and the result will be displayed as you can see in the screenshot below.
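
For example, assuming the table created in the earlier section was named users (use whatever name you actually typed when creating it), a statement like this would work:

SELECT firstName, lastName, country FROM users WHERE age >= 18;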

So, that’s how you install and use DataGrip on Ubuntu. Thanks for reading this article.

]]>
Sidratul Muntaha <![CDATA[How to Install Manjaro Linux]]> https://linuxhint.com/?p=33153 2018-12-04T16:01:25Z 2018-12-04T10:46:28Z In the world of Linux distros, there are numerous ones that can give your machine a unique touch. With an intuitive user interface and Arch Linux at the base, Manjaro Linux is one of the elite distros you can get right now. Arch Linux is always considered a tough distro. That’s why most users don’t use it. Manjaro Linux takes on the effort of making Arch more user-friendly, with easier installation and usage methods, while maintaining the tremendous power of Arch Linux itself. Without further ado, let’s get Manjaro Linux ready on our system!

Getting Manjaro Linux

At first, grab the installation media of Manjaro Linux.


Now, make a bootable USB flash drive. You can use Linux Live USB Creator, Rufus, etc. This way, the installation will go smoother and faster.
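
On Linux, you can also write the ISO from the terminal with dd. The ISO filename and target device below are placeholders; double-check the device name with lsblk first, because dd will overwrite whatever it is pointed at:

$ sudo dd if=manjaro.iso of=/dev/sdX bs=4M status=progress && sync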

Installing Manjaro Linux

Boot into the USB flash drive you just created.

Select the option “Boot: Manjaro.x86_64”.

Once the system loads, this is where you’ll land.

Double-click “Install Manjaro Linux” on the screen.

This is the first step of installing Manjaro Linux. Choose your language.

At the next step, select the location you’re in. This is important for the system locale and update server.

Select the keyboard layout of your system.

Now, it’s time to partition the installation disk. You can choose auto partitioning (Erase disk) or “Manual” partitioning. Choose whichever you need.

Time to create your user account. Fill in the forms with appropriate credentials.

Check out all the installation steps you’ve configured and make sure that all of them are right.

Once everything is set, start the installation process.

Restart the system to complete the installation.

Voila! Your system is ready to use!

]]> Ranvir Singh <![CDATA[Virtual Environments in Python 3]]> https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=33132 2018-12-01T17:25:24Z 2018-11-30T18:22:46Z

Like most people, I hate installing unnecessary packages on my workstation. After you are done with them, uninstalling them is never enough. Packages leave behind tonnes of folders and files. They require many other (obscure) packages which also get left behind in the wake of things. Slowly but surely these things build up in your workstation and, while they may not hog any significant portion of your disk space, they can cause other issues.

Older Dependencies

Older packages may linger around and your Python code will happily use them. This is not a problem if your Python scripts are meant to run locally and not for industrial purposes. Data scientists, students and even regular people automating their everyday tasks can just keep using the older packages without much of a problem.

The problem begins when you ship your code to production. When you do that, chances are you will just send your main script and not all the package dependencies. For example, if you have written a microservice to be shipped as an AWS Lambda function, the first few lines might import the requests module like this:

import requests

The requests package supplied by AWS Lambda will be different from your older one, and as a result the program might crash.

Conflicts

Conflicts might also come into the picture when different projects use different versions of the same package. Maybe some of your older projects need the older pip packages, but you might need a newer package for other projects. Running pip install -U <package_name> will upgrade the package across your OS, causing issues when you go back to maintaining your older projects.

Python Virtual Environments

If you are using any version of Python above 3.5, you can use a built-in module called venv to create what are called Python virtual environments. What this module does is create an isolated folder or directory where all your pip packages and other dependencies can live. The folder also contains an ‘activate’ script. Whenever you want to use a particular virtual environment, you simply run this script, after which only the packages contained within this folder can be accessed. If you run pip install, the packages will be installed inside this folder and nowhere else. After you are done using an environment, you can simply ‘deactivate’ it, and then only the global pip packages will be available to you.

If you are using Ubuntu 18.04 and above, you don’t even need to install the pip package manager across your entire system. Pip can only exist inside your virtual environment if you prefer it that way.

Installing venv and Creating Virtual Environments

Ubuntu 18.04 LTS comes out of the box with Python 3.6.x, but the Python venv module is not installed, and neither is pip. Let’s install just venv.

$ sudo apt install python3-venv

Next, we go to the directory inside which you want your Virtual Environment directory to be created. For me it is ~/project1

$ cd ~/project1

Create your venv with the following command; notice that my-env is just the name of that environment, and you can name it whatever you want:

$ python3 -m venv my-env

Note: In some Python 3 installations, like the ones available on Windows, you call the Python interpreter using just python and not python3, but that changes from system to system. For the sake of consistency, I will be using only python3.

After the command has finished execution, you will notice a new folder ~/project1/my-env. To activate the my-env virtual environment, you will have to:

  1. Run,
    $ source ~/project1/my-env/bin/activate if you are using Bash.
    There are alternative scripts called activate.fish and activate.csh for people who use the fish and csh shells, respectively.
  2. On Windows, the script can be invoked by running:
    > .\my-env\Scripts\activate.bat if you are using Command Prompt, or,
    > .\my-env\Scripts\activate.ps1 if you are using PowerShell.

Using Virtual Environments

Once you run the script successfully, you will notice that the prompt changes to something like what’s shown below, and you can now install packages using pip:

(my-env) $ pip3 install requests
## We can list the installed packages using `pip freeze` command
(my-env) $ pip3 freeze
certifi==2018.10.15
chardet==3.0.4
idna==2.7
pkg-resources==0.0.0
requests==2.20.1
urllib3==1.24.1

As long as the virtual environment is active (as indicated by the prompt) all the packages will be saved only in the virtual environment directory (my-env), no matter where you are in the file system.

To get out of the virtual environment, you can type deactivate into the prompt and you will be back to using the system-wide installation of Python. You can notice that the new packages we just installed won’t be shown in the global pip installation.

To get rid of the virtual environment, simply delete the my-env folder that was created after running the module. You can create as many of these environments as you like.
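
As a quick recap, the full lifecycle of a virtual environment, using the names from this article, looks like this:

$ python3 -m venv my-env         # create the environment
$ source my-env/bin/activate     # activate it (Bash)
(my-env) $ pip3 install requests # packages land inside my-env only
(my-env) $ deactivate            # back to the system-wide Python
$ rm -rf my-env                  # delete the environment entirely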

Conclusion

With the venv module, virtual environments are now available as a standard feature of Python, especially if you install from Python.org. Previously, we used to have many third-party implementations called virtualenv, pyenv, etc.

This gave rise to more and more bloated software like Anaconda, which is especially popular among data scientists. It is good to finally have a simple tool for managing Python packages without having to install a lot of other unrelated junk. You can read more about venv here. ]]> Fahmida Yesmin <![CDATA[PHP Tutorial For Beginners]]> https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=33088 2018-12-01T17:19:41Z 2018-11-30T09:16:16Z

If you are new to PHP, then this tutorial will help you learn PHP from the beginning.

PHP Basics:

  1. Hello World
  2. Comments
  3. Variables with Strings
  4. Concatenate Strings
  5. Trim Strings
  6. Substrings
  7. Variables with Numbers
  8. Math
  9. Current Date
  10. Date checking
  11. If Statements
  12. Else and ElseIf
  13. If with (OR and AND)
  14. Arrays
  15. while Loop
  16. foreach loop
  17. functions
  18. function arguments
  19. die and exit
  20. Include Files
  21. JSON usage
  22. XML usage
  23. HTML Form Inputs
  24. get_browser function
  25. Session storage
  26. Server Request Method
  27. HTTP POST
  28. Sending Email
  29. Object and Class
  30. Exception Handling

Hello World

The extension of a PHP file is .php. The <?php and ?> tags are used to define a PHP code block, and using ‘;’ at the end of each statement is mandatory in a PHP script. Create a new file named ‘first.php’ to run your first script and save the file in your web server’s document root folder (for example, /var/www/html or htdocs, depending on your setup). Add the following script to print a simple text, “Hello World”.

<?php
//Print text
echo "Hello World";
?>

Output:

Run the file from the browser.

http://localhost/first.php


Comments

Like other standard programming languages, you can use ‘//’ for a single-line comment and ‘/* */’ for a multi-line comment. Create a PHP file named ‘comment.php’ with the following code to show the use of single and multiple line comments in PHP.

<?php
//Assign a value in the variable $n
$n = 10;
/* Print
the value of $n */

echo "n = $n";
?>

Output:

Run the file from the browser.

http://localhost/comment.php


Variables with strings

The ‘$’ symbol is used to declare and read any variable in PHP. Create a PHP file named ‘strings.php‘ with the following code. You can use single quotes (‘ ‘) or double quotes (” “) to declare or print any string variable, but double quotes are required to print the value of a string variable embedded in other string data. Different uses of string variables are shown in this example.

<?php
$site = 'LinuxHint';
echo "$site is a good blog site.<br/>";
$os = 'Linux';
echo "You can read different topics of $os on $site.";
?>

Output:

Run the file from the browser.

http://localhost/strings.php


Concatenate Strings

The ‘.’ operator is used in PHP to concatenate strings. Create a PHP file named ‘concate.php’ and add the following code to combine multiple variables. The sum of two number variables is stored in another variable, and the values of the three variables are printed, combined with other strings.

<?php
$a = 30;
$b = 20;
$c = $a + $b;
echo "The sum of ".$a." and ".$b." is ".$c;
?>

Output:

Run the file from the browser.

http://localhost/concate.php


Trim Strings

The trim() function is used in PHP to remove any given character from the left and right sides of a string. There are two other functions in PHP for removing a character from the left or right side only: ltrim() and rtrim(). Create a PHP file named ‘trimming.php’ with the following code to show the uses of these three functions. The three trimming functions are used in the script, and the character ‘a’ is removed from the start, the end, or both sides of the string, depending on the function applied.

<?php
$text = "aa I like programming aa";
echo "Before trim:$text<br/>";
echo "After <b>trim</b>:".trim($text,'a')."<br/>";
echo "After <b>ltrim</b>:".ltrim($text,'a')."<br/>";
echo "After <b>rtrim</b>:".rtrim($text,'a')."<br/>";
?>

Output:

Run the file from the browser.

http://localhost/trimming.php

You can learn more about trimming from the following tutorial link.

https://linuxhint.com/trim_string_php/


Substrings

The substr() function is used in PHP to read a particular part of a string. This function can take three parameters. The first parameter is the main string that you want to cut, the second parameter is the starting index, and the third parameter is the length of the substring. The third parameter is optional for this function. Create a PHP file named ‘substring.php‘ with the following code to show the use of this function. In this function, the starting index counts from 0, but a negative starting index counts backward from the end of the string. The length value counts from 1. If you omit the third parameter of this function, the characters from the starting index to the end of the main string will be returned.

<?php
echo substr("Web Programming",4,7)."<br>";
echo substr("Web Programming",4)."<br>";
echo substr("Web Programming",-8,4)."<br>";
?>

Output:

Run the file from the browser.

http://localhost/substring.php


Variables with Numbers

You can declare different types of number variables in PHP. Number values can be integers or floats. Three types of numbers are declared and added in the following script. Create a PHP file named ‘numbers.php’ to show the use of number variables.

<?php
$a = 8;
$b = 10.5;
$c = 0xFF;
echo $a+$b+$c;
?>

Output:

Run the file from the browser.

http://localhost/numbers.php


Math

PHP contains many built-in functions to do various types of mathematical tasks, such as abs(), ceil(), floor(), hexdec(), max(), min(), rand() etc. The use of the abs() function is shown in the following script. The abs() function returns the absolute value of any number. If you provide a negative number, then abs() will return only the value, without any sign.

absval.php

<?php
$number = -17.87;
$absnum = abs($number);
echo $absnum;
?>

Output:

Run the file from the browser.

http://localhost/absval.php


Current Date

You can get all date- and time-related information in PHP in two ways. One way is to use the date() function and another way is to use the DateTime class. How you can get the current date by using the two mentioned ways is shown in the following script. The script will show the current date in ‘day-month-year’ format.
 
currentdate.php

<?php
$CurrentDate1 = date('d-m-Y');
echo $CurrentDate1."<br/>";
$CurrentDate2 = new DateTime();
echo $CurrentDate2->format('d-m-Y');
?>

Output:

Run the file from the browser.

http://localhost/currentdate.php

 

Date checking

The checkdate() function is used in PHP to check whether a date is valid or not. The use of this function is shown in the following script. This script will check whether a year is a leap year or not, based on a date (February 29 exists only in leap years).

leapyear.php

<?php
if(checkdate(02, 29, 2018))
echo "The year is leap year";
else
echo "The year is not leap year";
?>

Output:

Run the file from the browser.

http://localhost/leapyear.php


if Statements

The if statement is used for declaring a conditional statement. The syntax of the if statement in PHP is similar to other standard programming languages. The following script shows the use of a simple if statement. According to the script, the condition is true and it will print the output, “You are eligible for this offer”.

if.php

<?php
$age = 20;
if ($age >= 18)
echo "You are eligible for this offer";
?>

Output:

Run the file from the browser.

http://localhost/if.php


Else and ElseIf

You can use else and elseif with an if statement if you want to execute different statements based on different conditions. Three conditions are checked in the following script. The second condition will be true according to the script, and it will print “You won the second prize”.
 
elseif.php

<?php
$n = 220;
if ($n == 1010) {
echo "You won the first prize";
} elseif ($n == 220) {
echo "You won the second prize";
} else {
echo "Try again later";
}
?>

Output:

Run the file from the browser.

http://localhost/elseif.php


If with (OR and AND)

You can use multiple conditions in an if statement by using logical OR and AND. Logical OR returns true when any one of multiple conditions becomes true. Logical AND returns true when all declared conditions become true. The following script shows the uses of the if statement with OR and AND logic. Here, an if-elseif-else statement is used with logical AND, which will print the output based on the assigned $current_time. Another if statement is used with logical OR, which will print the output if any of its conditions becomes true.
 
orand.php

<?php
$current_time = 17;
$break_time = false;
if ($current_time >= 9 AND $current_time <= 12)
echo "Morning<br>";
elseif ($current_time > 13 AND $current_time <= 16)
echo "Afternoon<br>";
else
{
echo "Evening<br>";
$break_time = true;
}
if ($current_time > 16 OR $break_time == true)
echo "Go to home<br>";
?>

Output:

Run the file from the browser.

http://localhost/orand.php


Arrays

When you want to store multiple values in a single variable, you can use an array or object variable. Mainly two types of arrays can be declared in any programming language: numeric and associative arrays. Arrays can also be categorized as one-dimensional or multidimensional. The following example shows the use of simple numeric and associative arrays. Here, the numeric array $names is read and printed by using a for loop, and the associative array $emails is read and printed by a foreach loop.

array.php

<?php
//Numeric Array
$names = array("Jim", "Riffat", "Ella");
for($i = 0; $i<count($names); $i++)
echo "Name: ".$names[$i]."<br/>";

//Associative Array
$emails=array("Jim"=>"jim@yahoo.com","Riffat"=>"riffat@gmail.com",
"Ella"=>"ella@hotmail.com");
foreach($emails as $name=>$email)
{
echo "<br/>The email address of $name is $email<br/>";
}
?>

Output:

Run the file from the browser.

http://localhost/array.php

You can visit the following tutorial link to know more about PHP arrays.

https://linuxhint.com/php-arrays-tutorial/


while Loop

PHP uses three types of loops to iterate a block of code multiple times. The while loop is one of them; it continues the iteration until the loop reaches the termination condition. The syntax of the while loop declaration is similar to other standard programming languages. The following example shows the use of a while loop. The loop is used here to find the even numbers from 1 to 10. The loop will iterate 10 times and check whether each number is divisible by 2 or not. The numbers which are divisible by 2 will be printed.

 

while.php

<?php
$n = 1;
echo "Even numbers from 1-10<br/>";
while($n < 11)
{
if(($n % 2) == 0)
echo "$n <br/>";
$n++;
}
?>

Output:

Run the file from the browser.

http://localhost/while.php


foreach loop

PHP uses the foreach loop to read an array or object variable. This loop can read key/value pairs from an associative array. The use of this loop is shown in the following script. Here, an associative array named $books is declared. The index of the array contains the book type and the value of the array contains the book name. The foreach loop is used to iterate over the array with key and value and print them, concatenated with other strings.
 
foreach.php

<?php
$books = array("cms"=>"Wordpress", "framework"=>"Laravel 5","javascript library"=>
"React 16 essentials");
foreach ($books as $type=>$bookName) {
echo "<b> $bookName </b> is a popular <b> $type </b> <br>";
}
?>

Output:

Run the file from the browser.

http://localhost/foreach.php


functions

If you want to use the same block of code many times in different parts of the same script, then it is better to create a function with the common block of code and call the function wherever the code needs to execute. A simple use of a function is shown in the following example. Here, a function without any arguments is declared that will print a text when called.

function.php

<?php
//Declare the function
function WelcomeMessage() {
echo "<center><h3> Welcome to Linuxhint </h3></center>";
}
// call the function
WelcomeMessage();
?>

Output:

Run the file from the browser.

http://localhost/function.php


function arguments

You can use a function with arguments or without arguments. The previous example shows the use of an argument-less function. You can send arguments to a function by value or by reference. An argument is passed by value to the function in the following example. Here, a function with one argument is defined that will take the radius value of a circle and calculate the area of the circle based on that value. The function is called three times with three different radius values.
 
circlearea.php

<?php
//Declare the function
function circleArea($radius) {
$area = 3.14*$radius*$radius;
echo "<h3> The area of the circle is $area </h3>";
}
// call the function
circleArea(12);
circleArea(34);
circleArea(52);
?>

Output:

Run the file from the browser.

http://localhost/circlearea.php


die and exit

PHP uses the die() and exit() functions to exit from the script while displaying an error message. There is no basic difference between these two functions. The uses of both functions are shown in the following examples.

die() function

The following script will generate an error if newfile.txt doesn’t exist in the current location, and it will stop the execution by displaying the error message included in the die() method.

dieerr.php

<?php
if(!fopen("newfile.txt","r"))
die("Unable to open the file");
echo "Reading the file content...";
?>

Output:

Run the file from the browser.

http://localhost/dieerr.php

exit() function

The following script will stop the execution of the script by displaying an error message if the value of $n is not equal to 100.

exiterr.php

<?php
$n=10;
if($n != 100)
exit("n is not equal to 100");
else
echo "n is equal to 100";
?>

Output:

Run the file from the browser.

http://localhost/exiterr.php

 


Include Files

When you need to use the same code in multiple PHP scripts, it is better to save the common script in a file and reuse the code by including that file. You can include a file in PHP by using four methods: require(), require_once(), include() and include_once(). If require() or require_once() fails to include the file, it stops the execution of the script forcibly, but include() or include_once() doesn’t stop the execution of the script if an error occurs during inclusion. The use of two of these methods is shown in the following example. Create a PHP file named “welcome.php” with the following code that will be included later. This script will print a simple text.

 

welcome.php

<?php
echo "Start reading from here<br/>";
?>

Create another PHP file named “include_file.php” and add the following code. Note that it deliberately includes the misspelled filename ‘welcom.php’, which doesn’t exist, so that the inclusion fails. Here, the include() method will not stop the execution on the inclusion error and will print the message “Laravel is a very popular PHP framework now”. But the require() method will stop the execution on the inclusion error and will not print the last two echo messages after the require() statement.

 

include_file.php

<?php
include('welcom.php');
echo "Laravel is a very popular PHP framework now<br/>";
require('welcom.php');
echo "You can use Magento for developing ecommerce site<br/>";
echo "Thank you for reading<br>";
?>

Output:

Run the file from the browser.

http://localhost/include_file.php


JSON Usage

There is built-in support in PHP for preparing data in JSON format so that it can be sent from the web server and displayed in the web page. One of the common methods of PHP is json_encode() for creating JSON data. This method is used in the following script to convert a PHP array into JSON data.

json.php

<?php
$items = array("Pen", "Pencil", "Eraser", "Color Book");
$JSONdata = json_encode($items);
echo $JSONdata;
?>

Output:

Run the file from the browser.

http://localhost/json.php


XML Usage

PHP has an extension named SimpleXML for parsing XML data. simplexml_load_string() is a built-in function of PHP to parse XML data. The following example shows how you can use the simplexml_load_string() function to read data from XML content. Here, the XML data is stored in a variable, $XMLData, and the $xml variable is used to read the data of $XMLData. After reading the data, the content is printed as an array structure with data types.

xml.php

<?php
$XMLData =
"<?xml version='1.0' encoding='UTF-8'?>
<book>
<title>Easy Laravel 5</title>
<author>W. Jason Gilmore</author>
<website>easylaravelbook.com</website>
</book>"
;
 
$xml=simplexml_load_string($XMLData) or die("Error in reading");
var_dump($xml);
?>

Output:

Run the file from the browser.

http://localhost/xml.php
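
Instead of dumping the whole structure, you can also read individual elements as object properties. A short sketch using the same book data:

<?php
$XMLData = "<book><title>Easy Laravel 5</title><author>W. Jason Gilmore</author></book>";
$xml = simplexml_load_string($XMLData) or die("Error in reading");
//Each child element is available as a property of the object
echo "Title: " . $xml->title . "<br/>";
echo "Author: " . $xml->author . "<br/>";
?>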

Top

HTML Form Inputs

You can use different built-in arrays of PHP to read submitted form data, depending on the method attribute of the form: use the $_POST array if the form is submitted with the POST method, and the $_GET array if it is submitted with the GET method (a short GET sketch appears after this example). The following example uses the POST method to submit the form data to the server. You have to create two files to test the following script: one is “login.html” and the other is “check.php”. The HTML file contains a form with two elements, username and password. The form data is submitted to the check.php file using the post method. The PHP script checks the submitted username and password; if the username is ‘admin’ and the password is ‘1234’, it prints ‘Valid user’, otherwise ‘Invalid user’.

login.html

<html>
<body>
<form action="check.php" method="post">
Username: <input type="text" name="username"><br>
password: <input type="password" name="pwd"><br>
<input type="submit">
</form>
</body>
</html>

check.php

<?php
if($_POST['username'] == 'admin' && $_POST['pwd'] == '1234')
echo "Valid user";
else
echo "Invalid user";
?>

Output:

Run the file from the browser.

http://localhost/login.html

If the username and password do not match, the following output will appear.
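
If the form used method="get" instead, the same values would arrive in the $_GET array and also appear in the URL. A minimal sketch (the parameter names mirror check.php; treat this as an illustration, not part of the original example):

<?php
//Values arrive via the query string, e.g. ?username=admin&pwd=1234
if(isset($_GET['username']) && isset($_GET['pwd']))
{
if($_GET['username'] == 'admin' && $_GET['pwd'] == '1234')
echo "Valid user";
else
echo "Invalid user";
}
?>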

Top

get_browser function

get_browser() is a built-in PHP function that reads information about the visitor’s browser from the browscap.ini file. The following script shows the output of this function. Note that it only works when the browscap directive in php.ini points to a valid browscap.ini file; a sketch of that configuration appears after the example.
 
getbrowser.php

<?php
echo $_SERVER['HTTP_USER_AGENT'];
$browser = get_browser();
print_r($browser);
?>

Output:

Run the file from the browser.

http://localhost/getbrowser.php
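
A sketch of the relevant php.ini lines (the file path is only an example; point it at your own copy of browscap.ini):

[browscap]
browscap = /usr/local/etc/browscap.ini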

Top

Session storage

You can store session information in PHP using the $_SESSION array. PHP has many built-in functions to handle sessions. The session_start() function is used in the following script to start the session, and two session values are stored in the $_SESSION array. A sketch of reading the values back appears after the example.

session.php

<?php
session_start();
$_SESSION["name"] = "John";
$_SESSION["color"] = "Blue";
echo "Session data are stored.";
?>

Output:

Run the file from the browser.

http://localhost/session.php
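
The stored values can be read back by any later script in the same session, as long as session_start() is called first. A minimal sketch (assume it is saved as a separate file and visited after session.php):

<?php
session_start();
//Read the values stored by session.php
echo "Name: " . $_SESSION["name"] . "<br/>";
echo "Color: " . $_SESSION["color"] . "<br/>";
?>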

Top

Server Request Method

As mentioned earlier, PHP has many superglobal variables for handling server requests. The $_SERVER array is one of them and is used to get server information. The following script prints the file name of the executing script and the name of the running server; a sketch using $_SERVER['REQUEST_METHOD'] appears after the example.

 

serverrequest.php

<?php
echo $_SERVER['PHP_SELF'];
echo "<br>";
echo $_SERVER['SERVER_NAME'];
echo "<br>";
?>

Output:

Run the file from the browser.

http://localhost/serverrequest.php
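
The $_SERVER array also reports which HTTP method was used for the current request via $_SERVER['REQUEST_METHOD']. A minimal sketch:

<?php
//Detect whether the page was requested with GET or POST
if($_SERVER['REQUEST_METHOD'] == 'POST')
echo "The form was submitted";
else
echo "The page was requested with " . $_SERVER['REQUEST_METHOD'];
?>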

Top

HTTP POST

 

The HTTP protocol is used for communication between the server and the client. A browser acts as a client, sending HTTP requests to the server, and the server sends responses back based on those requests. An HTTP request can be sent using the POST or GET method. The following example shows the use of an HTTP POST request in PHP. Here, an HTML form takes the height and width of a rectangle and sends them to the server. The $_POST array is used to read the values, calculate the area of the rectangle, and print it.
 
httppost.php

<html>
<body>
<form action = "#" method = "post">
Height: <input type = "text" name = "ht" /> <br/><br/>
Width: <input type = "text" name = "wd" /> <br/><br/>
<input type = "submit" />
</form>
 
</body>
</html>
<?php
//Calculate only after the form has been submitted
if( isset($_POST["ht"]) && isset($_POST["wd"]) )
{
$area = $_POST["ht"] * $_POST["wd"];
echo "The area of the rectangle is $area";
}
?>

Output:

Run the file from the browser.

http://localhost/httppost.php

If the user types 10 and 20 as height and width then the following output will occur.

Top

Sending Email

 

PHP has a built-in function named mail() for sending email. It takes four commonly used arguments; the first three are mandatory and the fourth is optional. The first argument takes the receiver’s email address, the second takes the email subject, the third takes the email body, and the fourth takes extra header content. Note that this function only works on a server with a working mail setup. Its basic use is shown in the following script, and a sketch with the optional headers argument appears after it.
 
email.php

<?php
$to = 'newuser@example.com';
$subject = 'Thank you for contacting us';
$message = 'We will solve your problem soon';
mail($to, $subject, $message);
?>
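
As a sketch of the optional fourth argument, extra headers such as From and Reply-To can be passed as a string, with individual headers separated by \r\n (the addresses here are placeholders):

<?php
$to = 'newuser@example.com';
$subject = 'Thank you for contacting us';
$message = 'We will solve your problem soon';
//Extra headers let the receiver see a proper sender address
$headers = "From: support@example.com\r\n" .
"Reply-To: support@example.com";
mail($to, $subject, $message, $headers);
?>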

If you want to send email from a local server using PHP, you can use the PHPMailer class. You can visit the following tutorial link to know more about this class.

https://linuxhint.com/how-to-send-email-from-php/

Top

Class and Object

PHP has had full object-oriented programming support since version 5. Classes and objects are the major parts of any object-oriented program: a class is a collection of variables and methods, and an object is an instance of a class. How to create and use a simple class and object is shown in the following example. Here, a class named Customer is defined with three public variables and one method. After creating the object named $custobj, the variables are initialized by calling the setValue() method and printed later. A constructor-based sketch appears after the example.
 
classobject.php

<?php
class Customer
{
//Declare  properties/variables
public $name;
public $address;
public $phone;
 

//Set the customer data
public function setValue($name, $addr, $phone){
$this->name = $name;
$this->address = $addr;
$this->phone = $phone;
}
}
// Create a new object of Customer
$custobj = new Customer;

// Set the properties values
$custobj->setValue("Alia","Dhaka, Bangladesh","+8801673434456");

// Print the customer value
echo "Name: ".$custobj->name."<br/>";
echo "Address: ".$custobj->address."<br/>";
echo "Phone: ".$custobj->phone."<br/>";
?>

Output:

Run the file from the browser.

http://localhost/classobject.php
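
A more idiomatic variant is to initialize the properties in a constructor, which runs automatically when the object is created. A minimal sketch:

<?php
class Customer
{
public $name;
//The constructor runs when the object is created with "new"
public function __construct($name){
$this->name = $name;
}
}
$custobj = new Customer("Alia");
echo "Name: " . $custobj->name;
?>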

Top

Exception Handling

One of the important features of object-oriented programming is exception handling. It has two parts: the try block and the catch block. The try block contains the code to run, and when an error occurs there, an exception is thrown to the catch block. A simple use of exception handling is shown in the following example. Here, the try block checks the value of $number; if $number is greater than 9, it throws an exception with the message “You have to select one digit number”, otherwise the script prints the value of $number with other text. A sketch with a finally block appears after the example.
 
exception.php

<?php
$number = 15;
//try block
try {
if($number > 9) {
throw new Exception("You have to select one digit number<br/>");
}
//Print the output if no exception occurs
echo "Selected number is $number<br/>";
}
//catch exception
catch(Exception $e) {
echo 'Error Message: ' .$e->getMessage();
}
?>

Output:

Run the file from the browser.

http://localhost/exception.php
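
From PHP 5.5 onward, a finally block can be added after catch to run cleanup code whether or not an exception was thrown. A minimal sketch:

<?php
try {
throw new Exception("Something went wrong");
}
catch(Exception $e) {
echo 'Error Message: ' . $e->getMessage();
}
finally {
//This block runs in both the success and the error case
echo '<br/>Cleanup is done';
}
?>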

Top

Conclusion

The basics of PHP programming are explained in this tutorial using 30 examples. If you want to learn PHP or become a web developer, this tutorial will help you start writing scripts in PHP.

]]>
Shahriar Shovon <![CDATA[How to Measure Distance with Raspberry Pi]]> https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=33066 2018-11-27T15:12:23Z 2018-11-27T07:03:57Z You can measure distance using the HC-SR04 ultrasonic sensor with Raspberry Pi. The HC-SR04 sensor can measure distance from 2cm (0.02m) to 400cm (4m). It sends a burst of eight 40kHz pulses and then waits for them to hit an object and get reflected back. The time it takes for the 40kHz sound wave to travel back and forth is used to calculate the distance between the sensor and the object in its way. That’s basically how the HC-SR04 sensor works.

In this article, I will show you how to use a HC-SR04 ultrasonic sensor to measure distance between your sensor and an object in its way using Raspberry Pi. Let’s get started.

Components You Need:

To successfully measure distance with Raspberry Pi and HC-SR04 sensor, you need,

  • A Raspberry Pi 2 or 3 single board computer with Raspbian installed.
  • A HC-SR04 ultrasonic sensor module.
  • 3x10kΩ resistors.
  • A breadboard.
  • Some male to female connectors.
  • Some male to male connectors.

I have written a dedicated article on installing Raspbian on Raspberry Pi, which you can check at https://linuxhint.com/install_raspbian_raspberry_pi/ if you need.

HC-SR04 Pinouts:

The HC-SR04 has 4 pins: VCC, TRIGGER, ECHO, and GROUND.

Fig1: HC-SR04 pinouts (https://www.mouser.com/ds/2/813/HCSR04-1022824.pdf)

The VCC pin should be connected to +5V pin of the Raspberry Pi, which is pin 2. The GROUND pin should be connected to the GND pin of the Raspberry Pi, which is pin 4.

The TRIGGER and ECHO pins should be connected to the GPIO pins of the Raspberry Pi. While the TRIGGER pin can be connected directly to one of the GPIO pins, the ECHO pin needs a voltage divider circuit, because the sensor’s 5V output would otherwise damage the 3.3V GPIO input of the Raspberry Pi.

Circuit Diagram:

Connect the HC-SR04 ultrasonic sensor to your Raspberry Pi as follows:

Fig2: HC-SR04 ultrasonic sensor connected to Raspberry Pi.

Once everything is connected, this is how it looks:

Fig3: HC-SR04 ultrasonic sensor connected to Raspberry Pi on breadboard.

Fig4: HC-SR04 ultrasonic sensor connected to Raspberry Pi on breadboard.

Writing A Python Program for Measuring Distance with HC-SR04:

First, connect to your Raspberry Pi using VNC or SSH. Then, open a new file (let’s say distance.py) and type in the following lines of code (the full listing is shown at the end of this article):

Here, line 1 imports the Raspberry Pi GPIO library.

Line 2 imports the time library.

Inside the try block, the actual code for measuring the distance using the HC-SR04 is written.

The finally block is used to clean up the GPIO pins with GPIO.cleanup() method when the program exits.

Inside the try block, on line 5, GPIO.setmode(GPIO.BOARD) is used to make defining pins easier. Now, you can reference pins by their physical numbers as they appear on the Raspberry Pi board.

On lines 7 and 8, pinTrigger is set to 7 and pinEcho is set to 11. The TRIGGER pin of the HC-SR04 is connected to pin 7, and the ECHO pin is connected to pin 11 of the Raspberry Pi. Both of these are GPIO pins.

On line 10, pinTrigger is set up as an output pin using the GPIO.setup() method.

On line 11, pinEcho is set up as an input pin using the GPIO.setup() method.

Lines 13-17 reset pinTrigger (by setting it to logic 0), then set it to logic 1 for 10 microseconds, and finally set it back to logic 0. This 10µs trigger pulse makes the HC-SR04 sensor send its burst of eight 40kHz pulses.

Lines 19-24 measure the time it takes for the 40kHz pulses to be reflected off an object and travel back to the HC-SR04 sensor.

On line 25, the distance is calculated using the formula,

Distance = pulse duration * speed of sound (340 m/s) / 2

=> Distance = pulse duration * 170 m/s

The script works in centimeters rather than meters, which is why the code multiplies the pulse duration by 17150 (34300 cm/s divided by 2 for the round trip). The calculated distance is also rounded to 2 decimal places.

Finally, on line 27, the result is printed. That’s it, very simple.

Now, run the Python script with the following command:

$ python3 distance.py

As you can see, the distance measured is 8.40 cm.

Fig5: object placed at about 8.40cm away from the sensor.

I moved the object a little farther, and the distance measured is 21.81cm. So, it’s working as expected.

Fig6: object placed at about 21.81 cm away from the sensor.

So that’s how you measure distance with Raspberry Pi using the HC-SR04 ultrasonic sensor.  See the code for distance.py below:

import RPi.GPIO as GPIO
import time
try:
    # Use physical board pin numbering
    GPIO.setmode(GPIO.BOARD)
    pinTrigger = 7
    pinEcho = 11

    GPIO.setup(pinTrigger, GPIO.OUT)
    GPIO.setup(pinEcho, GPIO.IN)

    # Reset the trigger, then hold it high for 10 microseconds
    # so the sensor emits its 40kHz burst
    GPIO.output(pinTrigger, GPIO.LOW)
    GPIO.output(pinTrigger, GPIO.HIGH)
    time.sleep(0.00001)
    GPIO.output(pinTrigger, GPIO.LOW)

    # Time the echo pulse: wait for it to start, then for it to end
    while GPIO.input(pinEcho) == 0:
        pulseStartTime = time.time()
    while GPIO.input(pinEcho) == 1:
        pulseEndTime = time.time()

    # Speed of sound is about 34300 cm/s; halve it for the round trip
    pulseDuration = pulseEndTime - pulseStartTime
    distance = round(pulseDuration * 17150, 2)

    print("Distance: %.2f cm" % (distance))
finally:
    # Release the GPIO pins when the program exits
    GPIO.cleanup()
]]>
Habeeb Kenny Shopeju <![CDATA[Logging Into Websites With Python]]> https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=33057 2018-11-27T15:05:05Z 2018-11-26T19:10:00Z The login feature is an important functionality in today’s web applications. It helps keep special content from non-users of the site and is also used to identify premium users. Therefore, if you intend to scrape a website, you could come across the login feature if the content is only available to registered users.

Web scraping tutorials have been covered in the past, so this tutorial covers only the aspect of gaining access to websites by logging in with code instead of doing it manually through the browser.

To understand this tutorial and be able to write scripts for logging into websites, you would need some understanding of HTML. Maybe not enough to build awesome websites, but enough to understand the structure of a basic web page.

Installation

This is done with the Requests and BeautifulSoup Python libraries. Aside from those Python libraries, you need a good browser such as Google Chrome or Mozilla Firefox, as it will be important for the initial analysis before writing code.

The Requests and BeautifulSoup libraries can be installed with the pip command from the terminal as seen below:

pip install requests
pip install BeautifulSoup4

To confirm the success of the installation, activate Python’s interactive shell which is done by typing python into the terminal.

Then import both libraries:

import requests
from bs4 import BeautifulSoup

The import is successful if there are no errors.

The process

Logging into a website with scripts requires knowledge of HTML and an idea of how the web works. Let’s briefly look into how the web works.

Websites are made of two main parts, the client-side and the server-side. The client-side is the part of a website that the user interacts with, while the server-side is the part of the website where business logic and other server operations such as accessing the database are executed.

When you open a website through its link, you are making a request to the server-side to fetch the HTML files and other static files such as CSS and JavaScript. This is known as a GET request. However, when you fill in a form, upload a media file or a document, create a post, or click, let’s say, a submit button, you are sending information to the server-side. This is known as a POST request; a short sketch of both request types appears below.
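
As a minimal sketch of the two request types with the Requests library (the URLs are placeholders, and this bare POST would not pass the token check discussed later):

import requests

# A GET request fetches a page and its static content
response = requests.get("http://example.com")
print(response.status_code)

# A POST request sends form data to the server
response = requests.post("http://example.com/login",
                         data={"username": "admin", "password": "12345"})
print(response.status_code)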

An understanding of those two concepts is important when writing our script.

Inspecting the website

To practice the concepts of this article, we would be using the Quotes To Scrape website.

Logging into websites requires information such as the username and a password.

However since this website is just used as a proof of concept, anything goes. Therefore we would be using admin as the username and 12345 as the password.

Firstly, it is important to view the page source as this would give an overview of the structure of the web page. This can be done by right clicking on the web page and clicking on “View page source”. Next, you inspect the login form. You do this by right clicking on one of the login boxes and clicking inspect element. On inspecting element, you should see input tags and then a parent form tag somewhere above it. This shows that logins are basically forms being POSTed to the server-side of the website.

Now, note the name attribute of the input tags for the username and password boxes, they would be needed when writing the code. For this website, the name attribute for the username and the password are username and password respectively.

Next, we have to know if there are other parameters that are important for login. Let’s quickly explain this. To increase the security of websites, tokens are usually generated to prevent Cross-Site Request Forgery (CSRF) attacks.

Therefore, if those tokens are not added to the POST request then the login would fail. So how do we know about such parameters?

We would need to use the Network tab. To get this tab on Google Chrome or Mozilla Firefox, open up the Developer Tools and click on the Network tab.

Once you are in the network tab, try refreshing the current page and you would notice requests coming in. You should try to watch out for POST requests being sent in when we try logging in.

Here’s what we would do next, while having the Network tab open. Put in the login details and try logging in, the first request you would see should be the POST request.

 

Click on the POST request and view the form parameters. You would notice the website has a csrf_token parameter with a value. That value is a dynamic value, therefore we would need to capture such values using the GET request first before using the POST request.

For other websites you work on, you may not see a csrf_token, but there may be other tokens that are dynamically generated. Over time, you will get better at knowing the parameters that truly matter in making a login attempt.

The Code

Firstly, we need to use Requests and BeautifulSoup to get access to the page content of the login page.

from requests import Session
from bs4 import BeautifulSoup as bs
 
with Session() as s:
    site = s.get("http://quotes.toscrape.com/login")
    print(site.content)

 

This would print out the content of the login page before we log in. If you search for the “Login” keyword, it will be found in the page content, showing that we are yet to log in.

Next, we would search for the csrf_token keyword which was found as one of the parameters when using the network tab earlier. If the keyword shows a match with an input tag, then the value can be extracted every time you run the script using BeautifulSoup.

from requests import Session
from bs4 import BeautifulSoup as bs
 
with Session() as s:
    site = s.get("http://quotes.toscrape.com/login")
    bs_content = bs(site.content, "html.parser")
    token = bs_content.find("input", {"name":"csrf_token"})["value"]
    login_data = {"username":"admin","password":"12345", "csrf_token":token}
    s.post("http://quotes.toscrape.com/login",login_data)
    home_page = s.get("http://quotes.toscrape.com")
    print(home_page.content)

This would print the page’s content after logging in. If you search for the “Logout” keyword, it will be found in the page content, showing that we were able to log in successfully.

Let’s take a look at each line of code.

from requests import Session
from bs4 import BeautifulSoup as bs

The lines of code above are used to import the Session object from the requests library and the BeautifulSoup object from the bs4 library using an alias of bs.

with Session() as s:

A Requests Session is used when you intend to keep the context across requests, so the cookies and all other information of that session can be stored.

bs_content = bs(site.content, "html.parser")
token = bs_content.find("input", {"name":"csrf_token"})["value"]

This code utilizes the BeautifulSoup library so the csrf_token can be extracted from the web page and then assigned to the token variable. You can learn about extracting data from nodes using BeautifulSoup.

login_data = {"username":"admin","password":"12345", "csrf_token":token}
s.post("http://quotes.toscrape.com/login", login_data)

The code here creates a dictionary of the parameters to be used for log in. The keys of the dictionaries are the name attributes of the input tags and the values are the value attributes of the input tags.

The post method is used to send a post request with the parameters and log us in.

home_page = s.get("http://quotes.toscrape.com")
print(home_page.content)

After logging in, these lines of code simply extract the information from the page to show that the login was successful.

Conclusion

The process of logging into websites using Python is quite easy; however, websites are not all set up the same way, so some sites will prove more difficult to log into than others. There is more that can be done to overcome whatever login challenges you have.

The most important thing in all of this is the knowledge of HTML, Requests, BeautifulSoup, and the ability to understand the information obtained from the Network tab of your web browser’s Developer tools.

]]>
Shahriar Shovon <![CDATA[Work with VMware Workstation Pro Shared VMs on Ubuntu]]> https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=33030 2018-11-27T14:53:09Z 2018-11-26T18:56:43Z You can share virtual machines with VMware Workstation Pro. A shared VM can be accessed over the network from another computer with VMware Workstation Pro installed. It is a great feature in my opinion. In this article, I will show you how to work with shared VMs with VMware Workstation Pro on an Ubuntu host. Let’s get started.

Changing The Shared VM Path:

The path where shared VMs are stored is different from the path where new VMs are stored. To change the shared VM path, go to Edit > Preferences as marked in the screenshot below.

Now, go to the Shared VMs tab from the Preferences window. As you can see, the default Shared VMs Location is /var/lib/vmware/Shared VMs

To change the default Shared VMs Location, just click on the textbox and type in a new path for your Shared VMs. Once you’re done, click on Apply.

Now, you may see the following dialog box. Just type in your Ubuntu login user’s password and click on Authenticate.

The Shared VMs Location should be changed. Now, click on Close.

Sharing a Virtual Machine on VMware Workstation Pro:

Now, right click on a virtual machine that you want to share and go to Manage > Share… as marked in the screenshot below.

NOTE: To share a virtual machine, the virtual machine that you want to share must be powered off. Otherwise, you won’t be able to share that virtual machine.

Now, you will see the following wizard. If you share a virtual machine, you won’t be able to use some of the VMware Workstation Pro functionalities such as Shared Folders, AutoProtect, Drag & Drop, Copy & Paste. But you can access the VM remotely, use User Access Control for the VM, start and stop the VM automatically.

Click on Next.

You can either create a new clone of the virtual machine and share it, or just share the virtual machine itself. To just share the virtual machine, select Move the virtual machine from the VM Sharing Mode section. To create a new clone of the virtual machine and share it, select Create a new clone of this virtual machine from the VM Sharing Mode section. You can also change the name of your shared VM from the Shared VM Name section of the wizard.

Once you’re done, click on Finish.

Your virtual machine should be shared. Now, click on Close.

As you can see, the virtual machine is in the Shared VMs section.

Now, start the virtual machine.

As you can see, the virtual machine has started.

Accessing the Shared Virtual Machines:

Now, you can access the shared virtual machine from another computer with VMware Workstation Pro installed. First, run the following command to find out the IP address of the computer where you shared a VM from.

$ ip a

As you can see, the IP address in my case is 192.168.21.128. Yours should be different. So, make sure to replace 192.168.21.128 with yours from now on.

Now, open VMware Workstation Pro on another computer and go to File > Connect to Server… as marked in the screenshot below.

Now, type in the IP address and login information of your Ubuntu machine where the VMware Workstation Pro VM is shared from, and click on Connect.

Now, click on Connect Anyway.

Now, click on any one of the three options depending on whether you want to save login information or not.

You should be connected. As you can see, all the information about the Ubuntu machine is displayed here. Also, all the shared VMs should be listed here. The Debian 9 LXDE VM that I shared is listed here. Double click on the VM that you want to use from the list.

As you can see, the VM is opened. Now, you can use it from this remote VMware Workstation Pro instance.

Stop Sharing VMs:

You can also stop sharing VMs. If you stop sharing a VM, it will be moved to the default virtual machine directory from the default share directory. To stop sharing VMs, first, power off the VM that you don’t want to share anymore.

Now, right click on the shared VM and go to Manage > Stop Sharing… as marked in the screenshot below.

Now, make sure the path where it will be moved is correct and will not replace other virtual machines. If you want, you can change it: just click on the Browse… button and select a new folder for your VM. Once you’re done, click on Finish.

The VM will not be shared anymore. Now, click on Close.

So, that’s how you work with shared VMs on VMware Workstation Pro on Ubuntu. Thanks for reading this article.

]]>
Shahriar Shovon <![CDATA[How to Install Ubuntu on Windows 10 WSL]]> https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=32999 2018-11-27T02:57:15Z 2018-11-26T18:23:05Z The full form of WSL is Windows Subsystem for Linux. It is a feature of Windows 10 that lets you install and run a full-fledged Linux environment on Windows 10. Windows does not use any virtualization technique here. Instead, Microsoft built a compatibility layer (WSL) to run Linux binaries on Windows. So, it’s fast and does not require much memory to run. In this article, I will show you how to install Ubuntu on Windows 10 using Windows WSL. Let’s get started.

Enabling WSL:

First, you have to enable WSL on Windows 10. It is really easy.  First, go to the Settings app from the Start menu.

Now, click on Apps.

Now, from the Apps & features tab, click on Programs and Features as marked in the screenshot below.

Now click on Turn Windows features on or off from the Programs and Features as marked in the screenshot below.

Now, check the Windows Subsystem for Linux checkbox as marked in the screenshot below and click on OK.

Now, click on Restart now. Windows 10 should reboot.

Installing and Configuring Ubuntu on Windows 10 WSL:

Once your computer starts, open Microsoft Store from the Start menu as shown in the screenshot below.

Now, search for ubuntu. As you can see in the screenshot below, you can install Ubuntu 16.04 LTS or Ubuntu 18.04 LTS at the time of this writing.

I decided to install Ubuntu 16.04 LTS in this article. So, I clicked on it. Now, click on Get as marked in the screenshot below to install Ubuntu.

As you can see, Ubuntu is being installed from the Microsoft Store. It may take a while to complete.

After a while, Ubuntu should be installed.

Now, start Ubuntu from the Start menu as shown in the screenshot below.

As you’re running Ubuntu on Windows 10 for the first time, you will have to configure it. Just press <Enter> to continue.

Now, you have to create a user account on Ubuntu. Type in the username and press <Enter>.

Now, type in a new password for the username you picked and press <Enter>.

Now, retype the password and press <Enter>.

A new user for Ubuntu should be created.

Now, you can run any Ubuntu Linux command here. I ran the lsb_release -a command and as you can see in the screenshot below, I am running Ubuntu 16.04.5 LTS on Windows 10 through WSL.

The Ubuntu WSL version uses a custom version of the Linux kernel, as you can see in the screenshot below.
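
If you want to verify these details on your own installation, the usual commands work from the Ubuntu shell:

$ lsb_release -a
$ uname -r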

You can also exit the bash shell as you always do, with the exit command.

Once you do the initial configuration, every time you run the Ubuntu app, you will see a bash console as shown in the screenshot below.

As you can see, Ubuntu’s free command also works.

Installing Ubuntu Packages:

You can also install Ubuntu packages here. The popular apt and apt-get commands are available. For example, let’s install the htop package on this version of Ubuntu and see what happens. First, open the Ubuntu app and run the following command to update the APT package repository cache:

$ sudo apt update

As you can see, the APT package repository cache is updated.

Now, install htop with the following command:

$ sudo apt install htop

As you can see, htop is installed.

Now, you can run htop with the command:

$ htop

As you can see, htop is running.

So, that’s how you install and use Ubuntu on Windows 10 through WSL. Thanks for reading this article.

]]>
Sidratul Muntaha <![CDATA[Install Google Chrome from Ubuntu PPA]]> https://linuxhint-com.zk153f8d-liquidwebsites.com/?p=32984 2018-11-27T02:51:44Z 2018-11-26T18:00:40Z Google Chrome is, without a doubt, one of the best web browsers in the world. It’s fast, powerful and looks really great. Developed and maintained by Google, Chrome is available across a number of platforms – Windows, Linux, and mobile devices (Android, iOS etc.). If you’re using Ubuntu or any other Debian/Ubuntu based distro, you can easily install Google Chrome on your system using the official DEB package. We’ve already discussed installing Google Chrome that way.

Today, we’ll follow a less troublesome and better way of installing Google Chrome – using the official Google repository.

Setting up the Chrome repository

To get access to the Chrome repository, you have to add Google’s public signing key to your system. Fire up a terminal and run the following command –

wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | sudo apt-key add -

After the key is installed, it’s time to set the Google repository for Chrome.

sudo sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'

Update your APT cache –

sudo apt update

It’s finally time to install Google Chrome!

Installing Google Chrome

Run the following command –

sudo apt install google-chrome-stable

Other Google Chrome channels

If you’re a long time user of Google Chrome, you may already know about the various Google Chrome channels, for example, Beta, Canary etc. In the case of Linux, Google offers 3 different flavors of Chrome – Stable, Beta and Unstable. Unstable and Beta offer bleeding-edge features and other tweaks that are still under revision. Those features may or may not come to the Stable channel anytime soon. If you would like to help the developers or enjoy the newest features, you can get one of them as your browser.

Run the following command –

# Chrome beta
sudo apt install google-chrome-beta

# Chrome unstable
sudo apt install google-chrome-unstable

Make sure that you add the key and repo first.

Enjoy!

]]>