Keras is among the most effective and user-friendly libraries for building deep-learning models: a high-level, Python-based neural network API constructed on top of well-known deep-learning frameworks like TensorFlow or CNTK. It is designed to be user-friendly, extensible, and modular to enable quicker experimentation with deep neural networks. It handles both feedforward and recurrent networks, separately as well as in combination. Because Keras does not implement low-level tensor operations itself, it delegates them to a backend package. The deployment of Keras, fundamentals of deep learning, Keras models, Keras layers, Keras modules, and hands-on programming will be covered in this lesson.
Set Up Keras on Linux
Step 01: Update System
Before demonstrating the use of the “Keras” library of Python, we have to fully update our Linux machine to ease further installations. For this purpose, quickly open the “console” application from the built-in applications of the system. Within the query area, run Linux’s “update” query with the “apt” utility and “sudo” privilege to quickly update the system. It requires the user password to continue this process so that the system can be updated properly.
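The update step from this paragraph looks like the following in a terminal (a sketch for Ubuntu; “sudo” will prompt for your password):

```shell
# Refresh the package index so later installs fetch current versions
sudo apt update
# Optionally apply the pending upgrades as well
sudo apt -y upgrade
```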
Step 02: Install Python and Pip
To use deep learning through Keras and TensorFlow, we must have the latest version of Python configured on our machine. Therefore, we install Python’s updated package along with its necessary “pip” utility. For that, we again utilize the “apt” utility of the Ubuntu 20.04 Linux system in an “install” query on the shell, followed by the names of the packages to be installed, i.e., python3 and python3-pip. On executing this simple query in the console area, the system installs and configures both packages.
On the other hand, if your system has an old version of the “pip” utility for Python installed, you should update it before moving forward.
After the successful configuration of Python and its “pip” utility, it’s time to upgrade Setuptools for Python to avoid any issues in the near future. Hence, run the install query with the “pip3” utility and the --upgrade option to upgrade Setuptools. It may ask for the current system password to continue.
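Steps 02’s commands can be sketched as follows (again for Ubuntu’s apt):

```shell
# Install Python 3 and its package manager, pip
sudo apt install -y python3 python3-pip
# Upgrade pip and setuptools to avoid dependency issues later
pip3 install --upgrade pip setuptools
```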
Step 03: Install TensorFlow
TensorFlow is the most well-known symbolic math package for building machine learning models and neural networks. After the previous installations, we execute the same “pip3” install query followed by the “tensorflow” package name.
Other TensorFlow-related utilities are required as well. They are installed along with TensorFlow, and the process may take 10 minutes or more.
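The install command itself is a one-liner; pip pulls in the related utilities automatically:

```shell
# Install TensorFlow and its dependencies (may take several minutes)
pip3 install tensorflow
```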
Step 04: Install Essential Packages
After the successful configuration of TensorFlow on the Ubuntu 20.04 system, we also need to configure some build packages along with other utilities like “git” and “cmake”. Using the same “apt” tool, we install the necessary packages with a single install query.
The installer pauses to confirm the installation; tap “y” and continue.
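A typical package list for this step looks like the following; the exact set of build dependencies is an assumption here, since the article does not name them all:

```shell
# git, cmake, and common build tooling for compiling Python extensions
sudo apt install -y git cmake build-essential python3-dev
```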
Step 05: Create Virtual Environment
After the necessary installations, it’s time to create a virtual environment. Therefore, we use the python3 utility with the “-m” option and the “venv” module to create the virtual environment “kerasenv”. The “ls” query shows that the environment folder is created.
Now, we need to move into the Keras virtual environment folder, so we use the “cd” instruction along with the name of the virtual environment folder. After that, we move into the “bin” folder of this virtual environment and list its contents. To activate this Python environment, we run the “source” instruction with the “activate” file. The virtual environment becomes active under the name “kerasenv”.
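The environment creation and activation described above, as console commands (the common one-liner `source kerasenv/bin/activate` does the same activation without changing directories):

```shell
# Create the virtual environment with the venv module
python3 -m venv kerasenv
ls
# Enter the environment's bin folder and activate it
cd kerasenv/bin
ls
source activate
```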
Step 06: Install Python Libraries
After setting up the Python virtual environment successfully, you have to install all the required Python libraries before the installation of Keras. Therefore, we install the pandas library first in the same virtual environment using Python’s “pip” package.
The system starts configuring it within Python’s virtual environment.
After installing the pandas library, install the NumPy library in the same way.
In a very similar way, install the scipy library of Python in the same environment.
Now, install the matplotlib library of Python in the environment.
Python uses clustering and regression algorithms from machine learning to build neural network models. For this, it has the scikit-learn library, which we install with the “pip” utility along with the “-U” option to configure the required packages as well.
For visualization in deep learning, we need the seaborn library of Python to be installed. Therefore, we install it in the same environment with the “install” query.
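Inside the activated environment, the whole library stack from this step can be installed like this:

```shell
pip install pandas          # data frames
pip install numpy           # numeric arrays
pip install scipy           # scientific computing
pip install matplotlib      # plotting
pip install -U scikit-learn # clustering and regression algorithms
pip install seaborn         # statistical visualization
```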
Step 07: Install Keras Library
After the installation of all the necessary prerequisite libraries of Python, we can finally install Keras within the Python virtual environment. The “pip” utility is used for this purpose within our “install” query with the module name, i.e., “Keras”. If the system shows that the requirement is already satisfied, it is already installed and configured.
If it’s not already installed, this query immediately starts downloading and configuring it in the virtual environment.
After the full configuration and installation of the “Keras” library in the virtual environment, it’s time to show its details on the shell via the “pip show” query. The “show” query presents the version of Keras installed in our Python virtual environment, its name, summary, web homepage, author, author’s email, license, the location it takes on our system, and more.
After the installation of the Keras and TensorFlow libraries of Python, we need to quit the virtual environment. For that, run the “deactivate” query on the shell.
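The final Keras commands for this virtual environment, in order:

```shell
pip install keras   # install Keras (skipped if already satisfied)
pip show keras      # name, version, summary, license, location, etc.
deactivate          # leave the kerasenv virtual environment
```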
Step 08: Install Anaconda Cloud
Anaconda is a Python distribution whose tooling is convenient for building neural network examples in Python. Therefore, we download its installer script to our system.
As per the “ls” query, this file resides in the current home folder of the Linux machine. You need to verify its checksum first, i.e., confirm that the download is intact, via the sha256sum query.
After that, we need to install the downloaded Anaconda Bash file on our system using the “bash” instruction and the file name on the same console. It asks us to review the license agreement before the installation, so we tap “Enter” to continue.
After going through its license agreement, it asks us to type “yes” if we agree with the terms. Press Enter to install it in the default location, or write the path to the directory where you want to install it. Otherwise, use “Ctrl-c” to cancel the installation.
It displays the long list of packages that will be installed in this process and then starts installing them.
After a while, Anaconda was successfully installed with its additional packages.
You need to run the “activate” file from the anaconda folder via the “source” query as root.
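Put together, the Anaconda steps look like this; the installer filename below is a placeholder, so substitute the actual version you downloaded:

```shell
# Verify the download, then run the installer (replace the filename
# with the release you actually fetched from anaconda.com)
sha256sum Anaconda3-<version>-Linux-x86_64.sh
bash Anaconda3-<version>-Linux-x86_64.sh
# Activate the base environment, then launch the navigator
source ~/anaconda3/bin/activate
anaconda-navigator
```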
Try launching the anaconda navigator employing the following query.
To create and work in a new conda environment, run the “conda create” instruction with the “--name” option followed by the new environment name, i.e., PyCPU.
This process requires our confirmation on the creation of the new environment. Tap “y”.
To activate and run the newly made conda environment, use the “conda activate” query with the name of your new environment; the PyCPU environment is now activated.
Step 09: Install Spyder IDE
The Spyder IDE must be installed within this environment for the execution of Python programs. For this, we have tried the conda install query at the PyCPU environment shell with the keyword “spyder”.
Tap “y” to continue installing Spyder.
Step 10: Install Pandas and Keras Library
After the installation of Spyder, install the pandas library of Python in the Anaconda environment using the conda install query with the “-c” option.
Again, press “y” to proceed.
After the successful configuration of pandas, install the Keras library with the same query.
Press “y” to proceed.
You can launch the Spyder IDE within the anaconda current environment console as follows:
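Steps 08 through 10 in the conda console, collected in order (the channel name “anaconda” for “-c” follows the text):

```shell
conda create --name PyCPU        # make the new environment
conda activate PyCPU             # switch into it
conda install spyder             # the IDE for running Python programs
conda install -c anaconda pandas # pandas from the anaconda channel
conda install -c anaconda keras  # keras from the anaconda channel
spyder                           # launch the IDE from this environment
```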
The Spyder IDE then prepares to launch.
The hidden folder “.keras” resides in the home directory (list it with “ls -a”). Open its “keras.json” file to review or adjust the backend configuration.
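The default contents of “keras.json” look like this (these are the stock values; change “backend” to switch to another backend such as Theano):

```json
{
    "image_data_format": "channels_last",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "tensorflow"
}
```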
Set Up Keras and TensorFlow on Windows
To set up Keras and TensorFlow in a Windows environment, you need to make sure that the Python language, along with its “pip” utility and Anaconda Navigator, is already set up. After setting it up, open Anaconda Navigator from your search area and move to the “Environments” tab. In this tab, you will find the name of the environment you are currently working in, i.e., base. In the area below, tap the “Create” option.
Here, while still inside the base environment, create a new environment named “TensorFlow”. Select Python’s latest version to be used and tap the “Create” button to carry on.
You will see that the environment has started to load.
After a while, the TensorFlow environment gets fully installed.
From its leftmost area, you can see all the installed and available libraries and modules for Python, as presented below:
Now, we need to install the TensorFlow backend library of Python using this area. In the search bar, write “TensorFlow” and mark the same case package from the shown list to install it. Tap on the “Apply” button to proceed with the installation of TensorFlow along with its sub-modules like “Keras”.
It starts to download and configure TensorFlow in our Anaconda environment.
During installation, it displays the list of sub-packages that are going to be installed in the Anaconda environment. Tap the “Apply” button and wait a while until it has finished.
After a while, you will find all the installed packages in the same modules area. You can see that the Keras library has been installed with the other packages, so we don’t have to install it separately.
From the Windows search bar, search for the “Jupyter” keyword. The application named “Jupyter Notebook (TensorFlow)” is shown along with others. Tap on it to launch the Jupyter Notebook with the TensorFlow backend enabled. Create a new Python file and start working.
Deep Learning Via Keras
Deep learning includes layer-by-layer analysis of the input, with each layer gradually extracting higher-level details from it. Keras provides a full framework to form any sort of neural network. Keras is both expressive and incredibly simple to understand. It enables neural network models ranging from the most naive to the largest and most complex.
Artificial Neural Network (ANN)
The “Artificial Neural Network” (ANN) methodology is the most widely used and fundamental method of deep learning. It takes its cues from the human brain, the most complicated natural component of our body, which serves as its model. Roughly 86 billion microscopic cells called “neurons” make up an individual’s brain. Neurons are linked together by nerve fibers called axons and dendrites. The primary function of an axon is to send data from one linked neuron to the next.
The Keras API architecture is classified into three main parts, listed below. Let’s take a look at each one distinctly.
- Models
- Layers
- Core Modules
Keras models come in exactly two types, i.e., the sequential model and the functional API.
Fundamentally, a sequential model is a linear stack of Keras layers. This simple sequential model can describe almost all of the neural networks currently in use. The sequential model also exposes the Model class, from which a customized model may be made; the sub-classing approach may be used to build a sophisticated model of our very own. A demonstration of the sequential model follows.
The script starts by importing the Sequential model from keras.models, and the next line creates a sequential model. After that, importing the Dense layer lets us create an input layer and add it to the model. A hidden dense layer is then created and added to the model, and the same is done for the output dense layer.
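A sketch of that script, using the Keras bundled with TensorFlow 2.x. The layer sizes (784 inputs, 512- and 256-unit dense layers, 10 outputs) are illustrative assumptions, and an explicit Input layer stands in for the older input_shape argument:

```python
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# A linear stack of layers: input -> hidden -> output
model = Sequential()
model.add(Input(shape=(784,)))              # declare the input shape
model.add(Dense(512, activation='relu'))    # input dense layer
model.add(Dense(256, activation='relu'))    # hidden dense layer
model.add(Dense(10, activation='softmax'))  # output dense layer

print(len(model.layers))  # 3
```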
Access the Model
You can get information about your model’s layers, its input data, and its output data. The model.layers attribute lets you access all the layers; model.inputs shows the input tensors, and model.outputs displays the output tensors.
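For example, on a small two-layer model (sizes are arbitrary here):

```python
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([Input(shape=(4,)),
                    Dense(8, activation='relu'),
                    Dense(1, activation='sigmoid')])

print(model.layers)   # the two Dense layer objects
print(model.inputs)   # the input tensor(s)
print(model.outputs)  # the output tensor(s)
```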
Serialize the Model
It is easy to return the model used in the script as an object or as JSON. For instance, the get_config() function yields the model as an entity/object, and the from_config() function creates a new model using that object as a parameter.
You can also change your model to JSON using the to_json() function.
To get the whole summary regarding the layers used within the model along with some additional info, call the summary() function.
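All three serialization calls together, on a minimal one-layer model (layer size assumed):

```python
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential, model_from_json
from tensorflow.keras.layers import Dense

model = Sequential([Input(shape=(4,)), Dense(8, activation='relu')])

config = model.get_config()               # the model as a Python object
rebuilt = Sequential.from_config(config)  # a fresh model from that object

json_string = model.to_json()             # the architecture as JSON
rebuilt2 = model_from_json(json_string)   # and back again

model.summary()                           # layer-by-layer summary on stdout
```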
Train and Predict the Model
To train and predict, we use the compile function, fit function, evaluate function, and predict function.
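A minimal end-to-end run of those four functions; the random data set and model sizes are placeholders, just to exercise the API:

```python
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Tiny random binary-classification data set
x = np.random.random((100, 4))
y = np.random.randint(0, 2, size=(100, 1))

model = Sequential([Input(shape=(4,)),
                    Dense(8, activation='relu'),
                    Dense(1, activation='sigmoid')])

model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])                  # configure training
model.fit(x, y, epochs=2, batch_size=16, verbose=0)  # train
loss, acc = model.evaluate(x, y, verbose=0)          # measure performance
preds = model.predict(x[:5], verbose=0)              # infer on new samples
print(preds.shape)  # (5, 1)
```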
Every input, hidden, and output layer in the proposed neural network model corresponds to a Keras layer in the real model. Any sophisticated neural network may be quickly developed using the many pre-built layers of the Keras library. There are different kinds of Keras layers, i.e., core layers, pooling layers, recurrent layers, and convolution layers. In the following demonstration, the first two lines import the Sequential model and the Dense, Activation, and Dropout layers.
We use the Sequential() API to create a dropout sequential model. Using the “relu” activation, we create a dense layer via the “Dense” API. To cater to over-fitting of the dense layer, we apply the Dropout() API, i.e., dropout layering via the Dropout() function. After this, we add another dense layer with the “relu” activation. To keep the dense layers from over-fitting, we place Dropout layers between them. In the end, we add our final dense layer with the “softmax” activation.
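The dropout model described above can be sketched like this (layer widths and the 0.2 dropout rate are assumptions):

```python
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

model = Sequential()
model.add(Input(shape=(784,)))
model.add(Dense(512, activation='relu'))    # dense layer with relu
model.add(Dropout(0.2))                     # drop 20% of units vs over-fitting
model.add(Dense(512, activation='relu'))    # another dense layer
model.add(Dropout(0.2))                     # another dropout layer
model.add(Dense(10, activation='softmax'))  # final softmax layer
```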
Have you ever performed layering while cooking? If so, then this concept would not be difficult for you to understand. The result of one level will serve as the input data for the succeeding layer. Here are the basic things required to build a whole new layer:
- Input Data Shape
- Total neurons/units in a layer
Input Data Shape
Within Python, every sort of input is converted into an array of numbers and then fed to the model. We need to specify the input shape to get the output as per our requirement. In the following examples, we have specified the input shape (3,3), i.e., 3 rows and 3 columns; the output displays the matrix shape.
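Declaring a (3,3) input shape looks like this; the leading batch axis is left unspecified:

```python
from tensorflow.keras.layers import Input

# An input placeholder for 3x3 samples; the first axis is the batch size
x = Input(shape=(3, 3))
print(x.shape)  # batch axis None, then 3 rows and 3 columns
```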
The initializers module of Keras layers provides many functions to specify the initial weights for input data. For instance, the zeros() function specifies 0 for all weights, ones() specifies 1 for all, and constant() specifies a user-provided constant value for all, and more. For a better understanding, we have used the identity() function to generate an identity matrix. The remaining functions are covered in the Keras documentation.
There are different constraint functions available to apply constraints on the “weight” parameter of a layer, i.e., non-negative, unit norm, max norm, MinMaxNorm, and many more. In the following illustration, we have applied the max-norm constraint to the weights. The “max_value” parameter is the upper bound of the constraint, and “axis” is the dimension along which the constraint is applied, i.e., dimension 1.
Regularizers impose penalties on layer parameters throughout optimization. The regularizers module provides functions to do so, i.e., the L1 regularizer, the L2 regularizer, and the combined “L1 and L2” regularizer. The simplest case is the L1 regularizer function.
A special function called the activation function is employed to determine whether a particular neuron is active or not. The activation function transforms the incoming data in a non-linear manner, which helps the neurons learn more effectively. Several activation methods, such as “relu” and “softmax”, appear in the examples throughout this guide.
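One dense layer can combine all four ideas from this section: an initializer, a constraint, a regularizer, and an activation. The unit counts and penalty strengths below are illustrative assumptions:

```python
import tensorflow as tf
from tensorflow.keras import constraints, initializers, regularizers
from tensorflow.keras.layers import Dense

# An initializer instance can be called directly on a shape:
# Identity() yields an identity matrix (square shapes only)
ident = initializers.Identity()((3, 3))
print(ident)  # 3x3 identity matrix

layer = Dense(
    4,
    kernel_initializer=initializers.Ones(),              # start weights at 1
    kernel_constraint=constraints.MaxNorm(max_value=2,   # cap weight norms
                                          axis=1),
    kernel_regularizer=regularizers.L1(0.01),            # L1 weight penalty
    activation='relu',                                   # activation function
)
out = layer(tf.ones((1, 3)))   # building on a 1x3 batch gives a 3x4 kernel
print(layer.get_weights()[0])  # all ones, per the initializer
```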
As we know, programming modules usually contain functions, classes, and variables to be used for different, specific purposes. Just like that, Python’s Keras library contains many modules.
One of its most well-known and used modules is the “Backend” module, designed to use Python backend libraries like TensorFlow and Theano. Using the backend module, we can utilize backend functions from the TensorFlow and Theano libraries. To select a backend library, we need to specify it in the configuration file “keras.json”, which lives in the hidden .keras folder. By default, the backend is specified as “tensorflow”, but you can change it to another one as well, i.e., Theano or CNTK.
Within our example, we will be using the TensorFlow library as a backend. To load the configurations of the backend from the keras.json file of the root “keras” folder, use:
- from keras import backend as k
After successfully importing the backend, it’s time to get the backend information using the variable “k”. First, we fetch the name of the backend we are using via the backend() function; it returns “tensorflow” as the backend value. To get the default float type of the backend, we call the floatx() function via Keras’s “k” object; it shows that we are using float32 values.
To get the image data format, use the image_data_format() function with the “k” variable; it shows that our backend uses the “channels_last” image data format. To get the fuzz factor used in numeric expressions, call the epsilon() function with the variable “k”; it returns 1e-07, i.e., 10 to the power of -7. That’s all about fetching backend information.
It’s time to take a look at some backend functions of TensorFlow to understand their functionality. One of the most used backend functions is get_uid(), which provides unique ids relative to the default graph. Using it with the prefix=’’ parameter returns “1”, i.e., as per the usage. Using it again returns “2”, since the counter is incremented on every call. After calling the reset_uids() function, the counter is reset, so calling get_uid() once again returns 1.
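The backend-information and id helpers together (the prefix name “graph_demo” is an arbitrary choice, and the 1/2/1 sequence assumes a fresh session):

```python
from tensorflow.keras import backend as k

print(k.backend())            # 'tensorflow'
print(k.floatx())             # 'float32' by default
print(k.image_data_format())  # 'channels_last' by default
print(k.epsilon())            # 1e-07 by default

# get_uid() hands out incrementing ids per prefix; reset_uids() starts over
print(k.get_uid(prefix='graph_demo'))  # 1 (in a fresh session)
print(k.get_uid(prefix='graph_demo'))  # 2
k.reset_uids()
print(k.get_uid(prefix='graph_demo'))  # 1 again
```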
The placeholder() function is used to hold tensors of different dimensional shapes. For example, in the following illustration, we use it to hold a 3-D image tensor via the Keras variable “k” and save it to another variable “d”. Printing the variable “d” shows the properties of the shape used within the placeholder.
The “int_shape()” function is used to display the shape of a value saved in the placeholder “d”.
Have you ever multiplied two vectors? If so, it will not be challenging for you to multiply two tensors. For this, the backend library provides the “dot” function. First, to hold the two different shapes, we pass shape values to the placeholder() function in the first 2 lines to create two holders, “x” and “y”. The dot() function takes the “x” and “y” holders, multiplies both tensors, and saves the result to another variable, “z”. Printing the “z” tensor displays the resultant tensor shape (1, 5) on the screen.
The ones() function of the backend module initializes all the values of a particular shape to 1. For instance, we apply the ones() function to the tensor shape (3,3) and save the result to the variable “v”. The eval() function is used here to evaluate the value of the variable “v” and display it in the Python environment. In return, it converts the shape (3,3) to an array matrix of all ones with the float32 data type.
The batch size specifies the total samples to process before updating a model. The batch_dot() function of the TensorFlow backend is mainly used to find the multiplication result of two batches of data. Therefore, we create two tensor variables, v1 and v2, using the Input() function to define them as inputs. After that, we apply the batch_dot() function to both tensor variables, v1 and v2, and save the resultant value to another variable, “v3”. Printing the variable v3 shows the resultant shape (2,2).
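These backend helpers (ones, eval, dot, batch_dot) map onto plain TensorFlow operations, so a version-independent sketch can use tf directly; concrete tensors stand in for the TF1-era placeholders, and the batch shapes below are illustrative:

```python
import tensorflow as tf

# ones() fills a shape with 1.0; .numpy() (like the backend eval())
# pulls the value back as a NumPy array
v = tf.ones((3, 3))
print(v.numpy())  # 3x3 matrix of ones, dtype float32

# A (1,3) x (3,5) product yields a (1,5) tensor, as in the text
x = tf.ones((1, 3))
y = tf.ones((3, 5))
z = tf.matmul(x, y)
print(z.shape)    # (1, 5)

# Batch-wise multiplication: one matrix product per sample in the batch
v1 = tf.ones((2, 3, 4))
v2 = tf.ones((2, 4, 5))
v3 = tf.matmul(v1, v2)
print(v3.shape)   # (2, 3, 5)
```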
If you have ever worked in another language, you may have initialized variables with the keyword “var” or without it, often with data types like integer, string, or character. Within Python’s Keras library, we can create a variable using the variable() function on some tensor data in the form of samples.
In the following example, we create a variable “d” by passing sample data of two lists to the variable() function with the Keras object “k”. After adding this variable, we call the transpose() function on this variable “d” via the Keras object “k” to find the transpose of the sample data within it. The resultant transpose is saved to a variable “val”. Python’s print statement is used here to print the value of the “val” resultant variable; it displays the function applied to the variable “d” and the total number of elements in each list.
After this, we try the “eval” function on the “val” variable to get the transpose of the samples added to the variable “d”, and the print function displays it. You can see the transpose of the two lists in the output.
The previous illustration used Keras backend functions. The transpose of a data set can also be found using NumPy arrays. For this, we import the NumPy library as “n” at the start. The basic format is the same, but we initialize the data set with the array() function instead of the variable() function. The sample NumPy array is kept in the variable “d”. The same NumPy object “n” is used to call the transpose() function on the “d” variable and save its result to the variable “val”.
The print statement calls the “val” variable to display its transposed tensor. Notice that, to display the resultant transposed value of the “val” variable, we don’t need the “eval” function here. Then, we pass the NumPy array to the variable() function and save the result to the variable “z”. Printing “z” displays output in the same format as the earlier variable example.
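Both transposes side by side; tf.constant stands in for the backend variable() helper here, and the two sample lists are arbitrary:

```python
import numpy as n
import tensorflow as tf

# Backend-style: wrap the two sample lists in a tensor, then transpose
d = tf.constant([[1, 2, 3], [4, 5, 6]])
val = tf.transpose(d)
print(val.numpy())  # shape (3, 2)

# The same transpose with plain NumPy -- no evaluation step needed
d2 = n.array([[1, 2, 3], [4, 5, 6]])
val2 = n.transpose(d2)
print(val2)         # shape (3, 2), same values
```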
The word “sparse” describes a tensor whose entries are mostly zeros. In this example, we use the is_sparse() function of the backend module to check whether a tensor is stored sparsely.
First, we call the placeholder() function to hold the tensor shape (3,3) with the argument sparse set to true. This placeholder value is kept in the variable “x” and displayed. The output shows the information regarding the placeholder variable “x”,
for instance, its data type, shape, and the function applied to it. After this, we use the print statement once more, calling the is_sparse() function in it. This function takes the variable “x” as its argument and reports whether the “x” tensor is sparse or not. The output displays “true”.
A dense tensor is one that uses a contiguous block of memory to store its information in an adjacent manner and represent all of its values. The “to_dense()” function of the backend module lets us convert a sparse tensor to a dense tensor. Hence, we use the same placeholder function to add the tensor to variable “x”, and this tensor has been set to “sparse”.
The “to_dense()” function is applied to the sparse tensor variable “x”, i.e., to convert it to a dense tensor, and the result is saved to another variable, “res”. Now, “res” is a dense tensor itself. The print statement is used to print out the “res” variable; it displays the information about the converted variable “res”, i.e., successfully converted from sparse to dense, and more.
Then, another print function is called using the is_sparse() function in it to check whether the variable “res” is sparse or not. The output shows that the variable “res” is not sparse, i.e., we have already converted it to a “dense” tensor.
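A modern equivalent of this sparse-to-dense round trip uses tf.sparse directly (the single non-zero entry at position (0,0) is an arbitrary example):

```python
import tensorflow as tf

# A 3x3 tensor with one non-zero entry, stored sparsely
x = tf.sparse.SparseTensor(indices=[[0, 0]], values=[1.0],
                           dense_shape=[3, 3])
print(isinstance(x, tf.sparse.SparseTensor))    # True -- it is sparse

# Expand it into an ordinary dense tensor
res = tf.sparse.to_dense(x)
print(isinstance(res, tf.sparse.SparseTensor))  # False -- now dense
print(res.numpy())  # 1.0 at (0,0), zeros elsewhere
```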
The random_uniform_variable() function in the Keras backend module is specifically designed to initialize a tensor from a uniform distribution. It takes three main arguments. The very first argument, “shape”, defines the shape’s rows and columns in tuple form.
The next two arguments are the lower and upper bounds of the uniform distribution. In this illustration, we initialize two tensors, “x” and “y”, using the standard uniform distribution via the random_uniform_variable() function. Both tensors have different shape formats, i.e., rows and columns, but the same bounds, i.e., low=0 and high=1.
After this, we use the “dot” function, taking the “x” and “y” tensors in it for multiplication. The result of this multiplication is saved to the variable “z”. In the end, int_shape() is used to display the shape of the resultant tensor “z”. The output shows the shape (2,2).
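The same computation with tf.random.uniform standing in for random_uniform_variable (the (2,3) and (3,2) shapes follow the text; the bounds are low=0, high=1):

```python
import tensorflow as tf

# Two tensors drawn from a uniform distribution on [0, 1)
x = tf.random.uniform(shape=(2, 3), minval=0, maxval=1)
y = tf.random.uniform(shape=(3, 2), minval=0, maxval=1)

z = tf.matmul(x, y)  # (2,3) x (3,2) -> (2,2)
print(z.shape)       # (2, 2)
```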
If you want to use some of the very useful helper functions for deep learning in Python, you should utilize the utils module of the Keras library in your scripts. For instance, to read data stored in HDF5 format, you can import the HDF5Matrix class and use it in the script.
The to_categorical() function allows you to convert a class vector into a binary class matrix. Let’s say we have imported the to_categorical() function from the utils module and initialized a vector “A”. The vector “A” is passed to the to_categorical() function, and the binary matrix for this class vector “A” is displayed.
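For example, with the sample class vector [0, 2, 1, 3] (the labels are arbitrary):

```python
import numpy as np
from tensorflow.keras.utils import to_categorical

A = np.array([0, 2, 1, 3])
B = to_categorical(A)  # one row per label, one column per class
print(B)
# [[1. 0. 0. 0.]
#  [0. 0. 1. 0.]
#  [0. 1. 0. 0.]
#  [0. 0. 0. 1.]]
```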
To print out the summary of a model used in our environment, the print_summary() function is used.
The plot_model() function represents the model in dot format and lets you save it to a document.
To sum up, the Python language is essential in today’s era, as everything is getting faster and technology is evolving rapidly. Throughout this learning guideline, we have covered the use of Python’s Keras library for deep learning and artificial neural networks. For this, we have also gone through the importance and use of its backend library “TensorFlow” to get a clear understanding. Additionally, we have explained every configuration required to set up the Keras and Anaconda environments in Python on the Ubuntu 20.04 Linux operating system. After this, we thoroughly discussed the Keras models, layers, and modules one by one, along with their most-used functions. For a demonstration of the functional API model, please check the official documentation.