Neural networks in deep learning include convolutional neural networks, recurrent neural networks, and more. In this article, we introduce neural networks in deep learning and describe the types of neural networks commonly used.

Deep learning is a subset of artificial intelligence in which a machine performs tasks that would otherwise require a human. The human brain is made up of interconnected neurons that process information based on the inputs we receive; in essence, the brain behaves like a function that takes inputs, performs operations on them, and delivers an output. Deep learning likewise uses deep neural networks that process information in a way loosely inspired by the human brain.

## What is a neural network?

A neural network is trained on inputs and consists of three kinds of layers: input, hidden, and output. Each neuron has a threshold value and an activation function that produces an output. The result we get is compared with the expected output; since these two values should be close to each other, the model learns by adjusting its weights and threshold values until it produces the correct output.
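A minimal sketch of this training idea, for a single neuron learning an AND-like task. The data, learning rate, and iteration count are illustrative assumptions, not part of the original text; the update rule is the standard logistic-regression gradient step.

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

# Toy data: output 1 only when both inputs are 1 (the AND function).
samples = [((0.0, 0.0), 0.0), ((0.0, 1.0), 0.0),
           ((1.0, 0.0), 0.0), ((1.0, 1.0), 1.0)]

w = [0.0, 0.0]   # weights, adjusted during training
b = 0.0          # bias (the neuron's learnable threshold)
lr = 0.5         # learning rate

for _ in range(2000):
    for (x1, x2), target in samples:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = y - target          # compare output with the expected output
        w[0] -= lr * err * x1     # adjust weights toward the correct output
        w[1] -= lr * err * x2
        b -= lr * err

print(round(sigmoid(w[0] + w[1] + b)))   # input (1, 1) -> 1
print(round(sigmoid(b)))                 # input (0, 0) -> 0
```

After training, the neuron's output for (1, 1) has crossed the 0.5 threshold while the output for (0, 0) stays below it, which is exactly the "adjust until the output is correct" loop described above.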

An artificial neural network is a group of algorithms used to identify and recognize patterns.

These artificial neural networks (ANNs) can be used in investment planning to increase returns, to predict house prices based on the main characteristics that determine them (location, size, and so on), and to classify images, among other tasks.

## Deep neural network (DNN)

The more layers there are, and the more neurons in each hidden layer, the more complex the model becomes. Neural networks that contain more than three layers (counting the input and output layers) are called deep neural networks, and learning with them is deep learning. Using these deep neural networks, very complex prediction and classification problems can be broken down into simpler ones.

Deep learning is, at heart, a function that transforms input into output: a deep neural network finds the relationship between input and output data. The "deepness" of the network means that it is multi-layered. The layers of a neural network are made up of nodes, and a node, like a neuron in the human brain, is a place where calculations are performed. In a node, each input is multiplied by a weight; the larger the weight, the greater the impact of that input. The weighted inputs are then summed, and finally the total passes through an activation function to produce the node's output.
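The multi-layered computation described above can be sketched as a forward pass through a tiny deep network. The layer sizes, random weights, and ReLU activation here are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

def layer(x, w, b):
    # Each node multiplies its inputs by weights, sums them with a bias,
    # then passes the total through an activation function.
    return relu(w @ x + b)

x = np.array([0.5, -1.2, 3.0])                   # input layer: 3 features
w1, b1 = rng.normal(size=(4, 3)), np.zeros(4)    # hidden layer 1: 4 nodes
w2, b2 = rng.normal(size=(4, 4)), np.zeros(4)    # hidden layer 2: 4 nodes
w3, b3 = rng.normal(size=(2, 4)), np.zeros(2)    # output layer: 2 nodes

h1 = layer(x, w1, b1)
h2 = layer(h1, w2, b2)
out = w3 @ h2 + b3       # final layer left linear in this sketch

print(out.shape)         # (2,)
```

Stacking more `layer` calls is what makes the network "deeper": each layer's output becomes the next layer's input.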

## Deep learning (DL)

Deep learning is learning with neural networks that have many hidden layers. In deep learning, for example, an image is processed across several layers, much as the human brain works: groups of neurons are each sensitive to particular features, so that together they respond to the whole image and process it.

The layers of the neural network are composed of nodes. A node, like neurons in the human brain, is a place where calculations are done. A set of activated neurons leads to learning.

Just like the human brain, the deep learning algorithm gains experience with each repetition of a task.

For example, the Chinese company SenseTime developed a system of automatic facial recognition to identify criminals, which uses real-time cameras to find criminals in the crowd. Today, this has become a common practice in the police and other government agencies.

The American company Pony.ai is another example of how DNNs can be used. It developed an artificial intelligence system for cars that can operate without a driver. This requires far more than a simple sequence of actions: it needs a much deeper learning system that can recognize people, road signs, and other important objects such as trees. The well-known company UbiTech creates AI robots. One of its creations is the Alpha 2 robot, which can live with a family, talk to its members, search for information, write messages, and execute voice commands.

## Constituents of neural network

A neural network consists of the following:

### Neuron

An artificial neuron is a mathematical function. It takes one or more inputs that are multiplied by values called weights and added together. This value is then passed to a non-linear function, known as an activation function, to become the output of the neuron.

### Weight

Weight is a parameter in a neural network that transforms the input data within the network's hidden layers. A neural network is a collection of nodes, or neurons. Inside each node there is a set of inputs, weights, and a bias value; these weights mostly operate within the hidden layers of the network.

The weights and biases (usually denoted w and b) are the learnable parameters of a machine learning model. As inputs are passed between neurons, the weights, along with the biases, are applied to those inputs.

### Function

Suppose n inputs, X1 through Xn, are given with corresponding weights Wk1 through Wkn. First, each input is multiplied by its weight, then the products are summed together with the bias value. We call the result u:

u = Σ (Wki × Xi) + b

Then the activation function is applied to u, giving f(u), and we obtain the final output of the neuron as Yk = f(u).
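The formula above, written out directly in code: the weighted sum plus bias gives u, and a sigmoid serves as the activation f. The specific weights, bias, and inputs are made-up illustrative numbers.

```python
import math

def neuron(xs, ws, b):
    u = sum(w * x for w, x in zip(ws, xs)) + b   # u = sum(w_i * x_i) + b
    return 1.0 / (1.0 + math.exp(-u))            # y = f(u), sigmoid activation

y = neuron(xs=[1.0, 2.0], ws=[0.4, -0.1], b=0.3)
# u = 0.4*1.0 + (-0.1)*2.0 + 0.3 = 0.5, and sigmoid(0.5) ≈ 0.622
print(round(y, 3))   # 0.622
```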

A good definition of a neural network is attributed to Liping Yang, who defined it as follows:

Neural networks consist of many artificial neurons that exchange information with one another, each with weights tuned based on the network's experience. Neurons have an activation threshold: a neuron fires if the weighted sum of the data sent to it crosses that threshold. The neurons that are activated are what produce learning.

## Examples of different types of neural networks

### Convolutional Neural Networks (CNN)

A Convolutional Neural Network, or CNN, is a special type of neural network used for image recognition and classification. Besides providing vision for self-driving cars and robots, CNNs are very capable in areas such as recognizing objects, faces, and traffic signs.

In deep learning, a convolutional neural network is a class of deep neural networks commonly used to analyze visual imagery. They have applications in image and video recognition, image classification, medical image analysis, and natural language processing.

CNNs are related to multi-layer perceptrons. Multi-layer perceptrons are fully connected networks, in which each neuron in one layer is connected to every neuron in the next layer; this full connectivity makes such networks prone to overfitting the data. CNNs instead exploit the hierarchical pattern in data, building more complex patterns out of smaller and simpler ones.

CNNs require relatively little pre-processing compared to other image classification algorithms. A convolutional neural network consists of an input layer and an output layer, as well as several hidden layers.

The name "convolutional neural network" indicates that the network uses a mathematical operation called convolution. Convolution is a specialized kind of linear operation. Convolutional networks are simply neural networks that use convolution in place of general matrix multiplication in at least one of their layers.
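A sketch of the operation a convolutional layer performs: slide a small kernel over the image and compute a weighted sum of the local patch at each position (deep-learning libraries typically compute cross-correlation, shown here). The image and kernel values are made up for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1     # valid-mode output height
    ow = image.shape[1] - kw + 1     # valid-mode output width
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Weighted sum over a local patch instead of a full
            # matrix multiplication over the whole input.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, -1.0]])     # simple horizontal-difference kernel

result = conv2d(image, kernel)
print(result.shape)   # (4, 3)
print(result[0])      # [-1. -1. -1.] : a constant horizontal gradient
```

Because the same small kernel is reused at every position, a convolutional layer has far fewer parameters than a fully connected layer over the same image.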

### Recurrent Neural Network (RNN)

A recurrent neural network has a recurrent neuron: a neuron whose output is fed back into itself over t time steps. This is equivalent to unrolling it into a chain of t connected copies of the same neuron.

A recurrent neural network (RNN) is a type of artificial neural network used for speech recognition, sequential data processing, and natural language processing.

Many deep networks, such as CNNs, are feedforward networks: the signal moves in one direction, from the input layer through the hidden layers to the output layer, and previous inputs are not retained in memory. Recurrent neural networks (RNNs), by contrast, have a feedback loop in which the output of the network is fed back into the network along with the next input. Thanks to this internal memory, an RNN can remember its previous inputs and use that memory to process a sequence of inputs. In other words, recurrent neural networks have a feedback loop that prevents the loss of previously acquired information, keeping it inside the network.
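The feedback loop described above can be sketched as a single recurrent step applied over a sequence: at each step the network combines the current input with its previous hidden state, so earlier inputs influence later outputs. The sizes and random weights here are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

input_size, hidden_size = 3, 4
w_xh = rng.normal(scale=0.5, size=(hidden_size, input_size))   # input -> hidden
w_hh = rng.normal(scale=0.5, size=(hidden_size, hidden_size))  # hidden -> hidden (the feedback loop)
b_h = np.zeros(hidden_size)

def rnn_step(x, h_prev):
    # The previous state h_prev is fed back in along with the next input x.
    return np.tanh(w_xh @ x + w_hh @ h_prev + b_h)

sequence = [rng.normal(size=input_size) for _ in range(5)]
h = np.zeros(hidden_size)      # the internal memory starts empty
for x in sequence:
    h = rnn_step(x, h)         # h now summarizes all inputs seen so far

print(h.shape)   # (4,)
```

The same weight matrices are reused at every step, which is the "unrolled chain of t copies of the same neuron" view mentioned earlier.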