Machine Learning (ML) is today applied in areas such as Natural Language Processing, Object Recognition, Computational Learning, and Pattern Recognition. ML enables computer systems to imitate actions a human would take by analysing given inputs and learning from them. For this, many of these systems use Deep Learning algorithms, which are loosely inspired by the structure of the human brain.

Also known as Deep Neural Networks (DNNs), Deep Learning algorithms enable a computer system to interpret an input set of data, learn from it, and draw patterns out of it. These mathematical models function loosely like the neurons in the human brain, and they can handle both structured and unstructured data. With three kinds of layers, i.e., an Input Layer, an Output Layer, and Hidden Layers, these networks are able to perform extremely complex tasks. The complexity a network can handle depends largely on the number of hidden layers it contains.

Deep Learning is gaining significant prominence: the market is expected to reach a value of $18.16 billion by 2023 [**Source:** Markets and Markets], growing at a CAGR of 41.7% over the period 2018-2023 and creating many new job roles. Students looking to work in such roles can pursue online courses to improve their skills, and a free deep learning course would be an added advantage at such a time.

These courses train students on various kinds of Deep Learning networks, each with its own pros and cons. In this article, we discuss these different types of DNNs and the purposes they are meant for.

**Feed Forward Neural Network**

- Feed Forward Networks are the most basic kind of deep learning network: control flows systematically from the input towards the output.
- These are the simplest category of Artificial Neural Networks (ANNs), built from multiple neurons arranged in layers.
- In its basic form, an FF neural network has a single hidden layer and no feedback connections; activations flow in one direction only (back propagation is still used during training, but signals never flow backwards during inference).
- This basic DNN form is useful in computer vision tasks such as image recognition and face detection.
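The one-directional flow described above can be sketched as a single NumPy forward pass; the layer sizes and random weights here are illustrative assumptions, not taken from the article:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feed_forward(x, w_hidden, b_hidden, w_out, b_out):
    """One unidirectional pass: input -> hidden -> output, no feedback."""
    h = sigmoid(x @ w_hidden + b_hidden)   # single hidden layer
    return sigmoid(h @ w_out + b_out)      # output layer

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))                # one input with 4 features
w_h, b_h = rng.normal(size=(4, 3)), np.zeros(3)
w_o, b_o = rng.normal(size=(3, 2)), np.zeros(2)
y = feed_forward(x, w_h, b_h, w_o, b_o)
print(y.shape)  # (1, 2)
```

Note that data only ever moves left to right through the two matrix multiplications; there is no loop or feedback path.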

**Radial Basis Function Network (RBFN)**

- RBFNs are a special case of deep learning networks in which no computation is performed in the input layer, and the output is a simple linear combination of the hidden activations. An RBF network has only one or two hidden layers.
- The input layer here is very simple: its only task is to pass the received input to the hidden layer, where the computations are performed.
- For this network to perform efficiently, the number of neurons in the input layer should equal the dimensionality of the data. Also, the hidden layer should have no more neurons than there are samples in the overall dataset.
- This type of network is used especially where actions like classification and regression need to be performed. Power restoration systems are one application of radial basis function networks.
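A minimal sketch of the structure described above, using a Gaussian radial basis function (the most common choice); the centres, `gamma`, and weights are illustrative assumptions:

```python
import numpy as np

def rbf_forward(x, centers, gamma, weights):
    """Input layer just forwards x; each hidden neuron measures distance
    to its centre with a Gaussian kernel; the output layer is linear."""
    sq_dist = np.sum((x[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    phi = np.exp(-gamma * sq_dist)   # (n_samples, n_hidden) activations
    return phi @ weights             # plain linear combination at the output

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 2))          # 5 samples; 2 input neurons = 2 data dims
centers = rng.normal(size=(4, 2))    # 4 hidden neurons (<= 5 samples)
weights = rng.normal(size=(4, 1))
out = rbf_forward(x, centers, 1.0, weights)
print(out.shape)  # (5, 1)
```

The example respects the sizing rule from the text: input neurons match the data dimensionality, and the hidden layer has fewer neurons than samples.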

**Multilayer Perceptron (MLP)**

- Multilayer Perceptrons are among the most popular kinds of deep neural networks and typically have three or more layers.
- In these DNNs, each neuron in one layer is fully connected to every neuron in the next layer, which helps in solving various complex problems.
- Multilayer perceptrons are widely used in speech recognition and are suitable when dealing with nonlinear functions.
- These are a special case of feed forward neural networks: here too, the input is subjected to a weighted sum and an activation function before being passed on to the next layer.
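The fully connected, multi-layer structure above can be written as a short loop over weight matrices; the depth, widths, and ReLU activation are illustrative assumptions:

```python
import numpy as np

def mlp_forward(x, layers):
    """Fully connected pass: every neuron feeds every neuron in the next layer."""
    for w, b in layers[:-1]:
        x = np.maximum(0.0, x @ w + b)   # weighted sum + ReLU in hidden layers
    w, b = layers[-1]
    return x @ w + b                      # linear output layer

rng = np.random.default_rng(2)
dims = [8, 16, 16, 3]                     # input, two hidden, output: 4 layers
layers = [(rng.normal(size=(i, o)) * 0.1, np.zeros(o))
          for i, o in zip(dims[:-1], dims[1:])]
y = mlp_forward(rng.normal(size=(10, 8)), layers)
print(y.shape)  # (10, 3)
```

Because every pair of adjacent layers is fully connected, the parameter count grows with the product of layer widths; the CNN below is one answer to that cost.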

**Convolutional Neural Network (CNN)**

- Also known as ConvNets, convolutional neural networks are deep neural networks used for computer vision. They get their name from the convolution operation performed on the inputs.
- A CNN is a variant of the multilayer perceptron in which the network is deep yet has fewer parameters, because the convolution filters share their weights across the whole image.
- The architecture of a CNN helps it to extract different features from an image and classify them separately. This reduces the amount of pre-processing required.
- Convolution filters extract low-level features, such as information about edges in an image, which deeper layers then combine into higher-level features.
- CNNs are widely used in cases like satellite imaging, medical imaging, detecting anomalies, etc.
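The edge-extraction idea above can be demonstrated with a hand-rolled 2-D convolution. This is a sketch, not a full CNN layer; like most deep learning libraries, it slides the kernel without flipping it (technically cross-correlation), and the toy image and filter are assumptions for illustration:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution (no padding): slide the kernel over the image."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A horizontal-difference filter responds where brightness jumps left-to-right.
image = np.zeros((5, 6))
image[:, 3:] = 1.0                       # dark left half, bright right half
edge_kernel = np.array([[-1.0, 1.0]])    # simple vertical-edge detector
response = conv2d(image, edge_kernel)
print(response.max())  # 1.0 exactly at the edge, 0.0 elsewhere
```

The same two kernel weights are reused at every image position, which is why a CNN needs far fewer parameters than a fully connected MLP over the raw pixels.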

**Recurrent Neural Network**

- Recurrent neural networks have a special feature: the output of a neuron (its hidden state) is fed back as an input into that same neuron at the next time step. This creates a small form of memory and thus helps the network predict the next output.
- Such deep learning networks are helpful for building applications like chatbots and text-to-speech converters.
- RNNs can deal with input sequences that do not have any fixed length, and thus they are also deployed for activities like Natural Language Processing (NLP), where the input length cannot be predetermined.
- The RNN architecture needs fewer parameters to train, since the same weights are reused at every time step, which also reduces cost.
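The feedback loop and variable-length handling described above can be sketched in a few lines; the state size and random weights are illustrative assumptions:

```python
import numpy as np

def rnn_forward(xs, w_xh, w_hh, b_h):
    """The hidden state h is fed back into the cell at every step, so the
    same small set of weights handles sequences of any length."""
    h = np.zeros(w_hh.shape[0])
    for x in xs:                                  # one step per sequence element
        h = np.tanh(x @ w_xh + h @ w_hh + b_h)    # previous h fed back in
    return h

rng = np.random.default_rng(3)
w_xh, w_hh, b_h = rng.normal(size=(4, 6)), rng.normal(size=(6, 6)), np.zeros(6)
short = rnn_forward(rng.normal(size=(3, 4)), w_xh, w_hh, b_h)   # 3-step input
long_ = rnn_forward(rng.normal(size=(50, 4)), w_xh, w_hh, b_h)  # 50-step input
print(short.shape, long_.shape)  # (6,) (6,) -- same parameters, any length
```

Both calls use exactly the same three parameter arrays, which is the parameter saving the last bullet refers to.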

**Long Short-Term Memory Network (LSTM)**

- A long short-term memory network is a special case of recurrent neural network (RNN) that can deal with even long-term dependencies.
- These neural networks can remember previous inputs and retain information for a longer period of time. Thus, they are useful in applications like time-series prediction, speech recognition, and music detection.
- The LSTM architecture is a chain of repeating cells, each containing four interacting layers arranged in a unique form.
- Since they memorize long-term information, LSTMs are most effective when sequential data is fed into them.
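One cell of the chain can be sketched as follows. The four interacting layers mentioned above are the input, forget, and output gates plus a candidate layer; the sizes and random weights here are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One cell of the LSTM chain; the pre-activations are split into the
    four layers: input gate i, forget gate f, output gate o, candidate g."""
    n = h.size
    z = W @ x + U @ h + b                 # (4n,) stacked pre-activations
    i = sigmoid(z[:n])                    # input gate: admit new information?
    f = sigmoid(z[n:2 * n])               # forget gate: keep old memory?
    o = sigmoid(z[2 * n:3 * n])           # output gate: expose the state?
    g = np.tanh(z[3 * n:])                # candidate values
    c = f * c + i * g                     # long-term cell state
    h = o * np.tanh(c)                    # short-term hidden state
    return h, c

rng = np.random.default_rng(4)
n, d = 5, 3
W = rng.normal(size=(4 * n, d))
U = rng.normal(size=(4 * n, n))
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for x in rng.normal(size=(10, d)):        # run a 10-step sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape, c.shape)  # (5,) (5,)
```

The separate cell state `c` is what lets the network carry information across many steps: the forget gate can hold it close to 1 and preserve the memory.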

**Generative Adversarial Network (GAN)**

- GANs are considered one of the most important developments in the field of deep neural networks and machine learning.
- These networks are capable of generating new data on the basis of given inputs, with the output closely resembling the data in the training set. They do this by pitting two networks against each other: a generator that produces samples and a discriminator that tries to tell real samples from generated ones.
- Due to this "magic"-like ability to generate new data, GANs are used in various applications like video game development, dark matter research, rendering 3D objects, etc.
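The adversarial setup can be sketched as two forward passes plus the two opposing losses from the standard GAN objective; the toy data, network shapes, and weights are illustrative assumptions, and real training would update both networks by gradient descent:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def generator(z, w_g):
    """Maps random noise to fake samples meant to mimic the training data."""
    return np.tanh(z @ w_g)

def discriminator(x, w_d):
    """Outputs the probability that a sample came from the real training set."""
    return sigmoid(x @ w_d)

def gan_losses(real, fake, w_d):
    eps = 1e-9                                 # avoid log(0)
    d_real = discriminator(real, w_d)
    d_fake = discriminator(fake, w_d)
    # Discriminator wants d_real -> 1 and d_fake -> 0 ...
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    # ... while the generator wants d_fake -> 1: the two objectives oppose.
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss

rng = np.random.default_rng(5)
real = rng.normal(loc=2.0, size=(16, 2))       # stand-in "training set"
fake = generator(rng.normal(size=(16, 4)), rng.normal(size=(4, 2)))
d_loss, g_loss = gan_losses(real, fake, rng.normal(size=(2, 1)))
print(d_loss > 0 and g_loss > 0)  # True: both losses are positive
```

Training alternates between the two losses until the generator's samples are hard to distinguish from the real data, which is where the resemblance to the training set comes from.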

A candidate could be asked to work on any of these neural networks, or even to state a preference among them. It is therefore recommended to keep a handy knowledge of all these deep learning networks, and also to work out which one brings out the best developer in you.