Artificial Neural Network, Neural Architecture Search (NAS) and their Applications - Part-1

May 21, 2021
Jayaramakrishnan Sundararaj, Technical Manager

The artificial neural network (ANN) is a hot research topic, because there is no guarantee that a specific neural network model will give good accuracy for a given problem. So we need an appropriate architecture for the neural system, rather than repeated trial-and-error modeling of network algorithms. The advantage of a neural network is that it can accurately identify and respond to patterns that are similar, but not identical, to the data it was trained on.

Neural networks are complex machine learning structures that take in multiple inputs and, through many hidden layers of processing, produce an output. The neurons (hidden nodes) in a neural network are interconnected, influence one another, and each captures a different part of the relationships in the input data. Weights are assigned to the connections between neurons based on their relative importance compared with the other inputs.

There are two types of communication that can happen in neural networks:

  1. Feedforward Neural Network: In a feedforward neural network, the data or signal travels in only one direction, toward the result or target variable.
  2. Feedback Neural Network: In a feedback neural network, the data or signals travel in both directions through the hidden layers.
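The difference between the two can be sketched in a few lines of NumPy. This is an illustrative toy, not code from the article: the shapes, random weights, and function names are assumptions chosen only to show the direction of signal flow.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical weight matrices: 3 input features, 5 hidden units
W_in = rng.standard_normal((3, 5)) * 0.1
W_rec = rng.standard_normal((5, 5)) * 0.1

def feedforward_step(x, W):
    # Feedforward: the signal moves strictly from input toward output; no loops
    return np.tanh(x @ W)

def recurrent_step(x, h, W_in, W_rec):
    # Feedback/recurrent: the previous hidden state h is fed back into the layer,
    # so information also flows "backward" in time through the hidden units
    return np.tanh(x @ W_in + h @ W_rec)

x_seq = rng.standard_normal((4, 3))  # 4 time steps, 3 features each
y = feedforward_step(x_seq[0], W_in)  # one-shot forward pass

h = np.zeros(5)
for x_t in x_seq:                     # recurrent pass carries state across steps
    h = recurrent_step(x_t, h, W_in, W_rec)
```

In the feedforward call each input is processed independently, while the recurrent loop reuses the hidden state `h`, which is what lets feedback networks model sequences.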

Deep learning using neural networks mainly concentrates on the following core neural network categories:

  1. Recurrent Neural Network
  2. Convolutional Neural Network
  3. Multi-Layer Perceptron
  4. Radial Basis Network
  5. Generative Adversarial Network
Figure 1: Neural Network Architecture

Neural network architecture needs the following design decision parameters:

  1. Number of hidden layers (depth)
  2. Number of units per hidden layer (width)
  3. Type of activation function (nonlinearity)
  4. Form of the objective function (Learning rules)
  5. Learnable parameters (Weight, bias)
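The five design decisions above can be made concrete with a minimal NumPy sketch of a feedforward network. The input size, depth, width, activation, and objective chosen here are arbitrary illustrations, not values prescribed by the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Design decisions (hypothetical values for illustration):
depth = 2     # 1. number of hidden layers
width = 16    # 2. number of units per hidden layer

def relu(x):  # 3. activation function (nonlinearity)
    return np.maximum(0.0, x)

# 5. Learnable parameters: a weight matrix and bias vector per layer
layer_sizes = [4] + [width] * depth + [1]   # 4 inputs -> hidden layers -> 1 output
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # Signal flows one way: input -> hidden layers (with nonlinearity) -> output
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)
    return x @ weights[-1] + biases[-1]     # linear output layer

def mse(pred, target):  # 4. objective function the learning rule would minimize
    return float(np.mean((pred - target) ** 2))

x = rng.standard_normal((8, 4))   # batch of 8 samples, 4 features each
y = rng.standard_normal((8, 1))
pred = forward(x)                 # shape (8, 1)
loss = mse(pred, y)
```

Changing `depth`, `width`, or `relu` changes the architecture; training would then adjust `weights` and `biases` to reduce `mse`, which is exactly the search space that neural architecture search explores automatically.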

Taking a simple example, if a rocket has to land on Mars, it needs different components to function properly: say, a central processor, engine components, fuel, avionics and payload, and sensors for altitude, velocity, pressure, weight, thrust, etc. We can label these components and their associated features as "neurons." The values corresponding to velocity, pressure, altitude, trajectory, weight, fuel efficiency, etc., are then the "weights." How the weights and neurons work together to bring the rocket to Mars can be considered the activation function, and how well the rocket lands on Mars is the final output. Image classification, medical supervision, and intelligently guiding a vehicle from its source to its destination are some neural network use cases.

So, why do we need the ideal architecture for a neural network? Because the best neural network architecture yields a near-perfect, accurate prediction model for data points in real-world scenarios.

Please check Part-2 for information on neural architecture search.

