How Neural Networks Are Changing the World

Tirth Patel
7 min read · Feb 19, 2021

Since the invention of the computer, there have been people talking about the things that computers will never be able to do. Whether it was beating a grandmaster at chess or winning on Jeopardy!, these predictions have always been wrong. However, some of that nay-saying had a firmer grounding in computer science. There were goals that, if you knew how computers worked, you knew would be virtually impossible to achieve. Recognizing human emotions through facial expressions. Reading a wide variety of cursive handwriting. Correctly identifying the words in spoken language. Driving autonomously through busy streets.

Well, computers are now starting to be able to do all of those things, and quite a bit more. Were the nay-sayers really just too cynical about the true capabilities of digital computers? In a way, no. To solve those monumental challenges, scientists were forced to come up with a whole new type of computer, one based on the structure of the brain. These artificial neural networks (ANNs) only ever exist as a simulation running on a regular digital computer, but what goes on inside that simulation is fundamentally very different from classical computing.

What are ANNs?

Most people already know that the neurons that do the computation in our brain are not organized like the transistors in a computer processor: arranged in a fixed layout, attached to the same board, and controlled by one unifying clock cycle. Rather, in the brain each neuron is nominally its own self-contained actor, and it’s wired to most or all of the neurons that physically surround it in highly complex and somewhat unpredictable ways.

What this means is that for a digital computer to achieve an ordered result, it needs one over-arching program to direct it and tell each transistor just what to do to contribute toward the overall goal. A brain, on the other hand, unifies billions of tiny, exceedingly simple units that can each have their own programming and make decisions without the need for an outside authority. Each neuron works and interacts with the neurons around it according to its own simple, pre-defined rules.

An artificial neural network is (supposed to be) the exact same thing, but simulated with software. In other words, we use a digital computer to run a simulation of a bunch of heavily interconnected little mini-programs which stand in for the neurons of our simulated neural network. Data enters the ANN and has some operation performed on it by the first “neuron,” that operation being determined by how the neuron happens to be programmed to react to data with those specific attributes. It’s then passed on to the next neuron, which is chosen in a similar way, so that another operation can be chosen and performed. There are a finite number of “layers” of these computational neurons, and after moving through them all, an output is produced.
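
Here is a minimal sketch of that layered flow in Python; the layer sizes, random weights, and ReLU activation are arbitrary choices for illustration, not any particular network:

```python
import numpy as np

def relu(x):
    # Each "neuron" applies a simple, pre-defined rule to its input.
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Random weights stand in for each neuron's "programming". The layer
# sizes (4 -> 5 -> 3 -> 2) are arbitrary choices for this illustration.
weights = [rng.normal(size=(4, 5)),
           rng.normal(size=(5, 3)),
           rng.normal(size=(3, 2))]

x = rng.normal(size=4)   # the input data, i.e. the "starting conditions"
for w in weights:        # the data moves through the layers in order
    x = relu(x @ w)      # each layer transforms it and passes it on

print(x)                 # the output emerges from all those small steps
```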

The overall process of turning input into output is an emergent result of the programming of each individual neuron the data touches, and the starting conditions of the data itself. In the brain, the “starting conditions” are the specific neural signals arriving from the spine, or elsewhere in the brain. In the case of an ANN, they’re whatever we’d like them to be, from the results of a search algorithm to randomly generated numbers to words typed out manually by researchers.

So, to sum up: artificial neural networks are basically simulated brains. But it’s important to note that we can give our software “neurons” basically any programming we want; we can try to set up their rules so their behavior mirrors that of a human brain, but we can also use them to solve problems we could never consider before.

How do ANNs work?

You’ll hear the word “non-deterministic” used to describe the function of a neural network, and that’s in reference to the fact that our software neurons often have weighted statistical likelihoods associated with different outcomes for data; there’s a 40% chance that an input of type A gets passed to this neuron in the next layer, a 60% chance it gets passed to that one instead. These uncertainties quickly add up as neural networks get larger or more elaborately interconnected, so that the exact same starting conditions might lead to many different outcomes or, more importantly, get to the same outcome by many different paths.
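
As a toy illustration of that weighted routing, with the probabilities and neuron names invented for the example:

```python
import random

random.seed(42)  # fixed seed so the randomness is repeatable

# Invented example: an input of "type A" is routed to neuron_1 with
# probability 0.4 and to neuron_2 with probability 0.6, as described above.
routing = {"A": [("neuron_1", 0.4), ("neuron_2", 0.6)]}

def route(input_type):
    targets, probs = zip(*routing[input_type])
    return random.choices(targets, weights=probs, k=1)[0]

# The same starting conditions can take different paths on different runs.
print([route("A") for _ in range(5)])
```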

So, we introduce the idea of a “learning algorithm.” A simple example is improving efficiency: send the same input into the network over and over and over, and every time it generates the correct output, record the time it took to do so. Some paths from A to B will be naturally more efficient than others, and the learning algorithm can start to reinforce neuronal behaviors that occurred during those runs that proceeded more quickly.

Much more complex ANNs can strive for more complex goals, like correctly identifying the species of animal in a Google image result. The steps in image processing and categorization get adjusted slightly, relying on an evolution-like sifting of random and non-random variation to produce a cat-finding process the ANN’s programmers could never have directly devised.

Non-deterministic ANNs become much more deterministic as they restructure themselves to be better at achieving certain results, as determined by the goals of their learning algorithms. This is called “training” the ANN: you train an ANN with examples of its desired function, so it can self-correct based on how well it did on each of these runs. The more you train an ANN, the better it should become at achieving its goals.
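
A minimal sketch of training in the usual supervised sense: a single-weight “network” self-corrects toward the made-up rule y = 2x, with the data and learning rate chosen arbitrarily:

```python
import numpy as np

# Made-up training examples for the rule y = 2x.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([0.0, 2.0, 4.0, 6.0])

w = 0.0     # a single weight: the tiniest possible "network"
lr = 0.05   # learning rate, chosen arbitrarily

for _ in range(200):
    pred = w * xs                          # run the examples through
    grad = 2 * np.mean((pred - ys) * xs)   # measure how wrong we were
    w -= lr * grad                         # self-correct a little

print(round(w, 3))  # approaches 2.0 as training reinforces good behavior
```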

There’s also the idea of “unsupervised” or “adaptive” learning, in which you run the algorithm with no desired outputs in mind, but let it start evaluating results and adjusting itself according to its own… whims? As you might imagine, this isn’t well understood just yet, but it’s also the most likely path down which we might find true AI — or just really, really advanced AI. If we’re ever truly going to send robots out into totally unknown environments to figure out totally unforeseen problems, we’re going to need programs that can assign significance to stimuli on their own, in real time.
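
As a small sketch of unsupervised learning in the broad sense, here is clustering with scikit-learn’s KMeans on synthetic, unlabeled points; no desired outputs are ever provided:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Unlabeled data: two blobs of points, but we never tell the algorithm so.
points = np.vstack([rng.normal(0.0, 0.5, size=(50, 2)),
                    rng.normal(5.0, 0.5, size=(50, 2))])

# KMeans groups the points on its own, with no desired outputs given.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print(labels[:5], labels[-5:])  # the two blobs end up with different labels
```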

How do ISRO and other space organizations use neural networks?

* Structural health monitoring through classification of strain patterns using an artificial neural network

In November 2018, ISRO published a compendium to help universities prepare project proposals, covering programs and research areas such as launch vehicles, satellite communication, earth observation, space sciences, and meteorology. In it, the Vikram Sarabhai Space Centre (VSSC) introduced a structural health monitoring technology based on the classification of strain patterns using an artificial neural network (ANN). The technology increases safety and reduces maintenance costs for the high-performance composite structures used in aircraft and re-entry vehicles. The ANN helps detect damage such as fibre failure, matrix cracking, de-laminations, and skin-stiffener de-bonds in composite structures, and ISRO uses it to classify sensor malfunctions and structural failures based on the strain patterns of healthy and unhealthy structures. The compendium also called for an analytical study of the adopted methodology.
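
ISRO’s actual models and data aren’t published in the compendium excerpt above, so purely as a hypothetical sketch, a small classifier in the same spirit might look like this, with synthetic “strain patterns” standing in for real sensor readings:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)

# Hypothetical stand-ins for strain data: 8-sensor readings, with label
# 0 = healthy and 1 = damaged. ISRO's real data and features are not public.
healthy = rng.normal(0.0, 0.1, size=(100, 8))
damaged = rng.normal(0.5, 0.3, size=(100, 8))
X = np.vstack([healthy, damaged])
y = np.array([0] * 100 + [1] * 100)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)

# New "unhealthy" strain patterns should be flagged as damaged (label 1).
print(clf.predict(rng.normal(0.5, 0.3, size=(3, 8))))
```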

* AI-enabled monitoring system for forest conservation

ISRO’s National Remote Sensing Centre (NRSC) has designed and developed a monitoring system to observe forest cover change and combat deforestation by leveraging optical remote sensing, geographic information systems, AI, and automation technologies. The system allows experts to detect small-scale deforestation and improves the frequency of reporting: scientists can process satellite imagery faster, cutting the time frame for new reports from one year to one month. NRSC aims to prevent negative changes in green cover and to protect wildlife. By improving the resolution of its optical remote sensing from 50 meters to 30 meters, the technology makes it possible to monitor forest cover changes over areas as small as one hectare, providing insight into even the smallest deforestation activity.
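
The NRSC pipeline itself isn’t described in detail here; as a hypothetical illustration of the core idea, change detection between two already-classified forest-cover rasters can be as simple as a per-pixel comparison:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical classified rasters (1 = forest, 0 = cleared) for two dates.
# Real NRSC inputs would be classified satellite imagery, not random grids.
before = (rng.random((100, 100)) > 0.2).astype(int)
after = before.copy()
after[40:42, 40:45] = 0                # simulate a small cleared patch

lost = (before == 1) & (after == 0)    # pixels flipped forest -> cleared
print("pixels deforested:", int(lost.sum()))
```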

* Autonomously Navigating Robot for Space Mission (IISU)

ISRO’s challenge was to build and send unmanned robots to help fetch critical space information in multiple missions throughout the year.

The first step is a half Vyomnoid that senses and perceives its surroundings with 3D vision and has dexterous manipulative abilities, so it can carry out defined crew functions in an unmanned mission or assist the crew in manned missions. The next step is the design and realization of a full Vyomnoid, with features that include full autonomy with 3D vision, dynamically controlled movement in zero g, and AI/ML-enabled real-time decision making with vision optimization and path-planning algorithms.

ISRO leveraged state-of-the-art technologies to design and develop the following:

  • Sensing & Perception
  • Dexterous Manipulation
  • Hierarchical Control System
  • Artificial Intelligence enabled Path Navigation algorithms (a toy sketch of this last item follows the list)
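
ISRO’s actual navigation stack isn’t described here, but as a toy stand-in for path navigation, breadth-first search finds a shortest path on a small obstacle grid (the grid, start, and goal are invented):

```python
from collections import deque

# Toy obstacle grid (0 = free, 1 = blocked); a real planner would work on
# maps built from the robot's 3D vision, which is not shown here.
grid = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]

def bfs(start, goal):
    # Breadth-first search: explore outward from start and reconstruct
    # the first (and therefore shortest) path that reaches the goal.
    parents, queue = {start: None}, deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < 4 and 0 <= nc < 4 and grid[nr][nc] == 0
                    and (nr, nc) not in parents):
                parents[(nr, nc)] = cell
                queue.append((nr, nc))

print(bfs((0, 0), (3, 3)))  # shortest route around the obstacles
```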

Other use cases include Multi Object Tracking Radar (SDSC-SHAR), Image Processing and Pattern Recognition (IIRS), geospatial analysis and pattern recognition, and more.

The usefulness of ANNs falls into one of two basic categories: as tools for solving problems that are inherently difficult for both people and digital computers, and as experimental and conceptual models of something — classically, brains. Let’s talk about each one separately.

First, the real reason for interest (and, more importantly, investment) in ANNs is that they can solve problems. Google uses an ANN to learn how to better target “watch next” suggestions after YouTube videos. The scientists at the Large Hadron Collider turned to ANNs to sift the results of their collisions and pull the signature of just one particle out of the larger storm. Shipping companies use them to minimize route lengths over a complex scattering of destinations. Credit card companies use them to identify fraudulent transactions. They’re even becoming accessible to smaller teams and individuals: Amazon, MetaMind, and more are offering tailored machine learning services to anyone for a surprisingly modest fee.

Things are just getting started. Google has been training its photo-analysis algorithms with more and more pictures of animals, and they’re getting pretty good at telling dogs from cats in regular photographs. Both translation and voice synthesis are progressing to the point that we could soon have a babelfish-like device offering natural, real-time conversations between people speaking different languages. And, of course, there are the Big Three ostentatious examples that really wear their machine learning on their sleeve: Siri, Google Now, and Cortana.

Thank you for reading :)
