Neural Network

Yukta Chakravarty
6 min read · Mar 4, 2021

Let’s begin by asking ourselves some questions.

How do we know what we know?

How do we learn our mother tongue?

Thanks to our brain, we keep learning new things through our experiences.

Remember the time you were learning to ride a bicycle?

You fell the first time, then learnt to balance. In my case, I crashed into the gate because I didn’t know how to apply the brakes; then I learnt, and I can never forget it.

It is experience that helps us learn.

Similarly, we train machines on data and try to make them learn, much like our brain does, using neural networks.

So let’s have a look at it in this article.

Machine learning uses algorithms to parse data, learn from that data, and make informed decisions based on what it has learned. Deep learning structures algorithms in layers to create an “artificial neural network” that can learn and make intelligent decisions on its own.

Deep learning is considered an evolution of machine learning. It uses a programmable neural network that enables machines to make accurate decisions without help from humans.

A typical neural network has many artificial neurons, called units, arranged in a series of layers, each of which connects to the layers on either side. Some of them, known as input units, are designed to receive various forms of information from the outside world that the network will attempt to learn about, recognize, or otherwise process. Other units sit on the opposite side of the network and signal how it responds to the information it has learned; those are known as output units. In between the input units and output units are one or more layers of hidden units, which together form the majority of the artificial brain. Most neural networks are fully connected, which means each hidden unit and each output unit is connected to every unit in the layers on either side. Each connection between one unit and another is represented by a number called a weight.
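To make “fully connected” concrete, here is a minimal sketch in Python/NumPy. The layer sizes are made up for illustration; the point is that every layer-to-layer set of connections is just a matrix of weights, one number per connection.

```python
import numpy as np

# Illustrative layer sizes (chosen only for this sketch):
# 3 input units, 4 hidden units, 2 output units.
n_input, n_hidden, n_output = 3, 4, 2

rng = np.random.default_rng(0)

# "Fully connected" means every unit links to every unit in the next
# layer, so each set of connections is a matrix of weights.
w_input_to_hidden = rng.normal(size=(n_input, n_hidden))    # 3 x 4 = 12 weights
w_hidden_to_output = rng.normal(size=(n_hidden, n_output))  # 4 x 2 = 8 weights

print(w_input_to_hidden.shape)   # (3, 4)
print(w_hidden_to_output.shape)  # (4, 2)
```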

How do neural networks learn?

Whenever we hear someone singing, we can tell whether the person is good or bad at singing.

How do we come to this conclusion?

Whenever we hear songs, we learn what a song is like and how it should sound; we conclude that if a song sounds a certain way it is good, and if it sounds some other way it is bad, or at least not so good.

So we get input from our sensory organs (eyes, nose, ears, tongue and skin), learn from it, form conclusions, and use those conclusions whenever required.

Similarly, patterns of information are fed into the network via the input units, which trigger the layers of hidden units, and these in turn reach the output units. This common design is called a feedforward network. Not all units “fire” all the time. Each unit receives inputs from the units to its left, and the inputs are multiplied by the weights of the connections they travel along. Every unit adds up all the inputs it receives in this way and, in the simplest type of network, if the sum is more than a certain threshold value, the unit “fires” and triggers the units it is connected to (those on its right).
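The rule each unit follows can be written in a few lines of Python. The numbers below are hypothetical, chosen only to make the arithmetic visible:

```python
import numpy as np

def unit_fires(inputs, weights, threshold):
    """Simplest kind of unit: multiply each input by its connection
    weight, add them up, and "fire" (output 1) only if the sum is
    more than the threshold value."""
    total = np.dot(inputs, weights)
    return 1 if total > threshold else 0

# Hypothetical inputs and weights, just to make the rule concrete:
inputs = np.array([0.5, 1.0, 0.2])
weights = np.array([0.4, 0.3, 0.9])
print(unit_fires(inputs, weights, threshold=0.6))
# sum = 0.2 + 0.3 + 0.18 = 0.68 > 0.6, so the unit fires: 1
```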

Neural networks learn things using a feedback process called backpropagation. This involves comparing the output a network produces with the output it was meant to produce, and using the difference between them to modify the weights of the connections between the units in the network, working from the output units through the hidden units to the input units — going backward, in other words. Over time, backpropagation causes the network to learn, shrinking the difference between the actual and intended output until the network’s answers are very close to what they should be.
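Here is a minimal, hand-rolled sketch of that feedback loop in Python/NumPy, training a tiny network on the XOR problem. It is illustrative only: real projects use libraries such as PyTorch or TensorFlow, which compute these gradients automatically, and in practice the difference rarely shrinks to exactly zero — training stops when it is small enough.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy task: XOR, a classic problem that needs the hidden layer to be solved.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output weights
lr = 1.0                                       # learning rate

for step in range(10000):
    # Forward pass: information flows from input units to output units.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Compare the output produced with the output it was meant to produce.
    error = out - y

    # Backward pass: work from the output units back through the hidden
    # units, using the error to nudge every weight in the right direction.
    d_out = error * out * (1 - out)
    d_hidden = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out);    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_hidden); b1 -= lr * d_hidden.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel(), 2))
# After training, the output approaches the intended [0, 1, 1, 0].
```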

Where are Neural Networks used?

Google’s search engine was always driven by algorithms that automatically generate a response to each query. But these algorithms amounted to a set of definite rules. Google engineers could readily change and refine these rules. And unlike neural nets, these algorithms didn’t learn on their own. But now, Google has incorporated deep learning into its search engine. And with its head of AI taking over search, the company seems to believe this is the way forward.

Google Maps’ Driving Mode estimates where you are headed and helps you navigate without any commands.

YouTube Safe Content uses machine learning techniques to ensure that brands are not displayed next to offensive content.

Google Photos suggests which photos you should share with friends.

Gmail Smart Reply suggests replies that match your style and the email you received.

Google Calendar’s Smart Scheduling suggests meeting times based on the user’s existing schedule and habits.

Google Drive’s Quick Access feature predicts which files you will need, improving performance and user experience.

Google Translate uses an artificial neural network called Google Neural Machine Translation (GNMT) to increase fluency and accuracy of translations.

Google Chrome uses AI to present short, highly relevant parts of a video when you search for something in Google Search, and to analyze the images on a website and play an audio description or the alt text (when available) for people who are blind or have low vision.

Google News uses AI to understand the people, places and things involved in a story as it evolves, and to organize them based on how they relate to one another, as explained on the Google Blog.

Google Assistant is a voice assistant for smartphones and wearables that can look up your flight status online or the weather at your destination. Touch and hold the Home button to find your Google Photos, access your music playlists and more. Google Assistant remembers what you’ve already said and speaks foreign languages. It is much more than an assistant, despite the name: it will read you poetry, tell you a joke, or play a game with you.

Google Home: You will be able to get hands-free help from your Assistant embedded in Google Home. Say “Ok Google” to get the morning news or manage your schedule.

Facial Recognition: Facial Recognition is among the many wonders of Machine Learning on Facebook. So now the question is: what is the use of enabling Facial Recognition on Facebook? Well, in case any newly uploaded photo or video on Facebook includes your face but you haven’t been tagged, the Facial Recognition algorithm can recognize your template and send you a notification. Also, if another user tries to upload your picture as their Facebook profile picture (maybe to get more popular!), you can be notified immediately. Facial Recognition, in conjunction with other accessibility options, can also inform people with visual impairments if they are in a photo or video.

Textual Analysis: Facebook uses DeepText, a text engine based on deep learning that can understand thousands of posts per second in more than 20 languages, with as much accuracy as you can!

But understanding language-based text is not that easy. In order to truly understand text, DeepText has to understand many things like grammar, idioms, slang, context, etc. For example: if a post contains the sentence “I love Apple”, does the writer mean the fruit or the company? Most probably the company (except for Android users!), but it really depends on the context, and DeepText has to learn this. Because of these complexities, multiplied across many languages, DeepText uses deep learning, which lets it handle labeled data much more efficiently than traditional Natural Language Processing models.

And Many more….
