A neural network is a machine learning approach modeled loosely on the human brain. The result is an artificial neural network: an algorithm that allows a computer to learn by incorporating new data.
Among the many artificial intelligence algorithms in use today, neural networks stand out for their ability to perform what has been termed deep learning. Where the basic unit of the brain is the neuron, the essential building block of an artificial neural network is the perceptron, which performs simple signal processing; many perceptrons are then connected into a large mesh network.
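To make the idea concrete, here is a minimal sketch of a single perceptron in Python. The step activation, weights, and bias below are hypothetical values chosen purely for illustration, not part of the original article.

```python
# A minimal sketch of a single perceptron: it weights its inputs, sums them,
# and fires (outputs 1) only if the total clears a threshold.
# The weights and bias below are hypothetical values chosen for illustration.

def perceptron(inputs, weights, bias):
    # Weighted sum of the input signals plus a bias term
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Step activation: fire if the sum is positive, stay silent otherwise
    return 1 if total > 0 else 0

# Example: a perceptron acting as a simple AND gate
weights = [1.0, 1.0]
bias = -1.5
print(perceptron([1, 1], weights, bias))  # 1 -- both inputs active
print(perceptron([1, 0], weights, bias))  # 0 -- only one input active
```

A full network simply wires many such units together in layers, so that the outputs of one layer become the inputs of the next.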
A neural network is taught to do a task by analyzing training examples that have been labeled in advance. A common deep learning task is object recognition: the network is shown a large number of images of a certain type of object, such as a cat or a street sign, and by analyzing the recurring patterns in those images it learns to categorize new ones.
How neural networks learn
Unlike other algorithms, a neural network cannot be programmed directly for its task. Rather, like a child’s developing brain, it needs to learn the information. There are three main learning strategies:
- Supervised learning: The simplest strategy. The computer works through a labeled dataset, and the algorithm is adjusted until it processes the dataset and produces the desired result (a minimal sketch of this loop follows the list).
- Unsupervised learning: Used when no labeled dataset is available to learn from. The network analyzes the dataset, a cost function tells it how far off target it is, and the network then adjusts itself to increase accuracy.
- Reinforcement learning: Here the network is rewarded for positive results and penalized for negative ones, forcing it to learn over time.
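The sketch below illustrates the supervised case: a prediction is made for each labeled example, a cost function measures how far off target it was, and the weights are nudged to reduce that cost. It trains only a single artificial neuron, and the dataset and learning rate are hypothetical; a real network repeats this over many neurons and many passes.

```python
# Supervised learning sketch: predict, measure the error (cost), adjust weights.
# The data and learning rate are illustrative assumptions, not from the article.
import random

def train_single_neuron(examples, epochs=100, lr=0.1):
    w, b = random.random(), random.random()    # start with random weight and bias
    for _ in range(epochs):
        for x, label in examples:
            prediction = w * x + b             # the neuron's current guess
            error = prediction - label         # how far off target it was
            w -= lr * error * x                # adjust parameters to shrink the cost
            b -= lr * error
    return w, b

# Labeled dataset (input, desired output): here the hidden rule is y = 2x + 1
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train_single_neuron(data)
print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0 as training proceeds
```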
History of neural networks
While neural networks certainly represent powerful modern computing technology, the idea goes back to 1943 and two University of Chicago researchers: Warren McCulloch, a neurophysiologist, and Walter Pitts, a mathematician.
Their paper, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” first published in the Bulletin of Mathematical Biophysics, put forward the theory that the activation of a neuron is the basic unit of brain activity. At the time, however, the paper contributed more to the development of cognitive theories, and the two researchers moved to MIT in 1952 to help start what is sometimes called the first cognitive science department.
The 1950s were a fertile period for neural network research, which produced the Perceptron, a machine that accomplished visual pattern recognition modeled on the compound eye of a fly. In 1959, two Stanford University researchers developed MADALINE (Multiple ADAptive LINear Elements), a neural network that went beyond theory and took on a real problem: reducing echo on telephone lines to improve voice quality. MADALINE proved so successful that it remains in commercial use today.
Despite the initial enthusiasm for artificial neural networks, a noteworthy 1969 book out of MIT, Perceptrons: An Introduction to Computational Geometry, tempered it. The authors were skeptical of artificial neural networks, arguing that the approach was likely a dead end in the quest for true artificial intelligence. The book significantly dulled both interest in and funding for the field throughout the 1970s. Even so, some efforts continued, and in 1975 the first multi-layered network was developed, an accomplishment some had thought impossible less than a decade earlier, paving the way for further progress in neural networks.
Interest in neural networks was significantly renewed in 1982, when the physicist John Hopfield, then at Caltech, invented the associative neural network, now known after its inventor as a Hopfield network. The innovation was that data could travel through the network bidirectionally, where previously it had moved in only one direction. Since then, artificial neural networks have enjoyed wide popularity and growth.
Real world uses for neural networks
Handwriting recognition is an example of a real-world problem that can be approached with an artificial neural network. Humans can read handwriting by simple intuition, but for computers the difficulty is that each person’s handwriting is unique, with different styles and even different spacing between letters, making it hard to recognize consistently.
For example, a capital A can be described as three straight lines, two meeting at a peak at the top and the third crossing the other two halfway down. That description makes sense to a human but is hard to express as a computer algorithm.
With the artificial neural network approach, the computer is fed training examples of handwritten characters that have been labeled with the letter or number each one represents. Through the algorithm, the computer learns to recognize each character, and as the dataset of characters grows, so does the accuracy. Applications of handwriting recognition range from automated address reading at the postal service and reducing check fraud at banks to character input for pen-based computing.
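As a hedged sketch of this setup, the snippet below assumes scikit-learn is installed and uses its built-in 8x8 handwritten-digits dataset as a stand-in for labeled handwriting examples; the layer size and iteration count are illustrative choices, not values from the article.

```python
# Handwriting-recognition sketch: a small multi-layer network learns to map
# pixel patterns to the digit each image was labeled with. Assumes scikit-learn.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                          # labeled images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# One hidden layer of 32 neurons; size and iteration count chosen for illustration.
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_train, y_train)                     # learn from the labeled examples

print("accuracy on unseen digits:", round(model.score(X_test, y_test), 3))
```

As the prose above notes, feeding the model more labeled characters generally improves its accuracy on handwriting it has never seen.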
Another problem suited to an artificial neural network is forecasting financial markets. Also known as ‘algorithmic trading,’ this approach has been applied to all types of financial markets, from stocks and commodities to interest rates and currencies. In the stock market, traders use neural network algorithms to find undervalued stocks, improve existing stock models, and exploit deep learning to keep an algorithm optimized as the market changes. There are now companies that specialize in neural network stock-trading algorithms, for example, MJ Trading Systems.
Artificial neural network algorithms, with their inherent flexibility, continue to be applied to complex pattern recognition and prediction problems. Beyond the examples above, these include applications as varied as facial recognition in social media images, cancer detection in medical imaging, and business forecasting.
source http://www.techradar.com/news/what-is-a-neural-network