Artificial Neural Networks
Neural nets process information by passing it through a hierarchy of interconnected layers, somewhat akin to the brain's biological circuitry. The first layer of digital neurons—called nodes—receives raw inputs (such as the pixels in a photograph of a cat), mixes and scores those inputs according to simple mathematical rules, and passes the results to the next layer of nodes. Deep nets contain anywhere from three to hundreds of layers, the last of which distills all of this neural activity into a single prediction: this is a picture of a cat, for example.
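The "mix and score" step can be sketched concretely: each node computes a weighted sum of its inputs and squashes the result with a simple nonlinearity before handing it to the next layer. The weights, biases, and pixel values below are illustrative placeholders, not a real trained network.

```python
import math

def layer_forward(inputs, weights, biases):
    """One layer: each node outputs sigmoid(weighted sum of inputs + bias)."""
    outputs = []
    for node_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(node_weights, inputs)) + bias
        outputs.append(1.0 / (1.0 + math.exp(-total)))  # sigmoid "scoring"
    return outputs

# Raw inputs (say, three pixel intensities) flow through two layers of nodes;
# the final layer distills everything into a single score.
pixels = [0.2, 0.8, 0.5]
hidden = layer_forward(pixels,
                       weights=[[0.1, -0.4, 0.3], [0.7, 0.2, -0.1]],
                       biases=[0.0, 0.1])
prediction = layer_forward(hidden, weights=[[0.5, -0.6]], biases=[0.0])
```

Here `prediction[0]` plays the role of the net's final answer, e.g. a "cat score" between 0 and 1.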
If that prediction is wrong, the neural net tweaks the links between nodes, steering the system closer to the right result. More precisely, the learning process tunes the weights, the numerical coefficients attached to each artificial neuron. By adjusting the weights to fit millions of examples, the neural net builds a structured set of relationships—a model—that can classify new images or, more generally, perform actions under conditions it has never encountered before.
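A hedged sketch of how this weight tuning works in practice: when the prediction misses the target, each weight is nudged in the direction that reduces the error (gradient descent). The toy "network" below has a single weight, and the learning rate and training data are illustrative.

```python
def train_step(weight, x, target, lr=0.1):
    """One update: nudge the weight to shrink the squared error."""
    prediction = weight * x              # a one-weight "network"
    error = prediction - target
    gradient = error * x                 # derivative of error**2 / 2 w.r.t. weight
    return weight - lr * gradient        # steer the system toward the right result

w = 0.0
for _ in range(100):                     # repeat the adjustment over many examples
    w = train_step(w, x=2.0, target=4.0)
# w converges toward 2.0, since 2.0 * 2.0 matches the target 4.0
```

Real training does the same thing simultaneously for millions or billions of weights, with the gradients computed layer by layer via backpropagation.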
That process, known as deep learning, allows neural nets to create AI models that are too complicated or too tedious to code by hand. These models can be mind-bogglingly complex, with the largest nearing one trillion parameters (weights). You don't need to tell the system what to look for. You just present it with a few million pictures of cats and it works out what a cat looks like.
Curiously, some of the rules encoded in the model cannot be properly understood by any human. They are opaque...
Neural networks are considered a subsymbolic approach. They were pursued from the early days of AI and reemerged strongly in 2012. Early examples include Rosenblatt's work on perceptron learning; the backpropagation work of Rumelhart, Hinton, and Williams; and the work on convolutional neural networks by LeCun et al. in 1989.[15] However, neural networks were not viewed as successful until about 2012: until Big Data became commonplace, the general consensus in the AI community was that the so-called neural-network approach was hopeless; systems just did not work that well compared to other methods. ... A revolution came in 2012, when a number of people, including a team of researchers working with Hinton, worked out a way to use the power of GPUs to enormously increase the power of neural networks.
Over the next several years, deep learning achieved spectacular successes in vision, speech recognition, speech synthesis, image generation, and machine translation.
However, since 2020, as inherent difficulties with bias, explanation, comprehensibility, and robustness became more apparent in deep learning approaches, an increasing number of AI researchers have called for combining the best of both the symbolic and neural-network approaches, and for addressing areas with which both approaches struggle, such as common-sense reasoning.