Deep learning stems from neural networks.
A neural network is an algorithm that tries to mimic how the brain functions.
This topic was extensively studied in the 1990s but achieved only moderate success due to a lack of sufficient computational power and data.
Here is the history of neural networks:
1950-60s: Simplest neural networks (Rosenblatt, etc.)
1970s: Slow progress
1990s: Convolutional neural networks (LeCun)
1990s: Recurrent neural networks (Schmidhuber)
2006: Breakthrough: deep belief networks and autoencoders
2013: Huge industrial interest
In the 1990s, neural networks helped the move from manual or semi-manual recognition to automatic recognition of handwritten digits on envelopes, as well as reading digits on checks.
Since 2013, deep learning has drawn huge industrial interest from giants such as Facebook, IBM, Google, and Amazon.
The reason is simple: breakthroughs in computational power, together with sufficient data, made networks with multiple layers practical to train.
An important idea in neural networks is how to learn the weights of a network of neurons.
That is, how to perform backpropagation, which propagates the errors backward through the network as the machine learns.
Note: backpropagation is how the weights of a multilayer network are learned.
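As a minimal sketch of this idea, here is a hypothetical toy network (2 inputs, 3 hidden units, 1 output; not from the course) trained on the XOR function in plain Python. The backward pass computes an error signal at the output, propagates it back to the hidden layer, and nudges every weight by gradient descent:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy training set (hypothetical example): the XOR function.
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

H = 3  # number of hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]  # input -> hidden
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]                      # hidden -> output
b2 = 0.0
lr = 0.5  # learning rate

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

loss_before = mse()
for epoch in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backward pass: propagate the output error back through the network.
        d_out = (y - t) * y * (1.0 - y)                       # error signal at the output
        d_hid = [d_out * w2[j] * h[j] * (1.0 - h[j]) for j in range(H)]
        # Gradient-descent update of every weight and bias.
        for j in range(H):
            w2[j] -= lr * d_out * h[j]
            w1[j][0] -= lr * d_hid[j] * x[0]
            w1[j][1] -= lr * d_hid[j] * x[1]
            b1[j] -= lr * d_hid[j]
        b2 -= lr * d_out

loss_after = mse()
print(loss_before, loss_after)
```

After training, the mean squared error drops well below its initial value, which is exactly what "learning the weights" means in practice.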
According to recent research, the visual cortex processes vision through a hierarchy of roughly five to ten levels.
This is a powerful insight: human vision comes to us naturally, and we make no conscious effort to see and detect the objects in the scenes we are watching.
In other words, it turns out that human beings also process vision in stages.
This is what inspired the deep architectures used today.
In short, the success of deep learning depends on algorithms, data, and computational power.
Companies that have all three elements therefore hold a big advantage in applying deep learning to their existing data.
Technically speaking, weight values are associated with each connection and node in the network, and these values constrain how input data (e.g., satellite image values) are related to output data (e.g., land-cover classes).
In layman's terms, the weights in an artificial neural network are an approximation of the multiple combined processes that take place in biological neurons.
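To make the weight idea concrete, here is a hypothetical single-neuron example (all numbers are made up for illustration): each weight controls how strongly one input value, such as a spectral band of a satellite pixel, contributes to a class score.

```python
# Hypothetical weights for a single neuron: each weight says how strongly
# one input feature contributes to the "forest" class score.
weights = [0.8, -0.4, 0.3]   # one weight per input feature (e.g. spectral bands)
bias = -0.1
pixel = [0.6, 0.2, 0.9]      # made-up satellite image values, scaled to [0, 1]

# The neuron's output is a weighted sum of its inputs plus a bias.
score = sum(w * v for w, v in zip(weights, pixel)) + bias
label = "forest" if score > 0 else "not forest"
print(score, label)
```

Training (e.g., by backpropagation) amounts to adjusting these weight values until the scores match the desired land-cover classes.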
Reference source: https://courses.edx.org/courses/course-v1:ColumbiaX+CSMM.101x+2T2017/course/