New Theory on Deep Learning: Information Bottleneck

IoT For All News Team

Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented a new theory explaining how deep learning works, called the “information bottleneck.”

The theory says that deep learning works because of an information bottleneck procedure: the network compresses noisy input data while preserving the information about what the data represent. A network approaches a threshold at which the data are compressed as much as possible without sacrificing its ability to label new examples correctly, that is, to generalize.
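In Tishby's earlier formulation of the information bottleneck (with Pereira and Bialek), this trade-off is written as an explicit objective. As a sketch, with X the input, Y the label, and T the compressed internal representation:

```latex
% Standard information bottleneck objective: choose the encoding p(t|x) that minimizes
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y)
% I(X;T): how much of the input the representation retains (compression term)
% I(T;Y): how much the representation still says about the label (prediction term)
% beta sets the trade-off; larger beta favors preserving label information
```

The threshold described above corresponds to the optimal trade-off curve this objective traces out as the parameter beta is varied.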

Tishby and his students also made the intriguing discovery that deep learning occurs in two phases: a short “fitting” phase and a much longer “compression” phase. In the first phase, the network learns to label its training data accurately; in the second, it sheds irrelevant details of the input, retaining only the features that matter for that classification.
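One way to observe these phases, in the spirit of the analysis Tishby's group performed, is to treat a hidden layer's activations as a discrete variable T and track the mutual information I(X;T) and I(T;Y) over training. The sketch below is a minimal, illustrative estimator using simple equal-width binning; the function names, the binning scheme, and the usage shown in comments are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def mutual_information(a_ids, b_ids):
    """Estimate I(A; B) in bits from paired samples of discrete symbols."""
    n = len(a_ids)
    joint, pa, pb = {}, {}, {}
    for a, b in zip(a_ids, b_ids):
        joint[(a, b)] = joint.get((a, b), 0) + 1
        pa[a] = pa.get(a, 0) + 1
        pb[b] = pb.get(b, 0) + 1
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        mi += p_ab * np.log2(p_ab / ((pa[a] / n) * (pb[b] / n)))
    return mi

def discretize(activations, n_bins=30):
    """Bin a layer's continuous activations so each sample's activation
    pattern becomes one discrete symbol T (the compressed representation)."""
    edges = np.linspace(activations.min(), activations.max(), n_bins + 1)
    binned = np.digitize(activations, edges)
    return [tuple(row) for row in binned]

# Hypothetical use during training (names are illustrative only):
#   x_ids = list(range(len(y)))    # treat each input sample as its own symbol
#   i_xt = mutual_information(x_ids, discretize(hidden_activations))
#   i_ty = mutual_information(discretize(hidden_activations), y)
# Tracking (i_xt, i_ty) across epochs traces the two phases: I(T;Y) rises
# quickly during fitting, then I(X;T) slowly falls during compression.
```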

This is one of many new and exciting discoveries in machine learning and deep learning, as researchers make progress toward machines that learn more like humans and animals.

Read the full article from Quanta Magazine here.

Author
IoT For All News Team
The News Team curates the top news from around the web, helping you to stay up-to-date on what's important!