In data science, and in machine learning especially, it is common to wonder “what happens in the hidden layers?” when thinking about deep neural networks. These networks can appear to be “black box” algorithms: much of their internal computation is not yet well understood, and that lack of understanding can be costly in time, hardware, and money, particularly in the realm of “Big Data”. In “A Closer Look at Memorization in Deep Networks,” Arpit et al. …

Svitlana Glibova

Former sommelier venturing into the world of data science | B.S. in Mathematics | Seattle, WA
