Neural Nets As Universal Approximators

This week we’re starting with Multilayer Perceptrons (MLPs), the most basic form of neural network. We’ll go over their architecture and activation functions, and see how they serve as universal function approximators.
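To make the "universal approximator" idea concrete before we dive in, here is a minimal sketch: a one-hidden-layer MLP with a tanh activation, trained from scratch with plain gradient descent to approximate sin(x). The layer size, learning rate, and target function are illustrative choices for this sketch, not prescriptions from the posts that follow.

```python
import numpy as np

# Toy dataset: approximate f(x) = sin(x) on [-pi, pi].
rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

# One hidden layer of 32 tanh units, linear output (sizes are arbitrary).
hidden = 32
W1 = rng.normal(0.0, 1.0, (1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.5, (hidden, 1))
b2 = np.zeros(1)

lr = 0.1
for step in range(5000):
    # Forward pass: tanh hidden layer, linear output.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y  # gradient of 0.5 * MSE w.r.t. pred

    # Backward pass (manual backprop for mean squared error).
    grad_W2 = h.T @ err / len(X)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)  # tanh'(z) = 1 - tanh(z)^2
    grad_W1 = X.T @ dh / len(X)
    grad_b1 = dh.mean(axis=0)

    # Plain gradient descent update.
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"final MSE: {mse:.4f}")
```

Even this tiny network drives the error far below the variance of the target, which is the universal approximation theorem at work: with enough hidden units, a single nonlinear layer can fit any continuous function on a compact interval to arbitrary accuracy.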



