Neural Networks

2021-08-09


Definition of Neural Networks

Neural networks are function approximators that stack affine transformations followed by non-linear transformations.
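
In symbols, one such stage is an affine transformation $Wx + b$ followed by a non-linear activation $\rho$ (a minimal sketch of the definition above):

$$
h = \rho(Wx + b)
$$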

(Figure: GoogLeNet / Inception v1 architecture, from Sik-Ho Tsang's review on Medium)

1D Input Linear Neural Networks


  • Input is 1D, output is 1D.
  • Data are points on a 2D plane.
  • Model: $\hat{y} = wx + b$
  • Loss: mean squared error (MSE), written out below.
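
In symbols, with $N$ data points:

$$
\hat{y}_i = w x_i + b, \qquad
\mathcal{L}(w, b) = \frac{1}{N} \sum_{i=1}^{N} \left( y_i - \hat{y}_i \right)^2
$$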

The MSE loss is minimized by taking partial derivatives of the loss with respect to each parameter.
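
Differentiating the loss above with respect to each parameter gives:

$$
\frac{\partial \mathcal{L}}{\partial w} = -\frac{2}{N} \sum_{i=1}^{N} x_i \left( y_i - \hat{y}_i \right), \qquad
\frac{\partial \mathcal{L}}{\partial b} = -\frac{2}{N} \sum_{i=1}^{N} \left( y_i - \hat{y}_i \right)
$$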


  • Backpropagation computes the partial derivatives of the loss function with respect to all parameters.
  • Gradient descent updates each weight using its own partial derivative, as in the sketch after this list.
  • $\eta$ (eta) is the stepsize (learning rate).
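
A minimal NumPy sketch of this loop, updating $w \leftarrow w - \eta \, \partial\mathcal{L}/\partial w$ and $b \leftarrow b - \eta \, \partial\mathcal{L}/\partial b$ (the synthetic data, stepsize, and iteration count are illustrative assumptions, not from the original):

```python
import numpy as np

# Synthetic 1D data around the line y = 2x + 1 (illustrative assumption)
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * x + 1.0 + 0.1 * rng.standard_normal(100)

w, b = 0.0, 0.0  # initial parameters
eta = 0.1        # stepsize (eta)

for _ in range(200):
    y_hat = w * x + b
    # Partial derivatives of the MSE loss with respect to w and b
    grad_w = -2.0 * np.mean(x * (y - y_hat))
    grad_b = -2.0 * np.mean(y - y_hat)
    # Gradient descent: update each parameter against its gradient
    w -= eta * grad_w
    b -= eta * grad_b

print(w, b)  # approaches (2.0, 1.0)
```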

Multi-Dimensional Input

  • Model: $y = W^\top x + b$ (see the sketch below)

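A minimal NumPy sketch of the multi-dimensional affine model (the input dimension 3 and output dimension 2 are illustrative assumptions):

```python
import numpy as np

x = np.ones(3)        # input vector, shape (3,)
W = np.ones((3, 2))   # weight matrix, shape (3, 2)
b = np.zeros(2)       # bias vector, shape (2,)

y = W.T @ x + b       # y = W^T x + b, shape (2,)
print(y)              # [3. 3.]
```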

Multi-layer perceptron

A multi-layer perceptron stacks layers of matrices, adding a non-linear transformation (activation function) between the stacked layers.


  • Model: $\hat{y} = W_2^\top \rho(W_1^\top x)$, where $\rho$ is the non-linear activation (sketched below).
  • Universal Approximation Theorem: there exists a single-hidden-layer feedforward network that approximates any measurable function to any desired degree of accuracy on some compact set $K$.
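
A minimal PyTorch sketch of this model (the layer sizes and the choice of ReLU as the activation are illustrative assumptions):

```python
import torch
import torch.nn as nn

# Two affine layers with a non-linear activation (rho) in between
mlp = nn.Sequential(
    nn.Linear(3, 16),  # first affine transformation: W1 x + b1
    nn.ReLU(),         # non-linear transformation (activation function)
    nn.Linear(16, 1),  # second affine transformation: W2 h + b2
)

x = torch.randn(8, 3)  # batch of 8 inputs, 3 features each
y_hat = mlp(x)         # output, shape (8, 1)
```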

Loss function


  • Regression Task: mean squared error (MSE) loss
  • Classification Task: cross entropy (CE) loss
  • Probabilistic Task: maximum likelihood estimation (negative log-likelihood), e.g., when the model predicts distribution parameters such as a mean and variance (see the sketch after this list).
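
A minimal PyTorch sketch matching the three tasks (the tensor shapes and the choice of GaussianNLLLoss for the probabilistic case are illustrative assumptions):

```python
import torch
import torch.nn as nn

y_hat = torch.randn(4, 1)                   # model predictions
y = torch.randn(4, 1)                       # regression targets
mse = nn.MSELoss()(y_hat, y)                # regression: mean squared error

logits = torch.randn(4, 3)                  # scores for 3 classes
labels = torch.tensor([0, 2, 1, 0])         # class indices
ce = nn.CrossEntropyLoss()(logits, labels)  # classification: cross entropy

mean = torch.randn(4, 1)                    # predicted mean of a Gaussian
var = torch.ones(4, 1)                      # predicted variance (positive)
nll = nn.GaussianNLLLoss()(mean, y, var)    # probabilistic: negative log-likelihood
```
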
Written by

@Young Jin Ahn

break, compose, display
©snoop2head