import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import matplotlib as mp
import sklearn
import networkx as nx
from IPython.display import Image, HTML
import laUtilities as ut
Deep Neural Networks have been effective in many applications. The content builds upon:

- Understanding Deep Learning, Simon J.D. Prince, MIT Press, 2023.
- Emergent Abilities of Large Language Models, J. Wei et al., Oct. 26, 2022.
| Invention | Theory |
|---|---|
| Telescope (1608) | Optics (1650-1700) |
| Steam Engine (1695-1715) | Thermodynamics (1824…) |
| Electromagnetism (1820) | Electrodynamics (1821) |
| Sailboat (??) | Aerodynamics (1757), Hydrodynamics (1738) |
| Airplane (1885-1905) | Wing Theory (1907-1918) |
| Computer (1941-1945) | Computer Science (1950-1960) |
| Teletype (1906) | Information Theory (1948) |
The Power and Limits of Deep Learning, Yann LeCun, March 2019.
Underlying Neural Networks is the minimization of a loss function.
Many of the machine learning models we studied this semester are based on training a parameterized model. Such a model is trained by finding the parameters that minimize the error defined through a loss function.
Here are examples we considered so far this semester:
Similarly, we'll want to find good parameter settings in neural networks.
We introduce a generic approach to find good values for the parameters in problems like these.
What allows us to unify our approach to many such problems is the following:
Thinking visually, we will see that the loss function defines a corresponding loss surface.
In this picture, the two horizontal axes correspond to the model parameters. For each parameter pair, the height of the surface is the value of the loss function at those parameter settings.
The lowest point on the surface corresponds to optimal parameters.
Notice the difference between the two kinds of surfaces.
The surface on the left corresponds to a strictly convex loss function.
A local minimum of a convex function is a global minimum.
The surface on the right corresponds to a non-convex loss function. There are local minima that are not globally minimal.
Both kinds of loss functions arise in machine learning.
For example, convex loss functions arise in linear regression and logistic regression, while non-convex loss functions arise in neural networks.
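To make the difference concrete, here is a small sketch (an aside, not from the lecture) of gradient descent on a hypothetical non-convex one-dimensional function, $f(x) = x^4 - 2x^2 + 0.5x$. The function has two basins, and which minimum the algorithm reaches depends on where it starts:

```python
# Gradient descent on a non-convex function: the result depends on the start.
# f(x) = x^4 - 2x^2 + 0.5x has two basins separated by a local maximum.
def gd(dfdx, x0, eta=0.05, nsteps=200):
    x = x0
    for _ in range(nsteps):
        x = x - eta * dfdx(x)   # the basic gradient descent update
    return x

df = lambda x: 4 * x**3 - 4 * x + 0.5   # derivative of f

left = gd(df, x0=-1.5)    # lands in the left basin (a negative x)
right = gd(df, x0=+1.5)   # lands in the right basin (a positive x)
print(left, right)
```

Both runs stop at points where the derivative is essentially zero, but they are different points; with a convex loss this cannot happen.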
The intuition of gradient descent is the following.
Imagine you are lost in the mountains and it is foggy out.
You want to find a valley, but since it is foggy, you can only see the local area around you.
What would you do?
The natural thing is: look at the terrain immediately around you, determine the direction of steepest descent, take a small step in that direction, and repeat.
The key to this idea is formalizing the “direction of steepest descent.”
We do this by using the differentiability of the loss function. This means that the gradient of the loss function, which represents its rate of spatial change, is well defined.
The (negative) of the gradient represents the direction of steepest descent.
import math
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets
%matplotlib inline
We’ll build our understanding of the gradient by considering derivatives of single variable functions.
Let's start with the quadratic function

$$f(x) = 3x^2 - 4x + 5.$$

In code, the function is:
def f(x):
    return 3*x**2 - 4*x + 5
Here is a plot of $f(x)$:
import numpy as np

plt.figure(figsize=(6, 6))
xs = np.arange(-5, 5, 0.25)
ys = f(xs)
plt.plot(xs, ys)
plt.title('$f(x) = 3x^2 - 4x + 5$')
plt.show()
Assume that we want to find the minimum of this function.
Question
What do we know about where the minimum is in terms of the slope of the curve?
Answer
The slope must necessarily be zero.
Question
How do we calculate the slope?
Answer
Calculate the derivative.
The derivative of a function $f$ is written

$$f'(x)$$

or

$$\frac{df}{dx}.$$

You may see both notations. The nice thing about Leibniz's notation is that it is easy to express partial derivatives. A partial derivative of a function $f(x, y)$ with respect to $x$ is written $\frac{\partial f}{\partial x}$.
A function $f$ is differentiable at $x$ if

$$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$$

exists at $x$.
The derivative of our example is $f'(x) = 6x - 4$.

You can compute this using the derivative rule $\frac{d}{dx} x^n = n x^{n-1}$.

Setting $f'(x) = 0$, the minimum of $f$ is at $x = 2/3$.
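As a quick sanity check (an aside, not part of the original notes), we can approximate the derivative numerically from the limit definition and compare it to $f'(x) = 6x - 4$ at a sample point:

```python
def f(x):
    return 3*x**2 - 4*x + 5

def df(x):
    return 6*x - 4   # the derivative computed above

# finite-difference approximation of the limit definition at x = 2
x, h = 2.0, 1e-6
approx = (f(x + h) - f(x)) / h
print(approx, df(x))   # the two values should nearly agree
```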
Plotting the point $(2/3, f(2/3))$:
xs = np.arange(-5, 5, 0.25)
ys = f(xs)
plt.plot(xs, ys)
# Add a circle point at (2/3, f(2/3))
plt.plot([2/3], [f(2/3)], 'o')
# Show the plot
plt.show()
Assume $f$ is differentiable. The derivative $f'(x)$ is the slope of the tangent line at the point $x$.

Here is an example of a tangent line at $x = -2$.
import matplotlib.pyplot as plt
import numpy as np
import ipywidgets as widgets
from IPython.display import display

# Define the function f(x)
def f(x):
    return 3 * x ** 2 - 4 * x + 5

# Define the derivative f'(x)
def df(x):
    return 6 * x - 4

# Function to plot f(x) and its tangent line at x = x_value
def plot_with_tangent(x_value):
    # Generate x values for the function
    x = np.linspace(-5, 5, 400)
    y = f(x)

    # Compute the slope and function value at x = x_value
    slope_at_x_value = df(x_value)
    f_at_x_value = f(x_value)

    # Generate x and y values for the tangent line near x = x_value
    x_tangent = np.linspace(x_value - 2, x_value + 2, 400)
    y_tangent = f_at_x_value + slope_at_x_value * (x_tangent - x_value)

    # Create the plot
    plt.figure(figsize=(7, 4))
    plt.plot(x, y, label='$f(x) = 3x^2 - 4x + 5$')
    plt.plot(x_tangent, y_tangent, linestyle='--',
             label=f'Tangent line with slope {df(x_value):.2f} at x = {x_value:.2f}')
    plt.scatter([x_value], [f_at_x_value], color='red')  # point of tangency
    plt.title('Plot of the function $f(x) = 3x^2 - 4x + 5$')
    plt.xlabel('$x$')
    plt.ylabel('$f(x)$')
    plt.grid(True)
    plt.legend()
    plt.show()

plot_with_tangent(-2)

# Create an interactive widget
# widgets.interact(plot_with_tangent, x_value=widgets.FloatSlider(value=-2, min=-5, max=5, step=0.1));
Important Note:

From the graph above, consider the sign of the slope at $x = -2$.

The slope (derivative) of a function tells us which direction to move in order to decrease the function's value: when the slope is negative, increasing $x$ decreases $f(x)$; when the slope is positive, decreasing $x$ decreases $f(x)$.
This is key to understanding how we adjust the weights of our model in order to minimize the loss function.
In 2 or higher dimensions, the concept of derivative generalizes to the gradient. The gradient of a multivariate function $f(\mathbf{w})$, with $\mathbf{w} \in \mathbb{R}^n$, is

$$\nabla f(\mathbf{w}) = \left[ \frac{\partial f}{\partial w_1}, \ldots, \frac{\partial f}{\partial w_n} \right]^T.$$

It is a vector of dimension $n$.
For multivariate functions, there are multiple directions to move. Similar to the 1-D case, we want to take a step in a direction that moves us closer to the minimum value.

It turns out that if we take a small step of unit length, the gradient is the direction that maximizes the increase in the loss function.
To descend, we compute the negative of the gradient. This is the idea of gradient descent for minimization.
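We can check this claim numerically. The sketch below (an aside, using the hypothetical function $f(\mathbf{w}) = 3w_1^2 + 7w_2^2$) samples many unit directions at a point and confirms that the best sampled descent direction is essentially the normalized negative gradient:

```python
import numpy as np

# hypothetical 2-D function f(w) = 3*w1^2 + 7*w2^2 and its gradient
def f(w):
    return 3 * w[0]**2 + 7 * w[1]**2

def grad(w):
    return np.array([6 * w[0], 14 * w[1]])

w = np.array([1.0, 1.0])
g = grad(w)
step = 1e-3

# sample unit directions around the circle and measure the change in f
angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
decreases = np.array([f(w + step * d) - f(w) for d in dirs])
best = dirs[np.argmin(decreases)]

print(best, -g / np.linalg.norm(g))   # best direction ≈ normalized negative gradient
```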
To formalize gradient descent, consider a vector of parameters $\mathbf{w} \in \mathbb{R}^n$.

We introduce a differentiable loss function $\mathcal{L}(\mathbf{w})$ that we seek to minimize.

In linear regression, the loss function is:

$$\mathcal{L}(\mathbf{\beta}) = \Vert \mathbf{y} - X\mathbf{\beta} \Vert^2,$$

where $X$ is the matrix of input data and $\mathbf{y}$ is the vector of observed outputs.

The gradient is the vector of partial derivatives

$$\nabla \mathcal{L}(\mathbf{w}) = \left[ \frac{\partial \mathcal{L}}{\partial w_1}, \ldots, \frac{\partial \mathcal{L}}{\partial w_n} \right]^T.$$
As you can see from the above figure, in general the gradient varies depending on where you are in the parameter space.
Each time we seek to improve our parameter estimates, we take a step in the negative direction of the gradient … "negative direction" because the gradient specifies the direction of maximum increase, and we want to decrease the loss function.
How big a step should we take?
For the step size, we will use a scalar value $\eta$ called the learning rate.
The learning rate is a hyperparameter that needs to be tuned for a given problem. It can also be modified adaptively as the algorithm progresses.
The gradient descent algorithm is:

1. Start from an initial guess $\mathbf{w}_0$.
2. At each step $k$, update $\mathbf{w}_{k+1} = \mathbf{w}_k - \eta \nabla \mathcal{L}(\mathbf{w}_k)$.
3. Repeat until a stopping criterion is met.
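As a minimal illustration of the update rule (an aside, not the lecture's regression example), here is the iteration applied to the single-variable function $f(x) = 3x^2 - 4x + 5$ from earlier; it converges to the minimizer $x = 2/3$:

```python
def df(x):
    return 6 * x - 4       # gradient (derivative) of f(x) = 3x^2 - 4x + 5

x = -2.0                   # initial guess w_0
eta = 0.1                  # learning rate
for _ in range(100):
    x = x - eta * df(x)    # the gradient descent update
print(x)                   # approaches 2/3
```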
How do we know if gradient descent has converged?

Typical stopping criteria for the algorithm are:

- the norm of the gradient falls below a small tolerance,
- the improvement in loss between iterations becomes negligible, or
- a maximum number of iterations is reached.
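For instance, a gradient-norm stopping criterion might look like the following sketch (the tolerance and step budget here are illustrative, not values from the lecture):

```python
def df(x):
    return 6 * x - 4     # derivative of f(x) = 3x^2 - 4x + 5

x, eta, tol = -2.0, 0.1, 1e-8
steps = 0
# stop when the gradient is (nearly) zero or a step budget is exhausted
while abs(df(x)) > tol and steps < 10_000:
    x = x - eta * df(x)
    steps += 1
print(x, steps)          # converges near x = 2/3 well before the budget
```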
Assume we have the following plotted data.
def centerAxes(ax):
    ax.spines['left'].set_position('zero')
    ax.spines['right'].set_color('none')
    ax.spines['bottom'].set_position('zero')
    ax.spines['top'].set_color('none')
    ax.xaxis.set_ticks_position('bottom')
    ax.yaxis.set_ticks_position('left')
    bounds = np.array([ax.axes.get_xlim(), ax.axes.get_ylim()])
    ax.plot(bounds[0][0], bounds[1][0], '')
    ax.plot(bounds[0][1], bounds[1][1], '')

n = 10
beta = np.array([1., 0.5])
ax = plt.figure(figsize = (7, 7)).add_subplot()
centerAxes(ax)
np.random.seed(1)
xlin = -10.0 + 20.0 * np.random.random(n)
y = beta[0] + (beta[1] * xlin) + np.random.randn(n)
ax.plot(xlin, y, 'ro', markersize = 10)
plt.show()
Let’s fit a least-squares line to this data.
The loss function for this problem is the least-squares error:

$$\mathcal{L}(\mathbf{\beta}) = \Vert \mathbf{y} - X\mathbf{\beta} \Vert^2.$$
Rather than solve this problem using the normal equations, let’s solve it with gradient descent.
Here is the line we’d like to find.
ax = plt.figure(figsize = (7, 7)).add_subplot()
centerAxes(ax)
ax.plot(xlin, y, 'ro', markersize = 10)
ax.plot(xlin, beta[0] + beta[1] * xlin, 'b-')
plt.text(-9, 3, r'$y = \beta_0 + \beta_1x$', size=20)
plt.show()
The data are stored in `xlin` and `y`.

First, let's create our design matrix $X$ by prepending a column of ones:

X = np.column_stack([np.ones((n, 1)), xlin])
Now, let's visualize the loss function $\mathcal{L}(\mathbf{\beta}) = \Vert \mathbf{y} - X\mathbf{\beta} \Vert^2$ as a surface over the parameters $(\beta_0, \beta_1)$.
fig = ut.three_d_figure((23, 1), '',
                        -12, 12, -4, 4, -1, 2000,
                        figsize = (7, 7))
qf = np.array(X.T @ X)
fig.ax.view_init(azim = 60, elev = 22)
fig.plotGeneralQF(X.T @ X, -2 * (y.T @ X), y.T @ y, alpha = 0.5)
fig.ax.set_zlabel(r'$\mathcal{L}$')
fig.ax.set_xlabel(r'$\beta_0$')
fig.ax.set_ylabel(r'$\beta_1$')
fig.set_title(r'$\Vert \mathbf{y}-X\mathbf{\beta}\Vert^2$', '',
              number_fig = False, size = 18)
# fig.save();
The gradient for a least squares problem is

$$\nabla_\beta \mathcal{L} = 2 (X^T X \mathbf{\beta} - X^T \mathbf{y}).$$

(The code below drops the constant factor of 2, which is simply absorbed into the learning rate.)
For those interested in a little more insight into what these plots are showing, here is the derivation.
We start from the rule that $\Vert \mathbf{v} \Vert^2 = \mathbf{v}^T \mathbf{v}$.

Applying this rule to our loss function:

$$\mathcal{L}(\mathbf{\beta}) = (\mathbf{y} - X\mathbf{\beta})^T (\mathbf{y} - X\mathbf{\beta}) = \mathbf{y}^T\mathbf{y} - 2\mathbf{\beta}^T X^T \mathbf{y} + \mathbf{\beta}^T X^T X \mathbf{\beta}.$$

The first term, $\mathbf{y}^T\mathbf{y}$, does not depend on $\mathbf{\beta}$, so its gradient is zero.

To find the gradient, we can use standard calculus rules for derivatives involving vectors. The rules are not complicated, but the bottom line is that in this case, you can almost use the same rules you would if $\mathbf{\beta}$ were a scalar:

$$\nabla_\beta \mathcal{L} = -2 X^T \mathbf{y} + 2 X^T X \mathbf{\beta}.$$

And by the way, since we've computed the derivative as a function of $\mathbf{\beta}$, we can set it to zero and solve:

$$X^T X \mathbf{\beta} = X^T \mathbf{y},$$

which, of course, are the normal equations for this linear system.
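As a sanity check (an aside using synthetic data, not the lecture's `xlin` and `y`), the solution of the normal equations matches what a library least-squares solver returns:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(10), rng.uniform(-10, 10, size=10)])
y = 1.0 + 0.5 * X[:, 1] + rng.normal(size=10)

beta_ne = np.linalg.solve(X.T @ X, X.T @ y)        # normal equations
beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)    # library solver
print(np.allclose(beta_ne, beta_ls))               # prints True
```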
So here is our code for gradient descent:
def loss(X, y, beta):
    return np.linalg.norm(y - X @ beta) ** 2

def gradient(X, y, beta):
    return X.T @ X @ beta - X.T @ y
def gradient_descent(X, y, beta_hat, eta, nsteps = 1000):
    losses = [loss(X, y, beta_hat)]
    betas = [beta_hat]
    #
    for step in range(nsteps):
        #
        # the gradient step
        new_beta_hat = beta_hat - eta * gradient(X, y, beta_hat)
        beta_hat = new_beta_hat
        #
        # accumulate statistics
        losses.append(loss(X, y, new_beta_hat))
        betas.append(new_beta_hat)
    return np.array(betas), np.array(losses)
We'll start at the point $(\beta_0, \beta_1) = (-8, -3.2)$ and use learning rate $\eta = 0.002$.
beta_start = np.array([-8, -3.2])
eta = 0.002
betas, losses = gradient_descent(X, y, beta_start, eta)
What happens to our loss function per GD iteration?
plt.plot(np.log(losses), '.-')
plt.ylabel(r'$\log\mathcal{L}$', size = 14)
plt.xlabel('Iteration', size = 14)
plt.title('Improvement in Loss Per Iteration of GD', size = 16)
plt.show()
And how do the parameter values $\beta$ evolve?
plt.plot(betas[:, 0], betas[:, 1], '.-')
plt.xlabel(r'$\beta_0$', size = 14)
plt.ylabel(r'$\beta_1$', size = 14)
plt.title(r'Evolution of $\beta$', size = 16)
plt.show()
Notice that the improvement in loss decreases over time. Initially the gradient is steep and loss improves fast, while later on the gradient is shallow and loss doesn’t improve much per step.
Remember, in reality we are the person trying to find their way down the mountain in the fog.
In general, we cannot “see” the entire loss function surface.
Nonetheless, since we know what the loss surface looks like in this case, we can visualize the algorithm “moving” on that surface.
This visualization combines the last two plots into a single view.
# set up view
from IPython.display import HTML
import matplotlib
import matplotlib.animation as animation

matplotlib.rcParams['animation.html'] = 'jshtml'
matplotlib.rcParams['animation.embed_limit'] = 30000000
matplotlib.rcParams['animation.html'] = 'html5'

anim_frames = np.array(list(range(10)) + [2 * x for x in range(5, 25)] + [5 * x for x in range(10, 100)])

fig = ut.three_d_figure((23, 1), 'z = 3 x1^2 + 7 x2 ^2',
                        -12, 12, -4, 4, -1, 2000,
                        figsize = (5, 5))
plt.close()
fig.ax.view_init(azim = 60, elev = 22)
qf = np.array(X.T @ X)
fig.plotGeneralQF(X.T @ X, -2 * (y.T @ X), y.T @ y, alpha = 0.5)
fig.ax.set_zlabel(r'$\mathcal{L}$')
fig.ax.set_xlabel(r'$\beta_0$')
fig.ax.set_ylabel(r'$\beta_1$')
fig.set_title(r'$\Vert \mathbf{y}-X\mathbf{\beta}\Vert^2$', '',
              number_fig = False, size = 18)
#
def anim(frame):
    fig.ax.plot(betas[:frame, 0], betas[:frame, 1], 'o-', zs = losses[:frame], c = 'k', markersize = 5)
    # fig.canvas.draw()
#
# create the animation
ani = animation.FuncAnimation(fig.fig, anim,
                              frames = anim_frames,
                              fargs = None,
                              interval = 100,
                              repeat = False)

HTML(ani.to_html5_video())
We can also see how the evolution of the parameters translates into the line fitting the data.
fig, ax = plt.subplots(figsize = (5, 5))
plt.close()
centerAxes(ax)
ax.plot(xlin, y, 'ro', markersize = 10)
fit_line = ax.plot([], [])
#
# to get additional args to animate:
# def animate(angle, *fargs):
#     fargs[0].view_init(azim=angle)
def animate(frame):
    fit_line[0].set_data(xlin, betas[frame, 0] + betas[frame, 1] * xlin)
    fig.canvas.draw()
#
# create the animation
ani = animation.FuncAnimation(fig, animate,
                              frames = anim_frames,
                              fargs = None,
                              interval = 100,
                              repeat = False)

HTML(ani.to_html5_video())
Gradient Descent can be applied to many optimization problems.
However, here are some issues to consider in practice.

Setting the learning rate can be a challenge. The previous learning rate was $\eta = 0.002$.

Observe what happens when we set it to $\eta = 0.0065$:
beta_start = np.array([-8, -2])
eta = 0.0065
betas, losses = gradient_descent(X, y, beta_start, eta, nsteps = 100)
plt.figure(figsize=(4, 4))
plt.plot(np.log(losses), '.-')
plt.ylabel(r'$\log\mathcal{L}$', size = 14)
plt.xlabel('Iteration', size = 14)
plt.title('Improvement in Loss Per Iteration of GD', size = 16)
plt.show()
plt.figure(figsize=(4, 4))
plt.plot(betas[:, 0], betas[:, 1], '.-')
plt.xlabel(r'$\beta_0$', size = 14)
plt.ylabel(r'$\beta_1$', size = 14)
plt.title(r'Evolution of $\beta$', size = 16)
plt.show()
This is a total disaster. What is going on?
It is helpful to look at the progress of the algorithm using the loss surface:
fig = ut.three_d_figure((23, 1), '',
                        -12, 2, -4, 4, -1, 2000,
                        figsize = (4, 4))
qf = np.array(X.T @ X)
fig.ax.view_init(azim = 142, elev = 58)
fig.plotGeneralQF(X.T @ X, -2 * (y.T @ X), y.T @ y, alpha = 0.5)
fig.ax.set_zlabel(r'$\mathcal{L}$')
fig.ax.set_xlabel(r'$\beta_0$')
fig.ax.set_ylabel(r'$\beta_1$')
fig.set_title(r'$\Vert \mathbf{y}-X\mathbf{\beta}\Vert^2$', '',
              number_fig = False, size = 18)
nplot = 18
fig.ax.plot(betas[:nplot, 0], betas[:nplot, 1], 'o-', zs = losses[:nplot], markersize = 5);
We can now see what is happening clearly.
The steps we take are too large and each step overshoots the local minimum.
The next step then lands on a portion of the surface that is steeper and in the opposite direction.
As a result the process diverges.
For an interesting comparison, try setting $\eta$ to a slightly smaller value and observe the evolution of $\beta$.
It is important to decrease the step size when divergence appears.
Unfortunately, on a complicated loss surface, a given step size may diverge in one location or starting point, but not in another.
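To see the step-size threshold concretely, consider again the one-dimensional function $f(x) = 3x^2 - 4x + 5$ (an aside, not the regression example). The update $x \leftarrow x - \eta(6x - 4) = (1 - 6\eta)x + 4\eta$ contracts toward $2/3$ when $|1 - 6\eta| < 1$, i.e. $\eta < 1/3$, and diverges beyond that:

```python
def run_gd(eta, x0=-2.0, nsteps=50):
    x = x0
    for _ in range(nsteps):
        x = x - eta * (6 * x - 4)   # gradient step on f(x) = 3x^2 - 4x + 5
    return x

print(run_gd(0.30))   # |1 - 6*eta| = 0.8 < 1: converges toward 2/3
print(run_gd(0.40))   # |1 - 6*eta| = 1.4 > 1: each step overshoots further
```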
The loss surface for linear regression is the best possible kind. It is strictly convex, so it has a single global minimum.
For neural networks, the loss surface is more complex.
In general, the larger the neural network, the more complex the loss surface.
And deep neural networks, especially transformers, have billions of parameters.
Here’s a visualization of the loss surface for the 56 layer neural network VGG-56, from Visualizing the Loss Landscape of Neural Networks.
For a fun exploration, see https://losslandscape.com/explorer.
So far we applied gradient descent on a simple linear regression model.
As we’ll soon see, deep neural networks are much more complicated multi-stage models, with millions or billions of parameters to differentiate.
Fortunately, the Chain Rule from calculus gives us a relatively simple and scalable algorithm, called Back Propagation, that solves this problem.
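To preview the idea, here is a tiny hand-worked sketch (a hypothetical two-stage model, not the full algorithm): the gradient of a composed function is the product of the local derivatives of each stage, and we can verify it against a finite difference:

```python
import math

# hypothetical two-stage model: L(w) = (sigma(w * x) - t)^2
def forward_backward(w, x, t):
    z = w * x                      # stage 1: weighted input
    a = 1 / (1 + math.exp(-z))     # stage 2: logistic activation sigma(z)
    L = (a - t) ** 2               # loss
    # backward pass: multiply the local derivatives (the chain rule)
    dL_da = 2 * (a - t)
    da_dz = a * (1 - a)
    dz_dw = x
    return L, dL_da * da_dz * dz_dw

L, grad_w = forward_backward(w=0.5, x=1.0, t=1.0)

# compare against a finite-difference estimate of dL/dw
h = 1e-6
L2, _ = forward_backward(0.5 + h, 1.0, 1.0)
print(grad_w, (L2 - L) / h)   # the two should closely agree
```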
Let's switch gears and discuss how to construct neural networks. An _artificial neuron_ is loosely modeled on a biological neuron.
From cs231n
From cs231n
The more common artificial neuron:
From cs231n
Multiple artificial neurons can act on the same inputs. This defines a layer. We can stack more than one layer until we produce one or more outputs.

The example above shows a network with 3 inputs, two layers of 4 neurons each, followed by one layer that produces a single output value.
This architecture can be used as a binary classifier.
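As a sketch of this architecture (with hypothetical random weights and activation choices, since the lecture does not fix them), here is a forward pass through a 3-4-4-1 network whose final sigmoid output can be read as a class probability:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b):
    # each neuron computes a weighted sum of its inputs plus a bias,
    # followed by a nonlinearity (tanh here, as an illustrative choice)
    return np.tanh(W @ x + b)

x = rng.normal(size=3)                          # 3 inputs
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # first layer: 4 neurons
W2, b2 = rng.normal(size=(4, 4)), np.zeros(4)   # second layer: 4 neurons
W3, b3 = rng.normal(size=(1, 4)), np.zeros(1)   # output layer: 1 value

h1 = layer(x, W1, b1)
h2 = layer(h1, W2, b2)
out = 1 / (1 + np.exp(-(W3 @ h2 + b3)))         # sigmoid output in (0, 1)
print(out)
```

Thresholding the output at 0.5 turns this into a binary classifier; training would adjust the weights by gradient descent as described above.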