Last Updated on June 29, 2022
We usually use TensorFlow to build a neural network. However, TensorFlow is not limited to this. Behind the scenes, TensorFlow is a tensor library with automatic differentiation capability. Hence we can easily use it to solve a numerical optimization problem with gradient descent. In this post, we are going to show how TensorFlow's automatic differentiation engine, autograd, works.
After finishing this tutorial, you will learn:
- What is autograd in TensorFlow
- How to make use of autograd and an optimizer to solve an optimization problem
Let’s get started.
Overview
This tutorial is in three parts; they are:
- Autograd in TensorFlow
- Using autograd for Polynomial Regression
- Using autograd to Solve a Math Puzzle
Autograd in TensorFlow
In TensorFlow 2.x, we can define variables and constants as TensorFlow objects and build an expression with them. The expression is essentially a function of the variables. Hence we may derive its derivative function, i.e., the differentiation or the gradient. This feature is one of the many fundamental features in TensorFlow. A deep learning model makes use of it in its training loop.
It is easier to explain autograd with an example. In TensorFlow 2.x, we can create a constant matrix as follows:
import tensorflow as tf
x = tf.constant([1, 2, 3])
print(x)
print(x.shape)
print(x.dtype)
The above prints:
tf.Tensor([1 2 3], shape=(3,), dtype=int32)
(3,)
<dtype: 'int32'>
This means we created an integer vector (in the form of a Tensor object). This vector can work like a NumPy vector in most cases. For example, we can compute x+x or 2*x, and the result is just what we would expect. TensorFlow comes with many functions for array manipulation that match NumPy, such as tf.transpose or tf.concat.
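For instance, here is a quick sketch of a few of these NumPy-like operations on the constant above (the outputs noted in the comments are what these calls are expected to print):

import tensorflow as tf

x = tf.constant([1, 2, 3])
print(x + x)                                # tf.Tensor([2 4 6], shape=(3,), dtype=int32)
print(2 * x)                                # tf.Tensor([2 4 6], shape=(3,), dtype=int32)
print(tf.concat([x, x], axis=0))            # a length-6 vector: [1 2 3 1 2 3]
print(tf.transpose(tf.reshape(x, (1, 3))))  # reshape to (1,3), then transpose to (3,1)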
Creating variables in TensorFlow is just the same, for example:
import tensorflow as tf
x = tf.Variable([1, 2, 3])
print(x)
print(x.shape)
print(x.dtype)
This would print:
<tf.Variable 'Variable:0' shape=(3,) dtype=int32, numpy=array([1, 2, 3], dtype=int32)>
(3,)
<dtype: 'int32'>
and the operations (such as x+x and 2*x) that we can apply to Tensor objects can also be applied to variables. The only difference between variables and constants is that the former allows its value to change while the latter is immutable. This distinction is important when we run a gradient tape, as follows:
import tensorflow as tf
x = tf.Variable(3.6)
with tf.GradientTape() as tape:
    y = x*x

dy = tape.gradient(y, x)
print(dy)
This prints:
tf.Tensor(7.2, shape=(), dtype=float32)
What it does is the following: We defined a variable x (with value 3.6) and then created a gradient tape. While the gradient tape is active, we compute y=x*x, i.e., $y=x^2$. The gradient tape monitors how the variables are manipulated. Afterwards, we ask the gradient tape to find the derivative $\frac{dy}{dx}$. We know $y=x^2$ means $y'=2x$, so the output gives us a value of $3.6\times 2=7.2$.
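As a small side note (a minimal sketch of the same example): the tape automatically tracks trainable variables only. If we want the gradient with respect to a constant tensor instead, we have to ask the tape to watch it explicitly:

import tensorflow as tf

x = tf.constant(3.6)
with tf.GradientTape() as tape:
    tape.watch(x)   # constants are not tracked automatically
    y = x*x
dy = tape.gradient(y, x)
print(dy)           # tf.Tensor(7.2, shape=(), dtype=float32)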
Using autograd for Polynomial Regression
How is this feature in TensorFlow helpful? Let's consider the case where we have a polynomial in the form of $y=f(x)$, and we are given several $(x,y)$ samples. How can we recover the polynomial $f(x)$? One way to do it is to assume random coefficients for the polynomial and feed in the samples $(x,y)$. If the polynomial is found, we should see the value of $y$ match $f(x)$. The closer they are, the closer our estimate is to the correct polynomial.
This is indeed a numerical optimization problem such that we want to minimize the difference between $y$ and $f(x)$. We can use gradient descent to solve it.
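Concretely, the objective is to pick the coefficients of our candidate polynomial $\hat{f}$ so that the total squared difference over the $N$ samples is as small as possible:

$$\min_{\hat{f}} \sum_{i=1}^{N} \big(y_i - \hat{f}(x_i)\big)^2$$

This is exactly the quantity the gradient descent loop below drives toward zero.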
Let’s consider an example. We can build a polynomial $f(x)=x^2 + 2x + 3$ in NumPy as follows:
import numpy as np
polynomial = np.poly1d([1, 2, 3])
print(polynomial)
This prints:

   2
1 x + 2 x + 3

We may use the polynomial as a function, such as:

print(polynomial(1.5))

And this prints 8.25, for $(1.5)^2 + 2\times(1.5) + 3 = 8.25$.
Now we may generate a number of samples from this function, using NumPy:
N = 20 # number of samples
# Generate random samples roughly between -10 to +10
X = np.random.randn(N,1) * 5
Y = polynomial(X)
In the above, both X and Y are NumPy arrays of shape (20,1), and they are related as $y=f(x)$ for the polynomial $f(x)$.
Now assume we do not know what our polynomial is, except that it is quadratic, and we would like to recover the coefficients. Since a quadratic polynomial is of the form $Ax^2+Bx+C$, we have three unknowns to find. We can find them using a gradient descent algorithm that we implement ourselves, or using an existing gradient descent optimizer. The following demonstrates how it works:
import tensorflow as tf
# Assume samples X and Y are prepared elsewhere
XX = np.hstack([X*X, X, np.ones_like(X)])
w = tf.Variable(tf.random.normal((3,1)))   # the 3 coefficients
x = tf.constant(XX, dtype=tf.float32)      # input sample
y = tf.constant(Y, dtype=tf.float32)       # output sample
optimizer = tf.keras.optimizers.Nadam(learning_rate=0.01)
print(w)

for _ in range(1000):
    with tf.GradientTape() as tape:
        y_pred = x @ w
        mse = tf.reduce_sum(tf.square(y - y_pred))
    grad = tape.gradient(mse, w)
    optimizer.apply_gradients([(grad, w)])

print(w)
The print statement before the for loop gives three random numbers, such as:
<tf.Variable 'Variable:0' shape=(3, 1) dtype=float32, numpy=
array([[-2.1450958 ],
       [-1.1278448 ],
       [ 0.31241694]], dtype=float32)>
but the one after the for loop gives us coefficients very close to those of our polynomial:
<tf.Variable 'Variable:0' shape=(3, 1) dtype=float32, numpy=
array([[1.0000628],
       [2.0002015],
       [2.996219 ]], dtype=float32)>
What the above code does is the following: First, we create a variable vector w of 3 values, namely the coefficients $A, B, C$. Then we create an array of shape $(N,3)$, where $N$ is the number of samples in our array X. This array has 3 columns, which are respectively the values of $x^2$, $x$, and 1. We build such an array from the vector X using the np.hstack() function. Similarly, we build the TensorFlow constant y from the NumPy array Y.
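To make the shape of this array concrete, here is a small sketch using a made-up three-sample X (the printed matrix is shown in the comments):

import numpy as np

X = np.array([[1.0], [2.0], [3.0]])         # three samples, shape (3,1)
XX = np.hstack([X*X, X, np.ones_like(X)])   # columns: x^2, x, 1
print(XX)
# [[1. 1. 1.]
#  [4. 2. 1.]
#  [9. 3. 1.]]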
Afterwards, we use a for loop to run gradient descent for 1000 iterations. In each iteration, we compute $x \times w$ in matrix form to find $Ax^2+Bx+C$ and assign it to the variable y_pred. Then we compare y and y_pred and compute the squared error (the code sums the squared differences, which for the purpose of minimization is equivalent to the mean square error up to a constant factor). Next, we derive the gradient, i.e., the rate of change of this error with respect to the coefficients w. Based on this gradient, we use gradient descent to update w.
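For intuition about the update step, the following is a minimal sketch of a single plain gradient descent update on a toy one-variable loss; the Nadam optimizer used above follows the same pattern, just with a more elaborate update rule:

import tensorflow as tf

w = tf.Variable(5.0)
learning_rate = 0.01
with tf.GradientTape() as tape:
    loss = tf.square(w - 3.0)           # minimized at w = 3
grad = tape.gradient(loss, w)
w.assign_sub(learning_rate * grad)      # w <- w - learning_rate * grad
print(w)                                # slightly closer to 3 than the initial 5.0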
In essence, the above code finds the coefficients w that minimize the squared error.
Putting everything together, the following is the complete code:
import numpy as np
import tensorflow as tf

N = 20   # number of samples

# Generate random samples roughly between -10 to +10
polynomial = np.poly1d([1, 2, 3])
X = np.random.randn(N,1) * 5
Y = polynomial(X)

# Prepare input as an array of shape (N,3)
XX = np.hstack([X*X, X, np.ones_like(X)])

# Prepare TensorFlow objects
w = tf.Variable(tf.random.normal((3,1)))   # the 3 coefficients
x = tf.constant(XX, dtype=tf.float32)      # input sample
y = tf.constant(Y, dtype=tf.float32)       # output sample
optimizer = tf.keras.optimizers.Nadam(learning_rate=0.01)
print(w)

# Run optimizer
for _ in range(1000):
    with tf.GradientTape() as tape:
        y_pred = x @ w
        mse = tf.reduce_sum(tf.square(y - y_pred))
    grad = tape.gradient(mse, w)
    optimizer.apply_gradients([(grad, w)])

print(w)
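As an optional sanity check (a sketch that assumes the script above has just run, so w and polynomial are still in scope), we can compare the recovered polynomial against the true one on a few test points:

# Assumes w and polynomial from the script above are still in scope
fitted = np.poly1d(w.numpy().flatten())   # rebuild a polynomial from the fitted coefficients
x_test = np.linspace(-10, 10, 5)
print(fitted(x_test))                     # predictions from the recovered coefficients
print(polynomial(x_test))                 # true values; the two should be very close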
Using autograd to Solve a Math Puzzle
In the above, we used 20 samples, which is more than enough to fit a quadratic equation. We may use gradient descent to solve some math puzzles as well. For example, consider the following problem:
[ A ]  +  [ B ]  =  9
  +          -
[ C ]  -  [ D ]  =  1
  =          =
  8          2
In other words, we would like to find the values of $A,B,C,D$ such that:
$$\begin{aligned}
A + B &= 9 \\
C - D &= 1 \\
A + C &= 8 \\
B - D &= 2
\end{aligned}$$
This can also be solved using autograd, as follows:
import tensorflow as tf
import random

A = tf.Variable(random.random())
B = tf.Variable(random.random())
C = tf.Variable(random.random())
D = tf.Variable(random.random())

# Gradient descent loop
EPOCHS = 1000
optimizer = tf.keras.optimizers.Nadam(learning_rate=0.1)
for _ in range(EPOCHS):
    with tf.GradientTape() as tape:
        y1 = A + B - 9
        y2 = C - D - 1
        y3 = A + C - 8
        y4 = B - D - 2
        sqerr = y1*y1 + y2*y2 + y3*y3 + y4*y4
    gradA, gradB, gradC, gradD = tape.gradient(sqerr, [A, B, C, D])
    optimizer.apply_gradients([(gradA, A), (gradB, B), (gradC, C), (gradD, D)])

print(A)
print(B)
print(C)
print(D)
There can be multiple solutions to this problem. One solution is the following:
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=4.6777573>
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=4.3222437>
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=3.3222427>
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=2.3222432>
This means $A=4.68$, $B=4.32$, $C=3.32$, and $D=2.32$. We can verify that this solution fits the problem.
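For example, plugging the rounded values back into the four equations (a quick arithmetic sketch; small rounding errors are expected):

A, B, C, D = 4.68, 4.32, 3.32, 2.32
print(A + B)   # should be about 9
print(C - D)   # should be about 1
print(A + C)   # should be about 8
print(B - D)   # should be about 2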
What the above code does is define the four unknowns as variables with random initial values. Then we compute the result of the four equations and compare it to the expected answers. We then sum up the squared errors and ask TensorFlow to minimize the total. The minimum possible squared error is zero, attained when our solution fits the problem exactly.
Note the way we ask the gradient tape to produce the gradients: we ask for the gradient of sqerr with respect to A, B, C, and D in a single call, so four gradients are returned at once, and we then apply each gradient to its respective variable in every iteration. Retrieving them in one call, rather than in four separate calls to tape.gradient(), is required because by default the tape releases its resources after the gradient is computed once.
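If we did want to call tape.gradient() multiple times, the tape would have to be created with persistent=True; a small sketch illustrating the difference:

import tensorflow as tf

A = tf.Variable(1.0)
B = tf.Variable(2.0)
with tf.GradientTape(persistent=True) as tape:
    sqerr = tf.square(A + B - 9)
gradA = tape.gradient(sqerr, A)   # first call
gradB = tape.gradient(sqerr, B)   # second call works only because persistent=True
print(gradA, gradB)
del tape                          # release the tape's resources when done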
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
Summary
In this post, we demonstrated how TensorFlow’s automatic differentiation works. This is the building block for carrying out deep learning training. Specifically, you learned:
- What is automatic differentiation in TensorFlow
- How we can use gradient tape to carry out automatic differentiation
- How we can use automatic differentiation to solve an optimization problem