Natural Language Processing with Deep Learning
Stanford - Winter 2022
These are my solutions to the CS224n course assignments offered by Stanford University (Winter 2022). The written questions are explained in detail, and the code is brief and commented (see the examples below). From what I have seen, these should be the most thoroughly explained solutions available.
Check out my solutions for CS231n. From what I've checked, they should be the shortest.
For conda users, the instructions on how to set up the environment are given in the handouts. For pip
users, I've gathered all the requirements in one file. Please set up the virtual environment and install the dependencies (Linux users):
$ python -m venv venv
$ source venv/bin/activate
$ pip install -r requirements.txt
You can install everything with conda too (see this). For code that requires Azure Virtual Machines, I was able to run everything successfully on Google Colab with a free account.
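If you go the conda route, something along these lines should work (the environment name and the pinned Python version are my own choices, not taken from the handouts):
$ conda create -n cs224n python=3.8
$ conda activate cs224n
$ pip install -r requirements.txt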
Note: Python 3.8 or newer should be used
For every assignment, i.e., for directories a1 through a5, there are both coding and written parts. The solutions.pdf files are generated from the latex directories, where the provided templates were filled in while answering the questions from the handout.pdf files and completing the code.
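If you want to rebuild one of the PDFs yourself, something like the following should work from inside an assignment's latex directory (the exact directory layout and .tex file name are assumptions on my part):
$ cd a1/latex
$ pdflatex solutions.tex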
- A1: Exploring Word Vectors (Done)
- A2: word2vec (Done)
- A3: Dependency Parsing (Done)
- A4: Neural Machine Translation with RNNs and Analyzing NMT Systems (Done)
- A5: Self-Attention, Transformers, and Pretraining (Done)
Written (Attention Exploration)
Question (b) ii.
As before, let the setup from the previous part hold (the full statement is in the handout).
Hint: while the softmax function will never exactly average the two vectors, you can get close by using a large scalar multiple in the expression.
Answer
Assume that the query is chosen to put (nearly) all of the attention weight, split evenly, on the two relevant keys; scaling that query by a large constant drives the softmax weights toward 1/2 each (the full derivation is in solutions.pdf).
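A minimal sketch of the argument, assuming the usual setup for this part (mutually orthogonal unit-norm keys $k_1, \dots, k_n$, values $v_1, \dots, v_n$, and two target values $v_a, v_b$; this setup is my assumption, since the full statement lives in the handout): take
\[
q = t\,(k_a + k_b), \qquad t \gg 0 .
\]
With orthonormal keys, $q^\top k_a = q^\top k_b = t$ and $q^\top k_i = 0$ for $i \neq a, b$, so the attention weights and output become
\[
\alpha_a = \alpha_b = \frac{e^{t}}{2e^{t} + (n - 2)} \xrightarrow{\; t \to \infty \;} \frac{1}{2},
\qquad
c = \sum_i \alpha_i v_i \approx \tfrac{1}{2}\,(v_a + v_b).
\]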
Code (Negative Sampling)
def negSamplingLossAndGradient(
centerWordVec,
outsideWordIdx,
outsideVectors,
dataset,
K=10
):
""" Negative sampling loss function for word2vec models
Implement the negative sampling loss and gradients for a centerWordVec
    and an outsideWordIdx word vector as a building block for word2vec
models. K is the number of negative samples to take.
Note: The same word may be negatively sampled multiple times. For
example if an outside word is sampled twice, you shall have to
double count the gradient with respect to this word. Thrice if
it was sampled three times, and so forth.
Arguments/Return Specifications: same as naiveSoftmaxLossAndGradient
"""
# Negative sampling of words is done for you. Do not modify this if you
# wish to match the autograder and receive points!
negSampleWordIndices = getNegativeSamples(outsideWordIdx, dataset, K)
indices = [outsideWordIdx] + negSampleWordIndices
### YOUR CODE HERE (~10 Lines)
### Please use your implementation of sigmoid in here.
    # Collapse repeated sample indices: process each unique word once and
    # weight its contribution by how many times it was sampled (n_reps)
un, idx, n_reps = np.unique(indices, return_index=True, return_counts=True)
U_concat = outsideVectors[un]
    # Sign trick: negate the true outside word's count and negate the negative
    # samples' vectors; then sigmoid(centerWordVec @ U_concat.T) yields
    # sigma(u_o^T v_c) for the outside word and sigma(-u_k^T v_c) for negatives,
    # while the sign of n_reps carries the sign needed for gradOutsideVecs
    n_reps[idx == 0] *= -1
    U_concat[idx != 0] *= -1
S = sigmoid(centerWordVec @ U_concat.T)
# Find loss and derivatives w.r.t. v_c, U
loss = -(np.abs(n_reps) * np.log(S)).sum()
gradCenterVec = np.abs(n_reps) * (1 - S) @ -U_concat
gradOutsideVecs = np.zeros_like(outsideVectors)
gradOutsideVecs[un] = n_reps[:, None] * np.outer(1 - S, centerWordVec)
### END YOUR CODE
return loss, gradCenterVec, gradOutsideVecs
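To convince yourself that collapsing repeated negative samples with np.unique gives the same loss as looping over every sampled index, here is a small self-contained check (the toy vectors and hard-coded sample indices are made up for illustration, and sigmoid is redefined locally instead of being imported from the assignment code):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
V, d = 6, 4                         # toy vocabulary size and embedding dimension
outsideVectors = rng.normal(size=(V, d))
centerWordVec = rng.normal(size=d)

outsideWordIdx = 2
negSamples = [4, 1, 4, 5, 1, 1]     # note the repeated samples

# Reference: handle every sampled index separately
loss_loop = -np.log(sigmoid(outsideVectors[outsideWordIdx] @ centerWordVec))
for k in negSamples:
    loss_loop -= np.log(sigmoid(-outsideVectors[k] @ centerWordVec))

# Vectorized: collapse repeats with np.unique and weight by the counts
indices = [outsideWordIdx] + negSamples
un, idx, n_reps = np.unique(indices, return_index=True, return_counts=True)
signs = np.where(idx == 0, 1.0, -1.0)   # +u_o for the true outside word, -u_k for negatives
S = sigmoid((signs[:, None] * outsideVectors[un]) @ centerWordVec)
loss_vec = -(n_reps * np.log(S)).sum()

print(np.isclose(loss_loop, loss_vec))  # True

The same counting argument is what makes the gradients above correct: each repeated sample simply scales its term by the number of times it was drawn.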