diff --git a/seq2seq-translation/seq2seq-translation-batched.ipynb b/seq2seq-translation/seq2seq-translation-batched.ipynb
index 465ce6f..519f2ed 100644
--- a/seq2seq-translation/seq2seq-translation-batched.ipynb
+++ b/seq2seq-translation/seq2seq-translation-batched.ipynb
@@ -35,7 +35,7 @@
     "This is made possible by the simple but powerful idea of the [sequence to sequence network](http://arxiv.org/abs/1409.3215), in which two recurrent neural networks work together to transform one sequence to another. An encoder network condenses an input sequence into a single vector, and a decoder network unfolds that vector into a new sequence.\n",
     "\n",
     "To improve upon this model we'll use an [attention mechanism](https://arxiv.org/abs/1409.0473), which lets the decoder learn to focus over a specific range of the input sequence."
-   ]
+   ]
   },
   {
    "cell_type": "markdown",
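
The cell above describes the encoder/attention-decoder pair in prose. Below is a minimal sketch of that architecture, assuming PyTorch; the class names (EncoderRNN, AttnDecoderRNN) and hyperparameters are illustrative, not the notebook's actual implementation, and a simple dot-product attention score stands in for the learned alignment model of the linked Bahdanau et al. paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EncoderRNN(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.embedding = nn.Embedding(input_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size)

    def forward(self, input_seq):
        # input_seq: (seq_len,) token indices -> (seq_len, 1, hidden)
        embedded = self.embedding(input_seq).unsqueeze(1)
        outputs, hidden = self.gru(embedded)
        # outputs feed the attention; hidden is the condensed "single vector"
        return outputs, hidden

class AttnDecoderRNN(nn.Module):
    def __init__(self, hidden_size, output_size):
        super().__init__()
        self.embedding = nn.Embedding(output_size, hidden_size)
        self.gru = nn.GRU(hidden_size * 2, hidden_size)
        self.out = nn.Linear(hidden_size, output_size)

    def forward(self, input_token, hidden, encoder_outputs):
        # Dot-product attention: score each encoder output against the
        # decoder's current hidden state, then build a weighted context,
        # letting the decoder focus on a specific range of the input.
        embedded = self.embedding(input_token)                      # (1, hidden)
        enc = encoder_outputs.squeeze(1)                            # (seq_len, hidden)
        scores = torch.sum(hidden[-1] * enc, dim=1)                 # (seq_len,)
        weights = F.softmax(scores, dim=0)                          # (seq_len,)
        context = (weights.unsqueeze(1) * enc).sum(dim=0)           # (hidden,)
        rnn_input = torch.cat([embedded, context.unsqueeze(0)], dim=1)
        output, hidden = self.gru(rnn_input.unsqueeze(0), hidden)
        return self.out(output.squeeze(0)), hidden, weights

# Example: encode a 5-token "sentence" and run one decoding step.
enc_model = EncoderRNN(input_size=10, hidden_size=16)
dec_model = AttnDecoderRNN(hidden_size=16, output_size=12)
src = torch.tensor([1, 4, 2, 7, 3])
enc_outputs, enc_hidden = enc_model(src)
logits, dec_hidden, attn = dec_model(torch.tensor([0]), enc_hidden, enc_outputs)

At inference time the decoder loop would feed its own argmax prediction back in as the next input_token until an end-of-sequence token is produced; the returned attn weights are what the notebook later visualizes.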