Seq2Seq Learning Part C: Basic Encoder-Decoder Architecture & Design
Welcome to Part C of the Seq2Seq Learning Tutorial Series. In this tutorial, we will design a Basic Encoder-Decoder model to solve the sample Seq2Seq problem introduced in Part A.
We will use LSTM as the Recurrent Neural Network layer in Keras.
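A minimal sketch of what such a Basic Encoder-Decoder looks like in Keras is shown below. The sequence length, vocabulary size, and number of LSTM units are illustrative placeholders, not the exact values used in the tutorial.

```python
from tensorflow.keras import layers, Model

# Illustrative dimensions; the tutorial's actual values may differ.
n_timesteps_in = 4      # length of each input sequence
n_features = 10         # size of the one-hot vocabulary
latent_dim = 16         # LSTM hidden state size

# Encoder: read the input sequence and keep only the final hidden/cell states.
encoder_inputs = layers.Input(shape=(n_timesteps_in, n_features))
_, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(encoder_inputs)

# Decoder: repeat the encoder's summary vector and unroll it into the output sequence.
decoder = layers.RepeatVector(n_timesteps_in)(state_h)
decoder = layers.LSTM(latent_dim, return_sequences=True)(decoder,
                                                         initial_state=[state_h, state_c])
outputs = layers.TimeDistributed(layers.Dense(n_features, activation="softmax"))(decoder)

model = Model(encoder_inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```

Note that the decoder sees only the encoder's final state repeated at every time step; this bottleneck is exactly what the attention mechanisms in Part F will address.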
Seq2Seq Learning Part D: Encoder-Decoder with Teacher Forcing
Welcome to Part D of the Seq2Seq Learning Tutorial Series. In this tutorial, we will design an Encoder-Decoder model to be trained with “Teacher Forcing” to solve the sample Seq2Seq problem introduced in Part A.
We will use the LSTM layer in Keras as the Recurrent Neural Network.
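As a rough sketch of the idea, a teacher-forced Encoder-Decoder feeds the decoder the ground-truth target sequence (shifted one step, usually prefixed with a start token) instead of its own previous predictions during training. The layer sizes and the data arrays named in the comments below are assumptions for illustration, not the tutorial's exact setup.

```python
from tensorflow.keras import layers, Model

# Illustrative sizes; the tutorial's actual values may differ.
n_features = 10   # size of the one-hot vocabulary
latent_dim = 16

# Encoder
encoder_inputs = layers.Input(shape=(None, n_features), name="encoder_inputs")
_, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(encoder_inputs)

# Decoder: during training it receives the ground-truth target sequence
# shifted one step to the right (teacher forcing), not its own predictions.
decoder_inputs = layers.Input(shape=(None, n_features), name="decoder_inputs")
decoder_lstm = layers.LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=[state_h, state_c])
decoder_outputs = layers.Dense(n_features, activation="softmax")(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Hypothetical training call: decoder_input_data is the target sequence
# prefixed with a start token, decoder_target_data is the target itself.
# model.fit([encoder_input_data, decoder_input_data], decoder_target_data, epochs=30)
```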
Seq2Seq Learning Part E: Encoder-Decoder for Variable Input and Output Sizes: Padding & Masking
Welcome to Part E of the Seq2Seq Learning Tutorial Series. In this tutorial, we will design an Encoder-Decoder model to handle variable-size input and output sequences by using Padding and Masking methods. We will train the model by using the Teacher Forcing technique which we covered in Part D.
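To illustrate the two methods, the sketch below pads a few hypothetical variable-length integer sequences with pad_sequences and lets an Embedding layer with mask_zero=True produce the mask that downstream recurrent layers respect. The vocabulary and layer sizes are made up for the example and are not the tutorial's actual configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical variable-length integer sequences (0 is reserved for padding).
sequences = [[3, 7, 2], [5, 1], [9, 4, 6, 8]]

# Pad every sequence to the same length so they can be batched together.
padded = tf.keras.preprocessing.sequence.pad_sequences(sequences, padding="post", value=0)
print(padded)
# [[3 7 2 0]
#  [5 1 0 0]
#  [9 4 6 8]]

# mask_zero=True makes the Embedding layer emit a mask, so downstream
# layers such as LSTM skip the padded time steps.
embedding = layers.Embedding(input_dim=10, output_dim=8, mask_zero=True)
lstm = layers.LSTM(16)

print(embedding.compute_mask(padded))  # boolean mask marking the real (non-padded) positions
x = embedding(padded)
output = lstm(x)                       # the LSTM ignores the masked steps
print(output.shape)                    # (3, 16)
```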
Seq2Seq Learning Part F: Encoder-Decoder with Bahdanau & Luong Attention Mechanisms
Welcome to Part F of the Seq2Seq Learning Tutorial Series. In this tutorial, we will design an Encoder-Decoder model to handle longer input and output sequences by using two global attention mechanisms: Bahdanau & Luong. During the tutorial, we will be using the Encoder-Decoder model developed in Part C.
First, we will observe that the Basic Encoder-Decoder model fails on long input sequences. Then, we will discuss how to relate each output to all the inputs using a global attention mechanism. We will implement the Bahdanau attention mechanism as a custom layer in Keras by subclassing. Then, we will integrate the attention layer into the Encoder-Decoder model to process the longer sequences effectively. After observing the effect of the attention layer on performance, we will visualize the attention weights between inputs and outputs. Lastly, we will code the Luong attention mechanism.
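For reference, a Bahdanau-style (additive) attention layer written by subclassing tf.keras.layers.Layer typically looks like the sketch below. The number of units and the tensor shapes in the usage example are illustrative, and the tutorial's own implementation may differ in details.

```python
import tensorflow as tf

class BahdanauAttention(tf.keras.layers.Layer):
    """Additive (Bahdanau-style) attention, sketched as a custom Keras layer."""

    def __init__(self, units):
        super().__init__()
        self.W1 = tf.keras.layers.Dense(units)  # projects the decoder state (query)
        self.W2 = tf.keras.layers.Dense(units)  # projects the encoder outputs (values)
        self.V = tf.keras.layers.Dense(1)       # scores each encoder time step

    def call(self, query, values):
        # query: (batch, hidden) -> (batch, 1, hidden) so it broadcasts over time
        query_with_time_axis = tf.expand_dims(query, 1)

        # score: (batch, seq_len, 1)
        score = self.V(tf.nn.tanh(self.W1(query_with_time_axis) + self.W2(values)))

        # attention_weights: (batch, seq_len, 1), normalized over the time axis
        attention_weights = tf.nn.softmax(score, axis=1)

        # context_vector: weighted sum of the encoder outputs, (batch, hidden)
        context_vector = tf.reduce_sum(attention_weights * values, axis=1)
        return context_vector, attention_weights

# Hypothetical shapes: batch of 64, 16 encoder steps, 32 hidden units.
attention = BahdanauAttention(units=10)
context, weights = attention(tf.random.normal((64, 32)), tf.random.normal((64, 16, 32)))
print(context.shape, weights.shape)  # (64, 32) (64, 16, 1)
```

The returned attention_weights are what we can plot to visualize which inputs each output attends to.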
Character Level Text Generation with an Encoder-Decoder Model
This tutorial is the sixth part of the “Text Generation in Deep Learning with Tensorflow & Keras” series. In this series, we have been covering all the topics related to Text Generation with sample implementations in Python, TensorFlow & Keras.
After opening the file, we will apply the TensorFlow input pipeline that we developed in Part B to prepare the training dataset by preprocessing and splitting the text into input character sequences (X) and output characters (y). Then, we will design an Encoder-Decoder approach with Bahdanau Attention as the Language Model. We will train this model on the training set. Later on, we will apply several sampling methods that we implemented in Part D to generate text and observe their effect on the generated text. Thus, in the end, we will have a trained Encoder-Decoder-based Language Model for character-level text generation with three sampling methods.
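The sampling functions themselves are developed in Part D of the series; as a rough sketch of the idea, greedy and temperature sampling over a predicted character distribution can be written as below. The probability vector and temperature values are made-up examples.

```python
import numpy as np

def greedy_sample(probs):
    """Always pick the most likely next character."""
    return int(np.argmax(probs))

def sample_with_temperature(probs, temperature=1.0):
    """Re-scale a predicted character distribution and sample from it.

    probs: 1-D array of softmax probabilities over the character vocabulary.
    Lower temperatures sharpen the distribution (closer to greedy sampling);
    higher temperatures flatten it and make the generated text more diverse.
    """
    probs = np.asarray(probs, dtype=np.float64)
    logits = np.log(probs + 1e-10) / temperature
    scaled = np.exp(logits) / np.sum(np.exp(logits))
    return int(np.random.choice(len(scaled), p=scaled))

# Hypothetical distribution over a 5-character vocabulary:
probs = [0.05, 0.10, 0.60, 0.20, 0.05]
print(greedy_sample(probs))                 # always index 2
print(sample_with_temperature(probs, 0.5))  # usually index 2
print(sample_with_temperature(probs, 1.5))  # more varied choices
```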
If you would like to learn more about Deep Learning with practical coding examples, please subscribe to the Murat Karakaya Akademi YouTube Channel or follow my blog on muratkarakaya.net. Do not forget to turn on notifications so that you will be notified when new parts are uploaded.
This is the index page of the “All About LSTM in Tensorflow & Keras” tutorial series. We will cover all the topics related to LSTM with sample implementations in Python, TensorFlow & Keras. You can access the code, videos, and posts from the links below.