
Thursday, November 10, 2022

Seq2Seq Learning Part C: Basic Encoder-Decoder Architecture & Design

Welcome to Part C of the Seq2Seq Learning Tutorial Series. In this tutorial, we will design a Basic Encoder-Decoder model to solve the sample Seq2Seq problem introduced in Part A.

We will use LSTM as the Recurrent Neural Network layer in Keras.
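To give a flavor of what we will build, below is a minimal sketch of a basic LSTM Encoder-Decoder in Keras. The sequence length, vocabulary size, and hidden dimension are illustrative assumptions, not necessarily the values used in the tutorial:

from tensorflow.keras import layers, Model

n_timesteps = 4    # input/output sequence length (assumed for illustration)
n_features = 10    # one-hot vocabulary size (assumed)
latent_dim = 16    # LSTM hidden size (assumed)

# Encoder: read the input sequence and keep only its final states.
encoder_inputs = layers.Input(shape=(n_timesteps, n_features))
_, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(encoder_inputs)

# Decoder: repeat the encoder summary for each output step and unroll an LSTM.
x = layers.RepeatVector(n_timesteps)(state_h)
x = layers.LSTM(latent_dim, return_sequences=True)(x, initial_state=[state_h, state_c])
outputs = layers.TimeDistributed(layers.Dense(n_features, activation="softmax"))(x)

model = Model(encoder_inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")

The key design choice here is that the entire input sequence is compressed into the encoder's final states, which then condition every decoding step.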

You can access all my SEQ2SEQ Learning videos on the Murat Karakaya Akademi YouTube channel in ENGLISH or in TURKISH.

You can access all the tutorials in this series from my blog at www.muratkarakaya.net.

If you would like to follow up on Deep Learning tutorials, please subscribe to my YouTube Channel or follow my blog on muratkarakaya.net. Thank you!


Photo by Med Badr Chemmaoui on Unsplash

Seq2Seq Learning Part D: Encoder-Decoder with Teacher Forcing

Welcome to Part D of the Seq2Seq Learning Tutorial Series. In this tutorial, we will design an Encoder-Decoder model to be trained with “Teacher Forcing” to solve the sample Seq2Seq problem introduced in Part A.

We will use the LSTM layer in Keras as the Recurrent Neural Network.
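As a preview, here is a minimal sketch of the teacher-forcing setup in Keras: during training, the decoder is fed the ground-truth previous token (the target sequence shifted by one step) rather than its own prediction. The dimensions and tensor names are illustrative assumptions:

from tensorflow.keras import layers, Model

n_features = 10   # one-hot vocabulary size (assumed)
latent_dim = 16   # LSTM hidden size (assumed)

# Encoder: consume the source sequence, keep only its final states.
encoder_inputs = layers.Input(shape=(None, n_features))
_, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(encoder_inputs)

# Decoder: fed the target sequence shifted by one step (teacher forcing),
# starting from the encoder's final states.
decoder_inputs = layers.Input(shape=(None, n_features))
decoder_outputs, _, _ = layers.LSTM(
    latent_dim, return_sequences=True, return_state=True
)(decoder_inputs, initial_state=[state_h, state_c])
outputs = layers.Dense(n_features, activation="softmax")(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
# model.fit([encoder_X, shifted_y], y, ...)  # shifted_y: y delayed by one step

At inference time the shifted ground truth is not available, so decoding is run step by step, feeding each prediction back in; the tutorial covers that loop in detail.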

You can access all my SEQ2SEQ Learning videos on the Murat Karakaya Akademi YouTube channel in ENGLISH or in TURKISH.

You can access all the tutorials in this series from my blog at www.muratkarakaya.net. If you would like to follow up on Deep Learning tutorials, please subscribe to my YouTube Channel or follow my blog on muratkarakaya.net. You can also access this Colab Notebook using the link.

If you are ready, let’s get started!



Photo by Vedrana Filipović on Unsplash

Seq2Seq Learning Part E: Encoder-Decoder for Variable Input and Output Sizes: Padding & Masking

Welcome to Part E of the Seq2Seq Learning Tutorial Series. In this tutorial, we will design an Encoder-Decoder model to handle variable-size input and output sequences by using Padding and Masking methods. We will train the model by using the Teacher Forcing technique, which we covered in Part D.
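As a quick preview, here is a minimal sketch of padding and masking in Keras; the toy sequences and layer sizes are assumptions for illustration. pad_sequences brings variable-length sequences to a common length, and mask_zero=True tells downstream layers to ignore the padded steps:

from tensorflow.keras import Sequential, layers
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Three sequences of different lengths; 0 is reserved as the padding value.
sequences = [[1, 2, 3], [4, 5], [6]]
padded = pad_sequences(sequences, padding="post", value=0)
print(padded)   # padded positions are filled with 0

# mask_zero=True makes the Embedding layer emit a mask so that the LSTM
# skips the padded time steps instead of treating them as data.
model = Sequential([
    layers.Embedding(input_dim=10, output_dim=8, mask_zero=True),
    layers.LSTM(16),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")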

You can access all my SEQ2SEQ Learning videos on the Murat Karakaya Akademi YouTube channel in ENGLISH or in TURKISH. You can access all the tutorials in this series from my blog at www.muratkarakaya.net. You can access this Colab Notebook using the link.

If you would like to follow up on Deep Learning tutorials, please subscribe to my YouTube Channel or follow my blog on muratkarakaya.net.  

If you are ready, let’s get started!



Photo by Jeffrey Brandjes on Unsplash


Seq2Seq Learning Part F: Encoder-Decoder with Bahdanau & Luong Attention Mechanism

Welcome to Part F of the Seq2Seq Learning Tutorial Series. In this tutorial, we will design an Encoder-Decoder model to handle longer input and output sequences by using two global attention mechanisms: Bahdanau & Luong. During the tutorial, we will be using the Encoder-Decoder model developed in Part C.


First, we will observe that the Basic Encoder-Decoder model fails to handle long input sequences. Then, we will discuss how to relate each output with all the inputs using the global attention mechanism. We will implement the Bahdanau attention mechanism as a custom layer in Keras by using subclassing. Then, we will integrate the attention layer into the Encoder-Decoder model to efficiently process the longer data. After observing the effect of the attention layer on performance, we will depict the attention between inputs and outputs. Lastly, we will code the Luong attention.
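For a taste of what that custom layer looks like, here is a minimal sketch of Bahdanau (additive) attention via Keras subclassing; the layer and variable names are illustrative, and the full version is developed step by step in the tutorial:

import tensorflow as tf
from tensorflow.keras import layers

class BahdanauAttention(layers.Layer):
    def __init__(self, units):
        super().__init__()
        self.W1 = layers.Dense(units)   # projects the decoder state (query)
        self.W2 = layers.Dense(units)   # projects the encoder outputs (values)
        self.V = layers.Dense(1)        # scores each encoder time step

    def call(self, query, values):
        # query: (batch, hidden); values: (batch, timesteps, hidden)
        query_with_time_axis = tf.expand_dims(query, 1)
        score = self.V(tf.nn.tanh(self.W1(query_with_time_axis) + self.W2(values)))
        attention_weights = tf.nn.softmax(score, axis=1)   # over the time steps
        context = tf.reduce_sum(attention_weights * values, axis=1)
        return context, attention_weights

# Usage sketch: combine the context vector with the decoder input at each step.
# context, weights = BahdanauAttention(10)(decoder_state, encoder_outputs)

Because the context vector is recomputed at every decoding step, each output can attend to all encoder positions, which is what lets the model cope with longer sequences.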

You can access all my SEQ2SEQ Learning videos on the Murat Karakaya Akademi YouTube channel in ENGLISH or in TURKISH. You can access all the tutorials in this series from my blog at www.muratkarakaya.net. You can access the whole code on Colab.

If you would like to follow up on Deep Learning tutorials, please subscribe to my YouTube Channel or follow my blog on muratkarakaya.net. Thank you!

If you are ready, let’s get started!


Photo by Bradyn Trollip on Unsplash

Tuesday, November 8, 2022

Character Level Text Generation with an Encoder-Decoder Model


This tutorial is the sixth part of the “Text Generation in Deep Learning with Tensorflow & Keras” series. In this series, we have been covering all the topics related to Text Generation with sample implementations in Python, Tensorflow & Keras.

After opening the file, we will apply the TensorFlow input pipeline that we developed in Part B to prepare the training dataset by preprocessing and splitting the text into input character sequences (X) and output characters (y). Then, we will design an Encoder-Decoder approach with Bahdanau Attention as the Language Model. We will train this model using the training set. Later on, we will apply several sampling methods that we implemented in Part D to generate text and observe the effect of these sampling methods on the generated text. Thus, in the end, we will have a trained Encoder-Decoder-based Language Model for character-level text generation with three sampling methods.
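As a small illustration of the X/y split described above, here is a minimal tf.data sketch; the corpus and window size are placeholder assumptions, not the dataset used in the tutorial:

import tensorflow as tf

text = "hello world"   # placeholder corpus; the tutorial opens a real text file
vocab = sorted(set(text))
char2idx = {c: i for i, c in enumerate(vocab)}
ids = tf.constant([char2idx[c] for c in text])

seq_length = 5   # assumed context window size
dataset = tf.data.Dataset.from_tensor_slices(ids)
dataset = dataset.batch(seq_length + 1, drop_remainder=True)

def split_input_target(chunk):
    # X: the first seq_length characters; y: the character that follows them
    return chunk[:-1], chunk[-1]

dataset = dataset.map(split_input_target).batch(2)
for X, y in dataset.take(1):
    print(X.numpy(), y.numpy())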

You can access all parts of the Deep Learning with Tensorflow & Keras series at my blog muratkarakaya.net. You can watch all these parts on the Murat Karakaya Akademi YouTube channel in ENGLISH or TURKISH. You can access the complete Python Keras code here.

If you would like to learn more about Deep Learning with practical coding examples, please subscribe to Murat Karakaya Akademi YouTube Channel or follow my blog on muratkarakaya.net.  Do not forget to turn on notifications so that you will be notified when new parts are uploaded.

If you are ready, let’s get started!

Last updated on 25th March 2022.



Photo by Emile Perron on Unsplash

All About LSTM Tutorial Series

This is the index page of the “All About LSTM in Tensorflow & Keras” tutorial series. We will cover all the topics related to LSTM with sample implementations in Python, Tensorflow & Keras. You can access the codes, videos, and posts from the links below.

You may also like to check out the SEQ2SEQ (SEQUENCE TO SEQUENCE) Learning tutorial series in which the LSTM layer has been used heavily.
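As a tiny appetizer, the sketch below contrasts the LSTM layer's return_sequences and return_state arguments in Keras, a distinction that comes up throughout both series; the shapes are illustrative assumptions:

import tensorflow as tf

x = tf.random.normal((2, 5, 8))   # (batch, time steps, features), assumed shape

# Default: only the hidden output of the last time step, shape (2, 4).
last_output = tf.keras.layers.LSTM(4)(x)

# With return_sequences and return_state: the full output sequence
# plus the final hidden and cell states.
seq, h, c = tf.keras.layers.LSTM(4, return_sequences=True, return_state=True)(x)
print(last_output.shape, seq.shape, h.shape, c.shape)   # (2,4) (2,5,4) (2,4) (2,4)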

If you would like to learn more about Deep Learning with practical coding examples, please subscribe to the Murat Karakaya Akademi YouTube Channel or follow my blog on muratkarakaya.net.  Do not forget to turn on notifications so that you will be notified when new parts are uploaded.

Last updated: 11/05/2022



Photo by Laura Fuhrman on Unsplash