
Thursday, November 10, 2022

Seq2Seq Learning PART F: Encoder-Decoder with Bahdanau & Luong Attention Mechanism

Welcome to Part F of the Seq2Seq Learning Tutorial Series. In this tutorial, we will design an Encoder-Decoder model that can handle longer input and output sequences by using two global attention mechanisms: Bahdanau & Luong. Throughout the tutorial, we will build on the Encoder-Decoder model developed in Part C.


First, we will observe that the basic Encoder-Decoder model fails to handle long input sequences. Then, we will discuss how to relate each output to all the inputs using a global attention mechanism. We will implement the Bahdanau attention mechanism as a custom layer in Keras using subclassing. Next, we will integrate the attention layer into the Encoder-Decoder model so it can process longer sequences effectively. After observing the effect of the attention layer on performance, we will visualize the attention between inputs and outputs. Lastly, we will code the Luong attention.
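As a preview of the subclassing approach, here is a minimal sketch of an additive (Bahdanau-style) attention layer; it is not the tutorial's exact code, and the layer and weight names (BahdanauAttention, W1, W2, V) are illustrative assumptions.

```python
import tensorflow as tf

class BahdanauAttention(tf.keras.layers.Layer):
    """Illustrative additive attention layer built by subclassing Keras Layer."""
    def __init__(self, units):
        super().__init__()
        self.W1 = tf.keras.layers.Dense(units)  # projects the decoder state (query)
        self.W2 = tf.keras.layers.Dense(units)  # projects the encoder outputs (keys)
        self.V = tf.keras.layers.Dense(1)        # collapses each score to a scalar

    def call(self, decoder_state, encoder_outputs):
        # decoder_state:   (batch, hidden)  -> expand to (batch, 1, hidden)
        # encoder_outputs: (batch, time, hidden)
        query = tf.expand_dims(decoder_state, 1)
        # Additive (Bahdanau) score: V^T tanh(W1 * query + W2 * keys)
        score = self.V(tf.nn.tanh(self.W1(query) + self.W2(encoder_outputs)))
        # Attention weights over the input time steps
        attention_weights = tf.nn.softmax(score, axis=1)
        # Context vector: weighted sum of the encoder outputs
        context = tf.reduce_sum(attention_weights * encoder_outputs, axis=1)
        return context, attention_weights
```

Luong's global attention follows the same overall pattern but replaces the additive score with a multiplicative one, for example a dot product between the (optionally projected) decoder state and the encoder outputs.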

You can access all my SEQ2SEQ Learning videos on the Murat Karakaya Akademi YouTube channel in ENGLISH or in TURKISH. You can access all the tutorials in this series on my blog at www.muratkarakaya.net. You can access the whole code on Colab.

If you would like to follow more Deep Learning tutorials, please subscribe to my YouTube channel or follow my blog on muratkarakaya.net. Thank you!

If you are ready, let’s get started!


Photo by Bradyn Trollip on Unsplash