Wednesday, November 9, 2022

Sequence To Sequence Learning With Tensorflow & Keras Tutorial Series

This is the Index page of the “SEQ2SEQ Learning in Deep Learning with TensorFlow & Keras” tutorial series.

You can access all the content of the series in English and Turkish as YouTube videos, Medium posts, and Colab / GitHub Jupyter notebooks via the links below.

Last Updated: 27 May 2021



Encoder-Decoder Model with Global Attention

You can access all my SEQ2SEQ Learning videos on the Murat Karakaya Akademi YouTube channel in ENGLISH or in TURKISH.

You can access all the tutorials on the kmkarakaya Medium blog.

All the code is shared on the kmkarakaya GitHub Pages.

Here is the list of the tutorials:

Part A: AN INTRODUCTION TO SEQ2SEQ LEARNING AND A SAMPLE SOLUTION WITH MLP NETWORK

Part B: SEQ2SEQ LEARNING WITH RECURRENT NEURAL NETWORKS (LSTM)

Part C: SEQ2SEQ LEARNING WITH A BASIC ENCODER DECODER MODEL

Part D: SEQ2SEQ LEARNING WITH AN ENCODER DECODER MODEL WITH TEACHER FORCING

Part E: SEQ2SEQ LEARNING WITH AN ENCODER DECODER MODEL WITH TEACHER FORCING FOR VARIABLE INPUT AND OUTPUT SIZE: MASKING & PADDING

Part F: SEQ2SEQ LEARNING WITH AN ENCODER DECODER MODEL + BAHDANAU & LUONG ATTENTION

Part G: To be continued! :)

WHY DO WE HAVE SO MANY PARTS?

  • Our aim is to code an Encoder-Decoder Model with Attention.
  • However, I would like to develop the solution step by step, showing the shortcomings of other possible approaches along the way.
  • Therefore, in the first two parts, we will observe the weaknesses of the initial models.
  • This will also help us understand why the Encoder-Decoder paradigm is so successful.

So, please patiently follow the parts as we develop a better solution :)
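For orientation, below is a minimal Keras sketch of the kind of basic encoder-decoder model the series builds up from (roughly the Part C starting point). The shapes and hyperparameters here are illustrative assumptions, not the exact code from the tutorials.

from tensorflow.keras import layers, Model

# Illustrative sizes (assumptions, not the series' exact settings):
n_features = 10   # one-hot vocabulary size
n_steps_in = 4    # input sequence length
n_steps_out = 2   # output sequence length
latent_dim = 64   # LSTM state size

# Encoder: read the input sequence and keep only its final states.
encoder_inputs = layers.Input(shape=(n_steps_in, n_features))
_, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(encoder_inputs)

# Decoder: repeat the encoder's summary for each output step and unroll an LSTM over it,
# starting from the encoder's final states.
decoder = layers.RepeatVector(n_steps_out)(state_h)
decoder = layers.LSTM(latent_dim, return_sequences=True)(
    decoder, initial_state=[state_h, state_c])
outputs = layers.TimeDistributed(
    layers.Dense(n_features, activation="softmax"))(decoder)

model = Model(encoder_inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()

The later parts improve on exactly this design: teacher forcing changes how the decoder is fed during training, masking and padding handle variable-length sequences, and attention replaces the single fixed-size encoder summary.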

You can follow Murat Karakaya Akademi on these social networks:

YouTube

Facebook

Instagram

LinkedIn

Github

Kaggle 

Blogger