Showing posts with label Keras. Show all posts

Thursday, November 10, 2022

LSTM: Understanding Output Types

 


INTRODUCTION

In this tutorial, we will focus on the outputs of the LSTM layer in Keras. LSTM is a key layer for building powerful models, especially for solving Seq2Seq learning problems. To use LSTM effectively in models, we need to understand how it generates different outputs with respect to its parameters. Therefore, in this tutorial, we will learn and use 3 important parameters: units, return_sequences, and return_state.

At the end of the tutorial, you will be able to manage the LSTM layer to satisfy the model requirements correctly.
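As a preview, the shapes that `keras.layers.LSTM` returns for each parameter combination can be summarized with a small plain-Python helper (an illustrative sketch of my own, not part of the Keras API):

```python
# Shapes returned by keras.layers.LSTM(units, return_sequences, return_state):
#
#   LSTM(units)                        -> (batch, units)
#   LSTM(units, return_sequences=True) -> (batch, timesteps, units)
#   LSTM(units, return_state=True)     -> [output, hidden_state, cell_state]
def lstm_output_shapes(batch, timesteps, units,
                       return_sequences=False, return_state=False):
    """Return the list of output shapes an LSTM layer produces."""
    # The main output: all timesteps, or only the last one.
    output = (batch, timesteps, units) if return_sequences else (batch, units)
    if return_state:
        # The final hidden state and cell state are always (batch, units).
        return [output, (batch, units), (batch, units)]
    return [output]
```

For example, `lstm_output_shapes(32, 10, 16, return_sequences=True, return_state=True)` returns `[(32, 10, 16), (32, 16), (32, 16)]`.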

If you would like to follow up on Deep Learning tutorials, please subscribe to my YouTube Channel or follow my blog on muratkarakaya.net.  Thank you!


Photo by Victor Barrios on Unsplash

LSTM: Understanding the Number of Parameters

 


In this tutorial, we will focus on the internal structure of the Keras LSTM layer in order to understand how many learnable parameters an LSTM layer has.

Why do we need to care about calculating the number of parameters in the LSTM layer when we can easily get this number from the model summary report?

Well, there are several reasons:

  • First of all, to calculate the number of learnable parameters correctly, we need to understand how LSTM is structured and how LSTM operates in depth. Thus, we will delve into LSTM gates and gate functions. We will gain precious insight into how LSTM handles time-dependent or sequence input data.
  • Secondly, in ANN models, the number of parameters is a really important metric for understanding the model's capacity and complexity. We need to keep an eye on the number of parameters of each layer in the model to handle overfitting or underfitting situations. One way to prevent these situations is to adjust the number of parameters of each layer. We need to know how the number of parameters actually affects the performance of each layer.
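To preview the calculation we will work through, the number of learnable parameters of a Keras LSTM layer can be computed in plain Python (an illustrative helper of my own, not part of Keras):

```python
def lstm_param_count(units, input_dim):
    """Number of learnable parameters in a Keras LSTM layer.

    Each of the 4 gates (input, forget, cell, output) has:
      - an input weight matrix of shape (input_dim, units)
      - a recurrent weight matrix of shape (units, units)
      - a bias vector of shape (units,)
    """
    return 4 * (units * (input_dim + units) + units)
```

For example, `lstm_param_count(64, 32)` returns 24832, which matches the `model.summary()` report for an `LSTM(64)` layer applied to inputs with 32 features.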

If you want to enhance your understanding of the LSTM layer and learn how many learnable parameters it has, please continue with this tutorial.

By the way, I would like to mention that on my YouTube channel I have a dedicated playlist in English (All About LSTM) and in Turkish (LSTM Hakkında Herşey). You can check these playlists to learn more about LSTM.

Lastly, if you want to be notified of upcoming tutorials about LSTM and Deep Learning, please subscribe to my YouTube channel and activate notifications.

Thank you!

Now, let’s get started!




Photo by Sigmund on Unsplash

How to solve Binary Classification Problems in Deep Learning with Tensorflow & Keras?

 


In this tutorial, we will focus on how to select Accuracy Metrics, Activation & Loss functions in Binary Classification Problems. First, we will review the types of Classification Problems, Activation & Loss functions, label encodings, and accuracy metrics. Furthermore, we will also discuss how the target encoding can affect the selection of Activation & Loss functions. Moreover, we will talk about how to select the accuracy metric correctly. Then, for each type of classification problem, we will apply several Activation & Loss functions and observe their effects on performance.

We will experiment with all the concepts by designing and evaluating a deep learning model by using Transfer Learning on horses and humans datasets. In the end, we will summarize the experiment results.
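As a taste of what the experiments will show, the standard pairing for binary classification, a sigmoid activation in the last layer with the binary cross-entropy loss, can be sketched in plain Python (an illustrative sketch; in Keras this corresponds to `Dense(1, activation="sigmoid")` with `loss="binary_crossentropy"`):

```python
import math

def sigmoid(z):
    """Map a raw score (logit) to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    """Loss for one example with an integer label y_true in {0, 1}."""
    y_pred = min(max(y_pred, eps), 1.0 - eps)  # clip to avoid log(0)
    return -(y_true * math.log(y_pred) + (1 - y_true) * math.log(1 - y_pred))
```

A completely uncertain prediction (`sigmoid(0) == 0.5`) costs `log(2) ≈ 0.693` regardless of the true label; the loss shrinks as the predicted probability moves toward the true label.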

You can access the code at Colab and all the posts of the classification tutorial series at muratkarakaya.net. You can watch all these parts on YouTube in ENGLISH or TURKISH as well.

If you would like to follow up on Deep Learning tutorials, please subscribe to my YouTube Channel or follow my blog on muratkarakaya.net.  Do not forget to turn on notifications so that you will be notified when new parts are uploaded.

If you are ready, let’s get started!



Photo by Mitya Ivanov on Unsplash


Wednesday, November 9, 2022

Sequence To Sequence Learning With Tensorflow & Keras Tutorial Series


This is the Index page of the “SEQ2SEQ Learning in Deep Learning with TensorFlow & Keras” tutorial series.

You can access all the content of the series in English and Turkish as YouTube videos, Medium posts, and Colab / GitHub Jupyter Notebooks using the links below.

Last Updated: 27 May 2021



Encoder-Decoder Model with Global Attention

Tuesday, November 8, 2022

How to solve Multi-Label Classification Problems in Deep Learning with Tensorflow & Keras?


This is the fourth part of the “How to solve Classification Problems in Keras?” series. Before starting this tutorial, I strongly suggest you go over Part A: Classification with Keras to learn all related concepts. 

In this tutorial, we will focus on how to solve Multi-Label Classification Problems in Deep Learning with Tensorflow & Keras. First, we will download a sample multi-label dataset. In multi-label classification problems, we mostly encode the true labels with multi-hot vectors. We will experiment with combinations of various last-layer activation functions and loss functions of a Keras CNN model, and we will observe the effects on the model's performance.

During the experiments, we will discuss the relationship between Activation & Loss functions, label encodings, and accuracy metrics in detail. We will understand why we sometimes get surprising results when using parameter settings other than the generally recommended ones. As a result, we will gain insight into activation and loss functions and their interactions. In the end, we will summarize the experiment results in a cheat table.
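The multi-hot encoding mentioned above, and the per-class thresholding used at prediction time, can be sketched in plain Python (illustrative helpers; the usual Keras multi-label setup is `Dense(num_classes, activation="sigmoid")` with `loss="binary_crossentropy"`):

```python
def multi_hot(labels, num_classes):
    """Encode a set of true labels, e.g. {0, 2} out of 4 classes -> [1, 0, 1, 0]."""
    vector = [0] * num_classes
    for label in labels:
        vector[label] = 1
    return vector

def predict_labels(probabilities, threshold=0.5):
    """With one sigmoid per class, every class above the threshold is 'on'.

    Unlike softmax, the sigmoid outputs are independent, so any number
    of classes (zero, one, or several) can be predicted at once.
    """
    return [1 if p >= threshold else 0 for p in probabilities]
```

For example, sigmoid outputs `[0.9, 0.2, 0.7, 0.4]` yield the prediction `[1, 0, 1, 0]`, which matches the multi-hot target `multi_hot([0, 2], 4)`.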

You can access the code at Colab and all the posts of the classification tutorial series at muratkarakaya.net. You can watch all these parts on YouTube in ENGLISH or TURKISH as well.

If you would like to follow up on Deep Learning tutorials, please subscribe to my YouTube Channel or follow my blog on muratkarakaya.net. Do not forget to turn on notifications so that you will be notified when new parts are uploaded.

If you are ready, let’s get started!



Photo by Luca Martini on Unsplash


Character Level Text Generation with an Encoder-Decoder Model



This tutorial is the sixth part of the “Text Generation in Deep Learning with Tensorflow & Keras” series. In this series, we have been covering all the topics related to Text Generation with sample implementations in Python, Tensorflow & Keras.

After opening the file, we will apply the TensorFlow input pipeline that we have developed in Part B to prepare the training dataset by preprocessing and splitting the text into input character sequences (X) and output characters (y). Then, we will design an Encoder-Decoder approach with Bahdanau Attention as the Language Model. We will train this model using the train set. Later on, we will apply several sampling methods that we have implemented in Part D to generate text and observe the effect of these sampling methods on the generated text. Thus, in the end, we will have a trained Encoder-Decoder-based Language Model for character-level text generation with three sampling methods.
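The sampling methods themselves are covered in Part D; as an illustrative sketch of my own (not the tutorial's exact implementation), one common method, temperature sampling, can be written in plain Python:

```python
import math
import random

def temperature_sample(logits, temperature=1.0, rng=random):
    """Sample a character index from raw model scores (logits).

    Low temperature -> nearly greedy (almost always the argmax);
    high temperature -> closer to uniform (more surprising text).
    """
    scaled = [logit / temperature for logit in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]  # softmax over the scaled logits
    # Inverse-CDF sampling: walk the cumulative distribution.
    r = rng.random()
    cumulative = 0.0
    for index, p in enumerate(probs):
        cumulative += p
        if r <= cumulative:
            return index
    return len(probs) - 1
```

With `temperature=0.01` the distribution collapses onto the largest logit, so the call behaves like greedy decoding; raising the temperature spreads probability mass across the other characters.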

You can access all parts of the Deep Learning with Tensorflow & Keras series at my blog muratkarakaya.net. You can watch all these parts on the Murat Karakaya Akademi YouTube channel in ENGLISH or TURKISH. You can access the complete Python Keras code here.

If you would like to learn more about Deep Learning with practical coding examples, please subscribe to Murat Karakaya Akademi YouTube Channel or follow my blog on muratkarakaya.net.  Do not forget to turn on notifications so that you will be notified when new parts are uploaded.

If you are ready, let’s get started!

Last updated on 25th March 2022.



Photo by Emile Perron on Unsplash

Controllable Text Generation in Deep Learning with Transformers (GPT3) using Tensorflow & Keras Tutorial Series

 


This is the index page of the “Controllable Text Generation in Deep Learning with Transformers (GPT3) using Tensorflow & Keras” tutorial series.

We will cover all the topics related to Controllable Text Generation with sample implementations in Python, Tensorflow & Keras.

You can access the codes, videos, and posts from the links below.

You may like to start with learning the Text Generation methods in Deep Learning with Tensorflow (TF) & Keras from this tutorial series.

If you would like to learn more about Deep Learning with practical coding examples, please subscribe to the Murat Karakaya Akademi YouTube Channel or follow my blog on muratkarakaya.net. Do not forget to turn on notifications so that you will be notified when new parts are uploaded.

Last updated: 14/05/2022



Photo by Ibrahim Boran on Unsplash

tf.data: Tensorflow Data Pipelines Tutorial Series

 


INDEX PAGE: This is the index page of the “tf.data: Tensorflow Data Pipelines” series.

We will cover all the topics related to the tf.data Tensorflow Data Pipeline with sample implementations in Python, Tensorflow & Keras.
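A typical pipeline of the kind covered in the series can be sketched as follows (an illustrative example with toy data, assuming TensorFlow 2.x):

```python
import tensorflow as tf

# Toy data: 10 samples with 3 features each, plus integer labels.
features = tf.reshape(tf.range(30, dtype=tf.float32), (10, 3))
labels = tf.range(10)

# A typical tf.data input pipeline: slice -> map -> shuffle -> batch -> prefetch.
dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .map(lambda x, y: (x / 10.0, y))    # per-sample preprocessing
    .shuffle(buffer_size=10, seed=42)   # randomize the sample order
    .batch(4)                           # group samples into batches
    .prefetch(tf.data.AUTOTUNE)         # overlap preprocessing with training
)
```

The resulting `dataset` yields three batches (of 4, 4, and 2 samples) and can be passed directly to `model.fit(dataset)`.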

You can access the codes, videos, and posts from the links below.

If you would like to learn more about Deep Learning with practical coding examples, please subscribe to my YouTube Channel or follow my blog on Blogger. Do not forget to turn on notifications so that you will be notified when new parts are uploaded.



Photo by Florian Wächter on Unsplash

Saturday, November 5, 2022

Fundamentals of Classification by Deep Learning with Tensorflow & Keras

Fundamentals of Classification by Deep Learning with Tensorflow & Keras

In this post, we will focus on fundamental concepts for solving Classification Problems by Deep Learning with Tensorflow & Keras. When we design a model in Deep Neural Networks, we need to know how to select the proper label encoding, Activation, and Loss functions, along with accuracy metrics, according to the classification task at hand.

Thus, in this tutorial, we will first investigate the types of Classification Problems. Then, we will see the most frequently used label encodings in Keras. We will learn how to select Activation & Loss functions according to the given classification type and label encoding. Moreover, we will examine the details of accuracy metrics in TensorFlow / Keras.

At the end of the tutorial, I hope that we will have a good understanding of these concepts and their implementation in Keras.
Contents:
  • types of Classification Problems,
  • possible label encodings,
  • Activation & Loss functions,
  • accuracy metrics

Furthermore, we will also discuss how the target encoding can affect the selection of Activation & Loss functions.
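As a quick illustration of how the label encoding constrains the loss choice, here is a plain-Python sketch (the pairings in the comments reflect the commonly recommended Keras defaults):

```python
def one_hot(label, num_classes):
    """One-hot encode an integer class label, e.g. 2 out of 4 -> [0, 0, 1, 0]."""
    vector = [0] * num_classes
    vector[label] = 1
    return vector

# Common Keras pairings of label encoding, loss, and last-layer activation:
#   integer labels (0, 1, 2, ...)  -> loss="sparse_categorical_crossentropy"
#   one-hot labels ([0, 0, 1, 0])  -> loss="categorical_crossentropy"
#   single 0/1 label               -> loss="binary_crossentropy"
# with softmax for (single-label) multi-class problems and
# sigmoid for binary or multi-label problems.
```

Note that the integer and one-hot encodings carry the same information; the "sparse" loss simply skips the explicit one-hot conversion.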

You can access the code at Colab and all the posts of the classification tutorial series at muratkarakaya.net. You can watch all these parts on YouTube in ENGLISH or TURKISH as well.
If you would like to follow up on Deep Learning tutorials, please subscribe to my YouTube Channel or follow my blog on muratkarakaya.net. Do not forget to turn on notifications so that you will be notified when new parts are uploaded.

If you are ready, let’s get started!



Friday, November 4, 2022

How to save and load a TensorFlow / Keras Model with Custom Objects?

 


Author: Murat Karakaya
Date created: 30 May 2021
Last modified: 30 July 2021
Description: This tutorial will design and train a Keras model (a miniature GPT3) with some custom objects (custom layers). We aim to learn how to save the trained model as a whole “TensorFlow SavedModel” and load it back.
Accessible on:
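The overall approach can be sketched as follows (an illustrative example assuming TensorFlow 2.x; `ScaleLayer` is a hypothetical custom layer standing in for the tutorial's custom objects, not the miniature GPT3 model itself):

```python
import tensorflow as tf

# Registering the custom layer lets load_model reconstruct it
# without passing custom_objects explicitly.
@tf.keras.utils.register_keras_serializable(package="demo")
class ScaleLayer(tf.keras.layers.Layer):
    def __init__(self, factor=2.0, **kwargs):
        super().__init__(**kwargs)
        self.factor = factor

    def call(self, inputs):
        return inputs * self.factor

    def get_config(self):
        # get_config must return everything __init__ needs,
        # so the layer can be re-created when the model is loaded.
        config = super().get_config()
        config.update({"factor": self.factor})
        return config

model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), ScaleLayer(3.0)])
model.save("custom_model.keras")  # save the whole model, weights included
loaded = tf.keras.models.load_model("custom_model.keras")
```

With TF versions up to 2.15 you can also pass a directory path to `model.save` to get a TensorFlow SavedModel directory instead of the single-file format shown here.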



Photo by Markus Winkler on Unsplash