Don’t want to read a PDF file but would rather listen to it? Then read this article and find out how…

This short article explores the Python text-to-speech library pyttsx3: it reads a PDF file, converts the text to audio, and adjusts the listening speed and volume.

Photo by Findaway Voices on Unsplash

My daughter wanted a program that she could use to listen to a PDF file, an HTML file, or a Word document. She wanted to adjust the speed and the volume of the audio, and this was my quick solution for her problem using a Python text-to-speech library.

Install pyttsx3 library

pip install pyttsx3

Listen to a text

Import the Python text-to-speech library pyttsx3, initialize the engine, then pass the text to say(), and finally flush the say() queue with runAndWait() to play the text as audio. …


Find out what TFRecord is and how to create TFRecord files to train a deep learning model.

In this post, you will learn the basics of TFRecord, the benefits of using TFRecord, and how to create a TFRecord file for an image dataset to train a deep learning model.

Photo by Anthony Martino on Unsplash

What is TFRecord?

The TFRecord format stores structured data in a simple protocol buffer message format as a sequence of binary records for efficient serialization

TFRecord uses tf.train.Example to create the protocol buffer (protobuf) message format that is represented by {“string”: value}, where the value is generated using tf.train.Feature.

Representation then is {“string”: tf.train.Feature}

tf.train.Feature accepts three different message types:

  1. tf.train.BytesList — used for images and strings
  2. tf.train.FloatList — used for float and double data
  3. tf.train.Int64List — used for integer values, booleans and…
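Putting the three message types together, here is a sketch of writing one image record to a TFRecord file. The fake image bytes, label, width, and file name are placeholders; the snippet assumes TensorFlow 2.x.

```python
import tensorflow as tf

def image_example(image_bytes, label, width):
    """Wrap one image and its metadata in a tf.train.Example protobuf."""
    feature = {
        # BytesList for the raw image data
        "image_raw": tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[image_bytes])),
        # Int64List for the integer class label
        "label": tf.train.Feature(
            int64_list=tf.train.Int64List(value=[label])),
        # FloatList for a float attribute
        "width": tf.train.Feature(
            float_list=tf.train.FloatList(value=[width])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

example = image_example(b"fake-image-bytes", 3, 224.0)
serialized = example.SerializeToString()

# Write the serialized record as one binary record in a TFRecord file
with tf.io.TFRecordWriter("images.tfrecord") as writer:
    writer.write(serialized)
```

In a real pipeline, image_bytes would come from reading an image file from disk, and you would loop over the whole dataset, writing one serialized Example per image.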

Learn the common techniques to handle imbalanced classification datasets for structured data

In this article, you will learn what an imbalanced dataset is and the issues that arise when a classification dataset is imbalanced. You will understand common techniques like oversampling, undersampling, and generating synthetic data to handle imbalanced datasets, and finally apply all the concepts to an imbalanced dataset.

Photo by Azzedine Rouichi on Unsplash

An imbalanced dataset is one in which one class has a disproportionate number of observations compared to the other classes. The classes do not have equal representation, and the imbalance causes a skewed class distribution.

Suppose you have to run a classification algorithm to distinguish between a benign tumor and a cancerous tumor. There are 20,000 observations with benign tumors and just 100 observations with cancerous tumors; this makes the dataset imbalanced.
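As a first taste of one of these techniques, here is a pure-Python sketch of random oversampling: duplicating randomly chosen minority-class samples until the classes are balanced. The toy tumor data and the random_oversample helper are made up for illustration; in practice a library such as imbalanced-learn provides this and smarter variants like SMOTE.

```python
import random
from collections import Counter

def random_oversample(features, labels, seed=42):
    """Duplicate randomly chosen minority-class samples until every
    class matches the majority-class count (random oversampling)."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_x, out_y = list(features), list(labels)
    for cls, n in counts.items():
        idx = [i for i, y in enumerate(labels) if y == cls]
        for _ in range(target - n):
            i = rng.choice(idx)          # pick a minority sample to duplicate
            out_x.append(features[i])
            out_y.append(labels[i])
    return out_x, out_y

# Toy version of the tumor example: 5 benign vs 2 cancerous observations
X = [[0.1], [0.2], [0.3], [0.4], [0.5], [9.1], [9.2]]
y = ["benign"] * 5 + ["cancerous"] * 2
X_res, y_res = random_oversample(X, y)   # now 5 benign vs 5 cancerous
```

Random oversampling is simple but can overfit to the duplicated samples, which is why synthetic-data techniques exist as an alternative.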


Understand and apply the Integrated Gradients technique to a variety of deep learning networks to explain a model's predictions

This post will help you understand the two basic axioms of Integrated Gradients and how to implement Integrated Gradients in TensorFlow using a transfer-learned model.

What is Integrated Gradient?

Integrated Gradients (IG) is an interpretability or explainability technique for deep neural networks that visualizes the importance of the input features contributing to the model's prediction

Can IG be applied to only a specific use case of deep learning or only to a specific neural network architecture?

Integrated Gradients (IG) computes the gradient of the model’s prediction output with respect to its input features and requires no modification to the original deep neural network.

IG can be applied to any differentiable model, whether the input is images, text, or structured data.
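To make the idea concrete, here is a dependency-free sketch of IG on a toy differentiable function f(x) = sum of squares, with an analytic gradient. The function, baseline, and step count are illustrative assumptions; in a real network the gradient comes from automatic differentiation.

```python
def f(x):
    """Toy differentiable 'model': f(x) = sum of squares."""
    return sum(v * v for v in x)

def grad_f(x):
    """Analytic gradient of f; a real network would use autodiff."""
    return [2.0 * v for v in x]

def integrated_gradients(x, baseline, grad_fn, steps=100):
    """Average the gradients along the straight path baseline -> x
    (midpoint Riemann sum), then scale by (x - baseline)."""
    n = len(x)
    avg_grad = [0.0] * n
    for k in range(steps):
        alpha = (k + 0.5) / steps
        point = [b + alpha * (xi - b) for b, xi in zip(baseline, x)]
        g = grad_fn(point)
        avg_grad = [a + gi / steps for a, gi in zip(avg_grad, g)]
    return [(xi - b) * a for xi, b, a in zip(x, baseline, avg_grad)]

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
attributions = integrated_gradients(x, baseline, grad_f)
```

One of the two axioms, completeness, is easy to check here: the attributions sum to f(x) - f(baseline).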


Understand and implement Guided Grad-CAM to produce class-discriminative visual explanations for any CNN-based model

CNN Deep Learning Models — why did they interpret what they interpreted?

Deep Learning models can now achieve very high accuracy. The most critical piece for adopting computer vision algorithms at scale for Image Classification, Object Detection, Semantic Segmentation, Image Captioning, or Visual Question-Answering is understanding why the CNN model interpreted what it interpreted.

Explainability or interpretability of a CNN model is the key to building trust and driving its adoption

Only if we understand why the model failed to identify a class or an object can we concentrate our efforts on addressing that failure. …


Learn the different configuration settings to manage different models and different versions of the model using TensorFlow Serving

This article explains how to manage multiple models and multiple versions of the same model in TensorFlow Serving using configuration files along with a brief understanding of batching.

Prerequisites:

Deploying a TensorFlow Model to Production made Easy

Photo by Loverna Journey on Unsplash

You have TensorFlow deep learning models with different architectures, or you have trained your models with different hyperparameters, and you would like to test them locally or in production. The easiest way is to serve the models using a Model Server config file.

A Model Server configuration file is a protocol buffer (protobuf) file, which is a language-neutral, platform-neutral, extensible yet simple and fast way to serialize structured data. …
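For example, a models.config file serving two models side by side might look like this (the model names and base paths are placeholders):

```
model_config_list {
  config {
    name: "model_a"
    base_path: "/models/model_a"
    model_platform: "tensorflow"
    model_version_policy { specific { versions: 1 versions: 2 } }
  }
  config {
    name: "model_b"
    base_path: "/models/model_b"
    model_platform: "tensorflow"
  }
}
```

TensorFlow Serving is then started with --model_config_file pointing at this file; the specific version policy keeps both version 1 and version 2 of model_a live at the same time, which is handy for A/B testing an experimental version against a stable one.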


A quick and simple guide to understanding what gRPC is and how to serve a deep learning model using a gRPC API.

In this post, you will learn what gRPC is, how it works, the benefits of gRPC, and the difference between gRPC and REST APIs, and finally implement a gRPC API using TensorFlow Serving to serve a model in production.

gRPC is a remote procedure call platform developed by Google.

gRPC is a modern open-source, high-performance, low-latency, high-throughput RPC framework that uses HTTP/2 as its transport protocol and protocol buffers both as its Interface Definition Language (IDL) and as its underlying message interchange format

How does gRPC work?

Inspired by: https://www.grpc.io/docs/what-is-grpc/introduction/

A gRPC channel is created that provides a connection to a gRPC server on a specified port. The client invokes a method on the stub as if it were a local object, and the server is notified of the client's gRPC request. gRPC uses Protocol Buffers to interchange messages between client and server. Protocol Buffers are a way to encode structured data in an efficient, extensible format. …
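The service and its messages are declared in a .proto file, from which client stubs and server skeletons are generated. Here is a minimal hypothetical definition for a prediction service (TensorFlow Serving ships its own PredictionService proto; this one is purely for illustration):

```
syntax = "proto3";

// Hypothetical prediction service, for illustration only.
service Predictor {
  // The client calls Predict on the stub; the server implements it.
  rpc Predict (PredictRequest) returns (PredictResponse);
}

message PredictRequest {
  string model_name = 1;     // which model to query
  repeated float inputs = 2; // flattened input features
}

message PredictResponse {
  repeated float outputs = 1; // model predictions
}
```

The protoc compiler turns this definition into strongly typed client and server code in the language of your choice, and the same proto serves as both the API contract and the binary wire format.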


Deploy a Deep Learning Model to Production using TensorFlow Serving.

Learn step by step deployment of a TensorFlow model to Production using TensorFlow Serving.

You created a deep learning model using TensorFlow, fine-tuned the model for better accuracy and precision, and now want to deploy your model to production so users can make predictions with it.

What’s the best way to deploy your model to production?

A fast, flexible way to deploy a TensorFlow deep learning model is to use the high-performing and highly scalable serving system, TensorFlow Serving

TensorFlow Serving allows you to

  • Easily manage multiple versions of your model, like an experimental or stable version.
  • Keep your server architecture and APIs the…
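A typical deployment sketch uses the official Docker image. The model name and host path below are placeholders; TensorFlow Serving expects the exported SavedModel under a numbered version directory, and by default exposes gRPC on port 8500 and REST on port 8501.

```
# Expected layout of the exported model (hypothetical path):
#   /tmp/saved_models/my_model/1/saved_model.pb

# Pull the TensorFlow Serving image and serve the model
docker pull tensorflow/serving
docker run -p 8501:8501 -p 8500:8500 \
  --mount type=bind,source=/tmp/saved_models/my_model,target=/models/my_model \
  -e MODEL_NAME=my_model -t tensorflow/serving

# Query the REST endpoint with a toy input
curl -d '{"instances": [[1.0, 2.0, 5.0]]}' \
  -X POST http://localhost:8501/v1/models/my_model:predict
```

Dropping a new numbered directory (e.g. 2/) next to 1/ is enough for TensorFlow Serving to pick up and serve the new version.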

Learn to create an input pipeline for images to efficiently use CPU and GPU resources to process the image dataset and reduce the training time for a deep learning model.

In this post, you will learn

  • How are the CPU and GPU resources used in a naive approach during model training?
  • How to efficiently use the CPU and GPU resources for data pre-processing and training?
  • Why use tf.data to build an efficient input pipeline?
  • How to build an efficient input data pipeline for images using tf.data?

How does a naive approach work for input data pipeline and model training?

When creating an input data pipeline, typically, we perform the ETL(Extract, Transform, and Load) process.

  • Extraction: extract the data from different data sources, either local data sources such as a hard disk or remote data sources such as cloud storage. …
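The ETL stages map naturally onto tf.data. Below is a small sketch in which synthetic numbers stand in for decoded images (the preprocess function and dataset values are placeholders; assumes TensorFlow 2.x):

```python
import tensorflow as tf

# Extract: synthetic values standing in for image files on disk.
samples = tf.data.Dataset.from_tensor_slices(tf.range(1000, dtype=tf.float32))

def preprocess(x):
    # Transform: stand-in for decoding, resizing, and normalizing an image.
    return x / 255.0

AUTOTUNE = tf.data.AUTOTUNE
dataset = (samples
           .map(preprocess, num_parallel_calls=AUTOTUNE)  # parallel transforms
           .cache()                                       # cache after first epoch
           .shuffle(buffer_size=1000)
           .batch(32)
           .prefetch(AUTOTUNE))                           # overlap training and I/O

# Load: the model would consume these batches during training.
batch = next(iter(dataset))
```

The prefetch step is what lets the CPU prepare the next batch while the GPU trains on the current one, instead of the two resources waiting on each other as in the naive approach.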

Machine Learning

In this post, you will learn how to use tf.data.Dataset to create an efficient data input pipeline for a structured dataset using the Feature columns.

Photo by Markus Winkler on Unsplash

The dataset used is Online Shoppers Behaviour prediction.

The dataset captures online customers’ behavior; these insights can be used for targeted advertising to increase sales and hence revenue.

Dataset Description


The input data pipeline for machine learning consists of extracting the data, transforming it, and then loading it for the model to train and predict (ETL).

  • Extracting the data from the source
  • Transforming or processing the data into a format for the model to use
  • Load the data for the model to train and…
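The steps above can be sketched with feature columns on a tiny made-up sample mimicking two columns of the Online Shoppers dataset. Note that tf.feature_column is deprecated in recent TensorFlow releases in favor of Keras preprocessing layers, but it still illustrates the idea; the column names and values here are illustrative assumptions.

```python
import tensorflow as tf

# Extract: a tiny stand-in for the Online Shoppers dataset.
data = {
    "PageValues": [0.0, 12.5, 3.2, 0.0],
    "VisitorType": ["Returning_Visitor", "New_Visitor",
                    "Returning_Visitor", "Other"],
}
labels = [0, 1, 0, 0]

dataset = tf.data.Dataset.from_tensor_slices((dict(data), labels)).batch(2)

# Transform: a numeric column passes values through; the categorical
# column is one-hot encoded via an indicator column.
numeric = tf.feature_column.numeric_column("PageValues")
visitor = tf.feature_column.indicator_column(
    tf.feature_column.categorical_column_with_vocabulary_list(
        "VisitorType", ["Returning_Visitor", "New_Visitor", "Other"]))

# Load: DenseFeatures turns a batch of raw features into a dense tensor
# that a Keras model can consume.
feature_layer = tf.keras.layers.DenseFeatures([numeric, visitor])
features, _ = next(iter(dataset))
encoded = feature_layer(features)  # shape: (2, 1 numeric + 3 one-hot)
```

The resulting dense tensor can feed directly into a tf.keras model, with the same pipeline reused for training and prediction.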

About

Renu Khandelwal

Loves learning, sharing, and discovering myself. Passionate about Machine Learning and Deep Learning
