Find the optimal number of clusters in K-Means clustering using the Elbow method, Silhouette score, and Gap statistics.

In this article, you will gain an understanding of

  • What is K-Means clustering?
  • How does K-Means work?
  • Applications of K-Means Clustering
  • Implementation of K-Means Clustering in Python
  • Finding optimal clusters using the Elbow method, Silhouette score, and Gap Statistics

K-Means clustering is a simple, popular, yet powerful unsupervised machine learning algorithm. It is an iterative algorithm that groups the points of an unlabeled data set into clusters based on similar characteristics.

The K-Means algorithm aims to form cohesive clusters based on the defined number of clusters, K. …
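Below is a minimal sketch of how the Elbow method and Silhouette score can be computed with scikit-learn; the synthetic data, the range of K values, and the parameter choices are illustrative assumptions rather than the article's exact code (the Gap statistic needs extra bootstrap code and is not shown here).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Illustrative data set: 500 points drawn from 4 blobs.
X, _ = make_blobs(n_samples=500, centers=4, random_state=42)

inertia, silhouette = [], []
for k in range(2, 10):
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    inertia.append(km.inertia_)                          # within-cluster sum of squares (Elbow method)
    silhouette.append(silhouette_score(X, km.labels_))   # higher is better

best_k = range(2, 10)[int(np.argmax(silhouette))]
print("Inertia per K:", inertia)
print("Best K by silhouette:", best_k)
```

Plotting the inertia values against K and looking for the "elbow" where the curve flattens gives the Elbow-method estimate, while the K with the highest silhouette score gives the Silhouette estimate.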



Learn to Understand the Root Cause of Performance Degradation of Machine Learning Models in Production

You will learn

  • Why do ML models degrade after deployment?
  • Difference between Data Drift and Concept Drift
  • Different techniques to handle model degradation

You trained an ML model with great performance metrics and then deployed it in production. The model worked well in production for some time, but your users recently observed that it is no longer producing reliable predictions.

What must be going on with the model? Is the model itself the issue, or is the production data the root cause?

We provide the data and the results during the training of a Machine Learning/Deep Learning…
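One common way to check whether the production data has drifted away from the training data is the Population Stability Index (PSI). The sketch below is a general illustration of that technique, not necessarily the approach the article walks through; the feature arrays and the 0.2 threshold are assumptions.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a training-time feature distribution with the same feature in production."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_counts, _ = np.histogram(expected, bins=edges)
    actual_counts, _ = np.histogram(actual, bins=edges)
    expected_pct = expected_counts / expected_counts.sum() + 1e-6   # avoid log(0)
    actual_pct = actual_counts / actual_counts.sum() + 1e-6
    return np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct))

train_feature = np.random.normal(0.0, 1.0, 10_000)   # feature at training time (illustrative)
prod_feature = np.random.normal(0.5, 1.0, 10_000)    # same feature observed in production
psi = population_stability_index(train_feature, prod_feature)
print(f"PSI = {psi:.3f}  (a common rule of thumb: > 0.2 suggests significant drift)")
```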


Measure Statistical Distributional Similarity using Kullback–Leibler Divergence, Jensen–Shannon Divergence, and the Kolmogorov–Smirnov Test

In this article, you will explore common techniques like KL Divergence, Jensen-Shannon divergence, and KS test used in Machine Learning to measure the similarity between distributions statistically.

Why do we need to measure similarity or divergence between probability distributions used in Machine Learning?

In Machine Learning, you will encounter probability distributions for continuous and discrete input data, outputs from models, and error calculation between the actual and the predicted output.

  • Measuring the probability distributions of input and output features helps identify data drift.
  • When training models, you would like to minimize the error. The error can be minimized…
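As a minimal sketch, the snippet below compares a training-time feature distribution with a production one using SciPy's KL divergence, Jensen–Shannon distance, and two-sample KS test; the two synthetic samples and the bin count are illustrative assumptions.

```python
import numpy as np
from scipy.stats import entropy, ks_2samp
from scipy.spatial.distance import jensenshannon

# Hypothetical samples: the same feature at training time and in production.
train_feature = np.random.normal(loc=0.0, scale=1.0, size=10_000)
prod_feature = np.random.normal(loc=0.3, scale=1.2, size=10_000)

# Histogram both samples on shared bins to obtain discrete probability distributions.
bins = np.histogram_bin_edges(np.concatenate([train_feature, prod_feature]), bins=50)
p, _ = np.histogram(train_feature, bins=bins)
q, _ = np.histogram(prod_feature, bins=bins)
p, q = p / p.sum(), q / q.sum()
eps = 1e-12  # avoid division by zero inside the KL computation

kl_divergence = entropy(p + eps, q + eps)           # KL(P || Q), asymmetric
js_divergence = jensenshannon(p, q) ** 2            # square of the JS distance
ks_statistic, ks_p_value = ks_2samp(train_feature, prod_feature)  # KS test on raw samples

print(f"KL: {kl_divergence:.4f}, JS: {js_divergence:.4f}, "
      f"KS: {ks_statistic:.4f} (p={ks_p_value:.4f})")
```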


Explore Deep Convolutional Autoencoders to identify Anomalies in Images.

This article is an experimental study to check whether Deep Convolutional Autoencoders can be used for image anomaly detection on MNIST and Fashion MNIST.

Autoencoder in a nutshell

Functionality: An autoencoder encodes the input to identify an important latent feature representation. It then decodes the latent features to reconstruct an output as close as possible to the input.

Objective: An autoencoder’s objective is to minimize the reconstruction error between the input and the output. This forces the autoencoder to learn the important features present in the data.

Architecture: Autoencoders consist of an Encoder network and a Decoder network. The encoder compresses the high-dimensional input into a lower-dimensional latent representation, also referred to…
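A minimal sketch of such an architecture in PyTorch is shown below; the layer sizes and the use of per-image reconstruction error as an anomaly score are illustrative assumptions, not the article's exact model.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Toy convolutional autoencoder for 28x28 grayscale images (MNIST-sized)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 14x14 -> 7x7
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),  # 7x7 -> 14x14
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),   # 14x14 -> 28x28
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
x = torch.rand(8, 1, 28, 28)                    # stand-in batch of images
reconstruction = model(x)
# A high per-image reconstruction error flags a potential anomaly.
per_image_error = ((reconstruction - x) ** 2).mean(dim=(1, 2, 3))
print(per_image_error)
```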


Change your mindset to transform your life

We are shaped by our thoughts; we become what we think. – Gautam Buddha

Introspection on my mindset and thought process helped me change my destiny.

Around 6 years ago, I was working with a mindset that my abilities were fixed, my learning was limited, and as a result, my growth was stagnant. I was not happy with my then-current state.

The words of the great philosopher echoed.

Knowing yourself is the beginning of all wisdom. – Aristotle

When I was looking outside, I believed that someone else was responsible for my situation. I was not happy or peaceful. Introspection helped me understand that…


How YOLO v4 object detection delivers higher mAP and shorter inference time

Enhanced Features of YOLO v4

  • YOLO v4 has a faster inference speed, making it suitable for object detectors in production systems.
  • It is optimized for parallel computations.
  • YOLO v4 is an efficient and powerful object detection model that can be trained on a single GPU to deliver an accurate object detector quickly.

Object detector models are composed of

  • A pre-trained Backbone
  • Neck
  • Head that is used to predict classes and bounding boxes of objects.

The backbone of the object detector can be a pre-trained neural network.

Examples: backbones pre-trained on ImageNet such as VGG16, ResNet-50, SpineNet, EfficientNet-B0/B7, CSPResNeXt50, or CSPDarknet53, or a lightweight backbone such as ShuffleNet for running on CPU.

Object detector models insert additional layers between the backbone and head…
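As an illustration only (not the actual YOLO v4 code), the sketch below shows how a detector can be composed of a backbone, a neck, and a head in PyTorch; the layer sizes, class count, and anchor count are assumptions made for this example.

```python
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    """Toy detector: backbone (features) -> neck (aggregation) -> head (predictions)."""
    def __init__(self, num_classes=80, num_anchors=3):
        super().__init__()
        self.backbone = nn.Sequential(                              # stand-in for CSPDarknet53 etc.
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.neck = nn.Sequential(                                  # stand-in for SPP / PANet blocks
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        # Head predicts, per anchor: 4 box coordinates + 1 objectness score + class scores.
        self.head = nn.Conv2d(128, num_anchors * (5 + num_classes), 1)

    def forward(self, x):
        return self.head(self.neck(self.backbone(x)))

preds = TinyDetector()(torch.rand(1, 3, 416, 416))
print(preds.shape)   # torch.Size([1, 255, 104, 104]) for 80 classes and 3 anchors
```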


Improve MNIST image generation by implementing a Wasserstein GAN (WGAN) with Weight Clipping and Gradient Penalty using PyTorch.

What will you learn?

  • Challenges with DCGAN
  • How does Wasserstein GAN solve the challenges with DCGAN?
  • What is Earth mover’s distance
  • The 1-Lipschitz constraint via weight clipping and gradient penalty
  • Implement WGAN with weight clipping and gradient penalty in PyTorch using the MNIST dataset

Prerequisites:

Deep Convolutional Generative Adversarial Network using PyTorch

A Generative Adversarial Network consists of two deep neural networks: a Generator and a Discriminator.

Generator: Its objective is to learn the data distribution from the training data to produce images that resemble the training data.

Discriminator: A binary classifier whose objective is to distinguish the real training data from the fake data…
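Below is a minimal sketch of the gradient penalty term used to enforce the 1-Lipschitz constraint in WGAN-GP; the `critic`, `real`, and `fake` names are assumptions for illustration, not the article's exact code.

```python
import torch

def gradient_penalty(critic, real, fake, device="cpu"):
    """WGAN-GP penalty: push the critic's gradient norm toward 1 on interpolated samples."""
    batch_size = real.size(0)
    # Random interpolation between real and fake images.
    alpha = torch.rand(batch_size, 1, 1, 1, device=device)
    interpolated = (alpha * real + (1 - alpha) * fake).requires_grad_(True)

    scores = critic(interpolated)
    gradients = torch.autograd.grad(
        outputs=scores,
        inputs=interpolated,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
    )[0]

    # Penalize deviation of the gradient norm from 1 (the 1-Lipschitz constraint).
    gradients = gradients.view(batch_size, -1)
    return ((gradients.norm(2, dim=1) - 1) ** 2).mean()
```

The penalty is typically added to the critic loss with a weighting factor (often 10), replacing the weight clipping used in the original WGAN.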


Learn how to generate MNIST images with a DCGAN using PyTorch

In this post, you will learn to create a DCGAN using PyTorch on the MNIST dataset.

Prerequisites

A basic understanding of CNN

A sample implementation using CNN

Understanding Deep Convolutional GAN

GANs were invented by Ian Goodfellow in 2014 and first described in the paper Generative Adversarial Nets.

A GAN, or Generative Adversarial Network, is a generative model that creates new data instances resembling the training data set. A GAN is implemented using two neural networks: a Generator and a Discriminator.

Generator and Discriminator

The Generator’s objective is to learn the data distribution of the training data to produce fake images that resemble the training data.

The Discriminator is…
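A minimal sketch of a DCGAN-style Generator for 28x28 MNIST images is shown below; the latent dimension and layer sizes are illustrative assumptions rather than the article's exact architecture.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy DCGAN generator: maps a latent vector to a 28x28 grayscale image."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 128, 7, stride=1, padding=0),  # 1x1 -> 7x7
            nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),          # 7x7 -> 14x14
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),            # 14x14 -> 28x28
            nn.Tanh(),   # outputs in [-1, 1], matching images normalized to that range
        )

    def forward(self, z):
        return self.net(z)

z = torch.randn(16, 100, 1, 1)    # batch of random latent vectors
fake_images = Generator()(z)
print(fake_images.shape)          # torch.Size([16, 1, 28, 28])
```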


Read to understand how Reinforcement Learning is influenced by human learning.

What will you learn here?

  • What is the difference between Supervised Learning, Unsupervised Learning, and Reinforcement Learning?
  • Understand how Reinforcement Learning mimics human behavior
  • Different components of Reinforcement Learning(RL) and how they interact
  • Applications of Reinforcement Learning(RL) in real-world scenarios.

This article is adapted and inspired from Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto.

What is the difference between Supervised Learning, Unsupervised Learning, and Reinforcement Learning?

Supervised Learning

Supervised Learning algorithms learn from a labeled dataset. The labels in the dataset provide the answer to the input data.

The Supervised algorithm's objective is to find the function (f)…
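To make the RL components concrete, here is a minimal, illustrative sketch (not from the article) of an agent interacting with a toy environment: the agent observes a state, takes an action, and receives a reward and the next state.

```python
import random

class LineEnvironment:
    """Toy environment (an assumption for this sketch): the agent starts at position 0
    on a line and receives a reward of 1 when it reaches position 5."""
    def __init__(self):
        self.position = 0

    def step(self, action):                  # action: -1 (move left) or +1 (move right)
        self.position = max(0, self.position + action)
        reward = 1.0 if self.position == 5 else 0.0
        done = self.position == 5
        return self.position, reward, done

env = LineEnvironment()
state, total_reward, done = 0, 0.0, False
for _ in range(200):                         # cap the episode length
    action = random.choice([-1, 1])          # random policy as a stand-in for a learned one
    state, reward, done = env.step(action)
    total_reward += reward
    if done:
        break
print(f"Episode ended at state {state} with total reward {total_reward}")
```

A learning algorithm such as Q-learning would replace the random policy with one that improves from the observed rewards.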


Train a CNN on MNIST Dataset using Keras and PyTorch

Here you will learn to create CNN models with a similar architecture using Keras and PyTorch on the MNIST dataset.

Features of Keras and PyTorch.

Keras

  • Keras is a simpler, more concise deep learning API written in Python that runs on top of the TensorFlow machine learning platform.
  • It enables fast experimentation.
  • Keras provides abstractions and building blocks for developing deep learning models.
  • A model built using Keras is more readable and hides the low-level neural network implementation details.
  • Keras can run on TPU or large clusters of GPUs.
  • It implicitly performs computation on GPU.
  • Supported by Google

PyTorch

  • PyTorch is a lower-level API focused on working directly with array expressions
  • PyTorch has…
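As a minimal sketch of the Keras side of the comparison, the snippet below defines a small CNN for MNIST; the exact layers and hyperparameters in the article may differ, so treat these choices as assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Small illustrative CNN for 28x28x1 MNIST images with 10 output classes.
model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Conv2D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The equivalent PyTorch model would subclass `nn.Module` and spell out the layers and forward pass explicitly, which is where the readability-versus-control trade-off between the two frameworks shows up.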

Renu Khandelwal

Loves learning, sharing, and self-discovery. Passionate about Machine Learning and Deep Learning.
