Rajat Nagpal
| M.Tech Artificial Intelligence |
| Medical Intelligence and Language Engineering (MILE) Lab |
Indian Institute of Science, Bengaluru
EDUCATION
INDIAN INSTITUTE OF SCIENCE, BENGALURU (IISc)
M.TECH ARTIFICIAL INTELLIGENCE
Machine Learning | Natural Language Processing
USICT, GGSIPU, DELHI
B.TECH IN ELECTRONICS AND
COMMUNICATION ENGINEERING
2013 - 2017 | Aggregate Score: 66.1%
M.TECH COURSEWORK
BASIC
Matrix theory
Machine Learning
Practical Data Science
Digital Image Processing
Data Structures and Algorithms
Stochastic Models and Applications
Linear and Non-Linear Optimization
ADVANCED
Deep Learning
Reinforcement Learning
Speech Information Processing
Natural Language Understanding
INTERNSHIPS
CITYFLO 350, BOMBARDIER, DMRC, DELHI
May - July 2015
SKILLS
PROGRAMMING
Python • C
TOOLS
PyTorch • LaTeX
OS
Linux • Windows
PROJECTS
A STUDY OF TEXT SUMMARIZATION TECHNIQUES AND
THEIR APPLICATIONS ON INDIAN LANGUAGES
| MASTER’S PROJECT
August 2019 - June 2020
In this work, we review the main approaches to text summarization and apply
them to Hindi (one of the official languages of India), comparing the results.
In addition, we explore various combinations of sentence representation and
extractive summarization techniques on Hindi data, improving summary quality
in terms of the ROUGE score.
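The frequency-based extractive pipeline and ROUGE evaluation can be sketched as follows (a minimal, hypothetical illustration in pure Python on a toy English example; the project itself worked on Hindi data and explored richer sentence representations):

```python
from collections import Counter

def extractive_summary(sentences, k=1):
    """Score each sentence by summed corpus word frequency (a simple
    frequency-based sentence representation) and keep the top k."""
    words = [w.lower() for s in sentences for w in s.split()]
    freq = Counter(words)
    scored = sorted(sentences,
                    key=lambda s: sum(freq[w.lower()] for w in s.split()),
                    reverse=True)
    return scored[:k]

def rouge1_recall(candidate, reference):
    """ROUGE-1 recall: fraction of reference unigrams covered by the candidate."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    return overlap / max(sum(ref.values()), 1)
```

A stronger pipeline would swap the frequency scores for learned sentence embeddings, keeping the same select-then-evaluate structure.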
NEURAL MACHINE TRANSLATION WITH ATTENTION USING
WORD2VEC | SUMMER PROJECT
Word2Vec embeddings were trained and used for neural machine translation
between French, German, and English sentences with different kinds of
attention mechanisms. The BLEU score was used as the metric to analyze the
effectiveness of these mechanisms.
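The BLEU metric used here can be sketched as follows (a simplified toy version with n-grams up to 2 and no smoothing; standard BLEU uses n up to 4 and corpus-level statistics):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    """Toy BLEU: geometric mean of modified n-gram precisions (n = 1..max_n)
    multiplied by a brevity penalty that punishes short candidates."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        c_counts, r_counts = Counter(ngrams(cand, n)), Counter(ngrams(ref, n))
        overlap = sum(min(c_counts[g], r_counts[g]) for g in c_counts)
        precisions.append(overlap / max(sum(c_counts.values()), 1))
    if min(precisions) == 0:
        return 0.0
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```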
NEURAL RELATION EXTRACTION
| NATURAL LANGUAGE UNDERSTANDING COURSE PROJECT
In this project, PCNN and BiGRU models for relation extraction were analyzed.
It was observed that PCNN, being a window-based method, effectively captures
the representation of each word in a local context, whereas BiGRU captures the
global representation of words in the context of the entire sentence.
GENERATIVE ADVERSARIAL NETWORKS (GAN)
|MACHINE LEARNING COURSE PROJECT
A Deep Convolutional GAN (DCGAN) was used for an unsupervised learning task.
Convincing evidence was found that the DCGAN learns a hierarchy of
representations, from object parts to scenes, in both the generator and the
discriminator.
FEATURE ENGINEERING FOR VOICED UNVOICED
SEPARATION
|SPEECH INFORMATION PROCESSING COURSE PROJECT
Experiments on novel combinations of features were carried out for
voiced/unvoiced (V/UV) separation, building on baseline features such as
zero-crossing rate and short-term energy to improve overall performance.
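The two baseline features can be sketched as follows (a minimal NumPy illustration; the frame length is an assumption, not the project's actual setting):

```python
import numpy as np

def frame_features(signal, frame_len=160):
    """Per-frame zero-crossing rate and short-term energy, the baseline
    features for voiced/unvoiced separation (voiced frames tend to have
    high energy and low ZCR; unvoiced frames the opposite)."""
    n_frames = len(signal) // frame_len
    zcr, energy = [], []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        # Fraction of adjacent sample pairs whose sign differs
        zcr.append(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
        # Mean squared amplitude of the frame
        energy.append(np.sum(frame.astype(float) ** 2) / frame_len)
    return np.array(zcr), np.array(energy)
```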
PERSISTENCE IN DATA STRUCTURES
|DATA STRUCTURES AND ALGORITHMS COURSE PROJECT
I propose a persistent model for beam-search decoding that removes the need to
backtrack, which saves time. Because a version is saved at each iteration,
decoding can also resume from a previous version when the probabilities of a
certain number of bi-gram pairs change, and both time and space are saved when
two sentences share a prefix.
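The prefix-sharing idea behind the persistent model can be sketched with an immutable linked node (a hypothetical minimal structure, not the project's actual implementation):

```python
class Node:
    """Immutable cons cell: extending a hypothesis shares the entire prefix,
    so every beam-search iteration is a cheap new 'version' and earlier
    versions remain available without backtracking."""
    __slots__ = ("token", "prev")

    def __init__(self, token, prev=None):
        self.token, self.prev = token, prev

    def extend(self, token):
        return Node(token, self)  # O(1): no copying of the prefix

    def tokens(self):
        out, node = [], self
        while node is not None:
            out.append(node.token)
            node = node.prev
        return out[::-1]
```

Two hypotheses that diverge at the last step physically share all earlier nodes, which is where the time and space savings for common prefixes come from.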
ACHIEVEMENTS
GATE 2018 (ECE) - AIR 21
(99.98 percentile)
ACTIVITIES
Placement coordinator at IISc
Captain of cricket team (Undergrad)
Runner-up in chess tournament (Undergrad)
MATRIX SKETCHING ALGORITHM ANALYSIS
|PRACTICAL DATA SCIENCE COURSE MINI-PROJECT
In this project, the accuracy of the linear least-squares solution was
evaluated on both the original matrix and the sketched matrix to analyze the
efficiency of the matrix sketching algorithm. Since the original matrix is
considerably large, the stochastic gradient descent algorithm was used for the
linear regression problem, whereas for the sketched matrix the least-squares
equation was solved directly. Considerable evidence was found in favor of the
matrix sketching algorithm.
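The comparison can be sketched with a Gaussian sketching matrix (a toy illustration: problem sizes and the sketch distribution are assumptions, and exact least squares stands in for the SGD solver used on the original matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 2000, 5, 200                 # tall problem, sketched down to k rows
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Least squares on the original (large) matrix
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)

# Gaussian sketch: S has k rows with N(0, 1/k) entries, so S^T S ~ I
# in expectation; solve the much smaller k x d least-squares problem
S = rng.standard_normal((k, n)) / np.sqrt(k)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
```

The sketched problem is a factor n/k smaller, while the solution stays close to the full one with high probability when k is a modest multiple of d.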
FIZZ-BUZZ
|DEEP LEARNING COURSE MINI-PROJECT 1
This project implements the famous FizzBuzz problem in PyTorch to study
decisions about the neural network architecture (number of layers, number of
units per layer) and hyper-parameter settings such as the learning rate,
number of epochs, loss function, and regularizer. The model's performance was
evaluated across different architecture and hyper-parameter settings.
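The supervised formulation behind such a network can be sketched by its input and target encodings (illustrative helper names; binary input features and a 4-class target are the common setup for learned FizzBuzz, assumed here rather than taken from the project):

```python
def fizzbuzz_label(n):
    """4-class target used to train the network: 0 = the number itself,
    1 = 'fizz' (divisible by 3), 2 = 'buzz' (by 5), 3 = 'fizzbuzz' (by 15)."""
    if n % 15 == 0:
        return 3
    if n % 5 == 0:
        return 2
    if n % 3 == 0:
        return 1
    return 0

def binary_encode(n, bits=10):
    """Input features: the number's binary digits, least-significant first,
    so the network sees structure rather than a single scalar."""
    return [(n >> i) & 1 for i in range(bits)]
```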
CNN
|DEEP LEARNING COURSE MINI-PROJECT 2
In this project, convolutional neural networks and fully connected neural
networks are compared on the classification task for the Fashion-MNIST dataset.
NATURAL LANGUAGE INFERENCE
|DEEP LEARNING COURSE MINI-PROJECT 3
In this project, two types of features were used: InferSent sentence
embeddings from Facebook Research for the deep learning model, and TF-IDF
features for logistic regression classification. The classification task is
natural language inference, judging the similarity or dissimilarity between
two sentences. Classifiers were trained on the SNLI dataset.
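The TF-IDF side of the pipeline can be sketched in pure Python (a toy illustration: TF-IDF vectors plus a cosine-similarity feature of the kind one could feed to a logistic regression classifier; not the project's actual feature set):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Plain TF-IDF over a tiny corpus: tf = raw count, idf = log(N / df)."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(w for toks in tokenized for w in set(toks))
    return [{w: c * math.log(n / df[w]) for w, c in Counter(toks).items()}
            for toks in tokenized]

def cosine(u, v):
    """Cosine similarity between sparse dict vectors: one scalar feature
    summarizing premise/hypothesis overlap."""
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```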