Premanand Naik has over 5 years of experience in data science and machine learning. He has worked as a consultant at Capgemini and Saama Technologies where he developed machine learning models for tasks like information extraction and optimized data pipelines. Some of his projects include developing an intelligent ETA prediction system for logistics with 80% accuracy and designing an AI chatbot for another logistics company. He has expertise in technologies like Spark, Hadoop, Python and R and machine learning algorithms like gradient boosting, random forest and deep learning models. He holds a Bachelor's degree in Computer Science from MIT Pune.
Mahadevan Nagarajan is seeking a full-time position in VLSI design utilizing his Master's degree in Electrical Engineering from the University of Minnesota and Bachelor's degree from Anna University, India. He has experience with VLSI CAD tools, programming languages, and HDL synthesis tools. Current projects include designing a 128kb SRAM and 16-bit adder using the NCSU 45nm process in Cadence. Previous experience includes power analysis internship at Intel and systems engineering role at Tata Consultancy Services.
Utilizing Open Data for interactive knowledge transfer (Monika Steinberg)
The document proposes a qKAI application framework to provide interactive knowledge transfer through the use of open data, utilizing semantic web technologies and linked open data to build a hybrid data layer and incentivize user interaction through knowledge games that instantiate gaming components and sequences. The framework is intended to enhance the quality of open content through metadata analysis and user enrichment while refining interaction through lightweight interfaces.
The document discusses the Computer Architecture Group at the University of A Coruña and its work over the last 10 years on performance and scalability of distributed enterprise applications. The group has been working to achieve scalable performance on high speed clusters, handling millions of messages per second, and reducing application runtime from weeks to hours. The group's research covers areas like cluster/cloud computing, Java communication middleware, GPUs, and GIS/visualization. It develops JavaFastComms products for high performance Java communication and offers professional services in performance analytics, computer engineering, and training.
The unification of big and little data processing onto a single platform is an important requirement for Hadoop. How can this be achieved? I explain what is needed for three important use cases.
This document is a resume for Venkatesh Avula. It summarizes his objective, education, skills, and work experience. For education, it lists a Master's degree in Computer Science from UT Dallas and a Bachelor's degree in Computer Science from Osmania University in India. For work experience, it details multiple internships and jobs in software engineering, with responsibilities including developing web applications, optimizing code, and implementing reports. It also lists multiple academic projects completed as part of coursework at UT Dallas.
These slides outline the common distributed computing abstractions needed to implement data science at scale. They start with a characterization of the computations required to run common machine learning workloads at scale. Introductions to Hadoop MR, Spark, and GraphLab are currently covered; future updates will add Flink, Titan, and TensorFlow, show how to realize machine learning and deep learning algorithms on top of these frameworks, and discuss the trade-offs between them.
Parmanand Sahu is a senior data scientist with over 7 years of experience in machine learning, natural language processing, and data engineering. He has worked on projects involving predictive modeling, knowledge graph development, conversational agents, and data pipeline development. Currently he is a senior associate data scientist at Capital One where he builds deep learning models for predicting mortgage security prepayment rates.
Parmanand Sahu has over 8 years of experience in data science and machine learning. He has worked on various projects including predicting mortgage prepayment rates, developing an AI assistant, and building knowledge graphs. Currently he is a senior associate data scientist at Capital One where he developed a neural network model to predict mortgage prepayment rates.
Shantanu Gupta is a data scientist with over 5 years of experience in data analysis, cloud computing, and web development. He holds an MS in Computer Engineering from Arizona State University and a BTech in Information Technology. His technical skills include programming languages like Java, Python, and SQL as well as tools like AWS, Hadoop, Spark, and machine learning libraries like TensorFlow and Keras. He has worked on various projects involving pattern recognition, email spam detection, and handwritten digit recognition. Currently, he is a Data Intelligence and Cloud Developer at ASU's Smart City Cloud Innovation Center where he is building prototypes for smart city initiatives utilizing AWS cloud services.
The document is a resume for Parmanand Sahu. It summarizes his education, including a Master's degree in Computer Science from UT Dallas and a Bachelor's degree in Metallurgical Engineering from India. It also outlines his technical skills and work experience in data science and machine learning roles at various companies, including developing neural networks, NLP models, and data pipelines. It lists several personal projects applying machine learning techniques to tasks like question answering, named entity recognition, and fraud detection.
Parmanand Sahu is a data scientist with experience applying machine learning algorithms like neural networks, random forests, and LSTM to problems in NLP, computer vision, and time series analysis. He has worked on projects at Capital One, Ocrolus, Lantern Pharma, and Huddl.ai developing pipelines and models for tasks like predicting mortgage rates, document classification, cancer drug exploration, and conversational agents. He holds an MS in Computer Science from UT Dallas and a BTech in Metallurgical Engineering from NIT Raipur with skills in Python, R, C/C++, Java, Scala, databases, cloud technologies, and deep learning frameworks.
Parmanand Sahu is seeking a data science role. He has a master's degree in computer science from UT Dallas and a bachelor's degree in metallurgical engineering from India. He has worked as a data science intern at Capital One and held data science roles at several companies in India focusing on machine learning, natural language processing, and knowledge graphs. His skills include Python, SQL, PyTorch, TensorFlow, and experience with AWS, MongoDB, and Neo4j. He has completed several online courses and certifications in machine learning, deep learning, and data science topics.
Parmanand Sahu is seeking a data science role and has an MS in Computer Science from UT Dallas. He has 2+ years of experience in data science roles at Capital One and the VHSS Lab at UT Dallas. His technical skills include Python, SQL, MongoDB, and TensorFlow, and he has experience with machine learning algorithms like random forest, logistic regression, RNNs, and CNNs. He has built models for tasks like named entity recognition, fraud detection, and question answering, and has completed relevant academic projects on tasks like image classification and decision trees.
Parmanand Sahu is seeking a data science role. He has a Master's degree in Computer Science from UT Dallas and a Bachelor's degree in Metallurgical Engineering from India. He has 2 years of experience in data science roles doing projects involving machine learning algorithms like random forest, logistic regression and deep learning models. He is proficient in Python, SQL, C/C++ and has experience with databases, cloud platforms and machine learning libraries.
Christine Straub is an experienced machine learning engineer and data scientist with skills in Python, natural language processing, deep learning, computer vision, cloud computing, and data analysis. She has work experience developing AI chatbots, building scalable data warehouses, creating machine learning pipelines, and analyzing bio-metrics and geo-image data. Her education includes a BS in Computer Science from UC Berkeley and machine learning courses from Stanford.
Herambeshwar Pendyala is a computer science graduate student at Old Dominion University seeking an internship where he can apply his problem solving and decision making skills. He has 2 years of experience as a backend developer specializing in natural language processing. His areas of interest include machine learning techniques to solve business problems. He has skills in languages like Java, Python, and frameworks like Spark, Hadoop, TensorFlow and Keras.
This document provides a summary of Rangarajan Chari's background and experience as a data scientist and machine learning specialist with skills in neural networks, natural language processing, and big data technologies. Chari has worked on projects involving text classification, face recognition, and troubleshooting techniques for vehicles. The summary also lists his education, including a PhD program in artificial intelligence and master's degrees in computer science, math, and physics.
Herambeshwar Pendyala is a computer science graduate student at Old Dominion University actively looking for internships applying data science skills. He has 2 years of experience in natural language processing and big data technologies. His technical skills include programming languages like Java, Python, C/C++, frameworks like Spark and Tensorflow, and databases like SQL and MongoDB. His past experience includes developing chatbots, building a Raspberry Pi cluster for distributed computing, and software engineering internships applying machine learning.
This document is a resume for Yu Wang, who is pursuing an MS in Computer Science at UT Dallas with a 3.35 GPA. Wang has experience in web development, big data, databases, and programming languages like Java, C#, Python, R and SQL. He is looking for a summer/fall 2016 internship in computer science. Some of Wang's projects include developing predictive models using Spark and machine learning algorithms, building web applications using ASP.NET and AngularJS, and performing data analysis on large datasets with tools like Hadoop, Pig, and Hive.
Aayush Saxena is a software developer currently working as a machine learning engineer at Jacobs in Singapore. He has a B.Tech in computer science from the International Institute of Information Technology in Hyderabad, India. His work experience includes roles as a software research engineer at the Singapore Management University and as a high frequency trading developer at Silverleaf in Mumbai, India. He has strong skills in Python, C/C++, Java, SQL, machine learning algorithms, and web development frameworks.
Ragahava Prasad Sridar is a business analytics professional with a Masters in Business Analytics from The University of Texas at Dallas. He has over 3 years of experience in data analysis, machine learning, and database management. His technical skills include Python, R, SQL, Tableau, and Hadoop. He is pursuing a career in data science and analytics and is looking for internship or full-time opportunities.
This document is a resume for Aastha Grover, who is seeking a full-time data science position starting in June 2017. She has a Master's degree in Information Systems from Northeastern University and is proficient in programming languages like R, Java, Python and databases like Neo4j, SQL Server and MongoDB. She has work experience in data science roles at Fidelity Investments and as a programmer analyst at Cognizant. She has also completed academic projects involving predictive modeling, big data analysis and recommendation systems.
This document provides a summary of an individual's experience as a data scientist and solution architect. It includes details on their skills and qualifications in areas like machine learning, deep learning, neural networks, NLP, computer vision, and data engineering. It also lists several projects they have worked on involving building machine learning models for tasks like threat detection, hand detection for safety, and endpoint detection and response. Their roles included developing and validating models, deploying solutions, and collaborating with cross-functional teams.
This document provides a summary of Han Yan's skills, education, and work experience. It lists Han Yan's contact information and outlines his extensive experience in areas such as Java programming, Python, machine learning, databases, and computer graphics. Specifically, it describes his current role as a senior backend engineer developing features for a healthcare app, and past roles developing recommendation systems, financial trade processing systems, and a dance video game.
Saurabh Shanbhag is a software engineer with experience in Java, Python, and algorithms. He has a Master's in Computer Science from North Carolina State University and a Bachelor's in IT from MIT College of Engineering in India. His projects include a meeting scheduler bot using Node.js, a gender predictor using Naive Bayes classification and K-NN clustering, and a hospital management system in Java and MongoDB. He is looking for opportunities as an innovative programmer and collaborative team leader.
This document provides a summary of Gaurav Srivastava's contact information, career profile, and experience. It highlights that he is seeking a challenging role in IT and big data analytics with over 10 years of experience in project management, consulting, and software engineering. His expertise includes big data technologies, distributed systems, data analytics, and Agile methodologies.
Pushing the limits of ePRTC: 100ns holdover for 100 days (Adtran)
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
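As a back-of-envelope check (my own arithmetic, not a figure from the talk), holding a 100 ns time error over 100 days with a constant frequency offset and no drift correction implies an average fractional frequency error on the order of 1e-14:

```python
# Back-of-envelope holdover budget: what average fractional frequency
# error keeps accumulated time error under 100 ns for 100 days?
# (Simplifying assumption: constant frequency offset, no drift removal.)

MAX_TIME_ERROR_S = 100e-9        # 100 ns holdover target
HOLDOVER_S = 100 * 86_400        # 100 days in seconds

# Accumulated time error = fractional frequency offset * elapsed time, so:
required_fractional_offset = MAX_TIME_ERROR_S / HOLDOVER_S

print(f"{required_fractional_offset:.2e}")  # ~1.16e-14
```

That stability requirement is far beyond a free-running oscillator's typical aging, which is why parametric modeling of the clock's behavior is needed rather than a simple frequency hold.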
Full-RAG: A modern architecture for hyper-personalization (Zilliz)
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
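The talk's implementation details aren't in this summary, but the retrieve-then-rerank pattern it refers to can be sketched roughly as follows. The scoring functions, documents, and the "freshness" signal below are illustrative stand-ins of my own, not Tecton's design:

```python
# Toy retrieve-then-rerank sketch for a RAG pipeline.
# First-pass retrieval uses simple keyword overlap; the reranker then
# folds in a real-time context feature (here, item freshness) before
# the top result would be handed to a generator as context.

from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    freshness: float  # 0..1, higher = more recent (a stand-in context signal)

def overlap_score(query: str, doc: Doc) -> float:
    q, d = set(query.lower().split()), set(doc.text.lower().split())
    return len(q & d) / max(len(q), 1)

def retrieve(query: str, docs: list[Doc], k: int = 3) -> list[Doc]:
    # First pass: cheap relevance ranking over the whole corpus.
    return sorted(docs, key=lambda d: overlap_score(query, d), reverse=True)[:k]

def rerank(query: str, candidates: list[Doc]) -> list[Doc]:
    # Second pass: enrich the relevance score with a real-time signal.
    return sorted(
        candidates,
        key=lambda d: 0.7 * overlap_score(query, d) + 0.3 * d.freshness,
        reverse=True,
    )

docs = [
    Doc("running shoes sale this week", freshness=0.9),
    Doc("running shoes buying guide", freshness=0.2),
    Doc("trail hiking boots overview", freshness=0.8),
]
best = rerank("running shoes", retrieve("running shoes", docs))[0]
print(best.text)  # the fresh, relevant doc wins over the stale one
```

The point of the two-stage design is that the expensive, context-aware scoring only runs over the small candidate set returned by the cheap first pass.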
How to Get CNIC Information System with Paksim Ga (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
HCL Notes and Domino license cost reduction in the world of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefit it brings you. Above all, you certainly want to stay within budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some practices that can lead to unnecessary spending, for example using a person document instead of a mail-in database for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to keep an overview. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics are covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices you can apply immediately
Infrastructure Challenges in Scaling RAG with Custom AI models (Zilliz)
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Programming Foundation Models with DSPy - Meetup Slides (Zilliz)
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... (Neo4j)
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 (Neo4j)
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence (IndexBug)
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Removing Uninteresting Bytes in Software Fuzzing (Aftab Hussain)
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Premanand Naik
premanand.naik09@gmail.com
+91 8983118219, +91 7972550240
www.linkedin.com/in/premanand-naik-83866377
Experience
Consultant @ Capgemini, Pune, India, 12/2019 – present
My responsibilities in the data science team include:
- Developing machine learning models to extract paragraphs from unstructured text and cluster them based on different control engineering loops.
- Designing an information retrieval system using spaCy NLP and MITIE that revealed new contextual patterns and relations between control blocks in control engineering technical design documents.
Associate Consultant @ Saama Technologies, Pune, India, 07/2019 – 12/2019
- Liaising with clients and stakeholders to understand their requirements and challenges, and keeping them up to date on the progress of the proposed solution.
- Designing data pipelines using Spark jobs to extract data from Oracle, perform transformations, and load the results into Hive tables.
- Optimizing Spark jobs and applying parallelization, making data transformation and loading almost 30 percent faster than the legacy systems.
Senior Software Engineer @ Accenture, Pune, 10/2015 – 06/2019
- Providing high-end consulting to supply chain business clients to help them sharpen their business strategy by implementing machine learning and statistical models.
- Collaborative participation and a team-leading role in Digital Co-Innovation platforms, solving real-world business problems and generating insights by harnessing the power of machine learning and AI.
- Designing data engineering pipelines using big data technologies like Hadoop, Spark, and Hive, and consulting clients across the supply chain, insurance & oil industries by providing data-driven solutions using applied machine learning and data science.
- Developed a machine learning model using GBM in R and Python to predict iETA for shipment delivery to end customers with an accuracy of 82 percent, improving on-time shipment deliveries by almost 50 percent.
- Developed and presented an iETA prediction system for a logistics giant, achieving 10% better accuracy and 40% lower variance versus historical analysis. Designed an ML/NLP-powered chatbot for an insurance and logistics giant and presented it as a Co-Innovation asset to industry clients. Achieved 1st rank in the chatbot Co-Innovation challenge.
- Designed NLP pipelines using deep learning architectures like sequence-to-sequence modelling, RNN, LSTM, and BERT to understand contextual relations between words and predict next-word sequences; improved performance by using different activation functions and advanced optimizers. Also applied NLP techniques like tf-idf and word2vec.
- Designing data-driven solutions using data science and analytics by leveraging big data technologies and machine learning, and proposing them to leadership through visualization and storytelling techniques.
- Performed predictive analytics and designed various ML applications using stochastic models, Bayesian modeling, classification models (SVM/Random Forest/Decision Tree), cluster analysis, neural networks, non-parametric methods, and multivariate statistics, along with their performance optimization and analysis.
Education
MIT Pune, University of Pune, Pune, Maharashtra, India
Bachelor of Engineering, Computers & IT Major, June 2015
Grade: Distinction
Courses and Trainings
Machine Learning, by Stanford, @Coursera
Neural Networks and Deep Learning, by Deeplearning.ai, @Coursera
Convolutional Neural Networks, by Deeplearning.ai, @Coursera
Deep Learning Specialization, by Deeplearning.ai, @Coursera
Technical Skills
- Experience in solving various forecasting business problems using statistical learning models like ARIMA and SARIMAX, and deep learning models like LSTM and neural networks.
- Proficiency in building applications using tools (R/Python) and appropriate Python/R libraries (e.g. pandas, numpy, Keras, TensorFlow, nltk, matplotlib, sklearn (scikit-learn), ggplot2, etc.), delivering business value to clients.
- Experience in supervised and unsupervised machine learning algorithms such as linear regression, logistic regression, k-NN, decision trees, random forest, PCA, GBM, XGBoost, k-means, SVM, and bagging and boosting techniques. Well versed in NLP techniques like tf-idf, bag of words, word2vec, and GloVe embeddings.
- Strong experience working with distributed data systems like Hadoop and related technologies like Spark, Hive, PySpark, Kafka, and SparkR.
- Experience in building recommender systems for supply chain management. Hands-on experience with deep learning & state-of-the-art NN architectures like CNN (ConvNet), RNN, BERT, and LSTM.
Tools:
- Programming Languages: Java, Python, R, Scala
- Machine Learning Packages: TensorFlow, sklearn, Keras, PyTorch
- Distributed Computing: AWS, Hadoop, Hive, Apache Spark, Amazon Redshift
- Python Libraries: pandas, scikit-learn, numpy, nltk, spacy, matplotlib, seaborn, flask
- R Libraries: caret, caTools, RShiny, ggplot2, dplyr
- Machine Learning Algorithms: linear regression, logistic regression, k-means, gradient boosting machine (GBM), XGBoost, decision trees, random forest
- Deep Learning Algorithms: RNN, CNN (ConvNet), LSTM, BERT, state-of-the-art neural network architectures
- Forecasting models: ARIMA, SARIMA
- Cloud-based Analytics Platforms: Azure, GCP, AWS, AWS SageMaker, Google Cloud
- Others: PySpark, SQL, HQL, NoSQL, MySQL, natural language processing, OpenCV, Tesseract-OCR, analytical skills, client relations, relationship building, problem solving, demand forecasting, interpersonal skills, information retrieval, data mining, consulting
- Operating Systems: Windows 10, UNIX, Linux
Functional Skills
- Experience in the insurance, automotive, supply chain, engineering services, and digital industry domains
- Hands-on experience in analytics delivery, through the entire life cycle of an analytics engagement
- Client-facing experience along with requirement understanding, UAT, and enhancements
- Onsite team engagement experience
- Engaged in client interactions and participated in technical workshops
- Developing models, mentoring junior team members, building models as per standards, creating intermediate documents, and getting the work approved by stakeholders as per plan
Awards
ACCENTURE Co-Innovation Contest, 2018
First-place winner in Accenture's innovation contest for designing an AI-based chatbot.
ACCENTURE Excellence Awards, 2018
Accenture Excellence Award winner.
International Karate Championship, 2017
Gold medal winner in Kata and Kumite.
Projects
Control Narrative system for Schneider Electric, 2020
Designed an information retrieval system using spaCy NLP that revealed new patterns and relations in technical design documents. Extracted knowledge based on contextual relation understanding using NLP techniques and word embeddings. Generated pseudo code from the extracted knowledge using parse trees and instructions. Extracted keywords for equipment and IO tags from design documents and built visualizations using the PySimpleGUI Python library.
Intelligent ETA Prediction (Real Time) for a Global Logistics Giant, 2019
Trained a gradient boosting machine to predict the estimated time of arrival of shipments with 80% accuracy. Trained K-means clustering to build country clusters for the entire globe, greatly reducing variance and overfitting. Tuned the gradient boosting machine, which improved prediction accuracy roughly 5 to 10 times over random forest and decision trees. Optimized Python & R code to generate 10 different country-level models.
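A minimal sketch of the two techniques named in this project, gradient boosting regression and K-means clustering, using scikit-learn on synthetic data (the features, targets, and parameters here are illustrative assumptions, not the original pipeline):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic shipment features: e.g. distance, carrier delay factor, customs time.
X = rng.uniform(0, 1, size=(500, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.05, 500)  # "ETA" in days

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gradient boosting machine for the ETA regression.
gbm = GradientBoostingRegressor(n_estimators=200, learning_rate=0.1, random_state=0)
gbm.fit(X_train, y_train)
print("R^2 on held-out data:", round(gbm.score(X_test, y_test), 3))

# K-means to group synthetic "country" coordinates into regional clusters.
coords = rng.uniform(-90, 90, size=(100, 2))  # lat/lon stand-ins
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(coords)
print("cluster sizes:", np.bincount(clusters))
```

In a per-cluster setup like the one described, a separate regressor would be fitted for each cluster's shipments.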
Chat bot for a Global Logistics Giant, 2018
Designed a voice- and text-enabled chatbot for automating routine tasks of the technical support team using reinforcement learning. Achieved strategic cost reduction of 2-3 times by deploying an ML- and NLP-based production-ready chatbot. Increased the efficiency of technical tasks by 30 times versus performing them manually by support engineers. Strategically reduced the number of client incident tickets by 10 percent after chatbot deployment. The customer-centric UI and the machine learning and NLP capabilities of the chatbot made it easier for non-technical users to access business reports and other logistics-related information. Provided customers with the option to expedite a delayed shipment so that it arrives earlier. Trained decision trees to predict delayed shipments with 90 percent accuracy and proposed alternative MOTs (modes of transport) and LSPs (carriers) to expedite the shipment.
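The delayed-shipment classifier mentioned in this project can be sketched with a scikit-learn decision tree on synthetic data (the features, labels, and tree depth below are illustrative assumptions, not the production model):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic features: e.g. route distance, weather severity, carrier load.
X = rng.uniform(0, 1, size=(400, 3))
# Label a shipment "delayed" (1) when the combined risk crosses a threshold.
y = ((X[:, 0] + X[:, 1]) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Shallow tree: interpretable splits are useful when proposing alternative
# modes of transport or carriers based on the predicted delay.
clf = DecisionTreeClassifier(max_depth=4, random_state=42)
clf.fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 3))
```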
HP Logistics, 2017
Technologies: Spark, Hadoop, Kafka, Spring Boot, SQL, Java, Scala, AWS Redshift, AWS S3, R, machine learning. Designed a Spark job using the Scala API to load JSON files from an FTP server into an RDD and push the JSON records to a Kafka topic. Designed Spark jobs to load streaming data from the Kafka topic into an RDD, applied transformations to achieve the desired business outcomes, and stored the results back into Redshift for further layers of processing.
ARIMA & LSTM time series models for S&P 500 data
Designed ARIMA & LSTM models to forecast the closing price and compared the performance of the two models using the RMSE metric. Applied a Gaussian filtering approach to smooth the dataset, which improved model performance. The Gaussian-filtered predictions reduced the RMSE by close to 100%. Average RMSEs for Gaussian-filtered data: ARIMA 10.98, LSTM 12.22.
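The Gaussian smoothing step in this project can be sketched in plain numpy (a self-contained illustration on synthetic data; the kernel width and sigma are assumptions, not the project's actual parameters):

```python
import numpy as np

def gaussian_kernel(width: int, sigma: float) -> np.ndarray:
    """Normalized 1-D Gaussian kernel."""
    x = np.arange(width) - (width - 1) / 2.0
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_smooth(series: np.ndarray, width: int = 11, sigma: float = 2.0) -> np.ndarray:
    """Smooth a 1-D series by convolving it with a Gaussian kernel."""
    return np.convolve(series, gaussian_kernel(width, sigma), mode="same")

rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 300)
# Noisy synthetic "closing price" series.
prices = np.sin(t) * 50 + 100 + rng.normal(0, 5, t.size)

smoothed = gaussian_smooth(prices)

# A naive persistence forecast (predict the previous value) scores a much
# lower RMSE on the smoothed series, since the noise is filtered out.
rmse_raw = np.sqrt(np.mean((prices[1:] - prices[:-1]) ** 2))
rmse_smooth = np.sqrt(np.mean((smoothed[1:] - smoothed[:-1]) ** 2))
print(rmse_raw, rmse_smooth)
```

In the project above, the smoothed series would be what the ARIMA and LSTM models are fitted on before comparing their RMSEs.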
Digit Recognizer, Image recognition using CNN
Developed a CNN-based deep learning model to detect digits in images with an improved accuracy of 0.98 and a Kaggle rank in the top 42%.
Sentiment analysis – IMDB movie reviews using BERT NLP
Predicted the sentiment of IMDB movie reviews as positive, negative, or neutral using a pre-trained BERT model with TensorFlow as the backend. Achieved an overall accuracy and AUC score of 0.86.
Cryptocurrency price prediction
Developed a deep learning model to predict Bitcoin and Ethereum market prices using LSTM. Trained an LSTM model with 20 neurons, 8 features, and 50 epochs, using linear activation. Achieved an MAE of 0.04 with the LSTM model.