Sunil Kumar has over 3 years of experience as a software developer in Java and NoSQL technologies. He has worked on projects involving web and mobile applications for domains including telecom and transportation. His technical skills include Java, Python, PHP, JavaScript, MySQL, PostgreSQL, Cassandra, and tools such as Eclipse and NetBeans. He is looking for new opportunities as a software developer.
Actively seeking a full-time Software Engineering position. I have experience with Java, JavaScript, HTML/CSS, REST web services, the AWS cloud, and mobile application development.
Building Streaming And Fast Data Applications With Spark, Mesos, Akka, Cassan... (Lightbend)
It’s become clear to many businesses that the ability to extract real-time actionable insights from data is not only a source of competitive advantage but also a way to defend their existing business models from disruption. So while legacy models such as nightly batch jobs aren’t disappearing, an era of fast, streaming data (aka “Fast Data”) is upon us. It represents the state of the art for gaining real-time, perishable insights that can be used to serve existing customers better, acquire new markets, and keep the competition at bay.
That said, distributed Fast Data architectures are much harder to build and carry their own set of challenges. Enterprises looking to move quickly face a growing ecosystem of technologies, which often delays decisions and introduces risks of its own:
* With so many choices, what tools should you use?
* How do you avoid making rookie mistakes?
* What are the best patterns and practices for streaming applications?
In this webinar with Sean Glover, Senior Consultant at Lightbend and industry veteran, we examine the rise of streaming systems built around Spark, Mesos, Akka, Cassandra, and Kafka, and their role in handling endless streams of data to gain real-time insights. Sean then reviews how the Lightbend Fast Data Platform (FDP) brings them together in a comprehensive, easy-to-use, integrated platform that includes installation, integration, and monitoring tools tuned for various deployment scenarios, plus sample applications.
Disrupting Risk Management through Emerging Technologies (Databricks)
In financial markets, credit card companies always need to measure risk optimally and understand the performance of products before investing and making strategic decisions.
Effective AIOps with Open Source Software in a Week (Databricks)
Classic event, incident, problem, and change management are ITSM practices that are being integrated with DevOps/SRE and ML through a competency known as AIOps. Large streams of data generated by logs, metrics, and traces are organized and processed with machine learning algorithms to extract insights into anomalies in system behavior that could be impacting end users and business transactions. Businesses cannot afford to see their end users impacted by those anomalies, and therefore want to proactively predict the likelihood of system regressions and take corrective action long before any material impact.
In this talk, we show how simple and multivariate linear regression can be used to predict the likelihood of system behavior deviating by one or two standard deviations (sigma). We show how FOSS tools can make these predictions using decision trees integrated with streaming and observability platforms such as Apache Flink, Apache Beam, Prometheus, and Grafana, which make it much easier to visualize alerts and triage back to root cause analysis. These systems are backed by Kafka for its streaming and distributed computing capabilities, partitioning the data for staged analyses, some of which can run in parallel or concurrently depending on the use case. We present a fully integrated architecture that helps you realize a commercial-grade AIOps capability without licensing expensive software products. This open architecture lets you implement ML algorithms as needed and is agnostic to programming languages and tools.
The talk combines these techniques with demos and is aimed at practicing engineers and developers who are familiar with ML.
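As a rough, self-contained illustration of the sigma thresholding described above (our own sketch, not the speakers' implementation), the following fits a linear trend to a metric series and flags samples whose residual exceeds two standard deviations:

```python
# Hedged sketch: flag samples whose residual from a fitted linear trend
# exceeds n sigma. Function names and data are illustrative only.
from statistics import mean, pstdev

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    mx, my = mean(xs), mean(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def sigma_anomalies(ys, n_sigma=2.0):
    """Indices of samples deviating more than n_sigma from the trend."""
    xs = range(len(ys))
    a, b = fit_line(xs, ys)
    residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
    sigma = pstdev(residuals)
    return [x for x, r in zip(xs, residuals) if abs(r) > n_sigma * sigma]

# Latency samples trending gently upward, with one spike at index 6.
latencies = [10, 11, 12, 13, 14, 15, 40, 17, 18, 19]
print(sigma_anomalies(latencies))  # [6]
```

In the talk's setting the series would come from metrics or log-derived features streamed through platforms like Flink or Kafka, but the thresholding logic is the same.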
Risk Management in Retail with Stream Processing (Daniel Jagielski, Virtuslab...) (HostedbyConfluent)
Every two seconds, another person becomes a victim of identity theft, and the number of online account takeovers is constantly increasing. In this talk we'll show how stream processing was used to combat this at Tesco, one of Europe's largest retailers. The massive scale of e-commerce makes it an attractive target for malicious users. We implemented a risk-management platform built around Kafka and the Confluent Platform to detect and prevent attacks, including those that come through the website's authentication page. We'll present how this project evolved over two years to its current state in production, together with some of the challenges we encountered along the way. As the project has gone through several phases, we will compare alternative designs, summarize their pros and cons, and relate them to well-known techniques such as Event Sourcing. We'll discuss the architecture and integration with external systems before moving on to a detailed examination of the stream processors' implementation and key internals such as co-partitioning of data. We'll also cover the stack components we used, including Kafka Connect and Schema Registry, as well as the deployment platform, Kubernetes. Throughout the talk we will emphasize key factors to consider when designing data pipelines and a stream processing platform.
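Co-partitioning, mentioned above, means that records sharing a key land in the same partition of every topic involved in a join; this holds only when the topics use the same partitioner and the same partition count. A toy sketch of the idea (a byte-sum stands in for Kafka's actual murmur2 hash; key names are illustrative):

```python
# Toy, deterministic stand-in for Kafka's default partitioner.
# Real clients hash the key bytes with murmur2; the modulo step is the same.
def partition_for(key: str, num_partitions: int) -> int:
    return sum(key.encode()) % num_partitions

keys = ["user-1", "user-5", "user-9"]

# Same partitioner + same partition count: records for a given key land in
# the same partition of both topics, so a keyed join can run partition-locally.
assert all(partition_for(k, 6) == partition_for(k, 6) for k in keys)

# Different partition counts break co-partitioning: the same key can map to
# different partitions, and a partition-local join would miss matches.
print([partition_for(k, 6) for k in keys])  # [1, 5, 3]
print([partition_for(k, 4) for k in keys])  # [1, 1, 1]
```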
INTERFACE, by apidays - Apache Cassandra now speaks developer with Stargate ... (apidays)
INTERFACE, by apidays 2021 - It’s APIs all the way down
June 30, July 1 & 2, 2021
Apache Cassandra now speaks developer with Stargate: Rethinking database APIs
Ash Ryan Arnwine, Developer Experience Architect at Datastax
Operationalizing Edge Machine Learning with Apache Spark with Nisha Talagala ... (Databricks)
Machine Learning is everywhere, but translating a data scientist’s model into an operational environment is challenging for many reasons. Models may need to be distributed to remote applications to generate predictions, or, in the case of re-training, existing models may need to be updated or replaced. Monitoring and diagnosing such configurations requires tracking many variables (performance counters, models, ML-algorithm-specific statistics, and more).
In this talk we will demonstrate how we have attacked this problem for a specific use case: edge-based anomaly detection. We will show how Spark can be deployed in two types of environments (on edge nodes, where ML predictions can detect anomalies in real time, and on a cloud-based cluster, where new model coefficients can be computed on a larger collection of available data). To make this solution practically deployable, we have developed mechanisms to automatically update the edge prediction pipelines with new models, regularly retrain at the cloud instance, and gather metrics from all pipelines to monitor, diagnose, and detect issues with the entire workflow. Using SparkML and Spark Accumulators, we have developed an ML pipeline framework capable of automating such deployments, and a distributed application monitoring framework to aid in live monitoring.
The talk will describe the problems of operationalizing ML in an edge context, our approaches to solving them, and what we have learned, and will include a live demo of our approach using anomaly detection ML algorithms in SparkML and others (clustering, etc.) on live data feeds. All datasets and outputs will be made publicly available.
Risk Management in Retail with Stream Processing (Confluent)
Risk Management in Retail with Stream Processing, Daniel Jagielski, Tech lead/Software Development Manager, VirtusLab/Tesco
Meetup link: https://www.meetup.com/Warsaw-Kafka-meetup/events/274900733/
Sunil Kumar
+91 9899446933
sk.yadav90@gmail.com
PROFESSIONAL SUMMARY
• 3 years and 4 months of software development experience with Java technologies.
• 1 year of software development experience with NoSQL databases and Java.
• 6 months of experience with SSRS (SQL Server Reporting Services).
• Currently working as a Software Developer at Vihaan Networks Limited.
• Proficient in Java, Python, PHP, JavaScript, jQuery, MySQL 5.x, PostgreSQL, NoSQL, and Cassandra.
• Involved in all phases of the software development life cycle: requirements, analysis, design, and implementation.
WORK EXPERIENCE
Company: Vihaan Networks Ltd., Delhi (Gurgaon)
Designation: Software Developer
Duration: Sep 2011 – Present
TECHNICAL SKILLS
Languages: Java, Python, PHP, JavaScript
Others: SQL, NoSQL, SNMP, PHP-XML, PHP-SOAP, HTML
Application Servers: Apache, IIS, GeoServer, WAMP
Databases: MySQL, PostgreSQL, Cassandra
Tools & IDEs: Eclipse, NetBeans, SQLyog, Visual Studio, Reporting Server
EDUCATIONAL QUALIFICATION
• Bachelor of Technology (B.Tech), Rajasthan Technical University, 2011
PROJECT SUMMARY:
OMC-Web Client
Module: Topology
Synopsis: The user can perform various operations from the Topology module of the OMC-Client: create, modify, delete, display (maps or views), and query objects; view the chassis view of a particular node; create, modify, delete, and display links between nodes; change the ‘Link Type’ from Unsupported to Supported (not supported as of now); and move a view node.
Domain: NMS (Telecom)
Technologies: Java, PostgreSQL, ActiveMQ, jQuery, JavaScript, HTML, CSS, Bootstrap
Role: Software Developer
Responsibilities: • Designed the controller and service layers and was responsible for end-to-end realization of the given requirements without breaking any existing functionality across the OMC; database work and design.
• Front-end lead.
Control Center
Synopsis: Control Center is project management software designed to assist a project manager in developing a plan, assigning resources and tasks to the people concerned, tracking project progress, managing the budget, and analyzing workloads. The project's backend is the Microsoft Project planning tool and PostgreSQL.
Domain: Telecom (BSNL)
Technologies: Java, Servlets, PostgreSQL, jQuery, JavaScript, HTML, CSS, Microsoft Project planning tool
Role: Software Developer
Responsibilities: • Requirements gathering, analysis, and development.
• Database work and design.
• Front-end and back-end development.
Connection Bridge
Synopsis: A bridge between Cassandra and MySQL. The bridge captures data from MySQL once it is more than three months old and moves it into Cassandra, and serves reporting requests when an application asks for data older than three months.
Technologies: Java, Cassandra, NoSQL, Hector, CQL, MySQL
Tools: NetBeans, Cassandra Cluster Admin
Role: Software Developer
Responsibilities: • Requirements gathering, analysis, and understanding the existing code.
• Responsible for the application and DB architecture.
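The bridge's three-month cutoff rule can be sketched as a pure function (a hedged illustration with made-up field names; the real system read from MySQL and wrote to Cassandra via Hector/CQL):

```python
# Hedged sketch of the bridge's core rule: rows more than ~3 months old
# move to the archive store (Cassandra); newer rows stay in MySQL.
# Field names ("id", "created") are illustrative, not from the original.
from datetime import date, timedelta

THREE_MONTHS = timedelta(days=90)  # approximation of "3 months older"

def split_for_archive(rows, today):
    """Partition rows into (keep_in_mysql, move_to_cassandra)."""
    cutoff = today - THREE_MONTHS
    keep = [r for r in rows if r["created"] >= cutoff]
    archive = [r for r in rows if r["created"] < cutoff]
    return keep, archive

rows = [
    {"id": 1, "created": date(2015, 1, 10)},
    {"id": 2, "created": date(2015, 5, 1)},
]
keep, archive = split_for_archive(rows, today=date(2015, 5, 15))
print([r["id"] for r in keep], [r["id"] for r in archive])  # [2] [1]
```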
PlanScape
Synopsis: A network planning tool that plans the telecom network between geographic locations. It uses Google Maps for the geographic locations, calculates the geographic area covered by the signals of different devices such as BSCs and MSCs, and also calculates the cost of the network.
Domain: Telecom
Technologies: PHP, MySQL, JavaScript, Google Maps
Role: Support/Software Developer
Responsibilities: • Requirements gathering, analysis, and understanding the existing code.
• Database and UI design.
• Development of forms, scripting, map integration, and BLL programming.
Bus Surveillance System
Synopsis: The Bus Surveillance System provides worldwide location-based asset monitoring through state-of-the-art hardware and our uniquely developed Bus Surveillance System software. We offer clients different types of hardware depending on requirements and environment: cellular based, satellite based, or a combination of the two.
Technologies: Java, JavaScript, jQuery, Google Maps API
Role: Software Developer
Responsibilities: • Requirements analysis, design, and development.
• Development of web forms and scripting.
• Google Maps integration and programming.
RATMS (Revenue Assurance & Tax Monitoring System)
Synopsis: A web-based application used to calculate the revenue and tax of telecom vendors. The application dashboard contains comparison charts of vendors' revenue and tax over a given time period. A module of the application uploads bulk telecom vendor data into the database, then analyzes, normalizes, and processes the data for reporting.
Technologies: C#, SSRS
Role: Software Developer
Responsibilities: Reporting
DMIS (Daily Management Information System)
Synopsis: A module developed in the open-source vtigerCRM (customer relationship management system) by vtiger.com, used to send each salesperson's superior a report on the previous day's work. It tracks the whole sales team and sends their reports via email.
Technologies: PHP, JavaScript, jQuery, PEAR PHP
Role: Developer
Responsibilities: Understanding the code and developing the whole module.
PERSONAL DETAILS:
Date of Birth : 27 Mar, 1990
Gender : Male
Marital Status : Single
Languages known : English, Hindi
CONTACT DETAILS:
Mobile : +91 9899446933
Email : sk.yadav90@gmail.com
Address : T-590/B-3, Street Number 3,
Baljeet Nagar,
New Delhi, 110008