Puneet Mathur has over 7 years of experience as a Storage and Backup Engineer, with expertise in HP storage arrays including 3PAR, EVA, and StoreVirtual, as well as EMC and NetApp storage and SAN switches. He holds a Bachelor's degree in Electronics and Communication. He currently works as a Storage and Backup Project Implementation Engineer at Hewlett-Packard on projects involving HP 3PAR, NetApp, and HP Data Protector. Previously he held similar roles at HCL Technologies and Mphasis, supporting storage infrastructure and performing backups, volume management, and troubleshooting.
Big Data, Simple and Fast: Addressing the Shortcomings of Hadoop – Hazelcast
In this webinar:
This talk identifies several shortcomings of Apache Hadoop and presents an alternative approach for quickly building simple and flexible Big Data software stacks, based on next-generation computing paradigms such as in-memory data/compute grids. The focus of the talk is on software architectures, but several code examples using Hazelcast will be provided to illustrate the concepts discussed.
We’ll cover these topics:
-Briefly explain why Hadoop is not a universal, or inexpensive, Big Data solution – despite the hype
-Lay out technical requirements for a flexible Big/Fast Data processing stack
-Present solutions thought to be alternatives to Hadoop
-Argue why In-Memory Data/Compute Grids are so attractive in creating future-proof Big/Fast Data applications
-Discuss how well Hazelcast meets the Big/Fast Data requirements vs Hadoop
-Present several code examples using Java and Hazelcast to illustrate concepts discussed
-Live Q&A Session
Presenter:
Jacek Kruszelnicki, President of Numatica Corporation
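The talk's own Hazelcast examples are not reproduced in this summary. As a minimal sketch of the in-memory data grid idea using Hazelcast's Java API (the map name and values are illustrative, not from the talk), an embedded cluster member and a distributed map take only a few lines:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

import java.util.Map;

public class GridSketch {
    public static void main(String[] args) {
        // Start an embedded cluster member; additional members started the
        // same way discover each other and partition the data automatically.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // A distributed map: entries live in memory, spread across members,
        // so there is no disk-bound batch step as in classic Hadoop jobs.
        Map<String, Long> pageViews = hz.getMap("page-views");
        pageViews.put("/home", 42L);
        System.out.println("views for /home: " + pageViews.get("/home"));

        hz.shutdown();
    }
}
```

Because every member holds a slice of the map in RAM, computation can be shipped to the data rather than the data to the computation, which is the core contrast with Hadoop's disk-oriented MapReduce.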
Webinar: Performance vs. Cost - Solving The HPC Storage Tug-of-War – Storage Switzerland
The HPC storage performance tier is well defined: scale-out solid-state storage systems. But the capacity tier is up for debate. Should you use a high-end NAS file system or make the switch to object storage? More importantly: how do you move data from the performance tier to the capacity tier without placing an additional burden on already overworked IT personnel?
We answer these questions and provide designs that solve the HPC storage tug-of-war in our webinar with Caringo. Listen as experts on HPC, NAS and Object Storage discuss the HPC storage challenge, debate the potential solutions and provide guidance on how to create the right architecture.
Best practices: running high-performance databases on Kubernetes – MariaDB plc
Databases benefit greatly from containerization in terms of performance, ease-of-deployment, and scalability. However, building a database-as-a-service (DBaaS) on Kubernetes without the right infrastructure can be a complex, time-consuming project where some database services have to be run outside of the cluster for the sake of leveraging persistent storage. This session offers up a global financial institution’s real-world account of how bare metal Kubernetes infrastructure can further enhance the performance of MariaDB’s innovative, load-balanced database services – and how the requisite persistent storage can be best provisioned, managed and backed up without service interruption or creating an additional burden for application owners and developers.
Transforming Data Architecture Complexity at Sears - StampedeCon 2013 – StampedeCon
At the StampedeCon 2013 Big Data conference in St. Louis, Justin Sheppard discussed Transforming Data Architecture Complexity at Sears. High ETL complexity and costs, data latency and redundancy, and batch window limits are just some of the IT challenges caused by traditional data warehouses. Gain an understanding of big data tools through the use cases and technology that enables Sears to solve the problems of the traditional enterprise data warehouse approach. Learn how Sears uses Hadoop as a data hub to minimize data architecture complexity – resulting in a reduction of time to insight by 30-70% – and discover “quick wins” such as mainframe MIPS reduction.
Developing Software for Persistent Memory / Willhalm Thomas (Intel) – Ontico
NVDIMMs provide applications the ability to access in-memory data that will survive reboots. This is a huge paradigm shift happening in the industry. Intel has announced new instructions to support persistence. In this presentation, we educate developers on how to take advantage of this new kind of persistent memory tier. Using simple practical examples [3] [4], we discuss how to identify which data structures are suited for this new memory tier and which are not. We provide developers a systematic methodology to identify how their applications can be architected to take advantage of persistence in the memory tier. Furthermore, we will provide basic programming examples for persistent memory and present common pitfalls.
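The talk's examples target Intel's new persistence instructions, which are not shown in this abstract. As a rough Java analogy only (a memory-mapped file rather than real NVDIMM hardware; the file name is made up), the same "store, then explicitly flush before trusting durability" discipline looks like this:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DurableCounter {
    public static void main(String[] args) throws IOException {
        try (FileChannel ch = FileChannel.open(Path.of("counter.dat"),
                StandardOpenOption.CREATE, StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {
            // Map 8 bytes of the file; loads and stores act on memory.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 8);

            long next = buf.getLong(0) + 1; // read the last persisted value
            buf.putLong(0, next);           // a plain store: not yet durable

            // Analogous to flushing a cache line (e.g. CLWB) plus a fence on
            // persistent memory: only after the explicit flush may the update
            // be considered crash-safe.
            buf.force();
            System.out.println("persisted counter = " + next);
        }
    }
}
```

The data-structure question the talk raises shows up even in this toy: a structure fits this tier only if it can be updated in an order where every flushed state is consistent after a crash.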
Big Data Day LA 2016 / Big Data Track - Apply R in Enterprise Applications, Lo... – Data Con LA
Prototypes are typically re-implemented in another language due to compatibility issues with R in the enterprise, but TIBCO Enterprise Runtime for R (TERR) allows the language to be run on several platforms. Enterprise-level scalability has been brought to the R language, enabling rapid iteration without the need to recode, re-implement and test. This presentation will delve further into these topics, highlighting specific use cases and the true value that can be gained from utilizing R. The session will be followed by a lively, open Q&A discussion.
Cassandra is well known for its best-in-class multi-DC replication. However, some advanced use cases require replicating data between clusters. Some of these use cases involve constrained deployment scenarios, such as limited bandwidth or frequent network disruptions, which can put significant strain on multi-DC setups. Other use cases include advanced data flows with hub-and-spoke configurations.
In this talk we will introduce you to DSE's new Advanced Replication capability that allows for these data flows in such constrained environments. We will provide some motivating use cases, an architecture overview, and some customer and testing results.
About the Speakers
Brian Hess Senior Product Manager, Analytics, DataStax
Brian has been in the analytics space for over 15 years ranging from government to data mining applied research to analytics in enterprise data warehousing and NoSQL engines, in roles ranging from Cryptologic Mathematician to Director of Advanced Analytics to Senior Product Manager. In all these roles he has pushed data analytics and processing to massive scales in order to solve problems that were previously unsolvable.
Cliff Gilmore Solution Architect, DataStax
This session will address Cassandra's tunable consistency model and cover how developers and companies should adopt a more Optimistic Software Design model.
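No code accompanies this session abstract. As a minimal sketch of tunable consistency with the DataStax Java driver (3.x API; the keyspace, table and contact point are illustrative), the consistency level can be chosen per statement:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class TunableConsistency {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {

            // QUORUM writes and QUORUM reads overlap on at least one replica,
            // so the read is guaranteed to observe the write.
            SimpleStatement write = new SimpleStatement(
                    "INSERT INTO shop.users (id, name) VALUES (1, 'ada')");
            write.setConsistencyLevel(ConsistencyLevel.QUORUM);
            session.execute(write);

            // An optimistic design can read at ONE instead, accepting briefly
            // stale data in exchange for lower latency and higher availability.
            SimpleStatement read = new SimpleStatement(
                    "SELECT name FROM shop.users WHERE id = 1");
            read.setConsistencyLevel(ConsistencyLevel.ONE);
            ResultSet rs = session.execute(read);
            System.out.println(rs.one().getString("name"));
        }
    }
}
```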
The flash market started out monolithically. Flash was a single media type (high-performance, high-endurance SLC flash), and flash systems had a single purpose: accelerating the response time of high-end databases. But now there are several flash options. Users can choose between high-performance flash and highly dense, medium-performance flash systems. At the same time, high-capacity hard disk drives are making a case to be the archival storage medium of choice. How does an IT professional choose?
Debunking Common Myths of Hadoop Backup & Test Data Management – Imanis Data
These slides are from a webinar where Hari Mankude, CTO at Talena, discussed key concepts associated with Hadoop data management processes around scalable backup, recovery and test data management.
From distributed caches to in-memory data grids – Max Alexejev
A brief introduction to modern caching technologies, ranging from distributed memcached to modern data grids like Oracle Coherence.
Slides were presented during a distributed caching tech talk in Moscow on May 17, 2012.
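As a single-JVM sketch of the get-or-load (cache-aside) pattern that both memcached deployments and data grids such as Coherence generalize across many nodes (the loader function here is a stand-in for a real backing store):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

public class CacheAside {
    private final ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> loader;

    CacheAside(Function<String, String> loader) {
        this.loader = loader;
    }

    String get(String key) {
        // On a miss, fetch from the slow source and remember the result;
        // later calls for the same key are served from memory.
        return cache.computeIfAbsent(key, loader);
    }

    public static void main(String[] args) {
        CacheAside c = new CacheAside(k -> "value-for-" + k); // fake database
        System.out.println(c.get("user:42")); // miss: invokes the loader
        System.out.println(c.get("user:42")); // hit: in-memory lookup
    }
}
```

A distributed cache moves this map out of the JVM onto shared servers; a data grid goes further by partitioning it across cluster members and letting you send computation to the entries.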
DataStax Training – Everything you need to become a Cassandra Rockstar – DataStax
Looking to strengthen your expertise of Cassandra and DataStax Enterprise? This DataStax Training Webinar provides an overview of what you need to get the most out of Cassandra and your DataStax Enterprise environment. Whether you’re a developer or administrator, novice or a Cassandra expert, there is a class that will meet your experience level and needs.
Webinar: End NAS Sprawl - Gain Control Over Unstructured Data – Storage Switzerland
The key to ending NAS sprawl is to fix the file system so it can offer cost-effective, scalable, high-performance storage. In this webinar, Storage Switzerland Lead Analyst George Crump, Quantum VP of Global Marketing Molly Rector, and Quantum StorNext Solution Marketing Senior Director Dave Frederick discuss the challenges facing the typical scale-out storage environment and what IT professionals should look for in solutions to eliminate NAS sprawl once and for all.
Engineering Machine Learning Data Pipelines Series: Streaming New Data as It ... – Precisely
Tackling the challenge of designing a machine learning model and putting it into production is the key to getting value back – and the roadblock that stops many promising machine learning projects. After the data scientists have done their part, engineering robust production data pipelines has its own set of challenges. Syncsort software helps the data engineer every step of the way.
Building on the process of finding and matching duplicates to resolve entities, the next step is to set up a continuous streaming flow of data from data sources so that as the sources change, new data automatically gets pushed through the same transformation and cleansing data flow – into the arms of machine learning models.
Some of your sources may already be streaming, but the rest are sitting in transactional databases that change hundreds or thousands of times a day. The challenge is that you can't affect the performance of data sources that run key applications, so putting something like database triggers in place is not the best idea. Using Apache Kafka or similar technologies as the backbone for moving data around doesn't solve the problem of grabbing changes from the source, pushing them into Kafka, and consuming the data from Kafka to be processed. If something unexpected happens – like connectivity being lost on either the source or the target side – you don't want to have to fix it or start over because the data is out of sync.
View this 15-minute webcast on-demand to learn how to tackle these challenges in large scale production implementations.
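The webcast's implementation is not included in these notes. As a hedged sketch of just the "push changes into Kafka" step using the plain Apache Kafka Java producer (the broker address, topic name, key and JSON payload are placeholders), a captured change event might be published like this:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ChangePublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // acks=all: the broker acknowledges only once the event is replicated,
        // so a dropped connection cannot silently lose a change.
        props.put("acks", "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keying by the row's primary key keeps all changes to one row in
            // order within a single partition.
            ProducerRecord<String, String> event = new ProducerRecord<>(
                    "db-changes", "orders:1001",
                    "{\"op\":\"UPDATE\",\"status\":\"shipped\"}");
            producer.send(event); // asynchronous; close() flushes on exit
        }
    }
}
```

Capturing the changes themselves without triggers is the harder half of the problem, typically addressed with log-based change data capture at the source database.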
Mtc learnings from isv & enterprise (dated - Dec 2014) – Govind Kanshi
This is a slightly dated deck of our learnings – I keep getting multiple requests for it. I have removed one slide on access permissions (RBAC, which is now available).
Mtc learnings from isv & enterprise interaction – Govind Kanshi
This is another dated presentation that I keep getting requests for; please do reach out to me for the status of various things, as Azure keeps fixing and innovating a whole set of things every day.
There are a bunch of other things I can help you with to ensure you can take advantage of the Azure platform for OSS, .NET frameworks and databases.
Spanish has several varieties and is widely spoken all over the world. I try to explain the differences between the Spanish used in Spain and the Spanish used in South America.
Puneet Mathur
Email: puneet_mathur20@yahoo.com
Mobile: +91-8971006662
Career Objective:
To work in an environment that offers ample opportunities to explore and values that add to my skills and knowledge, and to constantly strive for new work with greater responsibility and prove my choices right.
Work Experience:
• Storage and Backup Engineer with 7+ years of valuable experience with Mphasis (An HP Company), HCL Technologies Ltd. and Hewlett-Packard.
Educational Background:
• B.E. in Electronics and Communication, University of Rajasthan (2008), with Honors.
Technical Skills:
• HP Storage: HP EVA, HP 3PAR, HP LeftHand P4000 [now StoreVirtual] (Expert).
• EMC Storage: VNX, VMAX, SYMCLI, Unisphere (Intermediate).
• NetApp Storage: NetApp 2000 and 3000 series filers (Expert).
• SAN Switches: Brocade 3000 and 4000 series, Cisco MDS (Expert).
• Storage Software Tools: NetApp FilerView, OnCommand System Manager and OnCommand Protection Manager.
• HP Data Protector (Expert).
• VMware: Familiar with basic ESXi 5.1 administration.
• Certified at Associate level in VCE Vblock Systems Foundation.
Technical Competencies:
• Implement/configure 3PAR and NetApp appliances in a SAN environment.
• Support and administration of iSCSI/FC SAN storage using LUN mapping on UNIX and Windows based systems.
• Support and administration of NFS/CIFS NAS storage.
• Volume and qtree creation, quota management, NFS and CIFS share administration and management.
• Managing space reservation, fractional reserve and space guarantees for error-free write operations.
• Provisioning LUNs according to client requirements.
• HP Data Protector management.
• Implement/configure HP Data Protector.
• Manage EMC arrays through SYMCLI and Unisphere.
• Performing SAN Copy migrations from CX to VNX.
Storage and Backup Administrator
Professional Experience:
Hewlett-Packard (Nov’13 to till now) Storage and Backup
Project Implementation Engineer
• Work with the design team to understand customer requirements.
• Work on customer inputs and contribute towards a successful solution.
• Conduct POCs to validate customer requirements in the solution and obtain POC sign-off.
• Find workarounds for issues and risks in the solution.
• Work on automation workflows for Storage and Backup automation in cloud solutions.
• Understand the virtualization used in the cloud solution.
• Implement projects mainly on HP 3PAR, NetApp and HP Data Protector.
• Review the low-level design for the project with the design team; plan and implement the project.
• Take care of all migrations, stand-ups and upgrades on NetApp filers.
• Prepare the implementation plan for the project.
• Hand over the infrastructure to run operations after implementation, with proper KT and documentation.
• Escalate risks and design mistakes to get them rectified or approved by the customer.
• Mentor other team members on cross-technology skills.
HCL Technologies (June’11 to till 25th
Oct’13) Storage and
Backup Admin
• Supporting HP Data Protector, 3PAR and NetApp storage infrastructure for customers.
• Performing daily storage checks for critical hardware events.
• Installation, configuration and troubleshooting of backups in HP Data Protector.
• Configuring Oracle, SAP & SQL database and NDMP backups and restores.
• Configuring and managing NDMP backups from NetApp filers.
• Managing the HP storage in the environment and working on performance or hardware failures with the OEM.
• Maintaining different libraries such as HP ESL and MSL libraries and the HP StorageWorks Virtual Library configured on SAN.
• Maintaining the Data Protector Internal Database (IDB).
• Managing aggregates and volumes, and snapshot management.
• Working on OSSV via Protection Manager and error/backup monitoring.
• Escalating identified issues and opening cases with the vendor for speedy solutions.
• Following up with NetApp on open vendor cases for troubleshooting purposes.
• Management of over 16 NetApp nodes.
• Volume and qtree creation, quota management, NFS and CIFS share administration and management.
Mphasis, An HP Company (Jan '09 – June '11) – Storage and Backup Admin
Key Deliverables: NetApp Storage Platform
• Supporting HP Data Protector and NetApp storage infrastructure for customers.
• Media management.
• Configuring libraries.
• Attending Change Advisory Board meetings.
• Performing daily storage checks for critical hardware events.
• Performing technical troubleshooting and issue resolution.
• Creating and managing aggregates and volumes, and snapshot management.
• Working on scheduled activities such as rebooting filers and upgrading firmware.
• Escalating identified issues and opening cases with the vendor for speedy solutions.
• Incident management following ITIL practices.
• Creating and maintaining procedural documentation, reports, templates and a knowledge base.
• Following up with NetApp on open vendor cases for troubleshooting purposes.
• Working on daily incident/service requests for allocation/deallocation on filers.
• Volume and qtree creation, quota management, NFS and CIFS share administration and management.
• Worked on Data ONTAP 7.2.x, 7.3.x and 8.0.1 7-Mode.
• Planning/execution of file system migrations using SnapMirror.
• Upgrades of ONTAP, disk shelf/disk firmware, and SP and RLM firmware on FAS appliances.
• Creating and configuring SAN zoning on Brocade fabrics.
• Working on performance issues with TSE; collection/upload of perfstat and core files.
• Performing diagnostic checks on NetApp hardware.
• Batch and shell scripting.
Achievements:
• Received the President's Quality Award certificate at HP.
• Received the High-Flyer award multiple times for performance on Storage and Backup projects.
Personal Dossier:
Date of Birth: 28th June 1987
Languages known: English, Hindi
Nationality: Indian
Local Address: FF2, 141, 6th Cross, Celebrity Layout, Electronic City, Bangalore, 560100
Permanent Address: 23/217A, Chopasni Housing Board, Jodhpur, Rajasthan, 342008
Place: Bangalore
Date: 25-Aug-2016 (Puneet Mathur)