This resume is for Brian Bills, a senior system administrator with over 10 years of experience managing IT infrastructure. He has extensive experience migrating data centers, implementing enterprise systems, and automating IT tasks through custom scripting. His skills include Linux, Windows, networking, storage, and virtualization technologies. Recent roles include positions at Mentor Graphics and Hewlett Packard, where he supported DevOps systems and end users as a desktop support technician and system administrator.
This document discusses Project Amaterasu, a tool for simplifying the deployment of big data applications. Amaterasu uses Mesos to deploy Spark jobs and other frameworks across clusters. It defines workflows, actions, and environments in YAML and JSON files. Workflows contain a series of actions like Spark jobs. Actions are written in Scala and interface with Amaterasu's context. Environments configure settings for different clusters. Amaterasu aims to improve collaboration and testing for big data teams through continuous integration and deployment of data pipelines.
Mukul Upadhyay is seeking a position in Big Data technology with an IT company. He has over 5 years of experience developing Hadoop applications and working with technologies like MapReduce, Hive, HBase, HDFS, and Sqoop. Some of his responsibilities include architecting Big Data platforms, developing custom MapReduce jobs, importing and exporting data between HDFS and relational databases, and tuning and monitoring Hadoop clusters. He has worked on projects for clients in the USA and India involving building Hadoop-based analytics platforms and processing terabytes of device log data.
This professional summary highlights the candidate's 20+ years of experience in network operations and data center management. He has led many projects involving network upgrades, virtualization initiatives, and data center consolidations. The candidate also has expertise in networking protocols, security, and various operating systems and platforms.
CERN's IT infrastructure is reaching its limits and needs to expand to support increasing computing capacity demands while maintaining a fixed staff size. CERN is addressing this by expanding its data center capacity through a new remote facility in Budapest, Hungary, and by adopting new open source configuration, monitoring and infrastructure tools to improve efficiency. Key projects include deploying OpenStack for infrastructure as a service, Puppet for configuration management, and integrating monitoring across tools. The transition will take place between 2012-2014 alongside LHC upgrades.
C19013010: The Tutorial to Build Shared AI Services, Session 2 (Bill Liu)
This document provides an agenda and overview for a tutorial on building shared AI services. The session will cover AI engineering platforms, data pipelines, traditional AI roles and their challenges, skills required for AI engineers, and benchmarking machine learning and deep learning approaches. It includes a live demo of building an end-to-end AI pipeline with Kafka, NiFi, Spark Streaming and Keras on Spark.
This document discusses using data virtualization to accelerate application projects by 50%. It outlines some common problems with physical data copies, such as bottlenecks, bugs due to old data, difficulty creating subsets, and delays. The document then introduces the concept of using a data virtualization appliance to take snapshots of production data and create thin clones for development and testing environments. This allows for fast, full-sized, self-service clones that can be refreshed quickly. Use cases discussed include improved development and testing workflows, faster production support like recovery and migration, and enabling continuous business intelligence functions.
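To see why thin clones can be both fast and full-sized, consider this minimal copy-on-write sketch; the Snapshot and ThinClone classes are illustrative stand-ins, not any vendor's API:

```python
# Minimal sketch of why thin clones are cheap: a copy-on-write view over a
# shared base snapshot. Illustrative only; not a real appliance's API.
class Snapshot:
    def __init__(self, blocks):
        # Immutable block map captured from "production" at snapshot time.
        self.blocks = dict(blocks)

class ThinClone:
    def __init__(self, snapshot):
        self.base = snapshot
        self.delta = {}          # only the blocks this clone has overwritten

    def read(self, block_id):
        # Reads fall through to the shared base unless locally overwritten.
        return self.delta.get(block_id, self.base.blocks.get(block_id))

    def write(self, block_id, data):
        # Writes never touch the base, so many clones can share one snapshot.
        self.delta[block_id] = data

prod = Snapshot({0: b"users", 1: b"orders"})
dev, test = ThinClone(prod), ThinClone(prod)
dev.write(1, b"orders-masked")
assert test.read(1) == b"orders"   # clones are isolated from each other
assert dev.read(0) == b"users"     # unchanged blocks are shared, not copied
```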
Handling Kernel Upgrades at Scale - The Dirty Cow Story (DataWorks Summit)
Apache Hadoop at Yahoo is a massive platform with 36 different clusters spread across YARN, Apache HBase, and Apache Storm deployments, totaling 60,000 servers made up of hundreds of different hardware configurations accumulated over generations, presenting unique operational challenges and a variety of unforeseen corner cases. In this talk, we will share methods, tips, and tricks for dealing with large-scale kernel upgrades on heterogeneous platforms within tight timeframes, with 100% uptime and no service or data loss, through the Dirty COW use case (a privilege escalation vulnerability found in the Linux kernel in late 2016).
We will dive deep into the three-phase approach that led to the eventual success of the program: pre-work, the kernel upgrade itself, and post-work/cleanup. We will share details on the automation tools, UIs, and reporting tools developed and used to achieve the stated objectives: 800+ server upgrades per hour, tracking upgrade progress, validating and reporting data blocks, and recovering quickly from bad blocks encountered. Throughout the talk, we will highlight the importance of process management, communicating with hundreds of customer teams to ensure they are on board and aware, and successful coordination tactics with SREs and Site Operations. We will also touch on some of the unique challenges we faced along the way, such as BIOS updates needed on over 20,000 hosts, and explain the rolling upgrade support we added to HBase and Storm to avoid service disruption to low-latency customers during these upgrades.
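As a concrete illustration of the orchestration loop such a program needs, here is a minimal Python sketch. The hooks (drain, reboot_with_new_kernel, hdfs_blocks_healthy, undrain) are hypothetical stand-ins for site-specific tooling, not Yahoo's actual tools:

```python
# Hypothetical sketch of a batched rolling kernel upgrade with per-host
# validation. All hooks below are placeholders for site-specific tooling.
import concurrent.futures

BATCH_SIZE = 800  # stated objective: 800+ server upgrades per hour

def drain(host): pass                       # stop HBase/Storm work on the host
def reboot_with_new_kernel(host): pass      # apply BIOS update + patched kernel
def hdfs_blocks_healthy(host): return True  # validate data blocks post-reboot
def undrain(host): pass                     # return the host to service

def upgrade_host(host):
    drain(host)
    reboot_with_new_kernel(host)
    if not hdfs_blocks_healthy(host):
        raise RuntimeError(f"bad blocks on {host}")
    undrain(host)

def rolling_upgrade(hosts):
    failed = []
    for i in range(0, len(hosts), BATCH_SIZE):
        with concurrent.futures.ThreadPoolExecutor(max_workers=64) as pool:
            futures = {pool.submit(upgrade_host, h): h
                       for h in hosts[i:i + BATCH_SIZE]}
            for fut, host in futures.items():
                try:
                    fut.result()
                except RuntimeError:
                    failed.append(host)  # recover bad blocks, retry later
    return failed

print(rolling_upgrade([f"node{i}" for i in range(10)]))  # -> []
```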
Monika Raghuvanshi is seeking a position as a Hadoop Administrator where she can apply her 7 years of experience in Hadoop and Unix administration. She has expertise in installing, configuring, and maintaining Hadoop clusters as well as ensuring security through Kerberos and SSL. She is proficient in Linux, networking, programming languages, and databases. Her experience includes projects with Barclays, GE Healthcare, Ontario Ministry of Transportation, and Nortel where she administered Hadoop and Unix systems.
Diagnosability versus The Cloud, Redwood Shores, 2011-08-30 (Cary Millsap)
In our increasingly virtualized environments, it's ever more difficult to diagnose application defects—especially performance defects that affect response time or throughput expectations. Runtime diagnosis of defects can be an unbearably complicated problem to solve once the application is sealed up and put into production use. But having excellent runtime diagnostics is surprisingly easy if you design the diagnostic features into the application from its inception, as it is being grown, like you would with any other desired application feature.
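A minimal sketch of what "designing diagnostics in from inception" can look like in practice, here as a Python timing decorator applied at each business-operation boundary (the example is ours, not Millsap's code):

```python
# Instrument response time at the call boundary from day one, rather than
# bolting tracing on after the application is sealed up in production.
import functools, logging, time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("diag")

def traced(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            # Every business operation reports where the time went.
            log.info("%s took %.1f ms", fn.__name__, elapsed_ms)
    return wrapper

@traced
def book_order(order_id):
    time.sleep(0.05)  # stand-in for real work
    return order_id

book_order(42)  # INFO:diag:book_order took ~50 ms
```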
Hadoop & DevOps : better together by Maxime Lanciaux.
From deployment automation with tools (like jenkins, git, maven, ambari, ansible) to full automation with monitoring on HDP2.5+.
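As one hedged example of the monitoring end of such a pipeline, the sketch below polls Ambari's REST API (available on HDP) for services that are not in the STARTED state after an automated deployment; the host, cluster name, and credentials are assumptions for illustration:

```python
# Poll Ambari's REST API for unhealthy services after a deployment.
# AMBARI, CLUSTER, and the credentials are illustrative assumptions.
import requests

AMBARI = "http://ambari.example.com:8080"
CLUSTER = "hdp25"

def unhealthy_services():
    resp = requests.get(
        f"{AMBARI}/api/v1/clusters/{CLUSTER}/services?fields=ServiceInfo/state",
        auth=("admin", "admin"),
        headers={"X-Requested-By": "ambari"},
        timeout=10,
    )
    resp.raise_for_status()
    return [s["ServiceInfo"]["service_name"]
            for s in resp.json()["items"]
            if s["ServiceInfo"]["state"] != "STARTED"]

if __name__ == "__main__":
    bad = unhealthy_services()
    # A Jenkins stage could fail the deployment when this list is non-empty.
    print("unhealthy:", bad or "none")
```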
Shabeer K is a system administrator currently working at Hamad Medical Corporation in Doha, Qatar. He has over 15 years of experience in storage administration including NetApp, HP 3PAR, and Hitachi storage systems. He is proficient in Windows server administration, VERITAS NetBackup, and Linux. He holds professional certifications in Cisco CCNA, ITIL, Citrix CCA, and multiple NetApp certifications.
Nalini Kanta Sahoo is a senior software engineer with over 6 years of experience in application development using Hadoop, Oracle PL/SQL, Python, and related technologies. He has extensive expertise in data analytics, data migration, and cluster administration. Notable projects include developing ETL processes and complex queries in Hive for an IADP monitoring tool, migrating data and metadata from Teradata to Hadoop clusters, and automating Unix scripts for monitoring and reporting. Sahoo is proficient in various programming languages, databases, and tools and has managed several projects for clients such as Apple and BP.
The SQLT utility provides concise summaries of SQL performance and plans. It works by calling the SQL Tuning Advisor and Trace Analyzer to analyze execution plans, profiles, and trace files. The utility outputs comprehensive HTML reports on configuration findings, recommendations, and metadata for troubleshooting SQL performance issues.
Alan Resume Release Management 16NOV2016 (Alan Williams)
This document is a resume for K. Alan Williams seeking a position as a Release/Project Manager. It summarizes his technical skills, including experience with various software, platforms, and certifications. It also outlines his soft skills and details his professional experience in Release/Configuration Management roles at Kaplan University and as a Senior Systems Engineer at Courtesy Computers, Inc., highlighting his responsibilities and accomplishments in managing releases, environments, and infrastructure changes across multiple platforms.
Kevin Blandford has over 10 years of experience providing third line technical support and server administration. His skills include Windows server administration, Exchange administration, virtualization, Active Directory administration, and hardware installation and maintenance. He has a SC security clearance and certifications in Prince2, Windows server, and networking. He is looking for an infrastructure support role where he can utilize his experience administering Windows servers, Exchange, and supporting end users.
DevOps for Big Data - Data 360 2014 Conference (Grid Dynamics)
This document discusses implementing continuous delivery for big data applications using Hadoop, Vertica, and Tableau. It describes Grid Dynamics' initial state of developing these applications in a single production environment. It then outlines their steps to implement continuous delivery, including using dynamic environments provisioned by Qubell to enable automated testing and deployment. This reduced risks and increased efficiency by allowing experimentation and validation prior to production releases.
Walk Through a Software Defined Everything PoC (MidoNet)
This document summarizes a proof of concept for a software defined everything architecture using OpenStack, Ceph, and Midonet. The objectives are to enable on-demand provisioning of resources, optimize efficiency through automatic balancing, provide isolation, maintain high availability and data consistency. The proof of concept leverages SDN, OpenStack, and software defined storage. It includes configuration details for Midonet, Ceph, and OpenStack services on the infrastructure. Lessons learned are discussed regarding each component's performance, resiliency, and operational challenges. Potential business benefits highlighted include rapid deployment, reduced costs, improved productivity and agility.
Lessons from Large-Scale Cloud Software at Databricks (Matei Zaharia)
1) Building cloud software presents unique challenges compared to on-premise software, such as the need for faster release cycles, upgrades without regressions, and multitenancy.
2) Scaling issues are a major cause of outages for cloud systems, including problems reaching resource limits and insufficient isolation between users.
3) Testing cloud systems requires evaluating how they scale and handling varying loads, and failures can indicate problems with dimensions like output size or number of tasks.
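As a toy illustration of point 3, the sketch below probes one scaling dimension (input size) and fails when cost grows superlinearly, hinting that a resource limit is being hit; the workload is a stand-in, not Databricks' code:

```python
# Test along a scaling dimension: run the same operation at growing input
# sizes and fail when latency stops growing roughly linearly.
import time

def system_under_test(n):
    return sum(i * i for i in range(n))  # placeholder workload

def test_scaling(sizes=(10_000, 100_000, 1_000_000)):
    timings = []
    for n in sizes:
        start = time.perf_counter()
        system_under_test(n)
        timings.append((n, time.perf_counter() - start))
    # Each 10x input should cost no more than ~20x time (generous bound).
    for (n1, t1), (n2, t2) in zip(timings, timings[1:]):
        assert t2 / max(t1, 1e-9) < 20, f"superlinear blow-up: {n1} -> {n2}"
    return timings

print(test_scaling())
```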
This document provides a summary of Keith Givens' work experience at AIG Retirement Services from March 2011 to February 2014. It lists 18 projects he worked on, including migrating hardware and software platforms, upgrading operating systems and databases, implementing new systems, and managing disaster recovery procedures. The projects involved tasks like moving systems to new hardware, reallocating disk space, retiring legacy platforms, and implementing new networking and security solutions.
Introducing Lenovo XClarity: Simplified Hardware Resource Management (Lenovo Data Center)
Lenovo XClarity is a virtualized application that helps speed deployment of Lenovo hardware systems while reducing manual intervention. XClarity provides a simplified graphical user interface, PowerShell tools, REST APIs, and out-of-the-box integrations with external higher-level software, allowing you to manage hardware systems on your terms.
This document provides a summary of James Machie's qualifications, including his contact information, work experience in IT roles focusing on VMware infrastructure and support, education and certifications in related technologies, and personal interests including martial arts. He has over 15 years of experience in systems administration, virtualization, and customer support roles.
This document provides an overview and technical details of Oracle NoSQL Database. It describes NoSQL databases as evolving from vertically integrated applications to modern web-scale architectures. It outlines key characteristics of NoSQL such as eventual consistency and high availability. The document then details Oracle NoSQL Database's data model, APIs, administration capabilities, and architecture. It positions Oracle NoSQL Database as suitable for large-scale, low-latency applications requiring simple key-value access.
Edo Koops is an experienced IT engineer with over 15 years of experience in various roles such as application administrator, server administrator, VMware specialist, Exchange specialist, and project coordinator. He has a broad set of technical skills including Windows server administration, Active Directory, Exchange, SQL, VMware, Azure, PowerShell, and security tools. Edo is highly skilled in complex infrastructure environments and enjoys a challenge. He is structured, communicative, and able to play a leading role in optimizing systems.
Catalogic ECX: Copy Data Management for InterSystems Caché and Epic EHR (Catalogic Software)
Copy Data Management is fast becoming a must-have solution for any Epic EHR environment. Catalogic ECX provides native application-aware integration with InterSystems Caché and Clarity databases (SQL), VMs, and file systems to automate the creation and use of Caché database copies (snapshots, clones, and replicas) on your existing enterprise storage infrastructure. This allows you to meet Epic protection and recovery requirements as well as provide quick, easy, and secure access to clones or full copies for development, testing, release, MDR, SUP, Build, and training environments. ECX supports Caché on Red Hat Linux (virtual and physical) and AIX.
It's Finally Here! Building Complex Streaming Analytics Apps in under 10 min w... (DataWorks Summit)
Imagine if you could build and deploy an end-to-end complex streaming analytics app on a streaming engine like Storm or Flink that does the following:
1. Joining Streams
2. Aggregations over Windows (Time or Count based)
3. Complex Event Processing
4. Pattern Matching
5. Model scoring.
Now imagine implementing and deploying this without writing a single line of code in under 10 mins.
Imagine no more; it is indeed here. In this talk, we will discuss an exciting open source project led by Hortonworks on building and deploying streaming applications using a drag and drop paradigm.
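For reference, here is roughly what one of those five pieces, a count-based tumbling window aggregation, looks like when written by hand; the drag-and-drop tool generates equivalent logic without any of this code:

```python
# Hand-written count-based tumbling window, for comparison with the
# no-code drag-and-drop approach described in the talk.
from collections import deque

class CountWindowSum:
    """Emit the sum of every `size` consecutive events (tumbling count window)."""
    def __init__(self, size):
        self.size = size
        self.buffer = deque()

    def on_event(self, value):
        self.buffer.append(value)
        if len(self.buffer) == self.size:
            total = sum(self.buffer)
            self.buffer.clear()
            return total          # window closes: emit the aggregate
        return None               # window still open

window = CountWindowSum(size=3)
for reading in [5, 7, 3, 10, 1, 4]:
    result = window.on_event(reading)
    if result is not None:
        print("window sum:", result)   # prints 15, then 15
```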
By upgrading from the legacy solution we tested to the new Intel processor-based Dell and VMware solution, you could do 18 times the work in the same amount of space. Imagine what that performance could mean to your business: Consolidate workloads from across your company, lower your power and cooling bills, and limit datacenter expansion in the future, all while maintaining a consistent user experience—the list of potential benefits is huge.
Try running DPACK, which can help you identify bottlenecks in your environment and inform you about your current performance needs. Then consider how the consolidation ratio we proved could be helpful for your company. The Intel processor-powered Dell PowerEdge R730 solution with VMware vSphere and Dell Storage SC4020, also powered by Intel, could be the right destination for your upgrade journey.
Stephanie Roberts has over 15 years of experience in monitoring and systems administration. She currently works as a Design and Implementation Engineer at CDK Global, where she is responsible for monitoring thousands of servers across multiple environments. Some of her key responsibilities include being a subject matter expert for monitoring tools like Dynatrace and Microsoft SCOM, creating monitoring configurations, assisting with application migrations, and providing training. Prior to her current role, she worked as a NOC Engineer and PC Support Specialist for ADP Dealer Services and Cobalt, where she monitored systems, created monitoring checks, and participated in outage analysis.
This document provides a summary of David Sailors' experience and qualifications. He has over 11 years of experience in change and configuration management supporting software builds, releases, and deployments across various environments and languages. He is proficient in many programming languages and tools related to configuration management, continuous integration/delivery, and infrastructure provisioning and maintenance. His work history includes senior roles supporting DevOps processes and engineering systems at two software companies.
Dean Hagen has over 22 years of experience in IT roles including 15 years of experience with UNIX/Linux systems. He has expertise in security auditing, firewall administration, web/application servers, virtualization, storage, and networking. His background includes roles as a solutions architect, senior cloud infrastructure engineer, technical lead, and senior technical support engineer.
Herman Stacy Jackson has over 30 years of experience in technical fields including programming, system administration, and management. He has expertise in languages like Java, Perl, C, SQL, and Visual Basic. He has also managed teams of technical support engineers and systems administrators. His system administration experience includes various UNIX platforms, Windows, VMware, NetApp storage systems, and tape backup systems.
The document summarizes the skills and experience of Olabintan V Akinsola as a Linux systems administrator. They have over 6 years of experience administering Linux, Windows, and Unix systems and applications such as Oracle, Apache, Tomcat, and Jboss. They are proficient in technologies including AWS, Ansible, Git, and programming languages like Bash, Ruby and YAML.
This document provides a summary of Samkumar Gandi's experience and qualifications. He has over 8 years of experience in Linux system administration, application server management, and virtualization. He is proficient in technologies like Apache, JBoss, Tomcat, MySQL, Oracle, and cloud platforms. He has worked on projects involving server setup, maintenance, performance tuning, security, and incident/problem management. He is certified in Red Hat technologies and holds qualifications like Bachelor of Technology in Computer Science.
- Kevin Slade is seeking a new IT role involving technical challenge, people contact, and making a positive contribution to a company in the Auckland region.
- He has over 30 years of experience in various IT roles including software development, testing, project management, and training/mentoring.
- His background includes managing teams, full software development life cycle experience, and skills in languages like C, C++, Perl, Java, and SQL.
Arkadiy Kogan has over 15 years of experience as a senior software engineer developing tools and applications to improve productivity and automation. He has a background in languages like Perl, Java, and databases like PostgreSQL and Oracle. The document provides details on his work history at EMC2 and ClearStory Systems where he developed custom applications, user interfaces, and databases to support testing, release management, and digital asset management systems.
This document provides a summary of Jason P Taylor's qualifications. He has over 12 years of experience as a senior Windows systems administrator managing VMware and Windows environments. He has extensive experience with IIS 8, Windows Server 2012/R2, vSphere 5.1, and SQL Server. He has supported various enterprise software deployments and upgrades. His experience includes roles at Charter Communications, Tesoro Oil and Gas, Synchronoss Technologies, and other companies.
This document is a resume for Giri Uppuganti that summarizes his experience and qualifications. It lists his contact information and over 18 years of experience in information technology with expertise in systems, storage, networking, virtualization, and operations management. It also provides details of his education and various technical certifications. The resume then outlines his professional experience with various employers, describing his roles and responsibilities in infrastructure engineering, systems engineering, and systems administration.
Fred McLain has over 15 years of experience as a software engineer and technical lead. He currently works at General Dynamics developing software for NASA's satellite communications systems. Previously he has worked on aircraft structural analysis tools at Boeing and developed open source accessibility tools for blind developers. He has extensive experience with Java, REST, distributed systems, and Agile development practices.
Ayanava Mitra is seeking a role utilizing their 4.5 years of experience in application/production support with various technologies including Linux, Unix, Windows administration, WebSphere, Tomcat, Apache, IIS. They currently work as a System Engineer at TCS providing middleware operations and support for Southern California Edison, including WebSphere, Tomcat, Apache administration and automation initiatives. They have a Bachelor's Degree in Computer Science.
• 4.5 years of IT experience in application/production support, covering Linux/Unix/Windows administration, Unix shell scripting, WebSphere, Tomcat administration, Apache web server, IIS, business analysis, and service management.
Ravi Banamigi is a senior technology specialist at Wells Fargo India Solutions seeking roles involving Linux, HPUX, and Solaris systems administration. He has over 8 years of experience in IT, networking, systems administration, and maintenance. The document provides details of his employment history, roles and responsibilities, skills, and education.
Chris Bucklin has over 18 years of experience in information security, networking, and systems administration. He currently holds a secret security clearance and manages security tools like McAfee HBSS and Assured Compliance Assessment Solution (ACAS) for the National Guard Bureau. He has extensive experience implementing security solutions, performing vulnerability assessments, and ensuring compliance with security standards.
Seeking a position as a Linux Administrator utilizing 6+ years of experience in multiple Linux & UNIX platforms, specializing in Red Hat Linux. Self-motivated, dedicated, and up to any task I am given.
Tarek Zanaty has over 20 years of experience in IT with a focus on database administration, development, and big data. He has extensive experience installing, configuring, and maintaining Hadoop clusters as well as tools like Hive, HBase, Kafka, and Spark. Currently he is a Hadoop Administrator at Sinai Medical Associates where he manages their Hadoop infrastructure.
Karen Buchanan has over 28 years of experience as an Oracle Database Administrator. She has experience administering Oracle databases from versions 7-12c on UNIX, Linux, and Windows platforms. Her experience includes database installation, configuration, backups, performance tuning, and patching. She is currently a senior Oracle DBA providing patching as a service for clients at Taos, Inc. Previously she served as an Oracle DBA at companies such as SSN, Epocrates, Cigna, Hewlett-Packard, and Rapidigm, where she supported databases ranging in size from 100MB to over 1TB.
This document contains a summary of Tanmay Mitra's skills and experience. He has over 7 years of experience working with virtualization technologies like VMware and Linux administration. Some of the projects he has worked on include managing T-Mobile's data center and implementing Ericsson products for them. He is looking for a new position that allows him to utilize his technical skills and experience in virtualization, Linux, and data center administration.
CoVi Luong has over a decade of experience in information technology with a focus on data storage solutions. They are seeking a position that leverages their expertise in storage area networks, network attached storage, servers, and major storage vendors like EMC. As a senior storage engineer, they have experience designing, implementing, and managing complex storage infrastructure and troubleshooting technical issues.
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
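To make the idea concrete, here is a hedged Python sketch of one plausible mutation operator, deleting a training phrase from an intent, over a deliberately simplified chatbot representation (the paper's actual operators and Eclipse tooling are richer than this):

```python
# One illustrative MuT operator for chatbot designs: delete a training
# phrase and check whether the test scenarios notice (kill the mutant).
import copy

chatbot = {
    "intents": {
        "book_flight": {"phrases": ["book a flight", "I need a plane ticket"]},
        "cancel": {"phrases": ["cancel my booking"]},
    }
}

def delete_phrase_mutants(bot):
    """Yield one mutant per (intent, phrase): the design minus that phrase."""
    for intent, spec in bot["intents"].items():
        for phrase in spec["phrases"]:
            mutant = copy.deepcopy(bot)
            mutant["intents"][intent]["phrases"].remove(phrase)
            yield f"DEL({intent!r}, {phrase!r})", mutant

def run_test_scenarios(bot):
    # Stand-in for executing the real user-chatbot interaction scripts:
    # here the "tests" only fail when an intent has no phrases left.
    return all(spec["phrases"] for spec in bot["intents"].values())

killed = sum(not run_test_scenarios(m) for _, m in delete_phrase_mutants(chatbot))
total = sum(1 for _ in delete_phrase_mutants(chatbot))
print(f"mutation score: {killed}/{total}")  # higher = stronger test scenarios
```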
Lee Barnes - Path to Becoming an Effective Test Automation Engineer.pdf (leebarnesutopia)
So… you want to become a Test Automation Engineer (or hire and develop one)? While there's quite a bit of information available about important technical and tool skills to master, there's not enough discussion around the path to becoming an effective Test Automation Engineer who knows how to add VALUE. In my experience this has led to a proliferation of engineers who are proficient with tools and building frameworks but have skill and knowledge gaps, especially in software testing, that reduce the value they deliver with test automation.
In this talk, Lee will share his lessons learned from over 30 years of working with, and mentoring, hundreds of Test Automation Engineers. Whether you’re looking to get started in test automation or just want to improve your trade, this talk will give you a solid foundation and roadmap for ensuring your test automation efforts continuously add value. This talk is equally valuable for both aspiring Test Automation Engineers and those managing them! All attendees will take away a set of key foundational knowledge and a high-level learning path for leveling up test automation skills and ensuring they add value to their organizations.
What is an RPA CoE? Session 2 – CoE Roles (DianaGray10)
In this session, we will review the players involved in the CoE and how each role impacts opportunities.
Topics covered:
• What roles are essential?
• What place in the automation journey does each role play?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
The Department of Veteran Affairs (VA) invited Taylor Paschal, Knowledge & Information Management Consultant at Enterprise Knowledge, to speak at a Knowledge Management Lunch and Learn hosted on June 12, 2024. All Office of Administration staff were invited to attend and received professional development credit for participating in the voluntary event.
The objectives of the Lunch and Learn presentation were to:
- Review what KM ‘is’ and ‘isn’t’
- Understand the value of KM and the benefits of engaging
- Define and reflect on your “what’s in it for me?”
- Share actionable ways you can participate in Knowledge Capture & Transfer
Getting the Most Out of ScyllaDB Monitoring: ShareChat's Tips (ScyllaDB)
ScyllaDB monitoring provides a lot of useful information. But sometimes it’s not easy to find the root of the problem if something is wrong or even estimate the remaining capacity by the load on the cluster. This talk shares our team's practical tips on: 1) How to find the root of the problem by metrics if ScyllaDB is slow 2) How to interpret the load and plan capacity for the future 3) Compaction strategies and how to choose the right one 4) Important metrics which aren’t available in the default monitoring setup.
Dandelion Hashtable: beyond billion requests per second on a commodity server (Antonios Katsarakis)
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state of the art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. On a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
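As a rough intuition for the closed-addressing side of the design, the toy Python below uses short bounded chains per bucket (standing in for cache-line-sized groups of slots), so deletes free a slot instantly and a lookup touches a single bucket; the lock-freedom, prefetching, and parallel resizing of the real DLHT are not modeled here:

```python
# Toy bounded-chain closed-addressing table. Illustrates slot reuse on
# delete and single-bucket lookups only; not DLHT's lock-free C design.
SLOTS_PER_BUCKET = 7  # stand-in for one cache line of key/value slots

class BoundedChainTable:
    def __init__(self, n_buckets=1024):
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for slot in bucket:
            if slot[0] == key:
                slot[1] = value
                return
        if len(bucket) >= SLOTS_PER_BUCKET:
            raise MemoryError("bucket full: a resize would be triggered here")
        bucket.append([key, value])

    def get(self, key):
        for k, v in self._bucket(key):  # one bucket probed per request
            if k == key:
                return v
        return None

    def delete(self, key):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = bucket[-1]  # slot is reusable immediately,
                bucket.pop()            # unlike tombstoned open addressing
                return True
        return False

t = BoundedChainTable()
t.put("a", 1); t.delete("a")
assert t.get("a") is None
```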
In our second session, we shall learn all about the main features and fundamentals of UiPath Studio that enable us to use the building blocks for any automation project.
📕 Detailed agenda:
Variables and Datatypes
Workflow Layouts
Arguments
Control Flows and Loops
Conditional Statements
💻 Extra training through UiPath Academy:
Variables, Constants, and Arguments in Studio
Control Flow in Studio
The Microsoft 365 Migration Tutorial For Beginner.pptx (operationspcvita)
This presentation will help you understand the power of Microsoft 365 and walks through every productivity app included in Office 365. It also covers common Office 365 migration scenarios and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
Must-Know Postgres Extensions for DBA and Developer during Migration (Mydbops)
Mydbops Opensource Database Meetup 16
Topic: Must-Know PostgreSQL Extensions for Developers and DBAs During Migration
Speaker: Deepak Mahto, Founder of DataCloudGaze Consulting
Date & Time: 8th June | 10 AM - 1 PM IST
Venue: Bangalore International Centre, Bangalore
Abstract: Discover how PostgreSQL extensions can be your secret weapon! This talk explores how key extensions enhance database capabilities and streamline the migration process for users moving from other relational databases like Oracle.
Key Takeaways:
* Learn about crucial extensions like oracle_fdw, pgtt, and pg_audit that ease migration complexities (a short enablement sketch follows this list).
* Gain valuable strategies for implementing these extensions in PostgreSQL to achieve license freedom.
* Discover how these key extensions can empower both developers and DBAs during the migration process.
* Don't miss this chance to gain practical knowledge from an industry expert and stay updated on the latest open-source database trends.
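As referenced above, here is a minimal sketch of enabling such extensions from Python with psycopg2; the connection string is an assumption, pg_audit ships under the extension name pgaudit, and extensions like oracle_fdw need further server-side setup before they are usable:

```python
# Enable the extensions named in the talk on a PostgreSQL database.
# The server must already have each extension installed; some (e.g.
# pgaudit) also require shared_preload_libraries configuration.
import psycopg2

EXTENSIONS = ["oracle_fdw", "pgtt", "pgaudit"]  # pg_audit ships as "pgaudit"

conn = psycopg2.connect("dbname=migration user=postgres host=localhost")
conn.autocommit = True
with conn.cursor() as cur:
    for ext in EXTENSIONS:
        cur.execute(f'CREATE EXTENSION IF NOT EXISTS "{ext}";')
        print(f"enabled {ext}")
conn.close()
```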
Mydbops Managed Services specializes in taking the pain out of database management while optimizing performance. Since 2015, we have been providing top-notch support and assistance for the top three open-source databases: MySQL, MongoDB, and PostgreSQL.
Our team offers a wide range of services, including assistance, support, consulting, 24/7 operations, and expertise in all relevant technologies. We help organizations improve their database's performance, scalability, efficiency, and availability.
Contact us: info@mydbops.com
Visit: https://www.mydbops.com/
Follow us on LinkedIn: https://in.linkedin.com/company/mydbops
For more details and updates, please follow the links below.
Meetup Page : https://www.meetup.com/mydbops-databa...
Twitter: https://twitter.com/mydbopsofficial
Blogs: https://www.mydbops.com/blog/
Facebook(Meta): https://www.facebook.com/mydbops/
Northern Engraving | Nameplate Manufacturing Process - 2024 (Northern Engraving)
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
inQuba Webinar: Mastering Customer Journey Management with Dr Graham Hill (LizaNolte)
This is the recording of the webinar 'Mastering Customer Journey Management with Dr. Graham Hill'. We hope you find it both insightful and enjoyable.
In this webinar, we explored essential aspects of Customer Journey Management and personalization. Here’s a summary of the key insights and topics discussed:
Key Takeaways:
Understanding the Customer Journey: Dr. Hill emphasized the importance of mapping and understanding the complete customer journey to identify touchpoints and opportunities for improvement.
Personalization Strategies: We discussed how to leverage data and insights to create personalized experiences that resonate with customers.
Technology Integration: Insights were shared on how inQuba’s advanced technology can streamline customer interactions and drive operational efficiency.
"NATO Hackathon Winner: AI-Powered Drug Search", Taras KlobaFwdays
This is a session that details how PostgreSQL's features and Azure AI Services can be effectively used to significantly enhance the search functionality in any application.
In this session, we'll share insights on how we used PostgreSQL to facilitate precise searches across multiple fields in our mobile application. The techniques include using LIKE and ILIKE operators and integrating a trigram-based search to handle potential misspellings, thereby increasing the search accuracy.
We'll also discuss how the azure_ai extension on PostgreSQL databases in Azure and Azure AI Services were utilized to create vectors from user input, a feature beneficial when users wish to find specific items based on text prompts. While our application's case study involves a drug search, the techniques and principles shared in this session can be adapted to improve search functionality in a wide range of applications. Join us to learn how PostgreSQL and Azure AI can be harnessed to enhance your application's search capability.
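A small sketch of the trigram technique mentioned, using PostgreSQL's pg_trgm extension from Python; the table and column names are illustrative, not the actual application schema:

```python
# Fuzzy drug-name search with pg_trgm: the % operator matches rows whose
# trigram similarity exceeds pg_trgm.similarity_threshold, so misspellings
# like 'paracetamoll' still find 'paracetamol'.
import psycopg2

conn = psycopg2.connect("dbname=drugs user=postgres host=localhost")
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS pg_trgm;")
    # A GIN trigram index keeps fuzzy matching fast on large tables.
    cur.execute("""
        CREATE INDEX IF NOT EXISTS drug_name_trgm_idx
        ON drug USING gin (name gin_trgm_ops);
    """)
    cur.execute(
        "SELECT name, similarity(name, %s) AS score "
        "FROM drug WHERE name %% %s "   # %% escapes the operator for psycopg2
        "ORDER BY score DESC LIMIT 10;",
        ("paracetamoll", "paracetamoll"),
    )
    for name, score in cur.fetchall():
        print(f"{name}: {score:.2f}")
conn.close()
```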
As AI technology pushes into IT, I have been wondering, as an "infrastructure container Kubernetes guy", how does this fancy AI technology get managed from an infrastructure operations point of view? Is it possible to apply our beloved cloud native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and lead you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premises strategy we may need to apply them to our own infrastructure and make them work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already have working for real.
Keywords: AI, Containers, Kubernetes, Cloud Native
Event Link: https://meine.doag.org/events/cloudland/2024/agenda/#agendaId.4211
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
In this talk, we will discuss DDoS protection tools and best practices, network architectures, and what AWS has to offer. We will also look into one of the largest DDoS attacks on Ukrainian infrastructure, which happened in February 2022, and see what techniques helped keep web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on the Ukraine experience.
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
Brian Bills | 1119 Northumberland Drive, Sunnyvale, CA 94087 | 415-324-0279 | brianbills99@gmail.com
Senior System Administrator
DESKTOP INTEGRATION | DATA CENTER MIGRATION | ENTERPRISE IMPLEMENTATIONS
Converted data center from NIS to LDAP at Sprint Labs
Created custom scripts to add and remove users from NIS and Active Directory
Created custom scripts to manage storage space for the Git repo environment
Created a custom, automated monitoring solution for EMC for both RTP and Santa Clara
Migrated data centers for Computer Associates and Cisco
Facilitated SyncML calendar, contacts, and email solution for enterprise clients at Sun Microsystems
Skills Summary
Automation: Kickstart, Cobbler, Puppet; CFEngine, YUM, Cron; custom BASH and PERL scripts; Ghost and Altiris imaging servers; Expect, SSH, AWK, SED, and SCP
OS and Servers: Windows Server 2K8 R2, 2012; Redhat, Debian, CentOS, SUSE; AIX, HPUX, Solaris, FreeBSD; vCenter 4.x and 5.x, Xen 4.x; Hyper-V, Exchange 07, 10, 13
Hardware: Cisco UCS 6508, 200, 210; Cisco 2232 FEX, Nexus 5010; Nexsan E60 and E18 SAN storage; HP ProLiant, Blades, 3PAR storage; NetApp, Hitachi, Brocade
Career Progression
Over 10 years of experience with heavy involvement in the entire IT stack: racking and stacking, Linux and Windows systems administration, NOC jockey, DevOps, project management, ticket jockey, desktop support, hand holder, mentor, and technical writer. I have a passion for increasing the company bottom line by improving efficiency and mitigating risks. Personal favorites: team chemistry, morale building, the customer is always right, no issue is so small that it can be brushed aside and ignored, everybody is important.
Sprint Labs, Re/Max Real Estate, Cisco, Wells Fargo, Honda of America, EMC, Wal-Mart.com, Sony of America, IBM
Mentor Graphics | February 2015 – July 2015 | Fremont, CA
Desktop Support / System Administrator
Responsible for physical and virtual SUSE and Redhat based DevOps systems.
Created BASH scripts using Expect and NMAP to generate reports and automatically populate Nagios configuration files.
Responsible for end user laptops and troubleshooting end user issues for Windows 7 and 8.1.
Worked with HP Service Manager to resolve tickets and work on IT projects.
Worked with HP Service Automation, Insight Manager, and VMware Virtual Center to provision new test systems.
Worked with a customized and automated provisioning system for new laptops.
Worked extensively with Kickstart to provision new virtual and physical machines.
Multi-discipline individual contributor on multiple projects spanning network, systems, automation, provisioning, and
configuration changes.
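As referenced above, a minimal sketch of how NMAP output can drive Nagios host definitions (the subnet, output path, and host template are assumptions for illustration, not the original script):

#!/bin/bash
# gen-nagios-hosts.sh -- sketch: discover live hosts with nmap and emit Nagios
# host definitions. Subnet, output path, and template name are illustrative.
set -u

SUBNET="10.0.0.0/24"
OUT="/usr/local/nagios/etc/objects/lab-hosts.cfg"

# -sn: ping scan only; -oG -: greppable output to stdout ("... Status: Up")
nmap -sn -oG - "$SUBNET" | awk '/Up$/ {print $2}' | while read -r ip; do
    name=$(getent hosts "$ip" | awk '{print $2}')   # reverse-resolve when possible
    cat <<EOF
define host {
    use        linux-server    ; assumed host template
    host_name  ${name:-$ip}
    address    $ip
}
EOF
done > "$OUT"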
Hewlett Packard - Fremont, CA | June 2013 – February 2015
IT Project Lead / System Administrator
Managed hundreds of physical and virtual Debian, Red Hat, and Solaris (x86 and SPARC) based systems, most of them supporting DevOps.
Worked in HP's 3PAR division, a storage acquisition; the developers I supported worked on firmware and software for the latest storage product offerings.
Managed SSH, X11, and telnet sessions, open files, total space consumed, load averages, processes, ZFS volumes, NFS shares, NIS access, DNS entries, and AD access.
Performed P2V, V2V, and P2P migrations using VMware Converter.
Responsible for granting account access to NIS, Active Directory, and application servers, and for solving end user issues like VPN problems and password resets. Managed Agile, Jenkins, Bugzilla, and other company-specific applications.
Managed a multi-tiered web architecture over J2EE, web services, and shared storage using Fibre Channel.
Managed all Git servers and storage devices; responsible for adding access and group membership to Git repos.
Multi-discipline individual contributor on multiple projects spanning network, systems, automation, provisioning, and
configuration changes.
Cisco Systems - San Jose, CA | October 2011 – June 2013
IT Project Lead / System Administrator
Project lead and individual contributor for the data center migration in San Jose, CA.
Worked with two recently acquired product teams responsible for the UCS Management System.
Managed physical storage volumes, HA clusters, DNS entries, NFS shares, and network throughput and security, working with the product teams to continually update the environment to better meet their needs.
Supported and managed a mixed UNIX/Linux environment (Red Hat, CentOS, AIX, Solaris x86 and SPARC, OpenVMS, and HP-UX).
Migrated Win2k8 and Win2k12 domain controllers from virtual to physical.
Managed three virtual (VMware) and physical (Cisco UCS, HP C7K, ProLiant) networks in Houston, Austin, and San Jose.
Responsible for all interfaces, wiring, switches, routers, power, and cooling.
Managed a virtualization initiative to consolidate lab resources into a smaller space, energy, and administrative footprint.
Supported development teams to restructure server, network, and storage environments.
Implemented several custom vCloud implementations for the San Jose, Houston, and Austin labs.
Integrated Active Directory accounts into Virtual Center to manage permissions and resource objects and connect to SQL
Server 2K8.
EMC - Santa Clara, CA | March 2009 – October 2011
IT Team Lead
Created a custom Nagios solution for the lab environment and core infrastructure.
Created custom BASH scripts to automate populating Nagios with systems and services.
Worked with networking and systems engineers to tailor a monitoring solution to the department's needs.
Monitored core infrastructure switches, routers, and storage devices via SNMP with Nagios and MRTG, covering fans, ports, throughput, FRU modules, and more (see the plugin sketch at the end of this section).
Created scripts to automate adding new devices to Nagios and MRTG.
Built a Red Hat Satellite Server for creating new virtual and physical machines with Kickstart and Cobbler.
Troubleshot devices alarming in Nagios: high load averages, stale file handles, full root partitions, hosts up or down, NRPE agents not running, slow network throughput, and system swapping, caching, and I/O waits driving up load.
Monitored APS PDUs via SNMP for core fans, LEDs, sockets used, volts, and amps.
Wrote extensive Expect scripts to enroll hosts as Nagios clients and to copy over trusted keys, sysstat, and other programs, plug-ins, and configuration file changes.
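A sketch of what one of the SNMP checks above might look like as a Bash Nagios plugin (the OID shown is the generic sysUpTime; the device-specific fan/port/FRU OIDs varied per platform):

#!/bin/bash
# check_snmp_alive.sh -- sketch of a Bash Nagios plugin polling a device over
# SNMP. Exit codes follow the Nagios convention: 0=OK, 1=WARNING, 2=CRITICAL.
HOST="$1"
COMMUNITY="${2:-public}"       # SNMP v2c community string (assumed)
OID="1.3.6.1.2.1.1.3.0"        # sysUpTime -- stand-in for fan/port/FRU OIDs

if out=$(snmpget -v2c -c "$COMMUNITY" -Ovq "$HOST" "$OID" 2>/dev/null); then
    echo "OK - $HOST responding, sysUpTime $out"
    exit 0
else
    echo "CRITICAL - no SNMP response from $HOST"
    exit 2
fi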
Life Technologies - Foster City, CA | Personal Consulting Assignment, 2009
System Administrator
Built HPC clusters for genome sequencing, pairing, and mapping
Worked side by side with scientists and PhDs
Primary Life Technologies point of contact for universities, hospitals, and science labs
Customers included Johns Hopkins, Monsanto, UCLA, USC, and many universities and science labs around the globe
Assisted customers with the installation of Bioscope on their own infrastructure and on the Life Technologies SOLiD instrument
Installed Torque, SGE, Maui, Scyld, Rocks, Bioscope, and SLURM
Supported customers all over the globe troubleshooting Bioscope
Set up a Nagios monitoring server for the HPC clusters
Set up a CentOS RPM repository with failover
Set up Red Hat Enterprise Cluster
Worked with Penguin Computing and Dell servers and storage solutions
Installed ESXi 4.0 for testing software on virtual servers
Supported SOLiD instruments (genome mapping and pairing hardware/software)
Assisted developers by making hardware and firmware changes to support better performance for the newest software updates and releases
Created scripts to facilitate managing jobs in the SGE/Torque queues
Wrote Nagios plugins in Perl and BASH for monitoring HPC cluster queues like Torque and SGE (see the sketch below)
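A minimal sketch of such a queue plugin (the thresholds and the Torque-style qstat output, where queued jobs carry state Q in the fifth column, are the assumptions here):

#!/bin/bash
# check_queue_depth.sh -- sketch of a Nagios plugin counting queued Torque jobs
# via qstat. WARN/CRIT thresholds are illustrative defaults.
WARN="${1:-50}"
CRIT="${2:-200}"

# In default Torque qstat output the job state is column 5; "Q" means queued.
queued=$(qstat 2>/dev/null | awk '$5 == "Q"' | wc -l)

if   [ "$queued" -ge "$CRIT" ]; then echo "CRITICAL - $queued jobs queued"; exit 2
elif [ "$queued" -ge "$WARN" ]; then echo "WARNING - $queued jobs queued";  exit 1
else                                 echo "OK - $queued jobs queued";       exit 0
fi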
Chevron - Concord, CA | June 2007 – Jan 2009
IT Team Lead
Coordinated an RFID tag effort to label all parts in the worldwide corporate refinery inventory.
Coordinated the effort to implement Maximo, WebLogic, and Actuate iServer installations and patch upgrades.
Coordinated the effort to develop On Demand content integration for Maximo.
Project lead for tracking payroll and HR with IBM Maximo.
Peer review of enhancement packages - Java classes, JSP, SQL, and documentation
Performed solution design and configuration of LDAP integration
Troubleshot the MEA interface and JMS messaging
Handled DB configuration and Oracle triggers
Managed corporate Active Directory users and groups
Responsible for creating aliases and groups on the Exchange server to send alerts to various technical teams and user groups
Computer Associates - South San Francisco, CA | Sept 2005 – June 2007
System Administrator
Led the project to migrate all systems from the Wiley network to the Computer Associates network, working with corporate security teams to ensure that all systems were properly scanned and hardened before adding them to the corporate network.
Conducted project to clone and replicate test systems for QA, DEV, and Performance testing.
Led the project to migrate the data center from Brisbane to Redwood Shores; responsible for tracking inventory, host names, IP addresses, labeling, network drops, power, and the functionality of the network and systems after the move.
Led a project to build software that identified computer system owners, determined whether systems were still in use and whether their IP addresses could be reclaimed, and handled re-imaging or rebuilding reclaimed machines, issuing their IP addresses to new system owners, and installing the OS and applications (see the sketch below).
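A hedged sketch of the triage step of that reclamation tool (the inventory file format and the ping-based liveness test are assumptions for illustration):

#!/bin/bash
# reclaim-scan.sh -- sketch: flag inventoried IPs that no longer respond as
# reclamation candidates. Assumed inventory.txt format: "ip hostname owner".
set -u

while read -r ip host owner; do
    if ping -c 2 -W 1 "$ip" >/dev/null 2>&1; then
        echo "IN-USE    $ip $host ($owner)"
    else
        echo "RECLAIM?  $ip $host ($owner) -- no response; confirm with owner"
    fi
done < inventory.txt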
Sprint Labs - Burlingame, CA | Personal Consulting Assignment, 2006
System Administrator
Upgraded NIS to LDAP in a 50-plus-system environment of mixed Solaris, Linux, and Windows clients
Set up automounting for both Solaris and Linux clients with Sun Directory Server 5.2 (LDAP)
Upgraded firmware on NetApp NAS servers to enable the LDAP protocol
Set up OpenSSL and TLS on port 636 of the directory server so that LDAP client requests are challenged and communications are encrypted on the network (see the verification sketch at the end of this section)
Used the OpenSSL certutil tools to set up a local CA and sign my own certificates
Set up the iPlanet Web Server
Upgraded VERITAS NetBackup from version 4.5 to version 5.1 MP3
Improved the performance of the NetBackup configuration by setting the memory properties in the /etc/sysconfig file on the master server
Changed the backup policies to include multi-streaming for speed
Upgraded firmware, controllers, and drives on the tape storage library
Backed up Exchange Server mailboxes
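A quick way to verify a setup like the one above (the hostname, base DN, and CA file path are placeholders):

#!/bin/bash
# verify-ldaps.sh -- sketch: confirm the directory server answers TLS on port
# 636 and serves LDAP over it. Host, base DN, and CA path are placeholders.
LDAP_HOST="ldap.example.com"
BASE_DN="dc=example,dc=com"
CA_FILE="/etc/openldap/cacert.pem"

# 1) TLS handshake against port 636, validated with the local CA certificate.
openssl s_client -connect "$LDAP_HOST:636" -CAfile "$CA_FILE" </dev/null

# 2) Anonymous base search over ldaps:// to prove encrypted LDAP end to end.
LDAPTLS_CACERT="$CA_FILE" ldapsearch -H "ldaps://$LDAP_HOST:636" -x -b "$BASE_DN" -s base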
Wells Fargo - San Francisco, CA | Personal Consulting Assignment, 2005
Web Engineer
Performed troubleshooting for the Wells Fargo & Company Online Banking environment using HP OpenView, Mercury Topaz, and custom monitoring tools
Monitored and interpreted log files to determine the source of problems
Routed online traffic using customized load balancing and DNS tools such as F5 BIG-IP
Took application instances, platforms, or data centers out of rotation for emergencies and scheduled maintenance
Opened and directed technical bridge lines for problems requiring multiple production support groups to resolve
Performed checkouts for production applications before placing them into rotation
Used start and stop scripts for BEA WebLogic 8.1, Sun ONE Web Server 6.1, and other applications (see the wrapper sketch below)
Updated Frontline Procedures, Escalation Procedures, Business Resumption Plans, and project-specific documents for departmental and business-wide use
Created change requests, trouble tickets, and other tracking functionality using Remedy
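An illustrative wrapper in the spirit of those start/stop scripts (the domain path is an assumption; startWebLogic.sh and stopWebLogic.sh are the stock scripts a WebLogic domain ships with):

#!/bin/bash
# wls-ctl.sh -- sketch of a start/stop wrapper around the stock WebLogic
# domain scripts. DOMAIN_HOME is an assumed path.
DOMAIN_HOME="/opt/bea/user_projects/domains/mydomain"

case "${1:-}" in
    start) nohup "$DOMAIN_HOME/startWebLogic.sh" > "$DOMAIN_HOME/server.out" 2>&1 & ;;
    stop)  "$DOMAIN_HOME/stopWebLogic.sh" ;;
    *)     echo "Usage: $0 {start|stop}" >&2; exit 1 ;;
esac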
Walmart.com - Brisbane, CA | Personal Consulting Assignment, 2004
NOC Analyst
Troubleshot and escalated production environment failures relating to the website on a 24x7 basis
Single point of contact for all website production malfunctions
Multitasked under pressure to complete tasks while exceeding expectations
Orchestrated conference calls with many individuals and teams representing a range of technical disciplines in order to resolve
critical issues
Ran scripts to change out tapes in libraries, push search tables in databases, update the scratch pools on the NetBackup media servers, poll the tape libraries for currently available tapes, and run backup reports
Used ProactiveNet to monitor all daemons, services, and network interfaces
Managed networks of remote servers, agents, and systems
Monitored the Solaris production environment, which included servers, databases, reports, tape backups, and x86 Solaris
Handled code releases, code deployments, and code patches to production servers
Initiated corrective action through provided tools to assist third-party business partners, customers and vendors with problem
isolation and resolution
Used MRTG to monitor inbound and outbound network traffic
Sun Microsystems - Santa Clara, CA | Jan 2000 – May 2005
IT Project Manager
Project lead for building four usability labs: interviewed vendors, took competing bids, tracked progress, managed vendor installation, and came in under budget and weeks ahead of schedule.
Project Lead for conducting usability studies.
Project lead on creation of the Java Desktop Environment.
Project lead for synchronizing enterprise data with employee smart phones: implemented network changes to the corporate infrastructure, worked with multiple teams and vendors, produced diagrams, and hosted meetings.
Recorded, mixed, and edited the usability study videos and distributed the results by streaming them with Real Media Server.
Belonged to the Bay Area Streaming Media Users Group (BASMUG) and hosted meetings to learn how leaders at other companies were using streaming media. Other BASMUG members represented AMD, Intel, Oracle, and many other prominent Bay Area companies.
Conducted reverse auctions with bidders worldwide to do the localization for Star Office.
Conducted a video-conferenced usability study with the Hamburg Star Office L10N team, the Yokohama L10N team, and the Menlo Park usability team all participating and viewing the studies simultaneously; coordinated reserving the conference room, preparing the equipment and environment, and ensuring that the video conferencing went off smoothly.
Configured and managed VERITAS Cluster, Sun Cluster, and was on the Sun Enterprise Identity Management Team.
Education
UNIX System Administrator Level II Course, 2001 | Storage Management with Backup Course, 2001 | UNIX System Administrator Level I Course, Nov 2001 | BS/CS, Cal State Fullerton University, Fall 1992 | Advanced Electronics Course, Nov 1989