The document discusses different tools for automating processes in Salesforce - Workflow, Process Builder, and Apex - and provides guidance on which tool to use for different situations. It notes that Workflow is quick but limited, Process Builder can meet most needs but has scalability risks, and Apex can do anything but requires more effort. Key considerations for each tool include ease of use, functionality, scalability, debugging, and time to deploy.
A DevOps Journey - An experience report after 6 years of implementing DevOps and Continuous Delivery in Frende Forsikring, a small insurance company in Norway.
The document discusses how LinkedIn performs traffic shifting to mitigate user impact from infrastructure issues, validate disaster recovery plans, test capacity headroom across datacenters, and perform maintenance. It describes how edge and datacenter traffic shifts are done using techniques like withdrawing IPVS routes and marking user buckets as online/offline. Single master failovers are also discussed as an extreme option that leverages distributed locking in Zookeeper. Regular practice of traffic shifts through testing and automation helps LinkedIn prepare for potential disasters.
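The "user bucket" technique mentioned above can be sketched in a few lines: members hash into a fixed set of buckets, each bucket is assigned to a datacenter, and draining a datacenter reassigns its buckets to the remaining healthy sites. This is a minimal illustration of the idea, not LinkedIn's implementation; the datacenter names and bucket count are invented.

```python
# Minimal sketch of datacenter traffic shifting via user buckets:
# members hash into buckets, buckets map to datacenters, and draining
# a datacenter moves its buckets to the healthy ones. Illustrative only.
from hashlib import md5
from itertools import cycle

NUM_BUCKETS = 1000
DATACENTERS = ["dc-east", "dc-west", "dc-eu"]  # hypothetical sites

def bucket_for(member_id: str) -> int:
    """Stable hash of a member into one of NUM_BUCKETS buckets."""
    return int(md5(member_id.encode()).hexdigest(), 16) % NUM_BUCKETS

# Initial assignment: buckets spread round-robin across datacenters.
assignment = {b: DATACENTERS[b % len(DATACENTERS)] for b in range(NUM_BUCKETS)}

def drain(datacenter: str) -> None:
    """Take a datacenter out of rotation by moving its buckets elsewhere."""
    healthy = cycle([dc for dc in DATACENTERS if dc != datacenter])
    for b, dc in assignment.items():
        if dc == datacenter:
            assignment[b] = next(healthy)

drain("dc-west")
# After the drain, no member's bucket maps to dc-west.
print(assignment[bucket_for("member-42")])
```

Because the member-to-bucket hash is stable, only the bucket-to-datacenter mapping changes during a shift, which is what makes the redistribution fast and largely invisible to users.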
Outsmarting Merge Edge Cases in Component Based Design – Perforce
This document discusses edge cases and challenges that can occur when merging code changes between component-based software development streams. It outlines several types of complex merge scenarios, such as renames that cross stream views and "shadowed deletes" not caught by integration tools. The key lessons are to consider the big picture problem rather than symptoms, have a simple managed workflow, and continuously test upgrades. An ideal solution would involve source control at the file object level rather than filenames to more easily handle renames and component changes.
This presentation discusses continuous database deployments. It begins with an introduction of the presenter and an overview of topics to be covered. It then contrasts manual database change management with continuous deployment. The main methods covered are schema-based, using the database schema in source control; script-based, using change scripts; and code-based, coding database changes. Benefits include reduced errors and faster releases. Best practices discussed include backing up data and deploying breaking changes in steps. The presentation concludes with a call for questions.
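The script-based method described above can be sketched with a version table that records which change scripts have already run, making deploys repeatable. This is a generic illustration using SQLite; in a real pipeline the scripts would live in source control and target the production database engine.

```python
# Sketch of script-based continuous database deployment: numbered
# change scripts are applied in order, and a schema_version table
# records what has already run so repeated deploys are safe.
import sqlite3

CHANGE_SCRIPTS = {  # in a real pipeline these live in source control
    1: "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)",
    2: "ALTER TABLE customer ADD COLUMN email TEXT",
}

def migrate(conn):
    """Apply any change scripts newer than the recorded schema version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0
    for version in sorted(CHANGE_SCRIPTS):
        if version > current:
            conn.execute(CHANGE_SCRIPTS[version])
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
            current = version
    conn.commit()
    return current

conn = sqlite3.connect(":memory:")
print(migrate(conn))  # applies both scripts, returns 2
print(migrate(conn))  # idempotent second run: nothing to apply, still 2
```

The idempotence shown in the second call is the core property: the same deployment can run against development, testing, and production and each database converges on the same schema version.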
Using SaltStack to Auto Triage and Remediate Production Systems – Michael Kehoe
LinkedIn created an auto-remediation system named Nurse which leverages SaltStack and the CherryPy API to auto-triage and remediate issues with production systems. See how LinkedIn uses SaltStack with Nurse in its production environment and learn how to architect your own auto-triage and remediation system.
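The triage-then-remediate pattern can be sketched as follows. This is a generic illustration of the architecture, not Nurse's internals: the real system drives SaltStack through its API, whereas here the diagnostics and playbooks are stub functions and every name is hypothetical.

```python
# Generic sketch of an auto-triage/remediation loop: an alert triggers
# diagnostics, and remediation runs only when a known signature matches;
# anything unrecognized escalates to a human. All names are illustrative.
def check_disk(host):
    """Stub diagnostic; a real system would query the host via SaltStack."""
    return {"disk_full": host.endswith("-01")}

PLAYBOOKS = {
    # signature -> remediation command builder (would call the Salt API)
    "disk_full": lambda host: f"salt {host} cmd.run 'logrotate --force /etc/logrotate.conf'",
}

def triage(alert):
    """Gather diagnostics for an alert; remediate only on a known signature."""
    findings = check_disk(alert["host"])
    for signature, remediate in PLAYBOOKS.items():
        if findings.get(signature):
            return remediate(alert["host"])
    return "escalate-to-oncall"  # unknown signature: page a human

print(triage({"host": "web-01"}))  # matches disk_full -> remediation command
print(triage({"host": "web-02"}))  # no match -> escalates
```

The key design choice is the conservative default: automation only acts on signatures it recognizes, so a novel failure mode still reaches an on-call engineer instead of triggering a blind fix.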
This document discusses Spring Boot, a framework for building Java applications. It makes building Java web applications easier by providing sensible defaults and automatic configuration. Spring Boot allows building applications that are easy to test, debug and deploy. It supports adding additional libraries and frameworks like Spring Data JPA with minimal configuration. The document demonstrates how to create a basic application with Spring Boot and Spring Data JPA with auto-configured infrastructure and shows how Spring Boot helps with development, operations and deployment of Java applications.
Spring Boot makes it easier to create Java web applications. It provides sensible defaults and infrastructure so developers don't need to spend time wiring applications together. Spring Boot applications are also easier to develop, test, and deploy. The document demonstrates how to create a basic web application with Spring Boot, add Spring Data JPA for database access, and use features for development and operations.
The document discusses microservices and their advantages over monolithic architectures. Microservices break applications into small, independent components that can be developed, deployed and scaled independently. This allows for faster development and easier continuous delivery. The document recommends using Spring Boot to implement microservices and Docker to deploy and manage the microservices as independent components. It provides an example of implementing an ELK stack as Dockerized microservices.
This document discusses how to reduce the costs of integrating OpenStack and NFV platforms through OPNFV. It proposes:
1. Simplifying the integration process to reduce learning costs and make installer selection faster through a single command deployment option for both virtual and bare-metal environments.
2. Focusing on NFV scenario value rather than integration complexity to improve user experience and success rates.
3. Providing templates for common configurations that are supported by different installers to standardize deployments and minimize development work.
This document discusses LinkedIn's use of Couchbase as an in-memory data store. It describes how LinkedIn has grown to rely heavily on Couchbase, now running it in production, staging, and corporate environments. It also outlines some of the key use cases Couchbase supports at LinkedIn, such as serving as a read-through cache, storing counters, and acting as a source of truth datastore for some internal tools. Finally, it discusses the operational tooling and processes LinkedIn has developed to support Couchbase at scale.
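The read-through-cache use case mentioned above follows a standard pattern: reads hit the cache first and fall back to the source of truth, populating the cache on a miss. The sketch below illustrates that pattern with a plain dict standing in for the Couchbase client; the key names and TTL are invented, not LinkedIn's code.

```python
# Minimal read-through cache sketch: serve from cache when fresh,
# otherwise read through to the backing store and populate the cache.
import time

SOURCE_OF_TRUTH = {"member:1": {"name": "Ada"}}  # stand-in for a backing DB
cache = {}            # key -> (inserted_at, value); stand-in for Couchbase
TTL_SECONDS = 60.0

def get(key):
    entry = cache.get(key)
    if entry and time.monotonic() - entry[0] < TTL_SECONDS:
        return entry[1]                      # cache hit
    value = SOURCE_OF_TRUTH.get(key)         # miss: read through to the DB
    if value is not None:
        cache[key] = (time.monotonic(), value)
    return value

print(get("member:1"))  # miss -> reads the backing store, populates cache
print(get("member:1"))  # hit  -> served from cache
```

In the counter and source-of-truth use cases the flow differs (writes go to Couchbase directly), but the read-through pattern above is the one that offloads the most read traffic from backing databases.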
Using JMeter Scripts in CloudTest for Continuous Testing – Jennifer Finney
JMeter is popular with developers for creating tests that can be run easily during development. With the SOASTA CloudTest Spring release, it's easy to run JMeter tests in CloudTest from low-levels during development all the way to full scale load in production, with all of the great CloudTest features. Come learn how easy it is to shift-left and shift-right and make testing continuous.
You don’t need DTAP + Backbase implementation - Amsterdam 17-12-2015 – Pavel Chunyayev
DTAP was already an outdated concept by 2016. Instead, the idea of immutable infrastructure should be used. Backbase, in partnership with Levi9, has employed the concept of immutable infrastructure to revolutionize the way the Customer Experience Platform (CXP) is developed and released.
Heroku is a platform as a service that originally started as a Ruby PaaS but now supports Node.js, Clojure, Grails, Scala, and Python. It uses the Git version control system for deployment and a dyno process model for scaling applications. While flexible in allowing custom buildpacks and configuration via environment variables, there are also restrictions like maximum source code size and memory limits for dyno processes.
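Configuration via environment variables, mentioned above, is central to the Heroku model: settings like the listening port and database location are injected into the dyno's environment rather than baked into the slug. A minimal sketch (the defaults shown are invented for illustration):

```python
# Heroku-style twelve-factor configuration: read settings from the
# environment at startup. Heroku injects PORT per dyno; DATABASE_URL
# is set by an attached add-on. Defaults here are illustrative.
import os

port = int(os.environ.get("PORT", "8000"))
db_url = os.environ.get("DATABASE_URL", "sqlite:///dev.db")
print(f"binding on :{port}, database at {db_url}")
```

Keeping configuration out of the code is what lets the same slug run unchanged across staging and production dynos.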
LinkedIn serves traffic for its 467 million members from four data centers and multiple PoPs spread geographically around the world. Serving live traffic from many places at the same time has taken us from a disaster recovery model to a disaster avoidance model where we can take an unhealthy data center or PoP out of rotation and redistribute its traffic to a healthy one within minutes, with virtually no visible impact to users. The geographical distribution of our infrastructure also allows us to optimize the end-user's experience by geo routing users to the best possible PoP and datacenter.
This talk provides details on how LinkedIn shifts traffic between its PoPs and data centers to provide the best possible performance and availability for its members. We will also touch on the complexities of performance in APAC, how IPv6 is helping our members, and how LinkedIn stress tests data centers to verify its disaster recovery capabilities.
The document discusses how the BBC delivers software faster through continuous delivery. It achieves this by automatically deploying code to production, using continuous delivery tools, removing bottlenecks, empowering teams, and providing fast feedback through rapid iterations. The BBC moved control to individual teams, adopted a devops support model, used cloud infrastructure, and changed budgets. Its principles are to automate processes, have a backlog with zero defects, and only blocker bugs can stop the automated release pipeline.
Couchbase Connect 2016: Monitoring Production Deployments, The Tools – LinkedIn – Michael Kehoe
Good monitoring can be the difference between a great night's sleep or hearing your phone go off at 2:37 a.m. because of a production outage. Couchbase Server provides a large number of metrics which can be overwhelming if you do not know the critical things to focus on or how to expose that information to your monitoring system. In this talk we will look at example production incidents, going in depth around specific things to monitor, and how this information can be used to find issues, work out root cause, and discover trends.
Eberhard Wolff discusses several factors that contribute to creating changeable software beyond just architecture. He emphasizes that automated testing, following a test pyramid approach, continuous delivery practices like automated deployment, and understanding the customer's priorities are all important. While architecture is a factor, there are no universal rules and the architect's job is to understand each project's unique needs.
Reducing MTTR and False Escalations: Event Correlation at LinkedIn – Michael Kehoe
LinkedIn’s production stack is made up of over 900 applications and over 2,200 internal APIs. With any given application having many interconnected pieces, it is difficult to escalate to the right person in a timely manner.
In order to combat this, LinkedIn built an Event Correlation Engine that monitors service health and maps dependencies between services to correctly escalate to the SREs who own the unhealthy service.
We’ll discuss the approach we used in building a correlation engine and how it has been used at LinkedIn to reduce incident impact and provide a better quality of life to LinkedIn’s on-call engineers.
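The dependency-mapping idea behind such an engine can be sketched simply: given which services are unhealthy and who depends on whom, escalate only for the root cause rather than for every downstream alert. The service names and graph below are invented for illustration; this is not LinkedIn's engine.

```python
# Sketch of event correlation via a service dependency graph: an
# unhealthy service is a root cause only if none of the services it
# depends on are also unhealthy. Names and edges are illustrative.
DEPENDS_ON = {              # edges: service -> services it calls
    "frontend": ["profile-api"],
    "profile-api": ["profile-db"],
    "profile-db": [],
}

def root_causes(unhealthy):
    """Return the unhealthy services whose own dependencies are healthy."""
    return {
        s for s in unhealthy
        if not any(dep in unhealthy for dep in DEPENDS_ON.get(s, []))
    }

# frontend and profile-api alerts are downstream noise; page the DB owners.
print(root_causes({"frontend", "profile-api", "profile-db"}))  # {'profile-db'}
```

Collapsing a storm of correlated alerts down to one page for the owning team is exactly what reduces both MTTR and false escalations.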
Understanding the CloudStack Release Process – ke4qqq
The document discusses the CloudStack release process. It describes the current process which involves feature development, feature freeze, code freeze, and multiple release candidates that cause frustration. The process aims for a 4 month release cycle but has never maintained the schedule. The document proposes moving to reliance on automated testing, more rigid acceptance standards, gated commits based on passing tests, and releasing more frequently with smaller changes to improve quality and reduce delays.
The document discusses implementing a pragmatic continuous delivery pipeline for Java applications using open source tools like Jenkins, Nexus, and LiveRebel. The pipeline includes phases for building, testing, QA, and production. Artifacts like WAR files and log files move through the pipeline. While changing processes can be difficult, the presenter suggests automating current workflows and capturing them in a pipeline as a way to introduce continuous delivery practices.
Continuous Delivery and Micro Services - A Symbiosis – Eberhard Wolff
Continuous Delivery profits from Micro Services - and the other way round. This presentation shows how the two approaches work together - and how Micro Services can be used to simplify the transition to Continuous Delivery.
This document discusses the future of Java and its key components: the Java Virtual Machine (JVM), the Java language, and the Java Community Process (JCP) standards. It notes that while the JVM is widely used and will remain important, the Java language has seen little innovation compared to other languages and the standards process has produced some poor specifications. However, Java still dominates the market due to its large existing base and the persistence of the JVM. The future of Java is uncertain but the JVM and open source development may help ensure its continued relevance.
Service Architectures At Scale - QCon London 2015 – Randy Shoup
Over time, almost all large, well-known web sites have evolved their architectures from an early monolithic application to a loosely-coupled ecosystem of polyglot microservices. While first-order goals are almost always driven by the needs of scalability and velocity, this evolution also produces second-order effects on the organization as well. This session will discuss modern service architectures at scale, using specific examples from both Google and eBay.
It covers some interesting -- and perhaps nonintuitive -- lessons learned in building and operating these sites. It concludes with a number of experience-based recommendations for other smaller organizations evolving to -- and sustaining -- an effective service ecosystem.
Grant Fritchey and Justin Caldicott - Best practices for database deployments – Red Gate Software
This document discusses best practices for database deployments. It recommends treating the database like code by putting it under source control and integrating it with the development process. A well-defined, repeatable deployment process is key, working backwards from production and testing changes at each stage. Automation helps speed the process and remove human errors. The overall goal is a tightly coupled, automated workflow that moves database changes reliably through environments like development, testing and production.
This document discusses unit testing ASP.Net and ASP.Net MVC applications. It notes that ASP.Net applications are difficult to test because they rely on server state like HttpContext, and testing through a browser is slow and hard to debug. ASP.Net MVC is more test-friendly as it works through interfaces and HttpContextBase, but it still has challenges like testing redirection and custom code. The document provides examples of testing what is written to the client, an MVC controller, and security logging. It promotes downloading Isolator.Net and Ivonna for isolation testing and concludes with a call to action.
Micro Service – The New Architecture Paradigm – Eberhard Wolff
The document discusses microservices as a new software architecture paradigm. It defines microservices as small, independent processes that work together to form an application. The key benefits of microservices are that they allow for easier, faster deployment of features since each service is its own deployment unit and teams can deploy independently without integration. However, the document also notes challenges of microservices such as increased communication overhead, difficulty of code reuse across services, and managing dependencies between many different services. It concludes that microservices are best for projects where time to market is important and continuous delivery is a priority.
12.00 - Dr. Tim Chown - University of Southampton – IPv6 Summit 2010
1) The university deployed IPv6 in a phased approach over many years, first running it in 1997 and now having a large dual-stack production network.
2) They took a dual-stack approach to allow existing IPv4 systems while gaining experience with IPv6. Managing the complexity of dual-stack has been the main challenge.
3) Early experiences included getting IPv6 connectivity, enabling core services like DNS and web servers, and porting internal software. Harder aspects involved multi-addressing, some application support, and security issues like rogue routers.
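The addressing-plan work that recurs in these IPv6 deployment reports can be sketched with Python's `ipaddress` module: carve a provider allocation into per-site prefixes, then carve a site prefix into per-VLAN subnets. The prefix below is the RFC 3849 documentation range, not a real allocation, and the /32-to-/48-to-/64 split is one common convention, not the only one.

```python
# Sketch of an IPv6 addressing plan: a /32 allocation yields /48s per
# site, and each site /48 yields /64 subnets per VLAN. The 2001:db8::/32
# prefix is the documentation range, used here for illustration.
import ipaddress

allocation = ipaddress.ip_network("2001:db8::/32")
site_prefixes = allocation.subnets(new_prefix=48)   # 65,536 possible sites

site = next(site_prefixes)                          # first site: 2001:db8::/48
vlans = list(site.subnets(new_prefix=64))[:3]       # first three VLAN subnets
print(site)
for v in vlans:
    print(v)
```

Working the plan out programmatically like this makes it easy to audit for overlaps and to feed the result into an IPAM tool, two of the recurring recommendations in these talks.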
This document discusses VMware Integration Engineering's implementation of IPv6 in their physical and virtual infrastructure.
The key points are:
1) VMware Integration Engineering implemented IPv6 to test and validate VMware products as if operating as a real customer, including acquiring IPv6 address space and enabling IPv6 in their physical network and virtual testbeds (vPods).
2) Their implementation involved multiple phases including network audits, address planning, enabling management services, and deploying IPv6 routing.
3) Their use of virtual testbeds (vPods) with dual-stack IPv4/IPv6 networks was very successful for testing networking scenarios and VMware products.
4) Some best
Roadmap to Next Generation IP Networks: A Review of the Fundamentals - Network Utility Force
This document discusses the requirement for all IP-capable nodes to support IPv6 given the depletion of IPv4 address space. It advises that IPv6 support is no longer optional, and cautions that references to "IP" may refer to IPv4, IPv6, or both depending on context. The document then provides an overview of IPv6 fundamentals including addressing, interconnectivity, security, staff training, and transition approaches. It emphasizes that IPv6 works in practice and that, while deployment brings challenges, none are insurmountable.
This document provides a 12-step plan for enabling IPv6 in an Internet service provider (ISP) network. The steps include: 1) requesting IPv6 address space from registries; 2) auditing network equipment for IPv6 support; 3) training staff on IPv6; 4) enabling IPv6 with upstream providers; 5) updating security policies for IPv6; 6) monitoring IPv6 metrics; 7) developing an IPv6 addressing plan; 8) deploying IPv6 in the core network; 9) conducting IPv6 trials; 10) enabling IPv6 in the access network; 11) configuring IPv6 transition technologies; and 12) updating customer-premises equipment to support IPv6. The document compares
12 steps for IPv6 Deployment in Governments and Enterprises - APNIC
Training is the first step, as IPv6 requires redesigning networks and is not like IPv4. A transition plan requires in-depth IPv6 knowledge of the current network and future evolution. It affects client devices, applications, and how small entities will deploy IPv6. The document outlines 12 steps for transitioning to a dual-stack network with the long-term goal of IPv6-only, including getting training, creating a deployment strategy, controlling DNS, considering BGP, developing an addressing plan, obtaining internet resources, using an IPAM tool, assigning and auditing addresses, verifying IPv6 support, testing applications, and checking contracts with third parties.
The document discusses IPv6 adoption on the InteropNET network, including transition strategies used like dual stacking, autoconfiguration so clients can obtain IPv6 addresses, DNS services load balanced across both IPv4 and IPv6, and wireless access points supporting both protocols, with the goal of making internal services fully available over both IPv4 and IPv6. Challenges included ensuring services published AAAA records and coordinated with vendors to support IPv6, and some monitoring of IPv6 attack traffic was also performed.
The document discusses Aviran Mordo's presentation on Wix's journey towards continuous delivery. Some key points:
- Wix has transitioned from traditional waterfall development to continuous delivery, deploying changes around 60 times per day.
- This was enabled by adopting DevOps practices like test-driven development, feature toggles, A/B testing, automated deployments, and monitoring.
- Tools like App-Info, New Relic, and custom deployment tools were crucial for implementing continuous delivery at Wix's scale across multiple data centers and cloud providers.
- Transitioning required cultural changes, empowering developers, and embracing risk and failure to improve continuously. Wix now develops and replaces infrastructure
Mathew Beane discusses strategies for optimizing and scaling Magento applications on clustered infrastructure. Some key points include:
- Using Puppetmaster to build out clusters with standard webnodes and database configurations.
- Magento supports huge stores and is very flexible and scalable. Redis is preferred over Memcache for caching.
- Important to have application optimization, testing protocols and deployment pipelines in place before scaling.
- Common components for scaling include load balancers, proxying web traffic, clustering Redis with Sentinel and Twemproxy, adding read servers and auto-scaling.
This document provides an overview of IPv6 for an audience unfamiliar with the topic. It begins with a brief explanation of what IPv6 is and how it differs from IPv4 in areas like addressing and configuration. Statistics on global and domestic IPv6 deployment levels are presented. Potential business drivers for IPv6 adoption in research and education are outlined. The document then discusses IPv6 support and services available through Janet, as well as initial deployment strategies and considerations. Sources of additional guidance are listed, and examples of IPv6 in use are briefly described.
Benchmarking NGINX for Accuracy and Results - NGINX, Inc.
View full webinar on demand at http://bit.ly/nginxbenchmarking
Whether you’re doing performance testing or planning for infrastructure needs, benchmarking can be a big deal. Join us for this webinar where we cover NGINX benchmarking best practices, including:
- the test environment
- configuring NGINX
- using benchmarking tools
- and more!
You’ll learn how to approach doing benchmarks so that you obtain results that are more accurate, better understood, and do a better job of addressing the needs of your project.
IETF IPv6 Activities Report by Cathy Aronson at ARIN 36. Presentation and webcast archive available at: https://www.arin.net/participate/meetings/reports/ARIN_36/ppm.html
Connections Migrations the Easy Way - Soccnx10 - Sharon James
Migrating and upgrading Connections can be daunting. Here I share some tips, best practices, and information on how to ensure that your upgrades are stress-free.
Connections Upgrades and Migrations the Easy Way - LetsConnect
Migrating or upgrading an IBM Connections instance can be daunting, but it doesn't have to be. Sharon has had 6 years of successful upgrades and migrations and can guide you through the minefield of information required (along with lessons learnt from some not-so-successful ones) -- from what you need to download and configure to which data and assets you must move. We will cover the pros and cons of side-by-side and in-place migrations and what to do if something should go wrong. From an iSeries Connections upgrade through to a side-by-side Oracle DB migration, we should have every scenario covered to take the stress out of YOUR Connections upgrade.
This document discusses considerations for internet service providers transitioning to IPv6. It covers common network architectural patterns like core/backbone, last mile, and border networks. It also discusses transition approaches like dual-stack and tunneling. The document outlines a multi-phase transition plan including obtaining IPv6 address space, setting up a testbed, enabling IPv6 routing and services, and addressing security considerations during the rollout.
RIPE NCC conducted active measurements of the 2012 World IPv6 Launch from May 19 to June 18 using 53 vantage points to measure DNS records, ping, traceroute, and HTTP performance over IPv6 for 60 participating networks. The results showed that most sites kept IPv6 enabled as intended, though some did not enable it during the event. RIPE also demonstrated tools for visualizing IPv6 connectivity and performance using its RIPE Atlas probes to trace routes to networks, with options to view results by autonomous system or get raw data. Feedback was sought on the traceroute visualization tool to help improve debugging of network issues.
The document discusses Spring and Java EE application development. It describes Spring as a programming model that defines APIs but no infrastructure, allowing applications to run on servlet containers like Tomcat without needing full Java EE application servers. It also summarizes Spring tools for operations and monitoring large clusters, and how OSGi modularization allows updating parts of applications at runtime.
WTF: Where To Focus when you take over a Drupal project - Symetris
Jumping into pre-built Drupal projects sometimes requires a leap of faith as much for clients as for developers. The client is usually coming out of a bad previous business relationship and the code is not always structured according to your standards.
During this talk, Symetris will share its experience and provide tips on how to navigate these often uncharted waters. Our goal is to help you convert an uncertain client into a long term partner and have a checklist of what to look out for as developers.
Presentation on the planning and deployment of the IPv6 enabled, municipal WiFi network, built for the city of Douglasville, GA, by Network Utility Force.
Aerohive's case study of the outdoor, municipal WiFi network designed and built by Network Utility Force for the city of Douglasville, a community project funded by Google.
The document provides an overview of IPv6 addressing architecture, DHCPv6, and DNS. It discusses the 128-bit address space of IPv6 which provides a large number of addresses. It also describes address types, address formatting, router advertisements, neighbor discovery, and stateful address assignment using DHCPv6. The document highlights changes needed for DNS to support IPv6, including new record types and the ip6.arpa domain for IPv6 reverse lookups.
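The ip6.arpa reverse-lookup mechanics described above are easy to see in practice. Here is a minimal sketch using Python's standard-library `ipaddress` module (the 2001:db8::/32 documentation prefix serves as a stand-in address):

```python
import ipaddress

# Every IPv6 address expands to 32 hex nibbles; the reverse DNS name lists
# those nibbles in reverse order under the ip6.arpa domain.
addr = ipaddress.ip_address("2001:db8::1")

print(addr.exploded)
# 2001:0db8:0000:0000:0000:0000:0000:0001

# Prints the 32 reversed nibbles followed by ip6.arpa,
# ending in ...8.b.d.0.1.0.0.2.ip6.arpa
print(addr.reverse_pointer)
```

The one-nibble-per-label layout is why IPv6 reverse zones are far deeper than their IPv4 in-addr.arpa counterparts, and why they are usually generated by IPAM tooling rather than maintained by hand.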
Network Utility Force is a network architecture and consulting firm that specializes in IPv6 deployment. The document discusses several key points regarding the need for IPv6 adoption. It notes that IPv4 addresses are depleting and IPv6 is necessary to support growth, including the internet of things. It also outlines factors organizations should consider when planning their IPv6 deployment such as addressing, routing, security, testing, and training. The document emphasizes that IPv6 can be deployed using best practices with an emphasis on performance, security, and flexibility.
The document discusses the need for higher education institutions to deploy IPv6, as IPv4 addresses are depleting. It recommends that IPv6 support is no longer optional for IP-capable nodes. It provides examples of how US federal agencies deployed IPv6 and the costs of deploying versus not deploying IPv6. The presentation discusses addressing plans, security considerations, staff training, and transition technologies like dual stack that institutions can use to deploy IPv6. Real-world case studies of successful IPv6 deployments are also presented.
The document provides an overview of Border Gateway Protocol (BGP) routing concepts. It discusses how BGP allows organizations to exchange routing information with neighbors and control traffic flow. BGP uses autonomous system numbers to identify networks and interior BGP (iBGP) to distribute routes within an organization, while exterior BGP (eBGP) communicates with neighboring networks. The document also notes that most internet service providers peer with each other via BGP at internet exchange points around the world.
Network Utility Force IPv6 NAT64 Presentation for North American IPv6 Summit - Network Utility Force
The document summarizes a demonstration of NAT64 technology. It describes the demo setup including an A10 appliance performing NAT64/DNS64 translation between an IPv6-only network and IPv4 internet. Most protocols worked through the translation including web, email, chat and streaming media. Some issues were encountered with iOS devices initially connecting and FTP without an ALG but these were resolved. Android devices did not connect due to an IPv4 check.
The document summarizes a presentation on IPv6 support on the InteropNET network. It discusses:
1) The background and goals of fully supporting dual-stack IPv4 and IPv6 on the network, with equivalent or better functionality for IPv6 compared to IPv4.
2) How IPv6 was implemented on the network, including stateless address autoconfiguration (SLAAC), DNS services, internal services, and wireless connectivity being made dual-stack.
3) Topics covered included IPv6 subnetting and addressing, challenges in implementation, and statistics on adoption and traffic. The document provides recommendations on IPv6 subnetting and addressing approaches.
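Subnetting recommendations like those above usually boil down to "one /64 per LAN, carved from the site prefix". As an illustrative sketch (using the 2001:db8::/32 documentation space, not a real allocation), Python's standard-library `ipaddress` module can enumerate such a plan:

```python
import itertools
import ipaddress

# Hypothetical site allocation from the documentation prefix; a real site
# would typically receive a /48 from its RIR or upstream provider.
site = ipaddress.ip_network("2001:db8:1234::/48")

# A /48 contains 2**(64-48) = 65536 possible /64 LAN subnets.
print(2 ** (64 - 48))

# Enumerate the first few /64s lazily; materializing all 65536 is unnecessary.
for lan in itertools.islice(site.subnets(new_prefix=64), 3):
    print(lan)
# 2001:db8:1234::/64
# 2001:db8:1234:1::/64
# 2001:db8:1234:2::/64
```

Because address space is no longer scarce at the LAN level, plans like this are normally laid out for administrative clarity (one nibble boundary per building, per floor, etc.) rather than to conserve addresses.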
Essentials of Automations: The Art of Triggers and Actions in FME - Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
How to Get CNIC Information System with Paksim Ga.pptx - danishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Unlock the Future of Search with MongoDB Atlas: Vector Search Unleashed - Malak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
HCL Notes and Domino License Cost Reduction in the World of DLAU - panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Building Production Ready Search Pipelines with Spark and Milvus - Zilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Removing Uninteresting Bytes in Software Fuzzing - Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined tools from two critical Linux packages: libxml2's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security-analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Monitoring and Managing Anomaly Detection on OpenShift.pdf - Tosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
UiPath Test Automation using UiPath Test Suite series, part 6 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 6. In this session, we will cover test automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of their features, but many features that add convenience and capability do so at the expense of security. This best practices guide outlines steps users can take to better protect personal devices and information.
Taking AI to the Next Level in Manufacturing.pdf - ssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers - akankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
What do a Lego brick and the XZ backdoor have in common? - Speck&Tech
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only the fact that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the case of the XZ backdoor share much more than that.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several events, migrations, and training activities related to LibreOffice. She previously worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not pursuing her passion for computers and for Geeko she cultivates her curiosity about astronomy (which is where her nickname deneb_alpha comes from).
GraphRAG for Life Science to increase LLM accuracy - Tomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack - shyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack
How to Plan and Conduct IPv6 Field Trials
1. IPv6 Trials
Field, beta, limited deployment, lab, opt in or opt out: how to choose and what to consider.
Brandon Ross
Chief Network Architect and CEO
2. Where to Start
• Like any other problem, building a v6 trial can seem overwhelming
• Don't look at it monolithically; instead, break the problem into small pieces
• There are a few common activities everyone should start with
3. Lab
• Build a lab
• Stock it with the identical equipment you have in the field
• Replicate identical configurations and software versions of what is in the field
• Can't afford to buy all that equipment?
  – Make a vendor do it
  – Hire a consulting firm
4. Get IPv6 space
• Upstream provider
• Regional Internet Registry (ARIN, etc.)
• Get it from a tunnel broker
  – HE.net
  – Tunnelbroker.net
  – Etc.
5. Connection to the world
• Check with your upstream provider
• Get a tunnel
  – Tunnelbroker.net
  – Etc.
• Run lab testing without an outside connection?
6. What to test?
• Start by identifying where you would like to deliver v6 service first
• Depends on the type of network and the type of business
• Also depends on the goals of the v6 trial
  – Public facing: "Look at me, I'm doing v6 and my competitors aren't!"
  – Internal service for employees
  – IPv6 Internet access for ISP clients
  – Holy crap! I'm out of addresses!!
7. Public Facing
• "I want the world to reach my web site over v6"
• Users of my web site that have v6 will get better performance
• Probably one of the easiest trials to start with
8. Public Facing
• Identify the elements needed for the trial
  – Several manufacturers make load balancers that can translate your v4 web site into v6
    • Your existing load balancer may already be capable of this!
    • Very little, if anything, needs to change on the web site itself; the load balancer does all the work
  – IPv6 transit link from an upstream provider
  – At least one router that can route v6
• Can't do all of that?
  – Check out the free Automatic IPv6 service from CloudFlare
9. Public Facing
• Put all that in your lab and poke at it
• Once everything works well, a parallel trial is appropriate
• Low risk if you use a separate link from the upstream provider and a separate router
• Control your experiment by publishing (or removing) AAAA records
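Because the AAAA record is the on/off lever for a public-facing trial, it helps to have a quick client-side check that v6 resolution behaves as expected. A minimal sketch using only the Python standard library (the `has_aaaa` helper is ours, not from the presentation):

```python
import socket

def has_aaaa(host: str) -> bool:
    """Return True if `host` has an IPv6 (AAAA) answer, or is a v6 literal."""
    try:
        socket.getaddrinfo(host, None, socket.AF_INET6)
        return True
    except socket.gaierror:
        return False

# Numeric literals are parsed locally, with no DNS query involved:
print(has_aaaa("::1"))  # True
```

Run a check like this against your site from inside and outside the trial network after publishing (or pulling) the AAAA record; keeping the record's TTL short makes the lever act faster.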
10. Internal Service
• Parts:
  – Desktops that support v6
    • Good news: most modern ones do out of the box
  – DHCPv6 server
    • Optional, but necessary if you want to deliver DNS over IPv6
  – DNS server that understands AAAA requests
    • Many do out of the box
  – IPv6 transport
    • Probably don't want to use tunnels
• As before, start small; pick somewhere easy
11. Internal Service
• Again, put it in the lab and try to break it
• Trial considerations
  – Opt in
    • Configure all users' machines without v6 using administrative tools
    • Provide instructions to users for how to turn it on if they want it
  – Opt out
    • Same, just the opposite
  – Blind trial
    • Turn it on by default for some users and not for others
    • Track the trouble tickets you get from each
12. Internal Service
• Grow the service over time, slowly, and continue to track issues
• Need to back off? Easily move all users on a subnet off of v6 by shutting down router advertisements (RAs)
13. ISP Clients
• Similar to internal employees
• But you have no control over their devices
• Considerations
  – CPE support
  – RAS/aggregation device support
• JUST BECAUSE IT'S IN THE SPECS DOESN'T MEAN IT WILL PERFORM THE WAY YOU NEED IT TO
14. ISP Clients
• Opt-in beta
  – Start by enabling some part of the backbone for v6
    • This may mean building separate logical infrastructure in cases where not-so-capable equipment needs to be bypassed
    • For a beta, it's best to keep trial traffic somewhat separate from the rest
    • Ship a new CPE configured for v6 use (if necessary) or configure opt-in users with the appropriate profile
    • Supply instructions to users on how to disable v6 if they have a problem
  – Comcast has been a leader in opt-in v6 trials in North America
  – Most importantly, set up your support organization so it can carefully track problems related to clients who are in the trial
• Opt-out beta not recommended
15. ISP Clients
• Field trial
• Opt-in beta going well? Time to inflict v6 on the unsuspecting
• Create a "big red switch" so you can disable v6 for all of your clients quickly if necessary
  – Remember that miscreants will be looking for vulnerabilities in v6. We don't know what they all are yet. Being able to shut it off in case of a disaster will be critical.
• Your support organization is the key to measuring your success
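What the "big red switch" looks like depends entirely on your provisioning system, but the core idea is simple: regenerate and republish your zone data without the AAAA records, leaving IPv4 service untouched. A toy sketch (the record-dict layout and the `big_red_switch` name are illustrative assumptions, not from the presentation):

```python
# Toy zone data; a real deployment would pull this from the provisioning DB.
ZONE = [
    {"name": "www.example.com.",  "type": "A",    "value": "203.0.113.10"},
    {"name": "www.example.com.",  "type": "AAAA", "value": "2001:db8::10"},
    {"name": "mail.example.com.", "type": "AAAA", "value": "2001:db8::25"},
]

def big_red_switch(records):
    """Drop every AAAA record so clients fall back to IPv4 service."""
    return [r for r in records if r["type"] != "AAAA"]

v4_only = big_red_switch(ZONE)
print(len(ZONE), "->", len(v4_only))  # 3 -> 1
```

For this to work quickly in an emergency, the AAAA records need short TTLs; an hour-long TTL means up to an hour of lingering v6 traffic after you flip the switch.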
16. Out of addresses
• Don’t get here
• Don’t get here
• Don’t get here
• Don’t get here
• Don’t get here
• Don’t get here
• Don’t get here