This document discusses edge cases and challenges that can occur when merging code changes between component-based software development streams. It outlines several types of complex merge scenarios, such as renames that cross stream views and "shadowed deletes" that integration tools fail to catch. The key lessons are to address the big-picture problem rather than its symptoms, keep a simple, managed workflow, and test upgrades continuously. An ideal solution would apply source control at the file-object level rather than to filenames, making renames and component changes easier to handle.
Software Testing in a Distributed Environment (Perforce)
Distributed development across countries creates both challenges and opportunities for the production of high-quality software. We’ll look at new ways of achieving automation for testing software in a continuous delivery context, using parallelization techniques and automated analysis fully integrated with a reliable and scalable SCM system. A new optimal method of testing common code in similar branches is presented, along with the semantic merging of testing results.
Building a successful DevOps solution requires a holistic view of your development ecosystem plus solid technology that can support your organization today and in the future. Learn how to start defining your own successful DevOps solution and how to position Helix to be at the center of it all.
Perforce Helix Never Dies: DevOps at Bandai Namco Studios (Perforce)
Traditionally at Bandai Namco Studios, there has been no unified version control system in place and teams could choose to use any VCS system for their game titles—Subversion, Git, AlienBrain, or none at all. I’ll talk about why Bandai Namco Studios chose to standardize on Perforce Helix, show how we develop LiveOps-type mobile applications using the Unity game engine, and the advantages we gain from centrally managing code and assets in Helix.
URP? Excuse You! The Three Kafka Metrics You Need to Know (Todd Palino)
What do you really know about how to monitor a Kafka cluster for problems? Is your most reliable monitoring your users telling you there’s something broken? Are you capturing more metrics than the actual data being produced? Sure, we all know how to monitor disk and network, but when it comes to the state of the brokers, many of us are still unsure of which metrics we should be watching, and what their patterns mean for the state of the cluster. Kafka has hundreds of measurements, from the high-level numbers that are often meaningless to the per-partition metrics that stack up by the thousands as our data grows.
We will thoroughly explore three key monitoring concepts in the broker that will leave you an expert in identifying problems with the least amount of pain:
- Under-replicated Partitions: The mother of all metrics
- Request Latencies: Why your users complain
- Thread pool utilization: How could 80% be a problem?
We will also discuss the necessity of availability monitoring and how to use it to get a true picture of what your users see, before they come beating down your door!
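To make the first metric concrete: a partition is under-replicated when fewer of its replicas are in sync than are assigned. A minimal sketch of how a monitoring agent might derive a broker-level URP count — the data structures here are hypothetical stand-ins for what would actually be fetched via JMX or the Kafka admin API:

```python
# Sketch: counting under-replicated partitions (URP) from partition metadata.
# The partition dicts below are illustrative, not Kafka's actual API objects.

def count_under_replicated(partitions):
    """A partition is under-replicated when its in-sync replica set (isr)
    is smaller than its assigned replica set."""
    return sum(1 for p in partitions if len(p["isr"]) < len(p["replicas"]))

partitions = [
    {"topic": "orders", "partition": 0, "replicas": [1, 2, 3], "isr": [1, 2, 3]},
    {"topic": "orders", "partition": 1, "replicas": [1, 2, 3], "isr": [1, 2]},  # broker 3 lagging
    {"topic": "clicks", "partition": 0, "replicas": [2, 3], "isr": [2]},        # broker 3 down
]

print(count_under_replicated(partitions))  # 2 -- any non-zero URP warrants a look
```

A sustained non-zero value is the signal; the talk's point is that this single number summarizes replication health across the whole broker.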
Global Software Development powered by Perforce (Perforce)
From inception to sunset, hundreds of people from around the world are involved in the production and live operations of video games developed by Electronic Arts. An overview of how EA uses a variety of features in Perforce Helix to effectively utilize its worldwide talent pool, develop software efficiently, and protect its intellectual property.
Make It Cooler: Using Decentralized Version Control (indiver)
A commonly used version control system in the ColdFusion community is Subversion -- a centralized system that relies on being connected to a central server. The next-generation version control systems are “decentralized,” in that version control tasks do not rely on a central server.
Decentralized version control systems are more efficient and offer a more practical approach to software development.
In this session, Indy takes you through the considerations in moving from Subversion to Git, a decentralized version control system. You also get to understand the pros and cons of each and hear of the practical experience of migrating projects to decentralized version control.
Version control is often used in conjunction with a testing framework and continuous integration. To complete the picture, Indy walks you through how to integrate Git with a testing framework, MXUnit, and a continuous integration server, Hudson.
Using Perforce Data in Development at Tableau (Perforce)
Data plays a big role at Tableau—not just for our customers, but also throughout our company. Using our own products is not only one of our fundamental company values, but the analysis and discoveries we make are important to track as they shape our development processes and influence our day-to-day decisions. In this talk, we present and analyze a variety of data visualizations based on Perforce data from our development organization and share how it has influenced our infrastructure and development practices.
The devops approach to monitoring, Open Source and Infrastructure as Code Style (Julien Pivotto)
Monitoring is critical for every decent application that runs in production. Many widely used monitoring tools show their limits in the age of Infrastructure as Code and cloud computing. Let's investigate how monitoring can face the new challenges: scalability, reproducibility, and automation.
Infrastructure as Code represents treating infrastructure components like software that can be version controlled, tested, and deployed. The document discusses tools and techniques for implementing Infrastructure as Code including using version control, continuous integration/delivery, configuration automation, and virtual labs for testing changes. It provides examples of workflows using these techniques and recommends starting small and evolving Infrastructure as Code practices over time.
Webinar slides: Replication Topology Changes for MySQL and MariaDB (Severalnines)
This document discusses replication topology changes for MySQL and MariaDB databases. It covers making changes using GTID or regular replication, the failover process, and tools like MaxScale and ProxySQL that can help automate query rerouting during a failover. Specific topics covered include reslaving nodes, setting up master-master replication, and performing both offline and online failovers.
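For the GTID case mentioned above, repointing a replica at a new master mostly comes down to issuing `CHANGE MASTER TO ... MASTER_AUTO_POSITION = 1`. A hedged sketch of generating those statements — hostnames and the replication user are placeholders, and in practice a proxy such as MaxScale or ProxySQL reroutes queries while this runs:

```python
# Sketch: building the MySQL statements used to reslave a node to a new
# master with GTID auto-positioning. Host and user names are illustrative.

def reslave_statements(new_master_host, repl_user, port=3306):
    return [
        "STOP SLAVE;",
        ("CHANGE MASTER TO MASTER_HOST='{h}', MASTER_PORT={p}, "
         "MASTER_USER='{u}', MASTER_AUTO_POSITION=1;").format(
            h=new_master_host, p=port, u=repl_user),
        "START SLAVE;",
    ]

for stmt in reslave_statements("db2.example.com", "repl"):
    print(stmt)
```

With `MASTER_AUTO_POSITION=1` the replica negotiates its starting point from GTID sets, which is what removes the error-prone binlog-file-and-offset bookkeeping from the failover.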
How Samsung Engineers Do Pre-Commit Builds with Perforce Helix Streams (Perforce)
Get an in-depth look at the life of a pre-commit build at Samsung using Perforce Helix Streams and Electric Cloud’s Electric Commander with Helix Swarm for code review.
Measure and Increase Developer Productivity with Help of Serverless at AWS Co... (Vadym Kazulkin)
The goal of Serverless is to focus on writing the code that delivers business value and offload everything else to your trusted partners (like Cloud providers or SaaS vendors). You want to iterate quickly and today’s code quickly becomes tomorrow’s technical debt. In this talk we will show why Serverless adoption increases the developer productivity and how to measure it. We will also go through AWS Serverless architectures where you only glue together different Serverless managed services relying solely on configuration, minimizing the amount of the code written.
The document provides a brief history of revision control systems including SCCS, RCS, CVS, Subversion, and distributed systems like Git, Mercurial, and Bazaar. It discusses the problems with earlier systems that motivated the creation of Git, including issues with CVS and Subversion. It describes how Linus Torvalds created Git to address these problems and support fast, distributed, and non-linear development workflows.
A Practical Guide to Selecting a Stream Processing Technology (confluent)
Presented by Michael Noll, Product Manager, Confluent.
Why are there so many stream processing frameworks that each define their own terminology? Are the components of each comparable? Why do you need to know about spouts or DStreams just to process a simple sequence of records? Depending on your application’s requirements, you may not need a full framework at all.
Processing and understanding your data to create business value is the ultimate goal of a stream data platform. In this talk we will survey the stream processing landscape, the dimensions along which to evaluate stream processing technologies, and how they integrate with Apache Kafka. Particularly, we will learn how Kafka Streams, the built-in stream processing engine of Apache Kafka, compares to other stream processing systems that require a separate processing infrastructure.
OnAndroidConf 2013: Accelerating the Android Platform Build (David Rosen)
Presented at the OnAndroidConf, October 22 2013, http://www.onandroidconf.com/sessions.html
Abstract:
Optimizing the Android build environment to perform at world-class level is a big challenge for many Android device and chipset makers today. Churning through thousands of platform builds per week requires laser-focus on high-performance infrastructure and tooling. If you’re looking at improving your overall engineering and developer productivity, the software build use case is an obvious area to prioritize.
This technical talk will focus on the following aspects of the Android platform build:
Common Android platform build challenges and opportunities with real-life production references
The various Android build use cases and their needs – full integration and release builds, developer incremental builds
Evolution of the Android build and codebase with trends and statistics
Detailed technical analysis of the Android platform build, highlighting opportunities for improvements
Proposed solutions and technical tricks to optimize an Android software build environment
The document discusses deployment pipelines for databases. It defines a deployment pipeline and describes its typical stages: change description, change validation, and change implementation. It outlines challenges of including databases in deployment pipelines, such as different processes for database and application changes. The document advocates for automating database deployments to increase deployment speed and reliability while reducing risk. It provides examples of database deployment pipeline scenarios and considerations for continuous integration, delivery, and rollbacks.
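The three stages named above — change description, validation, implementation — can be pictured as a small pipeline that stops at the first failing stage, so a bad change never reaches implementation. A minimal sketch; the stage functions and their toy checks are hypothetical:

```python
# Sketch: a database deployment pipeline as an ordered list of stages.
# Each stage returns True/False; execution halts at the first failure.

def describe_change(ctx):
    return bool(ctx.get("migration_script"))          # is the change described?

def validate_change(ctx):
    # Toy validation rule standing in for real review/linting of the script.
    return "DROP TABLE" not in ctx["migration_script"].upper()

def implement_change(ctx):
    ctx["applied"] = True                             # stand-in for running the script
    return True

STAGES = [describe_change, validate_change, implement_change]

def run_pipeline(ctx):
    for stage in STAGES:
        if not stage(ctx):
            return stage.__name__                     # report where it stopped
    return "success"

print(run_pipeline({"migration_script": "ALTER TABLE t ADD COLUMN c INT;"}))  # success
print(run_pipeline({"migration_script": "DROP TABLE t;"}))                    # validate_change
```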
eZ Publish 5: from zero to automated deployment (and no regressions!) in one ... (Gaetano Giunta)
1. The workshop will cover Docker, managing environments, database changes, and automated deployments for eZPublish websites.
2. A Docker stack is proposed that includes containers for Apache, MySQL, Solr, PHP, and other tools to replicate a production environment for development. Configuration and code are mounted as volumes.
3. Managing environments involves storing settings in the code repository and using symlinks to deploy different configurations. Database changes should be managed via migration scripts rather than connecting directly to a shared database.
4. Automating deployments is important and involves tasks like updating code, the database, caches and reindexing content. The same deployment script should be used for development and production. Testing websites is also recommended.
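The deployment tasks listed in point 4 can be sketched as one ordered script that is run identically in every environment — the task names below are placeholders for the real code-update, migration, cache, and reindex commands:

```python
# Sketch: one deployment script for all environments; only the environment
# name varies, which keeps dev and production deployments identical.

TASKS = ("update_code", "run_db_migrations", "clear_caches", "reindex_content")

def deploy(environment, log):
    for task in TASKS:
        log.append("{}: {}".format(environment, task))  # stand-in for executing the task

log = []
deploy("dev", log)
deploy("prod", log)
print(log[0])   # dev: update_code
print(log[-1])  # prod: reindex_content
```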
The document discusses best practices for building and deploying Scala applications based on the 12 Factor App methodology. It covers topics like managing dependencies, separating configuration from code, building in a simple and automated way, scaling apps through stateless processes, achieving parity between development and production environments, and running admin tasks isolated from the main app. The presentation provides examples using tools like sbt, Dropwizard, and Heroku to demonstrate how to structure Scala apps according to the 12 factors.
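One of the twelve factors mentioned — separating configuration from code — usually means reading settings from the environment rather than from files baked into the build artifact. A small sketch of the idea in Python (the setting names are illustrative; the talk's own examples use sbt, Dropwizard, and Heroku):

```python
import os

# Sketch: 12-factor configuration -- settings come from environment
# variables with explicit defaults, so the same build runs unchanged
# in development, staging, and production.

def load_config(env=os.environ):
    return {
        "database_url": env.get("DATABASE_URL", "postgres://localhost/dev"),
        "port": int(env.get("PORT", "8080")),
        "debug": env.get("DEBUG", "false").lower() == "true",
    }

cfg = load_config({"PORT": "9000", "DEBUG": "true"})
print(cfg["port"], cfg["debug"])  # 9000 True
```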
Continuous integration using Jenkins and Sonar (Pascal Larocque)
Continuous Integration can help your team release features faster. It reduces the risk of deployment issues and will speed up your development cycle. In this presentation we take a look at how Jenkins and Sonar can help you test, analyze, deploy, and gather performance metrics that will help your team increase development quality and reduce deployment time.
Adopting Java for the Serverless world at AWS User Group Pretoria (Vadym Kazulkin)
Java has been one of the most popular programming languages for many years, but it used to have a hard time in the serverless community. Java is known for its high cold-start times and high memory footprint, and you pay your cloud provider for both. That's why most developers tried to avoid using Java for such use cases. But times change: the community and cloud providers steadily improve things for Java developers. In this talk we look at the features and possibilities AWS offers Java developers, and at the most popular Java frameworks, like Micronaut, Quarkus, and Spring (Boot), to see how they address serverless challenges (the AOT compiler and GraalVM native images play a huge role) and enable Java for broad use in the serverless world.
Systematic Load Testing of Web Applications (Jürg Stuker)
Talk held at the conference Coding Serbia in Novi Sad.
Performance of web applications is a crucial dissatisfier for users and thus an important quality criterion -- one also used by Google to rank its result lists. As with other quality aspects, performance testing cannot be done at the end of a project; it is an integral part of the development process.
The presentation explains web performance testing through practical examples, in order to better understand and judge cause and effect in observed behavior. Usually a few causes have a disproportionate effect on bad performance. In addition, it is important to understand diverse load and test scenarios to optimize application behavior.
The presentation also introduces a methodology to systematically define and assess performance metrics of an application. The content is based on open-source tools, and the presentation includes live testing to illustrate the excellent cost-benefit ratio of systematic white-box performance testing using an HTTP proxy.
The document discusses making a stateless service-oriented application highly available using GlusterFS. It describes setting up a GlusterFS cluster with replicated volumes to provide a centralized data store. The application is configured to mount the GlusterFS volume and an update mechanism is built to notify the application when data changes by monitoring the volume for modifications. This allows making the application redundant and aware of data changes for high availability.
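The update mechanism described — watching the mounted volume for modifications — can be approximated by comparing file modification times between polls. A hedged sketch; the paths are illustrative, and a production setup might prefer inotify over polling:

```python
import os

# Sketch: detect data changes on a mounted (e.g. GlusterFS) volume by
# diffing per-file mtimes between two snapshots. Paths are illustrative.

def snapshot(root):
    """Map every file under root to its modification time."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            state[path] = os.path.getmtime(path)
    return state

def changed_files(before, after):
    """Files that are new or whose mtime moved since the last snapshot."""
    return [p for p, m in after.items() if before.get(p) != m]

# Usage: poll periodically, e.g.
#   before = snapshot("/mnt/gluster/data")
#   ... wait one interval ...
#   after = snapshot("/mnt/gluster/data")
# and notify the application for every path in changed_files(before, after).
```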
Code Yellow: Helping Operations Top-Heavy Teams the Smart Way (Todd Palino)
All engineering teams run into trouble from time to time. Alert fatigue, caused by technical debt or a failure to plan for growth, can quickly burn out SREs, overloading both development and operations with reactive work. Layer in the potential for communication problems between teams, and we can find ourselves in a place so troublesome we cannot easily see a path out. At times like this, our natural instinct as reliability engineers is to double down and fight through the issues. Often, however, we need to step back, assess the situation, and ask for help to put the team back on the road to success.
We will look at the process for Code Yellow, the term we use for this process of “righting the ship”, and discuss how to identify teams that are struggling. Through a look at three separate experiences, we will examine some of the root causes, what steps were taken, and how the engineering organization as a whole supports the process.
Eberhard Wolff discusses several factors that contribute to creating changeable software beyond just architecture. He emphasizes that automated testing, following a test pyramid approach, continuous delivery practices like automated deployment, and understanding the customer's priorities are all important. While architecture is a factor, there are no universal rules and the architect's job is to understand each project's unique needs.
URP? Excuse You! The Three Metrics You Have to Know (confluent)
(Todd Palino, LinkedIn) Kafka Summit SF 2018
DESIGN West 2013 Presentation: Accelerating Android Development and Delivery (David Rosen)
This document discusses accelerating the Android development process. It begins by noting the widespread use of Android and the challenges of slow builds and testing. It then outlines techniques for speeding up builds and Compatibility Test Suite (CTS) execution, including using more efficient build tools, dependency optimization, and test parallelization. Faster development cycles can be achieved through an integrated continuous delivery solution that applies these acceleration strategies and provides end-to-end process visibility.
Modular software design is an ingrained methodology, but how can Perforce help with that modularity? See how WMS Gaming and Perforce have taken the lid off p4 sync and stuffed it with some extra brain power to make modular version control "automagic."
Configuration and Build Management of Product Line Development with Perforce (Perforce)
This session provides an in-depth look at best practices for component-based software development practices in configuration and build management. Learn how to manage the challenges often faced by companies with many products, diverse teams and daily releases.
Using Perforce Data in Development at TableauPerforce
Data plays a big role at Tableau—not just for our customers, but also throughout our company. Using our own products is not only one of our fundamental company values, but the analysis and discoveries we make are important to track as they shape our development processes and influence our day-to-day decisions. In this talk, we present and analyze a variety of data visualizations based on Perforce data from our development organization and share how it has influenced our infrastructure and development practices.
The devops approach to monitoring, Open Source and Infrastructure as Code StyleJulien Pivotto
Monitoring is critical for every decent application that runs on production. Many of the monitoring tools widely used show their limits at the age of Infrastructure as Code and Cloud computing. Let's investigate how monitoring can face the new challenges: scalability, reproducability and automation
Infrastructure as Code represents treating infrastructure components like software that can be version controlled, tested, and deployed. The document discusses tools and techniques for implementing Infrastructure as Code including using version control, continuous integration/delivery, configuration automation, and virtual labs for testing changes. It provides examples of workflows using these techniques and recommends starting small and evolving Infrastructure as Code practices over time.
Webinar slides: Replication Topology Changes for MySQL and MariaDBSeveralnines
This document discusses replication topology changes for MySQL and MariaDB databases. It covers making changes using GTID or regular replication, the failover process, and tools like MaxScale and ProxySQL that can help automate query rerouting during a failover. Specific topics covered include reslaving nodes, setting up master-master replication, and performing both offline and online failovers.
How Samsung Engineers Do Pre-Commit Builds with Perforce Helix StreamsPerforce
Get an in-depth look at the life of a pre-commit build at Samsung using Perforce Helix Streams and Electric Cloud’s Electric Commander with Helix Swarm for code review.
Measure and Increase Developer Productivity with Help of Serverless at AWS Co...Vadym Kazulkin
The goal of Serverless is to focus on writing the code that delivers business value and offload everything else to your trusted partners (like Cloud providers or SaaS vendors). You want to iterate quickly and today’s code quickly becomes tomorrow’s technical debt. In this talk we will show why Serverless adoption increases the developer productivity and how to measure it. We will also go through AWS Serverless architectures where you only glue together different Serverless managed services relying solely on configuration, minimizing the amount of the code written.
The document provides a brief history of revision control systems including SCCS, RCS, CVS, Subversion, and distributed systems like Git, Mercurial, and Bazaar. It discusses the problems with earlier systems that motivated the creation of Git, including issues with CVS and Subversion. It describes how Linus Torvalds created Git to address these problems and support fast, distributed, and non-linear development workflows.
A Practical Guide to Selecting a Stream Processing Technology confluent
Presented by Michael Noll, Product Manager, Confluent.
Why are there so many stream processing frameworks that each define their own terminology? Are the components of each comparable? Why do you need to know about spouts or DStreams just to process a simple sequence of records? Depending on your application’s requirements, you may not need a full framework at all.
Processing and understanding your data to create business value is the ultimate goal of a stream data platform. In this talk we will survey the stream processing landscape, the dimensions along which to evaluate stream processing technologies, and how they integrate with Apache Kafka. Particularly, we will learn how Kafka Streams, the built-in stream processing engine of Apache Kafka, compares to other stream processing systems that require a separate processing infrastructure.
OnAndroidConf 2013: Accelerating the Android Platform BuildDavid Rosen
Presented at the OnAndroidConf, October 22 2013, http://www.onandroidconf.com/sessions.html
Abstract:
Optimizing the Android build environment to perform at world-class level is a big challenge for many Android device and chipset makers today. Churning through thousands of platform builds per week requires laser-focus on high-performance infrastructure and tooling. If you’re looking at improving your overall engineering and developer productivity, the software build use case is an obvious area to prioritize.
This technical talk will focus on the following aspects of the Android platform build:
Common Android platform build challenges and opportunities with real-life production references
The various Android build use cases and their needs – full integration and release builds, developer incremental builds
Evolution of the Android build and codebase with trends and statistics
Detailed technical analysis of the Android platform build, highlighting opportunities for improvements
Proposed solutions and technical tricks to optimize an Android software build environment
The document discusses deployment pipelines for databases. It defines a deployment pipeline and describes its typical stages: change description, change validation, and change implementation. It outlines challenges of including databases in deployment pipelines, such as different processes for database and application changes. The document advocates for automating database deployments to increase deployment speed and reliability while reducing risk. It provides examples of database deployment pipeline scenarios and considerations for continuous integration, delivery, and rollbacks.
eZ Publish 5: from zero to automated deployment (and no regressions!) in one ...Gaetano Giunta
1. The workshop will cover Docker, managing environments, database changes, and automated deployments for eZPublish websites.
2. A Docker stack is proposed that includes containers for Apache, MySQL, Solr, PHP, and other tools to replicate a production environment for development. Configuration and code are mounted as volumes.
3. Managing environments involves storing settings in the code repository and using symlinks to deploy different configurations. Database changes should be managed via migration scripts rather than connecting directly to a shared database.
4. Automating deployments is important and involves tasks like updating code, the database, caches and reindexing content. The same deployment script should be used for development and production. Testing websites is also recommended.
The document discusses best practices for building and deploying Scala applications based on the 12 Factor App methodology. It covers topics like managing dependencies, separating configuration from code, building in a simple and automated way, scaling apps through stateless processes, achieving parity between development and production environments, and running admin tasks isolated from the main app. The presentation provides examples using tools like sbt, Dropwizard, and Heroku to demonstrate how to structure Scala apps according to the 12 factors.
Continuous integration using Jenkins and SonarPascal Larocque
Continuous Integration can help your to team release features faster. It reduces the risk of deployment issue and will speed up your development cycle. In this presentation we take a look at how Jenkins and Sonar can help you Test, Analyze, Deploy and gather performance metrics that will help your team increase their development quality and reduce deployment time
Adopting Java for the Serverless world at AWS User Group PretoriaVadym Kazulkin
Java is for many years one of the most popular programming languages, but it used to have hard times in the Serverless Community. Java is known for its high cold start times and high memory footprint. For both you have to pay to the cloud providers of your choice. That's why most developers tried to avoid using Java for such use cases. But the times change: Community and cloud providers improve things steadily for Java developers. In this talk we look at the features and possibilities AWS cloud provider offers for the Java developers and look the most popular Java frameworks, like Micronaut, Quarkus and Spring (Boot) and look how (AOT compiler and GraalVM native images play a huge role) they address Serverless challenges and enable Java for broad usage in the Serverless world.
Systematic Load Testing of Web ApplicationsJürg Stuker
Talk held at the conference Coding Serbia in Novi Sad.
Performance of web applications is a crucial dissatisfier for users and thus an important quality criteria -- also used by Google to rank their result lists. As with other quality aspects, performance testing cannot be done at the end of a project but is an integral part of the development process.
The practice presentations submitted explains web performance testing along practical examples in order to better understand and judge cause and effect of behavior observed. Usually few causes have a disproportionate effect on bad performance. In addition, it is important to understand diverse load and test scenarios to optimize application behavior.
The presentation also introduces a methodology to systematically define and assess performance metrics of an application. The content is based on open source tools and the presentation includes live testing to illustrate the excellent cost benefit ratio of systematically white box testing of performance using an HTTP proxy.
The document discusses making a stateless service-oriented application highly available using GlusterFS. It describes setting up a GlusterFS cluster with replicated volumes to provide a centralized data store. The application is configured to mount the GlusterFS volume and an update mechanism is built to notify the application when data changes by monitoring the volume for modifications. This allows making the application redundant and aware of data changes for high availability.
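The update mechanism described above can be sketched in a few lines. The talk does not specify its implementation, so this is a minimal, hypothetical Python version that polls a mounted volume (such as a GlusterFS mount point) and fires a callback when any file's modification time advances; the function name and parameters are my own.

```python
import os
import time

def watch_for_changes(path, on_change, interval=1.0, iterations=None):
    """Poll a mounted volume (e.g. a GlusterFS mount) and invoke
    on_change whenever the newest mtime in the tree advances."""
    def latest_mtime():
        newest = os.stat(path).st_mtime
        for root, _dirs, files in os.walk(path):
            for name in files:
                try:
                    newest = max(newest, os.stat(os.path.join(root, name)).st_mtime)
                except FileNotFoundError:
                    pass  # file vanished between listing and stat; ignore
        return newest

    last = latest_mtime()
    count = 0
    while iterations is None or count < iterations:
        time.sleep(interval)
        current = latest_mtime()
        if current > last:
            on_change(current)  # notify the application that data changed
            last = current
        count += 1
```

A production system would more likely use inotify or a message bus instead of polling, but the polling sketch shows the contract: every replica mounts the same volume and reacts to the same change events.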
Code Yellow: Helping Operations Top-Heavy Teams the Smart WayTodd Palino
All engineering teams run into trouble from time to time. Alert fatigue, caused by technical debt or a failure to plan for growth, can quickly burn out SREs, overloading both development and operations with reactive work. Layer in the potential for communication problems between teams, and we can find ourselves in a place so troublesome we cannot easily see a path out. At times like this, our natural instinct as reliability engineers is to double down and fight through the issues. Often, however, we need to step back, assess the situation, and ask for help to put the team back on the road to success.
We will look at the process for Code Yellow, the term we use for this process of “righting the ship”, and discuss how to identify teams that are struggling. Through a look at three separate experiences, we will examine some of the root causes, what steps were taken, and how the engineering organization as a whole supports the process.
Eberhard Wolff discusses several factors that contribute to creating changeable software beyond just architecture. He emphasizes that automated testing, following a test pyramid approach, continuous delivery practices like automated deployment, and understanding the customer's priorities are all important. While architecture is a factor, there are no universal rules and the architect's job is to understand each project's unique needs.
URP? Excuse You! The Three Metrics You Have to Know confluent
(Todd Palino, LinkedIn) Kafka Summit SF 2018
What do you really know about how to monitor a Kafka cluster for problems? Is your most reliable monitoring your users telling you there’s something broken? Are you capturing more metrics than the actual data being produced? Sure, we all know how to monitor disk and network, but when it comes to the state of the brokers, many of us are still unsure of which metrics we should be watching, and what their patterns mean for the state of the cluster. Kafka has hundreds of measurements, from the high-level numbers that are often meaningless to the per-partition metrics that stack up by the thousands as our data grows.
We will thoroughly explore three key monitoring concepts in the broker that will leave you an expert in identifying problems with the least amount of pain:
-Under-replicated Partitions: The mother of all metrics
-Request Latencies: Why your users complain
-Thread pool utilization: How could 80% be a problem?
We will also discuss the necessity of availability monitoring and how to use it to get a true picture of what your users see, before they come beating down your door!
DESIGN West 2013 Presentation: Accelerating Android Development and DeliveryDavid Rosen
This document discusses accelerating the Android development process. It begins by noting the widespread use of Android and the challenges of slow builds and testing. It then outlines techniques for speeding up builds and Compatibility Test Suite (CTS) execution, including using more efficient build tools, dependency optimization, and test parallelization. Faster development cycles can be achieved through an integrated continuous delivery solution that applies these acceleration strategies and provides end-to-end process visibility.
Modular software design is an ingrained methodology, but how can Perforce help with that modularity? See how WMS Gaming and Perforce have taken the lid off p4 sync and stuffed it with some extra brain power to make modular version control "automagic."
Configuration and Build Management of Product Line Development with Perforce Perforce
This session provides an in-depth look at best practices for component-based software development practices in configuration and build management. Learn how to manage the challenges often faced by companies with many products, diverse teams and daily releases.
[Nvidia] Extracting Depot Paths Into New Instances of Their OwnPerforce
This document outlines a method for extracting sections of a Perforce depot into new instances to address issues with growing database size. The method uses the Perfsplit tool along with additional steps to enable zero downtime migration, prevent duplicate depot names, and fully migrate integration history. Key steps include restricting access to prepare the data, using Perfsplit to build the new instance foundation, converting paths and metadata, verifying the new instance, and cleaning up extra data. This process aims to make Perfsplit more suitable for large installations needing to split Perforce depots.
Perforce offers a more productive solution for code management compared to Subversion. While Subversion has no upfront licensing costs, it leads to slower productivity and wasted resources due to poor workflows, limited scalability, and outdated features. In contrast, Perforce is optimized for agile development and continuous delivery with proven performance even for large projects and global teams. Migrating to Perforce from Subversion can help regain productivity losses and avoid hidden costs that outweigh licensing fees.
Granular Protections Management with TriggersPerforce
Managing the Perforce Helix protections table can be unwieldy at best. Learn how we implemented a trigger-based system that removes the need for an administrator to manually edit the protections table. By granting ownership of individual projects or codelines in the protections table, we can allow project managers to control permissions to a path without worrying about mistakes that could affect the entire company.
[Lucas Films] Using a Perforce Proxy with Alternate TransportsPerforce
The document discusses using a Perforce proxy with an alternate transport like UDP to overcome high latency or low bandwidth networks between global sites. It describes how a Perforce proxy caches files to improve transfer speeds but is traditionally limited by TCP/IP. The author details using Aspera Sync to mirror the Perforce server to the proxy at much higher speeds of 20-25 MB/s over UDP, improving a 1GB file transfer from 4 hours to just minutes. This solution leverages the stateless nature of the proxy and removes the dependence on TCP/IP for large data sharing between remote offices.
[Mentor Graphics] A Perforce-based Automatic Document Generation SystemPerforce
The document describes an automatic documentation generation system used by DVT Technical Publications to generate product documentation libraries. The system utilizes Perforce for document version control and management. When documents are checked into Perforce, a pubs4d utility runs docgen to generate HTML, PDF, and update the documentation library (InfoHub). This provides a "correct-by-construction" InfoHub that is continually updated. The process allows for real-time updates and integration of last minute changes. Authors simply check documents in and out of Perforce to edit and release documentation.
[Webinar] The Changing Role of Release Engineering in a DevOps World with J. ...Perforce
The rise of DevOps is revitalizing age-old topics in release engineering and application lifecycle management, and aspects of software delivery that DevOps doesn’t magically solve. If you're responsible for the release engineering function in your organization, see what the new world looks like and which aspects of the industry it’s leaving behind.
The document provides a cheat sheet comparing the tools and commands for the Perforce Visual Client (P4V) and the Perforce Command-Line Client. It lists the main toolbar icons and functions in P4V, such as refreshing the view, checking out files, adding/deleting files, and submitting changelists. It also provides a list of over 50 common Perforce commands that can be used in the command-line client to perform similar functions like adding/deleting files, viewing file histories, integrating/merging files, and more. The cheat sheet is intended to help users quickly understand the key similarities and differences between the graphical and command-line Perforce clients.
[AMD] Novel Use of Perforce for Software Auto-updates and File TransferPerforce
1) Users at AMD leverage Perforce to create a file transfer mechanism between Windows and Linux that allows seamless transfer of files for pre-submit developer builds without complex permission or protocol setup between OSes.
2) The mechanism uses an intermediary Perforce depot to upload modified files from local machines. It then downloads the files to overlay changes for accelerated compilation and testing before official submission.
3) The file transfer mechanism includes a self-updating client that silently syncs the latest version from Perforce every use, ensuring developers always use the most recent version without manual updates.
[SAP] Perforce Administrative Self Services at SAPPerforce
SAP has transitioned from a purely centralized approach to user and project management in Perforce to a more decentralized model. They implemented a central user database synchronized to over 100 Perforce server instances. Initially, all administration was done centrally but they introduced "admin groups" allowing project managers to self-manage user access for their projects. They also allow any user to request access to projects and are piloting self-service project creation for mobile development projects to reduce wait times. Future plans include full self-service project creation across all projects and depots.
Perforce offers a more productive, cost-effective, and easier to use alternative to ClearCase for version control. ClearCase requires high licensing fees and multiple highly skilled administrators due to its slow performance and complex tooling, resulting in lost developer productivity. In contrast, Perforce has proven to be dramatically faster and less expensive to administer with minimal staff. Many companies have migrated from ClearCase to Perforce and recouped their initial investment quickly through faster software delivery and lower operating costs.
[NetApp] Simplified HA:DR Using Storage SolutionsPerforce
This document discusses using NetApp storage solutions to simplify high availability (HA) and disaster recovery (DR) for Perforce server deployments. It provides an example architecture where NetApp features like SnapMirror are used to replicate data between sites, improving HA and minimizing data loss during a disaster. The architecture is scalable for large enterprises and helps meet requirements like performance, capacity, global access, and data protection.
[NetherRealm Studios] Game Studio Perforce ArchitecturePerforce
This document summarizes the Perforce architecture implemented at NetherRealm Studios to support their large-scale game development. They migrated to a virtualized infrastructure using VMware hosted on Cisco UCS servers with NetApp storage. NetApp NFS provided better performance than iSCSI for VM and Perforce storage. Perforce servers use high-RAM, high-core Linux VMs with direct NFS mounts. Redundancy and reliability are ensured through NetApp snapshots, VMware HA, load balancing, and replicas. Management tools provide oversight of resource usage and failures across the stack. This architecture has allowed Perforce to scale with the studio's data-intensive development needs.
[NetApp] Managing Big Workspaces with Storage MagicPerforce
The document describes how NetApp FlexClone technology can be used with Perforce to quickly clone large workspaces in minutes rather than hours. FlexClone allows instant clones of data volumes that only use additional storage space when data blocks are modified. The steps outlined include creating a FlexClone volume from a snapshot of a template workspace, changing file ownership, configuring the Perforce client, and using commands like "p4 flush" to populate the new workspace instantly. This approach improves developer productivity over traditional slow methods of populating workspaces.
Microservices allow for extensible app architecture and a vendor-agnostic, scalable infrastructure. While microservices simplify app deployments, they come at a price: because they’re so fragmented, it is more difficult to track and manage all the independent, yet interconnected components of an app. All this information (requirements, code, test cases and results, build artifacts, and deployment blueprints) needs to live somewhere and most importantly be versioned. Using a real example and a live demonstration of Perforce Helix, Docker and Selenium, get best practices and tips for enabling a robust, scalable and extensible pipeline to support today’s modern app delivery.
From ClearCase to Perforce Helix: Breakthroughs in Scalability at IntelPerforce
See how the Intel Security and Sensors Firmware team transitioned from IBM ClearCase to Perforce Helix with Microsoft TFS to enable robust and scalable ALM and CI with full traceability. Discover how Intel consolidated and converged 15 different development methodologies used to drive firmware projects to three single paths for all Intel platforms.
This document describes the Perforce configuration management system used at MathWorks. It discusses MathWorks' Perforce infrastructure which includes a master server, replicas for load balancing and high availability, and proxies. It also describes how configuration files are used to define and manage the infrastructure, including services, failover processes, and cron jobs. Specific examples are provided around automating workspace updates across multiple global locations.
Could you release off your mainline today? In our fast-paced world, well-scheduled releases have become a thing of the past. Now more than ever, you must maintain clean, well-tested codelines that can be shipped at any moment. At the last Merge we talked about how these increased demands pushed Xilinx to develop automation that validates every change before submission. In this talk we continue that discussion, covering the evolution of our tools over the past two years as we have battled with more developers, more products, and faster code churn than ever before.
[Citrix] Perforce Standardisation at CitrixPerforce
This document describes the Perforce standard environment (PSE) created at Citrix Systems to simplify managing multiple Perforce instances. Previously, Citrix had many isolated Perforce instances set up over 10+ years without standardization, causing management and performance issues. The new PSE uses a "mesh network" approach with proxy servers to provide a single access point for all instances, regardless of physical location. It also implemented a standardized build system called "Solera" to help developers deal with code from multiple ports. The PSE has improved stability, reduced downtime, and enhanced disaster recovery capabilities at Citrix.
The Problems with Redux: Are MobX and Realm going to put an end to it?Quantum Mob
We all know redux is great but comes along with tons of boilerplate configuration and architecture. Can React's local state, MobX and Realm solve this?
Based on my article: https://blog.qmo.io/the-problems-with-redux-and-alternatives-local-state-mobx-realm/
The document discusses the challenges that retail companies face with database downtime and changes. It notes that downtime can result in significant lost revenue for retailers. Common database development practices like using scripts are outlined as being difficult to maintain and not providing adequate change control. The solution, DBmaestro TeamWork, is presented as providing database version control, enforcement of best practices for database changes, automated deployments between environments, and overall improved productivity and quality.
This talk discusses how we structure our analytics information at Adjust. The analytics environment consists of 20+ 20TB databases and many smaller systems for a total of more than 400 TB of data. See how we make it work, from structuring and modelling the data through moving data around between systems.
.NET Core Summer event 2019 in Linz, AT - War stories from .NET team -- Karel...Karel Zikmund
.NET Core Summer event, 2019 in Linz, AT - 2019/7/23
Talk: War stories from .NET team by Karel Zikmund
https://www.meetup.com/NET-Stammtisch-Linz/events/261637908/
A Modern Architecture Review: Using Code Review Tools (ver 3.5)SSW
For any project that is critical to the business, it’s important to do ‘Modern Architecture Reviews’. Being an architect is fun, you get to design the system, do ongoing code reviews, and play the bad ass. It is even more fun when using modern cool tools.
You are a clever and talented person. You create beautiful designs, or perhaps you can architect a system that even a cat could use. Your peers adore you. Your clients love you. But (until now) you haven't *&^#^ been able to make Git bend to your will. It makes you angry inside that you have to ask your co-worker, again, for that *&^#^ command to share your work.
It's not you. It's Git. Promise.
We'll kick off this session with an explanation of why Git is so freaking hard to learn. Then we'll flip the tables and make YOU (not Git) the centre of attention. You'll learn how to define, and sketch out how version control works, using terms and scenarios that make sense to you. Yup, sketch. On paper. (Tablets and other electronic devices will be allowed, as long as you promise not to get distracted choosing the perfect shade for rage.) To this diagram you'll layer on the common Git commands that are used regularly by efficient Git-using teams. It'll be the ultimate cheat sheet, and specific to your job. If you think this sounds complicated, it's not! Your fearless leader, Emma Jane, has been successfully teaching people how-to-tech for over a decade. She is well known for her non-technical metaphors which ease learners into complex, work-related topics that previously felt inaccessible.
Yes, this is an introductory session. No, you don't have to have Git installed to attend. You don't even need to know where the command line is on your computer. Yes, you should attend if you've been embarrassed to ask team-mates what Git command you used three weeks ago to upload your work...just in case you're supposed to remember.
If you're a super-human Git fanatic who is frustrated by people who don't just "git it", this session is also for you. You'll learn new ways to effectively communicate your ever-loving Git, and you may develop a deeper understanding of why your previous attempts to explain Git have failed.
This document discusses various programming anti-patterns organized into three sections: programming anti-patterns, methodological anti-patterns, and configuration management anti-patterns. Some of the programming anti-patterns discussed include accidental complexity, blind faith, boat anchor, cargo cult programming, coding by exception, error hiding, hard coding, magic numbers, spaghetti code, and incorrect exceptions usage. Some methodological anti-patterns discussed include copy and paste programming, golden hammer, improbability factor, premature optimization, and premature pessimization.
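As an illustration of the "magic numbers" and "hard coding" anti-patterns mentioned above, here is a small Python example of my own (not from the document): the first version buries unexplained constants in the logic, the second names them so intent is explicit and changes stay local.

```python
# Anti-pattern: magic numbers. The meaning of 0.21 and 86400 is opaque,
# and the same values may be duplicated across the codebase.
def price_with_tax_bad(net):
    return net * (1 + 0.21)

def is_expired_bad(age_seconds):
    return age_seconds > 86400 * 30

# Fix: name the constants once. (The VAT rate here is an arbitrary
# example value, not a recommendation.)
VAT_RATE = 0.21
SECONDS_PER_DAY = 86_400
EXPIRY_DAYS = 30

def price_with_tax(net):
    return net * (1 + VAT_RATE)

def is_expired(age_seconds):
    return age_seconds > SECONDS_PER_DAY * EXPIRY_DAYS
```

The behavior is identical; the difference is that a reader (and a future maintainer changing the tax rate) now has exactly one obvious place to look.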
Harper Reed, the keynote speaker, discussed his experience as CTO of Obama for America's 2012 campaign, noting the massive scale of building technology for a presidential campaign. Other speakers discussed emerging technologies like touch screens, CSS preprocessors, single-page applications, server-side tools for testing, Node.js streams and events, open source challenges, and crafting URLs independent of content management systems. Overall the conference covered front-end development, web applications, Node.js, and rethinking technologies.
.NET Core Summer event 2019 in Brno, CZ - War stories from .NET team -- Karel...Karel Zikmund
.NET Core Summer event, 2019 in Brno, CZ - 2019/7/9
Talk: War stories from .NET team by Karel Zikmund
https://www.wug.cz/brno/akce/1152--NET-Core-Summer-Event
Performance is a key aspect when developing an application, but for developers, production performance usually is a black box. When production problems arise, a lack of insight into log files and performance metrics forces us to reproduce issues locally before we can start to tackle the root cause. Using real world examples, we show how a unified performance management platform helps teams across the lifecycle to monitor applications, detect problems early on, and collect data that enables developers to efficiently solve problems.
This document discusses various techniques for inter-process communication and synchronization between concurrent processes. It covers topics like mutual exclusion, semaphores, monitors, and classical synchronization problems. Mutual exclusion is required to prevent race conditions when accessing shared resources. Common solutions discussed are software algorithms, hardware support using test-and-set operations, and operating system semaphores. Monitors provide synchronization through condition variables. Message passing enables communication and synchronization between distributed processes.
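To make the mutual-exclusion point concrete, here is a minimal Python sketch of my own (not from the document): several threads increment a shared counter, and a lock protects the read-modify-write so no updates are lost.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # Critical section: without the lock, the read-modify-write
        # below could interleave between threads and lose updates.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000: mutual exclusion keeps the count exact
```

A semaphore generalizes the same idea by admitting up to N threads into the critical region instead of one; a monitor bundles the lock together with the data it protects.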
Software Carpentry and the Hydrological Sciences @ AGU 2013Aron Ahmadia
This document discusses bringing computational skills training to hydrologists through Software Carpentry workshops. It notes that while many hydrologists are focused on their research, computational methods are now essential. Software Carpentry teaches practical skills like the Unix shell, version control with Git, Python and R programming, and databases. These intensive, short workshops have been effective at training graduate students. The document encourages hydrologists to host their own workshops and support computational literacy by discussing code and practices in their papers.
This document discusses various techniques for measuring and improving application performance. It begins by explaining the importance of measuring performance at the machine, component, and request levels. This includes collecting metrics on CPU, memory, I/O, logs, and tracing requests. Once issues are identified, the document recommends actions like caching, queueing work, and rearchitecting systems using service-oriented principles to improve performance. It stresses the importance of an ongoing process of measuring, analyzing data, taking action, and verifying the impact of changes.
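Of the corrective actions listed above, caching is the easiest to demonstrate. The sketch below is my own minimal Python example (not from the document): memoizing an expensive function with `functools.lru_cache`, with a counter showing how often the underlying work actually runs.

```python
import functools

call_count = 0

@functools.lru_cache(maxsize=None)
def expensive_lookup(key):
    """Stand-in for a slow I/O or compute step; call_count records
    how many times the underlying work really executed."""
    global call_count
    call_count += 1
    return key * 2

# Five calls, but only two distinct keys, so only two cache misses.
results = [expensive_lookup(k) for k in (1, 2, 1, 2, 1)]
print(results, call_count)  # [2, 4, 2, 4, 2] 2
```

This mirrors the measure-then-act loop the document describes: the counter is the measurement that verifies the cache actually removed the repeated work.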
Alexey Yashchenko and Yaroslav Voloshchuk, "False simplicity of front-end applications"Fwdays
It’s easy to underestimate a front-end project's complexity, which leads to shallow and thus incorrect implementation. Attempts to fix this problem result in uncontrolled complexity growth and undefined behavior in corner cases.
We'll discuss ways of revealing the inherent complexity of a problem and dealing with it both on theoretical and practical levels.
The slides for my UBC Alumni talk on programming for the Cloud. I show Cloud Foundry as an example of an open cloud platform and how easy it is to create modular, scalable applications using it.
The document discusses moving from applications to the enterprise architecture. It begins with an introduction to the speaker and their experience. It then outlines a plan to discuss the history of software development, the software portfolio, breaking down silos, and layering the enterprise. For each section, it provides details on the concepts and examples to illustrate how to analyze applications and integrate them at the enterprise level to avoid silos. The goal is to show how to evolve individual applications into an integrated enterprise architecture.
This document discusses database automation and the mistrust that can exist around it. A survey found that while continuous delivery is on the rise, database automation sees less adoption due to mistrust. Database changes can impact whole systems, so any automation must be done carefully. Script-based version control and deployment can lead to issues like out-of-process changes and working on wrong revisions. Integrating databases into version control and continuous delivery processes through tools like DBmaestro can bring more visibility, control and trust to database changes and deployments. This is done by enforcing best practices, tracking who made changes, and facilitating automated but safe deployments through capabilities like baseline comparisons and impact analysis.
The promise of DevOps is that we can push new ideas out to market faster while avoiding delivering serious defects into production. Andreas Grabner explains that testers are no longer measured by the number of defect reports they enter, nor are developers measured by the lines of code they write. As a team, you are measured by how fast you can deploy high quality functionality to the end user. Achieving this goal requires testers to increase their skills. It’s all about finding solutions—not just problems. Testers must transition from reporting “app crashes” to providing details such as “memory leak caused by bad cache implementation.” Instead of reporting “it’s slow,” testers must discover “wrong hibernate configuration causes too much traffic from the database.” Using three real-life examples, Andreas illustrates what it takes for testing teams to become part of the DevOps transformation—bringing more value to the entire organization.
Does Git make you angry inside? In this workshop you will get a gentle introduction to working efficiently as a Web developer in small teams, or as a solo developer. We'll focus on real world examples you can actually use to make your work faster and more efficient. Windows? OSX? Linux? No problem, we'll get you up and running with Git, no matter what your system. Yes, this is an introductory session. This is for people who feel shame that they don't know how to "clone my github project", wish they too could "get the gist", and get mad when people say "just diff me a patch" as if it's something as easy as making a mai tai even though you have no rum. No, you don't have to have git installed to attend. You don't even need to know where the command line is on your computer.
This document discusses the importance of simplicity in software development. It notes that complexity is everywhere and that making things simple is difficult. It provides several principles for achieving simplicity, including following agile and XP rules, minimizing duplication, maximizing clarity, avoiding unnecessary coupling, and practicing refactoring and removing duplication. The document advocates for better naming, consistency, functional programming techniques, and practicing simplicity principles. It provides additional resources on extreme programming and simple design.
Similar to Outsmarting Merge Edge Cases in Component Based Design (20)
How to Organize Game Developers With Different Planning NeedsPerforce
Different skills have different needs when it comes to planning. For a coder it may make perfect sense to plan work in two-week sprints, but for an artist, an asset may take longer than two weeks to complete.
How do you allow different skills to plan the way that works best for them? Some studios may choose to open up for full flexibility (do whatever you like!), but that tends to cause issues with alignment and silos of data, resulting in loss of vision: the project becomes difficult to understand, but also, and maybe more importantly, you risk losing the vision of what the game will be.
With the right approach, however, you can avoid these obstacles. Join backlog expert Johan Karlsson to learn:
-The balance of team autonomy and alignment.
-How to use the product backlog to align the project vision.
-How to use tools to support the flexibility you need.
Looking for a planning and backlog tool? You can try Hansoft for free.
Regulatory Traceability: How to Maintain Compliance, Quality, and Cost Effic...Perforce
How do regulations impact your product requirements? How do you ensure that you identify all the needed requirements changes to meet these regulations?
Ideally, your regulations should live alongside your product requirements, so you can trace among each related item. Getting to that point can be quite an undertaking, however. Ultimately you want a process that:
-Saves money
-Ensures quality
-Avoids fines
If you want help achieving these goals, this webinar is for you. Watch Tom Totenberg, Senior Solutions Engineer for Helix ALM, show you:
-How to import a regulation document into Helix ALM.
-How to link to requirements.
-How to automate impact analysis from regulatory updates.
Efficient Security Development and Testing Using Dynamic and Static Code Anal...Perforce
Be sure to register for a demo, if you would like to see how Klocwork can help ensure that your code is secure, reliable, and compliant.
https://www.perforce.com/products/klocwork/live-demo
If it’s not documented, it didn’t happen.
When it comes to compliance, if you’re doing the work, you need to prove it. That means having well-documented SOPs (standard operating procedures) in place for all your regulated workflows.
It also means logging your efforts to enforce these SOPs. The logs show that you took appropriate action in any number of scenarios, which can relate to regulations, change requests, the firing of an employee, logging an HR complaint, or anything else that needs a structured workflow.
But when do you need to do this, and how do you go about it?
In this webinar, Tom Totenberg, our Helix ALM senior solutions engineer, clarifies workflow enforcement SOPs, along with a walkthrough of how Perforce manages GDPR (General Data Protection Regulation) requests. He’ll cover:
-What are SOPs?
-Why is it important to have this documentation?
-Example: walking through our internal Perforce GDPR process.
-What to beware of.
-Building the workflow in ALM.
Branching Out: How To Automate Your Development ProcessPerforce
If you could ship 20% faster, what would it mean for your business? What could you build? Better question, what’s slowing your teams down?
Teams struggle to manage branching and merging. For bigger teams and projects, it gets even more complex. Tracking development using a flowchart, team wiki, or a white board is ineffective. And attempts to automate with complex scripting are costly to maintain.
Remove the bottlenecks and automate your development your way with Perforce Streams, the flexible branching model in Helix Core.
Join Brad Hart, Chief Technology Officer, and Brent Schiestl, Senior Product Manager for Perforce version control, to learn how Streams can:
-Automate and customize development and release processes.
-Easily track and propagate changes across teams.
-Boost end user efficiency while reducing errors and conflicts.
-Support multiple teams, parallel releases, component-based development, and more.
How to Do Code Reviews at Massive Scale For DevOpsPerforce
Code review is a critical part of your build process. And when you do code review right, you can streamline your build process and achieve DevOps.
Most code review tools work great when you have a team of 10 developers. But what happens when you need to scale code review to 1,000s of developers? Many will struggle. But you don’t need to.
Join our experts Johan Karlsson and Robert Cowham for a 30-minute webinar. You’ll learn:
-The problems with scaling code review from 10s to 100s to 1,000s of developers along with other dimensions of scale (files, reviews, size).
-The solutions for dealing with all dimensions of scale.
-How to utilize Helix Swarm at massive scale.
Ready to scale code review and streamline your build process? Get started with Helix Swarm, a code review tool for Helix Core.
By now many of us have had plenty of time to clean and tidy up our homes. But have you given your product backlog and task tracking software as much attention?
To keep your digital tools organized, it is important to avoid hoarding on to inefficient processes. By removing the clutter in your product backlog, you can keep your teams focused.
It’s time to spark joy by cleaning up your planning tools!
Join Johan Karlsson — our Agile and backlog expert — to learn how to:
-Apply digital minimalism to your tracking and planning.
-Organize your work by category.
-Motivate teams by transitioning to a cleaner way of working.
TRY HANSOFT FREE
Going Remote: Build Up Your Game Dev Team Perforce
Everyone’s working remote as a result of the coronavirus (COVID-19). And while game development has always been done with remote teams, there’s a new challenge facing the industry.
Your audience has always been mostly at home – now they may be stuck there. And they want more games to stay happy and entertained.
So, how can you enable your developers to get files and feedback faster to meet this rapidly growing demand?
In this webinar, you’ll learn:
-How to meet the increasing demand.
-Ways to empower your remote teams to build faster.
-Why Helix Core is the best way to maximize productivity.
Plus, we’ll share our favorite games keeping us happy in the midst of a pandemic.
Shift to Remote: How to Manage Your New WorkflowPerforce
The spread of coronavirus has fundamentally changed the way people work. Companies around the globe are making an abrupt shift in how they manage projects and teams to support their newly remote workers.
Organizing suddenly distributed teams means restructuring more than a standup. To facilitate this transition, teams need to update how they collaborate, manage workloads, and maintain projects.
At Perforce, we are here to help you maintain productivity. Join Johan Karlsson — our Agile expert — to learn how to:
Keep communication predictable and consistent.
-Increase visibility across teams.
-Organize projects, sprints, Kanban boards and more.
-Empower and support your remote workforce.
Hybrid Development Methodology in a Regulated WorldPerforce
In a regulated industry, collaboration can be vital to building quality products that meet compliance. But when an Agile team and a Waterfall team need to work together, it can feel like mixing oil with water.
If you're used to Agile methods, Waterfall can feel slow and unresponsive. From a Waterfall perspective, pure Agile may lack accountability and direction. Misaligned teams can slow progress, and expose your development to mistakes that undermine compliance.
It's possible to create the best of both worlds so your teams can operate together harmoniously. This is how to develop products quickly, and still make regulators happy.
Join ALM Solutions Engineer Tom Totenberg in this webinar to learn how teams can:
- Operate efficiently with differing methodologies.
- Glean best practices for their tailored hybrid.
- Work together in a single environment.
Watch the webinar, and when you're ready for a tool to help you with the hybrid, know that you can try Helix ALM for free.
Better, Faster, Easier: How to Make Git Really Work in the EnterprisePerforce
There's a lot of reasons to love Git. (Git is awesome at what it does.) Let’s look at the 3 major use cases for Git in the enterprise:
1. You work with third party or outsourced development teams.
2. You use open source in your products.
3. You have different workflow needs for different teams.
Making the best of Git can be difficult in an enterprise environment. Trying to manage all the moving parts is like herding cats.
So, how do you optimize your teams’ use of Git — and make it all fit into your vision of the enterprise SDLC?
You’ll learn about:
-The challenges that accompany each use case — third parties, open source code, different workflows.
-Ways to solve these problems.
-How to make Git better, faster, and easier — with Perforce
Easier Requirements Management Using Diagrams In Helix ALMPerforce
Sometimes requirements need visuals. Whether it’s a diagram that clarifies an idea or a screenshot to capture information, images can help you manage requirements more efficiently. And that means better quality products shipped faster.
In this webinar, Helix ALM Professional Services Consultant Gerhard Krüger will demonstrate how to use visuals in ALM to improve requirements. Learn how to:
-Share information faster than ever.
-Drag and drop your way to better teamwork.
-Integrate various types of visuals into your requirements.
-Utilize diagram and flowchart software for every need.
-And more!
Immediately apply the information in this webinar for even better requirements management using Helix ALM.
It’s common practice to keep a product backlog as small as possible, probably just 10-20 items. This works for single teams with one Product Owner and perhaps a Scrum Master.
But what if you have 100 Scrum teams managing a complex system of hardware and software components? What do you need to change to manage at such a massive scale?
Join backlog expert Johan Karlsson to learn how to:
-Adapt Agile product backlog practices to manage many backlogs.
-Enhance collaboration across disciplines.
-Leverage backlogs to align teams while giving them flexibility.
Achieving Software Safety, Security, and Reliability Part 3: What Does the Fu...Perforce
In Part 3, we will look at what the future might hold for embedded programming languages and development tools. And, we will look at the future for software safety and security standards.
How to Scale With Helix Core and Microsoft Azure Perforce
This document discusses how to scale Helix Core using Microsoft Azure. It begins by explaining the benefits of using Helix Core and Azure together, such as high performance, scalability, security integration, and availability. It then covers computing, storage, and security options on Azure, including virtual machine types and operating system choices. Next, it describes how to set up global deployments with Helix Core on Azure using techniques like proxies, replicas, and the Perforce federated architecture. It concludes with examples of advanced topologies like build servers, hybrid cloud/on-premises implementations, and multi-cloud considerations.
Achieving Software Safety, Security, and Reliability Part 2Perforce
In Part 2, we will focus on the automotive industry, as it leads the way in enforcing safety, security, and reliability standards as well as best practices for software development. We will then examine how other industries could adopt similar practices.
Modernizing an application’s architecture is often a necessary multi-year project in the making. The goal –– to stabilize code, detangle dependencies, and adopt a toolset that ignites innovation.
Moving your monolith repository to a microservices/component based development model might be on trend. But is it right for you?
Before you break up with anything, it is vital to assess your needs and existing environment to construct the right plan. This can minimize business risks and maximize your development potential.
Join Tom Tyler and Chuck Gehman to learn more about:
-Why you need to plan your move with the right approach.
-How to reduce risk when refactoring your monolithic repository.
-What you need to consider before migrating code.
Achieving Software Safety, Security, and Reliability Part 1: Common Industry ...Perforce
In part one of our three-part webinar series, we examine common software development challenges, review the safety and security standards adopted by different industries, and examine the best practices that can be applied to any software development team.
The features you’ve been waiting for! Helix ALM’s latest update expands usability and functionality to bring solid improvements to your processes.
Watch Helix ALM Senior Product Manager Paula Rome demonstrate how new features:
-Simplify workflows.
-Expand report analysis.
-Boost productivity in the Helix ALM web client.
All this and MORE packed into an exciting 30 minutes! Get inspired. Be extraordinary with the new Helix ALM.
Companies that track requirements, create traceability matrices, and complete audits - especially for compliance - run into many problems using only Word and Excel to accomplish these tasks.
Most notably, manual processes leave employees vulnerable to making costly mistakes and wasting valuable time.
These outdated tracking procedures rob organizations of benefiting from four keys to productivity and efficiency:
-Automation
-Collaboration
-Visibility
-Traceability
However, modern application lifecycle management (ALM) tools solve all of these problems, linking and organizing information into a single source of truth that is instantly auditable.
Gerhard Krüger, senior consultant for Helix ALM, explains how the right software supports these fundamentals, generating improvements that save time and money.
DECODING JAVA THREAD DUMPS: MASTER THE ART OF ANALYSISTier1 app
Are you ready to unlock the secrets hidden within Java thread dumps? Join us for a hands-on session where we'll delve into effective troubleshooting patterns to swiftly identify the root causes of production problems. Discover the right tools, techniques, and best practices while exploring *real-world case studies of major outages* in Fortune 500 enterprises. Engage in interactive lab exercises where you'll have the opportunity to troubleshoot thread dumps and uncover performance issues firsthand. Join us and become a master of Java thread dump analysis!
Measures in SQL (SIGMOD 2024, Santiago, Chile)Julian Hyde
SQL has attained widespread adoption, but Business Intelligence tools still use their own higher level languages based upon a multidimensional paradigm. Composable calculations are what is missing from SQL, and we propose a new kind of column, called a measure, that attaches a calculation to a table. Like regular tables, tables with measures are composable and closed when used in queries.
SQL-with-measures has the power, conciseness and reusability of multidimensional languages but retains SQL semantics. Measure invocations can be expanded in place to simple, clear SQL.
To define the evaluation semantics for measures, we introduce context-sensitive expressions (a way to evaluate multidimensional expressions that is consistent with existing SQL semantics), a concept called evaluation context, and several operations for setting and modifying the evaluation context.
A talk at SIGMOD, June 9–15, 2024, Santiago, Chile
Authors: Julian Hyde (Google) and John Fremlin (Google)
https://doi.org/10.1145/3626246.3653374
🏎️Tech Transformation: DevOps Insights from the Experts 👩💻campbellclarkson
Connect with fellow Trailblazers, learn from industry experts Glenda Thomson (Salesforce, Principal Technical Architect) and Will Dinn (Judo Bank, Salesforce Development Lead), and discover how to harness DevOps tools with Salesforce.
Malibou Pitch Deck For Its €3M Seed Roundsjcobrien
French start-up Malibou raised a €3 million Seed Round to develop its payroll and human resources
management platform for VSEs and SMEs. The financing round was led by investors Breega, Y Combinator, and FCVC.
DevOps Consulting Company | Hire DevOps Servicesseospiralmantra
Spiral Mantra excels in providing comprehensive DevOps services, including Azure and AWS DevOps solutions. As a top DevOps consulting company, we offer controlled services, cloud DevOps, and expert consulting nationwide, including Houston and New York. Our skilled DevOps engineers ensure seamless integration and optimized operations for your business. Choose Spiral Mantra for superior DevOps services.
https://www.spiralmantra.com/devops/
Boost Your Savings with These Money Management AppsJhone kinadey
A money management app can transform your financial life by tracking expenses, creating budgets, and setting financial goals. These apps offer features like real-time expense tracking, bill reminders, and personalized insights to help you save and manage money effectively. With a user-friendly interface, they simplify financial planning, making it easier to stay on top of your finances and achieve long-term financial stability.
Superpower Your Apache Kafka Applications Development with Complementary Open...Paul Brebner
Kafka Summit talk (Bangalore, India, May 2, 2024, https://events.bizzabo.com/573863/agenda/session/1300469 )
Many Apache Kafka use cases take advantage of Kafka’s ability to integrate multiple heterogeneous systems for stream processing and real-time machine learning scenarios. But Kafka also exists in a rich ecosystem of related but complementary stream processing technologies and tools, particularly from the open-source community. In this talk, we’ll take you on a tour of a selection of complementary tools that can make Kafka even more powerful. We’ll focus on tools for stream processing and querying, streaming machine learning, stream visibility and observation, stream meta-data, stream visualisation, stream development including testing and the use of Generative AI and LLMs, and stream performance and scalability. By the end you will have a good idea of the types of Kafka “superhero” tools that exist, which are my favourites (and what superpowers they have), and how they combine to save your Kafka applications development universe from swamploads of data stagnation monsters!
Odoo releases a new update every year. The latest version, Odoo 17, came out in October 2023. It brought many improvements to the user interface and user experience, along with new features in modules like accounting, marketing, manufacturing, websites, and more.
The Odoo 17 update has been a hot topic among startups, mid-sized businesses, large enterprises, and Odoo developers aiming to grow their businesses. Since it is now already the first quarter of 2024, you must have a clear idea of what Odoo 17 entails and what it can offer your business if you are still not aware of it.
This blog covers the features and functionalities. Explore the entire blog and get in touch with expert Odoo ERP consultants to leverage Odoo 17 and its features for your business too.
An Overview of Odoo ERP
Odoo ERP was first released as OpenERP software in February 2005. It is a suite of business applications used for ERP, CRM, eCommerce, websites, and project management. Ten years ago, the Odoo Enterprise edition was launched to help fund the Odoo Community version.
When you compare Odoo Community and Enterprise, the Enterprise edition offers exclusive features like mobile app access, Odoo Studio customisation, Odoo hosting, and unlimited functional support.
Today, Odoo is a well-known name used by companies of all sizes across various industries, including manufacturing, retail, accounting, marketing, healthcare, IT consulting, and R&D.
The latest version, Odoo 17, has been available since October 2023. Key highlights of this update include:
Enhanced user experience with improvements to the command bar, faster backend page loading, and multiple dashboard views.
Instant report generation, credit limit alerts for sales and invoices, separate OCR settings for invoice creation, and an auto-complete feature for forms in the accounting module.
Improved image handling and global attribute changes for mailing lists in email marketing.
A default auto-signature option and a refuse-to-sign option in HR modules.
Options to divide and merge manufacturing orders, track the status of manufacturing orders, and more in the MRP module.
Dark mode in Odoo 17.
Now that the Odoo 17 announcement is official, let’s look at what’s new in Odoo 17!
What is Odoo ERP 17?
Odoo 17 is the latest version of one of the world’s leading open-source enterprise ERPs. This version has come up with significant improvements explained here in this blog. Also, this new version aims to introduce features that enhance time-saving, efficiency, and productivity for users across various organisations.
Odoo 17, released at the Odoo Experience 2023, brought notable improvements to the user interface and added new functionalities with enhancements in performance, accessibility, data analysis, and management, further expanding its reach in the market.
8 Best Automated Android App Testing Tool and Framework in 2024.pdfkalichargn70th171
Regarding mobile operating systems, two major players dominate our thoughts: Android and iPhone. With Android leading the market, software development companies are focused on delivering apps compatible with this OS. Ensuring an app's functionality across various Android devices, OS versions, and hardware specifications is critical, making Android app testing essential.
Orca: Nocode Graphical Editor for Container OrchestrationPedro J. Molina
Tool demo on CEDI/SISTEDES/JISBD2024 at A Coruña, Spain. 2024.06.18
"Orca: Nocode Graphical Editor for Container Orchestration"
by Pedro J. Molina PhD. from Metadev
How Can Hiring A Mobile App Development Company Help Your Business Grow?ToXSL Technologies
ToXSL Technologies is an award-winning Mobile App Development Company in Dubai that helps businesses reshape their digital possibilities with custom app services. As a top app development company in Dubai, we offer highly engaging iOS & Android app solutions. https://rb.gy/necdnt
WMF 2024 - Unlocking the Future of Data Powering Next-Gen AI with Vector Data...Luigi Fugaro
Vector databases are transforming how we handle data, allowing us to search through text, images, and audio by converting them into vectors. Today, we'll dive into the basics of this exciting technology and discuss its potential to revolutionize our next-generation AI applications. We'll examine typical uses for these databases and the essential tools
developers need. Plus, we'll zoom in on the advanced capabilities of vector search and semantic caching in Java, showcasing these through a live demo with Redis libraries. Get ready to see how these powerful tools can change the game!
Background – MathWorks
We are a 3500+ person company dedicated to accelerating the pace of engineering and science.
We have ~90 products based upon our core platforms:
• MATLAB – The Language of Technical Computing
• Simulink – Simulation and Model-Based Design
Background – Definitions
Product:
• MATLAB
• Parallel Computing Toolbox
• Signal Processing Toolbox
Component:
• A set of strongly related files
Background – Componentization
~90 products:
• 5,000+ components, acyclic dependencies
• 100,000s of tests and test points
• ~36 hours to build and test from the ground up
• A code base of over 1 million files to manage
Benefits of componentization:
• Develop in isolation
• Identify problems ahead of time
• Run only a subset of building and testing
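The last benefit depends on knowing which components a given change can affect. Under the stated assumption that component dependencies form an acyclic graph, that selection can be sketched as a reverse-reachability walk (a minimal illustration; the function and component names here are hypothetical, not MathWorks tooling):

```python
from collections import defaultdict, deque

def affected_components(dependencies, changed):
    """Given acyclic component dependencies (component -> set of components
    it depends on), return every component that must be rebuilt and retested
    when `changed` components are modified: the changed components plus
    everything that depends on them, directly or transitively."""
    # Invert the edges: for each component, who depends on it?
    dependents = defaultdict(set)
    for comp, deps in dependencies.items():
        for dep in deps:
            dependents[dep].add(comp)
    affected = set(changed)
    queue = deque(changed)
    while queue:
        comp = queue.popleft()
        for parent in dependents[comp]:
            if parent not in affected:
                affected.add(parent)
                queue.append(parent)
    return affected

# Hypothetical, tiny dependency graph for illustration.
deps = {
    "matlab": set(),
    "signal_toolbox": {"matlab"},
    "parallel_toolbox": {"matlab"},
    "app": {"signal_toolbox"},
}
print(sorted(affected_components(deps, {"signal_toolbox"})))
```

A change confined to one toolbox then schedules only that toolbox and its dependents, rather than the full ~36-hour build.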
Background – Solution Chosen
Single stream per product, mostly:
• //mw/Bparallel
• CTB (“Components to Build”) list, which can change
• Virtual stream enforcing a view
How do we manage getting changes from one stream to another?
• Tools (what I work on, automatically merging between streams)
More background: “Moving 1000 Users and 100 Branches Into Streams” – John LoVerso, MERGE 2014
Background – Architectural Issues
• Renames that cross the stream view boundary
• Submitting both a delete and an add onto the same filename
• Filename-based versus file-object-based source control systems
Edge Cases – Complex Merge
What is it?
• Multiple changes to merge to a destination
• Cannot be represented in one single change
Classic example: a rename followed by a re-add under the old name
Easy solution, no?
• “Just merge the left half first.”
We can’t ask the developer to manually apply the solution to each issue we find.
Edge Cases – Complex Merge
How do you identify one?
• First pitfall: Problem definition
Incorrect: Counting endpoints
• Correct in many cases (95% or so)
• False positives and false negatives
Edge Cases – Complex Merge
How do you identify one?
• First pitfall: Problem definition
Incorrect: A set of changes that cannot be represented in just one change
• How do you know you are correct?
Edge Cases – Complex Merge
How do you identify one?
• First pitfall: Problem definition
Correct: Identify the actual, big-ticket item
• For us, this was the fact that people were renaming files often
• Much easier to identify “filenames which need both a delete, and then an add applied to them” (i.e. a complex merge on filename A)
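That narrower definition is easy to check mechanically: scan the actions pending for each filename in the merge range and flag any name with a delete followed by an add. A minimal sketch of the heuristic (illustrative only — the action labels mirror Perforce filelog actions, but the data shapes and names are hypothetical, not the MathWorks tool):

```python
def complex_merge_filenames(pending):
    """Given the pending integration range as a mapping of filename ->
    ordered list of actions ("add", "edit", "delete", "move/add",
    "move/delete", "branch"), flag filenames that need a delete applied
    and then an add -- two revisions that cannot land in one merged change."""
    flagged = set()
    for name, actions in pending.items():
        delete_seen = False
        for action in actions:
            if action in ("delete", "move/delete"):
                delete_seen = True
            elif delete_seen and action in ("add", "move/add", "branch"):
                flagged.add(name)
                break
    return flagged

# Hypothetical pending range: foo.m was renamed away and then re-added.
pending = {
    "foo.m": ["edit", "move/delete", "add", "edit"],
    "bar.m": ["edit", "edit"],      # ordinary merge, fine
    "baz.m": ["delete"],            # plain delete, also fine
}
print(sorted(complex_merge_filenames(pending)))
```

Note this deliberately ignores the order-independent (add, delete) case, the benign “false positive of sorts” the speaker notes mention.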
Edge Cases – Complex Merge
How do you resolve one?
• Second pitfall: Merging at an earlier change
• Third pitfall: Asking the user to do it
Incorrect: Complex merge identified at c123456, so run the command at c123455
• Or at c123000, and then at c123100, and then at c123150, and then…
• Manually resolve!
Edge Cases – Complex Merge
How do you resolve one?
• Second pitfall: Merging at an earlier change
• Third pitfall: Asking the user to do it
Correct: Automatically submit
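One way to picture the automatic resolution: split the candidate range into the changes up to and including the delete, and the changes after it, then merge, resolve, and submit each half in order. A sketch of that split under those assumptions (hypothetical helper, not the actual tooling, which also follows the rename chain to the file’s final name):

```python
def split_ranges(changes, delete_change):
    """For a complex merge identified on one filename, split the ascending
    list of change numbers to integrate into two merges submitted in order:
    everything up to and including the change containing the delete, then
    the remainder (which re-adds the name)."""
    if delete_change not in changes:
        raise ValueError("delete change not in range")
    cut = changes.index(delete_change) + 1
    return changes[:cut], changes[cut:]

first, second = split_ranges([123000, 123100, 123456, 123500], 123100)
print(first, second)
```

Submitting the first half before starting the second is what lets the tool defuse cascading complex merges without bouncing the developer between change levels.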
Edge Cases – Deadwood
What is it?
• Stream has fileA in view
• Stream CTB changes, fileA no longer in view
• fileA continues to exist, no changes are received
(Diagram: files in view, “dead wood”, files out of view)
Edge Cases – Deadwood
Why is it bad?
Edge Cases – Deadwood
First pitfall: Leave it alone
Edge Cases – Deadwood
Second pitfall: Merge everything from the beginning of time
Edge Cases – Deadwood
Implementation we settled on: Iteratively merge everything in a smaller chunk of changes
• Downside: manual resolves are sometimes scheduled when they’re not needed
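Chunking the change range is simple to sketch (illustrative only; the chunk size and names are made up, and each yielded sub-range would drive one merge pass in the real workflow):

```python
def merge_chunks(low, high, chunk_size):
    """Yield (start, end) changelist sub-ranges, inclusive, covering
    low..high in ascending order. Iteratively merging each smaller chunk
    keeps out-of-view "deadwood" files from drifting arbitrarily far
    behind, at the cost of occasionally scheduling resolves that turn
    out to be unnecessary."""
    start = low
    while start <= high:
        end = min(start + chunk_size - 1, high)
        yield start, end
        start = end + 1

print(list(merge_chunks(100, 350, 100)))
```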
Edge Cases – Deadwood
Implementation we want to eventually get to: Sparse Branches (John LoVerso’s MERGE 2016 presentation)
Edge Cases – Renames Across View
What is it?
• FileA -> FileB -> FileC on the source
• FileB is not in view
Edge Cases – Rename Across View
What should happen vs. what actually happens (diagram sequence)
Edge Cases – Renames Across View
Pitfalls:
• Only include the first and last names in view
• Only consider one rename, not multiple renames
• Add too many things to the view of a virtual stream
Edge Cases – Renames Across View
Solution:
• Merge everything in a given range, selectively revert edits and branches
• Same solution we use for keeping deadwood relatively managed
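The merge-then-selectively-revert step can be pictured as a filter over the opened files: after merging everything in the range, edits and branches that landed outside the destination view get reverted. A simplified sketch (real stream views are richer than prefix matching, and every path and name here is hypothetical):

```python
def files_to_revert(opened, view_prefixes):
    """After merging everything in a range, decide which opened files to
    revert: edits and branches on depot paths outside the stream view.
    `opened` maps depot path -> action; `view_prefixes` are the path
    prefixes the destination view includes (a deliberate simplification
    of real stream view syntax)."""
    def in_view(path):
        return any(path.startswith(prefix) for prefix in view_prefixes)
    return {path for path, action in opened.items()
            if not in_view(path) and action in ("edit", "branch")}

# Hypothetical opened files after a broad merge.
opened = {
    "//mw/Bparallel/comp_a/fileC.m": "integrate",
    "//mw/Bparallel/old_comp/fileB.m": "branch",   # out of view: revert
    "//mw/Bparallel/comp_a/util.m": "edit",        # in view: keep
}
view = ["//mw/Bparallel/comp_a/"]
print(sorted(files_to_revert(opened, view)))
```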
Edge Cases – Shadowed Delete
What is it?
• Delete that does not show up when you merge
• Not the head revision
• Integration engine thinks it’s done at the move/add
Edge Cases – Shadowed Delete
What should happen vs. what actually happens (diagram sequence)
Edge Cases – Shadowed Delete
How to identify it?
• Look for deletes not at the head revision in the range you are merging
How to resolve it?
• Merge at a specific revision for the problem file
Pitfalls?
• Merging multiple times without resolving
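The detection rule above — a delete revision inside the merge range that is no longer the file’s head revision — can be sketched like this (illustrative data shapes and hypothetical names; the action labels mirror Perforce filelog output):

```python
def shadowed_deletes(filelog, merge_range):
    """Find "shadowed" deletes: a delete revision inside the range being
    merged that is not the head revision of its file, so a naive
    integration stops at the later move/add and never carries the delete.
    `filelog` maps filename -> list of (change, action), oldest first.
    Returns filename -> the change to merge that file at explicitly."""
    low, high = merge_range
    shadowed = {}
    for name, revs in filelog.items():
        for i, (change, action) in enumerate(revs):
            at_head = (i == len(revs) - 1)
            if (action in ("delete", "move/delete")
                    and low <= change <= high and not at_head):
                shadowed[name] = change
    return shadowed

# Hypothetical history: foo.m was deleted at c150, then re-added at c180,
# so the delete is hidden behind the head revision.
filelog = {
    "foo.m": [(100, "add"), (150, "move/delete"), (180, "move/add")],
    "bar.m": [(120, "edit"), (170, "delete")],   # delete at head: fine
}
print(shadowed_deletes(filelog, (140, 200)))
```

The returned change number is exactly what the resolution step needs: merge the problem file at that specific revision so the delete is not skipped.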
Conclusion – Lessons Learned
Things to do:
• Look at the big picture, solve a problem instead of its symptoms
• Ask users to follow a simple, managed workflow
• Test everything across releases and decide to upgrade or not
Things not to do:
• Assume you’ve found all cases of a problem
• Assume Perforce behavior will stay the same
Conclusion – What We’d Like
Source control on an object-by-object basis
• Reconcile would work in most / all cases
• Renames, view, and componentization would be easier to handle and define by file object content rather than specific filenames
Truly sparse streams
• Prevent issues from ever happening, if the files never actually exist on a stream
I’m Mike Hobbs, a software engineer in the tools group at MathWorks.
I’d like to give you some background on MathWorks as a company and why we decided to componentize our source code.
Then, I’ll be talking about merge edge cases we’ve encountered componentizing our source code, their solutions in our world, and pitfalls along the way to finding these solutions.
Finally, I’ll be talking about what work we have remaining to do, and algorithm changes we would like Perforce to implement.
Reiterate product count, mention component count/complexity and how long it takes to build and test.
No one likes waiting 2 days, so instead, you work only on code your team actively develops.
This also allows us to catch componentization issues before they happen, such as trying to submit code changes to a component your team does not develop in.
The solution we’ve chosen is one where every product gets its own stream in our depot (example: //mw/Bparallel), and defines the components that developers of that product should be able to modify. We additionally have a virtual stream that we use to enforce the view in which developers are able to make edits, and this virtual stream is updated every time the CTB list changes.
With more than 100 products actively being developed for each twice-yearly release of MATLAB, there’s an elephant in the room: How do we get changes from one product to another? How do we get changes from one product to a release branch? How do we get changes from point A to point B? We have tools in place that allow users to merge changes from one stream to another, and that’s what I work on.
John LoVerso, one of my coworkers, has a nice presentation he gave at the last conference if you are interested in more details of our chosen solution of organizing our products’ sources.
So, what exactly is a complex merge? We define that as multiple changes that need to be merged from a source stream to a destination stream, but which cannot be represented in one single change for whatever reason. Describe the classic resurrection-based conflict (rename, re-add).
The solution is easy too, right? Just merge the left-hand side of the complex merge first, and boom, you’re done, complex merge defused.
Imagine being the standard developer who does not care about the source control system as long as they can do their actual job:
“So now, I run p4 merge of… oldname revision 2? No, it didn’t merge, okay, so I find the final name of that file-object, and put that in the merge command? Still no… oh right, those extra flags I need to add, lowercase or uppercase S? What’s a stream again, is it just Bparallel or //mw/Bparallel?”
Chaos: Developers with just enough knowledge to be dangerous without knowing it.
Multiple other examples of users knowing just enough of Perforce to be damaging:
“Well, I know how to add and delete, so there’s that folder rename my manager asked me to do!” Time for an obliterate.
“Well, I can just say p4 reconcile after moving a file.” Now you get the case above.
“Well, why should I wait for the proper flow of merging code? I can just merge from there to here!” And now you’ve lost work on your stream months later when you already have a credit for an integration record you were not expecting.
First talk a bit about the differences between a source control system that’s filename based and one that’s object based. Example to the right, file object 1 lives across 3 different names at 3 different times, and file object 2 lives in the same name as file object 1 did, just at a different time.
Filename benefits: easy to implement
File object benefits: captures user intent (same logical file, or a different purpose for the file?)
So now let’s talk about a bunch of the edge cases (and not-so-edge cases) we’ve run into. (Expected 15 minutes on complex merges, 10 minutes on renames across view, 10 on )
So we tried to at least identify them for the user and tell them “Run this command at this specific change level, and if you do, you’ve defused a certain number of complex merges and can progress”.
The first pitfall we ran into while trying to handle this edge case, complex merges, was in how we defined our problem. Our first definition was to look at the endpoints and see if one filename goes in and two come out, or vice versa. It was good for most cases, but there were still too many false negatives, like complex merges involving multiple renames, for us to be comfortable.
Okay, so you are missing some cases. What about trying to figure out at a high-level, all types of “multiple changes that can’t be represented as one single change” issues? There is no way to guarantee correctness without full access to the Perforce source code. This idea was thankfully one we didn’t implement along the way to a complete solution for simply identifying (let alone resolving) complex merges without developer intervention.
So we sat down, thought about it some more, and realized that the only cases that affect us that can’t be represented in one change happen on a filename by filename basis, and that is the fact that we sometimes want a delete (or move/delete) to be applied, and then an add (or move/add).
It turns out, it’s much easier to figure out the answer to that question. Does the file exist on the destination? Does the file continue to exist on the source? Is there a delete of any kind that needs to be merged, followed by an add of any kind? If so, we’ve found a filename with a set of changes on it that simply can’t be represented in one change.
One false positive of sorts, (delete, add). Talk about that for a bit, and why we’re okay with it.
Our first implementation involved asking users to re-run their command at an earlier change level to defuse a complex merge. That’s a great first attempt at defusing a complex merge, but there were two massive pains that came from it:
At times, there would be cascading complex merges, where the first would need to be defused before we could even think about handling the next.
Too often, some file unrelated to the files that have a complex merge, needed extremely painful manual resolution. If we had multiple levels of complex merges like in the picture, the developer would need to manually resolve the same file, many, many times, wasting many units of time, effort, and sanity.
So, now that we have a good algorithm in place for identifying the problematic files, we can handle this as part of the automatic process used to merge code between different streams. Once the files are identified, we follow their rename chain from the delete side of the complex merge until we find the final resting place, then merge, resolve, and automatically submit, rather than telling the user that they have to do yet another intermediate error-prone step.
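Following the rename chain can be sketched as below. The record shape is a hypothetical simplification: `moves` maps each moved-from filename to its moved-to filename, as recovered from move/delete and move/add integration records; we walk from the delete side until the name no longer appears as a moved-from source.

```python
def final_resting_place(filename, moves):
    """Walk rename records from the delete side of a complex merge
    to the file's final resting place.

    moves: dict mapping moved-from filename -> moved-to filename.
    """
    seen = {filename}          # guard against cyclic rename records
    while filename in moves:
        filename = moves[filename]
        if filename in seen:
            break              # defensive: stop on a rename cycle
        seen.add(filename)
    return filename
```

With `moves = {"a.c": "b.c", "b.c": "c.c"}`, starting from `a.c` lands on `c.c`, the name we then merge, resolve, and submit against.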
Complex merges are the biggest and hardest problem we had to solve.
1-minute description of what deadwood is: Files that were branched into existence when they were considered to be in-view and code changes were merged into the stream, but are now not on the view and no longer get updates.
Bad because it violates our component-based design. If a file is not owned by any component, then why should it exist on your branch to begin with?
Why it might exist:
Previously edited, previously in view.
Why we can’t get rid of it:
Automatic obliterates? Nope.
Our initial reaction and handling of deadwood is a pitfall: Leave it alone until it’s no longer deadwood, if ever. Only merge all files
Mention:
Extra unnecessary complex merges
User confusion as why stream A has the new name but stream B does not
User error (manually deleting old renamed file)
Our second attempt: Merge everything, revert branches or edits that land outside of the virtual stream’s view but keeping renames and deletes.
Question: Have you ever merged 2 million filenames and all of their revisions, 1 million of which continue to be alive, from one stream to another, where the destination has most of the filenames, and many of them are completely out of date? One hour. Resolving? Two hours.
An unacceptable loss of performance was the price we paid for this algorithm’s correctness.
Our final attempt currently deployed to all developers: Merge everything since the last change you merged from, revert branches or edits that land outside of the virtual stream’s view but keeping renames and deletes.
Keeps stream clean of unnecessary deadwood.
This worked well, but sometimes manual resolves were scheduled when we didn’t actually want or need them. So we then had to identify most of the files that we could safely run a resolve -at for, and do that instead.
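The cleanup step in our final attempt can be sketched as follows. This is an illustrative model, not the deployed script: view membership is reduced to a simple path-prefix test (real stream views are far richer), and after the wide-open merge we revert branches and edits that land outside the virtual stream’s view while keeping renames and deletes so old names don’t linger as deadwood.

```python
# Opened actions we keep even outside the view, so that renames and
# deletes still land and the stream stays clean of deadwood.
KEEP_ACTIONS = {"delete", "move/delete", "move/add"}

def files_to_revert(opened, view_prefixes):
    """opened: list of (depot_path, action) pairs opened by the merge.
    view_prefixes: depot path prefixes of the virtual stream's view.
    Returns the paths to revert before submitting."""
    reverts = []
    for path, action in opened:
        in_view = any(path.startswith(p) for p in view_prefixes)
        if not in_view and action not in KEEP_ACTIONS:
            reverts.append(path)
    return reverts
```

So an out-of-view branch or edit gets reverted, while an out-of-view move/delete survives and keeps the destination free of the old filename.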
What we would want to do is to use the sparse branching technology John LoVerso has done here at MathWorks. He will be giving / has given an excellent presentation on that at this conference, so I won’t go much deeper into it than to generalize it as “If a stream has never edited a filename, there should be no integration records on that stream for that filename.”
So now, let’s talk about a similar problem, handling renames that go in and out of the virtual stream’s view.
Both old and new filenames have to be in view for the merge command to succeed.
Branched instead of renamed!
First stab:
Excluding intermediate names sometimes caused branches of the last name instead of move/delete, move/add pairs.
Second stab:
The view was too big and had too many lines with wildcards (…), could not save the virtual stream specification, large failure
It turns out, our solution to keeping deadwood relatively clean in terms of old and new filenames also works exceedingly well here. Merge everything using a wide-open, unrestricted view, and revert anything else.
Onto the final edge case that caused build failures more often than we’d like: Shadowed deletes.
What is a shadowed delete?
See image to the right, what do you think would happen when you merge everything from the source above the line to the destination?
Shadowed delete! Middle filename continues to exist, interchanges does not list the delete as needing to come down. Talk a bit about the issues this causes.
Talk a bit about how it relates to complex merges, yet different enough to require another attempt.
Delete of B doesn’t appear!
Thankfully, it’s very simple to both identify and handle this.
Identify: Just find deletes that haven’t been merged yet (integ records) and are not the head revision in the range you are merging.
Resolve: Merge the file at that specific revision.
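The identify step can be sketched like this. The record shape is an assumption for illustration, not Perforce’s API: per filename we have an ascending list of `(revision, action)` pairs, and a delete is shadowed when it hasn’t been merged yet but is not the head revision in the range being merged, so `interchanges` won’t list it.

```python
def shadowed_delete_revs(revisions, merged_through):
    """revisions: list of (rev_number, action) for one filename, ascending.
    merged_through: highest revision already merged to the destination.
    Returns the revisions whose deletes are shadowed by a later action."""
    head = revisions[-1][0]
    return [rev for rev, action in revisions
            if action in ("delete", "move/delete")
            and rev > merged_through      # not merged yet
            and rev != head]              # shadowed by a later revision
```

Each revision returned is then merged explicitly at that specific revision, per the resolve step above.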
No pitfalls along the way were specific to this problem, but we did identify that running the same merge command more than once without resolving can cause pain, with incorrect changes applied to the wrong files (i.e., rename, re-add, re-delete).
I’d like to take a few minutes to conclude this presentation with what we’ve learned, and what we would like to see from Perforce.
Lessons learned, and the benefits gained from them.
What we would like to see to simplify or handle all of these edge cases, and how they would be beneficial overall.