This document describes the Perforce standard environment (PSE) created at Citrix Systems to simplify managing multiple Perforce instances. Previously, Citrix had many isolated Perforce instances set up over 10+ years without standardization, causing management and performance issues. The new PSE uses a "mesh network" approach with proxy servers to provide a single access point for all instances, regardless of physical location. It also implemented a standardized build system called "Solera" to help developers deal with code from multiple ports. The PSE has improved stability, reduced downtime, and enhanced disaster recovery capabilities at Citrix.
From ClearCase to Perforce Helix: Breakthroughs in Scalability at Intel (Perforce)
See how the Intel Security and Sensors Firmware team transitioned from IBM ClearCase to Perforce Helix with Microsoft TFS to enable robust and scalable ALM and CI with full traceability. Discover how Intel consolidated 15 different development methodologies used to drive firmware projects into three paths shared by all Intel platforms.
We’ve begun an initiative at Citrix to make software development inherently more secure. I’ll start with a few security anecdotes, give you a walkthrough of the security layers from data to physical, and highlight security features along the way. I’ll also discuss the Helix Versioning Engine protocol and show you why SSL encryption should be on by default.
Accelerating Software Development with NetApp's P4flex (Perforce)
The challenge for developers who work with large volumes of data, such as multimedia assets, video game art, and firmware designs, is getting a quick copy of source and build assets. By combining Perforce and NetApp technologies, a new Perforce workspace can be created in minutes instead of hours. Perforce, in collaboration with NetApp, has developed a p4 broker script written in Python that lets users create workspaces quickly using NetApp FlexClone technology.
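The broker approach described above can be sketched in Python. This is a hypothetical illustration only: the real P4flex broker and the NetApp FlexClone API are not reproduced here, the "flex clone" command name and field layout are assumptions, and `snapshot_volume` merely stands in for the storage-side clone call.

```python
# Hypothetical sketch of a p4 broker filter script in the spirit of P4flex.
# A broker-style filter receives request fields, intercepts a custom command,
# asks the storage layer for a clone, and passes everything else through.

def parse_broker_request(text):
    """Parse 'key: value' lines as a broker filter might receive them."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields

def snapshot_volume(volume, clone_name):
    # Placeholder for a NetApp FlexClone call; returns the clone's mount path.
    return f"/clones/{volume}/{clone_name}"

def handle(request_text):
    req = parse_broker_request(request_text)
    if req.get("command") == "flex" and req.get("Arg0") == "clone":
        path = snapshot_volume(req["Arg1"], req["Arg2"])
        return f"action: RESPOND\nmessage: workspace root ready at {path}"
    return "action: PASS"   # everything else goes to the real server

print(handle("command: flex\nArg0: clone\nArg1: projA\nArg2: ws1"))
print(handle("command: sync"))
```

The key idea is that cloning a volume is a metadata operation on the filer, so the "copy" returns in seconds regardless of how large the source tree is.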
Using Perforce Data in Development at Tableau (Perforce)
Data plays a big role at Tableau—not just for our customers, but also throughout our company. Using our own products is not only one of our fundamental company values, but the analysis and discoveries we make are important to track as they shape our development processes and influence our day-to-day decisions. In this talk, we present and analyze a variety of data visualizations based on Perforce data from our development organization and share how it has influenced our infrastructure and development practices.
Supporting Digital Media Workflows in the Cloud with Perforce Helix (Perforce)
Walk through a distributed, non-destructive digital media workflow with graphics, audio and video media from start to finish. Learn the pain points and challenges of versioning increasingly large and varied formats, and see various strategies and best practices for configuring and managing depots in Perforce Helix that facilitate collaborative creative work while minimizing large data transfers. You’ll leave this session with the insights and skills needed to securely support automated digital media workflows in your organization using the Perforce Helix platform with the latest cloud services.
Perforce Helix Never Dies: DevOps at Bandai Namco Studios (Perforce)
Traditionally at Bandai Namco Studios, there has been no unified version control system in place and teams could choose to use any VCS system for their game titles—Subversion, Git, AlienBrain, or none at all. I’ll talk about why Bandai Namco Studios chose to standardize on Perforce Helix, show how we develop LiveOps-type mobile applications using the Unity game engine, and the advantages we gain from centrally managing code and assets in Helix.
How Samsung Engineers Do Pre-Commit Builds with Perforce Helix Streams (Perforce)
Get an in-depth look at the life of a pre-commit build at Samsung using Perforce Helix Streams and Electric Cloud’s Electric Commander with Helix Swarm for code review.
Global Software Development Powered by Perforce (Perforce)
From inception to sunset, hundreds of people from around the world are involved in the production and live operations of video games developed by Electronic Arts. An overview of how EA uses a variety of features in Perforce Helix to effectively utilize its worldwide talent pool, develop software efficiently, and protect its intellectual property.
How to Combine Artifacts and Source in a Single Server (Perforce)
See how to use Perforce Helix as an artifact manager by extending a Helix repository to store artifacts used for build and deployment. We’ll demo our proof of concept, Hive, and its core functions for configuring and adding new artifact repositories.
Adventures in versioning everything - from software to chip designs - from NVIDIA, where more than 90% of the company uses Perforce as a single source of truth. An overview of the real-world advantages of the "monorepo" across development and operations teams, including lessons learned along the way.
Alfresco Platform Update and Roadmap, delivered by Gabriele Columbro, Senior Product Manager for Core Platform / API at Alfresco, with updates on the upcoming Alfresco 5.1 release, on extreme scalability (and Solr sharding), Share separation, the new API lifecycle, and brand new developer documentation, samples, and tutorials. It also mentions the Upgrade Task Force and new developer platform improvements like support for JAR modules and tracking/reporting of Share modules.
Software Testing in a Distributed Environment (Perforce)
Distributed development across countries creates both challenges and opportunities for the production of high quality software. We’ll look at new ways of achieving automation for testing software in a continuous delivery context, using parallelization techniques and automated analysis fully integrated with a reliable and scalable SCM system. A new optimal method of testing common code in similar branches is presented along with the semantic merging of testing results.
Committing to a company-wide software change is no small feat, but if you’re already sweating at the mere thought of checking code in and out, it’s time to plan your escape route.
So, break free and join Tom Tyler, Senior Consultant at Perforce and in-house ClearCase specialist to map out:
- Baseline-and-branch vs. detail history import strategies
- Porting
- Integrations for defect trackers, training, and tooling
- Cutover strategies
This is the session delivered during the Alfresco Developers Conference in Lisbon, January 2018. Learn everything you need to know to build a proper backup and disaster recovery strategy, from a single-server installation with hundreds of documents to a large deployment with multiple nodes, layers, and databases and millions of documents. What is the best approach for each case?
Using Oracle Multitenant to efficiently manage development and test databases (Marc Fielding)
How to efficiently manage large volumes of non-production databases using technologies like Oracle Multitenant and storage snapshots
From Oracle OpenWorld 2014
Back your App with MySQL & Redis, the Cloud Foundry Way - Kenny Bastani, Pivotal (Redis Labs)
In this session, we will build a minimum viable Spring Data web service with REST API, add a MySQL backing service as the primary data store, and a Redis Labs backing service for caching. We will demonstrate performance metrics without Redis caching enabled and then with Redis caching enabled. I will also provide an intro-level explanation of the platform capabilities within Pivotal Web Services.
Infrastructure, use cases, and performance considerations for an enterprise-grade ECM implementation of up to 1B documents on AWS (Amazon Web Services EC2 and Aurora), based on the Alfresco (http://www.alfresco.com) Platform, a leading open source Enterprise Content Management system.
Advanced DevOps governance with Terraform (James Counts)
DevOps project sprawl is real! Large organizations with many teams need to support a variety of configurations, from infrastructure governance to domain-specific app deployments, all while enforcing good security practices like least privilege for each team. Maintaining these controls by hand leads to complexity, stagnation, and insecure shortcuts. In this session, you'll learn how Terraform can automate this configuration and make doing the right thing easy!
Presentation delivered at LinuxCon China 2017.
Zephyr is an upstream open source project for places where Linux is too big to fit. This talk overviews the progress we've made in the first year toward the project's goals of incorporating best-of-breed technologies into the code base and building up the community to support multiple architectures and development environments. We will share our roadmap, our plans, and the challenges ahead of us, and give an overview of the major technical challenges we want to tackle in 2017.
How Continuous Delivery Helped McKesson Create Award-Winning Applications (Perforce)
Healthcare has always had unique challenges, and as we move through the Affordable Care Act era, it requires new and stronger applications. Choosing the right tool to create and deploy these applications is critical. Hear how CI and CD (before we even knew the terms) contributed to the production of an award-winning electronic health record application, iKnowMed, and how those lessons learned continue to shape McKesson’s ongoing application development and deployment.
Granular Protections Management with Triggers (Perforce)
Managing the Perforce Helix protections table can be unwieldy at best. Learn how we implemented a trigger-based system that removes the need for an administrator to manually edit the protections table. By granting ownership of individual projects or codelines in the protections table, we can allow project managers to control permissions to a path without worrying about mistakes that could affect the entire company.
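The ownership idea above can be reduced to a simple check: before a trigger accepts a change to the protections table, it verifies that the requesting user owns the exact path being modified. The sketch below is illustrative only; the path names, the `owners` table, and the function are all hypothetical, not the actual trigger implementation described in the talk.

```python
# Hypothetical sketch of a trigger-style ownership check: a project manager
# may grant permissions on a path they own, without being able to touch
# protections for any other part of the depot.

# Illustrative ownership table mapping depot paths to their owners.
owners = {
    "//depot/projA/...": "alice",
    "//depot/projB/...": "bob",
}

def may_edit_protection(user, path):
    """Allow a protections edit only if `user` owns the exact path changed."""
    return owners.get(path) == user

# A project manager can manage their own codeline but nothing else.
print(may_edit_protection("alice", "//depot/projA/..."))  # owner: allowed
print(may_edit_protection("alice", "//depot/projB/..."))  # not owner: denied
```

Scoping edits this way is what removes the administrator bottleneck: a mistake by one project owner is confined to that project's path instead of affecting the entire company.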
[Webinar] The Changing Role of Release Engineering in a DevOps World with J. ... (Perforce)
The rise of DevOps is revitalizing age-old topics in release engineering and application lifecycle management, including aspects of software delivery that DevOps doesn't magically solve. If you're responsible for the release engineering function in your organization, see what the new world looks like and which aspects of the industry it's leaving behind.
Could you release off your mainline today? In our fast-paced world, well-scheduled releases have become a thing of the past. Now more than ever, you must maintain clean, well-tested codelines that can be shipped at any moment. At the last Merge, we talked about how these increased demands pushed Xilinx to develop automation that validates every change before submission. In this talk we continue that discussion, covering the evolution of our tools over the past two years as we have battled with more developers, more products, and faster code churn than ever before.
Microservices allow for extensible app architecture and a vendor-agnostic, scalable infrastructure. While microservices simplify app deployments, they come at a price: because they’re so fragmented, it is more difficult to track and manage all the independent, yet interconnected components of an app. All this information (requirements, code, test cases and results, build artifacts, and deployment blueprints) needs to live somewhere and most importantly be versioned. Using a real example and a live demonstration of Perforce Helix, Docker and Selenium, get best practices and tips for enabling a robust, scalable and extensible pipeline to support today’s modern app delivery.
Building a successful DevOps solution requires a holistic view of your development ecosystem plus solid technology that can support your organization today and in the future. Learn how to start defining your own successful DevOps solution and how to position Helix to be at the center of it all.
Planning Optimal Lotus Quickr services for Portal (J2EE) Deployments (Stuart McIntyre)
As per the Quickr Wiki ( http://www-10.lotus.com/ldd/lqwiki.nsf/dx/20052009045545WEBCGW.htm ):
"This document contains the presentation from Quickr masterclass covering planning optimal deployments – crawl/walk/run.
Discussing simplistic deployment architectures which can be linearly scaled over time (e.g. from POC to simple non-clustered to clustered)
Sharing of key tips/recommendations from SVT and Perf - so as to help avoid expensive crit-sits in the field
Tuning for performance, stability and reliability"
Please note, I do not claim any ownership of this presentation; I am just uploading it to allow sharing via the Quickr Blog. Any questions/comments/issues, just let me know!
Check out the great new features in Helix Core 2017.1 and Helix Swarm 2017 to see why it’s never been easier to collaborate and improve rapid release cycles.
LCNA14: Why Use Xen for Large Scale Enterprise Deployments? - Konrad Rzeszute... (The Linux Foundation)
For many years, the Xen community has been delivering a solid virtualization platform for the enterprise. In support of the Xen community innovation effort, Oracle has been translating our enterprise experience with mission-critical workloads and large-scale infrastructure deployments into upstream contributions for the Linux and Xen efforts. In this session, you'll hear from a key Oracle expert, and community member, about Oracle contributions that focus on large-scale Xen deployments, networking, PV drivers, new PVH architecture, performance enhancements, dynamic memory usage with ‘tmem', and much more. This is your chance to get an under the hood view and see why the Xen architecture is the ideal choice for the enterprise.
Still All on One Server: Perforce at Scale (Perforce)
Google runs the busiest single Perforce server on the planet, and one of the largest repositories in any source control system. This session will address server performance and other issues of scale, as well as where Google is in general, how it got there and how it continues to stay ahead of its users.
NetApp FlexPod is a converged infrastructure solution that lets customers reduce the deployment time and total cost of infrastructure by half! This is the CiscoLive session slide deck that provided an overview of this popular solution.
Going Remote: Build Up Your Game Dev Team (Perforce)
Everyone’s working remotely as a result of the coronavirus (COVID-19). And while game development has always been done with remote teams, there’s a new challenge facing the industry.
Your audience has always been mostly at home – now they may be stuck there. And they want more games to stay happy and entertained.
So, how can you enable your developers to get files and feedback faster to meet this rapidly growing demand?
In this webinar, you’ll learn:
-How to meet the increasing demand.
-Ways to empower your remote teams to build faster.
-Why Helix Core is the best way to maximize productivity.
Plus, we’ll share our favorite games keeping us happy in the midst of a pandemic.
How to Improve RACF Performance (v0.2 - 2016) (Rui Miguel Feio)
When hundreds and sometimes thousands of security validations occur every minute on the mainframe, performance and availability are paramount. In this session the presenter shows different techniques that, when implemented, can help improve RACF performance so that it does not become the source of your performance problems.
Best Practices for Deploying Enterprise Applications on UNIX (Noel McKeown)
Gain some insight into UNIX-based operating systems and the types of tasks a build team or vendors perform, and learn how to prepare a UNIX server for typical enterprise deployments.
The best practices applied to a UNIX platform in a telco environment. Some hints, tips, troubleshooting and practical knowledge.
3 Ways to Improve Performance from a Storage Perspective (Perforce)
In this session, get three takeaways about Perforce performance benchmarks and their results across varying storage protocols, using NetApp storage as an example. Learn how to use Perforce benchmarks and tools to validate the performance of your Perforce deployment; understand Perforce performance across different storage protocols; and get tips and tricks for deploying Perforce on varying storage technologies.
PHD Virtual: Optimizing Backups for Any Storage (Mark McHenry)
Learn about the differences between virtual full and traditional full and incremental backup modes, and which mode works best depending on the type of storage.
How to Organize Game Developers With Different Planning Needs (Perforce)
Different skills have different needs when it comes to planning. For a coder it may make perfect sense to plan work in two-week sprints, but for an artist, an asset may take longer than two weeks to complete.
How do you allow different skills to plan the way that works best for them? Some studios may choose to open up for flexibility: do whatever you like! But that tends to cause issues with alignment and silos of data, resulting in loss of vision. The plan becomes difficult to understand, and, maybe more importantly, you risk losing sight of what the game will be.
With the right approach, however, you can avoid these obstacles. Join backlog expert Johan Karlsson to learn:
-The balance of team autonomy and alignment.
-How to use the product backlog to align the project vision.
-How to use tools to support the flexibility you need.
Looking for a planning and backlog tool? You can try Hansoft for free.
Regulatory Traceability: How to Maintain Compliance, Quality, and Cost Effic... (Perforce)
How do regulations impact your product requirements? How do you ensure that you identify all the needed requirements changes to meet these regulations?
Ideally, your regulations should live alongside your product requirements, so you can trace among each related item. Getting to that point can be quite an undertaking, however. Ultimately you want a process that:
-Saves money
-Ensures quality
-Avoids fines
If you want help achieving these goals, this webinar is for you. Watch Tom Totenberg, Senior Solutions Engineer for Helix ALM, show you:
-How to import a regulation document into Helix ALM.
-How to link to requirements.
-How to automate impact analysis from regulatory updates.
Efficient Security Development and Testing Using Dynamic and Static Code Anal... (Perforce)
Be sure to register for a demo, if you would like to see how Klocwork can help ensure that your code is secure, reliable, and compliant.
https://www.perforce.com/products/klocwork/live-demo
If it’s not documented, it didn’t happen.
When it comes to compliance, if you’re doing the work, you need to prove it. That means having well-documented SOPs (standard operating procedures) in place for all your regulated workflows.
It also means logging your efforts to enforce these SOPs. These logs show that you took appropriate action in any number of scenarios, which can be related to regulations, change requests, the firing of an employee, the filing of an HR complaint, or anything else that needs a structured workflow.
But when do you need to do this, and how do you go about it?
In this webinar, Tom Totenberg, our Helix ALM senior solutions engineer, clarifies workflow enforcement SOPs, along with a walkthrough of how Perforce manages GDPR (General Data Protection Regulation) requests. He’ll cover:
-What are SOPs?
-Why is it important to have this documentation?
-Example: walking through our internal Perforce GDPR process.
-What to beware of.
-Building the workflow in ALM.
Branching Out: How To Automate Your Development Process (Perforce)
If you could ship 20% faster, what would it mean for your business? What could you build? Better question, what’s slowing your teams down?
Teams struggle to manage branching and merging. For bigger teams and projects, it gets even more complex. Tracking development using a flowchart, team wiki, or whiteboard is ineffective. And attempts to automate with complex scripting are costly to maintain.
Remove the bottlenecks and automate your development your way with Perforce Streams, the flexible branching model in Helix Core.
Join Brad Hart, Chief Technology Officer and Brent Schiestl, Senior Product Manager for Perforce version control to learn how Streams can:
-Automate and customize development and release processes.
-Easily track and propagate changes across teams.
-Boost end user efficiency while reducing errors and conflicts.
-Support multiple teams, parallel releases, component-based development, and more.
How to Do Code Reviews at Massive Scale For DevOps (Perforce)
Code review is a critical part of your build process. And when you do code review right, you can streamline your build process and achieve DevOps.
Most code review tools work great when you have a team of 10 developers. But what happens when you need to scale code review to 1,000s of developers? Many will struggle. But you don’t need to.
Join our experts Johan Karlsson and Robert Cowham for a 30-minute webinar. You’ll learn:
-The problems with scaling code review from 10s to 100s to 1,000s of developers along with other dimensions of scale (files, reviews, size).
-The solutions for dealing with all dimensions of scale.
-How to utilize Helix Swarm at massive scale.
Ready to scale code review and streamline your build process? Get started with Helix Swarm, a code review tool for Helix Core.
By now many of us have had plenty of time to clean and tidy up our homes. But have you given your product backlog and task tracking software as much attention?
To keep your digital tools organized, it is important to avoid holding on to inefficient processes. By removing the clutter in your product backlog, you can keep your teams focused.
It’s time to spark joy by cleaning up your planning tools!
Join Johan Karlsson — our Agile and backlog expert — to learn how to:
-Apply digital minimalism to your tracking and planning.
-Organize your work by category.
-Motivate teams by transitioning to a cleaner way of working.
TRY HANSOFT FREE
Shift to Remote: How to Manage Your New Workflow (Perforce)
The spread of coronavirus has fundamentally changed the way people work. Companies around the globe are making an abrupt shift in how they manage projects and teams to support their newly remote workers.
Organizing suddenly distributed teams means restructuring more than a standup. To facilitate this transition, teams need to update how they collaborate, manage workloads, and maintain projects.
At Perforce, we are here to help you maintain productivity. Join Johan Karlsson — our Agile expert — to learn how to:
-Keep communication predictable and consistent.
-Increase visibility across teams.
-Organize projects, sprints, Kanban boards and more.
-Empower and support your remote workforce.
Hybrid Development Methodology in a Regulated World (Perforce)
In a regulated industry, collaboration can be vital to building quality products that meet compliance. But when an Agile team and a Waterfall team need to work together, it can feel like mixing oil with water.
If you're used to Agile methods, Waterfall can feel slow and unresponsive. From a Waterfall perspective, pure Agile may lack accountability and direction. Misaligned teams can slow progress, and expose your development to mistakes that undermine compliance.
It's possible to create the best of both worlds so your teams can operate together harmoniously. This is how to develop products quickly, and still make regulators happy.
Join ALM Solutions Engineer Tom Totenberg in this webinar to learn how teams can:
- Operate efficiently with differing methodologies.
- Glean best practices for their tailored hybrid.
- Work together in a single environment.
Watch the webinar, and when you're ready for a tool to help you with the hybrid, know that you can try Helix ALM for free.
Better, Faster, Easier: How to Make Git Really Work in the Enterprise (Perforce)
There are a lot of reasons to love Git. (Git is awesome at what it does.) Let’s look at the 3 major use cases for Git in the enterprise:
1. You work with third party or outsourced development teams.
2. You use open source in your products.
3. You have different workflow needs for different teams.
Making the best of Git can be difficult in an enterprise environment. Trying to manage all the moving parts is like herding cats.
So, how do you optimize your teams’ use of Git — and make it all fit into your vision of the enterprise SDLC?
You’ll learn about:
-The challenges that accompany each use case — third parties, open source code, different workflows.
-Ways to solve these problems.
-How to make Git better, faster, and easier — with Perforce.
Easier Requirements Management Using Diagrams in Helix ALM (Perforce)
Sometimes requirements need visuals. Whether it’s a diagram that clarifies an idea or a screenshot to capture information, images can help you manage requirements more efficiently. And that means better quality products shipped faster.
In this webinar, Helix ALM Professional Services Consultant Gerhard Krüger will demonstrate how to use visuals in ALM to improve requirements. Learn how to:
-Share information faster than ever.
-Drag and drop your way to better teamwork.
-Integrate various types of visuals into your requirements.
-Utilize diagram and flowchart software for every need.
-And more!
Immediately apply the information in this webinar for even better requirements management using Helix ALM.
It’s common practice to keep a product backlog as small as possible, probably just 10-20 items. This works for single teams with one Product Owner and perhaps a Scrum Master.
But what if you have 100 Scrum teams managing a complex system of hardware and software components? What do you need to change to manage at such a massive scale?
Join backlog expert Johan Karlsson to learn how to:
-Adapt Agile product backlog practices to manage many backlogs.
-Enhance collaboration across disciplines.
-Leverage backlogs to align teams while giving them flexibility.
Achieving Software Safety, Security, and Reliability Part 3: What Does the Fu... (Perforce)
In Part 3, we will look at what the future might hold for embedded programming languages and development tools. And, we will look at the future for software safety and security standards.
How to Scale With Helix Core and Microsoft Azure (Perforce)
Microsoft Azure helps teams increase their speed, gain flexibility, and save time. Using Helix Core with Azure maximizes cloud benefits. You can scale to meet both current and future deployment demands. And this powerful combination helps secure your most valuable IP assets.
So, where do you start? What do you need to set up your teams for success? How can you expedite your pipelines to deliver ahead of your competitors?
Join Chuck Gehman from Perforce to learn more about:
-Compute, storage, and security options from Azure.
-Strategies that boost your cloud investment.
-Tips to secure your data.
-Best practices for global deployments.
Achieving Software Safety, Security, and Reliability Part 2 (Perforce)
In Part 2, we will focus on the automotive industry, as it leads the way in enforcing safety, security, and reliability standards as well as best practices for software development. We will then examine how other industries could adopt similar practices.
Modernizing an application’s architecture is often a necessary multi-year project. The goal: to stabilize code, detangle dependencies, and adopt a toolset that ignites innovation.
Moving your monolith repository to a microservices/component based development model might be on trend. But is it right for you?
Before you break up with anything, it is vital to assess your needs and existing environment to construct the right plan. This can minimize business risks and maximize your development potential.
Join Tom Tyler and Chuck Gehman to learn more about:
-Why you need to plan your move with the right approach.
-How to reduce risk when refactoring your monolithic repository.
-What you need to consider before migrating code.
Achieving Software Safety, Security, and Reliability Part 1: Common Industry ... (Perforce)
In part one of our three-part webinar series, we examine common software development challenges, review the safety and security standards adopted by different industries, and examine the best practices that can be applied to any software development team.
The features you’ve been waiting for! Helix ALM’s latest update expands usability and functionality to bring solid improvements to your processes.
Watch Helix ALM Senior Product Manager Paula Rome demonstrate how new features:
-Simplify workflows.
-Expand report analysis.
-Boost productivity in the Helix ALM web client.
All this and MORE packed into an exciting 30 minutes! Get inspired. Be extraordinary with the new Helix ALM.
Companies that track requirements, create traceability matrices, and complete audits - especially for compliance - run into many problems using only Word and Excel to accomplish these tasks.
Most notably, manual processes leave employees vulnerable to making costly mistakes and wasting valuable time.
These outdated tracking procedures rob organizations of benefiting from four keys to productivity and efficiency:
-Automation
-Collaboration
-Visibility
-Traceability
However, modern application lifecycle management (ALM) tools solve all of these problems, linking and organizing information into a single source of truth that is instantly auditable.
Gerhard Krüger, senior consultant for Helix ALM, explains how the right software supports these fundamentals, generating improvements that save time and money.
5 Ways to Accelerate Standards Compliance with Static Code Analysis (Perforce)
In mission- and safety-critical industries, static code analysis (SCA) is key to facilitating the development of robust and reliable software - yet, according to VDC Research, only 27% of embedded developers report using SCA tools on their current project.
Why is adoption low and what can you do to deploy SCA effectively?
Join Walter Capitani (Rogue Wave Software) and Christopher Rommel (VDC Research) as they review the results of the latest VDC Research paper on the trends, techniques, and best practices for standards compliance within embedded software teams. You will learn what organizations like yours are doing now and how to prepare for future challenges by:
-Understanding trends for standards compliance in 2018
-Identifying common challenges for automotive, medical, industrial automation, and other types of applications
-Learning best practices for achieving compliance using different tools, techniques, and processes
After attending this webinar, you'll be better prepared to plan and execute a standards compliance program for your team and maximize the effectiveness of static code analysis.
[Citrix] Perforce Standardisation at Citrix
MERGE 2013 THE PERFORCE CONFERENCE SAN FRANCISCO • APRIL 24−26
Perforce Standardisation at Citrix
Coping with Change in a Growing
Global Organisation
Jason Leonard & Lee Leggett, Citrix Systems
Abstract
This white paper describes the Perforce standard
environment (PSE) created at Citrix Systems to aid and
simplify the management and administration of
Perforce instances.
Introduction
The purpose of this white paper is to describe the Perforce standard environment (PSE)
created at Citrix Systems to aid and simplify the management and administration of Perforce
instances. It will cover the historical setup employed at Citrix for more than 10 years as well as
the new implementation linked to the development of the PSE. This will be followed by a
description of the syncing and building processes employed by Citrix, in part driven by some of
the complexities discussed in the past implementation. Then we will fully describe the PSE.
The paper concludes with a look at all the future improvements planned for the PSE and the
general Perforce implementation at Citrix.
Citrix Perforce History
Citrix has been a customer of Perforce for more than a decade. As a result, many of the early
practices and recommendations have been followed with little deviation. As Perforce has
grown, best-practice recommendations have naturally evolved as well. New practices and
ways of thinking, however, sometimes can meet with a lot of resistance in an established
environment. How Perforce was implemented and run changed very little at Citrix. New
Perforce instances continued to be created and product dependencies between these different
instances magnified exponentially. The result was management problems for our
administrators and frustration from our end users.
Example of Hardware Implementation
This example has been taken from one of the Citrix offices. It starts by defining an old
hardware implementation, describes what problems were encountered, and ends with a new
implementation, PSE, that is currently in use. The PSE was created for the new
implementation (as will be described later).
Previous Implementation
This hardware specification was in use until around 2010.
Physical Server
• Rack-mounted server
• Windows Server 2003
• 4 GB RAM
• 350 GB HDD organised in a RAID 5 array
Perforce Instance Configuration
• 7 Perforce master instances running on local hard disk drive (HDD):
• 5 of which linked via an authorisation server
• 1 of which used external authorisation via Active Directory LDAP; specifically used as
a test bed for Perforce version updates and trigger scripts
• 22 Perforce proxy (p4p) instances pointing to other Citrix Perforce instances at other sites.
All hosted on local HDD.
• Total licensed user count of nearly 2,000, around 150 local heavy users including the
automated build system in both the United Kingdom and United States
Performance
The following sync, branch, and resolve examples are all based on a sample 5 GB area. Some
sync time examples are given below:
Remote site: 3 hours
Local site: 45 mins
Remote site using proxy: 55 mins
Average p4 branch time: 2 mins
Average p4 resolve time: 3 mins
Sequential read to disk:
Sequential write to disk:
Longest checkpoint: 1.5 hours
Longest verify: 4 hours
Problems Encountered with the Previous Implementation
There were several issues with the old implementation encountered at multiple sites.
Perforce Server Downtime
This was largely due to checkpoints and other database intensive commands. With the size of
the Perforce instances at certain sites, checkpoints could easily last 16 hours or more. This
meant that certain instances were down for an entire day at the weekend. Although this is
potentially acceptable during the middle of a project, it quickly becomes intolerable as the
release date approaches. In some sites, checkpoints were fast enough to run every day; the
longest checkpoint took around 1.5 hours. However, once the company expanded to include
development sites in Australia, China, India, Japan and the U.S. West Coast, it became
impossible to select a checkpoint time that didn’t affect a development team somewhere.
System Stability
The Perforce instances were running on a 32-bit OS (Windows Server 2003), which meant that any one process had a 2 GB memory limit. With the number of commands being run against
certain servers, the p4d process was reaching this limit, which caused subsequent commands
to stall or fail completely. This problem was simply getting more severe as time progressed.
Disaster Recovery
Regular checkpoints were performed and tape backups of these as well as all the versioned
files were kept. However, with the hardware available, test restores were not a regular
occurrence.
Complexity & 24/7/365 Support
With little standardisation between sites, misconfigurations were common. A significant part of
the time spent solving a Perforce issue became learning how a particular Perforce instance
was configured rather than resolving the problem.
Perforce Knowledge
With a distributed part-time administration team, the level of Perforce experience varied wildly.
This meant that advanced administration of the Perforce instances became very difficult.
Issues raised included: What options do I pass to the checkpoint command? How do I do a
restore? What happens if I set this p4 configurable? Citrix needed a way to simplify the Perforce administration experience for less experienced users without losing some of the in-depth technical knowledge from the more experienced administrators.
Performance
Citrix has a global distributed workforce accessing Perforce instances and syncing files from
one geographic location to another. Users would often complain of slow client application and
sync times. A classic example of this comes from the way Citrix stores its toolset. Citrix has a
common set of build tools placed in Perforce for compiling most Citrix products. This toolset
has grown to exceed 30 GB of data, and is currently held in one of our U.S. sites. Users in any other geo syncing all of these tools could lose around 3.5 hours waiting for the sync to
complete. The use of proxies has helped reduce this problem dramatically, but the issue still
remains for the administrator. With most users required to sync all the tools before they set to
work, the ‘have’ tables on this Perforce instance grow very large. Given the 32-bit OS problem,
we end up with a memory swapping issue causing increasingly bad performance.
Revised Implementation
The following specification is in use today in one of the U.K. offices:
Physical Server
• Rack-mounted server
• Windows Server 2008 R2
• 16 GB RAM
• 450 GB HDD organised in a RAID 5 array (due to limited spindles). Two separate
partitions, one for the journal and the rest (430 GB) for Perforce metadata of the local p4d
processes.
• A further 800 GB is connected via iSCSI from a SAN device and contains the versioned files
Perforce Instance Configuration
• 7 Perforce master instances running on local HDD
• 5 of which linked via an authorisation server
• 1 of which used external authorisation via Active Directory LDAP, also used as a test
bed for Perforce version updates and trigger scripts
• 22 Perforce proxy (p4p) instances pointing at other Citrix Perforce master instances. The
proxy cache files are hosted on the SAN storage device.
• Total licensed user count of nearly 2,000, around 150 local heavy users including the
automated build system in both the United Kingdom and United States
Performance
The following sync, branch, and resolve examples are all based on a 5 GB sample area. Some
sync time examples are given below:
Remote site: 3 hours
Local site: 30 mins
Remote site using proxy: 35 mins
Average p4 branch time: 20 seconds
Average p4 resolve time: 30 seconds
Longest checkpoint: 45 mins
Longest verify: 1 hour 40 mins
As these examples illustrate, the improvement in sync times is modest but the improvement in
other database intensive commands such as resolves and verifies is massive. The overall
stability of the system has also been greatly improved with a marked decline in Perforce
problems reported by end users.
Users’ Interaction with Perforce
Over the years some interesting solutions to this Perforce instance explosion have surfaced.
The next few sections describe the problem and some of the efforts to solve it.
When something multiplies exponentially without any control, it causes massive knock-on
effects for whatever environment it is multiplying into. In terms of Perforce, we are talking
about groups of isolated individuals who, with the best intentions, decide to put their own
Perforce server into production in a company where working in silos was the norm.
Over time our company ethos has evolved, and a big push towards product integration has started to break down these silos.
Fortunately a core group of Perforce server owners stayed in contact from the outset and had
begun to implement some process over the Citrix Perforce architecture. These individuals put
together an idea based around how one might view Perforce from a high-level perspective, by
only having one piece of information that uniquely identifies the server instance.
Perforce Mesh Network
Usually a user needs two pieces of information in order to connect to an instance—the
hostname of the server that runs the instance and the port number on that machine. The
default port number for Perforce is 1666, but this can be easily changed. What if the port
number was the unique component? This would mean that a user could identify the instance
with only a port number. But what of the hostname? This question becomes more interesting
when we think about another Perforce technology, the proxy.
The Perforce proxy is a piece of Perforce technology that will redirect users’ commands to the
master instance, but will cache a local copy of any file data that might travel across the
connection for speed improvements for later sync requests.
Figure 1: All ports available on all servers
Suppose we have two machines and each runs a Perforce server instance (see Figure 1).
Suppose each of these machines is situated in a different country and assume that some sort
of WAN connects them. Each of the instances has a unique port number, but we also run a
Perforce proxy on each of the machines making the missing port available. Now, it doesn’t
matter which hostname the user employs, the instance is still accessible. Of course, there is a
small performance improvement if the user uses a server that is located nearby.
If we now expand this idea into a multi-instance, multi-server, multi-location environment, we
reveal a mesh network of Perforce services (see Figure 2). The users need only know their
local Perforce server hostname and then provide whichever port they wish to connect to.
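The mesh idea above can be modelled in a few lines of Python. This is only an illustration: the hostnames and the exact ports are invented, not Citrix's real topology. Each host answers on every port, either as the local master instance or as a proxy to the master elsewhere.

```python
# Minimal model of the two-server mesh in Figures 1-2: each host serves
# its own master instance directly and proxies the other host's port.
# Hostnames ("ukhost", "ushost") and ports are illustrative only.
SERVICES = {
    "ukhost": {1666: "master", 2666: "proxy -> ushost:2666"},
    "ushost": {2666: "master", 1666: "proxy -> ukhost:1666"},
}

def reachable(host: str, port: int) -> bool:
    """In the mesh, any instance is reachable via any host."""
    return port in SERVICES[host]

# Every port is available on every server, as the text describes.
assert all(reachable(h, p) for h in SERVICES for p in (1666, 2666))
```

The point the model makes is exactly the one in the text: the user only has to know a nearby hostname, because the port alone identifies the instance.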
At Citrix, how we number the ports is important but mostly from an administration point of view.
We use a fairly easy scheme to identify which site the instance is at and some small indication
of the usage. Each port uses 4 digits, just like the default port number for Perforce, but the first
digit describes the geographic location. For example, 1 = United Kingdom, 2 = West America,
3 = East America, 4 = Australia, 5 = China, and so on. To the user it’s all transparent, but from
an administrator perspective it serves as a reminder.
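The numbering scheme can be sketched as a small lookup. The site codes 1 through 5 come from the text above; the helper function and the treatment of the remaining digits as a "usage code" are our illustration, not a Citrix tool.

```python
# Sketch of the 4-digit port-numbering scheme: the first digit encodes
# the geographic site, the rest hint at usage. Codes 1-5 are from the
# paper; the mapping and helper are illustrative.
SITE_BY_DIGIT = {
    "1": "United Kingdom",
    "2": "West America",
    "3": "East America",
    "4": "Australia",
    "5": "China",
}

def describe_port(port: int) -> str:
    """Return the site encoded in the first digit of a 4-digit port."""
    digits = str(port)
    if len(digits) != 4:
        raise ValueError("scheme assumes 4-digit ports, like the default 1666")
    site = SITE_BY_DIGIT.get(digits[0], "unknown site")
    return f"port {port}: {site}, usage code {digits[1:]}"

print(describe_port(1666))  # port 1666: United Kingdom, usage code 666
```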
Multi-Port Problems
Eventually, no matter how hard you try to keep products built out of one port, through mergers
and internal reorganisations you will find that products will build out of multiple ports. For Citrix,
it didn’t take long before this started to happen. Unfortunately it causes a cascade effect on
tools and systems. One example of this is the build system.
We modified our build system to control the multi-port issue. But this small change led to some
interesting build numbers (e.g., 112233#443322). Normally the users would know which port
their product source code was in; with only one changelist number this was easy. But now we
have a changelist for each of the ports the product source is located on. In this example we
see one at change 112233, and the other at change 443322. To decode the combined build
number, more information is needed—the port numbers and the order in which they appear in
the build number. So by adding the following port ordering string—1666#2666—we can match up the changelist and the port.
Figure 2: Mesh network
What happens now if a developer makes a change to the code that affects multiple ports?
This is where things can get complicated. It’s up to the developer to figure out where the
source code came from and separate the change on the ports that the code change affects.
This has caused lots of frustration in the past and continues to do so.
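Decoding such a combined build number can be sketched in a few lines, assuming the '#'-separated format described above (the function name is ours, not part of the Citrix build system):

```python
def decode_build_number(build: str, port_order: str) -> dict:
    """Pair each port in the ordering string with its changelist.

    build:      combined build number, e.g. "112233#443322"
    port_order: matching port ordering string, e.g. "1666#2666"
    """
    changes = build.split("#")
    ports = port_order.split("#")
    if len(changes) != len(ports):
        raise ValueError("build number and port ordering disagree")
    return {port: int(change) for port, change in zip(ports, changes)}

# The example from the text: change 112233 on port 1666, 443322 on 2666.
print(decode_build_number("112233#443322", "1666#2666"))
# {'1666': 112233, '2666': 443322}
```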
Solutions
Two years ago, at the San Francisco 2011 Perforce user conference, a colleague of ours
presented "Creating a World-Class Build System, and Getting It Right." It covered in-house
techniques Citrix has developed to fulfill the engineering build requirements. The next evolution
of this over the years has been a rebranding and consolidation exercise to present our
developers with a standard end-to-end build system called “Solera”. It continues to be
internally developed and has the following five parts:
• Solera Sync
• Solera Build
• Solera Controller
• Solera Release
• Solera Layout
Each part is very distinct and covers a specific area of the build system. Because our focus
here is on Perforce at Citrix, we will describe only Sync and Controller.
Solera Sync
Solera Sync tries to reduce the complexity of multi-port syncing by providing a way for us to
describe (using configuration files) what a product component requires in the way of inputs for
it to build successfully. The inputs are typically source code but could as easily be SDKs and
tools, including compilers.
Each of the product components has a unique name usually made up of the component’s
name and the branch. For example, the Solera Sync mainline code could have a component
name of solerasync_main. For users to obtain the correct build environment and source code
for this component, they would simply instruct Solera Sync to sync ‘solerasync_main’.
An interesting side effect of doing this syncing is that we have the opportunity to insert some
extra information into folders about where the files come from. We make use of P4CONFIG
files so that if you were using the p4 command-line, you could easily submit files without
having to remember which port the source code came from.
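As a rough illustration of the P4CONFIG trick, a sync tool might drop a `.p4config` file at the root of each component tree. This assumes the `P4CONFIG` environment variable is set to `.p4config` (a common convention); the hostname, port, and directory layout are invented for the example and are not Solera's actual implementation.

```python
import os

def write_p4config(component_dir: str, host: str, port: int) -> None:
    """Write a P4CONFIG file so `p4` commands run inside this tree
    automatically target the right server:port, as the text describes."""
    os.makedirs(component_dir, exist_ok=True)
    path = os.path.join(component_dir, ".p4config")
    with open(path, "w") as f:
        f.write(f"P4PORT={host}:{port}\n")

# Hypothetical example: the solerasync_main component lives on a UK port.
write_p4config("solerasync_main", "ukperforce", 1666)
```

With such a file in place, a developer can submit from the command line without remembering which port the component came from.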
Solera Sync also helps with determining the best way to obtain the inputs required. The
Perforce server hostnames are site specific. Therefore with a little knowledge of the Citrix
internal network, its subnets, and geo time zones, we can determine the correct Perforce
server to use for maximum performance.
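The server-selection logic could look something like the following sketch, assuming a simple subnet-to-hostname table; all subnets and hostnames here are made up, since the paper does not give the real ones.

```python
import ipaddress

# Illustrative subnet-to-server map; the real Citrix subnets and
# hostnames are site specific and not given in the paper.
SERVER_BY_SUBNET = {
    ipaddress.ip_network("10.1.0.0/16"): "ukperforce",  # United Kingdom
    ipaddress.ip_network("10.2.0.0/16"): "usperforce",  # U.S. West Coast
}

def nearest_server(client_ip: str, default: str = "ukperforce") -> str:
    """Pick the Perforce hostname on the client's subnet. Any port is
    reachable from any host in the mesh, so this only affects speed."""
    addr = ipaddress.ip_address(client_ip)
    for subnet, host in SERVER_BY_SUBNET.items():
        if addr in subnet:
            return host
    return default

print(nearest_server("10.2.34.5"))  # usperforce
```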
Solera Controller
This part of Solera is at the heart of the build system and is the automated continuous
integration (CI) engine.
Ever since Citrix has been considering the cloud and what that means for its technology, a
group of build engineers has debated the merits of viewing the controller as a cloud controlling technology. They have considered how to decouple systems from source control and
challenged the ideas of fixed infrastructure machines in favour of a rich and flexible system
that is almost organic in nature.
This is how we view the next generation of CI engines, and with the help of the virtualisation
technology Citrix has built up over the years and its talented engineers, we believe that this
vision is our future.
Solera Controller builds on the ideas of Solera Sync and therefore gains the simplicity of
syncing our products. However, it must still keep some control over the syncing process
because the controller needs to keep a track of what inputs it used in the construction of any of
our builds for reproducibility reasons.
Reporting Services
A number of Citrix tools can extract data from our source and build servers and display it in a
variety of different ways. Historically it’s been hard to truly visualise how our products are built,
particularly when they are made up of smaller components, SDKs, and libraries that could be
built in many other geographies and build systems. If changes go into one of the SDKs, testers
need to know when they can test the product for the fix.
‘Sniff’, one of our newest engineering tools, was developed by a Citrix engineer for this very
purpose and has quickly become one of the handiest tools in our engineers’ toolboxes. It
collects data from all of our Perforce instances, collates it with the data from our build systems,
and pulls in any extra metadata from the various control files we have dotted around. It allows
any engineer to pull up and drill down on any of these items. It can even draw diagrams that
show how a change to one component gets pulled into other components and eventually
bubbles up until it’s on one of our DVDs. For a test engineer this tool has helped to keep focus
and ensure effort isn’t wasted.
Citrix Perforce Standard Environment (PSE)
The PSE was created to solve a myriad of problems plaguing the implementation and
administration of Perforce at Citrix.
Over time the company has grown to incorporate other sites that own Perforce servers. This
led to the need for a common environment that everyone understood and that let less-advanced
Perforce administrators easily perform operations on servers.
A major driving factor for needing to solve the administration problem was the loss of
Perforce knowledge within a key team. This team was seen as a thought leader when it came
to Perforce, particularly one individual who had been using Perforce since its inception. The
team had developed many scripts using advanced ideas and techniques that quickly became
unsupportable once that knowledge was gone. A new set of admins, made up mostly of
beginners and intermediates, attempted to pick up the pieces, but the decision was quickly
made to start afresh with a system that all administrators could understand and use effectively
and confidently.
After reading lots of white papers and information on the Perforce website, a team set about
creating an administration environment that fitted Perforce for Citrix. And so the Citrix Perforce
Standard Environment (PSE) was born.
Overview of the PSE
The PSE is fundamentally a set of scripts and configuration files supporting the running of
multiple Perforce instances on a single machine.
The PSE defines three types of Perforce instances:
1. A “Root”: This would be a standard p4d Perforce instance. Sometimes called a master.
2. A “Proxy”: This is a standard p4p (Perforce proxy) instance pointing at a “Root”.
3. A “Replica”: This is a p4d instance that is configured as a replica of a “Root”.
The PSE also can support a “multi-version” environment. This means that each Perforce
instance controlled by the PSE can be running a different version of the Perforce software. For
example, one could be at 2012.2 while another is running at 2011.1. Because of the large
number of Perforce instances in Citrix, the ability to upgrade one piece at a time is a necessity.
This certainly does not mean that Citrix should be running several different Perforce versions
at once; it simply means that upgrades can be rolled out and tested in a structured way with
the ultimate objective of all the Perforce instances at least at one location being the same
Perforce version.
PSE Configuration Files
The PSE has two key configuration files. The first, config.txt, describes how the machine is
configured and where instance artefacts are to be stored, as well as default values for certain
actions. The second, site.txt, describes which instances are to be serviced by the machine and
how they are to be run.
Config.txt
Figure 3 presents an example of the type of information this file contains.
Figure 3: config.txt example
BinBasePath = C:\Perforce
JournalBasePath = D:
LogBasePath = D:
MetadataBasePath = E:
VersionBasePath = F:
These paths describe the base paths of each of the artefacts required by the Perforce
software. This allows the flexibility to define different types of storage for the artefacts
according to their needs. For example, the metadata is best on very fast access drives, while
the journal is best on a sequential write optimised file system.
PathSep = \
This allows the paths formed by the PSE scripts to support different platform conventions.
P4Roots = p4roots
P4Proxy = p4proxy
P4Replica = p4replica
For each instance type supported by the PSE, a corresponding folder is created under each of
the base paths. This means that when viewing the folders with a file browser, it is clear what
the instance type is.
Under each of these instance type folders, port number folders are created containing the
actual artefact files for the instance.
For example, a path labelled E:\p4roots\1666 would contain the metadata or database files for
port 1666, which is a root or master instance.
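The real PSE scripts are Perl; the following Python sketch only illustrates how such artefact paths are composed from the config.txt values above (the function name and structure are illustrative, not the PSE's actual code):

```python
# Sketch: compose PSE artefact paths from config.txt values.
CONFIG = {
    "MetadataBasePath": "E:",
    "PathSep": "\\",          # Windows path separator from config.txt
    "P4Roots": "p4roots",
    "P4Proxy": "p4proxy",
    "P4Replica": "p4replica",
}

# Map an instance type to its config.txt folder key.
TYPE_FOLDER = {"root": "P4Roots", "proxy": "P4Proxy", "replica": "P4Replica"}

def metadata_path(instance_type, port):
    """Compose e.g. E:\\p4roots\\1666 for a root instance on port 1666."""
    sep = CONFIG["PathSep"]
    folder = CONFIG[TYPE_FOLDER[instance_type]]
    return sep.join([CONFIG["MetadataBasePath"], folder, str(port)])

print(metadata_path("root", 1666))  # E:\p4roots\1666
```

The same composition applies to the journal, log, and version-file base paths, which is what lets each artefact type live on storage suited to its access pattern.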
P4Progs = bin
This path is concatenated to BinBasePath to form a path that describes where the p4
executables downloaded from Perforce.com will be stored.
Licenses = license
LicenseFiles = license.10.30.*.*
The “Licenses” path is concatenated to BinBasePath to form a path that describes where the
license files for the Perforce server live. The “LicenseFiles” path describes which of the license
files to use. This allows slightly better control of license files requested from Perforce.
NagiosServer = *********
NagiosPort = *******
Nagios is used to monitor the Perforce machines. However, to monitor a scheduled process,
Nagios recommends the use of passive checks. This means that once either a checkpoint or a
verify is complete, the script will contact the Nagios server to supply the result.
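The paper doesn't show the exact submission mechanism, but a Nagios passive service check result ultimately takes the shape of a PROCESS_SERVICE_CHECK_RESULT record. A hedged sketch of formatting one (host and service names are hypothetical; the transport, such as NSCA, is out of scope here):

```python
import time

def passive_check_line(host, service, code, output, ts=None):
    """Format a Nagios PROCESS_SERVICE_CHECK_RESULT external command.

    code: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN.
    """
    ts = int(ts if ts is not None else time.time())
    return f"[{ts}] PROCESS_SERVICE_CHECK_RESULT;{host};{service};{code};{output}"

line = passive_check_line("p4server01", "checkpoint-2211", 0,
                          "checkpoint completed", ts=1357000000)
print(line)
```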
Checkpoint = online
The “Checkpoint” value simply controls what the default implementation for checkpoints is—
that is, whether checkpoints happen live (online) or on a replica server (offline). This can be
overridden in the site.txt file on a per instance basis.
RollOverExtension = _log.txt.gz
RollOverToKeep = 5
The PSE keeps log files of upkeep tasks such as checkpoints and verifies. The rollover values
control what file extension to add to previously run log files and how many of these logs to
keep.
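The rollover behaviour these two values control can be sketched as follows. This is illustrative Python, not the PSE's Perl, and the timestamped filename scheme is an assumption:

```python
import gzip
import os
import shutil
import time

ROLLOVER_EXT = "_log.txt.gz"   # RollOverExtension from config.txt
KEEP = 5                        # RollOverToKeep

def roll_over(log_path):
    """Compress the previous run's log and prune old rollovers."""
    if not os.path.exists(log_path):
        return
    stamp = time.strftime("%Y%m%d%H%M%S")
    rolled = f"{log_path}.{stamp}{ROLLOVER_EXT}"
    with open(log_path, "rb") as src, gzip.open(rolled, "wb") as dst:
        shutil.copyfileobj(src, dst)
    os.remove(log_path)
    # Keep only the newest KEEP rollovers for this log.
    folder = os.path.dirname(log_path) or "."
    base = os.path.basename(log_path)
    rollovers = sorted(f for f in os.listdir(folder)
                       if f.startswith(base + ".") and f.endswith(ROLLOVER_EXT))
    for old in rollovers[:-KEEP]:
        os.remove(os.path.join(folder, old))
```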
CheckpointSchedule = Sun|Mon|Tue|Wed|Thu|Fri|Sat#1#01:00:00
VerifySchedule = Sun#1#03:00:00
The final items control on what days and times checkpoints and verifies occur. So in this
example, checkpoints occur every day at 1 a.m. and verifies are run each Sunday at 3 a.m.
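The schedule string reads as a pipe-separated list of days, a middle field, and a time of day. A small parser sketch; the middle field's exact meaning is not documented in the paper, so it is carried through untouched:

```python
def parse_schedule(spec):
    """Parse a PSE schedule string like 'Sun#1#03:00:00'.

    The middle field's meaning is not spelled out in the paper,
    so it is returned as-is. Sketch only; the real PSE is Perl.
    """
    days, middle, at = spec.split("#")
    return {"days": days.split("|"), "middle": middle, "time": at}

print(parse_schedule("Sun|Mon|Tue|Wed|Thu|Fri|Sat#1#01:00:00"))
```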
Site.txt
This file controls configuration of the particular Perforce instances that run on the machine.
Each instance includes the port number, the version of the Perforce software to use, and the
logging level to use. Both proxies and replicas always have a pointer to their corresponding
root or master instance, which could be on the same or different machine. Roots have the
optional ability to point to an authorisation instance. Overrides are used to change default
values for particular features (see Figure 4).
Figure 4: site.txt example
Port Number
The port number used by the Perforce software to expose the service to users must be unique.
It is also used when executing PSE scripts to identify which port to perform operations on.
Perforce Version
This field is used to identify the Perforce version to use when running the Perforce software.
No provision is made for patched Perforce software.
Type
The type identifies how the PSE will treat the port when executing certain scripts. Currently this
field can take on one of the following values: “root”, “proxy”, “replica”.
• Root ports use p4d and enable scheduled tasks for checkpoints and verifies.
• Proxy ports use p4p and disable most port management scripts that are meaningless.
• Replica ports use p4d as in roots, but don’t add checkpoint or verify schedules.
Auth Port
This is specifically for root ports and specifies the location of the authorisation port that p4d
should use when authenticating users and checking permissions and group membership.
Proxy Port
This specifies the port for the proxy server.
Master Port
This is specifically for replica ports and indicates the port that the replica server is to pull
metadata and/or version files from. It also provides more convenience when using the restore
script to restore port metadata from a checkpoint on another machine also running the PSE.
Log Level
This field allows the administrator to control the amount of logging provided by the Perforce
software. The logging is written out to the log path defined in the config.txt.
Overrides
This field gives the administrator more control of exactly how the PSE will run the port, by
changing the configuration of the features provided—for example, offline checkpoints and
named configuration (P4NAME).
Examples
The configuration file in Figure 4 shows that instance 2266 is a master port version 2011.1,
which doesn’t have an authorisation port and is run at log level 0. Instance 2244 is also version
2011.1 at log level 1, but it does use an authorisation instance. Instance 1279 is version
2012.2, also a master instance, but it uses an override and overrides the online checkpoint set
in config.txt and performs an offline checkpoint instead using instance 1279 on server
Chfofflineserver.
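Figure 4 itself is not reproduced here; based on the description above, its site.txt entries would plausibly look like the following. The authorisation port for 2244, the log level for 1279, and the override syntax are not given in the text, so those values are placeholders:

```
Port  Version  Type  AuthPort  LogLevel  Overrides
2266  2011.1   root  -         0         -
2244  2011.1   root  1666      1         -
1279  2012.2   root  -         0         Checkpoint=offline;Chfofflineserver:1279
```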
Using the PSE Scripts
The following instructions demonstrate the PSE scripts. They start by configuring PSE for a
new port, then go through the steps to enable, run, and finally perform other operations on the
port.
Once the two configuration files have been populated and a starting Perforce version has been
downloaded, it’s possible to create a Perforce instance using the scripts that come as part of
the PSE. A walk-through of this process follows.
Configuring PSE for a New Port
The site.txt needs to be edited to include the new Perforce instance to be run:
Port Version Type AuthPort LogLevel
2211 2012.2 root - 1
Set Up and Run the Port
As a first step, ensure that the latest hotfix of the required Perforce version is on the server:
download.pl 2012.2
Or
download.pl 2211
Once this is available, the admin then needs to run schedule.pl for the specified instance. This
will create the Windows scheduled tasks to run the port, checkpoint, and verify. Note that the
PSE actually takes a copy of the downloaded p4d.exe and renames it by appending the port
number. This allows the administrator to better identify which p4d.exe corresponds to which
port in the Task Manager processes list. For example, for Perforce instance 2211, the p4d
executable would be named p4d-2211.exe:
schedule.pl 2211
Next the Perforce instance needs the Windows Firewall opened so that users can access it:
firewall.pl 2211
Now the port can be started. We can use the schedule script again, but this time instructing it
to run the schedule, not create it:
schedule.pl 2211 --run
A new Perforce instance is now running on port 2211 and is available to users.
Stopping the Port
If an administrator needs to stop access to a Perforce instance, then rather than stopping the
port and trying to run it on “localhost:port”, the firewall can just be closed on that port while
keeping the Perforce instance running:
firewall.pl 2211 --delete
To remove a Perforce instance, only two commands are needed:
schedule.pl 2211 --end
schedule.pl 2211 --delete
These commands, however, will not remove the metadata or versioned files from the HDD of
the server; the admin would have to manually delete those folders. This functionality hasn’t
been added as a deliberate safety measure; making deletion of all Perforce instance data easy
was considered too risky.
Performing Other Port Operations
If an upgrade of the Perforce instance is required, then the following command can be run:
upgrade.pl 2211 2013.1
Upgrade.pl performs several functions here. The first step is to stop the instance with "p4
admin stop". Then a checkpoint is taken; once this is complete, the actual upgrade is performed
and the new version is automatically written into site.txt. Next a post-upgrade checkpoint is
taken and, if it is successful, the script restores from that checkpoint. This step ensures that
any large deletions of files or clients are removed from the db.have data table.
In the PSE, checkpointing a Perforce instance is a case of simply running a single command:
checkpoint.pl 2211
The actual checkpoint mechanism can be configured differently for each port. The checkpoint
will either happen "online", which will momentarily lock the database tables, or "offline", which
will perform the checkpoint on a replica of this port and therefore not cause any downtime.
If administrators want to restore from a checkpoint, they have two options: Restore a specific
checkpoint or the “latest” one. To restore a specific checkpoint, the administrator simply runs:
restore.pl 2211 <full checkpoint filename>
To restore the latest checkpoint, simply replace <full checkpoint filename> with “latest”.
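Resolving "latest" can be as simple as picking the highest checkpoint sequence number on disk. A sketch assuming p4d's default checkpoint.<N> naming; the real restore.pl is Perl and may resolve it differently:

```python
import glob
import os

def latest_checkpoint(ckp_dir):
    """Return the checkpoint file with the highest sequence number.

    Assumes p4d's default naming, checkpoint.<N>, with a matching
    checkpoint.<N>.md5 that we skip. Sketch only.
    """
    files = [f for f in glob.glob(os.path.join(ckp_dir, "checkpoint.*"))
             if not f.endswith(".md5")]
    # Compare numerically so checkpoint.10 beats checkpoint.9.
    return max(files, key=lambda f: int(f.rsplit(".", 1)[1]))
```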
To verify a Perforce instance outside of the normal scheduled verify, the following command is
needed:
verify.pl 2211
Offline Checkpointing
Checkpointing a Perforce instance that is configured to use an offline checkpoint server is
handled differently in the PSE, even though the command is the same. Figure 5 illustrates the
process. First, note that the replica port that actually performs the checkpoint is configured to
pull metadata from the root port using the "p4 pull" command.1 The root port also needs
to have its "checkpoint" value in the configuration set to the hostname and port number of the
replica offline checkpointing server. By executing the PSE checkpoint script as normal, the
checkpoint proceeds as follows:
1. The replica port is told to “schedule” the checkpoint, with the standard “p4 admin
checkpoint” command.
2. The root port now needs only to rotate the database journal, which causes the replica
port to pull over the database changes, detect the rotation, and perform the checkpoint.
3. The script then waits for the MD5 file from the checkpoint to be created; because this is
the last file created by the checkpoint process, it is seen as the end of the checkpoint.
4. The checkpoint files are then copied to the root port version file location, just as they
would be during an online checkpoint.
Figure 5: Offline checkpoint procedure
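Steps 1 and 2 above map onto two standard Perforce commands. The sketch below returns them as strings for inspection rather than executing anything; the flags and orchestration in the real Perl scripts may differ:

```python
def offline_checkpoint_commands(root, replica):
    """Ordered commands for the offline checkpoint flow.

    1. Ask the replica to schedule a checkpoint; on a replica,
       'p4 admin checkpoint' takes effect at the next journal rotation.
    2. Rotate the journal on the root; the replica pulls the rotation
       and performs the checkpoint.
    The real script then polls for the checkpoint's .md5 file and
    copies the checkpoint back to the root's version file location.
    """
    return [
        f"p4 -p {replica} admin checkpoint",
        f"p4 -p {root} admin journal",
    ]
```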
Upgrading an offline checkpointed Perforce instance is the same as the usual upgrade
process, except that the offline checkpoint server must be upgraded before the live server.
This enables the offline checkpoint server to handle journal entries made in the old or new
version. Also when the upgrade of the main instance is performed, the checkpoints that occur
as part of the upgrade are all performed online, not offline. This is done to both simplify the
upgrade process and give some online checkpoints that can be contrasted with offline ones to
ensure that everything is working correctly.
1 Configuration details can be found here:
http://www.perforce.com/perforce/doc.current/manuals/p4sag/10_replication.html
PSE in Citrix
The PSE has been in production at the U.K. site for nearly one year, although offline
checkpoints have only recently been introduced. The benefits noticed by the U.K.-based
Perforce administrators include faster issue resolution, less downtime in a disaster recovery
scenario, and simpler administration and monitoring. Since the initial phase in
the United Kingdom, the PSE has now also been rolled out in the India, China, and U.S.
offices. Further rollouts to all other Citrix development sites are planned.
Between the two U.K. offices a disaster recovery event was simulated. One site needed to
bring up all the Perforce instances hosted there in the other site. With the use of the SAN
replication technology and the PSE, all Perforce instances were restored in an hour. Without
the PSE, this would have taken significantly longer.
Futures
Recent new features of Perforce have truly opened some interesting paths for us to explore
and opportunities for us to innovate. Ultimately we want to address the hard problems facing
us in order to get us into better shape for the future.
Merging Ports
Since attending the Perforce RoadShow events, we have discussed some interesting ideas
around the possibility of merging Perforce ports. Although on the surface, this sounds like an
easy task, in reality, it is not. Considerations about other services that use Perforce as an
information repository have to be taken into account. They include change review tools, build
databases, e-mails, internal technical documentation, and configuration files. Editing all these
links would be a massive undertaking, so the merge must be performed in a way that does not
invalidate these links.
One way of doing this is to take two Perforce master databases and use Perforce's perfmerge
tool on them to create a third, combined database. This process is then repeated over and over
until the result is one master Perforce server (see Figure 6). Our issue is that we have many
systems that point to these Perforce servers (bug tracking, build database, even our syncing
tools), so to facilitate this we would have to keep the old ports live but in read-only mode. This
situation would remain until a specified amount of time elapsed, at which point the old servers
would be backed up and then switched off.
Figure 6: Merging ports
Another way would be to slowly centralise the data by only submitting new projects to a single
port. Eventually the data on the other instances will become old and only made available for
reference or maintenance.
Perforce Federated Architecture
Database replication isn’t a new concept, but recently Perforce has been looking into what it
means for the Perforce server. Mostly it’s about addressing the load a company may put on
the Perforce server and its associated network. With the help of replication, some of that load
can be taken away from the master server and handled by replica servers, and other networks.
Lots of excitement has been generated about the impact federated architecture will have on
the design of the Citrix Perforce infrastructure. Ideas include improving site proxies, creating
dedicated build farm proxies, and making enhancements to other internal tools that put a
heavy load on the Perforce server, such as our reporting services.
Secure authentication is of particular interest, and the ability to tie into Active Directory to
reduce the management overhead of user creation and deletion is a must.
Administration of the users, groups, and protections is probably the worst part of our
administrators’ jobs. By taking advantage of replicated authentication servers, we should be
able to centralise the configuration. That would reduce the administration overhead and the
pain it causes users when they have to log in to every port they use.
Perforce Standard Environment (PSE)
Perforce is constantly improving its software, adding more and more features and
tweaking the current ones. Therefore the PSE needs to be an ever-evolving toolset that strives
to support key administration features. During its development, it has been pulled in a number
of ways to make it fit, and at times maintaining the idea of simplicity has been tricky. Here we
offer ideas for extension to the toolset and mention problems we are encountering.
Logging
Gradually as we have seen problems occur with our Perforce deployment running inside of the
PSE, we have increased the logging functionality of our scripts. This enables us to capture
error conditions that occur and use our existing monitoring servers to receive the alert
condition and notify us of the failure.
However, we currently don't do much in the way of processing the logging output from
Perforce itself and therefore find it hard to diagnose problems such as a hung server. What we
would like to do is couple the log output to a log parsing tool that could give us a clearer idea
of the problem the server is experiencing and allow us to take action quickly.
Replicas
Federated Perforce or Perforce replication has only a basic implementation within the PSE.
We are able to bring up a port as a replica, but this functionality just limits the abilities of a
normal root type port. As administrators, we can modify the Perforce server configuration
variables and bring the server up with a particular name to enable a certain setup, but this is
rather clunky and adds complexity to using the PSE. Ideally we would like a more fluid and
natural way to bring up replica services.
The PSE currently doesn’t support upgrading with a replica. The only way to do this now is to
take down the replica, upgrade the master, then replay the new checkpoint into the replica and
start it again.
We would like to take advantage of Windows Services for running Perforce, rather than the
slightly complicated way of using Windows Scheduler.
Replica servers can be run in a number of different modes; we would like to allow the PSE to
support some of the other modes such as smart proxy replica and build farm replica.
The Vision for PSE
Everyone needs an out-of-this-world vision to aim for. We may never reach it, but it allows us
to daydream and inspires us to drive on with a project.
The PSE started as a bunch of helper scripts to aid administrators who were less confident
with Perforce. Taking this to the next level, we need to start to look at what an administrator
needs to know about the current state of the Citrix Perforce architecture. Finding a way to
visualise this and log how the system is performing over time will greatly help in making good
decisions going forward.
Suppose we had a large-scale system with multiple servers, in multiple locations all running
Perforce software, which services users all over the world. What if we had a view onto this
system such that we could make changes to the environment easily and quickly? What if this
view could show us things like server activity, load, alerts, status of checkpoints, and verifies?
Imagine a scenario where one of the servers was being hit hard by an automated system that
had gone astray. It should be relatively simple to isolate the traffic from that Perforce server, or
find the user and work with that user to resolve the issue, or even deploy a new replicated
smart proxy to deal with the new load.
How about a system that could automatically react to failures by activating hot standby
servers? Or maybe even react to a failure that is about to happen?
What if all of this was as simple as a few clicks on a user interface?
This isn’t an impossible vision, and with every version of the PSE, we move closer to this goal.
Situations like upgrading a server with multiple replicas require some synchronisation between
the replicas and the master. It’s not going to be long before we connect our servers with
software and run the PSE like a distributed application. Providing a view on to this type of
application would be a logical next step.
Conclusion
The Citrix Perforce architecture certainly isn’t a recommended strategy. For those in a similar
situation to Citrix, this white paper offers some ideas and thoughts about how to maintain a
working system. For those just starting out on the road to Perforce, here are a few pointers on
the right path:
• Ensure you only have one Perforce instance for your company
• Make use of the great replication features of Perforce for your single instance
• Having a dedicated team that owns and controls the evolution of a version control
system at a company is important, but doing this from the outset is priceless