This document discusses the integration of Perforce source control software into the Anvil game engine used for Assassin's Creed games. It describes how Ubisoft built a custom integration library called Guildlib using the Perforce C++ API to fully integrate Perforce into the engine. This allowed them to store massive game assets efficiently in a "Bigfile" and handle synchronization of hundreds of developers across locations. The tight integration provided benefits like improved performance, custom file statuses, and full control over file system operations.
Best Practices For Game Development Using Perforce Streams (Perforce)
To build a future hit, AAA game development teams need to manage a complex environment. Making a game involves a lot of (big) files, many contributors, and millions of changes. The sheer number of branches associated can be overwhelming for any team.
That’s why 19 of the top 20 game development studios choose Helix Core –– version control from Perforce.
Take Sumo Digital. They use Helix Core to manage obstacles, visualize code, and integrate the tools they need. And they use Perforce Streams –– branching and merging in Helix Core –– to guide development and streamline their workflows.
Join Mark Washbrook and Tony Crowther from Sumo Digital, along with Chuck Gehman from Perforce, to learn:
-Key version control challenges for AAA game development.
-What is Perforce Streams?
-How Sumo Digital uses Perforce Streams to integrate with Unreal.
Discover how your team can benefit from using Streams.
How we optimized our Game - Jake & Tess' Finding Monsters Adventure (Felipe Lira)
Presentation I gave at Unite Boston 2015. I'll cover a few techniques we used to optimize our Unity mobile game - Jake & Tess' Finding Monsters Adventure
Branching Out: How To Automate Your Development Process (Perforce)
If you could ship 20% faster, what would it mean for your business? What could you build? Better question, what’s slowing your teams down?
Teams struggle to manage branching and merging. For bigger teams and projects, it gets even more complex. Tracking development using a flowchart, team wiki, or a white board is ineffective. And attempts to automate with complex scripting are costly to maintain.
Remove the bottlenecks and automate your development your way with Perforce Streams –– the flexible branching model in Helix Core.
Join Brad Hart, Chief Technology Officer and Brent Schiestl, Senior Product Manager for Perforce version control to learn how Streams can:
-Automate and customize development and release processes.
-Easily track and propagate changes across teams.
-Boost end user efficiency while reducing errors and conflicts.
-Support multiple teams, parallel releases, component-based development, and more.
Unity - Internals: memory and performance (Codemotion)
by Marco Trivellato - In this presentation we will provide in-depth knowledge about the Unity runtime. The first part will focus on memory and how to deal with fragmentation and garbage collection. The second part will cover implementation details and their memory vs cycles tradeoffs in both Unity4 and the upcoming Unity5.
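One garbage-collection mitigation commonly covered in Unity optimization talks of this kind is object pooling: allocate objects up front and recycle them so the hot loop never allocates and never feeds the collector. The sketch below is a generic, illustrative Python version of the pattern (Unity itself uses C#; all class and method names here are invented, not from the talk):

```python
class Bullet:
    """Example pooled object; reset() reinitializes state for reuse."""
    def __init__(self):
        self.active = False
        self.position = (0.0, 0.0)

    def reset(self, x, y):
        self.active = True
        self.position = (x, y)


class ObjectPool:
    """Pre-allocates objects up front so gameplay code never triggers
    allocation (and therefore GC pressure) in the hot loop."""
    def __init__(self, factory, size):
        self._free = [factory() for _ in range(size)]

    def acquire(self, *args):
        obj = self._free.pop() if self._free else None
        if obj is not None:
            obj.reset(*args)
        return obj  # None signals pool exhaustion; the caller decides policy

    def release(self, obj):
        obj.active = False
        self._free.append(obj)


pool = ObjectPool(Bullet, size=2)
b1 = pool.acquire(1.0, 2.0)
b2 = pool.acquire(3.0, 4.0)
assert pool.acquire(0.0, 0.0) is None  # pool exhausted, no new allocation
pool.release(b1)
b3 = pool.acquire(5.0, 6.0)
assert b3 is b1  # the same object is recycled, not reallocated
```

The fixed pool size is the point: fragmentation and collection pauses come from churn, and a pool turns churn into reuse.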
The goal of this session is to demonstrate techniques that improve GPU scalability when rendering complex scenes. This is achieved through a modular design that separates the scene graph representation from the rendering backend. We will explain how the modules in this pipeline are designed and give insights into implementation details, which leverage the GPU's compute capabilities for scene graph processing. Our modules cover topics such as shader generation for improved parameter management, synchronizing updates between the scene graph and rendering backend, as well as efficient data structures inside the renderer.
Video here: http://on-demand.gputechconf.com/gtc/2013/video/S3032-Advanced-Scenegraph-Rendering-Pipeline.mp4
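The separation the abstract describes can be sketched in miniature: a scene-graph traversal that knows nothing about the rendering backend emits a flat list of draw commands for the backend to sort, batch, and submit. This is an illustrative Python sketch of the design idea, not the talk's pipeline (2D offsets stand in for full transforms, and all names are invented):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Scene-graph node: a local transform (just a 2D offset for brevity),
    an optional mesh id, and children."""
    offset: tuple = (0.0, 0.0)
    mesh: str | None = None
    children: list = field(default_factory=list)

@dataclass(frozen=True)
class DrawCommand:
    """Flat, backend-agnostic unit of work produced by traversal."""
    mesh: str
    world_offset: tuple

def flatten(node, parent=(0.0, 0.0)):
    """Traverse the graph once, accumulating world transforms and emitting
    a flat command list that any rendering backend can consume."""
    world = (parent[0] + node.offset[0], parent[1] + node.offset[1])
    commands = []
    if node.mesh is not None:
        commands.append(DrawCommand(node.mesh, world))
    for child in node.children:
        commands.extend(flatten(child, world))
    return commands

scene = Node(children=[
    Node(offset=(1.0, 0.0), mesh="tank",
         children=[Node(offset=(0.0, 2.0), mesh="turret")]),
])
cmds = flatten(scene)
assert [c.mesh for c in cmds] == ["tank", "turret"]
assert cmds[1].world_offset == (1.0, 2.0)  # turret inherits the tank's offset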
UE4 4.19 added a setting that can improve input latency.
https://docs.unrealengine.com/ja/Platforms/LowLatencyFrameSyncing/index.html
Because we were often asked what this setting actually does, we have put together a short document about it. Understanding it requires knowing how the threads run in parallel, so the preliminary explanation is a little long, but we hope you find it useful.
(Epic Games Japan Support Manager 篠山範明)
How to build a build pipeline for your Unreal Engine 4 game, along with iteration advice and best practices.
Originally presented in Poznan, Poland for GIC 19.
Presented by Ken Kuwano (Epic Games Japan)
This slide is a translation of the presentation material from the "UE4 Localization Deep Dive" on October 31, 2019.
Speed up your asset imports for big projects - Unite Copenhagen 2019 (Unity Technologies)
The release of the new Asset Database provides a solid foundation for further speeding up asset imports. In these slides, you'll learn about the improvements in the way Unity handles very large projects. You'll also discover some of the upcoming features directed towards making the asset pipeline more extensible and ensuring project stability.
Speaker:
Jonas Drewsen - Unity
Watch the session on YouTube: https://youtu.be/VF-Qe-0zXlc
There are a lot of articles about games. Most cover a particular aspect of a game, like rendering or physics. All engines, however, have a binding structure that ties every aspect of the game together. Usually there is a base class (Object, Actor, and Entity are common names) that all objects in the game derive from, but very little is written on the subject; only recently have a couple of game-tech talks briefly touched on it. Still, choosing the structure to build your game on is very important. The end user might not “see” the difference between a good and a bad structure, but the choice affects many aspects of the development process. A good structure reduces risk and increases the efficiency of the team.
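As a toy illustration of such a binding structure, here is a minimal Entity base class with pluggable components. This is a generic sketch of the pattern the paragraph describes, not any particular engine's design (all names are invented):

```python
class Entity:
    """Minimal base class in the Object/Actor/Entity tradition: every game
    object shares an id, a component lookup, and an update hook."""
    _next_id = 0

    def __init__(self):
        self.id = Entity._next_id
        Entity._next_id += 1
        self.components = {}

    def add(self, component):
        """Attach a component, keyed by its type, and return self for chaining."""
        self.components[type(component)] = component
        return self

    def get(self, component_type):
        return self.components.get(component_type)

    def update(self, dt):
        """Drive every attached component once per frame."""
        for component in self.components.values():
            component.update(self, dt)


class Physics:
    """Example component: simple 1D velocity integration."""
    def __init__(self, vx=0.0):
        self.x, self.vx = 0.0, vx

    def update(self, entity, dt):
        self.x += self.vx * dt


player = Entity().add(Physics(vx=2.0))
player.update(0.5)
assert player.get(Physics).x == 1.0
```

The design choice the paragraph hints at is visible even here: because behavior lives in components rather than a deep inheritance tree, adding a capability never forces changes to the shared base class.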
OpenGL 4.4 provides new features for accelerating scenes with many objects, which are typically found in professional visualization markets. This talk will provide details on the usage of the features and their effect on real-life models. Furthermore we will showcase how more work for rendering a scene can be off-loaded to the GPU, such as efficient occlusion culling or matrix calculations.
Video presentation here: http://on-demand.gputechconf.com/gtc/2014/video/S4379-opengl-44-scene-rendering-techniques.mp4
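The talk runs its culling on the GPU; purely to illustrate the test being offloaded, here is a CPU-side Python sketch of a standard conservative AABB-vs-frustum-plane check (the plane encoding and names are our own, not from the talk):

```python
def aabb_in_frustum(box_min, box_max, planes):
    """Conservative AABB culling test. Each plane is (nx, ny, nz, d) with the
    normal pointing into the visible half-space; a box is culled only if it
    lies fully outside some plane."""
    for nx, ny, nz, d in planes:
        # Pick the box corner farthest along the plane normal (the
        # "positive vertex"); if even that corner is outside, the whole
        # box is outside this plane.
        px = box_max[0] if nx >= 0 else box_min[0]
        py = box_max[1] if ny >= 0 else box_min[1]
        pz = box_max[2] if nz >= 0 else box_min[2]
        if nx * px + ny * py + nz * pz + d < 0:
            return False
    return True

# A single plane keeping the half-space x >= 0 (normal +x, d = 0):
planes = [(1.0, 0.0, 0.0, 0.0)]
assert aabb_in_frustum((-2, 0, 0), (-1, 1, 1), planes) is False  # fully left
assert aabb_in_frustum((-1, 0, 0), (1, 1, 1), planes) is True    # straddles
```

The per-box work is a handful of independent dot products, which is exactly why it maps well to the GPU compute approach the session describes.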
Bill explains some of the ways that the Vertex Shader can be used to improve performance by taking a fast path through the Vertex Shader rather than generating vertices with other parts of the pipeline in this AMD technology presentation from the 2014 Game Developers Conference in San Francisco March 17-21. Check out more technical presentations at http://developer.amd.com/resources/documentation-articles/conference-presentations/
Ever wondered how to use modern OpenGL in a way that radically reduces driver overhead? Then this talk is for you.
John McDonald and Cass Everitt gave this talk at Steam Dev Days in Seattle on Jan 16, 2014.
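One classic driver-overhead reduction in this area is sorting draw calls by render state and merging runs that share state into batches, so the driver sees far fewer state changes and submissions. A hedged, illustrative Python sketch of the idea (the talk's actual techniques are OpenGL-specific; these dictionaries and names are invented):

```python
from itertools import groupby

def batch_draws(draws):
    """Sort draw calls by a (shader, texture) state key, then merge runs that
    share state into a single batch, minimizing driver state changes."""
    key = lambda d: (d["shader"], d["texture"])
    batches = []
    for state, group in groupby(sorted(draws, key=key), key=key):
        batches.append({"state": state, "meshes": [d["mesh"] for d in group]})
    return batches

draws = [
    {"shader": "lit", "texture": "rock",  "mesh": "m1"},
    {"shader": "lit", "texture": "grass", "mesh": "m2"},
    {"shader": "lit", "texture": "rock",  "mesh": "m3"},
]
batches = batch_draws(draws)
assert len(batches) == 2  # three naive submissions collapse to two batches
```

Note that `groupby` only merges adjacent equal keys, which is why the sort must come first.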
[Ubisoft] Perforce Integration in a AAA Game Engine (Perforce)
In 2004, Ubisoft built a very tight integration between the Assassin's Creed game engine and Perforce. It could store massive game assets and support hundreds of people working on the same project in different locations. This talk focuses on the challenges they faced implementing a scalable, robust, and deeply integrated solution with Perforce.
Reproducibility in artificial intelligence (Carlos Toxtli)
In this presentation, we explore how artificial intelligence experiments can be reproduced by implementing three different approaches such as: Reproducibility frameworks, Reproducible benchmarking tools, and Reproducible standalone methods.
Workstations powered by Intel can play a vital role in CPU-intensive AI devel... (Principled Technologies)
In three AI development workflows, Intel processor-powered workstations delivered strong performance, without using their GPUs, making them a good choice for this part of the AI process
Conclusion
We executed three AI development workflows on tower workstations and mobile workstations from three vendors, with each workflow utilizing only the Intel CPU cores, and found that these platforms were suitable for carrying out various AI tasks. For two of the workflows, we learned that completing the tasks on the tower workstations took roughly half as much time as on the mobile workstations. This supports the idea that the tower workstations would be appropriate for a development environment for more complex models with a greater volume of data and that the mobile workstations would be well-suited for data scientists fine-tuning simpler models. In the third workflow, we explored tower workstation performance with different precision levels and learned that using 16-bit floating point precision allowed the workstations to execute the workflow in less time and also reduced memory usage dramatically. For all three AI workflows we executed, we consider the time the workstations needed to complete the tasks to be acceptable, and believe that these workstations can be appropriate, cost-effective choices for these kinds of activities.
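The memory half of that third-workflow result is easy to see in isolation: half-precision floats take two bytes instead of four. A quick stdlib-only Python illustration of the footprint difference (not the study's workloads; just the storage arithmetic):

```python
import struct

values = [0.5, 1.25, -2.0, 3.75]

# Pack the same values at 32-bit and 16-bit float precision.
fp32 = struct.pack(f"{len(values)}f", *values)  # 4 bytes per value
fp16 = struct.pack(f"{len(values)}e", *values)  # 2 bytes per value

assert len(fp32) == 4 * len(values)
assert len(fp16) == 2 * len(values)  # exactly half the memory

# These particular values are exactly representable in half precision, so
# they round-trip; in general fp16 trades precision for footprint.
assert list(struct.unpack(f"{len(values)}e", fp16)) == values
```

The speedup side of the result comes from moving half as many bytes (and, on hardware with native fp16 paths, from wider SIMD), which is why reduced precision often helps even when compute is not the bottleneck.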
Complex applications need a persistent database to store, search, and join data. Until now, a dedicated server was needed for this, and offline use of the app was impossible. With the introduction of HTML5 and the concept of Web Databases, we no longer need an external server: everything is stored within the user's browser, so the web app can be used offline as well as online.
Strategies and Tips for Building Enterprise Drupal Applications - PNWDS 2013 (Mack Hardy)
Mack Hardy, Dave Tarc, Damien Norris of Affinity Bridge presenting at Pacific Northwest Drupal Summit in Vancouver, October 5th, 2013. The presentation walks through management of releases, deployment strategies and build strategies with drupal features, git, and make files. Performance and caching is also covered, as well as specific tips and tricks for configuring apache and managing private files.
How to Organize Game Developers With Different Planning Needs (Perforce)
Different skills have different needs when it comes to planning. For a coder it may make perfect sense to plan work in two-week sprints, but for an artist, an asset may take longer than two weeks to complete.
How do you allow different skills to plan the way that works best for them? Some studios may choose to open up for full flexibility: do whatever you like! But that tends to cause alignment issues and silos of data, resulting in a loss of vision. Lost vision in the sense that the work becomes difficult to understand, but also, and maybe more importantly, the risk of losing the vision of what the game will be.
With the right approach, however, you can avoid these obstacles. Join backlog expert Johan Karlsson to learn:
-The balance of team autonomy and alignment.
-How to use the product backlog to align the project vision.
-How to use tools to support the flexibility you need.
Looking for a planning and backlog tool? You can try Hansoft for free.
Regulatory Traceability: How to Maintain Compliance, Quality, and Cost Effic... (Perforce)
How do regulations impact your product requirements? How do you ensure that you identify all the needed requirements changes to meet these regulations?
Ideally, your regulations should live alongside your product requirements, so you can trace among each related item. Getting to that point can be quite an undertaking, however. Ultimately you want a process that:
-Saves money
-Ensures quality
-Avoids fines
If you want help achieving these goals, this webinar is for you. Watch Tom Totenberg, Senior Solutions Engineer for Helix ALM, show you:
-How to import a regulation document into Helix ALM.
-How to link to requirements.
-How to automate impact analysis from regulatory updates.
Efficient Security Development and Testing Using Dynamic and Static Code Anal... (Perforce)
Be sure to register for a demo, if you would like to see how Klocwork can help ensure that your code is secure, reliable, and compliant.
https://www.perforce.com/products/klocwork/live-demo
If it’s not documented, it didn’t happen.
When it comes to compliance, if you’re doing the work, you need to prove it. That means having well-documented SOPs (standard operating procedures) in place for all your regulated workflows.
It also means logging your efforts to enforce these SOPs. These logs show that you took appropriate action in any number of scenarios, whether related to regulations, change requests, terminating an employee, filing an HR complaint, or anything else that needs a structured workflow.
But when do you need to do this, and how do you go about it?
In this webinar, Tom Totenberg, our Helix ALM senior solutions engineer, clarifies workflow enforcement SOPs, along with a walkthrough of how Perforce manages GDPR (General Data Protection Regulation) requests. He’ll cover:
-What are SOPs?
-Why is it important to have this documentation?
-Example: walking through our internal Perforce GDPR process.
-What to beware of.
-Building the workflow in ALM.
How to Do Code Reviews at Massive Scale For DevOps (Perforce)
Code review is a critical part of your build process. And when you do code review right, you can streamline your build process and achieve DevOps.
Most code review tools work great when you have a team of 10 developers. But what happens when you need to scale code review to 1,000s of developers? Many will struggle. But you don’t need to.
Join our experts Johan Karlsson and Robert Cowham for a 30-minute webinar. You’ll learn:
-The problems with scaling code review from 10s to 100s to 1,000s of developers along with other dimensions of scale (files, reviews, size).
-The solutions for dealing with all dimensions of scale.
-How to utilize Helix Swarm at massive scale.
Ready to scale code review and streamline your build process? Get started with Helix Swarm, a code review tool for Helix Core.
By now many of us have had plenty of time to clean and tidy up our homes. But have you given your product backlog and task tracking software as much attention?
To keep your digital tools organized, it is important to avoid hoarding inefficient processes. By removing the clutter in your product backlog, you can keep your teams focused.
It’s time to spark joy by cleaning up your planning tools!
Join Johan Karlsson — our Agile and backlog expert — to learn how to:
-Apply digital minimalism to your tracking and planning.
-Organize your work by category.
-Motivate teams by transitioning to a cleaner way of working.
Going Remote: Build Up Your Game Dev Team (Perforce)
Everyone’s working remote as a result of the coronavirus (COVID-19). And while game development has always been done with remote teams, there’s a new challenge facing the industry.
Your audience has always been mostly at home – now they may be stuck there. And they want more games to stay happy and entertained.
So, how can you enable your developers to get files and feedback faster to meet this rapidly growing demand?
In this webinar, you’ll learn:
-How to meet the increasing demand.
-Ways to empower your remote teams to build faster.
-Why Helix Core is the best way to maximize productivity.
Plus, we’ll share our favorite games keeping us happy in the midst of a pandemic.
Shift to Remote: How to Manage Your New Workflow (Perforce)
The spread of coronavirus has fundamentally changed the way people work. Companies around the globe are making an abrupt shift in how they manage projects and teams to support their newly remote workers.
Organizing suddenly distributed teams means restructuring more than a standup. To facilitate this transition, teams need to update how they collaborate, manage workloads, and maintain projects.
At Perforce, we are here to help you maintain productivity. Join Johan Karlsson — our Agile expert — to learn how to:
-Keep communication predictable and consistent.
-Increase visibility across teams.
-Organize projects, sprints, Kanban boards and more.
-Empower and support your remote workforce.
Hybrid Development Methodology in a Regulated World (Perforce)
In a regulated industry, collaboration can be vital to building quality products that meet compliance. But when an Agile team and a Waterfall team need to work together, it can feel like mixing oil with water.
If you're used to Agile methods, Waterfall can feel slow and unresponsive. From a Waterfall perspective, pure Agile may lack accountability and direction. Misaligned teams can slow progress, and expose your development to mistakes that undermine compliance.
It's possible to create the best of both worlds so your teams can operate together harmoniously. This is how to develop products quickly, and still make regulators happy.
Join ALM Solutions Engineer Tom Totenberg in this webinar to learn how teams can:
- Operate efficiently with differing methodologies.
- Glean best practices for their tailored hybrid.
- Work together in a single environment.
Watch the webinar, and when you're ready for a tool to help you with the hybrid, know that you can try Helix ALM for free.
Better, Faster, Easier: How to Make Git Really Work in the Enterprise (Perforce)
There are a lot of reasons to love Git. (Git is awesome at what it does.) Let's look at the three major use cases for Git in the enterprise:
1. You work with third party or outsourced development teams.
2. You use open source in your products.
3. You have different workflow needs for different teams.
Making the best of Git can be difficult in an enterprise environment. Trying to manage all the moving parts is like herding cats.
So, how do you optimize your teams’ use of Git — and make it all fit into your vision of the enterprise SDLC?
You’ll learn about:
-The challenges that accompany each use case — third parties, open source code, different workflows.
-Ways to solve these problems.
-How to make Git better, faster, and easier — with Perforce
Easier Requirements Management Using Diagrams In Helix ALM (Perforce)
Sometimes requirements need visuals. Whether it’s a diagram that clarifies an idea or a screenshot to capture information, images can help you manage requirements more efficiently. And that means better quality products shipped faster.
In this webinar, Helix ALM Professional Services Consultant Gerhard Krüger will demonstrate how to use visuals in ALM to improve requirements. Learn how to:
-Share information faster than ever.
-Drag and drop your way to better teamwork.
-Integrate various types of visuals into your requirements.
-Utilize diagram and flowchart software for every need.
-And more!
Immediately apply the information in this webinar for even better requirements management using Helix ALM.
It’s common practice to keep a product backlog as small as possible, probably just 10-20 items. This works for single teams with one Product Owner and perhaps a Scrum Master.
But what if you have 100 Scrum teams managing a complex system of hardware and software components? What do you need to change to manage at such a massive scale?
Join backlog expert Johan Karlsson to learn how to:
-Adapt Agile product backlog practices to manage many backlogs.
-Enhance collaboration across disciplines.
-Leverage backlogs to align teams while giving them flexibility.
Achieving Software Safety, Security, and Reliability Part 3: What Does the Fu... (Perforce)
In Part 3, we will look at what the future might hold for embedded programming languages and development tools. And, we will look at the future for software safety and security standards.
How to Scale With Helix Core and Microsoft Azure (Perforce)
Microsoft Azure helps teams increase their speed, gain flexibility, and save time. Using Helix Core with Azure maximizes cloud benefits: you can scale to meet both current and future deployment demands. And this powerful combination helps secure your most valuable IP assets.
So, where do you start? What do you need to set up your teams for success? How can you expedite your pipelines to deliver ahead of your competitors?
Join Chuck Gehman from Perforce to learn more about:
-Compute, storage, and security options from Azure.
-Strategies that boost your cloud investment.
-Tips to secure your data.
-Best practices for global deployments.
Achieving Software Safety, Security, and Reliability Part 2 (Perforce)
In Part 2, we will focus on the automotive industry, as it leads the way in enforcing safety, security, and reliability standards as well as best practices for software development. We will then examine how other industries could adopt similar practices.
Modernizing an application's architecture is often a necessary multi-year project. The goal: stabilize code, detangle dependencies, and adopt a toolset that ignites innovation.
Moving your monolith repository to a microservices/component based development model might be on trend. But is it right for you?
Before you break up with anything, it is vital to assess your needs and existing environment to construct the right plan. This can minimize business risks and maximize your development potential.
Join Tom Tyler and Chuck Gehman to learn more about:
-Why you need to plan your move with the right approach.
-How to reduce risk when refactoring your monolithic repository.
-What you need to consider before migrating code.
Achieving Software Safety, Security, and Reliability Part 1: Common Industry ... (Perforce)
In part one of our three-part webinar series, we examine common software development challenges, review the safety and security standards adopted by different industries, and examine the best practices that can be applied to any software development team.
The features you’ve been waiting for! Helix ALM’s latest update expands usability and functionality to bring solid improvements to your processes.
Watch Helix ALM Senior Product Manager Paula Rome demonstrate how new features:
-Simplify workflows.
-Expand report analysis.
-Boost productivity in the Helix ALM web client.
All this and MORE packed into an exciting 30 minutes! Get inspired. Be extraordinary with the new Helix ALM.
Companies that track requirements, create traceability matrices, and complete audits - especially for compliance - run into many problems using only Word and Excel to accomplish these tasks.
Most notably, manual processes leave employees vulnerable to making costly mistakes and wasting valuable time.
These outdated tracking procedures rob organizations of benefiting from four keys to productivity and efficiency:
-Automation
-Collaboration
-Visibility
-Traceability
However, modern application lifecycle management (ALM) tools solve all of these problems, linking and organizing information into a single source of truth that is instantly auditable.
Gerhard Krüger, senior consultant for Helix ALM, explains how the right software supports these fundamentals, generating improvements that save time and money.
5 Ways to Accelerate Standards Compliance with Static Code Analysis (Perforce)
In mission- and safety-critical industries, static code analysis (SCA) is key to facilitating the development of robust and reliable software - yet, according to VDC Research, only 27% of embedded developers report using SCA tools on their current project.
Why is adoption low and what can you do to deploy SCA effectively?
Join Walter Capitani (Rogue Wave Software) and Christopher Rommel (VDC Research) as they review the results of the latest VDC Research paper on the trends, techniques, and best practices for standards compliance within embedded software teams. You will learn what organizations like yours are doing now and how to prepare for future challenges by:
-Understanding trends for standards compliance in 2018
-Identifying common challenges for automotive, medical, industrial automation, and other types of applications
-Learning best practices for achieving compliance using different tools, techniques, and processes
After attending this webinar, you'll be better prepared to plan and execute a standards compliance program for your team and maximize the effectiveness of static code analysis.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to address climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint: a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
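One standard way to minimize the number of tests, as the talk suggests, is pairwise ("all-pairs") testing: cover every pair of parameter values at least once instead of running the full cartesian product. A greedy, purely illustrative Python sketch (the parameter names are invented):

```python
from itertools import combinations, product

def pairwise_tests(parameters):
    """Greedy all-pairs reduction: repeatedly pick the candidate test case
    covering the most still-uncovered value pairs, until every pair of
    values across every two parameters is covered at least once."""
    names = list(parameters)
    pairs = list(combinations(range(len(names)), 2))
    uncovered = set()
    for a, b in pairs:
        for va, vb in product(parameters[names[a]], parameters[names[b]]):
            uncovered.add((a, va, b, vb))
    all_cases = list(product(*parameters.values()))
    tests = []
    while uncovered:
        best = max(all_cases, key=lambda case: sum(
            (a, case[a], b, case[b]) in uncovered for a, b in pairs))
        for a, b in pairs:
            uncovered.discard((a, best[a], b, best[b]))
        tests.append(best)
    return tests

params = {"os": ["win", "mac"], "browser": ["ff", "chrome"], "lang": ["en", "fi"]}
tests = pairwise_tests(params)
assert len(tests) < 8  # the full product would need 2 * 2 * 2 = 8 runs
```

The saving grows quickly with more parameters, since the full product grows multiplicatively while pairwise coverage grows much more slowly; most field defects are triggered by one or two interacting parameters, which is the rationale for the technique.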
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series, part 6. In this session, we will cover test automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into integrating generative AI with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex ProofsAlex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Elizabeth Buie - Older adults: Are we really designing for our future selves?
[Ubisoft] Perforce Integration in a AAA Game Engine
MERGE 2013 THE PERFORCE CONFERENCE SAN FRANCISCO • APRIL 24−26
Abstract
In 2004, Ubisoft started building the foundation of a new game engine for Assassin's Creed. It built a tight integration between the game engine and Perforce that could store massive amounts of game assets and be scalable and robust enough to support hundreds of people working on the same project in different locations. This white paper discusses the challenges faced during this development and the benefits of having a tight integration with Perforce.
Perforce Integration in an AAA Game Engine
The Strength of the C++ API
Marc Desfossés and Jean-Sébastien Pelletier, Ubisoft
Introduction
Back in 2004, Ubisoft Montreal started building the foundations of Anvil [1], the game engine
behind the Assassin’s Creed games. Since then, the Anvil game engine has been used for the
production of more than a dozen games, and it is without a doubt one of the most important
game engines developed by Ubisoft.
From our experience during the production of previous games, we knew we had to find a better
way to store the massive amounts of game assets for the upcoming generation of consoles.
The solution had to be scalable and robust enough to support the hundreds of people working
on the project with teams spread all over the world. At that time, Ubisoft Montreal was using
Perforce Software Version Management to store its source code, and it was a logical choice to
also use it for the game assets. To achieve our goals, we came up with a very specific
architecture, leveraging the C++ API to fully integrate Perforce into the engine.
In this paper, we will talk about the challenges faced during the development of our in-house
Perforce integration library called Guildlib. We will also discuss the architecture, the technical
choices that were made, and the benefits we get by having such a tight integration.
History Must Not Repeat Itself
Part of the engine team behind Assassin’s Creed previously worked on the game Prince of
Persia: The Sands of Time. During the production of Prince of Persia, we had serious
problems with our source control integration. We had data corruption on a weekly basis and
people were literally waiting their turn to submit data assets. The system was unreliable and
endangered the delivery of the game. Because committing local changes could take up to 30
minutes, our source control was clearly a bottleneck for the production. Precious hours were
spent waiting and sometimes data was lost and had to be redone. It was a nightmare for build
engineers, and artists were afraid they might lose their work every time they had to submit.
Obviously, we did not want history to repeat itself. One of the first things we decided when
Assassin’s Creed kicked off was that we would invest serious development time in improving
the way our data assets were managed. Our first thought was to refactor the existing solution
and fix the main corruption issues. It soon became obvious that no matter how much effort we
would invest in a custom solution, we could not compete with the stability, reliability, and the
performance Perforce provides. We decided to use Perforce, not only to store our game assets
but also to fully integrate it into our engine. This approach allowed us to keep the engine’s
custom file system and have full control over the functionalities exposed to the user. It turned
out to be a very good decision because the C++ API allowed us to achieve our vision and meet
our performance expectations.
[1] http://en.wikipedia.org/wiki/Anvil_(game_engine)
Architecture
Overview
The Anvil engine has a multi-tier architecture. Anvil (the engine’s official public name) is the
presentation layer. Scimitar is the engine itself and Guildlib is the data exchange layer.
Guildlib’s main purpose is to create the bridge among the core engine, the file system, and
Perforce, and it is the subject of this white paper.
Figure 1 presents a simplified diagram of the engine’s architecture.
Figure 1: Overview of the Anvil engine’s architecture
The Bigfile: Data Storage in the Engine
In Anvil, all files are stored locally in a virtual file system called a Bigfile. The Bigfile consists of
a single file container for millions of data files. The Bigfile also embeds a database to store the
information about game objects. There are several advantages in having a Bigfile on a game
production:
1. Speed: Because all data is in a single file, we avoid opening/closing thousands of files
when performing operations on the files (edit, load, sync).
2. We can copy a Bigfile from our network instead of synching the files directly from
Perforce. This can be an order of magnitude faster than synching millions of files from
Perforce. (Some sync comparisons appear later in this paper.)
3. We have more control over the types of operations that can be performed on the files
because all file modifications have to be made using our custom tools. This also means
that operations on the Perforce client view must also go through our tools. This lets us
implement features such as ghosted files that are synced only at their first read attempt.
(We discuss custom file statuses later.)
4. Deleting a single Bigfile is much faster than deleting 2.5 million files from the HDD.
5. Users cannot clobber their environment, and this saves us trouble debugging crashes
caused by bad user manipulations.
The Bigfile file system also contains a hierarchy of folders and files like the file systems we are
accustomed to. Each of those files and folders is identified using a unique 64-bit object
identifier that will NEVER change during the object’s lifetime.
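To illustrate the concept (this is a toy in-memory sketch, not Anvil's actual on-disk format or database layout), a minimal Bigfile is a single container plus a FAT keyed by those immutable 64-bit object IDs:

```cpp
#include <cstdint>
#include <map>
#include <stdexcept>
#include <string>
#include <utility>

// Toy sketch of the Bigfile idea: one backing container, a FAT mapping
// each 64-bit object ID to an (offset, size) pair. Class and member
// names here are ours, chosen for illustration only.
class Bigfile {
public:
    void Write(uint64_t id, const std::string& data) {
        fat_[id] = { store_.size(), data.size() };  // record offset and size
        store_ += data;                             // append payload to the container
    }
    std::string Read(uint64_t id) const {
        auto it = fat_.find(id);
        if (it == fat_.end()) throw std::runtime_error("unknown object id");
        return store_.substr(it->second.first, it->second.second);
    }
private:
    std::string store_;                                  // single container for all data
    std::map<uint64_t, std::pair<size_t, size_t>> fat_;  // id -> (offset, size)
};
```

Because every object lives inside one container, opening, syncing, or deleting millions of files becomes operations on a single file plus an in-memory table.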
The Perforce Depot Hierarchy
For performance reasons and to be able to simplify some Perforce operations such as rename
and move (more on this later), we came up with a special mapping between the files in the
Bigfile and the files in the Perforce depot.
Our files are stored in the selected depot branch using the following path:

//<depot>/<branch>/<a|d|f>/<bytes 5-8>/<byte 4>/<byte 3>/<complete object id>.<a|d|f>

Where a = attribute file, d = data file, f = folder

Some examples:

//ac3/main/d/3/00/01/300012ce1.d  - 33 chars - ID 0x300012ce1
//ac3/main/d/0/ef/00/ef005844.d   - 32 chars - ID 0xef005844
//ac3/main/a/3/00/01/300012ce1.a  - 33 chars - ID 0x300012ce1
//ac3/main/a/0/ef/00/ef005844.a   - 32 chars - ID 0xef005844
//ac3/main/f/0/00/04/43d40.f      - 29 chars - ID 0x43d40
//ac3/main/f/1/00/01/10001409d.f  - 33 chars - ID 0x10001409d
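Decoded from the examples, the scheme slices the 64-bit object ID into directory levels: bytes 5-8, then byte 4, then byte 3, then the full ID as the file name. A minimal C++ sketch (the function name is ours, not Guildlib's):

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Build the depot path for an object ID as described in the mapping above.
// kind is 'a' (attribute), 'd' (data), or 'f' (folder).
std::string DepotPath(const std::string& depot, const std::string& branch,
                      char kind, uint64_t id)
{
    char buf[96];
    std::snprintf(buf, sizeof(buf), "//%s/%s/%c/%llx/%02llx/%02llx/%llx.%c",
                  depot.c_str(), branch.c_str(), kind,
                  (unsigned long long)(id >> 32),           // bytes 5-8
                  (unsigned long long)((id >> 24) & 0xff),  // byte 4
                  (unsigned long long)((id >> 16) & 0xff),  // byte 3
                  (unsigned long long)id, kind);            // complete object id
    return buf;
}
```

Spreading files across sub-directories derived from the ID keeps any single depot directory from holding millions of entries.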
Figure 2 shows how the mapping between the Bigfile and Perforce is done. The file texture.psd
consists of a single file in the Bigfile. In the Perforce depot, it corresponds to a pair of an
attribute and a data file. The data file contains the raw data and can be a very large file
(hundreds of megabytes for Photoshop files). The attribute file is used to store some metadata
information about the file such as its filename, the parent folder, and its thumbnail. It is meant
to be a small file, no bigger than a few kilobytes.
Figure 2: Mapping data from the Perforce shared repository to Bigfile
In our Perforce integration, the folders are under revision control. For each folder, we store all
its associated metadata in a file called the directory attribute file. This file contains pairs of
key/values for storing directory information such as the owner directory and the directory name
associated with the folder in our Bigfile.
Having some metadata associated with each file in a different file (attribute file) allows us to
move files, rename files, update the thumbnails, or add new metadata without having to
resubmit the data file. This is particularly useful for large files such as textures or 3D models.
Moving a file is only a matter of editing its Perforce attribute file, modifying the owner directory,
updating our Bigfile FAT content, and then submitting the attribute file.
This also means that moving a data file/folder around in the Bigfile doesn’t move the file in
Perforce and is instead a simple edit operation in our tool. This feature is a huge benefit for us
because branching can be confusing for users. The workflow is totally different from what
happens when you rename or move a source code file with P4Win or P4V.
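This "move is just a metadata edit" idea can be sketched as follows; the key names and function are hypothetical, since the paper does not document the attribute file's exact layout:

```cpp
#include <cstdint>
#include <map>
#include <string>

// The attribute file modeled as key/value pairs. Key names below are
// illustrative assumptions, not the actual Guildlib schema.
using AttributeFile = std::map<std::string, std::string>;

// Moving or renaming a file touches only the small attribute file;
// the (possibly huge) data file is never resubmitted.
void MoveObject(AttributeFile& attrs, uint64_t newParentFolderId,
                const std::string& newName)
{
    attrs["ownerDirectory"] = std::to_string(newParentFolderId);
    attrs["fileName"] = newName;
    // In the real workflow: update the Bigfile FAT, then submit only
    // the edited attribute file to Perforce.
}
```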
In addition, the file //<depot name>/<branch name>/config/datacontrol.ini stores the data
control configuration, which can be dynamically modified at any time. This
file is constantly watched and can be used to force users to upgrade their version or have
someone as their changelist approver for a submit. This is useful when we are close to the end
of certain milestones.
Guildlib Attributes Versus Perforce Attributes
Someone might wonder why we haven’t used the Perforce attributes feature to store our
metadata instead of our own attributes files. First, when we started this project in 2004, this
feature was just becoming available in a very limited form for thumbnails and was an
undocumented feature. Second, some functionality that we would need in order to keep our
existing logic was missing, and is still missing, from today's implementation of p4 attribute.
In 2011, we investigated the possibility of migrating to the Perforce native attributes for two main
reasons:
• To reduce the amount of files we store in the database
• To have a clean fix for the atomicity problem in our engine. Both the data and attribute file
should be in sync all of the time, which complicates the code.
To be able to re-implement our metadata storage using the Perforce attributes, we would need
the following additions:
• Be able to sync attributes along with files
• Be able to submit changes to attributes without having to submit the associated file
• Be able to diff attributes
Data Synchronization
Because our Perforce depot tree does not match the Bigfile hierarchy as explained earlier, our
synchronization process had to be adjusted to deal with the concept of attribute files and
directory files. (Our directories are under revision control.) The general idea of the data
synchronization process is to reconstruct the folder tree on disk and then write the pairs of
data/attribute simultaneously in our Bigfile.
Data synchronization involves several steps that we’ll summarize here:
1. We first execute a sync preview and keep what will be synched in in-memory data
structures.
2. Synchronization of directory attributes: The attribute files are not “stored” in the Bigfile;
instead we read and interpret all the directory attributes and write them on disk. The
folder hierarchy is therefore reconstructed in the Bigfile.
3. Synchronization of file attributes (also buffered in memory): They are interpreted later,
either when receiving the associated data file or at the end of the sync if only the
attribute file was received (for instance, if the file was moved in a different Bigfile folder).
4. Perform file ghosting/unghosting (more on this later) using p4 flush commands on
selected data files. These flushes affect the next step.
5. Synchronization of data files. When a file is completely received (we buffer it completely
in memory for speed), we update the Bigfile. When updating a file in the Bigfile, we
check if we’ve received an attribute file and we interpret it at that time and apply all the
attributes when we update the data file (that update is ACID [2]).
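Step 3's buffering of file attributes until the matching data file arrives (or until the end of the sync, for attribute-only changes such as moves) can be sketched like this; the structure and names are a simplification, not Guildlib's actual code:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Sketch of buffering attribute files during a sync. Attributes are
// applied when the matching data file is received, or flushed at the
// end of the sync if only the attribute file changed.
struct SyncBuffer {
    std::map<uint64_t, std::string> pendingAttrs;  // attribute files seen so far
    std::map<uint64_t, std::string> applied;       // id -> attributes applied

    void OnAttributeFile(uint64_t id, const std::string& attrs) {
        pendingAttrs[id] = attrs;                  // buffer; interpret later
    }
    void OnDataFile(uint64_t id) {
        auto it = pendingAttrs.find(id);
        if (it != pendingAttrs.end()) {            // apply attrs with the data update
            applied[id] = it->second;
            pendingAttrs.erase(it);
        }
    }
    void OnSyncEnd() {                             // attribute-only updates (e.g. moves)
        for (auto& kv : pendingAttrs) applied[kv.first] = kv.second;
        pendingAttrs.clear();
    }
};
```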
Client View Synchronization
In our Perforce integration, there is a central concept: the Bigfile is the master. Because we are
able to copy a Bigfile from the network or to a colleague, we need to be able to synchronize
the content of that Bigfile with its associated Perforce client on our server. This is what we call
the client view synchronization. In order to do that, we store source control metadata in the
Bigfile for each file contained in the Bigfile. This metadata includes information such as:
• Revision numbers for data, attribute, and directory files
[2] http://en.wikipedia.org/wiki/ACID
• Source control actions for the data and attribute files: Edit, Delete, Add, and
Custom statuses (more on these later)
We also keep some global metadata information in the Bigfile metadata such as the last
synchronized changelist, the server address, server depot branch associated with the Bigfile,
and some guids to know if we need to synchronize the Bigfile with its associated Perforce
client.
When we open a Bigfile, we first compute a unique client view name from several criteria such
as the Bigfile path, depot branch, and machine’s name. We then examine the client view and
the Bigfile to know if we need to synchronize the Bigfile with the client view. One of the things
we examine is a guid stored in the Windows registry as well as in the Bigfile. This guid is used
to indicate if a crash has occurred during a Perforce operation (e.g., a revert operation).
Depending on the value of the guid, we will decide if the existing client view must be
resynchronized or not with the Bigfile.
We then run several commands such as fstat, revert -k, flush, delete, edit, and add to replicate
on the client view the exact state of each file as stored in the Bigfile. This synchronization
process is incremental to minimize execution time and uses the last synchronized
changelist number to accelerate the process when possible.
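The per-file decision can be sketched as a small dispatch function; the actual rules in Guildlib are richer than this illustration, and the enum and signature are ours:

```cpp
#include <string>

// The action recorded for a file in the Bigfile's source control metadata.
enum class BigfileAction { None, Edit, Add, Delete };

// Pick the p4 command that makes the client view match the Bigfile's
// recorded state for one file; "" means already in sync.
std::string ReconcileCommand(BigfileAction wanted, bool openedOnClient,
                             int bigfileRev, int clientHaveRev)
{
    if (bigfileRev != clientHaveRev) return "flush";  // fix have-list without file I/O
    if (wanted == BigfileAction::None && openedOnClient) return "revert -k";
    if (!openedOnClient) {
        switch (wanted) {
            case BigfileAction::Edit:   return "edit";
            case BigfileAction::Add:    return "add";
            case BigfileAction::Delete: return "delete";
            default: break;
        }
    }
    return "";  // client view already matches the Bigfile
}
```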
Using the C++ API
Central to our Perforce integration, there is a very important C++ class that we’ve called
PerforceBigFileWrapper. This class derives from the FileSys [3] class in the Perforce C++ API.
We use this class in the Perforce API to implement file system access. Normally, users of the
Perforce API don’t have to overload this class to use the API, but in our case this class is a key
to our integration with the game engine.
In this class we’ve intercepted all the virtual functions to redirect all I/O to our Bigfile. For
instance, no directories or files are created in the file system when synching files because
we’ve intercepted those calls.
We’ve found that the Perforce C++ API is quite easy to use and very powerful. Because it is
callback based using virtual methods, this gives us a lot of flexibility about how to process a
command’s output via custom handling of error messages, text output, or the dictionaries of
key-values.
The fact that the C++ API is callback-based is especially good for our solution because we
have millions of files in the source control; if each command buffered its results in containers, it
could amount to GBs of memory being consumed by dictionaries or text output. We have more
control this way and can decide when and what we need to buffer in memory after a command
runs.
We’ve seen several advantages with this type of file system integration:
• By being able to intercept file access, we were able to implement our attribute
files, which are not stored directly in the Bigfile but are interpreted at sync time and
dynamically reconstructed when submitting or diffing files. This would have been
impossible without full control on the file system operations.
[3] http://www.perforce.com/perforce/doc.current/manuals/p4api/02_clientprog.html#1050939
• When syncing, we dynamically serialize (using a generic serializer) the content of the
data files we’re receiving and then index the data into a database used to know the
relations between game objects. This is all done asynchronously using the raw
synchronization buffers from the data synchronization. We also update the content of
the Bigfile using that buffer. Without that deep integration, indexing our files would have been
less efficient because we would need to read back the synced files from the
disk to be able to analyze them.
That deep integration has some disadvantages:
• Some of the behaviors of the virtual functions available in the FileSys class are not very
well documented, and the interaction between the virtual functions and the Perforce
commands is not always clear. For example, some Perforce commands will call the
Stat() virtual method to verify whether a file exists. But because in our case this is not
the Windows file system, we must sometimes return a fixed value depending on
what kind of command is executed. For example, when reverting files, Stat() is called to
verify if a file exists; we’ve found that in some cases it can be faster to always return 0
to avoid the creation of a temporary file. Therefore, to help make those choices, we have
access in our FileSys object to what kind of operation is being run (sync, revert, revert
unchanged, etc.). This information is transmitted to the FileSys object at creation by the
UI object invoking the current Perforce command.
• We need to keep up with new Perforce features. For example, we’ve done most of the
work to integrate the shelving in our workflow, but some work is still needed before
being able to push the feature in production. However, this is not critical because we
have an import/export feature similar to p4tar [4].
• When upgrading the server, we must ensure that everything is behaving as it should.
We highly suggest that you lock the client protocol to a specific version.
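The per-command Stat() behavior described in the first point can be sketched as follows; the operation names and the decision rule are illustrative, not Guildlib's actual logic:

```cpp
// Sketch: the FileSys object is told which Perforce operation is running
// (set at creation by the UI object invoking the command) and adjusts
// what Stat() reports accordingly.
enum class P4Op { Sync, Revert, RevertUnchanged, Submit };

// Returns what Stat() should report: 0 = "does not exist".
int StatFor(P4Op op, bool existsInBigfile)
{
    if (op == P4Op::Revert) return 0;   // always "missing": avoids temp-file creation
    return existsInBigfile ? 1 : 0;     // otherwise answer from the Bigfile's view
}
```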
Implementing a complete virtual file system similar to what we’ve implemented is several
thousand lines of C++ code. It took a few months for two programmers to have the basics in
place and then this has been an on-going work in progress for the past eight years. In the last
eight years, we estimate that at least four to five person-years have been invested in this
project. It is a lot of work, but we think it was worth it. It certainly is a major contribution to our
engine’s success.
Challenges
When designing Guildlib, we wanted to make sure it could meet our high expectations in terms
of scalability, user experience, and stability. We wanted a solution that could evolve over the
years with the engine and support the constantly growing needs of the gaming industry.
Scalability
In the early 2000s, the production teams at Ubisoft were relatively small. A group of 80
talented programmers and artists working together in the same studio for two to three years
was a good recipe to create AAA games such as Splinter Cell or Prince of Persia: The Sands
[4] http://public.perforce.com/wiki/P4tar
of Time. At the time, we thought that these were big teams and we were reaching the limits of
our source control solutions. The arrival of the next console generation (Xbox 360 and Sony
PS3) combined with the open world trend had a direct impact on the teams and data sizes.
The switch to Perforce was made to address those growing needs. We were confident that
Perforce could support:
• Hundreds of GB of data
• Millions of files
• Hundreds of users
Figure 3 shows the amount of code and data submits in the last year on ACIII.
Figure 3: Code and data submits in the last year on ACIII
Stability
It was important for us to keep the stability Perforce already provides. The entire goal of this
integration was to make sure we did not end up with corrupted data as we had with our previous SCC
integrations. This was a challenge, however, because our integration modifies the way we
perform files operations on the disk. In short, re-implementing core functionalities with the
Perforce API such as I/O operations (described in the “Using the C++ API” section earlier)
meant that we were risking losing the existing stability of the Perforce clients.
User Experience
Our first experience with Perforce came with the P4Win client, which programmers at Ubisoft
really liked. However, P4Win was very basic and not particularly user friendly for
non-programmers; artists were reluctant to use it. We chose to rewrite the main parts of the user interface in order
to expose only the Perforce commands and statuses that were relevant for our engine. The
basic Perforce concepts such as changelists and opened files were kept, but some more
advanced features such as branching are hidden from common users.
Visual Diff of Binary Files
One of the nice features of our integration is the Visual Diff window (see Figure 4). It is
basically a diff tool similar to P4Merge but with the ability to diff the game objects. This is
especially useful for data validation prior to submission.
Figure 4: Diffing game objects
Automatic Open for Edit
Having full control over the files allows us to automatically open files for edit and deal with
clobbered files more easily. We can also see the file commands available for each file (see
Figure 5).
Figure 5: The integration allows for full control over file actions
Dependencies Viewer
The dependency viewer lets us see dependencies between all files (see Figure 6). We can
also see our source control file status displayed in the window.
Figure 6: The dependency viewer shows file dependencies and file status
Figure 7: Custom revision history control
Submit Assistant
The submit assistant is our implementation of a submit dialog (see Figure 8). The assistant
also integrates with JIRA and performs many custom client-side data validations to ensure data
consistency prior to submission.
Figure 8: JIRA integration
Performance
During the development of Guildlib, we always kept performance in mind. It was important for
us that the end user did not suffer from the millions of files and gigabytes of data required for
building the game.
Some operations are costly, especially when working from a client in a remote location. For
instance, the client synchronization required when a user starts from a fresh Bigfile or after a
crash requires running a p4 flush command. This command is very inefficient because it calls
the API callback after each file. To work around this problem, we came up with our own command
proxy that executes the command on the site where the server sits and sends output to the
remote client. This is much more efficient because the output is sent only once in a
compressed format without any latency occurring for each of the files to process. However,
recent testing showed that on a very recent Perforce server, we could run a p4 flush -q and it
won’t call the callback if the client is not running with tagged output.
Benefits
Faster Sync Operations
It was not planned or expected, but synchronizing millions of attribute files in memory and saving them in batch turned out to be very efficient performance-wise. Our sync operations are faster in Guildlib than they are in P4V or any other Perforce client that writes the files directly to disk (see Figure 9).
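The batching idea can be illustrated with a minimal sketch (this is not Ubisoft's actual code; the record layout and function names are hypothetical): instead of one open/write/close cycle per attribute file, records are accumulated in memory and written to disk in a single operation.

```cpp
// Sketch (hypothetical): per-file writes vs. one batched write of
// in-memory attribute records accumulated during a sync.
#include <cstdio>
#include <string>
#include <vector>

struct AttrRecord {
    std::string path;     // relative path of the attribute file
    std::string payload;  // serialized attributes for one file
};

// Conventional client behavior: one file open/write/close per record.
void writePerFile(const std::vector<AttrRecord>& records, const char* dir) {
    for (const AttrRecord& r : records) {
        std::string out = std::string(dir) + "/" + r.path;
        FILE* f = std::fopen(out.c_str(), "wb");
        if (!f) continue;
        std::fwrite(r.payload.data(), 1, r.payload.size(), f);
        std::fclose(f);
    }
}

// Batched approach: serialize everything in memory, then write once.
void writeBatched(const std::vector<AttrRecord>& records, const char* file) {
    std::string buffer;
    for (const AttrRecord& r : records) {
        buffer += r.path;
        buffer += '\n';
        buffer += r.payload;
        buffer += '\n';
    }
    FILE* f = std::fopen(file, "wb");
    if (!f) return;
    std::fwrite(buffer.data(), 1, buffer.size(), f);
    std::fclose(f);
}
```

With millions of small attribute files, the batched variant replaces millions of filesystem round trips with one large sequential write, which is consistent with the measurements in Figure 9.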
Client: HP Z420 with 12 threads (6 cores), 32 GB RAM, 2 TB HDD, and Intel 520 SSD (480 GB)
Total Data Size (MB): 15 147
Number of Files (Sample): 328 093

HDD
Sync Type   Time (Best)   Time (secs)   Speed (MB/sec)
FakeP4      00:03:04      184           82.32
BigFile     00:03:28      208           72.82
P4          00:13:38      818           18.52
P4V         00:08:12      492           30.79
P4Win       00:18:27      1107          13.68

SSD
Sync Type   Time (Best)   Time (secs)   Speed (MB/sec)
FakeP4      00:03:04      184           82.32
BigFile     00:03:24      204           74.25
P4          00:09:59      599           25.29
P4V         00:06:07      367           41.27
P4Win       00:13:42      822           18.43
Figure 9: Sync operations are faster in Guildlib than P4, P4V, or P4Win
The FakeP4 line is for a custom replacement of p4.exe that we made to replace all file operations with no-ops. This means that the times from this executable are the best theoretical ones, limited only by server and network speed.
As you can see, the sync times with our Bigfile implementation are excellent and very close to the theoretical speed. Our implementation is also largely unaffected by the type of drive, because we got almost the same time with an SSD as with an HDD. All official clients take much more time than our client implementation on both disk types, although SSDs give them much better sync times.
Custom File Statuses
Ghosted Files: Sync-on-Demand of Large Assets
The number of files we have in each depot branch is enormous: millions of files amounting to more than 100 GB. Syncing all of those files takes time and a lot of disk space, and some types of files are useless to 99 percent of our users.
For example, a texture is updated in Photoshop by opening a .psd file. Those files are wrapped in our Bigfile; when a user wants to edit a texture, it is extracted from the Bigfile, stored on disk, and edited in Photoshop. When the user is finished editing, the texture is reimported into the Bigfile and converted to formats usable by the engine. The engine never loads .psd files directly.
Those files are really big; some of the .psd files exceed 500 MB, and many exceed 100 MB. To save disk space, we had to come up with a creative way to avoid syncing them.
Those files are stored in our Bigfile but are only useful to the artists editing those textures, so we sync them on demand from Perforce when a user tries to access one. This also means that when syncing the associated data files, we need a way to skip them. Because we synchronize our file attributes first, we can determine which files correspond to the types we would like to ghost. Once we've identified those files, we perform a p4 flush //filepath/…@changelist on all the data files we want to exclude from the sync operation.
Then, when we sync the data files, the ghost files are skipped because we ran a p4 flush on them with the revision found in the sync preview. We also store that revision number in the associated FAT file entry for the ghost file in the Bigfile.
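The decision logic can be sketched as follows (a simplification with hypothetical names, not the production code): after the attribute sync we know each data file's type, so files of ghostable types get a p4 flush at the previewed revision instead of a real sync.

```cpp
// Sketch (hypothetical names): choosing which data files to "ghost".
// Files covered by a flush are skipped by the subsequent data sync.
#include <string>
#include <vector>

struct DepotFile {
    std::string path;      // depot path of the data file
    std::string type;      // logical type, known from the synced attributes
    int         revision;  // revision reported by the sync preview
};

bool isGhostType(const std::string& type) {
    // Types only texture artists ever need locally (e.g. .psd sources).
    return type == "psd";
}

// Builds the flush commands to run before the real data sync.
std::vector<std::string> buildFlushCommands(const std::vector<DepotFile>& files) {
    std::vector<std::string> cmds;
    for (const DepotFile& f : files) {
        if (isGhostType(f.type)) {
            cmds.push_back("p4 flush " + f.path + "#" + std::to_string(f.revision));
        }
    }
    return cmds;
}
```

In the real system the flushed revision is also recorded in the Bigfile's FAT entry, so the file can be fetched at exactly that revision when a user later opens it.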
Results
On AC3, someone trying to sync all the data including ghost files would have to sync 1,018,231 data files for a total size of 147.15 GB. With our implementation of ghost files, a sync from scratch results in a Bigfile of 19.4 GB, a huge saving in both size and sync time.
Hijacked Files: Our Own Implementation of Clobbered Files
When we started deploying our Perforce integration, we supported multiple checkouts on attribute files (along with auto-merge of attributes) as well as exclusive locking for data files (that exclusive locking is enforced in our tool set). It didn't take long before we got reports from users who needed to modify files locally for testing even when someone else had the files locked.
What we came up with is a mode we call hijack. A hijacked file acts as a kind of source control operation, but it is local to the Bigfile and completely independent of Perforce. This lets users modify or delete files locally and mark them as such in their Bigfile. They can then, at any moment, revert their local change or put it in a changelist (provided nobody else has the file checked out or has submitted a new version of the file).
When users sync from Perforce, they generally revert all their hijacked files before proceeding
with the sync.
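A minimal state sketch of the hijack lifecycle (hypothetical and simplified; the real Bigfile bookkeeping is more involved) looks like this:

```cpp
// Sketch (hypothetical): a hijacked file is tracked purely in the local
// Bigfile, independent of Perforce. It can later be reverted, or
// promoted to a regular checkout if nobody else holds a lock and no
// newer head revision has been submitted.
#include <string>

enum class LocalState { Synced, Hijacked, CheckedOut };

struct BigfileEntry {
    std::string path;
    LocalState  state;
};

void hijack(BigfileEntry& e) { e.state = LocalState::Hijacked; }

void revertHijack(BigfileEntry& e) {
    // Restores the Bigfile copy from the have revision (omitted here).
    e.state = LocalState::Synced;
}

// Promote the local change to a real checkout; fails under the same
// conditions described above (locked elsewhere, or newer head revision).
bool promoteToChangelist(BigfileEntry& e, bool lockedByOther, bool newerHead) {
    if (e.state != LocalState::Hijacked || lockedByOther || newerHead)
        return false;
    e.state = LocalState::CheckedOut;
    return true;
}
```

The key design point is that hijacking never touches Perforce state at all, which is exactly why it works even while another user holds an exclusive lock.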
Perforce Metadata and Large Client Specs
Database Optimisations
A few months ago the instance holding the data for the Assassin’s Creed brand had a total .db
size exceeding 700 GB (with about 92 percent in db.have). This prompted us to think about
how we could reduce the total size of our database files.
Figure 10 shows the Perforce databases size in 2012.
Figure 10: Perforce databases in 2012 exceeded 700 GB
After some thinking and simulations, we ended up changing a lot of things to reduce the length
of the file paths stored in the databases.
Until a few months ago, our files were stored in the following form:

//assassin3-data/main/data/hi/3/00/01/2c/0000000300012ce1.scc_dat    - 66 chars
//assassin3-data/main/data/ef/00/58/ef005844.scc_dat                 - 53 chars
//assassin3-data/main/attr/hi/3/00/01/2c/0000000300012ce1.scc_att    - 66 chars
//assassin3-data/main/attr/ef/00/58/ef005847.scc_att                 - 53 chars
//assassin3-data/main/dir/00/04/3d/00043d40.scc_dir                  - 52 chars
//assassin3-data/main/dir/hi/1/00/01/40/000000010001409d.scc_dir     - 65 chars
//assassin3-data/main/data/hi/ffff/aa/01/2c/0000ffffaa012ce1.scc_dat - 70 chars - WORST CASE
We’ve changed that to this shorter form:

//ac3/main/d/3/00/01/300012ce1.d        - 33 chars
//ac3/main/d/0/ef/00/ef005844.d         - 32 chars
//ac3/main/a/3/00/01/300012ce1.a        - 33 chars
//ac3/main/a/0/ef/00/ef005847.a         - 32 chars
//ac3/main/f/0/00/04/43d40.f            - 29 chars
//ac3/main/f/1/00/01/10001409d.f        - 33 chars
//ac3/main/d/ffff/aa/01/ffffaa012ce1.d  - 36 chars - WORST CASE
Optimization 1: Shorter Depot Names
We’ve renamed the depot //assassin3-data to a shorter //ac3. This saves 12 characters per
file.
Optimization 2: Renamed Root Directories and Extensions
We’ve renamed the root directories from data, attr, and dir to d, a, and f. We’ve also renamed the file extensions from .scc_dat, .scc_att, and .scc_dir to .d, .a, and .f. The savings are nine characters for data and attribute files and eight for folder files.
Optimization 3: Client Name Optimization
Also, we’ve changed the name of our client views from something like scimitar-dataserver-ac3-mtl-wks-ag650-1 (40 chars) to a crc32 of several criteria merged together. To save even more characters, the name is then converted using a scheme similar to base64. Our client names now range from four to eight characters, with an average of six.
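A sketch of the idea (not the exact production scheme; the alphabet and hashing details are assumptions) is to hash the concatenated criteria with CRC-32 and re-encode the 32-bit value with a 64-character alphabet, so it fits in at most six characters:

```cpp
// Sketch (assumed scheme): derive a short, deterministic client name
// from a CRC-32 of the concatenated criteria, re-encoded 6 bits per
// character with a base64-like alphabet.
#include <cstdint>
#include <string>

uint32_t crc32(const std::string& s) {
    uint32_t crc = 0xFFFFFFFFu;
    for (unsigned char c : s) {
        crc ^= c;
        for (int i = 0; i < 8; ++i)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}

std::string shortClientName(const std::string& criteria) {
    static const char alphabet[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
    uint32_t v = crc32(criteria);
    std::string name;
    do {
        name += alphabet[v & 63u];  // consume 6 bits per character
        v >>= 6;
    } while (v != 0);
    return name;
}
```

Because the name is derived deterministically from the criteria, the tool can recompute a workspace's client name at any time without storing a mapping table.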
Optimization 4: Omitting the Depot Path from the Client View Mapping
We now omit the depot path from the right part of the client view mapping because we only
map one branch at a time with those clients and client views are created by our tool.
Before (no optimization, 53 characters per file):
//ac3/main/...
//scimitar-dataserver-ac3-mtl-wks-ag650-1/ac3/main/...

After optimization 3 (21 characters per file):
//ac3/main/...
//7cc82a5f/ac3/main/...

After optimizations 3 and 4 (13 characters per file):
//ac3/main/...
//c7cc82a5f/...
In this specific case, this reduces the client path stored for each file referenced by db.have from 53 to 13 characters. Given that we have more than 2 million files mapped in each client view, this quickly amounts to enormous savings.
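A back-of-the-envelope check of that claim, assuming one byte per character in db.have:

```cpp
// Rough estimate of db.have savings from shorter client paths:
// bytes saved = files per client view * (chars before - chars after).
#include <cstdint>

uint64_t havePathSavings(uint64_t files, uint64_t charsBefore, uint64_t charsAfter) {
    return files * (charsBefore - charsAfter);
}
```

With the figures above (2,000,000 files, 53 characters down to 13), that is 80,000,000 bytes, roughly 76 MB of path data saved in db.have per client view, multiplied across every active workspace on the server.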
Effect on Database Size
With all these changes, we’ve reduced the total size of our db.have by 55 percent (size after optimizations: 550 GB) with a freshly recovered checkpoint.
Effect on Performance
We’ve measured the time to run a flush command on instances with and without our changes. p4 flush //…@now is now 19 percent faster, and p4 flush //…#0 is now 23 percent faster. These results are similar to those in a paper by Michael Shields.5
5 http://www.perforce.com/user-conferences/2009/repository-structure-considerations-performance
Conclusion
The development of Guildlib using Perforce and its C++ API has been an ongoing task for the past eight years. The initial problems we encountered with our previous integration, such as data corruption, are long gone.
Even though it required a lot of work, our architecture and design choices allowed the game
productions to scale to a level we did not even anticipate. The server performance did not
suffer much and we never lost precious data.
The decision to store our game assets in Perforce was a good one. Having Perforce
embedded in the engine allowed us to meet our scalability, stability, and performance
requirements. The Perforce C++ API is well designed and allowed us to realize our vision.