Dedupe-Centric Storage for General Applications by EMC
Proliferation and preservation of many versions and copies of data drives much of the tremendous data growth most companies are experiencing. IT administrators are left to deal with the consequences. Because deduplication addresses one of the key elements of data growth, it should be at the heart of any data management strategy. This paper demonstrates how storage systems can be built with a deduplication engine powerful enough to become a platform that general applications can leverage.
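The abstract's premise is chunk-level deduplication: when many versions and copies share data, identical chunks are stored only once. As a minimal, hypothetical sketch of the idea (fixed-size chunks keyed by SHA-256; not EMC's actual engine, which is far more sophisticated):

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical chunks are kept only once."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}   # SHA-256 digest -> chunk bytes (unique chunks only)
        self.files = {}    # filename -> ordered list of chunk digests

    def write(self, name, data):
        digests = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # store each unique chunk once
            digests.append(digest)
        self.files[name] = digests

    def read(self, name):
        # A file is just its recipe: the concatenation of its chunks.
        return b"".join(self.chunks[d] for d in self.files[name])

store = DedupStore()
store.write("v1", b"A" * 8192)                      # original version
store.write("v2", b"A" * 8192 + b"B" * 4096)        # new version sharing old data
assert len(store.chunks) == 2                       # 20 KB logical, 8 KB stored
```

A production engine would use variable-size (content-defined) chunking instead of fixed offsets, so that an insertion near the start of a file does not shift, and thereby invalidate, every subsequent chunk boundary.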
Following on from the new company strategy, we will take a look at the priorities for the Perforce development team, sharing the product roadmap for the next 12 months and the recent updates made to help Helix continue to meet the demands of all our global customers.
Streams in Parallel Development by Sven Erik Knop, Perforce
Perforce introduced Streams in 2011. Since then, Streams have been adopted by the majority of new Perforce customers for all projects and by many existing customers for new projects. This is a brief overview of Streams and a deep dive into newer features that can help you with parallel and component-based development.
Single Source of Truth in a Distributed World by Sven Erik Knop, Perforce
Sven Erik Knop, Senior Technical Specialist at Perforce, discusses the huge difference between Perforce's "Single Source of Truth" for globally distributed teams and a traditional centralised solution. Learn why the "Single Source of Truth" is important, what its requirements are, and how to make it globally accessible and available.
Committing to a company-wide software change is no small feat, but if you’re already sweating at the mere thought of checking code in and out, it’s time to plan your escape route.
So, break free and join Tom Tyler, Senior Consultant at Perforce and in-house ClearCase specialist, to map out:
- Baseline-and-branch vs. detail history import strategies
- Porting
- Integrations for defect trackers, training, and tooling
- Cutover strategies
Managing Large Flask Applications On Google App Engine (GAE) by Emmanuel Olowosulu
There are a number of issues production applications need to solve to be scalable and fault tolerant. In this talk, we explore some tips for efficiently running Python apps, particularly with Flask, on App Engine. We also share some collective experience and best practices on GAE.
Presented at the Richmond SQL Server Users Group, this presentation provides an introduction to Continuous Delivery principles, practices, and tools, and how to apply them to SQL Server databases. It also presents a case study and lessons learned from adopting these practices on a real project.
This paper, written by David Reine, an IT analyst for The Clipper Group, highlights the new features, capabilities, and benefits of IBM's SAN Volume Controller, announced on October 20, 2009. Virtualization is at the center of all 21st Century IT systems, yet many CIOs fail to fully understand all of the benefits it can deliver to the data center operation. When we think of virtualization, we think compute, network, and storage, and we mostly think about driving up utilization on each. Storage controllers have always offered the ability to carve out pieces of real storage from a large pool and deliver them efficiently to a number of hosts, but it is storage virtualization itself that offers improvements that drive operational efficiency. IBM has been quietly addressing storage virtualization with SAN Volume Controller (SVC) for the last six years, building up a significant technical lead in this space.
Strange but true: most infrastructure architectures are deliberately designed from the outset to need little or no change over their lifetimes. There are two main reasons for this:
1. Change often means outages and customer impact and must be avoided
2. Budgets are set at the beginning of a project and getting more cash later is tough
Typically, then, applications are configured with all of the storage capacity they need to support the wildest dreams of their business sponsors (and then some extra is added for contingency by IT). Equally, storage is always configured with the performance level (storage tier) set to cope with the wildest transactional dreams of the business sponsor (and, guess what, IT generally adds a bit more for good measure).
No wonder storage is now one of the largest cost components involved in delivering and running a business application.
How to Organize Game Developers With Different Planning Needs by Perforce
Different skills have different needs when it comes to planning. For a coder it may make perfect sense to plan work in two-week sprints, but for an artist, an asset may take longer than two weeks to complete.
How do you allow different skills to plan in the way that works best for them? Some studios may choose to open up for flexibility: do whatever you like! But that tends to cause issues with alignment and silos of data, resulting in loss of vision. Lost vision in the sense that the project becomes difficult to understand, but also, and maybe more importantly, the risk of losing the vision of what the game will be.
With the right approach, however, you can avoid these obstacles. Join backlog expert Johan Karlsson to learn:
-The balance of team autonomy and alignment.
-How to use the product backlog to align the project vision.
-How to use tools to support the flexibility you need.
Looking for a planning and backlog tool? You can try Hansoft for free.
Regulatory Traceability: How to Maintain Compliance, Quality, and Cost Effic... by Perforce
How do regulations impact your product requirements? How do you ensure that you identify all the needed requirements changes to meet these regulations?
Ideally, your regulations should live alongside your product requirements, so you can trace between related items. Getting to that point can be quite an undertaking, however. Ultimately you want a process that:
-Saves money
-Ensures quality
-Avoids fines
If you want help achieving these goals, this webinar is for you. Watch Tom Totenberg, Senior Solutions Engineer for Helix ALM, show you:
-How to import a regulation document into Helix ALM.
-How to link to requirements.
-How to automate impact analysis from regulatory updates.
More Related Content
Similar to Extending Perforce Scalability Using Job Content Synchronization
Efficient Security Development and Testing Using Dynamic and Static Code Anal... by Perforce
Be sure to register for a demo, if you would like to see how Klocwork can help ensure that your code is secure, reliable, and compliant.
https://www.perforce.com/products/klocwork/live-demo
If it’s not documented, it didn’t happen.
When it comes to compliance, if you’re doing the work, you need to prove it. That means having well-documented SOPs (standard operating procedures) in place for all your regulated workflows.
It also means logging your efforts to enforce these SOPs. These logs show that you took appropriate action in any number of scenarios, whether related to regulations, change requests, the firing of an employee, the logging of an HR complaint, or anything else that needs a structured workflow.
But when do you need to do this, and how do you go about it?
In this webinar, Tom Totenberg, our Helix ALM senior solutions engineer, clarifies workflow enforcement SOPs, along with a walkthrough of how Perforce manages GDPR (General Data Protection Regulation) requests. He’ll cover:
-What are SOPs?
-Why is it important to have this documentation?
-Example: walking through our internal Perforce GDPR process.
-What to beware of.
-Building the workflow in ALM.
Branching Out: How To Automate Your Development Process by Perforce
If you could ship 20% faster, what would it mean for your business? What could you build? Better question, what’s slowing your teams down?
Teams struggle to manage branching and merging. For bigger teams and projects, it gets even more complex. Tracking development using a flowchart, team wiki, or a white board is ineffective. And attempts to automate with complex scripting are costly to maintain.
Remove the bottlenecks and automate your development your way with Perforce Streams, the flexible branching model in Helix Core.
Join Brad Hart, Chief Technology Officer, and Brent Schiestl, Senior Product Manager for Perforce version control, to learn how Streams can:
-Automate and customize development and release processes.
-Easily track and propagate changes across teams.
-Boost end user efficiency while reducing errors and conflicts.
-Support multiple teams, parallel releases, component-based development, and more.
How to Do Code Reviews at Massive Scale For DevOps by Perforce
Code review is a critical part of your build process. And when you do code review right, you can streamline your build process and achieve DevOps.
Most code review tools work great when you have a team of 10 developers. But what happens when you need to scale code review to 1,000s of developers? Many will struggle. But you don’t need to.
Join our experts Johan Karlsson and Robert Cowham for a 30-minute webinar. You’ll learn:
-The problems with scaling code review from 10s to 100s to 1,000s of developers along with other dimensions of scale (files, reviews, size).
-The solutions for dealing with all dimensions of scale.
-How to utilize Helix Swarm at massive scale.
Ready to scale code review and streamline your build process? Get started with Helix Swarm, a code review tool for Helix Core.
By now many of us have had plenty of time to clean and tidy up our homes. But have you given your product backlog and task tracking software as much attention?
To keep your digital tools organized, it is important to avoid hoarding on to inefficient processes. By removing the clutter in your product backlog, you can keep your teams focused.
It’s time to spark joy by cleaning up your planning tools!
Join Johan Karlsson — our Agile and backlog expert — to learn how to:
-Apply digital minimalism to your tracking and planning.
-Organize your work by category.
-Motivate teams by transitioning to a cleaner way of working.
TRY HANSOFT FREE
Going Remote: Build Up Your Game Dev Team by Perforce
Everyone’s working remote as a result of the coronavirus (COVID-19). And while game development has always been done with remote teams, there’s a new challenge facing the industry.
Your audience has always been mostly at home – now they may be stuck there. And they want more games to stay happy and entertained.
So, how can you enable your developers to get files and feedback faster to meet this rapidly growing demand?
In this webinar, you’ll learn:
-How to meet the increasing demand.
-Ways to empower your remote teams to build faster.
-Why Helix Core is the best way to maximize productivity.
Plus, we’ll share our favorite games keeping us happy in the midst of a pandemic.
Shift to Remote: How to Manage Your New Workflow by Perforce
The spread of coronavirus has fundamentally changed the way people work. Companies around the globe are making an abrupt shift in how they manage projects and teams to support their newly remote workers.
Organizing suddenly distributed teams means restructuring more than a standup. To facilitate this transition, teams need to update how they collaborate, manage workloads, and maintain projects.
At Perforce, we are here to help you maintain productivity. Join Johan Karlsson — our Agile expert — to learn how to:
-Keep communication predictable and consistent.
-Increase visibility across teams.
-Organize projects, sprints, Kanban boards and more.
-Empower and support your remote workforce.
Hybrid Development Methodology in a Regulated World by Perforce
In a regulated industry, collaboration can be vital to building quality products that meet compliance. But when an Agile team and a Waterfall team need to work together, it can feel like mixing oil with water.
If you're used to Agile methods, Waterfall can feel slow and unresponsive. From a Waterfall perspective, pure Agile may lack accountability and direction. Misaligned teams can slow progress, and expose your development to mistakes that undermine compliance.
It's possible to create the best of both worlds so your teams can operate together harmoniously. This is how to develop products quickly, and still make regulators happy.
Join ALM Solutions Engineer Tom Totenberg in this webinar to learn how teams can:
- Operate efficiently with differing methodologies.
- Glean best practices for their tailored hybrid.
- Work together in a single environment.
Watch the webinar, and when you're ready for a tool to help you with the hybrid, know that you can try Helix ALM for free.
Better, Faster, Easier: How to Make Git Really Work in the Enterprise by Perforce
There are a lot of reasons to love Git. (Git is awesome at what it does.) Let’s look at the 3 major use cases for Git in the enterprise:
1. You work with third party or outsourced development teams.
2. You use open source in your products.
3. You have different workflow needs for different teams.
Making the best of Git can be difficult in an enterprise environment. Trying to manage all the moving parts is like herding cats.
So, how do you optimize your teams’ use of Git — and make it all fit into your vision of the enterprise SDLC?
You’ll learn about:
-The challenges that accompany each use case — third parties, open source code, different workflows.
-Ways to solve these problems.
-How to make Git better, faster, and easier with Perforce.
Easier Requirements Management Using Diagrams In Helix ALM by Perforce
Sometimes requirements need visuals. Whether it’s a diagram that clarifies an idea or a screenshot to capture information, images can help you manage requirements more efficiently. And that means better quality products shipped faster.
In this webinar, Helix ALM Professional Services Consultant Gerhard Krüger will demonstrate how to use visuals in ALM to improve requirements. Learn how to:
-Share information faster than ever.
-Drag and drop your way to better teamwork.
-Integrate various types of visuals into your requirements.
-Utilize diagram and flowchart software for every need.
-And more!
Immediately apply the information in this webinar for even better requirements management using Helix ALM.
It’s common practice to keep a product backlog as small as possible, probably just 10-20 items. This works for single teams with one Product Owner and perhaps a Scrum Master.
But what if you have 100 Scrum teams managing a complex system of hardware and software components? What do you need to change to manage at such a massive scale?
Join backlog expert Johan Karlsson to learn how to:
-Adapt Agile product backlog practices to manage many backlogs.
-Enhance collaboration across disciplines.
-Leverage backlogs to align teams while giving them flexibility.
Achieving Software Safety, Security, and Reliability Part 3: What Does the Fu... by Perforce
In Part 3, we will look at what the future might hold for embedded programming languages and development tools. And, we will look at the future for software safety and security standards.
How to Scale With Helix Core and Microsoft Azure by Perforce
Microsoft Azure helps teams increase their speed, gain flexibility, and save time. Using Helix Core with Azure maximizes cloud benefits. You can scale to meet both current and future deployment demands. And this powerful combination helps secure your most valuable IP assets.
So, where do you start? What do you need to set up your teams for success? How can you expedite your pipelines to deliver ahead of your competitors?
Join Chuck Gehman from Perforce to learn more about:
-Compute, storage, and security options from Azure.
-Strategies that boost your cloud investment.
-Tips to secure your data.
-Best practices for global deployments.
Achieving Software Safety, Security, and Reliability Part 2 by Perforce
In Part 2, we will focus on the automotive industry, as it leads the way in enforcing safety, security, and reliability standards as well as best practices for software development. We will then examine how other industries could adopt similar practices.
Modernizing an application’s architecture is often a necessary multi-year project. The goal: to stabilize code, detangle dependencies, and adopt a toolset that ignites innovation.
Moving your monolithic repository to a microservices/component-based development model might be on trend. But is it right for you?
Before you break up with anything, it is vital to assess your needs and existing environment to construct the right plan. This can minimize business risks and maximize your development potential.
Join Tom Tyler and Chuck Gehman to learn more about:
-Why you need to plan your move with the right approach.
-How to reduce risk when refactoring your monolithic repository.
-What you need to consider before migrating code.
Achieving Software Safety, Security, and Reliability Part 1: Common Industry ... by Perforce
In part one of our three-part webinar series, we examine common software development challenges, review the safety and security standards adopted by different industries, and examine the best practices that can be applied to any software development team.
The features you’ve been waiting for! Helix ALM’s latest update expands usability and functionality to bring solid improvements to your processes.
Watch Helix ALM Senior Product Manager Paula Rome demonstrate how new features:
-Simplify workflows.
-Expand report analysis.
-Boost productivity in the Helix ALM web client.
All this and MORE packed into an exciting 30 minutes! Get inspired. Be extraordinary with the new Helix ALM.
Companies that track requirements, create traceability matrices, and complete audits - especially for compliance - run into many problems using only Word and Excel to accomplish these tasks.
Most notably, manual processes leave employees vulnerable to making costly mistakes and wasting valuable time.
These outdated tracking procedures rob organizations of benefiting from four keys to productivity and efficiency:
-Automation
-Collaboration
-Visibility
-Traceability
However, modern application lifecycle management (ALM) tools solve all of these problems, linking and organizing information into a single source of truth that is instantly auditable.
Gerhard Krüger, senior consultant for Helix ALM, explains how the right software supports these fundamentals, generating improvements that save time and money.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf by Malak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
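Underneath any vector search engine, documents are ranked by embedding similarity rather than keyword overlap. A toy cosine-similarity ranking in plain Python (the catalog and 3-dimensional vectors below are invented for illustration; a real deployment uses learned embeddings and an approximate-nearest-neighbor index such as the one Atlas Vector Search provides):

```python
import math

def cosine(a, b):
    # Cosine similarity: angle between vectors, ignoring their magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical product "embeddings"; semantically similar items sit close together.
docs = {
    "red shirt":   [0.90, 0.10, 0.00],
    "blue jeans":  [0.10, 0.80, 0.10],
    "crimson tee": [0.85, 0.15, 0.05],
}

def search(query_vec, k=2):
    # Rank every document by similarity to the query vector; return the top k.
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

print(search([0.90, 0.10, 0.00]))  # ['red shirt', 'crimson tee']
```

Note that "crimson tee" outranks "blue jeans" despite sharing no keywords with "red shirt"; that context-awareness is what the semantic-search use cases in the talk rely on.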
Pushing the limits of ePRTC: 100ns holdover for 100 days by Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... by DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
UiPath Test Automation using UiPath Test Suite series, part 5 by DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... by James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
How to Get CNIC Information System with Paksim Ga.pptx by danishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Communications Mining Series - Zero to Hero - Session 1 by DianaGray10
This session provides an introduction to UiPath Communications Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communications Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How it can help today’s businesses, and its benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... by Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
A tale of scale & speed: How the US Navy is enabling software delivery from l... by sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Mind map of terminologies used in context of Generative AI
Extending Perforce Scalability Using Job Content Synchronization
Shannon Mann, Research In Motion
December 2010
Abstract
Research In Motion (RIM) has completed an eight-month project to split our main
software depot. A hidden difficulty in splitting a large depot is the impact on the bug
tracking system. Job Content Synchronization (JCS) allows multiple depots to share a
single link with the bug tracking system, eliminating otherwise-necessary changes and
significantly reducing split preparation time.
This paper discusses the split project and the use of JCS to reduce the complexity of
changes to the bug tracking system integration.
Introduction
Research In Motion (RIM) is a world leader in the mobile communications market and
has a history of developing breakthrough wireless solutions. RIM's portfolio of
award-winning products, services and embedded technologies is used by thousands of
organizations around the world and includes the BlackBerry™ wireless platform, the
RIM Wireless Handheld™ product line, software development tools, radio-modems and
software/hardware licensing agreements.
Our largest depot has had a series of performance problems since 2005, and in
response, we've had several projects over the years to continually improve
performance. In 2006, we decided to split the depot; however, code organization at the
time prevented splitting, so other measures were taken that improved performance by
over 10x (see 'Improving Perforce Performance by over 10x', 2008 Perforce European
Users Conference, for details). The decision to split was revisited in March 2010, with
one group at RIM committing to the split, which we completed in November 2010.
History
Perforce has been used at RIM since November 1999. Although application
performance wasn't as spectacular as that of Perforce 2010.2, the early depot did not
suffer many issues. Starting in late 2005, several automated and continuous integration
build systems were implemented. Soon after these new automation systems were
implemented, serious slowdowns appeared. By late 2006, daytime performance was so
poor that a single file change sometimes took two hours. With Perforce guidance, RIM
decided to split the depot, and a project was launched in early 2007 to accomplish this
goal. Investigations during this project showed that source architecture
interconnections at RIM prevented a split at that time. The project team sought
alternatives, and again, Perforce was helpful with some suggested hardware
improvements. These changes dramatically improved Perforce performance; however,
the improvements were only temporary. Increased user base and expanded
automation were guaranteed to hit our performance again.
[Figure: Perforce Timeline Comparison – changelists, sample times, users and log size
over time, each plotted as a percent of its maximum value.]
Graph note: Values for changelists range from 5700-3.5 million, for sample times from .2 seconds to 22 seconds, for users from
1750-5275 and for log size from .4 million-145 million. To graph such diverse ranges, each data point was taken as a fraction of the
largest value for that set and plotted as a percent. This allows relative changes to be compared for diverse data. Plotting using four
separate Y-axes would have produced the same graphs. Data for sample times and users doesn’t exist prior to January 2007.
In March 2010, a large group of RIM stakeholders met with Perforce to discuss whether
to split and, if so, which split approach to use. Discussions confirmed that recent source
architecture changes now allowed a split for some groups. Several alternative split
methods were considered; of those, splitting using a full depot copy and using
protections for separation was finally chosen, and a project to implement it was initiated.
Split Solution Overview
The selected split approach involves four stages:
a. Duplication and separation: Create a duplicate copy of all data. Use protections
table entries to partition each copy into separate, unique depot trees.
b. Clientspec cleanup: On each copy, delete any clientspecs that contain only
depot trees that exist on the opposite depot.
c. Offline obliterates: Run obliterates of the no-longer-needed depot tree on a copy
of each depot and replay the results.
d. Clean up depot files: Find a solution to identify and remove no-longer-needed
archives (still a research topic).
Duplication is done using a SAN flashsnap to copy real-time data to a duplicate disk. At
split time, the depot is brought down, the flashsnap is broken apart and the copy is
mounted on a separate machine. The protections table for each creates a 'virtual'
separation, with each depot having an exclusive portion of the original depot tree.
Although a Perforce administrator can see everything, as far as the users are
concerned, each depot presents an entirely separate depot tree. After the split, user
confusion over where a particular depot sub-tree is located is quickly and permanently
resolved. Moreover, if mistakes are made in the directory placement, a simple
protections table change allows a sub-tree to migrate from one depot to the other.
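As an illustration, the 'virtual' separation amounts to complementary protections tables
over one shared tree. The group and path names below are purely illustrative, not
RIM's actual entries:

```
## Handheld depot protections (sketch): users see only the handheld tree
write group handheld-dev   *   //depot/handheld/...

## Platform depot protections (sketch): users see only the platform tree
write group platform-dev   *   //depot/platform/...
```

Migrating a sub-tree between depots is then just moving its line from one table to the
other.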
Clientspecs that are no longer meaningful (containing only paths that are no longer
visible to the users on that depot) are deleted, leaving each depot with a mostly
separate set of clientspecs. A clientspec will still appear on both depots in the rare
case of a clientspec with paths on both depots. Removing unnecessary clientspecs
helps reduce the size of the db.have table, giving both depots an additional boost in
performance.
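The deletion decision lends itself to scripting. The sketch below is our illustration of
how such a check might look (the paper does not describe RIM's actual tooling); it
marks a clientspec as deletable when none of its depot-side view paths fall under the
trees visible on this depot:

```python
# Sketch: decide which clientspecs are orphaned on a split depot.
# 'view_paths' are the depot-side halves of a clientspec's view lines;
# 'visible_prefixes' are the depot trees this depot's protections expose.

def is_orphaned(view_paths, visible_prefixes):
    """True if no view path maps a tree visible on this depot."""
    return not any(
        path.lstrip("-").startswith(prefix)   # treat exclusion lines as references too
        for path in view_paths
        for prefix in visible_prefixes
    )

# In practice the candidate clients would come from 'p4 clients' and
# 'p4 client -o <name>', with deletable ones removed via 'p4 client -d -f'.
```

Keeping exclusion lines in the check is deliberately conservative: a client that
mentions a visible tree in any form is kept.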
Offline obliterates are done for each new depot. A copy of a depot is created for this
purpose on a non-production machine. On this copy, the obliterate commands are run
to remove the no-longer-needed depot trees, producing a journal that can be replayed
to enact the obliteration in production. Given that obliterates are long and costly
operations, being able to perform them without affecting production is key. Even if the
operation takes weeks to complete on the non-production machine, the result is
equivalent to doing the obliteration on production while the actual impact is minimized.
Note that the depot files associated with the obliterated files are NOT eliminated by this
process.
The last stage, depot cleanup, is still a research topic aimed at reducing our disk
footprint, as not being able to remove no-longer-needed archives makes this split
technique very expensive.
Integration Impacts
Although a split improves performance of each depot by splitting the user load and
doubling the availability of db table locks, what about the tools that integrate with
Perforce? Some tools will only connect to one depot post-split; largely, these tools will
be unaffected by the split, save possible reconfiguration to point to a new depot. Others
are broadly impacted, as they will need to talk to both depots post-split. The split project
team opened discussions with all Perforce-integrating tool owners to help them assess
the impacts the split would have on their processes and tools. Most of these
discussions were quite fruitful and identified the needed changes and potential
impacts in a few short weeks. After six weeks of protracted discussions, however, a
solution for RIM's bug-tracking system, MKS Integrity, was still outstanding. At the time,
the integration existed as three P4DTG-based integrations, although a fourth was being
implemented and a fifth was planned.
Integrity and Perforce are tightly integrated, with Perforce jobs triggering post-submission
testing and release processes. Although each of those integration links
could be duplicated to each split depot, there were performance issues on Integrity and
possibly some on each Perforce depot. As we planned further depot splits, the impacts
would not scale linearly – they would multiply. Worse, in order to make Integrity work
properly, the application would need to be able to 'reassign' an issue between the
depots – we would need to make Integrity 'aware' of multiple depots. As there are
dozens of integrations with Integrity, the integration impact would grow tremendously.
Lastly, items in Integrity automatically create their associated Perforce jobs. As there
would now be multiple depots for this creation, either all depots would get a copy of the
job (increasing the impact again), or the user would have to select the depot in which
the job should appear, usually an impossible task for the user. These issues indicated
that a bug-tracking-centric approach was not the right solution, as it introduced more
problems than it solved – another solution was desperately needed. The issues with
the integration to Integrity put the split effort in serious jeopardy.
A brainstorm session to identify an alternative solution produced an abstraction in which
all activity between the Perforce depots and Integrity was considered one virtual process.
In order to preserve the existing Perforce-to-Integrity integration, a Perforce depot would
have to serve as the middle point: a jobs-only depot. The next step recognized that job
information from the production depots would need to be transferred and kept up to
date by some process. This process became Job Content Synchronization (JCS), which
would handle copying the appropriate information between the production depots and
the jobs-only depot. A trigger would request job information on an as-demanded basis,
and a separate process would keep those jobs up to date. This approach minimized the
amount of job data that would need to be maintained, as only jobs used by users would
be copied and synced to each depot.
Additional discussions suggested that JCS would be best implemented using P4DTG as
the updating engine; each production depot would act as the source control system and
the jobs depot would act as the defect tracking system. This approach keeps
application-domain information within the application suite. In late May 2010, a
description of the proposed JCS solution was sent to Perforce Support to get their input
as to whether this was a good solution, whether alterations were needed or whether a
different solution would be best.
After two weeks, Perforce Support requested a meeting to discuss the proposed JCS
solution. During the meeting they explained that, much to our surprise and pleasure,
they had spent the previous two weeks developing a reference implementation. To the
basic process they added one further piece: P4Broker, to redirect 'p4 jobs' commands
to the jobs depot, making the whole system invisible to the user. With that alteration,
the showstopper problem had found its linchpin solution.
The Rest of the Recipe
In order to support the needs of JCS and the various impacted tools and integrations,
further facilities were needed:
a) P4Auth – supports users connecting to multiple depots with one login
b) P4Change – makes changelist numbers unique amongst connected depots
c) P4Jobs – the interconnecting depot separating JCS from the bug tracking system
d) P4Brokers – needed to redirect all ‘p4 jobs’ commands to the P4Jobs depot
e) Changelist to depot mapping DB – maps a changelist to a particular depot
These extra facilities add significant complexity to the Perforce environment; however,
the benefits have been many. Adding new depots to the environment involves simple
configuration taking minutes. Adding more bug-tracking instances (and even different
types of bug-trackers) is also straightforward and supportable. Moreover, all additions
involve linear growth at all levels – only the specific connections add complexity,
keeping growth permanently linear, a vast improvement over the solutions initially
considered.
Pre-Implementation Testing
In order to test the integrations and the split protections, two duplicate copies were
created on a test machine with test versions of the different protections tables, complete
with broker, P4Auth, P4Change and P4Jobs depots. A test P4Jobs-to-Integrity
integration was created, and the JCS links between the split depots and P4Jobs were
also created. This environment allowed our users and tool developers to test the split
protections and to check how well their tools handled the split. As well, our build release
team was able to test many builds against the new protections, since the changed
protections might well split a build across both depots.
The protections changes were done using a tagged protections table. Each line in the
protections table was tagged with either 'both', 'ENT' or 'HH'. A split script converted
the tagged file into separate protections tables ready for loading. A separate script
loaded the generated protections directly into the test depots. Three protections table
versions were generated. The first version was a complete copy of everything –
essentially the tagged file with the tags removed. The second version was the lines
tagged 'both' and 'HH', for the 'handheld' depot; the third version was the lines
tagged 'both' and 'ENT', for the 'platform' depot (originally named the 'enterprise' depot).
The first version was compared to the production protections when incorporating recent
'live' changes into the test protections while preserving order. Adding those differences
to the tagged file and then rebuilding and re-comparing to the original gave a difficult
merge process a straightforward (but tedious) solution. The main advantage of the
merge process was that the output proved the merge correct every time, the primary
concern. Moreover, a few changes to the split script allowed it to generate a web page
that was instantly updated with each change, allowing users to see where a directory
was destined to appear. Changes that users required were quickly added, with the web
page confirming the move at the same time. Post-split, the same scripts were used to
migrate paths that for various reasons ended up on the wrong depot. In this way, users
had incorrect paths resolved very quickly, most in the first 24 hours post-split and all
shortly after contacting us.
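The split step itself can be sketched in a few lines. The input format below (tag and
protections line separated by whitespace) is our assumption based on the paper's
description, not RIM's actual script:

```python
# Sketch of the tagged-protections split: one input file, three output tables.
# Each input line looks like "HH  write group dev * //depot/handheld/...",
# where the first field is the tag ('both', 'ENT' or 'HH') and the
# remainder is an ordinary protections line.

def split_tagged(tagged_lines):
    """Return (full, handheld, platform) protections tables."""
    full, handheld, platform = [], [], []
    for raw in tagged_lines:
        tag, _, line = raw.strip().partition(" ")
        line = line.lstrip()
        full.append(line)                 # version 1: everything, tags stripped
        if tag in ("both", "HH"):
            handheld.append(line)         # version 2: 'both' + 'HH' lines
        if tag in ("both", "ENT"):
            platform.append(line)         # version 3: 'both' + 'ENT' lines
    return full, handheld, platform
```

Because version 1 is just the input minus its tags, diffing it against the live production
table provides exactly the merge check described above.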
The test environment ran from July through November. Db file and depot file contents
were updated in late August and early October. Protections updates from production
happened irregularly given the constraints of testing. Each protections update involved
between 50 and 300 changes all done by hand by the process noted above.
The test environment and process allowed us to greatly reduce post-split issues,
especially with release build systems. Although the effort was great, the results make
clear that not making it would have been disastrous. Many issues were caught months
before the split that would have had great impact on ship dates and on development
process quality and sanity.
Split Rollout
Given the complexity of the split and the potential for failure, the supporting additions
were implemented early. In effect, the problems potentially encountered were removed
from the actual split, reducing risk. As well, by implementing early, these additions were
further tested in isolation from the actual split, removing possible conflicting reasons for
failure at split time. P4Broker was implemented in late September 2010 with P4Auth,
P4Change and P4Jobs depots implemented two weeks later.
The split was scheduled for late November 2010 after months of negotiating around
release schedules and readiness for the split. The split took two hours to complete, with
the actual split process taking 30 minutes. In the seven days following the split, 46
messages indicating problems with the split were received from a community exceeding
5000 users. Two-thirds of the messages were due to user confusion as to where their
code now resided. One-sixth of the messages were requests to move a path to the
opposite depot and the remaining sixth were real issues requiring correction post split.
Overall, the split has been massively successful with minimal impact, a testament to
careful planning and plenty of testing.
Post-Split Benefits
Perforce performance has improved post split, as expected. The Platform depot has
excellent performance while the Handheld depot has generally improved performance
with some performance issues.
[Chart: Software depot performance – before split]
[Chart: Handheld depot performance – after split]
As the charts show, our performance metrics in the weeks leading to the split show
significant noise and noticeable area under the graph – indications of performance
issues. After the split, Handheld depot performance is much better. The Platform depot
graph is not included as it is a dead-flat line hardly distinguishable from the X-axis.
New depot additions now take minutes to add. Since we’ve split, we’ve added two
depots to the environment, the IT depot and Buildstore. The IT depot is a long-time
sibling to the pre-split depot and Buildstore is a binaries-only depot for post-built objects
– a clearing house for build support and external vendor binary drops.
The solution has proven very modular. P4Auth and P4Change support is trivial to add
to an existing depot or a new depot, opening the way for bug-tracking integration with
JCS. Even without using JCS, RIM uses P4Auth and P4Change with an additional
acquisition-related depot, simplifying user support. RIM's integration with Code
Collaborator, a code review tool, was greatly simplified: instead of copying user and
group information into a 'perms' depot, P4Auth provided that facility for free.
P4Broker has provided a method to partially 'pause' Perforce, allowing recovery during
performance spikes. Although not preferred, the 'pause' allows more time to develop
other solutions for the performance issues.
Lastly, the experience has given our support team confidence to continue splitting.
Other splits are planned in the near future (late 2011 or in 2012).
Post-Split Issues
The split approach used by RIM is effective, straightforward to implement and does
give good performance benefits. However, there have been issues. The process RIM
followed is extremely expensive in disk, as we use SAN disk for both the db files and
the depot files. A move to filer is possible for the depot files; however, recent experience
suggests that there may be performance issues with RIM's filer implementation.
The implementation exposed a bug in ‘p4 protects’ that affects the version of Perforce
we used during the split (Perforce 2009.2). The ‘p4 protects’ command always looks at
the local user and group tables, even when P4Auth is in use. RIM used ‘p4 replicate’ to
copy P4Auth tables to the local system, and, for the depot affected, replaced the local
user and group tables with a symlink to the replicated tables. Perforce has since
corrected this bug in Perforce 2010.1 and later.
RIM found a significant memory leak in the broker that threatened to scuttle the split
effort, however, Perforce was able to provide a corrected broker in short order. The
memory leak happens only when command redirection is turned on. The latest broker
has this memory problem corrected.
Lastly, the handheld depot is experiencing some performance issues. The performance
issues are expected as the size of the split was not balanced. Fewer groups could be
migrated than preferred, leaving the handheld depot larger than desired.
Future Directions
As always, Perforce performance is an ongoing issue. User base increases and an
ever-expanding desire for greater build and data mining automation means the load on
Perforce only increases. In 2007, RIM had 1750 licenses and never ran more than 1.5
million commands/day. RIM now has over 6000 licenses and regularly runs more than
4 million commands/day on a single depot/build replica pair. RIM has had peaks of 15
million commands/day during a particularly busy day.
Given user thirst for performance, further splits are likely. RIM is looking at less costly
split solutions using new technology - two solutions are under consideration, perfsplit++
by Perforce and a client-side solution under development at RIM.
Cleanup has continued, both to improve performance by reducing db.have table size
and to reduce disk overhead.
Data mining in Perforce is a major drain on Perforce performance. The P4toDB solution
is being investigated as an alternative, although more investigation of the lack of
permissions support is needed.
A hardware refresh is nearing completion, replacing our Sun X4600 servers with IBM
x3950 X5 servers containing more cores and more memory. Although a hardware
refresh is normally expected to give only mild gains, initial experience suggests
larger-than-normal improvements may be had.
The IO scheduler traditionally used by the DB filesystem is completely fair queuing
(CFQ). Given that our DB files are on an SSD-based SAN, Perforce Support suggested
moving to deadline scheduling. Significant testing confirmed that deadline scheduling is
superior to no-op scheduling and significantly better than CFQ. RIM is now moving to
deadline scheduling. This IO scheduler change should relieve the current problems
associated with read storms.
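On Linux, the scheduler switch is a per-device configuration change; the device name
below is illustrative:

```
# Inspect the current scheduler; the active one is shown in brackets.
cat /sys/block/sda/queue/scheduler
#   noop [cfq] deadline

# Switch the device to deadline scheduling (takes effect immediately).
echo deadline > /sys/block/sda/queue/scheduler

# To persist across reboots, add 'elevator=deadline' to the kernel
# boot parameters.
```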
Lastly, an upgrade to Perforce 2010.2 is underway, with expected completion before
late May. A number of improvements in Perforce 2010.2 are expected to give further
performance improvements, including relief from some bugs found along the way.
As always, further performance improvements are sought, considered and implemented
where possible. Performance improvement is an ongoing and never-ending process.
Lessons Learned
The split was a long arduous journey which originally began in late 2006. Many new
lessons were learned over the last nine months of that journey:
a) Each depot sub-tree in Perforce needs an owner – someone who takes
responsibility for that sub-tree. RIM's split was complicated not just for technical
reasons, but also due to political issues arising from multiple teams' overlapping
sets of files. Tree owners can reduce these conflicts and spread the effort of
deciding where each sub-tree will reside.
b) Early engagement of internal tool owners is critical. RIM's efforts here helped
eliminate most of the post-split trauma by involving the people who could change
the integrating tools.
c) A full copy of the post-split environment for testing is critical. Providing this
environment and keeping it up to date allows testing and alterations to happen
prior to the split, not afterwards in crisis-mode cleanup. Getting people to test their
changes against the test environment was a large challenge, but those efforts
proved very important. Only users and groups who didn't test against the test
system had significant issues post-split.
d) Protections-based splitting [i] allows extreme flexibility during testing and even
after the split. The flexibility and speed of change allow users' desires to be
implemented quickly and errors to be corrected quickly. Using a tagged
protections file as the seed for a protections split process proved very valuable,
as it enabled that flexibility to be exercised quickly. Some system for making
quick changes and corrections will be sorely needed.
e) A system to track users' change requests was needed and was unavailable. The
Perforce split team did not have resources available to develop this facility, so
requests for changes had to be handled by hand. When there were conflicts
between requests, helping the conflicting users to connect proved difficult, as the
first request got buried amongst the significant inflow of email and distanced by
time. A simple web page to track requests would have sufficed but, with resource
constraints, could not be supported. Take time to implement this first.
f) Perforce was critical to the decisions, the initial planning, the correction of two
potentially dangerous problems and the support of all stages of the split process.
Further, their efforts with the JCS solution placed them ahead of everyone,
turning a proposal into a reference solution in a very short time. Before attempting
any significant change, discuss it with them. Perforce has long proven capable,
knowledgeable, concerned and committed to supporting their product, and they
do an admirable job. Seek their wisdom. You won't regret it.
[i] Implicit protections grant broad access and reduce access where necessary through
exclusions. Explicit protections grant only the exact access needed, without requiring
exclusions. Since May 2010, RIM uses explicit protections, as testing showed that
implicit protections were 60 times more expensive. The conversion reduced our
protections expense by 10 times, as we didn't convert all exclusions.
Counterintuitively, our protections table line count expanded almost eight times (526 to
almost 4000 lines). Explicit protections greatly simplify protections-based splitting, as a
simple division of lines properly separates all paths. Implicit protections can still be
used; however, the split depots will perform less well due to the increased exclusions.