"Get a tour of Perforce BTree history, its behaviors and configuration. Learn about performance alternatives, space management tools and future projects, too."
2. Major changes in the P4D Database
Timeline: Berkeley DB 1.8X → DBOpen2 (2001.1) → +Reorg (2005.1) → +Checksums (2008.2) → +LockLess, +64-Bit Ref (2013.3)
Storage behavior and operational needs have changed over time
SSDs and non-disk storage have changed the database world
3. File System Caching is Critical
Each P4D thread/process has only a small in-process cache
The OS cache provides the primary I/O caching
Load up machines with real memory to get good I/O caching
(This does mean archives and processes fight for memory)
Page-size behaviors can be non-obvious
8K-byte pages seem best
SSDs are not a substitute for real memory!
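A quick sanity check that the metadata working set can fit in the OS page cache is to compare the size of the db.* files against installed memory (standard Linux commands; the server root path is a hypothetical example):
  ls -lhS /p4/1/root/db.*   # database tables, largest first
  free -g                   # total vs. cached memory, in GB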
4. Rebuilding for Space and Performance
The SDP (Server Deployment Package) and many other installations reload the DB regularly
Pro: Recover disk space
Pro: Sequential reads are fast
Con: Downtime (can be minimal using offline backups)
Con: Updates can be slow for a while after a rebuild
Con: Space for rebuild
dbopen.freepct (0-99, default 0)
p4d -v dbopen.freepct=10 -jr <checkpoint>
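A minimal offline-rebuild sketch (paths and checkpoint name are hypothetical; rebuilding into a spare root keeps the live server serving until the swap):
  mkdir -p /p4/1/offline_db
  p4d -r /p4/1/offline_db -v dbopen.freepct=10 -jr /p4/1/checkpoints/p4_1.ckp.42
  # verify the rebuilt db.* files, then swap them into the live root during a maintenance window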
5. Passive Reorganization
OS file systems often schedule read-ahead I/O
We want to take advantage of that
Solution: re-write subtrees so they are kept in sequential pages
Slows down some write operations
Can make the DB files larger due to needing contiguous pages
Larger table scans win
Churns flash memory with expensive writes
6. Reorganization Space Usage
Getting sequential pages for a reorganization is hard
The free-page index can quickly find contiguous free-page blocks, but often no such blocks are available
If reorganizations happen too often, tables grow from reorganization while many scattered free pages remain unused!
Summary point: reorganization makes tables larger, with more unused space
7. Is Reorganization Obsolete?
In some cases, we’ve seen that reorganization is not worth the cost: the extra write load can be expensive
Solid-state “disk” makes read-ahead less important
The overhead of larger DB files may be costly on SSD
New lock-free reading speeds up scans and eliminates readers blocking writers, so slower readers are OK
Try turning it off:
db.reorg.disable = 1
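One way to try that (db.reorg.disable is an undocumented tunable, so confirm it exists on your release with p4 help undoc before relying on it):
  p4 configure set db.reorg.disable=1
  p4 configure show db.reorg.disable   # confirm the value took effect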
8. Page Location Choices
The index of free pages allows page allocations to be made near to referencing pages, i.e., we reuse pages near existing related pages
But if newer data is near the end of the db file, we keep using pages near the end of the file
db.page.migrate can be set to a percentage to avoid allocating pages at the end of the file if possible
Foreshadowing (shrinking the db file): if a lot of pages are free at the end of the file, we can truncate!
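An illustrative setting, assuming the same p4 configure mechanism as the other tunables here (the percentage is a made-up example, not a recommendation):
  p4 configure set db.page.migrate=10   # steer allocations away from the tail of the db file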
9. Other configuration
dbopen.cache - in-p4d cache size (number of pages)
dbopen.cache.wide - in-p4d cache size for db.integed
dbopen.nofsync - skip fsync on close of a DB file
dbopen.pagesize - default 8K, related to key size (only useful when tables are created, such as with checkpoint recovery)
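Because dbopen.pagesize only applies when tables are created, it is typically passed on the p4d command line during a checkpoint replay; a sketch combining it with the free-space tunable from slide 4 (values and path are illustrative):
  p4d -r /p4/1/offline_db -v dbopen.pagesize=16K -v dbopen.freepct=10 -jr <checkpoint>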
15. Why not use a DBMS instead of your DB?
Lots of DBMSs provide lots of value
Answer: P4D is a DBMS!
OK, it’s a special-purpose DBMS, not a general one
Tightly integrated
Maps and pattern matching are close to the database
Might be able to use an extensible DBMS to match the functionality
16. Useful References
USENIX FAST ’16 Conference Proceedings: https://www.usenix.org/conference/fast16/technical-sessions
BTree introduction and graphics of splits: http://underpop.online.fr/j/java/algorithims-in-java-1-4/ch16lev1sec3.htm
p4 help undoc | grep db
17. Catch me at the Conference wherever you can to talk!
anton@perforce.com