This document provides guidelines for splitting a large Perforce depot into multiple instances to address issues from growing file and metadata size. It describes using the Perforce tool perfsplit to extract a section of the depot, while ensuring zero downtime and preserving integration history. The process prepares the original instance, uses perfsplit to build the new instance foundation, then converts metadata and verifies the new instance before cleanup and completion.
MERGE 2013: The Perforce Conference, San Francisco, April 24-26
Perforce White Paper

To provide a solid foundation for software development excellence in today's demanding economy, it's critical to have a software version management solution that can meet your demands.

Extracting Depot Paths into New Instances of Their Own
Mark Warren
DEPOT SPLITTING

INTRODUCTION
As instances are used over time, they naturally grow in file and metadata size. New files are submitted, metadata keeps expanding, and the instance becomes unwieldy. At some point, normal operations hold table locks long enough to affect all users. Upgrading hardware can mitigate this, but there is a hard limit to what hardware can deliver. A more practical way to relieve the mounting metadata pressure is to move select datasets to an instance/depot of their own.
Perfsplit [1] is a tool developed by Perforce that extracts a section of a depot from an existing Perforce database. It accesses a Perforce Server's database directly, and is platform and release specific. Perfsplit does not remove data from the source Perforce server, but it does copy archive files from it. Perfsplit is a good fit for this operation, but it does not resolve several problems:
- The need for zero downtime. Most instances in need of splitting have a very large user base, and the need to keep the instance up and running is compounded by the number of users who would otherwise be unable to access it.
- Perfsplit does not rename the depot in the new instance. This is undesirable, since having the same depot name across multiple instances can confuse users.
- Perfsplit needs 'p4 snap' to copy lazily integrated files to their physical lbr locations, and p4 snap can considerably increase the size of the original depot, depending on the size of the area being split off.
This document is intended to give guidelines on a method to resolve all these issues.
PREPARATION
To make sure we gather a complete dataset for migration from a live instance, it‟s necessary to
prevent users from making changes to the path(s) we are splitting. With super access rights,
this can be done by simply restricting ReadWrite access to this path and only allowing
ReadOnly. This is to ensure the metadata structure we are splitting off will be up-to-date. Once
this is done we need to create a checkpoint of the instanceto gather lbr records and we need
to have a running instance of this checkpoint for perfsplit use.
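The read-only restriction can be expressed in the protections table (p4 protect). A sketch of the two lines involved is shown below; the path and the user/host wildcards are illustrative and would be adjusted to the installation:

```
=write user * * -//targeted/path/to/split/...
read user * * //targeted/path/to/split/...
```

The first line revokes only the write right on the split path (the = prefix limits the exclusion to that single permission), while the second guarantees everyone can still read it.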
Despite perfsplit's inadequacies, this process makes use of it: perfsplit is necessary to build the foundation of the new instance. Its key function is using a map file to direct it to the selected path(s) to extract. Since we are splitting not only the initial path(s) but also their integration history, we will need to append that dataset to the splitmap. To get it, we grep the newly created checkpoint of the original instance for the lbrFile record defined in db.rev [2] for all files associated with the depot path we are splitting.

For example:

grep @db.rev@ /checkpoint.XXX | grep //targeted/path/to/split/
[1] http://ftp.perforce.com/perforce/tools/perfsplit/perfsplit.html
[2] http://www.perforce.com/perforce/doc.current/schema/#db.rev
This gives you the db.rev entries for the path you want to split. From these entries, pull the lbrFile column and remove all entries referring to the original path; what remains is the location of every lazily integrated file.
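The extraction step can be sketched like this. The two checkpoint lines below are a simplified, hypothetical rendering of db.rev records (real checkpoints carry more columns, so the awk field index would need adjusting to the actual schema of your server release):

```shell
# Hypothetical db.rev checkpoint excerpt: depotFile and lbrFile are the
# @-quoted depot paths; the first record is a lazy copy of a file that
# physically lives under //other/depot/.
cat > checkpoint.sample <<'EOF'
@pv@ 9 @db.rev@ @//targeted/path/to/split/a.c@ 1 0 0 101 0 0 @MD5@ 10 0 1 @//other/depot/src/a.c@ @1.101@ 0
@pv@ 9 @db.rev@ @//targeted/path/to/split/b.c@ 1 0 0 102 0 0 @MD5@ 12 0 0 @//targeted/path/to/split/b.c@ @1.102@ 0
EOF

# Keep only db.rev records for the split path, pull the lbrFile column
# (here the second-to-last @-quoted field), and drop entries that already
# live under the split path. What remains are the lazy-integration sources.
grep '@db.rev@' checkpoint.sample \
  | grep '//targeted/path/to/split/' \
  | awk -F'@' '{print $(NF-3)}' \
  | grep -v '^//targeted/path/to/split/' \
  | sort -u > lazy_sources.txt

cat lazy_sources.txt   # -> //other/depot/src/a.c
```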
We add these paths to the splitmap (mapping) file that already contains the path(s) we are splitting from the original depot. This step is necessary because we are not using perfsplit's p4 snap feature.
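The resulting splitmap might then look something like the fragment below. The paths are illustrative, and the exact map-file syntax is described in the perfsplit documentation:

```
//targeted/path/to/split/...
//other/depot/src/a.c
```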
TRANSITION
Once we have this mapping, we can begin the split, running perfsplit with the minimum options (source, output, and splitmap file) plus an additional, undocumented option, '-a', which skips perfsplit's archive file copy step. This builds, in the output path, a duplicate instance of the original metadata for all depots associated with the original's split path. Since we don't want two instances with depots of the same name, we next take another checkpoint of this new instance.
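Sketched as commands, this step might look as follows. The perfsplit option spellings (apart from -a) are purely illustrative, since option syntax is release specific; consult the perfsplit documentation for your release. The checkpoint is taken with the standard p4d -jc option:

```
perfsplit -a <source_server_root> <output_root> splitmap.txt
p4d -r <output_root> -jc
```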
CONVERSION
With this new checkpoint, we can shape the metadata into a new data structure. To do this, we build another instance from the newly created checkpoint, but during creation (replay) we make substitutions that point the current data structure where we want it.

For example, we would run the checkpoint through sed and replay the result:

sed -e 's#//foo/path/#//bar/path/#g' checkpoint.XXX | p4d -r $P4ROOT -f -jr -

Now we have a new instance with the correct metadata.
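The substitution itself can be demonstrated in isolation on a sample record (the record layout here is simplified for illustration):

```shell
# A sample checkpoint record that still references the old depot name.
echo '@pv@ 9 @db.rev@ @//foo/path/main.c@ 1 0 0 7' > journal.sample

# The same substitution the replay pipeline applies: rewrite //foo/path/
# to //bar/path/ before the record reaches p4d on stdin.
sed -e 's#//foo/path/#//bar/path/#g' journal.sample > journal.renamed

cat journal.renamed   # -> @pv@ 9 @db.rev@ @//bar/path/main.c@ 1 0 0 7
```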
CONNECTION
The conversion has now pointed the original metadata at a new depot area. We need to create this new depot, 'bar', to access that area. The archive files for this depot can be copied from the original location; or, if space is a factor, they can be left in the original location (or moved to a new one) and symlinked, depending on an installation's particular circumstances.

Once this is done, you will have a new instance, with a different name, containing a complete data structure of the split files.
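When space is tight, the symlink approach amounts to the following sketch (all paths are illustrative):

```shell
# Stand in for the original archive location and the new server root.
mkdir -p /tmp/orig_root/foo /tmp/new_root
touch /tmp/orig_root/foo/a.c,v        # an existing RCS-style archive file

# Point the new instance's depot directory at the original archives.
ln -sfn /tmp/orig_root/foo /tmp/new_root/bar

ls /tmp/new_root/bar                  # the archives appear under the new depot name
```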
VERIFICATION
Verification of the new instance (with the p4 verify command) should be run to test the success of the transfer. Only two kinds of problem can occur:

- Verification returns a "BAD" error. This is reported when the MD5 digest of a file revision stored in the Perforce database differs from the one calculated by the p4 verify command, and it indicates that the file revision might be corrupted. This is most likely due to changes to the physical files during transfer. Otherwise, the files should be confirmed by someone familiar with them, or by diffing them against the originals. [3]
[3] http://kb.perforce.com/article/961
- Verification returns a "MISSING" error. This indicates that Perforce cannot find the specified revisions within the versioned file tree; most likely, the archive file is missing from it. Check the lbrFile record for the file and make sure the file is in its correct location, that the new instance can access that location, and that the file's location was part of the splitmap. [4]
CLEANUP
You will notice that perfsplit carried over all the depots from the original instance, regardless of whether they were part of the splitmap. These can be removed, except for any depot containing data from integrated files located on your original split path. Such extra depots can easily be hidden by restricting their view in the protection table, making the new instance look like it has only the single depot.
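For example, assuming a carried-over depot named //extra, a single exclusion line at the list level in the protections table hides it from everyone (the user and host wildcards are illustrative):

```
list user * * -//extra/...
```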
COMPLETION
By implementing these steps around the Perforce tool perfsplit, the issues of zero downtime, duplicate naming, and integration history are all addressed. Resolving these issues makes perfsplit a far more usable tool in a large installation environment.
[4] http://kb.perforce.com/article/693