Boosting I/O Performance with KVM io_uring - ShapeBlue
Storage performance is becoming increasingly important. KVM's io_uring support aims to bring the I/O performance of a virtual machine to almost the same level as bare metal. Apache CloudStack has supported io_uring since version 4.16. Wido will show the performance difference io_uring brings to the table.
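The io_uring setting behind this is exposed through libvirt's disk driver element. A minimal sketch of what such a disk definition can look like (assuming QEMU 5.0+ and libvirt 6.3+; the image path is illustrative, and CloudStack normally generates this XML itself):

```xml
<!-- Hedged sketch: enabling io_uring on a libvirt/KVM virtio disk.
     The io='io_uring' attribute selects the io_uring AIO backend. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none' io='io_uring'/>
  <source file='/var/lib/libvirt/images/vm-root.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```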
Wido den Hollander is the CTO of CLouDinfra, an infrastructure company offering total webhosting solutions. CLDIN provides datacenter, IP and virtualization services for the companies within TWS. Wido den Hollander is a PMC member of the Apache CloudStack project and a Ceph expert. He started with CloudStack 9 years ago. What attracted his attention were the simplicity of CloudStack and the fact that it is an open-source solution. Over the years Wido became a contributor and a PMC member, and he was VP of the project for a year. He is one of our most active members, putting a lot of effort into keeping the project active and transforming it into a turnkey solution for cloud builders.
-----------------------------------------
The CloudStack European User Group 2022 took place on 7th April. The day saw a virtual get-together for the European CloudStack community, hosting 265 attendees from 25 countries. The event hosted 10 sessions from leading CloudStack experts, users and skilful engineers from the open-source world, which included technical talks, user stories, presentations of new features and integrations, and more.
------------------------------------------
About CloudStack: https://cloudstack.apache.org/
How do you operate over 1,200 deployments on a single BOSH Director? In the past, many talks have covered the topic of Cloud Foundry at scale. But how about the underlying automation layer? BOSH has its own set of challenges and limits for running VMs and deployments at scale. Learn which obstacles and limits came up and how we solved them with the help of the BOSH core development team. Learn how we monitor the directors, be it via logging and metrics or performance indicators. We'll also show you how we automate BOSH itself to ensure the best experience for end users, and to keep them blissfully unaware of the complexity of the processes working on their behalf. After this talk you will also be able to run at least 1,200 deployments on your directors.
How Can OpenNebula Fit Your Needs: A European Project Feedback - NETWAYS
BonFIRE is a European project which aims at providing a "multi-site cloud facility for applications, services and systems research and experimentation". Grouping different research cloud providers behind a common set of tools, APIs and services, it enables users to run their experiments against a heterogeneous set of infrastructure, hypervisors, networks, etc.
BonFIRE, and thus the (OpenNebula) testbeds, provide a relatively small set of images used to boot VMs. However, the experimental nature of BonFIRE projects results in a high "turnover" of running VMs. Many VMs are used for a period of between a few hours and a few days, and an experiment startup can trigger the deployment of many VMs at the same time on a small set of OpenNebula workers, which does not correspond to the usual cloud workflow.
A default OpenNebula installation is not optimized for such a use case (a small number of worker nodes, high VM turnover). However, thanks to its ability to be easily modified at each level of a cloud deployment workflow, OpenNebula has been tuned to fit better with the BonFIRE deployment process. This presentation will explain how to change the OpenNebula TM and VMM to improve the parallel deployment of many VMs in a short amount of time, reducing the time needed to deploy an experiment to a minimum without a lot of expensive hardware.
Parallel computing in bioinformatics t.seemann - balti bioinformatics - wed... - Torsten Seemann
I describe the three levels of parallelism that can be exploited in bioinformatics software: (1) clusters of multiple computers; (2) multiple cores on each computer; and (3) vector machine code instructions.
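Level (2), multiple cores on each computer, can be sketched in a few lines of Python; the GC-content function and the sample reads below are illustrative, not taken from the talk:

```python
# Sketch of per-core parallelism: each DNA read is scored independently,
# so the work maps cleanly onto a pool of worker processes (one per core).
from multiprocessing import Pool

def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a DNA sequence."""
    return sum(base in "GC" for base in seq) / len(seq)

if __name__ == "__main__":
    reads = ["GATTACA", "GGCC", "ATAT", "CGCG"]
    # One worker per available core; map distributes the reads.
    with Pool() as pool:
        scores = pool.map(gc_content, reads)
    print(scores)
```

Level (3) would replace the per-character loop with vectorised array operations, and level (1) would distribute the same map across machines via a cluster scheduler.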
Trying and evaluating the new features of GlusterFS 3.5 - Keisuke Takahashi
My presentation at LinuxCon/CloudOpen Japan 2014.
It has been only a few days since GlusterFS 3.5 was released, so feel free to correct me if you find any mistakes or misunderstandings. Thanks.
For the first time this year, 10gen will be offering a track completely dedicated to Operations at MongoSV, 10gen's annual MongoDB user conference on December 4. Learn more at MongoSV.com
Come learn about the different ways to back up your single servers, replica sets, and sharded clusters.
Backup with Bareos and ZFS - by Christian Reiß - NETWAYS
Doing backups is great, but storing the data somewhere is a whole different ballgame. You can use tapes, of course; but with ever-declining prices and increasing reliability of hard disks, storing all your data as files is becoming more and more preferable. There is just the matter of how to save them: as single files in a single filesystem, shared across a multitude of servers, or even in one large archive. The options are limited only by the administrator's imagination.
In my talk I want to tell you about my experiences with storing all archives in ZFS, opting for one dataset per host, server-side compression, ZFS RAID and quota enforcement. And since we all love the fully automated approach, I will show you how to do this with Puppet. The setup I am presenting is in production: hundreds of servers are fully automated with Puppet, Bareos/Bacula and ZFS.
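The one-dataset-per-host layout with compression and quotas might be set up roughly like this (pool name, hostname and quota size are illustrative, not taken from the talk):

```shell
# Hedged sketch of the layout described above (requires an existing
# ZFS pool named "tank"; values are examples only).
zfs create tank/backups
zfs set compression=lz4 tank/backups                 # server-side compression
zfs create tank/backups/host01.example.com           # one dataset per host
zfs set quota=500G tank/backups/host01.example.com   # quota enforcement
```

Because child datasets inherit properties, compression set once on `tank/backups` applies to every per-host dataset beneath it.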
Disaster Recovery Strategies Using oVirt's new Storage Connection Management ... - Allon Mureinik
A short overview of oVirt 3.3's Storage Connection Management feature, and several examples of how this feature can be used in disaster recovery strategies.
Sizing an Alfresco infrastructure has always been an interesting topic with lots of unanswered questions. There is no perfect formula that can accurately define the right sizing for your architecture given your use case. However, we can provide you with valuable guidance on how to size your Alfresco solution by asking the right questions, collecting the right numbers, and making the right assumptions in a very interesting sizing exercise.
How many Alfresco servers will you need in your Alfresco cluster? How many CPUs/cores do you need on those servers to handle your estimated user concurrency? How do you estimate the sizing and growth of your storage? How much memory do you need on your Solr servers? How many Solr servers do you need to get the response times you require? What are the golden rules that can drive and maintain the success of an Alfresco project?
bccon-2014 adm04 ibm-domino-64bit-all-you-need-to-know - ICS User Group
Native 64-bit applications are increasingly the standard in many customer environments. This session is about the benefits and technical background of 32-bit IBM Domino on a 64-bit OS versus native 64-bit IBM Domino. We'll provide best practices as well as the best combinations and choices you have for IBM Domino with add-on applications from IBM and other vendors. We'll also discuss recent changes in a mixed-bit environment and the pitfalls to avoid. You will also learn what business partners and IBM have to do to port their applications, and understand in more detail how Domino 64-bit works in your daily operations. The session mainly covers Domino 9.0 64-bit for Windows and Linux (new in Domino 9).
Welcome to the Live Memory Forensics class!
This is an introduction to live memory forensics.
It is designed for the investigator who has digital forensic experience and intermediate ability with the Microsoft Windows operating system.
UKUUG presentation about µCLinux on Pluto 6 - edlangley
Slides from a talk given at the UKUUG 2006 conference, derived from my final year project on the UWE CRTS degree, which involved porting uCLinux to the Pluto 6 gaming control board.
INTELLIGENT DISK SUBSYSTEMS – 2, I/O TECHNIQUES – 1
Caching: Acceleration of Hard Disk Access; Intelligent disk subsystems; Availability of disk subsystems. The Physical I/O path from the CPU to the Storage System; SCSI.
I/O TECHNIQUES – 2, NETWORK ATTACHED STORAGE
Fibre Channel Protocol Stack; Fibre Channel SAN; IP Storage. The NAS Architecture, The NAS hardware Architecture, The NAS Software Architecture, Network connectivity, NAS as a storage system.
Data deduplication is a hot topic in storage and saves significant disk space in many environments, with some trade-offs. We'll discuss what deduplication is and where the open-source solutions stand versus commercial offerings. The presentation will lean towards the practical, so attendees can use it in their real-world projects (what works, what doesn't, should you use it in production, etc.).
Gears of Perforce: AAA Game Development Challenges - Perforce
How does The Coalition, the Vancouver-based Xbox team, use Perforce to build Gears of War? By pulling UE4 source from Epic Games, sharing source with other Microsoft Studios, and supporting outsourcers, all while delivering 100GB/day inside the studio. Learn how and why we do what we do.
Hardware refers to all of the physical parts of a computer system. F.pdf - anjaniar7gallery
Hardware refers to all of the physical parts of a computer system. For a typical desktop computer this comprises the main system unit, a display screen, a keyboard, a mouse, a router/modem (for connection to the Internet), and usually a printer. Speakers, a webcam and an external hard disk for back-up storage are often also included. Many of these items are integrated into a single unit on a laptop or other form of mobile computer.
This option isn't as easy or cheap, but we have to mention it anyway. If you can open up your laptop, you can replace its internal drive with a larger drive, or insert a second internal drive in the off chance that your laptop has a second drive bay. Upgrading your laptop is often possible, but it's definitely more work than quickly plugging in an external storage device!
RAM
RAM, or "random access memory", is the temporary storage space that a computer loads software applications and user data into when it is running. All current RAM technologies are "volatile", which means that everything held in RAM is lost when a computer's power is removed. To a large extent, the more RAM a computer has, the faster and more effectively it will operate. Computers with little RAM have to keep moving data to and from their hard disks in order to keep running. This tends to make them not just slow in general, but, more annoyingly, intermittently sluggish.
The above all said, those hoping to speed up their PC by installing more RAM need to note that any PC with a 32-bit operating system can only access a maximum of 4GB of RAM. Add more, and the PC simply will not recognise it. In practice this means that the vast majority of PCs in use and being sold today cannot benefit from more than 4GB of RAM, and this includes many PCs running Windows 7 (which is very widely sold in its 32-bit rather than 64-bit format to maximise compatibility with older software and peripherals).
RAM is measured in megabytes (MB) and gigabytes (GB), as detailed on the storage page. Just how much RAM a computer needs depends on the software it is required to run effectively. A computer running Windows XP will usually function quite happily with 1GB of RAM, whereas twice this amount (i.e. 2GB) is the realistic minimum for computers running Windows 7.
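The 4GB ceiling mentioned above follows directly from the pointer width, which a quick check makes concrete:

```python
# A 32-bit pointer can name 2**32 distinct byte addresses,
# which is exactly 4 GiB -- hence the RAM ceiling of a 32-bit OS.
addressable_bytes = 2 ** 32
print(addressable_bytes)             # total addressable bytes
print(addressable_bytes // 2 ** 30)  # the same figure in GiB
```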
HARD DRIVE
Hard disk drives are the high-capacity storage devices inside a computer from which software and user data are loaded. Like most other modern storage devices, the capacity of the one or more internal hard disks inside a computer is measured in gigabytes (GB), as detailed on the storage page. Today 40GB is an absolute minimum hard drive size for a new computer running Windows 7, with a far larger capacity being recommended in any situation where more than office software is going to be installed. Where a computer will frequently be used to edit video, a second internal hard disk dedicated only to video storage is highly recommended for stable operation. Indeed, for professional video editing using a .
Pushing the limits of ePRTC: 100ns holdover for 100 days - Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
How to Get CNIC Information System with Paksim Ga.pptx - danishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Removing Uninteresting Bytes in Software Fuzzing - Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux tools: Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns, and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for technology and making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Enhancing adoption of Open Source Libraries. A case study on Albumentations.AI - Vladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster and ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs - Alex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! - SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... - Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
4. Current setup. Service provided: file sharing (Samba). Hard disk system: a single hard disk in an HP desktop PC, no RAID, no volume management. Space information: 140G total, 38G used (over two years). Stability information: 2008-2010, stable as a rock on the HP desktop.
5. Backup server. Device information: an old IBM x86 PC server, bought in 2003 or 2004. Space information: 33G total. Backup information: a backup script is triggered at 10:00 pm every night to sync the data. Solution if the primary server crashes: change the backup server's IP to the primary server's IP to take over the file sharing service. User-transparency guarantee: an rsync daemon syncs the primary server's data, and we switch over to the backup server if the primary goes down.
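The nightly sync described above could be wired up roughly like this (hostname, module name and paths are hypothetical; the slides only state that an rsync daemon and a 10:00 pm script are used):

```shell
# Hedged sketch: pull from the primary's rsync daemon every night.
# /etc/cron.d/backup-sync on the backup server:
0 22 * * * root rsync -a --delete rsync://primary.example.com/shares/ /srv/shares/
```

Pulling via the rsync daemon (rather than over SSH) matches the slides' mention of an rsync daemon and avoids key management between the two hosts.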
6. Problems. 1. Backup server: the hard disk is too small, 33G total, and all of the disk space is used. 2. What happens if the primary server goes down now? (1) The backup server can take over in about 20 minutes; however, data loss will occur. (2) Impact: more than 30 users and some critical servers will be affected. (3) Any workflow which uses the FreeNAS Samba disk mapping will be affected.
8. Solution: replace the old backup server with a desktop PC with a large disk. Within 5 years it will meet the users' needs. Resources I need to solve it: 1. one desktop PC; 2. several IPs for testing; 3. a little space in our datacenter, 5th floor.