Compared to a company, a university like the Friedrich Schiller University Jena has some distinctive characteristics: the faculties differ widely in IT expertise, and each runs its own little kingdom. The result is a highly heterogeneous infrastructure.
Last year we migrated our backup infrastructure to Bareos. Particular topics: user self-service, the differences between desktop and server backup, NDMP backup of our Isilon, and monitoring.
Machine Data to Readable Reports - System Monitoring, Alerting and Reporting ... - Blackboard APAC
Within a year, USC has enhanced various system administration tasks. From lengthy file and database interrogation, we have moved to a proactive instant-alerting process in which incidents are captured and actioned before staff and students are impacted. A number of commercial, open-source and in-house tools have been utilised to facilitate these improvements, and sights are now set on shifting to self-healing incidents.
Delivered at Innovate and Educate: Teaching and Learning Conference by Blackboard, 24-27 August 2015 in Adelaide, Australia.
Speaker: Jurix, DBACC
Topic: Migration challenges and migration process from IBM AIX to Oracle Solaris
Language: Latvian
Topic description:
In this presentation I will talk about my experience organizing a client's database migration to an Oracle SPARC SuperCluster. The task is to migrate a database from IBM AIX to Oracle Solaris. I will describe some of the migration options we considered, as well as the problems that awaited us along the way.
Tips for Administering Complex Distributed Perforce Environments - Perforce
Most users do not have administrator privileges, so how do you allow selected users to forcefully delete their changes and clients? What can be automated to proactively prevent database growth before it affects performance? How do you handle controlled failover in a hierarchical server system between dozens of servers? How do you work around the limitations of shelves in distributed environments? Learn several top tips and tricks we use to handle Perforce servers and how to use the broker’s filtering functionality to your advantage to administer complex Perforce environments.
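The broker-filtering idea lends itself to a small illustration. The sketch below models only the decision logic (which user/command pairs are rejected); the user names, command list, and returned dictionary shape are all invented for illustration, and this is not the real P4Broker filter protocol:

```python
# Hypothetical sketch of broker-style command filtering: only selected
# users may run destructive commands; everyone else gets a rejection
# with an explanatory message. All names here are invented.

PRIVILEGED = {"alice", "build-admin"}            # users allowed destructive commands
DESTRUCTIVE = {"obliterate", "client -f -d", "change -f -d"}

def filter_command(user: str, command: str) -> dict:
    """Return a broker-style action for an incoming command."""
    if command in DESTRUCTIVE and user not in PRIVILEGED:
        return {"action": "REJECT",
                "message": f"'{command}' requires admin approval"}
    return {"action": "PASS"}

print(filter_command("bob", "obliterate"))
print(filter_command("alice", "obliterate"))
```

A real broker filter would read the request details from the server and emit its verdict in the broker's expected format; the point here is only that the allow/deny decision is a few lines of logic kept outside the Perforce server itself.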
Database Smart Flash Cache. This feature is available on Solaris and Oracle Enterprise Linux and allows customers to increase the effective size of the Oracle database buffer cache without adding more main memory to the system. For transaction-based workloads, Oracle database blocks are normally loaded into a dedicated shared memory area in main memory called the System Global Area (SGA). Database Smart Flash Cache allows the database buffer cache to be expanded beyond the SGA in main memory to a second level cache on flash memory.
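The two-level arrangement described above can be pictured with a toy model: a small LRU "RAM" tier whose evictions land in a larger "flash" tier instead of being discarded, so a later read of an evicted block is still a cache hit. The sizes, class names, and hit labels below are invented for illustration; this is not Oracle's implementation:

```python
from collections import OrderedDict

class TwoLevelCache:
    """Toy model of a buffer cache (RAM) backed by a second-level
    flash cache: blocks evicted from RAM drop into the flash tier
    rather than being discarded outright."""

    def __init__(self, ram_blocks: int, flash_blocks: int):
        self.ram = OrderedDict()     # hot tier, like the SGA buffer cache
        self.flash = OrderedDict()   # warm tier, like the flash cache
        self.ram_cap, self.flash_cap = ram_blocks, flash_blocks

    def get(self, block: int) -> str:
        if block in self.ram:
            self.ram.move_to_end(block)
            return "ram-hit"
        if block in self.flash:           # promote back into RAM
            self.flash.pop(block)
            self._put_ram(block)
            return "flash-hit"
        self._put_ram(block)              # simulate a read from disk
        return "miss"

    def _put_ram(self, block: int) -> None:
        self.ram[block] = True
        if len(self.ram) > self.ram_cap:  # evict the LRU block to flash
            old, _ = self.ram.popitem(last=False)
            self.flash[old] = True
            if len(self.flash) > self.flash_cap:
                self.flash.popitem(last=False)

cache = TwoLevelCache(ram_blocks=2, flash_blocks=4)
for b in [1, 2, 3, 1]:
    print(b, cache.get(b))
```

Re-reading block 1 after its eviction is served from the flash tier rather than from disk, which is exactly the benefit the feature description claims for transaction-style workloads.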
Gears of Perforce: AAA Game Development Challenges - Perforce
How does Vancouver-based Xbox team, The Coalition, use Perforce to build Gears of War? By pulling UE4 source from Epic Games, sharing source with other Microsoft Studios, supporting outsourcers—all while delivering 100GB/day inside the studio. Learn how and why we do what we do.
Stacki DC Meetup (11/30/16)
Presenter: Justin Senseney - Senior Computer Scientist, NIST
Description: Stacki was used to upgrade a high-performance computing (HPC) cluster at the National Institute of Standards and Technology (NIST) in Gaithersburg, MD. A 1,200-node CentOS 5 Maui/Torque cluster was upgraded to CentOS 7 with a Slurm queuing system. This presentation shows the application of Stacki to this HPC cluster and contrasts it with the methods previously used for provisioning. Stacki carts and pallets are used to provision role-based servers. Ideas are presented that would make it easier to manage multiple clusters. Any mention of commercial products, including Stacki, within this presentation is for information only; it does not imply recommendation or endorsement by NIST.
Training Slides: Basics 107: Simple Tungsten Replicator Installation to Extra... - Continuent
In this basic training session, we illustrate how to install and configure the Tungsten Replicator to extract events from a MySQL instance. This course is aimed at anyone looking to set up replication from MySQL. A basic understanding of Tungsten Replicator and MySQL replication is assumed.
AGENDA
- Review of the Tungsten Replicator
- Inspect the required prerequisites
- Explain the available installation methods
- Demonstrate a simple installation
Hybrid Collaborative Tiered Storage with Alluxio - Thai Bui
Systems that read from AWS S3 often pay a performance penalty: there is no co-location, and the data has to move across slower, often congested networks. Alluxio can provide a caching layer for the data, but there is still the question of how and when to move which data. Should all data be cached by default, or only when it is used? In this talk, I will explore the gray area in between, where users and dataset publishers collaborate to decide what data is cached, and how, in a tiered-storage architecture that maximizes performance and minimizes operating costs.
Deploying SOA on the Oracle Database Appliance - O-box
This is the O-box section from the presentation delivered at Oracle OpenWorld 2014 in San Francisco by Simon Haslam from O-box, and Frances Zhao-Perez from Oracle.
For the full slide deck see: "Oracle WebLogic on Oracle Database Appliance: Combining High Availability and Simplicity [CON8004]"
https://oracleus.activeevents.com/2014/connect/sessionDetail.ww?SESSION_ID=8004
This presentation, delivered by Travis Hankins, Greystone's VP of Technology, provides an introduction to the features and benefits of the Windows Server 2016 operating system.
Items of focus include:
• Licensing
• Nano Server
• Containers/Docker
• Hands on Labs
• Active Directory Recovery Feature
• Deploying a Container
From Windows to Linux: Converting a Distributed Perforce Helix Infrastructure - Perforce
There are many advantages to running Perforce Helix on Linux servers. See the process and pitfalls encountered when converting a distributed Perforce infrastructure from Windows to Linux.
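One classic pitfall of such a conversion is case sensitivity: paths that coexisted as "the same file" on a case-insensitive Windows server become distinct files on Linux. A pre-migration sweep for case collisions, sketched below with invented depot paths, is one way to surface these before the move:

```python
# Sketch: find groups of paths that collide when case is folded, i.e.
# paths that a case-insensitive Windows server would treat as one file
# but a case-sensitive Linux server would treat as several.

from collections import defaultdict

def case_collisions(paths):
    groups = defaultdict(list)
    for p in paths:
        groups[p.lower()].append(p)   # fold case to find would-be duplicates
    return [g for g in groups.values() if len(g) > 1]

depot = ["//depot/src/Main.c", "//depot/src/main.c", "//depot/README"]
print(case_collisions(depot))
```

Each reported group needs a human decision (rename or merge) before the converted server goes live on Linux.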
OSBConf 2016: The Database Backup is done - what next? - by Jörg Brühe - NETWAYS
“Nobody wants to do a backup – but everybody demands a restore”: that is a pointed summary of the situation as many system and database administrators see it. Only the fact that there is no restore without a backup ensures that they are given the resources for backup.
But creating a backup does not settle the matter. In their very own interest, admins must ensure that the restore will really work and that the backup has no hidden damage and is not otherwise unsuitable. So a test system is needed on which a restore is performed and verified.
The result should not simply be overwritten: it is real live data, available on a system that is neither loaded nor changed by production! These are perfect conditions for load-intensive evaluations, for creating excerpts for billing, or for anonymizing the data so that it can be used as snapshots for development and testing. All this can be automated, and data privacy rules can be fitted in.
I will present the relevant scripts and settings of a MySQL setup with about 30 production instances (each a master-master replication pair), from which you may take the concepts and code snippets that fit your environment.
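The "verify the restore" step above can be automated in a few lines. A minimal sketch, with invented table names and a checksum-comparison approach standing in for whatever verification the talk's real scripts perform: record a per-table checksum at backup time, then compare after restoring onto the test system.

```python
# Sketch of automated restore verification: checksum each table's content
# at backup time, restore onto the test system, and flag any table whose
# restored content no longer matches. Tables and rows are invented.

import hashlib
import json

def snapshot(tables):
    """Record a per-table checksum (table name -> sha256 of its rows)."""
    return {name: hashlib.sha256(json.dumps(rows).encode()).hexdigest()
            for name, rows in tables.items()}

def verify_restore(expected, restored):
    """Return the names of tables whose restored content does not match."""
    actual = snapshot(restored)
    return [t for t in expected if actual.get(t) != expected[t]]

backup = {"customers": [[1, "ada"], [2, "grace"]], "orders": [[10, 1]]}
expected = snapshot(backup)

damaged = {"customers": [[1, "ada"]], "orders": [[10, 1]]}  # simulated bad restore
print(verify_restore(expected, damaged))
```

Against a real MySQL instance the row data would come from queries (or `CHECKSUM TABLE`), but the control flow - snapshot, restore, compare, alert on mismatch - is the same.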
Backup with Bareos and ZFS - by Christian Reiß - NETWAYS
Doing backups is great, but storing the data somewhere is a whole different ballgame. You can use tapes, of course; but with ever-declining prices and the increasing reliability of hard disks, storing all your data as files is becoming more and more attractive. There is just the question of how to store them: as single files in a single filesystem, shared across a multitude of servers, or even in one large archive. The options are limited only by the administrator's imagination.
In my talk I want to tell you about my experience with storing all archives in ZFS, opting for one dataset per host, server-side compression, ZFS RAID and quota enforcement. And since we all love the fully automated approach, I will show you how to do this with Puppet. The setup I am presenting is in production: hundreds of servers are fully automated with Puppet, Bareos/Bacula and ZFS.
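The one-dataset-per-host layout with compression and quotas boils down to a repeated `zfs create` per client, which is what the Puppet manifests ultimately render. A small sketch that generates those commands; the pool name, quota value, and host names are invented, and the property choices (lz4 compression, a per-host quota) are one plausible configuration, not the speaker's exact one:

```python
# Sketch: render one "zfs create" per backup client, enabling compression
# and enforcing a quota, matching the one-dataset-per-host layout.
# Pool name, quota, and hosts are invented examples.

def zfs_commands(pool, hosts, quota="500G"):
    cmds = []
    for host in hosts:
        cmds.append(
            f"zfs create -o compression=lz4 -o quota={quota} {pool}/backup/{host}"
        )
    return cmds

for cmd in zfs_commands("tank", ["web01", "db01"]):
    print(cmd)
```

In the Puppet-driven version the host list comes from the node definitions, so adding a client to Puppet automatically provisions its dataset.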
OSBConf 2016: Backup at Scale - Bareos Active Clients and Puppet - Tobias Groß - NETWAYS
Using a central Bareos server with hundreds of clients behind NAT can be quite a hassle. To enable the director to reach the clients, one port forwarding per client has to be configured. This is a massive configuration overhead and contrary to normal server-client architecture. To fix this, Bareos GmbH & Co. KG developed a new feature called “active client”, sponsored by Globalways AG.
With the active-client feature of Bareos and Puppet as configuration management software, maintaining a Bareos setup has become very easy. Just add a Puppet Bareos class to the client's definition, and the configuration of both client and director is handled completely by Puppet. Even setups with a TLS and PKI infrastructure can be achieved very easily, using trocla as a password store.
OSBConf 2016: Building a Business Continuity Plan with Bareos and Rear - by G... - NETWAYS
Business Continuity (BC) Management is often confused with Disaster Recovery (DR) Planning, as BC steps are often mentioned as part of a DR handbook. During this talk we will highlight what BC is all about and how to get started with it. Creating a DR plan for certain key systems is only a small part of BC Management, and as an example we will illustrate how a central backup server (using Bareos backup software) plays an important role in DR scenarios for the key systems in your datacenter.
We will explain how Relax-and-Recover (rear), a Linux DR tool, can be integrated with Bareos backup software to fill the DR gap, using Bareos to restore the system from scratch. Using the right tool for each task, rear for the bare-metal preparation and Bareos for the restore, is the perfect match.
Alluxio 2.0 & Near Real-time Big Data Platform w/ Spark & Alluxio - Alluxio, Inc.
Alluxio Bay Area Meetup March 14th
Join the Alluxio Meetup group: https://www.meetup.com/Alluxio
Alluxio Community slack: https://www.alluxio.org/slack
Speeding Up Spark Performance using Alluxio at China Unicom - Alluxio, Inc.
Data Orchestration Summit 2020 organized by Alluxio
https://www.alluxio.io/data-orchestration-summit-2020/
Speeding Up Spark Performance using Alluxio at China Unicom
Ce Zhang, Big Data Engineer (China Unicom)
About Alluxio: alluxio.io
Engage with the open source community on slack: alluxio.io/slack
Building a Distributed File System for the Cloud-Native Era - Alluxio, Inc.
Big Data Bellevue Meetup
May 19, 2022
For more Alluxio events: https://alluxio.io/events/
Speaker: Bin Fan (Founding Engineer & VP of Open Source, Alluxio)
Today, data engineering in modern enterprises has become increasingly complex and resource-consuming, particularly because (1) the rich amount of organizational data is often distributed across data centers, cloud regions, or even cloud providers, and (2) the complexity of the big data stack has grown quickly over the past few years with an explosion of big-data analytics and machine-learning engines (MapReduce, Hive, Spark, Presto, TensorFlow and PyTorch, to name a few).
To address these challenges, it is critical to provide a single and logical namespace to federate different storage services, on-prem or cloud-native, to abstract away the data heterogeneity, while providing data locality to improve the computation performance. [Bin Fan] will share his observation and lessons learned in designing, architecting, and implementing such a system – Alluxio open-source project — since 2015.
Alluxio originated in the UC Berkeley AMPLab (it used to be called Tachyon) and was initially proposed as a daemon service enabling Spark to share RDDs across jobs for performance and fault tolerance. Today, it has become a general-purpose, high-performance, and highly available distributed file system providing a generic data service that abstracts away complexity in data and I/O. Many companies and organizations, such as Uber, Meta, Tencent, TikTok and Shopee, use Alluxio in production as a building block of their data platforms, creating a data abstraction and access layer. We will talk about the journey of this open source project, especially its design challenges in tiered metadata storage (based on RocksDB), its embedded replicated state machine (based on Raft) for HA, and the evolution of its RPC framework (based on gRPC).
Optimizing Latency-Sensitive Queries for Presto at Facebook: A Collaboration ... - Alluxio, Inc.
Alluxio Global Online Meetup
May 7, 2020
For more Alluxio events: https://www.alluxio.io/events/
Speakers:
Rohit Jain, Facebook
Yutian "James" Sun, Facebook
Bin Fan, Alluxio
For many latency-sensitive SQL workloads, Presto is often bound by retrieving distant data. In this talk, Rohit Jain and James Sun from Facebook and Bin Fan from Alluxio will introduce their teams' collaboration on adding a local on-SSD Alluxio cache inside Presto workers to improve previously unsatisfactory Presto latency.
This talk will focus on:
- Insights of the Presto workloads at Facebook w.r.t. cache effectiveness
- API and internals of the Alluxio local cache, from design trade-offs (e.g. caching granularity, concurrency level, etc.) to performance optimizations.
- Initial performance analysis and timeline to deliver this feature for general Presto users.
- Discussion on our future work to optimize cache performance with deeper integration with Presto
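The caching-granularity trade-off in the first two bullets can be made concrete with a toy page cache: the local cache stores fixed-size pages of a remote file, so one logical read may touch several cached pages, and granularity decides how much extra data each miss drags in. This is a generic sketch with an invented (tiny) page size, not the actual Presto/Alluxio cache code:

```python
# Toy page-granularity cache: reads are served from fixed-size cached
# pages of a "remote" file; a read spanning N pages costs at most N
# remote fetches, and later overlapping reads become hits.

PAGE = 4  # bytes per cache page (tiny, purely for illustration)

class LocalPageCache:
    def __init__(self, fetch):
        self.fetch = fetch        # function: page index -> bytes (remote read)
        self.pages = {}           # page index -> cached bytes
        self.hits = self.misses = 0

    def read(self, offset, length):
        out = b""
        first = offset // PAGE
        last = (offset + length - 1) // PAGE
        for idx in range(first, last + 1):
            if idx not in self.pages:
                self.misses += 1
                self.pages[idx] = self.fetch(idx)   # go to remote storage
            else:
                self.hits += 1
            out += self.pages[idx]
        start = offset - first * PAGE               # trim to the byte range
        return out[start:start + length]

remote = b"abcdefghijklmnop"
cache = LocalPageCache(lambda i: remote[i * PAGE:(i + 1) * PAGE])
print(cache.read(2, 6))   # spans pages 0 and 1: two remote fetches
print(cache.read(4, 4))   # page 1 is already cached: a local hit
```

A larger page amortizes fetch overhead but caches bytes that may never be read; a smaller page does the opposite, which is exactly the trade-off the bullet list refers to.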
Desktop as a Service supporting Environmental ‘omics - David Wallom
Within the environmental 'omics community, Bio-Linux is a widely used tool. It has the advantage of providing, in a single deliverable package, all the software and tools needed to support common analyses. With the growth in data volumes within the community, and increasing constraints on users' access to and control over their own desktops, an alternative delivery method is necessary for Bio-Linux and, in future, the Docker container environment.
Within the EOS Cloud project we have built a Desktop as a Service system that centrally hosts virtual machines with these tools preconfigured and maintained. To enable efficient use of resources, we support user-controlled resource scaling: users can work on small-scale VMs for task configuration and data manipulation, then boost to a larger scale to run analysis applications, all while the user environment is maintained consistently. Alongside this, we have developed tools within the project to simplify the increasingly popular Docker usage model, including ensuring uniform behaviour between the host system and the running Docker container.
Within the invitation-only trial user community we identify two exemplar groups and explain their usage, and how the products and services developed within the project are useful to them. We conclude by discussing Desktop as a Service itself: it is of great benefit to the bioinformatics community, but could also be of great use anywhere there is a need for a stable user environment, with applications already available, that does not rely on local ICT support.
Still All on One Server: Perforce at Scale - Perforce
Google runs the busiest single Perforce server on the planet, and one of the largest repositories in any source control system. This session will address server performance and other issues of scale, as well as where Google is in general, how it got there and how it continues to stay ahead of its users.
BIO IT 15 - Are Your Researchers Paying Too Much for Their Cloud-Based Data B... - Dirk Petersen
Dirk Petersen, Scientific Computing Manager, Fred Hutchinson Cancer Research Center (FHCRC)
Joe Arnold, President and Chief Product Officer, SwiftStack
Considering deploying a multi-petabyte storage-as-a-service offering in your research environment? Learn how an industry-leading software-defined object storage solution, architected by SwiftStack and Silicon Mechanics, helped shift hundreds of users to an object-based workflow for their archival data. With an emphasis on cost efficiencies, scalability, and manageability, see how this implementation at Fred Hutchinson Cancer Research Center (FHCRC) is continually evolving across new use cases and access methods.
Introducing 3 FREE Smart solutions for SQL Server (Adi Sapir, Docco Labs)
As Database experts, we work with SQL Server Databases on a daily basis. We face the same problems every SQL Administrator and/or developer does. And – we spend our time writing solutions for these problems! In this session Adi will introduce the following 3, totally FREE solutions:
· ClipTable – A revolutionary new *anything* to SQL Table importer
· Database File Explorer – a much easier way to explore our database->filegroups->files->storage mapping
· Log Table Viewer – a complete client/server logger solution for SQL Server
Scalable and Highly Available Distributed File System Metadata Service Using gR... - Alluxio, Inc.
Alluxio Community Office Hour
Apr 7, 2020
For more Alluxio events: https://www.alluxio.io/events/
Speaker: Bin Fan
Alluxio (alluxio.io) is an open-source data orchestration system that provides a single namespace federating multiple external distributed storage systems. It is critical for Alluxio to be able to store and serve the metadata of all files and directories from all mounted external storage both at scale and at speed.
This talk shares our design, implementation, and optimization of the Alluxio metadata service (master node) to address these scalability challenges. In particular, we will focus on how to apply and combine techniques including tiered metadata storage (based on the off-heap KV store RocksDB), a fine-grained file system inode-tree locking scheme, an embedded replicated state machine (based on Raft), and exploration and performance tuning of RPC frameworks (Thrift vs. gRPC). As a result of combining the above techniques, Alluxio 2.0 can store at least 1 billion files with a significantly reduced memory requirement, serving 3,000 workers and 30,000 clients concurrently.
In this Office Hour, we will go over:
- Metadata storage challenges
- How to combine different open source technologies as building blocks
- The design, implementation, and optimization of Alluxio metadata service
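The tiered-metadata idea above can be illustrated with a toy model: a bounded in-memory "hot" tier of recently used inodes, spilling the least recently used entries into an unbounded "cold" tier (a plain dict below, standing in for the on-disk RocksDB store). The class, capacity, and inode shape are invented for illustration; Alluxio's real implementation is far more involved:

```python
# Toy model of tiered metadata storage: a small on-heap hot tier plus
# an unbounded cold tier (a dict standing in for RocksDB). Inodes fault
# back into the hot tier on access; LRU inodes spill to the cold tier.

from collections import OrderedDict

class TieredInodeStore:
    def __init__(self, hot_capacity):
        self.hot = OrderedDict()   # recently used inodes, kept in memory
        self.cold = {}             # everything else, "persisted"
        self.cap = hot_capacity

    def put(self, inode_id, meta):
        self.cold.pop(inode_id, None)
        self.hot[inode_id] = meta
        self.hot.move_to_end(inode_id)
        if len(self.hot) > self.cap:             # spill LRU inode to cold tier
            old, m = self.hot.popitem(last=False)
            self.cold[old] = m

    def get(self, inode_id):
        if inode_id in self.hot:
            self.hot.move_to_end(inode_id)
            return self.hot[inode_id]
        if inode_id in self.cold:                # fault back into the hot tier
            self.put(inode_id, self.cold.pop(inode_id))
            return self.hot[inode_id]
        return None

store = TieredInodeStore(hot_capacity=2)
for i in range(4):
    store.put(i, {"name": f"file{i}"})
print(sorted(store.hot), sorted(store.cold))
```

The memory bound is on the hot tier only, which is how such a design can hold a billion files while keeping the heap footprint roughly proportional to the working set rather than the namespace.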
Presentation from the 2016 Austin OpenStack Summit.
The Ceph upstream community is declaring CephFS stable for the first time in the recent Jewel release, but that declaration comes with caveats: while we have filesystem repair tools and a horizontally scalable POSIX filesystem, we have default-disabled exciting features like horizontally-scalable metadata servers and snapshots. This talk will present exactly what features you can expect to see, what's blocking the inclusion of other features, and what you as a user can expect and can contribute by deploying or testing CephFS.
Managing Your Hyperion Environment – Performance Tuning, Problem Solving and ... - eCapital Advisors
Casey Ratliff from eCapital Advisors provides recommendations on Oracle - Hyperion performance tuning at a Hyperion User Group meeting in Minnesota.
Diagnostics/Troubleshooting
- Where are all the logs
- Using Log Analysis Utility
- EPM System Registry
- Deployment Report
- EPM Diagnostic – Validation
- Zip to Logs
Changes that can improve performance
- Java Heap
- Data Connections
- Essbase/
Casey Ratliff, Lead System Architect
http://www.eCapitalAdvisors.com
In this talk, Tim Bird will discuss the recent status of Linux with regard to embedded systems. This will include a review of the last year's worth of mainline kernel releases, as well as topic areas specifically relevant to embedded, such as boot-up time, security and system size. Tim will also present recent and planned work by the Core Embedded Linux Project of the Linux Foundation, discuss the current status of Linux in various markets and fields, go over current areas of work, and discuss the remaining challenges Linux faces in embedded projects.
When most topologies with Perforce involved a single P4D instance and proxies hanging off of that instance, the backup and performance needs were focused in one central location. As Perforce evolves into a multi-site design, there is a greater need to have high-performing and stable solutions in multiple locations. In this session, learn how to achieve a scalable multi-site design that addresses performance, stability, backups/Disaster Recovery, and monitoring with distributed Perforce.
Similar to OSBConf 2016: The Backup Report of the Friedrich Schiller University Jena - by Thomas Otto
An Enterprise Resource Planning (ERP) system includes various modules that reduce any business's workload. Additionally, it organizes workflows, which drives enhanced productivity. Here is a detailed explanation of the ERP modules; going through the points will help you understand how the software is changing work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
First Steps with Globus Compute Multi-User EndpointsGlobus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
Software Engineering, Software Consulting, Tech Lead.
Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Security,
Spring Transaction, Spring MVC,
Log4j, REST/SOAP WEB-SERVICES.
Top Nidhi Software Solution Free Download - vrstrong314
This presentation emphasizes the importance of data security and legal compliance for Nidhi companies in India. It highlights how online Nidhi software solutions, like Vector Nidhi Software, offer advanced features tailored to these needs. Key aspects include encryption, access controls, and audit trails to ensure data security. The software complies with regulatory guidelines from the MCA and RBI and adheres to Nidhi Rules, 2014. With customizable, user-friendly interfaces and real-time features, these Nidhi software solutions enhance efficiency, support growth, and provide exceptional member services. The presentation concludes with contact information for further inquiries.
SOCRadar Research Team: Latest Activities of IntelBrokerSOCRadar
The European Union Agency for Law Enforcement Cooperation (Europol) has suffered an alleged data breach after a notorious threat actor claimed to have exfiltrated data from its systems. Infamous data leaker IntelBroker posted on the even more infamous BreachForums hacking forum, saying that Europol suffered a data breach this month.
The alleged breach affected Europol agencies CCSE, EC3, Europol Platform for Experts, Law Enforcement Forum, and SIRIUS. Infiltration of these entities can disrupt ongoing investigations and compromise sensitive intelligence shared among international law enforcement agencies.
However, this is neither the first nor the last activity of IntekBroker. We have compiled for you what happened in the last few days. To track such hacker activities on dark web sources like hacker forums, private Telegram channels, and other hidden platforms where cyber threats often originate, you can check SOCRadar’s Dark Web News.
Stay Informed on Threat Actors’ Activity on the Dark Web with SOCRadar!
A Comprehensive Look at Generative AI in Retail App Testing.pdfkalichargn70th171
Traditional software testing methods are being challenged in retail, where customer expectations and technological advancements continually shape the landscape. Enter generative AI—a transformative subset of artificial intelligence technologies poised to revolutionize software testing.
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
Navigating the Metaverse: A Journey into Virtual Evolution"Donna Lenk
Join us for an exploration of the Metaverse's evolution, where innovation meets imagination. Discover new dimensions of virtual events, engage with thought-provoking discussions, and witness the transformative power of digital realms."
Understanding Globus Data Transfers with NetSageGlobus
NetSage is an open privacy-aware network measurement, analysis, and visualization service designed to help end-users visualize and reason about large data transfers. NetSage traditionally has used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks world wide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for Flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several different example use cases that NetSage can answer, including: Who is using Globus to share data with my institution, and what kind of performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what kind of performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did the overall data transfer performance compare to the Globus users?
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G...Globus
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
Unleash Unlimited Potential with One-Time Purchase
BoxLang is more than just a language; it's a community. By choosing a Visionary License, you're not just investing in your success, you're actively contributing to the ongoing development and support of BoxLang.
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...Shahin Sheidaei
Games are powerful teaching tools, fostering hands-on engagement and fun. But they require careful consideration to succeed. Join me to explore factors in running and selecting games, ensuring they serve as effective teaching tools. Learn to maintain focus on learning objectives while playing, and how to measure the ROI of gaming in education. Discover strategies for pitching gaming to leadership. This session offers insights, tips, and examples for coaches, team leads, and enterprise leaders seeking to teach from simple to complex concepts.
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
Globus Connect Server Deep Dive - GlobusWorld 2024Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
Custom Healthcare Software for Managing Chronic Conditions and Remote Patient...Mind IT Systems
Healthcare providers often struggle with the complexities of chronic conditions and remote patient monitoring, as each patient requires personalized care and ongoing monitoring. Off-the-shelf solutions may not meet these diverse needs, leading to inefficiencies and gaps in care. It’s here, custom healthcare software offers a tailored solution, ensuring improved care and effectiveness.
Enhancing Research Orchestration Capabilities at ORNL.pdfGlobus
Cross-facility research orchestration comes with ever-changing constraints regarding the availability and suitability of various compute and data resources. In short, a flexible data and processing fabric is needed to enable the dynamic redirection of data and compute tasks throughout the lifecycle of an experiment. In this talk, we illustrate how we easily leveraged Globus services to instrument the ACE research testbed at the Oak Ridge Leadership Computing Facility with flexible data and task orchestration capabilities.
2. 10 Faculties
• Theology
• Faculty of Law
• Economics
• Philosophy
• Social and Behavioural Sciences
• Mathematics & Computer Science
• Physics and Astronomy
• Chemistry and Earth Sciences
• Biology and Pharmacy
• Medicine
…
• many scientific departments and institutes
• other facilities
• Academic Affairs
• Human Resources
• Library (ThULB)
• …
Slide 2, OSBConf 2016, The backup report of the Friedrich Schiller University Jena, Thomas Otto
Overview of FSU Jena
3. Needs & Conclusions
• independent -> difficult to set rules
• freedom of research and education
• very heterogeneous resources and knowledge
• sometimes their own IT department
• sometimes nothing
• sometimes rivals to each other
-> different requirements
-> self service (notification, restore)
-> separation (show only one's own information)
4. Goal for 2015/16
• replace our two backup systems
• Veritas NetBackup
old and no support
400 TB, 400 million files, 120 clients
• Atempo Time Navigator
failed to replace NetBackup (unsatisfactory)
license expires in fall 2016
2 catalogs, 750 TB, 460 million files, 150 clients
• goal: a backup system that satisfies us in the long term
5. Veritas NetBackup
• antique GUI
command interface available
backup data saved as files (.tar) in our HSM
license fees for special features (NDMP, …)
difficult client installation
restoring on the client is not intuitively operable
• problem with offline clients -> retention expires
6. Atempo Time Navigator
campus license (all inclusive)
many features included (NDMP, NetworkShare, SQL, Exchange, VMware, …)
intuitive restore with GUI (on the client)
• GUI only (no really usable command interface) -> many clicks necessary
proprietary catalog (no SQL), max. 512 GB
no cross-restore without catalog administrator rights
no/bad access restrictions:
• all configured backups in the catalog are visible to everybody
• possible to restore world-readable files from other clients
• some world-writable files/dirs in the installation directory
no spooling -> only indirectly, with VTL and migration
7. backup system
• stable system
• stable configuration
• stable catalog
• useful command interface
• ACLs, rights management for users (self service)
• LTO6 library usable
• save files on HSM
• long-term or no license
• incremental forever or virtual full (for laptops/desktops)
clients
• Windows
• Linux
• MacOS
• Filer (Isilon, NetApp) via NDMP
• Novell Filer
• Exchange Server
• VMware Cluster
• DBMS (MySQL, MariaDB, Oracle, MS SQL)
Evaluation: Needs
8. Evaluation of Bareos - Pros
• open source -> no forced migration (no license validity period)!!
• stable configuration files, easy to save
• good and scriptable command interface
• uses a real SQL database!
• no catalog restrictions (size, …)
• possibility to develop our own SQL reports
• uses standards (e.g. mtx for library control)
• good file backup (Windows, Linux, Novell Filer)
• self service
• restricted console
• notifications after backup
9. Evaluation Bareos - Cons
• self service for users
• no usable GUI
• WebUI was not available yet
• no LDAP users!
• NDMP at file level had to be developed
• Exchange: no dedicated client
• VMware: not finished
• DBMS: nothing special
• NetDisk: problematic backup of Windows/NAS shares
10. Handle Cons
• SQL -> use dumps via 'run before' script
• Exchange -> use Windows Backup via 'run before' script
• file-level NDMP for Isilon -> funded development
• WebUI with LDAP users -> wait for it (hopefully soon!!)
• VMware -> temporarily use VMware's integrated backup
• NetDisk -> try to drop support for this
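The 'run before' script approach for DBMS backups can be sketched as follows: a small wrapper dumps the database to a file that the regular file backup then picks up. This is a minimal sketch, assuming MySQL/MariaDB via mysqldump; the function names and paths are illustrative, not the actual script used at FSU Jena:

```python
import subprocess
from pathlib import Path

def build_dump_command(db: str, user: str, out_file: str) -> list[str]:
    """Build a mysqldump command line writing a consistent dump.

    Database name, user and output path are illustrative; credentials
    would normally live in an option file, not on the command line.
    """
    return [
        "mysqldump",
        f"--user={user}",
        "--single-transaction",  # consistent InnoDB dump without table locks
        f"--result-file={out_file}",
        db,
    ]

def dump_before_backup(db: str, user: str, spool_dir: str) -> Path:
    """Run as 'run before' script so the dump file is included in the file backup."""
    out = Path(spool_dir) / f"{db}.sql"
    subprocess.run(build_dump_command(db, user, str(out)), check=True)
    return out
```

The same pattern applies to the Exchange case: the 'run before' script triggers Windows Backup and the resulting files are backed up as ordinary files.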
11. Plan
• use consulting for the initial installation
• thereby gather knowledge to run it on our own
• replace both backup systems by spring 2016
• use the existing LTO6 library
14. Implementation
• used consulting for the initial installation and setup
Daniel Neuberger (dass IT) -> many thanks to him
• installed Bareos-Dir (new Linux server)
• installed Bareos-SD on Solaris
• with a local virtual autochanger (on the HSM file system)
• configured some default Pools, JobDefs, FileSets, Schedules
• installed and tested NDMP clients
• migrated clients from Time Navigator to Bareos
15. Realization – Step 1
16. Realization – Step 2
17. Implementation 2
• some data migrated (freed space in the library)
-> created a new partition for Bareos
• installed Bareos-SD on Linux
• mapped the library partition in the SAN
• moved data / jobs to the new SD
• changed JobDefs to the new Pool
• … from time to time …
• decrease the Time Navigator partition (in the library)
• increase the Bareos partition (in the library)
-> very easy to increase tape slots in Bareos
19. Server Backup Strategy
• backup data to a local spool
• despooling to the LTO6 library (Linux SD)
• Pools by retention (3 months, 3 weeks, 1 year, …)
• special Pools (NDMP3M, NDMP3W)
• inefficient use of tape drives detected!
• first 2 drives, now 4 drives
• all backups in a Pool use only 1 drive -> additional drives stay unused
• a drive is already reserved while spooling -> the drive is blocked for other Pools
• 'Prefer Mounted Volumes = no' -> one Pool reserves all drives
20. Desktop and Laptop Backup
• no normal scheduling - clients may be online or offline
• 1 trigger job starts a script daily at 09:00, 12:00, 15:00
• select clients by schedule=VFS
• check whether the client is available (FD port 9102 open)
• start an incremental backup
• the normal schedule (VFS) starts a 'virtual full' twice per month
• save backups to the virtual autochanger (SD on Solaris) via HSM
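The availability check in the trigger script amounts to a plain TCP probe of the file daemon port. A minimal sketch, assuming the default FD port 9102; the helper names are illustrative, not the actual trigger script:

```python
import socket

FD_PORT = 9102  # default Bareos file daemon port

def client_reachable(host: str, port: int = FD_PORT, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the client's FD port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def select_clients_to_back_up(clients: list[str]) -> list[str]:
    """Keep only the desktops/laptops that are currently online."""
    return [c for c in clients if client_reachable(c)]
```

The trigger job would then start an incremental backup only for the clients that pass this check.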
21. Summary / Experiences
• works as designed
• stable (operation, database, configuration)
• likeable command interface
• own SQL reports possible
• the funded NDMP development turned out nicely
• support works, especially on critical bugs
• current status:
• 183 clients (244 jobs)
• >1,000 million files, ~1 PB saved
• 428 LTO6 tapes, 4 LTO6 drives
• 13 TB on HSM (client backups)
22. Specials
• self service -> status e-mail
• set e-mail address and notification mode in the client's description
• run-after script: contact_user.pl
collects data from the job and client:
show client=…
llist jobid=…
list joblog jobid=…
sends an e-mail to the user if necessary
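contact_user.pl is a Perl script; its data-collection step can be approximated in Python by piping the same bconsole commands listed above into bconsole. The helper names below are assumptions, not the original script:

```python
import subprocess

def bconsole_commands(client: str, jobid: int) -> str:
    """The same console commands the slide lists, one per line."""
    return "\n".join([
        f"show client={client}",
        f"llist jobid={jobid}",
        f"list joblog jobid={jobid}",
        "quit",
    ])

def collect_job_info(client: str, jobid: int) -> str:
    """Feed the commands to bconsole and capture its output for the e-mail."""
    result = subprocess.run(
        ["bconsole"],
        input=bconsole_commands(client, jobid),
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```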
23. Specials 2
• self service -> restricted consoles
• we use one configuration file per client
• set tags (in a comment) in client files for the allowed users
• # Admin: user1, user2
• script: update-user.pl
• generates a restricted console with a random password
• updates ACLs for all necessary Jobs, Clients, …
• enables LDAP login on the remote host
• copies the Bareos bconsole configuration to the remote host
-> goal: known LDAP users on the Bareos console
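The core of an update-user.pl-style script is reading the "# Admin:" tag from each client configuration file and emitting a per-user restricted Console resource with a random password. A sketch under assumptions: the directive names follow the Bareos director Console resource, but the ACL set shown is a simplified illustration, not the site's actual configuration:

```python
import re
import secrets

ADMIN_TAG = re.compile(r"#\s*Admin:\s*(.+)")

def parse_admins(client_conf_text: str) -> list[str]:
    """Extract user names from a '# Admin: user1, user2' comment tag."""
    m = ADMIN_TAG.search(client_conf_text)
    if not m:
        return []
    return [u.strip() for u in m.group(1).split(",") if u.strip()]

def restricted_console(user: str, client: str) -> str:
    """Render a per-user Console resource with a random password.

    The ClientACL/JobACL values are illustrative; the real script also
    updates ACLs for FileSets, Pools, etc.
    """
    password = secrets.token_urlsafe(24)
    return (
        f'Console {{\n'
        f'  Name = "{user}"\n'
        f'  Password = "{password}"\n'
        f'  ClientACL = "{client}"\n'
        f'  JobACL = "backup-{client}"\n'
        f'}}\n'
    )
```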
24. Specials 3
• status monitor via watch + 'status storage' + an awk script
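The awk part of such a monitor boils 'status storage' output down to one line per tape device. A rough Python equivalent; the device-line format assumed here is an approximation, as real output varies between Bareos versions:

```python
import re

# Assumed shape of a 'status storage' device line; real output may differ
# slightly between Bareos versions.
DEVICE_LINE = re.compile(
    r'Device\s+"(?P<device>[^"]+)".*?is\s+(?P<state>mounted|not open)'
)

def summarize_devices(status_output: str) -> dict[str, str]:
    """Map each tape device to its state, as the awk one-liner would."""
    summary = {}
    for line in status_output.splitlines():
        m = DEVICE_LINE.search(line)
        if m:
            summary[m.group("device")] = m.group("state")
    return summary
```

Wrapped in `watch`, this gives a continuously refreshing drive overview.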
25. Specials 4
• SQL report: expired volumes
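The expired-volumes report on the slide is a screenshot; a query along these lines could reconstruct it. The column names (Media.VolStatus, Media.LastWritten, Media.VolRetention in seconds) are assumptions based on the Bareos catalog schema, and the Python helper just mirrors the retention check:

```python
from datetime import datetime, timedelta

# Reconstruction of an expired-volumes report (PostgreSQL syntax);
# column names are assumptions, not the original report.
EXPIRED_VOLUMES_SQL = """
SELECT volumename, lastwritten
FROM media
WHERE volstatus = 'Full'
  AND lastwritten + volretention * interval '1 second' < now();
"""

def is_expired(last_written: datetime, retention_seconds: int,
               now: datetime) -> bool:
    """True if the volume's retention period has elapsed."""
    return last_written + timedelta(seconds=retention_seconds) < now
```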
26. Specials 5
28. List of Wishes
• efficient use of tape drives
• reserve on demand (at despooling), not at spooling
• reserve more than one drive per Pool
• list command respects ACL rights -> fixed in version 16.2
• useful WebUI with LDAP users
• parallel despooling and spooling for one job
-> use of spooling extends backup time
• new command: audit volume=…
• autoupdate for the Windows client?