The document describes the life cycle of a grid computing job in three stages:
1. Users authenticate and authorize themselves on the grid using certificates and proxies.
2. They submit jobs to the grid resource management system, which schedules and runs the jobs on available computing elements.
3. Users can check the status of their jobs and retrieve the output files once the jobs are completed.
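The three stages above can be sketched as a tiny state machine. This is illustrative Python only; the class and method names are hypothetical, not a real grid middleware API:

```python
# Hypothetical sketch of the three-stage grid job life cycle described above.
# All names here are illustrative, not a real grid middleware API.

class GridJob:
    def __init__(self, user):
        self.user = user
        self.state = "unauthenticated"
        self.output = None

    def authenticate(self, proxy_valid):
        # Stage 1: certificate/proxy check before any grid interaction.
        if not proxy_valid:
            raise PermissionError("invalid proxy certificate")
        self.state = "submitted"

    def run(self):
        # Stage 2: the resource management system schedules the job on an
        # available computing element and runs it.
        assert self.state == "submitted"
        self.state = "running"
        self.output = "result.dat"
        self.state = "done"

    def fetch_output(self):
        # Stage 3: output retrieval is only possible once the job completed.
        if self.state != "done":
            return None
        return self.output

job = GridJob("alice")
job.authenticate(proxy_valid=True)
job.run()
print(job.fetch_output())  # -> result.dat
```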
The document appears to be log entries from a Windows installation. It records the date/time and file path each time a new section of the installation process begins or ends. Key events include calculating registry size, registering DLL files like rsaenh.dll and dssenh.dll, loading service pack files, copying system files from the E:\ drive to C:\Windows, and overall progress of the Windows installation process.
IBM Notes Traveler Administration and Log Troubleshooting tips - Part 2 (jayeshpar2006)
This document summarizes an IBM Notes Traveler Administration and Log Troubleshooting session. It discusses understanding Traveler activity, error, and usage logs to help troubleshoot issues. Specifically, it demonstrates how to analyze logs to resolve two cases: 1) Users complaining of slow synchronization and errors connecting to the server, and 2) The Traveler server keeps going into red status. For the first case, it shows checking system dumps, CPU/memory usage, and activity logs to identify that high database connections were causing constrained server resources. For the second case, it briefly mentions another example of analyzing logs to resolve slow mail issues.
The document provides best practices for handling performance issues in an Odoo deployment. It recommends gathering deployment information, such as hardware specs, number of machines, and integration with web services. It also suggests monitoring tools to analyze system performance and important log details like CPU time, memory limits, and request processing times. The document further discusses optimizing PostgreSQL settings, using tools like pg_activity, pg_stat_statements, and pgbadger to analyze database queries and performance. It emphasizes reproducing issues, profiling code with tools like the Odoo profiler, and fixing problems in an iterative process.
This document provides information about different types of log formats and log analysis. It discusses common log formats like the Common Log Format, Extended W3C Log Format, and Squid Log Format. It also covers multi-line logs, Iptables logs, and tools for log analysis like Splunk and OSSEC. The key details provided include sample log entries for each format and basic configuration steps for Splunk after installation.
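As an illustration of the first of those formats, here is a sample Common Log Format entry and one way to split it into fields; the sample line and regex are this sketch's own, not taken from the document:

```python
import re

# A sample Common Log Format (CLF) entry. The fields are:
# host ident authuser [timestamp] "request" status bytes
line = '127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326'

# One way to split a CLF line into named fields with a regular expression.
CLF = re.compile(
    r'(?P<host>\S+) (?P<ident>\S+) (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<size>\d+|-)'
)

entry = CLF.match(line).groupdict()
print(entry["host"], entry["status"], entry["request"])
```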
Reproducible Computational Pipelines with Docker and Nextflow (inside-BigData.com)
This document summarizes a presentation about using Docker and Nextflow to create reproducible computational pipelines. It identifies reproducibility and complexity as the two major challenges in computational biology. Containers like Docker help address these challenges by providing portable, standardized environments. Nextflow is introduced as a workflow framework that allows pipelines to run across platforms and isolates dependencies in containers, enabling fast prototyping. Examples are given of using Nextflow with Docker to run pipelines on different systems, such as HPC clusters, in a scalable and reproducible way.
Design of traffic isolation by using flow-based tunneling (soichi shigeta)
This document proposes a design to isolate production and mirror traffic in the underlay network using flow-based tunneling. This reduces the number of tunnels needed and avoids managing tunnels in the TaaS agent. The key aspects of the design are using "flow" as the remote IP when creating VXLAN ports, modifying Neutron and TaaS flow tables to support flooding and learning traffic, and registering local IP addresses in the database to discover remote IPs for mirror traffic. The current design only supports IPv4.
YANG model for NETCONF Event Notifications (ThomasGraf42)
This document proposes a YANG model to define the structure of NETCONF event notifications. It aims to update RFC5277 by specifying the notification structure in YANG. The draft is currently in revision 03, with editorial changes made based on feedback from the NETCONF working group. It defines the notification structure, provides examples in XML, JSON and CBOR encodings, and discusses next steps like requesting WG adoption.
Project in C++: student report card management (an automated software system written in the C++ programming language for a student performance management system, used to store various records about students and book details)
This document provides an overview of basic commands and functionality in the ONOS network operating system. It demonstrates how to set up an ONOS cluster, view network topology and flows using CLI commands, and activate applications like a reactive forwarding app to enable connectivity across the Mininet topology.
Method and apparatus for intelligent management of a network element (Tal Lavian, Ph.D.)
A network element (NE) includes an intelligent interface (II) with its own operating environment, rendering it active during the NE boot process, and with separate intelligence allowing it to take actions on the NE before, during, and after the boot process. The combination of independent operation and increased intelligence provides enhanced management opportunities, enabling the NE to be controlled throughout the boot process and after its completion. For example, files may be uploaded to the NE before or during the boot process to restart the NE from a new software image. The II allows this downloading process to occur in parallel on multiple NEs from a centralized storage resource. Diagnostic checks may be run on the NE, and files, MIB information, and other data may be transmitted from the II to enable a network manager to manage the NE more effectively.
https://www.google.com/patents/US7734748?dq=US+7734748&hl=en&sa=X&ei=5XNSVL_xDqTNmwXd44HoDA&ved=0CB8Q6AEwAA
Kernel Recipes 2017 - Performance analysis Superpowers with Linux BPF - Brend... (Anne Nicolas)
The in-kernel Berkeley Packet Filter (BPF) has been enhanced in recent kernels to do much more than just filtering packets. It can now run user-defined programs on events, such as on tracepoints, kprobes, uprobes, and perf_events, allowing advanced performance analysis tools to be created. These can be used in production as the BPF virtual machine is sandboxed and will reject unsafe code, and are already in use at Netflix.
Beginning with the bpf() syscall in 3.18, enhancements have been added in many kernel versions since, with major features for BPF analysis landing in Linux 4.1, 4.4, 4.7, and 4.9. Specific capabilities these provide include custom in-kernel summaries of metrics, custom latency measurements, and frequency counting kernel and user stack traces on events. One interesting case involves saving stack traces on wake up events, and associating them with the blocked stack trace: so that we can see the blocking stack trace and the waker together, merged in kernel by a BPF program (that particular example is in the kernel as samples/bpf/offwaketime).
This talk will discuss the new BPF capabilities for performance analysis and debugging, and demonstrate the new open source tools that have been developed to use it, many of which are in the Linux Foundation iovisor bcc (BPF Compiler Collection) project. These include tools to analyze the CPU scheduler, TCP performance, file system performance, block I/O, and more.
Brendan Gregg, Netflix
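The off-wake-time merge described above can be pictured with a toy (non-BPF) sketch: the blocked thread's stack and the waker's stack are joined into one folded record, and blocked time is summed per record. The stack contents and times below are invented for illustration:

```python
# Toy illustration (plain Python, no BPF) of the stack-merging idea behind
# samples/bpf/offwaketime: the sleeper's blocked stack and the waker's stack
# are combined into a single folded record, keyed for aggregation.
from collections import defaultdict

def merge(blocked_stack, waker_stack):
    # Folded flame-graph convention: frames joined by ';', with a '--'
    # separator between the blocked side and the (reversed) waker side.
    return ";".join(blocked_stack) + ";--;" + ";".join(reversed(waker_stack))

totals = defaultdict(int)

# Hypothetical samples: (blocked stack, waker stack, blocked microseconds).
samples = [
    (["main", "read", "io_wait"], ["irq", "disk_done"], 1200),
    (["main", "read", "io_wait"], ["irq", "disk_done"], 800),
]
for blocked, waker, us in samples:
    totals[merge(blocked, waker)] += us

for key, us in totals.items():
    print(key, us)
```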
Kernel Recipes 2017: Performance Analysis with BPF (Brendan Gregg)
Talk by Brendan Gregg at Kernel Recipes 2017 (Paris): "The in-kernel Berkeley Packet Filter (BPF) has been enhanced in recent kernels to do much more than just filtering packets. It can now run user-defined programs on events, such as on tracepoints, kprobes, uprobes, and perf_events, allowing advanced performance analysis tools to be created. These can be used in production as the BPF virtual machine is sandboxed and will reject unsafe code, and are already in use at Netflix.
Beginning with the bpf() syscall in 3.18, enhancements have been added in many kernel versions since, with major features for BPF analysis landing in Linux 4.1, 4.4, 4.7, and 4.9. Specific capabilities these provide include custom in-kernel summaries of metrics, custom latency measurements, and frequency counting kernel and user stack traces on events. One interesting case involves saving stack traces on wake up events, and associating them with the blocked stack trace: so that we can see the blocking stack trace and the waker together, merged in kernel by a BPF program (that particular example is in the kernel as samples/bpf/offwaketime).
This talk will discuss the new BPF capabilities for performance analysis and debugging, and demonstrate the new open source tools that have been developed to use it, many of which are in the Linux Foundation iovisor bcc (BPF Compiler Collection) project. These include tools to analyze the CPU scheduler, TCP performance, file system performance, block I/O, and more."
The document discusses how to use the Choice component in Mule applications. The Choice component allows conditional routing of messages based on message properties, similar to if/else statements. An example Mule flow is provided that sets a session property, logs it, and then uses the Choice component to route the message based on checking the value of that property, logging different messages depending on the result.
This document discusses ways to diagnose performance issues in PostgreSQL. It begins with an introduction to common system resources like CPU, memory, disks, and network that can cause bottlenecks. It then covers specific PostgreSQL internal processes like locks that can lead to performance problems. The document provides examples of using tools like pg_stat_statements, gdb, perf, SystemTap, and trace files to analyze issues further. It emphasizes that performance problems can have complex causes and provides recommendations for improving monitoring and diagnostics.
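A typical starting point with pg_stat_statements is ranking statements by total execution time. Here is a minimal sketch; note as an assumption that column names vary by extension version (`total_exec_time` from PostgreSQL 13, `total_time` before that), so check against your installation:

```python
# Sketch of the kind of query one might run against pg_stat_statements to
# find the most expensive statements. Column names vary by extension
# version (total_exec_time in PostgreSQL 13+, total_time earlier).
TOP_QUERIES = """
SELECT query,
       calls,
       total_exec_time,
       total_exec_time / calls AS mean_ms
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
"""

def top_queries(cursor):
    # 'cursor' is any DB-API cursor connected to a database where the
    # pg_stat_statements extension is installed and enabled.
    cursor.execute(TOP_QUERIES)
    return cursor.fetchall()
```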
How to Avoid Common Mistakes When Using Reactor Netty (VMware Tanzu)
The document discusses common mistakes when using Reactor Netty, including logging, memory leaks, timeouts, connection-closed issues, and connection pools. It provides examples of logging output that show a request-response lifecycle and the handling of multiple concurrent connections. The presentation covers configuring logging, avoiding object retention, setting response timeouts, handling closed connections, and sizing connection pools properly.
OSSNA 2017 Performance Analysis Superpowers with Linux BPF (Brendan Gregg)
Talk by Brendan Gregg for OSSNA 2017. "Advanced performance observability and debugging have arrived built into the Linux 4.x series, thanks to enhancements to Berkeley Packet Filter (BPF, or eBPF) and the repurposing of its sandboxed virtual machine to provide programmatic capabilities to system tracing. Netflix has been investigating its use for new observability tools, monitoring, security uses, and more. This talk will dive deep on these new tracing, observability, and debugging capabilities, which sooner or later will be available to everyone who uses Linux. Whether you’re doing analysis over an ssh session, or via a monitoring GUI, BPF can be used to provide an efficient, custom, and deep level of detail into system and application performance.
This talk will also demonstrate the new open source tools that have been developed, which make use of kernel- and user-level dynamic tracing (kprobes and uprobes), and kernel- and user-level static tracing (tracepoints). These tools provide new insights for file system and storage performance, CPU scheduler performance, TCP performance, and a whole lot more. This is a major turning point for Linux systems engineering, as custom advanced performance instrumentation can be used safely in production environments, powering a new generation of tools and visualizations."
IRJET - Implementation and Simulation of Failsafe Network Architecture (IRJET Journal)
This document describes the implementation and simulation of a fail-safe network architecture using OMNeT++. It discusses ring and star topologies and proposes a hybrid StRing topology. It outlines building the network architectures in OMNeT++, including defining NED files to describe the topology, configuring ini files, declaring packages, and writing C++ programs. It provides an example of simulating a ring topology in OMNeT++ by defining the NED file to connect computers in a ring, configuring packages and ini files, and providing C++ source code for the computer modules.
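The ring wiring that such a NED file expresses can be sketched abstractly (plain Python, not OMNeT++ NED): each computer i connects to its neighbour (i+1) mod n.

```python
# Abstract sketch of the ring topology described above: computer i is
# linked to computer (i+1) mod n, closing the ring at the last node.
def ring_links(n):
    return [(i, (i + 1) % n) for i in range(n)]

print(ring_links(4))  # -> [(0, 1), (1, 2), (2, 3), (3, 0)]
```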
The document describes a biolatency tool that traces block device I/O latency using eBPF. It discusses how the tool was originally written in the bcc framework using C/BPF, but has since been rewritten in the bpftrace framework using a simpler one-liner script. It provides examples of the bcc and bpftrace implementations of biolatency.
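The aggregation biolatency performs can be illustrated without eBPF: each I/O latency is bucketed into a power-of-two histogram slot. The latency values below are invented for illustration:

```python
import math

# Toy reimplementation (plain Python, no eBPF) of the aggregation that
# biolatency performs in-kernel: a power-of-two latency histogram.
def log2_histogram(latencies_us):
    hist = {}
    for us in latencies_us:
        # Slot k counts latencies in the range [2**k, 2**(k+1)) microseconds.
        slot = max(0, int(math.floor(math.log2(us))))
        hist[slot] = hist.get(slot, 0) + 1
    return hist

# Hypothetical block I/O latencies in microseconds.
h = log2_histogram([3, 5, 6, 18, 700])
print(h)  # -> {1: 1, 2: 2, 4: 1, 9: 1}
```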
This document appears to be a computer science project report for a sales management system created in C++. It includes a certificate page, table of contents, and sections on acknowledgements, introduction to C++ and the project, program flow, source code, and screenshots. The source code provided implements classes for menu, products, and accounts to manage functions for the main menu, product items, and billing.
Container: is it safe enough to run your application? (Aleksey Zalesov)
In this talk I explore the technologies that empower containerisation and look at several cases in which a container was able to break the walls around it. The talk was given at LinuxPiter on Nov 21, 2015.
Computing in the cloud gives you flexible, easy access to computing and data resources that you would otherwise have to host yourself. SURFsara runs the HPC Cloud, providing an “Infrastructure as a Service” (IaaS) model. This workshop provides a general introduction to cloud computing, teaches HPC Cloud characteristics, and shows how to use it hands-on.
Third edition of the SURF Research Boot Camp at the TU/e. About eighty researchers and research supporters came together in the Auditorium of the Eindhoven University of Technology. Attendees chose among four tracks, each containing three hands-on courses, covering UNIX, HPC Cloud, cluster computing, Big Data, visualization, data publishing with 4TU, and research data management with iRODS.
This document discusses several topics related to protein conformational plasticity and aggregation including:
1) Probing protein aggregation using pulsed field gradient NMR and presenting NMR structure data of d-toxin.
2) Simulating a lipid bilayer and d-toxin interaction using molecular dynamics.
3) Modeling how secondary structure of d-toxin changes with different solvents.
4) Describing amyloid fibril formation by transthyretin protein.
The document provides an overview of grid computing and the grid architecture. It discusses how grid computing enables global sharing of resources through coordinated sharing within virtual organizations. The grid architecture connects distributed resources through middleware and lets users interact with the grid without needing to know where jobs run or where data is stored. It also discusses the authentication and authorization systems, information systems, workload management, data management, accounting, and monitoring services that make up the core of the grid.
This document discusses services that the EGI provides to support the science gateway community. It outlines a dedicated webpage for science gateways, an applications database to register gateways and related software, a training marketplace for events and materials, webinars to promote and teach gateway topics, and a virtual team framework to bring experts together around common goals. The services are aimed at creating a single point of access for gateway information, promoting community engagement and training, and supporting gateway developers.
EGI provides several services to support science gateway developers:
- A dedicated science gateway webpage serves as a single access point for information on gateways.
- The Applications Database stores information on science applications and gateways and allows searching and filtering.
- The Requirements Tracker allows community members to submit and view feature requests.
- The Training Marketplace promotes gateway-related training events and materials.
- A common glossary of terms aims to improve understanding across the EGI community.
- Policy documents establish operational guidelines for gateways using EGI resources.
The document discusses methods for collecting statistics on the number of users in EGI. It describes problems encountered like outdated VOMS information and unstable VOMS servers. Solutions included contacting VO managers, identifying issues with VOMS servers, and developing a framework to track "robot users" who access resources through applications. Recent statistics show 202 active VOs with 20,706 users. Open questions remain around monitoring science gateway users and defining EGI users accessing non-certified national resources.
European Grid Infrastructure aims to provide easy and open access for researchers from all disciplines to digital services, data, knowledge and expertise to facilitate collaboration and excellent research. Its mission is to support researchers with reliable and innovative ICT services needed to accelerate excellent science. The document provides a link for researchers to get started with the CLM Assembly 2014.
This document provides an introduction to the European Grid Infrastructure (EGI). It explains that EGI is a federated computing infrastructure that provides access to computing resources and data storage across over 30 European countries. EGI supports researchers from many disciplines through reliable ICT services and currently connects over 340 resource centers that provide over 435,000 CPU cores, 190 petabytes of disk storage, and 180 petabytes of tape storage. Key users of EGI include high energy physics, astronomy, life sciences, and earth sciences research communities.
The AppDB is a database that catalogs software applications and related information for use in e-infrastructures. It provides features like advanced search capabilities, metadata tagging, and dissemination tools to improve the quality, retrieval and sharing of application information. The AppDB sees over 25,000 visits per year and contains information on hundreds of software entries. It aims to integrate with external systems through its API and widget capabilities.
This document discusses EGI-InSPIRE, a project that provides a sustainable pan-European grid infrastructure for researchers. It outlines EGI's objectives to support current user communities and attract new ones. It describes the EGI resource infrastructure and collaboration model involving resource centers, national grid initiatives, and virtual research communities. It also discusses some technical support services EGI provides to help coordinate user requirements and applications, training events, and virtual organization monitoring tools.
3. The Grid
“Coordinated resource sharing and problem solving in dynamic,
multi-institutional virtual organizations.”
Foster, I. et al., Int. J. High Perform. Comput. Appl. (2001) 15(3)
4. Why do scientists need the Grid?
- High-energy physics: 15 PB/year (15 PB ≈ 20×10⁶ CDs)
- Genome projects, data mining
- Protein folding, protein structure, …
5. Enabling Grids for E-science
GStat (Jan 2010): http://goc.grid.sinica.edu.tw/gstat/
Infrastructure
- 317 sites
- 58 countries
- ~140K CPUs, available 24/7
- ~69 PB disk
Users
- 182 registered VOs
- ~12K registered users
- >300K jobs/day
6. Registered EGEE Virtual Organizations
Application domain     Active VOs   Users
High-energy Physics    41           4737
Infrastructures        28           2365
Life Sciences          10           519
...                    ...          ...
Total                  182          11908
http://cic.gridops.org/index.php?section=home&page=volist
VO name    Scope    Registered Users (20090210)   Registered Users (20100125)
biomed     Global   223                           257
enmr.eu    Global   54                            155
15. Submit a job
[nuno@ui-enmr bcbr]$ glite-wms-job-submit -a -o jid hello.jdl
Connecting to the service https://wms-enmr.chem.uu.nl:7443/glite_wms_wmproxy_server
====================== glite-wms-job-submit Success ======================
The job has been successfully submitted to the WMProxy
Your job identifier is:
https://lb-enmr.chem.uu.nl:9000/gOtqQuG4ebqpz3m5z8_2Eg
The job identifier has been saved in the following file:
/home/nuno/grid/hello/bcbr/jid
==========================================================================
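The hello.jdl file submitted above is never shown on the slides; a minimal JDL of the following shape would produce the hello.out and hello.err sandbox files retrieved later. The attribute names are standard gLite JDL, but the executable and its contents are assumptions, not taken from the demo:

```jdl
# Hypothetical hello.jdl (values assumed, not from the original demo)
Executable    = "/bin/sh";
Arguments     = "hello.sh";
InputSandbox  = {"hello.sh"};
StdOutput     = "hello.out";
StdError      = "hello.err";
OutputSandbox = {"hello.out", "hello.err"};
```

The OutputSandbox list names the files that glite-wms-job-output later transfers back to the user interface machine.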
16. Query Job Status
[nuno@ui-enmr bcbr]$ glite-wms-job-status -i jid
*************************************************************
BOOKKEEPING INFORMATION:
Status info for the Job : https://lb-enmr.chem.uu.nl:9000/gOtqQuG4ebqpz3m5z8_2Eg
Current Status: Scheduled
Status Reason: Job successfully submitted to Globus
Destination: pbs-enmr.cerm.unifi.it:2119/jobmanager-lcgpbs-verylong
Submitted: Tue Jan 26 16:26:07 2010 CET
*************************************************************
[nuno@ui-enmr bcbr]$ glite-wms-job-status -i jid
*************************************************************
BOOKKEEPING INFORMATION:
Status info for the Job : https://lb-enmr.chem.uu.nl:9000/gOtqQuG4ebqpz3m5z8_2Eg
Current Status: Done (Success)
Exit code: 0
Status Reason: Job terminated successfully
Destination: pbs-enmr.cerm.unifi.it:2119/jobmanager-lcgpbs-verylong
Submitted: Tue Jan 26 16:26:07 2010 CET
*************************************************************
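The transcript polls glite-wms-job-status by hand until the job reaches Done. A small, hypothetical shell helper can automate that by extracting the "Current Status" line from the command's saved output; the gLite command itself is not invoked here, only captured output of the form shown above is parsed:

```shell
# Hypothetical helper (not part of gLite): pull "Current Status" out of
# saved glite-wms-job-status output, e.g. to poll in a loop until the
# job reports "Done (Success)" or "Aborted".
# Sample bookkeeping output, captured as in the transcript above:
cat > status.txt <<'EOF'
BOOKKEEPING INFORMATION:
Current Status: Done (Success)
Exit code: 0
EOF
# Strip the label, keep the status value:
status=$(sed -n 's/^Current Status:[[:space:]]*//p' status.txt)
echo "$status"
```

In a real polling loop one would replace the `cat` with `glite-wms-job-status -i jid > status.txt` and sleep between iterations.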
17. Retrieve Job Output
[nuno@ui-enmr bcbr]$ glite-wms-job-output -i jid --dir ./out
Connecting to the service https://wms-enmr.chem.uu.nl:7443/glite_wms_wmproxy_server
================================================================================
JOB GET OUTPUT OUTCOME
Output sandbox files for the job:
https://lb-enmr.chem.uu.nl:9000/gOtqQuG4ebqpz3m5z8_2Eg
have been successfully retrieved and stored in the directory:
/home/nuno/grid/hello/bcbr/out
================================================================================
[nuno@ui-enmr bcbr]$ ll ./out/
total 4
-rw-r--r-- 1 nuno users 0 Jan 26 17:31 hello.err
-rw-r--r-- 1 nuno users 48 Jan 26 17:31 hello.out
[nuno@ui-enmr bcbr]$ more ./out/hello.out
Hello Grid! I was here : wn3-enmr.cerm.unifi.it