This document discusses how Web 2.0 technologies can be used to enable real-time collaboration on scientific simulations. It describes how simulations done with the Cactus framework can integrate technologies like Twitter, Flickr, blogs and wikis to share status updates, images and documentation. This allows simulations to automatically post messages and images to social media sites to engage collaboration communities during long running simulations.
Cactus is an open source problem solving environment for scientists and engineers to develop modular, parallel simulation codes. It originated in the academic research community to simulate Einstein's equations for general relativity. Cactus uses a central "flesh" core connected to application-specific "thorn" modules through an extensible interface. Thorns implement scientific applications, while other thorns provide computational capabilities like parallel I/O. Cactus runs on many architectures from laptops to supercomputers. Large collaborations use Cactus to simulate astrophysical phenomena like black hole collisions through distributed, parallel computation across multiple institutions.
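The flesh/thorn split described above can be illustrated with a minimal plugin-registry sketch in Python. All names here are hypothetical, for illustration only, and are not actual Cactus APIs:

```python
# Illustrative sketch of a flesh/thorn-style modular design: a small core
# ("flesh") schedules pluggable modules ("thorns") through a common interface.
# Names are hypothetical, not Cactus APIs.

class Flesh:
    """Minimal core that wires thorns together through a shared state dict."""
    def __init__(self):
        self.thorns = []

    def register(self, thorn):
        self.thorns.append(thorn)

    def evolve(self, state, steps):
        for _ in range(steps):
            for thorn in self.thorns:  # each thorn sees and updates shared state
                thorn(state)
        return state

def wave_step(state):
    """'Application' thorn: a trivial stand-in for a physics update."""
    state["t"] = state.get("t", 0) + 1

def io_thorn(state):
    """'Computational capability' thorn: records output each step, like parallel I/O."""
    state.setdefault("log", []).append(state["t"])

flesh = Flesh()
flesh.register(wave_step)
flesh.register(io_thorn)
result = flesh.evolve({}, steps=3)
print(result["t"], result["log"])  # 3 [1, 2, 3]
```

The point of the design is that application thorns and infrastructure thorns never call each other directly; the flesh mediates everything through the shared interface, so modules can be swapped without touching the core.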
Cytoscape Tutorial Session 1 at UT-KBRIN Bioinformatics Summit 2014 (4/11/2014), Keiichiro Ono
This document outlines a tutorial on biological data analysis and visualization using Cytoscape. The tutorial covers basic concepts like networks and tables in Cytoscape, data import, network analysis features, and visualization techniques. It discusses loading sample network data, calculating network statistics, filtering networks, basic search functionality, and applying visual styles. The tutorial is intended to provide a practical introduction to Cytoscape's core features through examples and demos.
Presentation slides for the SDCSB Cytoscape Workshop on 5/19/2016. The presentation covers the current status of the Cytoscape project and an overview of the Cytoscape ecosystem. It briefly mentions the Cytoscape Cyberinfrastructure.
Avogadro: Open Source Libraries and Application for Computational Chemistry, Marcus Hanwell
In order to tackle upcoming molecular simulation and visualization challenges in key areas of materials science, chemistry and biology, it is necessary to move beyond fixed software applications. The Avogadro project is in the final stages of an ambitious rewrite of its core data structures, algorithms and visualization capabilities. The project began as a grassroots effort to address deficiencies observed by many of the early contributors in existing commercial and open source solutions. Avogadro is now a robust, flexible solution that can tie into and harness the power of VTK for additional analysis and visualization capabilities.
Open Source Visualization of Scientific Data, Marcus Hanwell
The document discusses open source visualization of scientific data using tools like the Visualization Toolkit (VTK) and ParaView. It describes how these tools are being used to improve and open up computational chemistry workflows by enabling interactive 3D visualization of molecular structures, properties, and simulation results. Future directions include improved standard representations for different data types, linked multi-view visualization, mobile/tablet apps, and integrating with database and informatics tools.
Open Chemistry: Input Preparation, Data Visualization & Analysis, Marcus Hanwell
The document outlines an open-source software development project called Open Chemistry that aims to integrate desktop chemistry applications, high-performance computing resources, and database/informatics resources. It describes several software applications being developed as part of Open Chemistry, including Avogadro 2 for structure editing and visualization, MoleQueue for running computational jobs on local and remote systems, and MongoChem for storing and searching chemistry data. The goal of Open Chemistry is to advance computational chemistry tools through open-source development and tight integration of related applications.
Avogadro is being rewritten and architected to put semantic chemical meaning at the center of its internal data structures in order to fully support data-centric workflows. Computational and experimental chemistry both suffer when semantic meaning is lost; through the use of expressive formats such as CML, along with lightweight data-exchange formats such as JSON, workflows that previously demanded manual intervention to retain semantic meaning can be used. Integration with projects like JUMBO and Open Babel when conversion is required, coupled with codes such as NWChem where direct support for CML is being added, allow for much richer storage, analysis, and indexing of data. As web-based data sources add more semantic structure to their data, Avogadro will take advantage of those resources.
Cyberinfrastructure and Applications Overview: Howard University June 22, marpierc
1) Cyberinfrastructure refers to the combination of computing systems, data storage systems, advanced instruments and data repositories, visualization environments, and people that enable knowledge discovery through integrated multi-scale simulations and analyses.
2) Cloud computing, multicore processors, and Web 2.0 tools are changing the landscape of cyberinfrastructure by providing new approaches to distributed computing and data sharing that emphasize usability, collaboration, and accessibility.
3) Scientific applications are increasingly data-intensive, requiring high-performance computing resources to analyze large datasets from sources like gene sequencers, telescopes, sensors, and web crawlers.
PDE2011 pythonOCC project status and plans, Thomas Paviot
Slideshow presented at the latest NASA/ESA Product Data Exchange conference. It covers pythonOCC project status and midterm plans: a WebGL renderer and a high-level API over the low-level built-in data model.
The document summarizes a tutorial presentation about the Open Grid Computing Environments (OGCE) software tools for building science gateways. The OGCE tools include a gadget container, workflow composer called XBaya, and application factory service called GFac. The presentation demonstrates how these tools can be used to build portals and compose workflows to access resources like the TeraGrid.
Introduction to Xamarin Mobile Platform, Dominik Minta
The document discusses using Xamarin to build multi-platform mobile applications for Android, iOS and Windows Phone. It outlines some of the benefits of using Xamarin such as writing code once that runs on multiple platforms, access to native APIs, and using C# and .NET. It also provides examples of applications built with Xamarin and discusses concepts like MVVM and Xamarin Forms. Some disadvantages mentioned are needing familiarity with multiple technologies and larger app sizes.
Presentation of the StratusLab cloud distribution at FOSDEM'13. It summarizes the current cloud services and their implementations. It concludes with the roadmap for upcoming releases.
This document provides an overview of web technology and its applications. It begins with a brief history of the internet and protocols like TCP/IP, DNS, and HTTP. It then explains the basic client-server model used by the web and defines common web-related terms like browsers, servers, URLs, and MIME types. The document concludes with a more detailed explanation of the hypertext transfer protocol (HTTP) and includes references for additional reading.
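The URL and MIME-type concepts the overview defines can be demonstrated with Python's standard library alone; a minimal sketch, with no network access needed (the example URL is made up):

```python
# Taking a URL apart into the pieces the client-server model uses, then
# guessing the MIME type a server would attach to the response body.
from urllib.parse import urlparse
import mimetypes

url = "http://example.org:8080/docs/index.html?lang=en"
parts = urlparse(url)
print(parts.scheme)    # "http"        -- protocol the browser will speak
print(parts.hostname)  # "example.org" -- name resolved to an address via DNS
print(parts.port)      # 8080          -- port the server listens on
print(parts.path)      # "/docs/index.html" -- resource requested over HTTP

# The server labels the response with a MIME type so the browser knows how
# to render it; mimetypes guesses the label from the file extension.
mime, _ = mimetypes.guess_type(parts.path)
print(mime)            # "text/html"
```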
The document discusses grids and their potential use for data mining applications in Earth science. Some key points:
- Grids can connect distributed computing and data resources to enable large-scale applications and collaboration.
- The Grid Miner application was developed to mine satellite data on NASA's Information Power Grid as a demonstration.
- Grids could help couple satellite data archives to computational resources, allowing users to process large datasets.
- For this to be realized, data archives need to be connected to grids and tools developed to enable scientists to access and analyze data.
This document provides an overview and introduction to grid computing concepts. It discusses the benefits of grid computing such as exploiting underutilized resources and enabling collaboration. It also describes some key computational grid projects including a national fusion grid pilot project. The document outlines the layered architecture of grid systems and references some foundational projects and standards like Globus Toolkit and Global Grid Forum. Finally, it introduces the concepts of OGSA and OGSI which provide standard interfaces and behaviors for distributed system management in grid environments.
Jacques Magen - Future Internet Research and Experimentation (FIRE): Successf..., FIA2010
FIRE is a European initiative to support experimentally-driven research for future internet technologies through large-scale experimental facilities. It has two dimensions: long-term visionary research, and building testbeds to support both medium- and long-term research. Existing FIRE facilities have been used for experiments in areas like overlay routing, cognitive radio, OpenFlow, IMS, services on clouds and grids, and the Internet of Things. New FIRE facilities provide unique opportunities for experiments involving dynamic service orchestration, wireless sensor networks, and software-defined networking.
The Why and How of HPC-Cloud Hybrids with OpenStack - Lev Lafayette, Universi..., OpenStack
Audience Level
Intermediate
Synopsis
High performance computing and cloud computing have traditionally been seen as separate solutions to separate problems, addressing performance and flexibility respectively. In a diverse research environment, however, both sets of compute requirements can occur. Combining both requirements into a single unified system brings administrative benefits and provides opportunities for incremental expansion.
The deployment of the Spartan cloud-HPC hybrid system at the University of Melbourne last year is an example of such a design. Despite its small size, it has attracted international attention due to its design features. This presentation, in addition to providing a grounding on why one would wish to build an HPC-cloud hybrid system and the results of the deployment, provides a complete technical overview of the design from the ground up, as well as problems encountered and planned future developments.
Speaker Bio
Lev Lafayette is the HPC and Training Officer at the University of Melbourne. Prior to that he worked at the Victorian Partnership for Advanced Computing for several years in a similar role.
Eclipse Hawk provides scalable querying of models by indexing them into graph databases. It addresses challenges of collaborative modeling on large systems by distributed teams. The Hawk API is designed for flexibility, performance, and scalability through features like multiple communication styles, efficient encodings, and paged results.
Hopsworks - ExtremeEarth Open Workshop, ExtremeEarth
This document summarizes a presentation about the three-year ExtremeEarth project. It discusses the ExtremeEarth platform architecture, which brings together Earth observation data access from DIASes, end-user products from TEPs, and scalable AI capabilities from Hopsworks. The architecture provides infrastructure on Creodias and uses Hopsworks to develop end-to-end machine learning pipelines for processing petabytes of Earth observation data. Results have been exploited through additional research projects and a product offering on Hopsworks.ai. The project has also led to several publications and blog posts about applying AI to Earth observation data.
Sambhab Mohapatra is seeking a full time position in system software, embedded software, firmware, security or IoT. He has a Master's degree in Computer Engineering from ASU and a Bachelor's degree from BITS Pilani in India. His experience includes projects in device drivers, file systems, security, distributed systems, algorithms, circuits, and internships in verification and software development.
StratusLab: An IaaS Cloud Distribution Focusing on Simplicity, stratuslab
The StratusLab collaboration provides an Infrastructure as a Service (IaaS) cloud distribution, allowing data centres to install private, community, or public clouds for their users. The distribution focuses on simplicity: allowing quick, straightforward installation on commodity hardware as well as easy access by scientists and engineers. Having run a cloud service based on StratusLab for several years, we will present issues that have arisen and provide examples of scientific use of the infrastructure. This will be followed by a live demonstration of the main features of the StratusLab cloud distribution.
(Presentation given at Summer School on Cloud Computing, Evry, France, August 29, 2013.)
BlogMyData is a virtual research environment for collaboratively visualizing environmental data. It allows researchers to detect features in models, diagnose problems, preview data before downloading, make sense of large datasets, and communicate complex concepts. Existing scientific visualization software requires expert knowledge and has limited interoperability. BlogMyData addresses this by providing a web-based blogging tool for scientists to discuss, collaborate, and record discussions as part of the research record. It utilizes open authentication and spatial features from existing frameworks to overlay blogged data on other visualization clients and offer customized geospatial feeds of blog entries. The prototype received positive feedback and future features may include supporting more data types.
Saiyam Kohli provides his contact information and an overview of his education and skills. He has a Master of Science in Computer Science expected in 2006 from the University of Minnesota Duluth, and a Bachelor of Information Technology degree from the University of Delhi in India. His skills include programming languages like C, C++, Java, and Python as well as tools, databases, web technologies, and platforms. He has work experience as a research assistant and intern where he applied these skills. His projects include implementations of an n-gram statistics package and essay grader.
Metacomputer Architecture of the Global LambdaGrid, Larry Smarr
06.01.13
Invited Talk
Department of Computer Science
Donald Bren School of Information and Computer Sciences
Title: Metacomputer Architecture of the Global LambdaGrid
Irvine, CA
Reactive Microservices with Spring 5: WebFlux, Trayan Iliev
On November 27 Trayan Iliev from IPT presented “Reactive microservices with Spring 5: WebFlux” @Dev.bg in Betahaus Sofia. IPT – Intellectual Products & Technologies has been organizing Java & JavaScript trainings since 2003.
Spring 5 introduces a new model for end-to-end functional and reactive web service programming with Spring 5 WebFlux, Spring Data & Spring Boot. The main topics include:
– Introduction to reactive programming, Reactive Streams specification, and project Reactor (as WebFlux infrastructure)
– REST services with WebFlux – a comparison of annotation-based and functional reactive programming approaches
– Router, handler and filter functions
– Using reactive repositories and reactive database access with Spring Data. Building end-to-end non-blocking reactive web services using Netty-based web runtime
– Reactive WebClients and integration testing. Reactive WebSocket support
– Real-time event streaming to WebClients using JSON streams, and to JS clients using SSE.
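The talk itself is Java/Spring material, but the Reactive Streams contract that Reactor and WebFlux build on (publisher, subscriber, subscription, backpressure via request(n)) can be sketched in a few lines of any language. The following Python sketch mirrors the protocol only; it is not Spring or Reactor code:

```python
# Minimal sketch of the Reactive Streams contract: the subscriber pulls data
# with request(n), which is what gives the pipeline backpressure. This is an
# illustration of the protocol, not a Reactor/WebFlux implementation.

class Subscription:
    def __init__(self, items, subscriber):
        self.items = iter(items)
        self.subscriber = subscriber

    def request(self, n):
        # Emit at most n items -- the subscriber controls the flow rate.
        for _ in range(n):
            try:
                self.subscriber.on_next(next(self.items))
            except StopIteration:
                self.subscriber.on_complete()
                return

class Publisher:
    def __init__(self, items):
        self.items = items

    def subscribe(self, subscriber):
        subscriber.on_subscribe(Subscription(self.items, subscriber))

class CollectingSubscriber:
    def __init__(self):
        self.received = []
        self.done = False

    def on_subscribe(self, subscription):
        self.subscription = subscription
        subscription.request(2)        # ask for the first two items only

    def on_next(self, item):
        self.received.append(item)

    def on_complete(self):
        self.done = True

sub = CollectingSubscriber()
Publisher([1, 2, 3]).subscribe(sub)
print(sub.received)                    # [1, 2] -- only what was requested so far
sub.subscription.request(5)            # request the rest
print(sub.received, sub.done)          # [1, 2, 3] True
```

Nothing is emitted until the subscriber asks for it, which is the non-blocking, demand-driven behavior the WebFlux topics above rely on end to end.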
This document provides a summary of Andrew Barker's skills and experience as a software developer. It includes details of his technical skills in languages like C++, Java, Perl, and databases like Oracle, MySQL, and SQL. It also lists his personal skills such as attention to detail, problem solving, and communication. His experience includes over 25 years working as a consultant for companies like Atos, CSC, and British Telecom, where he developed and maintained various software solutions.
Scilab is a free and open-source numerical computation software developed by Scilab Enterprises. It includes a high-level programming language, hundreds of mathematical functions, advanced data structures, and a computation engine that can be easily embedded into applications. Scilab also includes Xcos for dynamic systems modeling and simulation and ATOMS for managing external modules. Scilab has over 2,000 functions and is used widely in industry, academia, and education.
Cyberinfrastructure and Applications Overview: Howard University June22marpierc
1) Cyberinfrastructure refers to the combination of computing systems, data storage systems, advanced instruments and data repositories, visualization environments, and people that enable knowledge discovery through integrated multi-scale simulations and analyses.
2) Cloud computing, multicore processors, and Web 2.0 tools are changing the landscape of cyberinfrastructure by providing new approaches to distributed computing and data sharing that emphasize usability, collaboration, and accessibility.
3) Scientific applications are increasingly data-intensive, requiring high-performance computing resources to analyze large datasets from sources like gene sequencers, telescopes, sensors, and web crawlers.
PDE2011 pythonOCC project status and plansThomas Paviot
Sldeshow presented at the latest NASA/ESA Product Data Exchange conference. Deals with pythonocc project status and midterm plans: WebGl renderer, high level API over the low level builtin data model.
The document summarizes a tutorial presentation about the Open Grid Computing Environments (OGCE) software tools for building science gateways. The OGCE tools include a gadget container, workflow composer called XBaya, and application factory service called GFac. The presentation demonstrates how these tools can be used to build portals and compose workflows to access resources like the TeraGrid.
Introduction to Xamarin Mobile PlatformDominik Minta
The document discusses using Xamarin to build multi-platform mobile applications for Android, iOS and Windows Phone. It outlines some of the benefits of using Xamarin such as writing code once that runs on multiple platforms, access to native APIs, and using C# and .NET. It also provides examples of applications built with Xamarin and discusses concepts like MVVM and Xamarin Forms. Some disadvantages mentioned are needing familiarity with multiple technologies and larger app sizes.
Presentation of the StratusLab cloud distribution at FOSDEM'13. It summarizes the current cloud services and their implementations. It concludes with the roadmap for upcoming releases.
This document provides an overview of web technology and its applications. It begins with a brief history of the internet and protocols like TCP/IP, DNS, and HTTP. It then explains the basic client-server model used by the web and defines common web-related terms like browsers, servers, URLs, and MIME types. The document concludes with a more detailed explanation of the hypertext transfer protocol (HTTP) and includes references for additional reading.
The document discusses grids and their potential use for data mining applications in Earth science. Some key points:
- Grids can connect distributed computing and data resources to enable large-scale applications and collaboration.
- The Grid Miner application was developed to mine satellite data on NASA's Information Power Grid as a demonstration.
- Grids could help couple satellite data archives to computational resources, allowing users to process large datasets.
- For this to be realized, data archives need to be connected to grids and tools developed to enable scientists to access and analyze data.
This document provides an overview and introduction to grid computing concepts. It discusses the benefits of grid computing such as exploiting underutilized resources and enabling collaboration. It also describes some key computational grid projects including a national fusion grid pilot project. The document outlines the layered architecture of grid systems and references some foundational projects and standards like Globus Toolkit and Global Grid Forum. Finally, it introduces the concepts of OGSA and OGSI which provide standard interfaces and behaviors for distributed system management in grid environments.
Jacques Magen - Future Internet Research and Experimentation (FIRE): Successf...FIA2010
FIRE is a European initiative to support experimentally-driven research for future internet technologies through large-scale experimental facilities. It has two dimensions: long-term visionary research and building testbeds to support both medium and long term research. Existing FIRE facilities have been used for experiments in areas like overlay routing, cognitive radio, open flow, IMS, services on clouds and grids, and the internet of things. New FIRE facilities provide unique opportunities for experiments involving dynamic service orchestration, wireless sensor networks, and software-defined networking.
The Why and How of HPC-Cloud Hybrids with OpenStack - Lev Lafayette, Universi...OpenStack
Audience Level
Intermediate
Synopsis
High performance computing and cloud computing have traditionally been seen as separate solutions to separate problems, dealing with issues of performance and flexibility respectively. In a diverse research environment however, both sets of compute requirements can occur. In addition to the administrative benefits in combining both requirements into a single unified system, opportunities are provided for incremental expansion.
The deployment of the Spartan cloud-HPC hybrid system at the University of Melbourne last year is an example of such a design. Despite its small size, it has attracted international attention due to its design features. This presentation, in addition to providing a grounding on why one would wish to build an HPC-cloud hybrid system and the results of the deployment, provides a complete technical overview of the design from the ground up, as well as problems encountered and planned future developments.
Speaker Bio
Lev Lafayette is the HPC and Training Officer at the University of Melbourne. Prior to that he worked at the Victorian Partnership for Advanced Computing for several years in a similar role.
Eclipse Hawk provides scalable querying of models by indexing them into graph databases. It addresses challenges of collaborative modeling on large systems by distributed teams. The Hawk API is designed for flexibility, performance, and scalability through features like multiple communication styles, efficient encodings, and paged results.
Hopsworks - ExtremeEarth Open WorkshopExtremeEarth
This document summarizes a presentation about the three-year ExtremeEarth project. It discusses the ExtremeEarth platform architecture, which brings together Earth observation data access from DIASes, end-user products from TEPs, and scalable AI capabilities from Hopsworks. The architecture provides infrastructure on Creodias and uses Hopsworks to develop end-to-end machine learning pipelines for processing petabytes of Earth observation data. Results have been exploited through additional research projects and a product offering on Hopsworks.ai. The project has also led to several publications and blog posts about applying AI to Earth observation data.
Sambhab Mohapatra is seeking a full time position in system software, embedded software, firmware, security or IoT. He has a Master's degree in Computer Engineering from ASU and a Bachelor's degree from BITS Pilani in India. His experience includes projects in device drivers, file systems, security, distributed systems, algorithms, circuits, and internships in verification and software development.
StratusLab: A IaaS Cloud Distribution Focusing on Simplicitystratuslab
The StratusLab collaboration provides an Infrastructure as a Service (IaaS) cloud distribution, allowing data centres to install private, community, or public clouds for their users. The distribution focuses on simplicity: allowing quick, straightforward installation on commodity hardware as well as easy access by scientists and engineers. Having run a cloud service based on StratusLab for several years, we will present issues that have arisen and provide examples of scientific use of the infrastructure. This will be followed by a live demonstration of the main features of the StratusLab cloud distribution.
(Presentation given at Summer School on Cloud Computing, Evry, France, August 29, 2013.)
BlogMyData is a virtual research environment for collaboratively visualizing environmental data. It allows researchers to detect features in models, diagnose problems, preview data before downloading, make sense of large datasets, and communicate complex concepts. Existing scientific visualization software requires expert knowledge and has limited interoperability. BlogMyData addresses this by providing a web-based blogging tool for scientists to discuss, collaborate, and record discussions as part of the research record. It utilizes open authentication and spatial features from existing frameworks to overlay blogged data on other visualization clients and offer customized geospatial feeds of blog entries. The prototype received positive feedback and future features may include supporting more data types.
Saiyam Kohli provides his contact information and an overview of his education and skills. He has a Master of Science in Computer Science expected in 2006 from the University of Minnesota Duluth, and a Bachelor of Information Technology degree from the University of Delhi in India. His skills include programming languages like C, C++, Java, and Python as well as tools, databases, web technologies, and platforms. He has work experience as a research assistant and intern where he applied these skills. His projects include implementations of an n-gram statistics package and essay grader.
Metacomputer Architecture of the Global LambdaGridLarry Smarr
06.01.13
Invited Talk
Department of Computer Science
Donald Bren School of Information and Computer Sciences
Title: Metacomputer Architecture of the Global LambdaGrid
Irvine, CA
Reactive Microservices with Spring 5: WebFlux Trayan Iliev
On November 27 Trayan Iliev from IPT presented “Reactive microservices with Spring 5: WebFlux” @Dev.bg in Betahaus Sofia. IPT – Intellectual Products & Technologies has been organizing Java & JavaScript trainings since 2003.
Spring 5 introduces a new model for end-to-end functional and reactive web service programming with Spring 5 WebFlow, Spring Data & Spring Boot. The main topics include:
– Introduction to reactive programming, Reactive Streams specification, and project Reactor (as WebFlux infrastructure)
– REST services with WebFlux – comparison between annotation-based and functional reactive programming approaches for building.
– Router, handler and filter functions
– Using reactive repositories and reactive database access with Spring Data. Building end-to-end non-blocking reactive web services using Netty-based web runtime
– Reactive WebClients and integration testing. Reactive WebSocket support
– Real-time event streaming to WebClients using JSON streams, and to JS clients using SSE.
Scilab is a free and open-source numerical computation software developed by Scilab Enterprises. It includes a high-level programming language, hundreds of mathematical functions, advanced data structures, and a computation engine that can be easily embedded into applications. Scilab also includes Xcos for dynamic systems modeling and simulation and ATOMS for managing external modules. Scilab has over 2,000 functions and is used widely in industry, academia, and education.
Invited talk at workshop "Exascale Computing in Astrophysics" held in Ascona, Switzerland, 8-13 September 2013.
http://www.itp.uzh.ch/exastro2013/Home.html
Cyberinfrastructure in Louisiana: From Black Holes to Hurricanes. Presentation at Cyberinfrastructure Days, Notre Dame, April 29-30, 2010. http://ci.nd.edu/
Scientific applications increasingly rely on large datasets that require high-speed networks for remote collaboration and distributed analysis. Dr. Allen's research in multi-physics simulations, data archiving, and visualization faces challenges due to the complexity of networking multiple sites. There is a need for high-level application services and APIs to integrate networks and enable new science scenarios beyond simply moving files. Demonstrating working prototypes will help address technical details and allow computational scientists to fully leverage network capabilities.
1. Integrating Web 2.0 Technologies with Scientific Simulation Codes for Real-Time Collaboration Gabrielle Allen (LSU), Frank Loeffler (LSU), Thomas Radke (AEI), Erik Schnetter (LSU), Edward Seidel (NSF/LSU) IEEE Cluster Computing, New Orleans, September 2009
2. Gravitational Wave Physics: Models, Analysis & Insight, Observations. Petascale problems: full 3D general relativistic models of binary systems, supernovae, gamma-ray bursts
8. Cactus Application Environment: individual research groups; domain-specific shared infrastructure; Flesh: APIs, information, orchestration; adaptive mesh refinement, parallel I/O, interaction, …
9. Typical Black Hole Simulations at LSU: ~300 Cactus thorns; 10,000 potential parameters; 20 different supercomputers; 100-2000 cores; days/weeks to run (checkpoint/restart); GBs to TBs of data (HDF5, ASCII, JPEG)
10. Collaborative Technologies. Technologies to share simulation-related information have been developed in our group since the early 1990s and are essential to support the scientific research. This talk reviews the historical evolution of these technologies and shows how Web 2.0 provides new tools to enable old scenarios.
11. Web-based Mail Lists. Mosaic web browser (1993, NCSA); Seidel's group at NCSA worried about content: http://archive.ncsa.illinois.edu/Cyberia/NumRel/GravWaves.html (1995). Collaborative Cork Board (CoCoBoard), mid 1990s: researchers had web-based "project pages" and could attach images (usually 1-D plots of results); used until the late 1990s. Currently: project-based private wikis hold parameter/output files and figures and organize material for weekly project conference calls. Cons: a network connection is needed to access/edit the wiki, and editing is slow.
13. Simulation Web Interfaces. Thorn "HTTPD" was the first collaborative tool fundamentally integrated into Cactus, written by Werner Benger (1999) while visiting NCSA from Germany (a 7-hour time difference and email only). It used the socket library developed for remote visualization (John Shalf & the TIKSL project). Thorn "HTTPD" entered the standard toolkit (2000), serving simulation status, variables, timing, a viewport, output files, parameter steering, etc.; thorns can include their own web content.
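The idea behind thorn HTTPD, a small web server embedded in the running simulation that renders live state as HTML, can be sketched outside Cactus. The following toy Python illustration is not Cactus code; the state dictionary and page layout are invented for the example:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy stand-in for the live simulation state a thorn like HTTPD exposes.
STATE = {"iteration": 1024, "physical_time": 12.5}

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Render the current state as a minimal HTML status page.
        body = "<html><body><h1>Simulation status</h1><ul>"
        body += "".join(f"<li>{k}: {v}</li>" for k, v in STATE.items())
        body += "</ul></body></html>"
        data = body.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Serve on an ephemeral port and fetch the page once, as a browser would.
server = HTTPServer(("127.0.0.1", 0), StatusHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
page = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read().decode()
server.shutdown()
```

A real thorn additionally exposes parameter-steering forms and output-file listings, and runs into exactly the deployment issues listed on the next slide.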
14. Issues. Authorization to web pages: a username/password in the parameter file is insecure and awkward; a newer version uses HTTPS and can also use X.509. Browsers can only display images in certain formats, so a visualization thorn uses gnuplot to include e.g. performance over time and physical parameters. Deployment is a problem on compute nodes where the web server cannot be directly accessed (port forwarding, firewalls). How can a collaboration find and track the simulations, and how is their existence publicized?
17. Simulation Reports and Email. A readable report is automatically generated for each simulation (computation and physics); prototyped in 2001 but not used (?). How to collect reports in one place? Mail Thorn (sendmail): email is reliable and fault tolerant (spooled), but supercomputers do not allow mail to be sent from compute nodes.
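A per-simulation report mail of this kind is easy to compose with standard tooling. This Python sketch builds the message that would be handed to a local sendmail spool; the addresses, subject format, and field names are invented for illustration:

```python
from email.message import EmailMessage

def simulation_report(job_id, host, summary):
    """Compose an automatic report mail of the kind the Mail thorn
    would send (all names and addresses here are illustrative)."""
    msg = EmailMessage()
    msg["Subject"] = f"[Cactus] report for {job_id} on {host}"
    msg["From"] = "cactus@" + host
    msg["To"] = "numrel-reports@example.org"
    msg.set_content(summary)
    return msg

msg = simulation_report("bh-merger-042", "tezpur.lsu.edu",
                        "Iteration 1024 reached, physical time 12.5 M.")
```

Because sendmail spools messages locally, delivery survives transient network outages, which is exactly the fault tolerance the slide credits email with; the blocker remains compute nodes that forbid outbound mail entirely.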
19. Announcing and Grid Portals. Collaborations need reliable, live information about long-running simulations. NSF Astrophysics Simulation Collaboratory (ASC), 1999: a Grid Portal provided a centralized, collaborative interface to submit, monitor, and archive simulations. Built in Java, JSP, and JavaScript with a back-end database, it contributed to the GridSphere design (GridLab) and used the Java CoG Kit to submit jobs and do basic monitoring. [ASC Portal (2002)]
20. Announcing Simulation Info. Publish (application-provided) simulation information. Thorn Announce appeared in the prototype Cactus Worm scenario (2001); the message is built from Flesh/thorn info and transported via XML-RPC to a remote socket (the portal). Issues: job IDs; security and mapping of users; cumbersome user-set parameters (portal location, visibility of the job, notification needs). [Announcing to the ASC Portal (2002)]
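The announce message itself can be sketched as an XML-RPC call. In this Python sketch the method name simulation.announce and the field names are assumptions for illustration, not the thorn's actual wire format:

```python
import xmlrpc.client

def build_announce(job_id, host, iteration, phys_time):
    """Marshal simulation status as an XML-RPC methodCall payload,
    as thorn Announce would send it over a socket to the portal.
    Method and field names are illustrative, not the real protocol."""
    params = ({"job_id": job_id, "host": host,
               "iteration": iteration, "time": phys_time},)
    return xmlrpc.client.dumps(params, methodname="simulation.announce")

payload = build_announce("bh-merger-042", "tezpur.lsu.edu", 1024, 12.5)
```

XML-RPC keeps the transport self-describing, but it also shows why the slide's issues arise: nothing in the payload itself authenticates the sender or maps it to a portal user.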
21. Notification. The portal notification service: portal users configure it at the portal, simulations configure it in the parameter file; channels are email, SMS, and instant message. Initial experiments generated large telecom bills! Cool and useful, but lots of work (FTE) to develop and modify the portal service, and difficult to configure.
22. Web 2.0 Technologies. Use them for collaborative, simulation-level messaging and information archiving: reliable, persistent, well documented, user-configurable, cheap, well supported, with good APIs.
23. Twitter. Launched March 2006, a real-time short messaging system: users send and receive each other's updates (tweets) on a wide range of devices, with rudimentary social networking. Receivers can filter the messages they see and specify how they receive them. The Twitter API lets applications e.g. post a new Twitter message from a user. Free.
24. Thorn Twitter. Uses libcurl, with Cactus parameters for the Twitter username/password, and calls the Twitter API method statuses/update. At LSU a "numrel" group account receives messages when a simulation starts and at different stages.
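The shape of the request the thorn makes through libcurl can be sketched with Python's standard library. The endpoint URL and Basic-Auth scheme reflect Twitter's historical API of that era (retired in 2010) and are shown only to illustrate the request, not as a working endpoint:

```python
import base64
import urllib.parse
import urllib.request

def build_tweet_request(user, password, status):
    """Build the HTTP POST for the historical statuses/update call:
    form-encoded body, HTTP Basic authentication. The URL is the
    old, now-retired v1 endpoint and serves only as illustration."""
    data = urllib.parse.urlencode({"status": status[:140]}).encode()
    req = urllib.request.Request(
        "https://api.twitter.com/1/statuses/update.xml", data=data)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req

req = build_tweet_request("numrel", "secret",
                          "Simulation bh-merger-042 reached iteration 1024")
```

Note the 140-character truncation: status messages from a simulation (job name, milestone, host) have to be condensed to tweet length.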
25. Flickr. Launched 2004 as an image hosting website for digital photographs (and now videos); bought by Yahoo (2005). A professional account ($25/yr) gives unlimited use. A web service API supports uploading and manipulating images, which can be grouped into sets and collections and carry tags, titles, descriptions, and metadata from EXIF headers. Social networking: users can comment on images, flag them, order them by popularity, etc.; visibility is public/private/friends/family; blogs; an RSS feed allows quick previewing.
26. Thorn Flickr. Sends images from the running simulation; uses flickcurl, libcurl, libxml2, and OpenSSL. Authentication is more complex (API key, shared secret). The thorn uploads images that are generated by Cactus (and known to the I/O layer), e.g. by IOJpeg. Each simulation is given its own Flickr set.
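The "shared secret" enters through Flickr's classic call-signing scheme, which flickcurl implements: parameters are sorted by name, concatenated as name+value, prefixed with the secret, and hashed with MD5 to form api_sig. A minimal Python sketch, with placeholder key values:

```python
import hashlib

def flickr_sign(params, shared_secret):
    """Compute api_sig for Flickr's classic signed-call scheme:
    MD5 over the shared secret followed by the parameters sorted
    by name and concatenated as name+value (no separators)."""
    joined = "".join(k + str(params[k]) for k in sorted(params))
    return hashlib.md5((shared_secret + joined).encode()).hexdigest()

# Placeholder credentials, for illustration only.
sig = flickr_sign({"api_key": "abc123", "method": "flickr.test.echo"},
                  "s3cr3t")
```

Because every signed call needs the shared secret, the thorn must carry it alongside the API key, which is why the slide calls this authentication "more complex" than Twitter's simple username/password.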
27. Future Work. Extend capabilities and do production testing; a common authentication mechanism; a social networking model (individual/shared accounts); development of common tags, more metadata, etc.; storing videos (Flickr, YouTube, Vimeo), an advantage for scientists presenting. Lots of other possibilities: DropBox to publish files across a collaboration, WordPress for simulation reports/blogs, Facebook to replace grid portals and aggregate services, cloud computing APIs for "grid" scenarios, …
28. Conclusions. This started as a fun project (undergrad). Web 2.0 provides reliable delivery, storage, access, and flexible collaborative features, and lets us easily prototype new interactive and collaborative scenarios (we have really missed this); small groups and individuals can do this too! It sets a target standard of ease-of-use for cyberinfrastructure development. For real use we need unified authentication, clear policies on data, and site versions.