The document describes a prototype 3D VO archive called B0DEGA that would provide discovery, querying, and access to galaxy datacubes from the AMIGA and B0DEGA projects. It discusses the context and goals of those projects, which involve analyzing the interstellar medium of isolated galaxies using 3D spectral line data. The archive would support capabilities like cone searches, getCapabilities queries, and virtual data access to enable workflows with the large 3D datasets.
Analysis of Andromeda Galaxy Data Using Spark: Spark Summit East Talk by Jose Na... (Spark Summit)
The Andromeda Galaxy, or M31, is a spiral galaxy approximately 2.5 million light years from the Milky Way. As the nearest large external galaxy, it lets us study galaxy features that are not visible in our own Milky Way because of our position within it. Recent studies have shown that the disc and halo of the Andromeda Galaxy extend further than previously thought. Rafiei-Ravandi et al. (2016) extended previous surveys of Andromeda at mid-infrared wavelengths to produce a catalog containing 426,529 objects. We used the Apache Spark API for Python to cross-correlate these objects with previous astronomical catalogs such as SIMBAD, NED, and MAST (over 11 million objects). The aim is to determine whether the objects from the new survey are all part of the M31 galaxy or belong to the background or foreground. The Spark-Python code makes full use of Spark RDDs to join multiple catalogs into a single table, which helps us predict whether a particular object is in fact part of Andromeda.
We used key-value pairs to deduplicate data from the MAST catalog, and with groupByKey we can classify a particular astronomical object against previous catalogs. We conclude that our new tool helps us better understand multiple astronomical catalogs for the Andromeda galaxy, including the relative resolution between catalogs and the regions of the galaxy where particular astronomical objects (such as X-ray binaries or black holes) dwell.
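The key-value grouping described above can be sketched in a few lines; this is a plain-Python illustration of the same pattern (the object IDs, catalog names, and classifications are invented for the example), since the original code ran on Spark RDDs with groupByKey:

```python
from collections import defaultdict

# Toy stand-in for the Spark RDD pipeline: each record is
# (object_id, (catalog_name, classification)).
records = [
    ("J0042+4116", ("SIMBAD", "X-ray binary")),
    ("J0042+4116", ("NED", "X-ray source")),
    ("J0044+4130", ("MAST", "star")),
    ("J0044+4130", ("MAST", "star")),  # duplicate MAST entry
]

def group_by_key(pairs):
    """Plain-Python equivalent of Spark's groupByKey."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return dict(grouped)

# Key-value deduplication step, then grouping: every catalog entry
# for the same sky object ends up under one key, so the object can be
# classified by looking at all catalogs at once.
unique_records = list(dict.fromkeys(records))
by_object = group_by_key(unique_records)
```

In Spark the same shape is achieved with `rdd.distinct().groupByKey()`, with the work distributed across the cluster.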
NASA Earth Exchange (NEX) is a virtual collaborative that brings scientists and researchers together in a knowledge-based social network and provides the necessary tools, computing power, and data to accelerate research and innovation and to provide transparency.
Semantically-Enabling the Web of Things: The W3C Semantic Sensor Network Ontology (Laurent Lefort)
Presentation of the SSN XG results at eResearch Australia 2011 https://eresearchau.files.wordpress.com/2012/06/74-semantically-enabling-the-web-of-things-the-w3c-semantic-sensor-network-ontology.pdf
A Recommender Story: Improving Backend Data Quality While Reducing Costs (Databricks)
Information overload is one of the biggest challenges academics face on a daily basis while finding the right knowledge to advance science. With around 7,000 research articles published every day, how do you find the right ones?
Elsevier is a global information analytics business that helps institutions and professionals advance healthcare and open science and improve performance. With many data sources and signals available, data science and big data engineering provide the perfect opportunity to deliver more value to researchers.
Here we will focus on Mendeley, an open (free of charge) academic content platform that helps researchers discover new information through features such as a crowd-sourced collection of academic documents (the Catalogue) and various personalized recommender systems. Mendeley Suggest, the recommender system, helps millions of researchers worldwide find documents and people relevant to their research field that they did not yet know existed. The personalized recommenders are powered by the Mendeley Catalogue, which clusters 2 billion records into canonical records using state-of-the-art algorithms and big data solutions (e.g. Spark).
In the past few years we noticed that, as our content grew, the quality of the canonical records started drifting due to scalability issues. As a result we faced clustering accuracy problems that in turn also impacted the recommenders. In this talk we will highlight how we re-architected the construction of the Mendeley Catalogue to improve its scalability and accuracy. In addition, we will show how the migration from Hadoop MapReduce to Spark has helped us reduce costs and improve maintainability.
Cyberinfrastructure to Support Ocean Observatories (Larry Smarr)
05.03.18
Invited Talk to the Ocean Studies Board
National Research Council
Title: Cyberinfrastructure to Support Ocean Observatories
University of California San Diego
Astronomical Data Processing on the LSST Scale with Apache Spark (Databricks)
The next decade promises to be exciting for both astronomy and computer science, with a number of large-scale astronomical surveys in preparation. One of the most important is the Large Synoptic Survey Telescope, or LSST. LSST will produce the first ‘video’ of the deep sky in history by continually scanning the visible sky and taking one 3.2-gigapixel image every 20 seconds. In this talk we will describe LSST’s unique design and how its image processing pipeline produces catalogs of astronomical objects. To process and quickly cross-match catalog data we built AXS (Astronomy Extensions for Spark), a system based on Apache Spark. We will explain its design and what is behind its great cross-matching performance.
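The core idea behind fast catalog cross-matching in systems like AXS is to bucket objects into declination zones so that only nearby candidates are ever compared. A minimal pure-Python sketch of that principle (the zone height, match radius, and catalogs below are invented for illustration; AXS itself implements zoning inside Spark and uses much finer zones):

```python
import math
from collections import defaultdict

ZONE_HEIGHT_DEG = 1.0  # assumed zone height for the example

def zone_of(dec_deg):
    """Assign an object to a declination zone."""
    return int(math.floor((dec_deg + 90.0) / ZONE_HEIGHT_DEG))

def angular_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (haversine formula)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a)))

def cross_match(cat_a, cat_b, radius_deg=0.001):
    """Match (id, ra, dec) rows of cat_a against cat_b within radius_deg."""
    by_zone = defaultdict(list)
    for row in cat_b:
        by_zone[zone_of(row[2])].append(row)
    matches = []
    for id_a, ra_a, dec_a in cat_a:
        z = zone_of(dec_a)
        # Only the object's own zone and its neighbours can hold a match.
        for id_b, ra_b, dec_b in by_zone[z - 1] + by_zone[z] + by_zone[z + 1]:
            if angular_sep_deg(ra_a, dec_a, ra_b, dec_b) <= radius_deg:
                matches.append((id_a, id_b))
    return matches
```

In a distributed setting the zone number becomes the partitioning key, so each executor only ever compares objects from adjacent strips of sky.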
Researchers use OpenData to inform their work, and they are also producers of data and software that can be re-shared with the public. In Canada, much university research is supported by public funds, and an argument can be made that the results of that research should be made accessible to the public. The research at the Geomatics and Cartographic Research Centre will be featured, as will community-based social policy research in Ottawa. In Canada some data are accessible, but most are not, and where they are, cost-recovery policies and regressive licensing impede their use. The talk will feature examples where data are open and where opportunities for evidence-based decision making are restricted.
Using the Data Cube vocabulary for Publishing Environmental Linked Data on la... (Laurent Lefort)
Canberra Semantic Web Meetup.
Initiatives have been launched to develop semantic vocabularies representing statistical classifications and discovery metadata. Tools are also being created by statistical organizations to support the publication of dimensional data conforming to the Data Cube specification, now in Last Call at W3C.
The meeting will be an opportunity to hear about two Semantic Web and Linked Data initiatives for statistical data that are driven by the Australian Government. The Bureau of Meteorology and CSIRO have recently released a Linked Data version of the ACORN-SAT historical climate data at http://lab.environment.data.gov.au, and the ABS has released the Census data modelled in the Data Cube vocabulary as part of a challenge the ABS is organising in the context of the SemStats Workshop (http://www.datalift.org/en/event/semstats2013/challenge) at the International Semantic Web Conference (ISWC) in Sydney (http://iswc2013.semanticweb.org).
Come along to hear about these two projects, the challenges encountered and the solutions developed.
Time to Science/Time to Results: Transforming Research in the Cloud (Amazon Web Services)
This session demonstrates how the cloud can accelerate breakthroughs in scientific research by providing on-demand access to powerful computing. You will gain insight into how scientific researchers are using the cloud to solve complex science, engineering, and business problems that require high-bandwidth, low-latency networking and very high compute capability, and hear how leveraging the cloud reduces the cost and time of conducting large-scale, worldwide collaborative research. Researchers can access computational power, data storage, supercomputing resources, and data-sharing capabilities in a cost-efficient manner without implementation delays. Disease research can be accomplished in a fraction of the time, and innovative researchers in small schools or distant corners of the world gain access to the same computing power as those at major research institutions by leveraging Amazon EC2, Amazon S3, compute-optimized C3 instances, and more to increase collaboration. This session will provide best practices and insight from the UC Berkeley AMPLab on the services used to connect disparate datasets to drive meaningful new insight and impact.
How e-Science tools are needed for the new data-intensive science, specifically targeted at the Square Kilometre Array. Talk given at Special Symposium 15 on Data Intensive Astronomy, held during the General Assembly Meeting of the International Astronomical Union in Beijing, 2012.
• “Detecting radio-astronomical "Fast Radio Transient Events" via an OODT-based metadata processing pipeline”, Chris Mattmann, Andrew Hart, Luca Cinquini, David Thompson, Kiri Wagstaff, Shakeh Khudikyan. ApacheCon NA 2013, February 2013
LambdaGrids--Earth and Planetary Sciences Driving High Performance Networks a... (Larry Smarr)
05.02.04
Invited Talk to the NASA Jet Propulsion Laboratory
Title: LambdaGrids--Earth and Planetary Sciences Driving High Performance Networks and High Resolution Visualizations
Pasadena, CA
Mission-Critical, Real-Time Fault-Detection for NASA's Deep Space Network usi... (Confluent)
NASA's Deep Space Network (DSN) operates spacecraft communication links for NASA deep-space spacecraft missions, including the Curiosity Rover, the Voyager twin spacecraft, Galileo, New Horizons, etc., and has done so reliably for over fifty years. The DSN Complex Event Processing (DCEP) software assembly is a new software system being deployed worldwide into NASA's DSN Deep Space Communication Complexes (DSCC's), including facilities in Spain, Australia, and the United States. The system brings into the DSN next-generation "Big Data" and "Fast Data" infrastructural tools, including Apache Kafka, for correlating real-time network data with other critical data assets, including predicted antenna pointing parameters and extensive logging of physical hardware in the DSN. The ultimate use case is to ingest, filter, store, and visualize all of the DSN's monitor and control data and to actively ensure the successful DSN tracking, ranging, and communication integrity of dozens of concurrent deep-space missions. The system is also intended to support future autonomy applications, including automated anomaly detection in real-time network monitor streams and automated reconfiguration of antenna related assets as needed by future, increasingly autonomous spacecraft. This talk will focus upon the software system behind DCEP, and introduce novel approaches to increasing NASA spacecraft link-control operator cognizance into anomalies that may and do occur during spacecraft tracking activities. This talk will also offer lessons learned, and provide a glimpse into one of the most unique, "out-of-this-world", applications of Apache Kafka.
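The automated anomaly detection mentioned above can be illustrated with a rolling z-score over a monitor stream. This is a generic sketch of the idea, not the DCEP implementation; the window size and threshold are arbitrary, and a real deployment would consume the stream from Kafka rather than a Python list:

```python
import math
from collections import deque

def detect_anomalies(stream, window=20, threshold=4.0):
    """Flag samples that deviate strongly from the recent rolling mean.

    Keeps a sliding window of recent monitor values and flags any new
    sample whose z-score against that window exceeds the threshold.
    Returns the indices of the flagged samples.
    """
    recent = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(stream):
        if len(recent) == window:
            mean = sum(recent) / window
            var = sum((x - mean) ** 2 for x in recent) / window
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > threshold:
                anomalies.append(i)
        recent.append(value)
    return anomalies
```

The same per-sample logic maps naturally onto a Kafka consumer loop, with one detector instance per monitor channel.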
Evolving Storage and Cyber Infrastructure at the NASA Center for Climate Simu... (inside-BigData.com)
Ellen Salmon from NASA gave this talk at the 2017 MSST conference. "This talk will describe recent developments at the NASA Center for Climate Simulation, which is funded by NASA’s Science Mission Directorate, and supports the specialized data storage and computational needs of weather, ocean, and climate researchers, as well as astrophysicists, heliophysicists, and planetary scientists. To meet requirements for higher-resolution, higher-fidelity simulations, the NCCS augments its High Performance Computing and storage/retrieval environment. As the petabytes of model and observational data grow, the NCCS is broadening data services offerings and deploying and expanding virtualization resources for high performance analytics."
Watch the video: http://wp.me/p3RLHQ-gPj
Learn more: https://www.nccs.nasa.gov/
and
http://storageconference.us/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Organizations around the world are facing a "data tsunami" as next-generation sensors produce enormous volumes of Earth observation data. Come learn how NASA is leveraging AWS to efficiently work with data and computing resources at massive scales. NASA is transforming its Earth Sciences EOSDIS (Earth Observing System Data Information System) program by moving data processing and archiving to the cloud. NASA anticipates that their Data Archives will grow from 16PB today to over 400PB by 2023 and 1 Exabyte by 2030, and they are moving to the cloud in order to scale their operations for this new paradigm. Learn More: https://aws.amazon.com/government-education/
Virtual Observatories as the Drivers of Space Science - Robert Rankin, Univer... (Cybera Inc.)
Robert Rankin, Professor of Physics and Astronomy at the University of Alberta, presented these slides as part of the Cybera Summit 2010 session "The Evolution of Collaborative Science." For more information please visit http://www.cybera.ca/evolution-collaborative-science
Applying Photonics to User Needs: The Application Challenge (Larry Smarr)
05.02.28
Invited Talk to the 4th Annual On*VECTOR International Photonics Workshop
Sponsored by NTT Network Innovation Laboratories
Title: Applying Photonics to User Needs: The Application Challenge
University of California, San Diego
The Academic and R&D Sectors' Current and Future Broadband and Fiber Access N... (Larry Smarr)
05.02.23
Invited Access Grid Talk
MSCMC FORUM Series
Examining the National Vision for Global Peace and Prosperity
Title: The Academic and R&D Sectors' Current and Future Broadband and Fiber Access Needs for US Global Competitiveness
Arlington, VA
Similar to B0DEGA 3D VO Archive - IVOA 2010 Fall Interop
Jupyter notebooks have arrived to stay as a means to document the scientific analysis protocol, as well as to provide executable recipes shared seamlessly among the community. This has triggered the rise of a plethora of complementary tools and services associated with them. This talk will cover different possibilities to use Jupyter notebooks and the JupyterLab interface. We will start with a description of their basic functionality, as well as functionality extensions not widely known by the community. We will describe how to take advantage of their cross-language capabilities to enhance collaborative work, and also how to use them as complementary assets in the paper publication process to provide reproducibility of results. Other aspects of how to deal with the modularity and scalability of long, complex notebooks will be covered, and we will look at several platforms for rendering and execution other than the browser and the local desktop. We will finish with how they are actually being used together with Docker and Binder as part of the versioned executable documentation of a project like Gammapy.
IPython Notebooks have brought a substantial improvement in the documentation of scripts, as well as in their inspection and reuse. IPython Notebooks also give access to different programming languages (Fortran, IDL, R, Shell, ...) within the same script, which, together with their Web-based access mode, makes them ideal for collaborative work (multi-language, multi-user, multi-platform, etc.). I will describe the kinds of things that can be done with IPython Notebooks, from collaborative development of multi-language code, through the reuse of tutorials and interactive visualization of results, to the distribution of more modular code and the final publication of a verifiable and reproducible digital experiment: the prelude to executable papers.
Astronomy is a collaborative science, but it has also become highly specialized, as many other disciplines. Improvement of sharing, discovery and access to resources will enable astronomers to greatly benefit from each other’s highly specialized knowhow. Some initiatives led by scientists and publishers, complement traditional paper publishing with assets published in more interactive digital formats. Among the main goals of these efforts are improving the reproducibility and clarity of the scientific outcome, going beyond the static PDF file, and fostering re-use, which turns into a more efficient exploitation of available digital resources.
The science performed in astronomy is digital science, from observing proposals to final publication, including the data and software used: each element and action involved in the scientific output could be recorded in electronic form.
This does not prevent the final outcome of an experiment from remaining difficult to reproduce. An exhaustive documentation process can be long and tedious, access to all the resources must be granted, and even then the repeatability of results is not guaranteed. At the same time, we have access to a wealth of files, observational data, and publications that could be exploited more efficiently with better visibility of the scientific production, avoiding duplication of effort and reinvention.
Digital Science: Reproducibility and Visibility in Astronomy (Jose Enrique Ruiz)
The science done in astronomy is digital science, from observing proposals to final publication, including the data and software used: each element and action involved in the scientific output could be recorded in electronic form. Yet the final outcome of an experiment is still difficult to reproduce. The procedure can be long, tedious, and not easily accessible or understandable, even to the author. At the same time, we have a rich infrastructure of files, observational data, and publications. This could be used more efficiently if we achieved greater visibility of the scientific production, avoiding duplication of effort and reinvention.
Reproducibility is a cornerstone of the scientific method, and extracting relevant information from the current and future data flood is key in astronomy. The AMIGA group (Analysis of the interstellar Medium of Isolated GAlaxies, IAA-CSIC, http://amiga.iaa.es) faces these two challenges in the European project "Wf4Ever: Advanced Workflow Preservation Technologies for Enhanced Science", which aims to preserve the methodology in scalable semantic repositories in order to facilitate its discovery, access, inspection, exploitation, and distribution. These repositories store experiments as "Research Objects" whose main constituents are digital scientific workflows. These provide a comprehensive view and a clear scientific interpretation of the experiment, as well as the automation of the method, going beyond the usual pipelines that normally end at data processing.
The quantitative leap in volume and complexity of the next generation of archives will require analysis and data mining tasks to live closer to the data, in distributed computing and storage environments, but these tasks should also be modular enough to allow customization by scientists and be easily accessible to foster their dissemination among the community. Astronomy is a collaborative science, but it has also become highly specialized, as have many other disciplines. Sharing, preservation, discovery, and much simpler access to resources in the composition of scientific workflows will enable astronomers to greatly benefit from each other's highly specialized know-how; they constitute a way to push astronomy to share and publish not only results and data, but also processes and methodologies.
We will show how the use of scientific workflows can help improve the reproducibility of experiments, enable more efficient exploitation of astronomical archives, and increase the visibility and reuse of the scientific methodology.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
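For context on how JMeter metrics typically land in InfluxDB: JMeter's Backend Listener emits samples in InfluxDB's line protocol. A minimal sketch of building such records (the measurement, tag, and field names here are invented for the example, and escaping of spaces and commas in values is omitted):

```python
def fmt_field(key, value):
    """Format one field per InfluxDB line-protocol typing rules."""
    if isinstance(value, bool):
        return f"{key}={'true' if value else 'false'}"
    if isinstance(value, int):
        return f"{key}={value}i"   # integer fields carry an 'i' suffix
    if isinstance(value, float):
        return f"{key}={value}"
    return f'{key}="{value}"'      # string fields are double-quoted

def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Build one record: measurement,tags fields timestamp."""
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(fmt_field(k, v) for k, v in sorted(fields.items()))
    return f"{measurement},{tag_part} {field_part} {timestamp_ns}"

# One simulated JMeter sample: aggregate response time for a transaction.
line = to_line_protocol(
    "jmeter",
    {"application": "demo", "transaction": "login"},
    {"avg": 123.4, "count": 10},
    1700000000000000000,
)
```

Records in this shape are POSTed to InfluxDB's write endpoint, and Grafana then queries the stored series to render the dashboards shown in the demo.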
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
1. B0DEGA 3D VO Archive
A prototype service for a catalog of
galaxy datacubes
Jose Enrique Ruiz
IAA - CSIC
IVOA Nara Fall Interop 07/12/2010
2. Context
• AMIGA
• B0DEGA
3D Archive
• discovery
• queryData
• accessData
• getCapabilities
Workflows
• A scenario for Virtual Data
3. Analysis of the interstellar Medium of Isolated Galaxies
PI : Lourdes Verdes-Montenegro
IAA-CSIC, IRAM
http://amiga.iaa.es
Obs. Marseille, Obs. Paris, CfA, ASIAA, MPIfA, IAC,
Univ. Alabama, Mc Donald Observatory, Arcetri, UNAM,
Kapteyn Astronomical Institute.
A statistically significant sample of isolated galaxies is needed to provide a
baseline against which to compare the behaviour of galaxies in
denser environments
Multi-λ analysis of ~1000 galaxies
+
Intensive and complex analysis of 3D data is also needed
2 spatial dimensions + 1 velocity dimension
4. AMIGA Catalog
• ConeSearch Service
• Web Interface
RADAMS
Radio Astronomy Data Model for Single-dish telescopes
Juan de Dios Santander-Vela
Robledo DSS-63 VO Archive
• ConeSearch Service
• SSA Service
• Web Interface
TAPAS
Telescope Archive for Public Access System
IRAM-30m VO Archive
• ConeSearch Service
• Web Interface
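The ConeSearch services listed above follow the IVOA Simple Cone Search pattern: an HTTP GET request with RA, DEC, and SR (search radius) parameters, returning a VOTable of matching sources. A minimal sketch of building such a query in Python (the endpoint URL is a hypothetical placeholder, not the actual service address):

```python
from urllib.parse import urlencode

# Hypothetical base URL for an AMIGA-style cone search service
BASE_URL = "http://amiga.iaa.es/conesearch"

def cone_search_url(ra_deg, dec_deg, radius_deg):
    """Build an IVOA Simple Cone Search query URL.

    RA and DEC are decimal degrees (ICRS); SR is the search
    radius in degrees. A compliant service answers with a
    VOTable listing sources inside the cone.
    """
    params = urlencode({"RA": ra_deg, "DEC": dec_deg, "SR": radius_deg})
    return f"{BASE_URL}?{params}"

# Example: a 0.5-degree cone around RA=10.68, DEC=41.27
print(cone_search_url(10.68, 41.27, 0.5))
```

The same query pattern applies to all three archives (AMIGA catalog, Robledo DSS-63, TAPAS); only the base URL changes.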
5. Below 0 DEgrees Galaxies
PI : D. Espada
Legacy project of the Submillimeter Array (SMA) interferometer
http://b0dega.iaa.es
IAA-CSIC
CfA (Harvard-Smithsonian Center for Astrophysics)
ASIAA (Academia Sinica Institute of Astronomy and Astrophysics)
A survey of molecular gas properties in the circumnuclear regions of a large
sample of nearby spiral galaxies that have undergone recent interactions,
many of them characterized by central starbursts.
30 processed and reduced datacubes of galaxies
8. Data needed by the Astronomer
• Decoupled coordinates
• Distances
• Morphological Type
• Bar
• Ring
• Multiple
• Linear diameter
• Masses
• Luminosities
• Inclination
• Position Angle
21. • SIAv2 Draft + SSA address most of the issues
• Some tabular data from catalogs is also needed
• SpecDM + CharDM cover most of the metadata
• A complete generic DM is needed for UTypes
• Virtual Data generation is needed for huge files
• accessData standard params are needed for Virtual Data
• The getCapabilities method is key for building Workflows
• Upcoming facilities will provide 3D datacubes
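A getCapabilities (VOSI-style) response is an XML document listing the standard interfaces a service supports, which a workflow engine can inspect to decide how to call the archive. A minimal parsing sketch, where the embedded XML snippet is illustrative only and not taken from the actual B0DEGA service:

```python
import xml.etree.ElementTree as ET

# Illustrative VOSI-style capabilities document (not from the real service)
CAPABILITIES_XML = """
<capabilities>
  <capability standardID="ivo://ivoa.net/std/ConeSearch">
    <interface><accessURL>http://b0dega.iaa.es/conesearch</accessURL></interface>
  </capability>
  <capability standardID="ivo://ivoa.net/std/SSA">
    <interface><accessURL>http://b0dega.iaa.es/ssa</accessURL></interface>
  </capability>
</capabilities>
"""

def list_capabilities(xml_text):
    """Map each capability's standardID to its access URL."""
    root = ET.fromstring(xml_text)
    caps = {}
    for cap in root.findall("capability"):
        url = cap.find("./interface/accessURL")
        caps[cap.get("standardID")] = url.text if url is not None else None
    return caps

print(list_capabilities(CAPABILITIES_XML))
```

With such a map in hand, a workflow can discover at run time whether a cone search, SSA, or accessData endpoint is available before wiring the archive into a pipeline.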