See presentation at http://www.slideshare.net/soilandreyes/2011-0716-scufl2-because-a-workflow-is-more-than-its-definition-bosc-2011
From BOSC 2011 - http://www.open-bio.org/wiki/BOSC_2011_Schedule
Status Quo and (current) Limitations of Library Linked Data, by Daniel Vila Suero
Talk at the Semantic Web in Libraries Conference 2012 (SWIB2012), Cologne, 28/11/2012, during the session "TOWARDS AN INTERNATIONAL LOD LIBRARY ECOLOGY".
(http://swib.org/swib12/programme.php)
Slide deck presenting the provenance support of the Taverna workflow system, detailing its architecture and ontologies, and how results are exported as Research Object bundles, including the PROV-O provenance of the workflow run.
This upload is the PDF version; for the PPTX source, see https://www.slideshare.net/soilandreyes/20130529-taverna-provenance-pptx-source/
Watch the videos at http://cloudify.co/webinars/tosca-training-videos
Getting up to speed with the TOSCA Simple Profile in YAML and its ARIA implementation.
Theory and OpenLDAP implementation
This material is distributed under a Creative Commons license (CC BY-SA 3.0 FR):
Attribution-ShareAlike 3.0 France
Learn more at www.opensourceschool.fr
Outline:
1. Introduction
2. Anatomy of an LDAP directory
3. OpenLDAP: an LDAP implementation
4. Lab: Install an OpenLDAP server
5. Working with LDAP servers
6. Extending LDAP
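As a taste of what the anatomy section covers, here is what a single directory entry looks like in LDIF form. The DN and attribute values below are invented for illustration; the attribute names come from the standard inetOrgPerson schema:

```ldif
dn: uid=jdoe,ou=people,dc=example,dc=org
objectClass: inetOrgPerson
uid: jdoe
cn: John Doe
sn: Doe
mail: jdoe@example.org
```

An entry is just a distinguished name (dn) plus a bag of typed attributes; the objectClass determines which attributes are allowed or required.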
Even though this is a trivial example, the advantages of Python stand out.
Yorktown’s Computer Science I course has no prerequisites, so many of the
students seeing this example are looking at their first program. Some of them
are undoubtedly a little nervous, having heard that computer programming is
difficult to learn. The C++ version has always forced me to choose between
two unsatisfying options: either to explain the #include, void main(), {, and
} statements and risk confusing or intimidating some of the students right at
the start, or to tell them, “Just don’t worry about all of that stuff now; we will
talk about it later,” and risk the same thing. The educational objectives at
this point in the course are to introduce students to the idea of a programming
statement and to get them to write their first program, thereby introducing
them to the programming environment. The Python program has exactly what
is needed to do these things, and nothing more.
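For concreteness, the entire Python version of that first program is the single statement below (rendered here in modern Python 3 syntax; the text being discussed predates Python 3):

```python
# The complete first program in Python: one statement, no boilerplate.
print("Hello, world!")
```

Every token in it is directly about the task at hand, which is exactly the point made above.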
Comparing the explanatory text of the program in each version of the book
further illustrates what this means to the beginning student. There are thirteen
paragraphs of explanation of “Hello, world!” in the C++ version; in the Python
version, there are only two. More importantly, the missing eleven paragraphs
do not deal with the “big ideas” in computer programming but with the minutiae
of C++ syntax. I found this same thing happening throughout the book.
Whole paragraphs simply disappear from the Python version of the text because
Python’s much clearer syntax renders them unnecessary.
Using a very high-level language like Python allows a teacher to postpone talking
about low-level details of the machine until students have the background that
they need to better make sense of the details. It thus creates the ability to put
“first things first” pedagogically. One of the best examples of this is the way in
which Python handles variables. In C++ a variable is a name for a place that
holds a thing. Variables have to be declared with types at least in part because
the size of the place to which they refer needs to be predetermined. Thus, the
idea of a variable is bound up with the hardware of the machine. The powerful
and fundamental concept of a variable is already difficult enough for beginning
students (in both computer science and algebra). Bytes and addresses do not
help the matter. In Python a variable is a name that refers to a thing. This
is a far more intuitive concept for beginning students and is much closer to the
meaning of “variable” that they learned in their math courses. I had much less
difficulty teaching variables this year than I did in the past, and I spent less
time helping students with problems using them.
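The difference can be made concrete in a few lines of Python; this sketch is illustrative, not taken from the book:

```python
x = 42        # "x" is a name that now refers to an int object
x = "hello"   # the same name can be rebound to a str; no type declaration,
              # because the name is not a fixed-size memory cell
y = x         # "y" refers to the same object as "x", not a copy in a new slot
assert y is x # both names refer to one and the same object
```

In C++, each of these lines would force a conversation about types, storage, and copying; in Python, "a name refers to a thing" is the whole story.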
Near Real-time Indexing of Kafka Messages to Apache Blur using Spark Streaming, by Dibyendu Bhattacharya
My presentation at the recently concluded Apache Big Data Conference Europe about the Reliable Low Level Kafka Spark Consumer I developed, and a use case of real-time indexing to Apache Blur using this consumer.
Do you know what your Drupal is doing? Observe it! (DrupalCon Prague 2022), by sparkfabrik
Our Drupal 8 websites are true applications, often very complex ones.
More and more workload is delegated to external systems, usually microservices, that are used for many different tasks.
Architectures are always more distributed and fragmented.
Tracing the lifecycle of a single request that originates in a client, passes through all Drupal subsystems, reaches external (micro)services and comes back will become mandatory in order to track down problems and to optimize for performance. This is often time-consuming and, without the right tools, may become very difficult.
A simple unstructured log stream isn't enough anymore; we need a way to observe the details of what is going on.
Observability is all about this, and it is based on structured logs, metrics and traces. In this talk we will see how to implement these techniques in Drupal: which tools and which modules to use to trace and log all requests that reach our website, and how to expose and display useful metrics.
We will integrate Drupal with OpenTelemetry, Monolog and Grafana to collect, scrape, store and visualize telemetry data.
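As a minimal illustration of what a structured log entry looks like (the talk itself targets Drupal/PHP tooling; this standalone Python sketch only demonstrates the idea, and the field names are invented):

```python
import json
import logging
import time

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line: a structured log."""
    def format(self, record):
        return json.dumps({
            "ts": time.time(),                              # machine-readable timestamp
            "level": record.levelname,
            "msg": record.getMessage(),
            "trace_id": getattr(record, "trace_id", None),  # ties the entry to a distributed trace
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("observability-demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("cache miss", extra={"trace_id": "abc123"})  # emits one parseable JSON line
```

Unlike a free-text message, every field here can be indexed and queried, which is what makes correlating logs with metrics and traces practical.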
Quickly re-publish CSV/TSV files from existing repositories as FAIR Data with just a few mouse clicks!
You select the columns to "project" as Linked Data, and the associated ontology terms. The FAIR Projector Builder will create a FAIR Projector for you: a Triple Pattern Fragment server to provide the Linked Data; a published DCAT Distribution containing metadata about those triples and their source; and an RML model (a syntactic and semantic description of the triples), to aid in third-party discovery of this novel projection.
(current status - first prototype, not ready for public consumption)
-------
Thanks to the NBDC/DBCLS for sponsoring the hackathon series.
MDW also funded by Ministerio de Economía y Competitividad grant number TIN2014-55993-RM
Princeton Dec 2022 Meetup: StreamNative and Cloudera Streaming, by Timothy Spann
https://www.meetup.com/new-york-city-apache-pulsar-meetup/
https://www.meetup.com/new-york-city-apache-pulsar-meetup/events/289674210/
|WHAT THE SESSION WILL COVER|
Apache NiFi
Apache Pulsar
Apache Flink
Flink SQL
We will show you how to build apps, so beforehand download the tools to Docker, Kubernetes, your laptop, or the cloud.
Cloudera CSP Setup
Getting Started with Cloudera Stream Processing Community Edition
You may download CSP-CE here:
Cloudera Stream Processing Community Edition
The Cloudera CDP User's page:
CDP Resources Page
https://youtu.be/s80sz3NWwHo
https://docs.cloudera.com/csp-ce/latest/index.html
https://www.cloudera.com/downloads/cdf/csp-community-edition.html
Apache Pulsar
https://pulsar.apache.org/docs/getting-started-standalone/
or
https://streamnative.io/free-cloud/
Cloudera + Pulsar
https://community.cloudera.com/t5/Cloudera-Stream-Processing-Forum/Using-Apache-Pulsar-with-SQL-Stream-Builder/m-p/349917
https://community.cloudera.com/t5/Community-Articles/Using-Apache-NiFi-with-Apache-Pulsar-for-Streaming/ta-p/337891
|AGENDA|
6:00 - 6:30 PM EST: Food, Drink, and Networking!!!
6:30 - 7:15 PM EST: Presentation - Tim Spann, StreamNative Developer Advocate
7:15 - 8:00 PM EST: Presentation - John Kuchmek, Cloudera Principal Solutions Engineer
8:00 - 8:30 PM EST: Round Table on Real-Time Streaming, Q&A
|ABOUT THE SPEAKERS|
John Kuchmek is a Principal Solutions Engineer for Cloudera. Before joining Cloudera, John worked on an Autonomous Intelligence team, where he was in charge of integrating platforms to allow data scientists to work with various types of data.
Tim Spann is a Developer Advocate for StreamNative. He works with StreamNative Cloud, Apache Pulsar™, Apache Flink®, Flink® SQL, Big Data, the IoT, machine learning, and deep learning. Tim has over a decade of experience with the IoT, big data, distributed computing, messaging, streaming technologies, and Java programming.
See:
https://www.meetup.com/new-york-city-apache-pulsar-meetup/events/283837865/
https://github.com/tspannhw/SpeakerProfile
https://www.meetup.com/futureofdata-newyork/
https://github.com/tspannhw/pulsar-transit-function
https://www.meetup.com/futureofdata-princeton/
https://github.com/tspannhw/create-nifi-pulsar-flink-apps
https://medium.com/@tspann/using-apache-pulsar-with-cloudera-sql-builder-apache-flink-b518aa9eadff
All resources for the meetup: https://github.com/tspannhw/meetups/blob/main/15December2022.md
Taverna workflows: provenance and reproducibility - STFC/NERC workshop 2013, by anpawlik
Slides on Taverna (www.taverna.org.uk) from the talk given at the STFC/NERC workshop "Workflow approaches to investigation of biological complexity", 15-16 October 2013.
bigdata 2022: FLiP Into Pulsar Apps
In this session, Timothy will introduce you to the world of Apache Pulsar and how to build real-time messaging and streaming applications with a variety of OSS libraries, schemas, languages, frameworks, and tools.
FLiP Into Pulsar Apps
08:30 – 09:15 • 23 Nov, 2022
Large scale preservation workflows with Taverna – SCAPE Training event, Guimarães, by SCAPE Project
Sven Schlarb of the Austrian National Library gave this introduction to large scale preservation workflows with Taverna at the first SCAPE Training event, ‘Keeping Control: Scalable Preservation Environments for Identification and Characterisation’, in Guimarães, Portugal on 6-7 December 2012.
nanopub-java: A Java Library for Nanopublications, by Tobias Kuhn
The concept of nanopublications was first proposed about six years ago, but it lacked openly available implementations. The library presented here is the first one that has become an official implementation of the nanopublication community. Its core features are stable, but it also contains unofficial and experimental extensions: for publishing to a decentralized server network, for defining sets of nanopublications with indexes, for informal assertions, and for digitally signing nanopublications. Most of the features of the library can also be accessed via an online validator interface.
Sustaining research software at the Apache Software Foundation
Presented at BOSC 2015, Dublin on 2015-07-11. http://www.open-bio.org/wiki/BOSC_2015
Source: http://slides.com/soilandreyes/20150611-bosc2015-apache
Presented 2014-10-30 at Taverna Open Development Workshop in Manchester http://dev.mygrid.org.uk/wiki/display/developer/Taverna+Open+Development+Workshop
Also available at http://slides.com/soilandreyes/2014-10-31-taverna-3-architecture#/
2014-10-30 Taverna 3 status
Presented at Taverna Open Development Workshop 2014 in Manchester.
http://dev.mygrid.org.uk/wiki/display/developer/Taverna+Open+Development+Workshop#TavernaOpenDevelopmentWorkshop-Day1-Thursday2014-10-30
Taverna is becoming an Apache Incubator project. What are the effects on Taverna as an open source project and its future development?
HTML version: http://slides.com/soilandreyes/2014-10-30-taverna-incubator/
Wiki version: http://dev.mygrid.org.uk/wiki/display/developer/Taverna+as+an+Apache+Incubator+project
Presented 2014-10-30 at Taverna Open Development Workshop http://dev.mygrid.org.uk/wiki/display/developer/Taverna+Open+Development+Workshop
OMEX Combine Archives as an example of Research Objects in the wild, and how to convert them to RO Bundles using http://dx.doi.org/10.5281/zenodo.10439
Source pptx:
https://onedrive.live.com/view.aspx?cid=37935FEEE4DF1087&resid=37935FEEE4DF1087!788&app=PowerPoint%20f
2013-07-19 myExperiment research objects, beyond workflows and packs (PPTX), by Stian Soiland-Reyes
Presentation at BOSC 2013 / ISMB 2013. (PowerPoint 2013 source)
PDF: https://www.slideshare.net/soilandreyes/2013-0719bosc-2013myexperimentresearchobjectsslides
See also poster at http://www.slideshare.net/soilandreyes/2013-0718bosc-2013myexperimentresearchobjectsposter-24242509 or
submitted abstract: https://docs.google.com/document/d/1jaAuPV-EnbsyI14L56HKHBQP7eDVfeXGLlK-LwohnWw/edit?usp=sharing
We have evolved Research Objects as a mechanism to preserve digital resources related to research, by providing mechanisms, formats and architecture for describing aggregated resources (hypothesis, workflow, datasets, scripts, services), their relations (is input for, explains, used by), provenance (graph was derived from dataset A, B and C) and attribution (who contributed what, and when?).
The website myExperiment is already popular for collaborating on, publishing and sharing scientific workflows. However, we have found that for understanding and preserving a workflow over time its definition is not enough, especially when faced with workflow decay and services and tools that change over time. We have therefore adapted the research object model as a foundation for the myExperiment packs, allowing uploading of workflow runs, inputs, outputs and other files relevant to the workflow, relating them with annotations, and we have integrated the Wf4Ever architecture for performing decay analysis and tracking a research object’s evolution as it and its constituent resources change over time.
Open Annotation Rollout, Manchester, 2013-06-25
See also PPTX version with Notes: http://www.slideshare.net/soilandreyes/2013-0624annotatingr-osopenannotationmeeting
Open Annotation Rollout, Manchester, 2013-06-25
See also PDF version: http://www.slideshare.net/soilandreyes/2013-0624annotatingr-osopenannotationmeeting-23289491
At "Metagenomics, metagenetics and Pylogenetic workflows for Ocean Sampling Day" Workshop
Max Planck Institute for Marine Microbiology, Bremen, Germany 2013-03-21
For PPTX source - download http://www.wf4ever-project.org/wiki/download/attachments/2064544/2013-03-21-OSD-Bremen-Stian-What+can+provenance+do+for+me.pptx
2012-03-28 Wf4ever, preserving workflows as digital research objects, by Stian Soiland-Reyes
Presented on 2012-03-28 at EGI Community Forum 2012, Munich.
http://www.wf4ever-project.org/
http://purl.org/wf4ever/model
http://cf2012.egi.eu/
https://www.egi.eu/indico/sessionDisplay.py?sessionId=66&confId=679#20120328
Presentation of Taverna from the UKOLN DevCSI "Workflow Tools" event in Bath, 2010-11-30
PDF version: http://www.slideshare.net/soilandreyes/taverna-workflow-management-system-2010-1130-bath-workflow-tools
http://taverna.org.uk/
http://www.ukoln.ac.uk/events/devcsi/workflow_tools/programme/index.html
http://devcsi.ukoln.ac.uk/
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deployment Firewall and DBOM, by James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing toolchains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Transcript: Selling digital books in 2024: Insights from industry leaders - T..., by BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Connector Corner: Automate dynamic content and events by pushing a button, by DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Accelerate your Kubernetes clusters with Varnish Caching, by Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl - ..., by DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Search and Society: Reimagining Information Access for Radical Futures, by Bhaskar Mitra
The field of information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse, explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Solutions Apricot) (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Smart TV Buyer Insights Survey 2024 (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
2011-07-06 SCUFL2 Poster - because a workflow is more than its definition (BOSC 2011)

Scufl2 - because a workflow is more than its definition
Stian Soiland-Reyes, Alan R Williams, Stuart Owen, David Withers and Carole Goble
School of Computer Science, University of Manchester, UK
{stian.soiland-reyes, alan.r.williams, stuart.owen, david.withers, carole.a.goble}@manchester.ac.uk

Project site: http://www.taverna.org.uk/
Source code: https://github.com/mygrid/scufl2 and http://taverna.googlecode.com/
License: GNU Lesser General Public License (LGPL) 2.1
- "Structured" zip file (can be unpacked to be exposed on the web)
- Self-documenting media type (as in OpenOffice ODF, ePub OCF, Adobe UCF) for tools like file(1) and mime magic
- Taverna's future workflow and data format
- Goal: simplify third-party reading, writing, annotating and extending
- Scufl2: specification, schema, ontology, Java API and conversion tool

Layout of the-workflow-bundle.scufl2:

- mimetype - application/vnd.taverna.scufl2.workflow-bundle
- META-INF/manifest.xml - manifest listing all resources in the bundle with their media types
- META-INF/container.xml - names the bundle's root file
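This layout can be produced with only the Python standard library; the sketch below is illustrative, with placeholder file contents rather than a valid Taverna workflow:

```python
import zipfile

MIME = "application/vnd.taverna.scufl2.workflow-bundle"

def write_bundle(path, resources):
    """Write a Scufl2-style 'structured' zip: following the ODF/UCF
    convention, the mimetype entry goes first and is stored uncompressed,
    so tools like file(1) and mime magic can sniff the media type."""
    with zipfile.ZipFile(path, "w") as z:
        z.writestr("mimetype", MIME, compress_type=zipfile.ZIP_STORED)
        for name, data in resources.items():
            z.writestr(name, data, compress_type=zipfile.ZIP_DEFLATED)

# Placeholder contents only; a real bundle holds the RDF/XML resources.
write_bundle("the-workflow-bundle.scufl2", {
    "META-INF/container.xml": "<!-- placeholder -->",
    "META-INF/manifest.xml": "<!-- placeholder -->",
    "workflowBundle.rdf": "<!-- placeholder -->",
})
```

Storing the mimetype entry first and uncompressed is what makes the media type readable from the raw bytes of the file, without unpacking the zip.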
META-INF/container.xml:

  <container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
    <rootfiles>
      <rootfile full-path="workflowBundle.rdf" media-type="application/rdf+xml" />
      <!-- Alternative representation:
      <rootfile full-path="workflowBundle.ttl" media-type="text/turtle" /> -->
    </rootfiles>
  </container>

META-INF/manifest.xml:

  <manifest:manifest
      xmlns:manifest="urn:oasis:names:tc:opendocument:xmlns:manifest:1.0">
    <manifest:file-entry manifest:media-type="application/vnd.taverna.scufl2.workflow-bundle"
        manifest:full-path="/"/>
    <!-- Standard resources -->
    <manifest:file-entry manifest:media-type="application/rdf+xml"
        manifest:full-path="workflowBundle.rdf"/>
    <manifest:file-entry manifest:media-type="application/rdf+xml"
        manifest:full-path="workflow/HelloWorld.rdf"/>
    <manifest:file-entry manifest:media-type="application/rdf+xml"
        manifest:full-path="profile/tavernaWorkbench.rdf"/>
    <manifest:file-entry manifest:media-type="application/rdf+xml"
        manifest:full-path="profile/tavernaServer.rdf"/>
    <!-- Any additional resources -->
    <manifest:file-entry manifest:media-type="application/rdf+xml"
        manifest:full-path="annotation/user_annotations.rdf"/>
    <manifest:file-entry manifest:media-type="application/rdf+xml"
        manifest:full-path="annotation/myExperiment-wf-765.rdf"/>
    <manifest:file-entry manifest:media-type="image/png"
        manifest:full-path="Thumbnails/thumbnail.png"/>
    <manifest:file-entry manifest:media-type="image/svg+xml"
        manifest:full-path="diagram/workflow/HelloWorld.svg"/>
  </manifest:manifest>

workflowBundle.rdf - definition of the workflow bundle; nested workflows are separate resources:

  <rdf:RDF xmlns="http://ns.taverna.org.uk/2010/scufl2#" xmlns:rdf=".."
      xmlns:rdfs=".." xmlns:xsi=".."
      xsi:schemaLocation="http://ns.taverna.org.uk/2010/scufl2#
        http://ns.taverna.org.uk/2010/scufl2/scufl2.xsd .."
      xsi:type="WorkflowBundleDocument" xml:base="./">
    <WorkflowBundle rdf:about="">
      <name>HelloWorld</name>
      <sameBaseAs rdf:resource=
        "http://ns.taverna.org.uk/2010/workflowBundle/28f7c..a0ef731/" />
      <mainWorkflow rdf:resource="workflow/HelloWorld/" />
      <workflow>
        <Workflow rdf:about="workflow/HelloWorld/">
          <rdfs:seeAlso rdf:resource="workflow/HelloWorld.rdf" />
        </Workflow> <!-- plus each nested <Workflow> -->
      </workflow>
      <mainProfile rdf:resource="profile/tavernaWorkbench/" />
      <profile>
        <Profile rdf:about="profile/tavernaWorkbench/">
          <rdfs:seeAlso rdf:resource="profile/tavernaWorkbench.rdf" />
        </Profile> <!-- plus optional alternative <Profile> -->
      </profile>
      <rdfs:seeAlso rdf:resource="annotation/user_annotations.rdf" />
      <rdfs:seeAlso rdf:resource="annotation/myExperiment-wf-765.rdf" />
    </WorkflowBundle>
  </rdf:RDF>

workflow/HelloWorld.rdf - definition of the workflow structure:

  <rdf:RDF xmlns="http://ns.taverna.org.uk/2010/scufl2#"
      xmlns:rdf=".." xmlns:owl=".." xmlns:rdfs=".." xmlns:xsi=".."
      xsi:schemaLocation="http://ns.taverna.org.uk/2010/scufl2#
        http://ns.taverna.org.uk/2010/scufl2/scufl2.xsd .."
      xsi:type="WorkflowDocument" xml:base="HelloWorld/">
    <Workflow rdf:about="">
      <name>HelloWorld</name>
      <workflowIdentifier
        rdf:resource="http://ns.taverna.org.uk/2010/workflow/006...c84e2ca/" />
      <inputWorkflowPort>
        <InputWorkflowPort rdf:about="in/yourName">
          <name>yourName</name>
          <portDepth>0</portDepth>
        </InputWorkflowPort>
      </inputWorkflowPort>
      <outputWorkflowPort><!-- .. --></outputWorkflowPort>
      <processor>
        <Processor rdf:about="processor/Hello/">
          <name>Hello</name>
          <inputProcessorPort><!-- .. --></inputProcessorPort>
          <outputProcessorPort>
            <OutputProcessorPort rdf:about="processor/Hello/out/greeting">
              <name>greeting</name>
              <portDepth>0</portDepth>
              <granularPortDepth>0</granularPortDepth>
            </OutputProcessorPort>
          </outputProcessorPort>
          <dispatchStack>
            <DispatchStack rdf:about="processor/wait4me/dispatchStack/">
              <rdf:type
                rdf:resource="http://ns.taverna.org.uk/2010/scufl2/taverna#defaultDispatchStack" />
            </DispatchStack>
          </dispatchStack>
          <iterationStrategyStack>
            <IterationStrategyStack rdf:about="processor/Hello/iterationstrategy/">
              <iterationStrategies rdf:parseType="Collection">
                <CrossProduct><!-- .. --></CrossProduct>
              </iterationStrategies>
            </IterationStrategyStack>
          </iterationStrategyStack>
        </Processor>
      </processor> <!-- plus other <processor>s -->
      <datalink>
        <DataLink
          rdf:about="datalink?from=processor/Hello/out/greeting&amp;to=out/results&amp;mergePosition=0">
          <receiveFrom rdf:resource="processor/Hello/out/greeting" />
          <sendTo rdf:resource="out/results" />
          <mergePosition>0</mergePosition>
        </DataLink>
      </datalink> <!-- plus more <datalink>s -->
      <control>
        <Blocking
          rdf:about="control?block=processor/Hello/&amp;untilFinished=processor/wait4me/">
          <block rdf:resource="processor/Hello/" />
          <untilFinished rdf:resource="processor/wait4me/" />
        </Blocking>
      </control>
    </Workflow>
  </rdf:RDF>

profile/tavernaWorkbench.rdf - a profile gives implementation bindings; alternative profiles can customize execution of workflow steps for different environments (e.g. desktop, server, cloud):

  <rdf:RDF xmlns="http://ns.taverna.org.uk/2010/scufl2#"
      xmlns:beanshell="http://ns.taverna.org.uk/2010/activity/beanshell#" xmlns:dc=".."
      xmlns:owl=".." xmlns:rdf=".." xmlns:rdfs=".." xmlns:xsi=".."
      xsi:schemaLocation="http://ns.taverna.org.uk/2010/scufl2#
        http://ns.taverna.org.uk/2010/scufl2/scufl2.xsd .."
      xsi:type="ProfileDocument" xml:base="tavernaWorkbench/">
    <Profile rdf:about="">
      <name>tavernaWorkbench</name>
      <processorBinding rdf:resource="processorbinding/Hello/" />
      <activateConfiguration rdf:resource="configuration/Hello/" />
    </Profile>
    <Activity rdf:about="activity/HelloScript/">
      <rdf:type rdf:resource="http://ns.taverna.org.uk/2010/activity/beanshell" />
      <name>HelloScript</name>
      <inputActivityPort>
        <InputActivityPort rdf:about="activity/HelloScript/in/personName">
          <name>personName</name>
          <portDepth>0</portDepth>
        </InputActivityPort>
      </inputActivityPort>
      <outputActivityPort><!-- .. --></outputActivityPort>
    </Activity>
    <ProcessorBinding rdf:about="processorbinding/Hello/">
      <name>Hello</name>
      <bindActivity rdf:resource="activity/HelloScript/" />
      <bindProcessor rdf:resource="../../workflow/HelloWorld/processor/Hello/" />
      <activityPosition>10</activityPosition>
      <inputPortBinding>
        <InputPortBinding rdf:about="processorbinding/Hello/in/name">
          <bindInputActivityPort rdf:resource="activity/HelloScript/in/personName" />
          <bindInputProcessorPort
            rdf:resource="../../workflow/HelloWorld/processor/Hello/in/name" />
        </InputPortBinding>
      </inputPortBinding>
      <outputPortBinding><!-- .. --></outputPortBinding>
    </ProcessorBinding>
    <Configuration rdf:about="configuration/Hello/">
      <rdf:type
        rdf:resource="http://ns.taverna.org.uk/2010/activity/beanshell#Config" />
      <name>Hello</name>
      <configure rdf:resource="activity/HelloScript/" />
      <beanshell:script>hello = "Hello, " + personName;
JOptionPane.showMessageDialog(null, hello);</beanshell:script>
    </Configuration>
  </rdf:RDF>

Notes from the poster:
- Hybrid of RDF/XML and XML Schema: if the xsi:type is given, documents can be parsed and generated as regular XML (using XPath, etc.); the schema ensures the document can still be parsed as RDF/XML as well. Pure RDF writers can omit the xsi:type.
- Every workflow part has a URI, allowing deep annotations.
- Relative references can also be made absolute using the sameBaseAs prefix; RDF clients resolving such URIs at http://ns.taverna.org.uk/2010/workflowBundle/ can be redirected to a generated RDF resource stating what is obvious from the URI.
- "Cool URI"-style relative references allow parsers to 'cheat' and pick out processor "Hello" and port "greeting", but only if the reference starts with the keyword paths out/, in/ or processor/.
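On the reading side, a consumer can locate the bundle's root file through META-INF/container.xml, reusing the ePub-OCF pattern above. A minimal sketch with the Python standard library:

```python
import io
import zipfile
import xml.etree.ElementTree as ET

OCF_NS = "urn:oasis:names:tc:opendocument:xmlns:container"

def find_root_file(bundle_bytes):
    """Return (full-path, media-type) of the first <rootfile> entry in
    META-INF/container.xml, the OCF convention that Scufl2 reuses."""
    with zipfile.ZipFile(io.BytesIO(bundle_bytes)) as z:
        container = ET.fromstring(z.read("META-INF/container.xml"))
    rootfile = container.find(f"{{{OCF_NS}}}rootfiles/{{{OCF_NS}}}rootfile")
    return rootfile.get("full-path"), rootfile.get("media-type")
```

The root file's path and media type, rather than a new zip format, signal what a given bundle primarily is; this is what lets the same container carry a workflow definition, a workflow run, or a data bundle.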
Resources, annotations, binaries:
The bundle (as a ZIP file) can be extended to include arbitrary resources. These could be referenced internally by workflow activities; they could be PDFs, spreadsheets, etc.

Wf4Ever (http://www.wf4ever-project.org/):
A Research Object (RO) captures enough data and provenance about results to make data and methods reproducible, verifiable, shareable, reusable and repeatable. The Scufl2 workflow run bundle forms such an RO for Taverna workflow results, with the goal of becoming an executable paper.

Provenance, inputs, outputs (http://www.mygrid.org.uk/):
Embedding the provenance, input and output data of a workflow run can provide example use and expected outputs. Changing the root file to point to the provenance makes the bundle a workflow run bundle, where the workflow definition is secondary. Similarly, a data bundle primarily represents the data, but by including the workflow and the provenance it also embeds information on how the data was generated.
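Changing the root file as described could look like the following variant of META-INF/container.xml; the path workflowrun.prov.rdf is a hypothetical name for illustration, not one defined by the poster:

```xml
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <!-- Hypothetical: the run's provenance, not the definition, is primary -->
    <rootfile full-path="workflowrun.prov.rdf" media-type="application/rdf+xml" />
    <!-- The workflow definition is still in the bundle, but secondary -->
  </rootfiles>
</container>
```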
Funded by: EPSRC EP/G026238/1; European Commission’s 7th FWP FP7-ICT-2007-6 270192; FP7-ICT-2007-6 270137