This document provides an overview of SPARQL, the query language for the Semantic Web. SPARQL allows querying RDF data by matching triple patterns and combining them with operations like optional and union patterns. Key features discussed include the anatomy of SPARQL queries, matching RDF literals and numerical values, filtering solutions, and defining datasets with the FROM clause. The document also covers SPARQL result forms and resources for learning more about SPARQL implementations and extensions.
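To make those features concrete, here is a minimal, illustrative sketch (my own, not taken from the slides) that runs a SPARQL query with a basic triple pattern, an OPTIONAL clause, and a FILTER over a small in-memory RDF graph using the Python rdflib library; the data and names (ex:alice, foaf:name) are invented for illustration.

```python
from rdflib import Graph

g = Graph()
g.parse(data="""
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    @prefix ex:   <http://example.org/> .
    ex:alice foaf:name "Alice" ; foaf:age 42 .
    ex:bob   foaf:name "Bob" .
""", format="turtle")

query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name ?age WHERE {
    ?person foaf:name ?name .           # basic triple pattern
    OPTIONAL { ?person foaf:age ?age }  # optional pattern: ?age may stay unbound
    FILTER (!bound(?age) || ?age > 18)  # filter solutions on a numeric value
}
"""
for row in g.query(query):
    print(row.name, row.age)
```

Bob matches because ?age is left unbound by the OPTIONAL pattern; Alice matches because 42 > 18.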
OWL stands for Web Ontology Language
OWL is built on top of RDF
OWL is for processing information on the web
OWL was designed to be interpreted by computers
OWL was not designed to be read by people
OWL is written in XML
OWL has three sublanguages
- OWL Lite, OWL DL, OWL Full
OWL is a W3C standard
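As a small illustration of the bullets above, the sketch below (a toy example of mine, not from the slides) declares two OWL classes and an object property in Turtle, then serializes the same graph as RDF/XML with rdflib, showing the machine-oriented XML form the bullets refer to; the ex: names are invented.

```python
from rdflib import Graph

owl_doc = """
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/> .

ex:Person     a owl:Class .
ex:Student    a owl:Class ; rdfs:subClassOf ex:Person .
ex:enrolledIn a owl:ObjectProperty .
"""

g = Graph()
g.parse(data=owl_doc, format="turtle")
print(g.serialize(format="xml"))  # the RDF/XML serialization of the same axioms
```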
Talk given for the #phpbenelux user group, March 27th in Gent (BE), with the goal of convincing developers who are used to building php/mysql apps to broaden their horizons when adding search to their site. Be sure to also have a look at the notes for the slides; they explain some of the screenshots, etc.
An accompanying blog post about this subject can be found at http://www.jurriaanpersyn.com/archives/2013/11/18/introduction-to-elasticsearch/
ElasticSearch introduction talk: an overview of the API, functionality, and use cases. What can be achieved, and how do you scale? What is Kibana, and how can it benefit your business?
The talk covers how Elasticsearch, Lucene and, to some extent, search engines in general actually work under the hood. We'll start at the "bottom" (or close enough!) of the many abstraction levels, and gradually move upwards towards the user-visible layers, studying the various internal data structures and behaviors as we ascend. Elasticsearch provides APIs that are very easy to use, and they will get you started and take you far without much effort. However, to get the most out of it, it helps to have some knowledge about the underlying algorithms and data structures. This understanding enables you to make full use of its substantial set of features, so that you can improve your users' search experiences while keeping your systems performant, reliable and updated in (near) real time.
What is TensorFlow? | Introduction to TensorFlow | TensorFlow Tutorial For Be... (Simplilearn)
This presentation on TensorFlow will help you understand what exactly TensorFlow is and how it is used in Deep Learning. TensorFlow is a software library developed by Google for the purposes of conducting machine learning and deep neural network research. In this tutorial, you will learn the fundamentals of TensorFlow concepts, functions, and operations required to implement deep learning algorithms and leverage data like never before. This TensorFlow tutorial is ideal for beginners who want to pursue a career in Deep Learning. Now, let us dive into this TensorFlow tutorial and understand what TensorFlow actually is and how to use it.
The following topics are explained in this TensorFlow presentation (a minimal code sketch follows the list):
1. What is Deep Learning?
2. Top Deep Learning Libraries
3. Why TensorFlow?
4. What is TensorFlow?
5. What are Tensors?
6. What is a Data Flow Graph?
7. Program Elements in TensorFlow
8. Use case implementation using TensorFlow
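For readers who want a first feel for tensors and data flow graphs before watching the full tutorial, here is a minimal TensorFlow 2.x sketch of my own (not taken from the Simplilearn deck): tf.function traces the Python function into a graph that consumes constant tensors.

```python
import tensorflow as tf

# Tensors: typed, multi-dimensional arrays
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 0.0], [0.0, 1.0]])

@tf.function            # traces the computation into a data flow graph
def matmul_then_add(x, y):
    return tf.matmul(x, y) + y

print(matmul_then_add(a, b))  # executes the traced graph
```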
Simplilearn's Deep Learning course will transform you into an expert in deep learning techniques using TensorFlow, the open-source software library designed to conduct machine learning and deep neural network research. With our deep learning course, you'll master deep learning and TensorFlow concepts, learn to implement algorithms, build artificial neural networks, and traverse layers of data abstraction to understand the power of data, preparing you for your new role as a deep learning scientist.
Why Deep Learning?
TensorFlow is one of the most popular software platforms used for deep learning and contains powerful tools to help you build and implement artificial neural networks.
You can gain in-depth knowledge of Deep Learning by taking our Deep Learning certification training course. With Simplilearn’s Deep Learning course, you will prepare for a career as a Deep Learning engineer as you master concepts and techniques including supervised and unsupervised learning, mathematical and heuristic aspects, and hands-on modeling to develop algorithms. Those who complete the course will be able to:
1. Understand the concepts of TensorFlow, its main functions, operations and the execution pipeline
2. Implement deep learning algorithms, understand neural networks and traverse the layers of data abstraction which will empower you to understand data like never before
3. Master and comprehend advanced topics such as convolutional neural networks, recurrent neural networks, training deep networks and high-level interfaces
4. Build deep learning models in TensorFlow and interpret the results
5. Understand the language and fundamental concepts of artificial neural networks
6. Troubleshoot and improve deep learning models
7. Build your own deep learning project
8. Differentiate between machine learning, deep learning and artificial intelligence
Learn more at: https://www.simplilearn.com
In this presentation, we are going to discuss how Elasticsearch handles operations like insert, update, and delete. We will also cover what an inverted index is and how segment merging works.
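As a rough intuition for the inverted index mentioned above (a toy sketch, not how Lucene actually stores postings), each term is mapped to the set of documents that contain it:

```python
from collections import defaultdict

docs = {
    1: "sparql queries rdf graphs",
    2: "elasticsearch indexes json documents",
    3: "rdf graphs store triples",
}

inverted_index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():           # trivial whitespace "analyzer"
        inverted_index[term].add(doc_id)

print(sorted(inverted_index["rdf"]))    # -> [1, 3]: documents containing "rdf"
```

Lucene keeps such postings in immutable segments, which is why updates are handled as delete-plus-reinsert and segments are periodically merged.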
Cost-based Query Optimization in Apache Phoenix using Apache Calcite (Julian Hyde)
This talk, given by Maryann Xue and Julian Hyde at Hadoop Summit, San Jose on June 30th, 2016, describes how we re-engineered Apache Phoenix with a cost-based optimizer based on Apache Calcite.
Apache Phoenix has rapidly become a workhorse in many organizations, providing a convenient standard SQL interface to HBase suitable for a wide variety of workloads from transactions to ETL and analytics. But Phoenix's initial query optimizer was based on static optimization procedures and thus could not choose between several potential plans or indices based on cost metrics.
We describe how we rebuilt Phoenix's parser and query optimizer using the Calcite framework, improving Phoenix's performance and SQL compliance. The new architecture uses relational algebra as an intermediate language, which makes it possible to swap in other engines, especially those also based on Calcite. As an example, we demonstrate querying a Phoenix database via Apache Drill.
Apache Spark Data Source V2, with Wenchen Fan and Gengliang Wang (Databricks)
As a general computing engine, Spark can process data from various data management/storage systems, including HDFS, Hive, Cassandra and Kafka. For flexibility and high throughput, Spark defines the Data Source API, which is an abstraction of the storage layer. The Data Source API has two requirements.
1) Generality: support reading/writing most data management/storage systems.
2) Flexibility: customize and optimize the read and write paths for different systems based on their capabilities.
Data Source API V2 is one of the most important features coming with Spark 2.3. This talk will dive into the design and implementation of Data Source API V2, with a comparison to Data Source API V1. We also demonstrate how to implement a file-based data source using Data Source API V2 to show its generality and flexibility.
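Custom Data Source V2 implementations are written against the Java/Scala interfaces, but from the user's side any source, built-in or custom, is addressed the same way. A minimal PySpark sketch of that consumer side (the file name is a placeholder of mine):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dsv2-demo").getOrCreate()

# A built-in file-based source; a custom V2 source would be selected the same
# way via .format() with its short name or fully qualified class name.
df = spark.read.format("json").load("events.json")  # placeholder path
df.printSchema()
df.show(5)
```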
What is Apache Spark | Apache Spark Tutorial For Beginners | Apache Spark Tra... (Edureka!)
This Edureka "What is Spark" tutorial will introduce you to the big data analytics framework Apache Spark. This tutorial is ideal for both beginners and professionals who want to learn or brush up on their Apache Spark concepts. Below are the topics covered in this tutorial (a simplified PySpark sketch follows the list):
1) Big Data Analytics
2) What is Apache Spark?
3) Why Apache Spark?
4) Using Spark with Hadoop
5) Apache Spark Features
6) Apache Spark Architecture
7) Apache Spark Ecosystem - Spark Core, Spark Streaming, Spark MLlib, Spark SQL, GraphX
8) Demo: Analyze Flight Data Using Apache Spark
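As a flavour of the kind of analysis in the demo (topic 8), here is a simplified PySpark sketch of my own; the file name and column names (Origin, ArrDelay) are assumptions, not the actual dataset used in the Edureka demo.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("flight-data-demo").getOrCreate()

# Assumed CSV with columns such as Origin, Dest, ArrDelay
flights = spark.read.csv("flights.csv", header=True, inferSchema=True)

(flights
    .groupBy("Origin")
    .agg(F.avg("ArrDelay").alias("avg_arrival_delay"))
    .orderBy(F.desc("avg_arrival_delay"))
    .show(10))
```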
Data retrieval language for the web (SPARQL) (Hugo Peña)
Simple Protocol and RDF Query Language, the W3C query language for the Semantic Web. It defines the syntax and semantics needed to express a query over an RDF graph and the different forms of results obtained.
This talk features the basics behind the science of Information Retrieval, with a story-mode introduction to information and its various aspects. It then takes you on a quick journey through the process of building a search engine.
Broad introduction to information retrieval and web search, used for teaching at the Yahoo Bangalore Summer School 2013. Slides are a mash-up from my own and other people's presentations.
The Semantic Web: the new generation of portals (Infotec - SemanticWebBuilder...) (Víctor Jesús Hernández)
The Web has profoundly changed the way we communicate, do business and carry out our work. Communicating with practically anyone, anywhere, at low cost is possible today; at the same time, however, the Web has also given rise to its main problems:
Information overload
Heterogeneity of information sources
Unclassified information
Ambiguity and lack of precision
Interoperability problems.
The evolution of the Web can be divided into three main levels:
Web 1.0 is a network that allows reading
Web 2.0 is an extension that allows reading and writing, giving users an active role
Web 3.0 is an extension of Web 2.0 intended to add meaning to the Web, which is why it is called the Semantic Web
The first step towards Web 3.0 is the birth of the Data Web, made possible by the recent arrival of SPARQL, which provides a standardised language and an API for searching across databases on the network, together with the use of RDF and OWL to classify and organise information.
"In 2012, 80% of public websites will use some level of semantic hypertext to create Semantic Web documents [...] 15% of public websites will use broader Semantic Web-based ontologies to create semantic databases" ("Finding and exploiting value in semantic web technologies on the web", Gartner research report, May 2007)
Sesam4 project presentation sparql - april 2011 (Robert Engels)
This slide set is provided by the SESAM4 consortium as one of three Technology Primers on Semantic Web technology. This Primer is on SPARQL and gives you a short introduction to its constructs, followed by some examples. You can find the accompanying slide set on YouTube.
Sesam4 project presentation sparql - april 2011 (sesam4able)
This slide set is provided by the SESAM4 consortium as one of three Technology Primers on Semantic Web technology. This Primer is on SPARQL and gives you a short introduction to its constructs, followed by some examples. You can find the accompanying slide set on YouTube under SESAM4.
This is lecture note #10 for my class at the Graduate School of Yonsei University, Korea.
It describes SPARQL, which is used to retrieve and manipulate data stored in Resource Description Framework (RDF) format.
This presentation looks in detail at SPARQL (SPARQL Protocol and RDF Query Language) and introduces approaches for querying and updating semantic data. It covers the SPARQL algebra and the SPARQL protocol, and provides examples for reasoning over Linked Data. We use examples from the music domain, which can be directly tried out and run over the MusicBrainz dataset. This includes gaining some familiarity with the RDFS and OWL languages, which allow developers to formulate generic and conceptual knowledge that can be exploited by automatic reasoning services in order to enhance the power of querying.
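The SPARQL protocol itself boils down to an HTTP request carrying the query text. The sketch below is my own and uses the public DBpedia endpoint rather than a MusicBrainz service, assuming the dbo:Band class; it asks for a few bands and requests the standard SPARQL JSON results format.

```python
import requests

ENDPOINT = "https://dbpedia.org/sparql"   # any SPARQL 1.1 endpoint works here

query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?band WHERE { ?band a dbo:Band } LIMIT 5
"""

# SPARQL Protocol: the query is sent as a form parameter over HTTP POST
response = requests.post(
    ENDPOINT,
    data={"query": query},
    headers={"Accept": "application/sparql-results+json"},
)
for binding in response.json()["results"]["bindings"]:
    print(binding["band"]["value"])
```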
In this webinar Thomas Cook, Sales Director, AnzoGraph DB, provides a history lesson on the origins of SPARQL, including its roots in the Semantic Web, and how linked open data is used to create Knowledge Graphs. Then, he dives into "What is RDF?", "What is a URI?" and "What is SPARQL?", wrapping up with a real-world demonstration via a Zeppelin notebook.
SPARQL introduction and training (130+ slides with exercises) (Thomas Francart)
Full SPARQL training
Covers all of SPARQL: basic graph patterns, FILTERs, functions, property paths, OPTIONAL, negation, assignment, aggregation, subqueries, and federated queries.
Does not, however, cover SPARQL Update.
Includes exercises on DBpedia (a query in that spirit is sketched below).
CC BY license
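In the spirit of those exercises, here is an illustrative query of my own combining a property path with aggregation over DBpedia (the dbo/rdfs names are the usual ones; adjust as needed):

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
PREFIX dbo:  <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?class (COUNT(?s) AS ?instances) WHERE {
    ?s a ?class .
    ?class rdfs:subClassOf* dbo:Place .   # property path over the class hierarchy
}
GROUP BY ?class                           # aggregation
ORDER BY DESC(?instances)
LIMIT 10
""")

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["class"]["value"], row["instances"]["value"])
```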
Introduction to metadata cleansing using SPARQL Update queries (European Commission)
What is it about?
-How to transform your metadata using simple SPARQL Update queries (see the sketch after this list);
-How to conform to the ADMS-AP to get your interoperability solutions ready to be shared on Joinup;
-The main types of errors that you could face when uploading metadata of interoperability solutions on Joinup.
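A minimal sketch of the first point, using rdflib and an invented ex: resource: a DELETE/INSERT update rewrites a legacy dc:title into the dct:title that a profile such as ADMS-AP typically expects (the concrete mapping is illustrative, not taken from the webinar).

```python
from rdflib import Graph

g = Graph()
g.parse(data="""
    @prefix dc: <http://purl.org/dc/elements/1.1/> .
    @prefix ex: <http://example.org/> .
    ex:solution1 dc:title "Core Vocabulary" .
""", format="turtle")

# SPARQL Update: move the legacy dc:title onto dct:title in one pass
g.update("""
    PREFIX dc:  <http://purl.org/dc/elements/1.1/>
    PREFIX dct: <http://purl.org/dc/terms/>
    DELETE { ?s dc:title ?t }
    INSERT { ?s dct:title ?t }
    WHERE  { ?s dc:title ?t }
""")
print(g.serialize(format="turtle"))
```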
Data streams take many forms and their velocity is hard to tame. They can be myriads of tiny flows that you can collect and tame with Time-series Databases; continuous massive flows that you cannot stop, to be tamed with Data Stream Management Systems; continuous numerous flows that can turn into a torrent, to be tamed with Event-based Systems; and myriads of continuous flows of any size and speed that form an immense delta, to be tamed with Event-Driven Architectures. Enjoy this introductory talk!
This is the presentation that I did for PoliMI Data Scientists on Stream Reasoning, an approach to blend Artificial Intelligence and Stream Processing.
While the state of the art in Machine Learning offers practitioners effective techniques to deal with static data sets, there are only academic results tailored to data streams. In this presentation for the 4th Stream Reasoning workshop, I report on an effort by Alessio Bernardo (a student of mine) to set up a benchmark environment to (i) repeat academic results, (ii) perform studies on real data to confirm the academic results, and (iii) study the research problem of "incremental rebalancing learning on evolving data streams".
HiPPO and Flipism are no longer the only ways to make decisions. In the Big Data / Data Science era one can dream of a data-driven organization. If data were "oil", Big Data technologies extract, transport, and store it, while Data Science methods provide a way to "refine the crude oil". This presentation elaborates on the Ws (What, Why, When, Who and How) of Big Data and Data Science.
From the semantic interoperability problem to Google's knowledge graph, by way of the Semantic Web, Linked Data, Yahoo! SearchMonkey, Facebook Open Graph, and schema.org.
La Città dei Balocchi, with its lights, is a key event in Lombardy's Christmas tourism offering. The presentation reports the results of an analysis of who came and when.
Produced by Fluxedo srl and Olivetti spa for Consorzio Como Turistica, in collaboration with Politecnico di Milano, TIM and the Municipality of Como, in the context of the CrowdInsights project funded by EIT Digital.
Stream Reasoning: a summary of ten years of research and a vision for the nex... (Emanuele Della Valle)
Stream reasoning studies the application of inference techniques to data characterised by being highly dynamic. It can find application in several settings, from Smart Cities to Industry 4.0, from the Internet of Things to Social Media analytics. This year stream reasoning turns ten, and this talk analyses its growth. The first part traces the main results obtained so far by presenting the most prominent studies. It starts with an overview of the most relevant studies developed in the context of the Semantic Web, and then extends the analysis to include contributions from adjacent areas, such as databases and artificial intelligence. Looking at the past is useful to prepare for the future: the second part presents a set of open challenges and issues that stream reasoning will face in the near future.
Stream reasoning: an approach to tame the velocity and variety dimensions of ... (Emanuele Della Valle)
Big Data tech can tame volume and velocity. Taming variety in the presence of volume and velocity is the real challenge. I've been working on taming variety and velocity simultaneously (Stream Reasoning) for 10 years now. In this talk, I give you some examples of application domains where this is necessary. I explain how far the Stream Reasoning community has got in theory, applications and products. In particular I focus on my applications and my startup Fluxedo, which is offering real-time social media analytics across social networks. I conclude the talk discussing what comes next: 1) the need to focus on languages and abstractions able to easily capture user needs; 2) the need to find the sweet spot between scalability and expressive semantics; 3) the need to use semantics to model more than the data access; and 4) the need to get over imperfect data. If you are excited, I did my job for today!
Everybody talks about Big Data, but why? Does it create value? Does it enable paradigm shifts in the way we work with data? This talk, which I gave at the ComoNext research and technology park, casts some light on those questions.
Listening to the pulse of our cities with Stream Reasoning (and few more tech... (Emanuele Della Valle)
The digital reflection of our cities is sharpening and it is tracking their evolution with a decreasing delay. However, we risk that data piles up without easing decision making. This keynote, which I gave at the 12th Semantic Web Summer School, presents how stream reasoning (an approach to tame the variety and velocity dimensions of Big Data simultaneously) and advanced visual analytics can support decision makers, and discusses the lessons learnt.
The fourth lecture of the course I'm giving on "Interoperability and Semantic Technologies" at Politecnico di Milano in the academic year 2015-16. It presents an introduction to RDF. It starts by presenting the data model, then presents the Turtle serialization and compares XML vs. RDF. Finally, it provides some information about RDFa and Linked Data.
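As a pocket-sized companion to that lecture, the sketch below (mine, with an invented ex: resource) parses a couple of Turtle statements and prints them back as raw subject-predicate-object triples, the data model the lecture starts from.

```python
from rdflib import Graph

turtle_doc = """
@prefix ex:   <http://example.org/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

ex:polimi a foaf:Organization ;
          foaf:name "Politecnico di Milano" .
"""

g = Graph()
g.parse(data=turtle_doc, format="turtle")
for s, p, o in g:                 # the RDF data model: plain triples
    print(s, p, o)
```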
The third lecture of the course I'm giving on "Interoperability and Semantic Technologies" at Politecnico di Milano in the academic year 2015-16. It presents an introduction to the Semantic Web, taking a brief walk through 15 years of research, standardisation and industrial uptake.
The second lecture of the course I'm giving on "Interoperability and Semantic Technologies" at Politecnico di Milano in the academic year 2015-16. It discusses interoperability using HL7 v2 and v3 as examples of syntactic and semantic interoperability, respectively.
Stream reasoning: mastering the velocity and the variety dimensions of Big Da... (Emanuele Della Valle)
More and more applications require real-time processing of heterogeneous data streams. In terms of the "Vs" of Big Data (volume, velocity, variety and veracity), they require addressing velocity and variety at the same time. Big Data solutions able to handle velocity and variety separately have been around for a while, but only Stream Reasoning approaches those two dimensions at once. Current results in the Stream Reasoning field are relevant for application areas that need to handle massive datasets, process data streams on the fly, cope with heterogeneous, incomplete and noisy data, provide reactive answers, support fine-grained information access, and integrate complex domain models. Starting from those requirements, this talk frames the problem addressed by Stream Reasoning. It poses the research question and operationalises it with four simpler sub-questions. It describes how the database group of Politecnico di Milano positively answered those sub-questions in the last 7 years of research. It briefly surveys alternative approaches investigated by other research groups worldwide and elaborates on current limitations and open challenges.
The 10-minute presentation I gave at my PhD defence on 21.9.2015 in Amsterdam. Prof. Frank van Harmelen was my promoter. Prof. Ian Horrocks, prof. Manfred Hauswirth, prof. Geert-Jan Houben, Peter Boncz and prof. Guus Schreiber were my opponents.
Listening to the pulse of our cities fusing Social Media Streams and Call Dat... (Emanuele Della Valle)
The digital reflection of our cities is sharpening and it is tracking their evolution with a decreasing delay. This happens thanks to the pervasive deployment of sensors, the wide adoption of smartphones, the use of (location-based) social networks and the availability of datasets about the urban environment. So while data becomes more abundant every day, decision makers face the challenge of increasing their capability to create value from the analysis of this data. This keynote presents how advanced visual analytics, ontology-based data access and information flow processing methods can help in making sense of Social Media Streams and Call Data Records from Mobile Network Operators during city-scale events. Real-world deployments demonstrate the ability of those methods to advance our capacity to feel the pulse of our cities in order to deliver innovative services.
There is a way of telling the story of an event that passes through reading the social streams it generates: the digital trace each participant leaves on social networks when sharing their participation or their opinion. It is possible to fuse and interpret these traces in real time using cutting-edge analysis technologies and advanced data visualization models. In 2014, in collaboration with StudioLabo and Telecom Italia, Politecnico di Milano built CitySensing to show the footprint left by FuoriSalone on social networks. Later, by focusing CitySensing on the needs of event organisers, Politecnico di Milano demonstrated the potential of the approach for the Festival della Comunicazione in Camogli and the Festival delle Letterature in Pescara. The solution is now offered by Fluxedo.
There is also a way of telling the story of a city that passes through reading the data flows it generates: the digital traces each of us leaves every time we perform a small everyday gesture, such as making a phone call or sending a tweet.
In City Data Fusion, Politecnico di Milano and Telecom Italia tell the story of cities by fusing, interpreting and visualizing Big Data, that is, the enormous and continuous flow of digital traces that their inhabitants and visitors leave when using their smartphones or the city's services.
This presentation introduces you to observing some Italian cities from a new perspective.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I was wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our lovely cloud native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you with a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could be beneficial for or limiting your AI use cases in an enterprise environment. An interactive demo will give you some insights into which approaches I already have working for real.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
GraphRAG is All You need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation, however, takes real work: it takes vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar