This document summarizes research analyzing metadata from over 630,000 learning objects described with the Learning Object Metadata (LOM) standard. The analysis found that LOM instances occupy around 5 KB of storage on average. Only 20 of LOM's 50 elements are used frequently, and they capture information similar to the Dublin Core standard. The educational elements are underused, and their use varies between communities. Validation showed that LOM's XML structure is implemented loosely, yet interoperability remains good despite unclear value spaces. Metadata quality varies widely, showing the need for quality-assurance processes. The conclusion advocates making studies of this kind an integral part of improving metadata standards and learning technologies.
EKAW 2016 - TechMiner: Extracting Technologies from Academic Publications (Francesco Osborne)
In recent years we have seen the emergence of a variety of scholarly datasets. Typically these capture ‘standard’ scholarly entities and their connections, such as authors, affiliations, venues, publications, citations, and others. However, as the repositories grow and the technology improves, researchers are adding new entities to these repositories to develop a richer model of the scholarly domain. In this paper, we introduce TechMiner, a new approach that combines NLP, machine learning, and semantic technologies to mine technologies from research publications and generate an OWL ontology describing their relationships with other research entities. The resulting knowledge base can support a number of tasks, such as: richer semantic search, which can exploit the technology dimension to support better retrieval of publications; richer expert search; monitoring the emergence and impact of new technologies, both within and across scientific fields; studying the scholarly dynamics associated with the emergence of new technologies; and others.
TechMiner was evaluated on a manually annotated gold standard; the results indicate that it significantly outperforms alternative NLP approaches and that its semantic features yield substantial improvements in both recall and precision.
Annotopia open annotation services platform (Tim Clark)
Annotopia is an open-access, open-source, open annotation services platform developed for scientific annotation of documents and datasets on the web using the W3C Open Annotation model (http://www.openannotation.org/spec/core/).
Using Annotopia, virtually any client application, including lightweight web clients, can create, selectively share, and access annotations of web documents and data. This can be done regardless of the ownership of the base objects being annotated.
Annotopia supports unstructured, semi-structured and fully-structured (semantic) annotation; manual and automated (text-mining) annotation; and permissions, groups, and sharing. It also provides access to specialized vocabulary and text analytics services.
Annotopia is an open source platform licensed under Apache 2.0.
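To make the model concrete, here is a minimal sketch of the kind of annotation payload a client might assemble under the W3C Open Annotation core model; the body/target structure follows the OA vocabulary, while the IRIs, names, and body text are illustrative placeholders, and serializing to stdout stands in for POSTing to a server.

```python
import json

# A minimal annotation under the W3C Open Annotation core model: a plain-text
# comment (body) attached to a web resource (target). IRIs, names, and the
# body text are illustrative placeholders.
annotation = {
    "@context": "http://www.w3.org/ns/oa-context-20130208.json",
    "@type": "oa:Annotation",
    "hasBody": {
        "@type": ["cnt:ContentAsText", "dctypes:Text"],
        "chars": "This figure overstates the effect size.",
    },
    "hasTarget": "http://example.org/papers/123/figure2",
    "annotatedBy": {"@type": "foaf:Person", "foaf:name": "Jane Doe"},
}

# A client would POST this JSON-LD to an Annotopia annotation endpoint;
# here we just serialize it.
print(json.dumps(annotation, indent=2))
```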
Automatic Classification of Springer Nature Proceedings with Smart Topic Miner (Francesco Osborne)
The process of classifying scholarly outputs is crucial to ensure timely access to knowledge. However, this process is typically carried out manually by expert editors, leading to high costs and slow throughput. In this paper we present Smart Topic Miner (STM), a novel solution which uses semantic web technologies to classify scholarly publications on the basis of a very large automatically generated ontology of research areas. STM was developed to support the Springer Nature Computer Science editorial team in classifying proceedings in the LNCS family. It analyses in real time a set of publications provided by an editor and produces a structured set of topics and a number of Springer Nature classification tags, which best characterise the given input. In this paper we present the architecture of the system and report on an evaluation study conducted with a team of Springer Nature editors. The results of the evaluation, which showed that STM classifies publications with a high degree of accuracy, are very encouraging and as a result we are currently discussing the required next steps to ensure large-scale deployment within the company.
Transforming repositories: from repository managers to institutional data man... (JISC KeepIt project)
The last decade has seen support for digital preservation transformed. There are now a multitude of organisations, training courses, and software development tools to help guide managers of digital data towards preservation decisions and solutions. But how well do these approaches understand the needs and requirements of users? This presentation was given at ECA 2010, a conference for digital archiving professionals. But not everyone can be a digital archiving specialist. At a time of exploding volumes of digital content, especially on the Web, many non-specialists need help in preserving digital content. The presentation looks at the applicability and practicality of all this support for one class of user, digital repositories, and in particular institutional repositories (IRs) and their managers. We report on a course on digital preservation tools, designed by repository managers as part of the JISC KeepIt project. Positive feedback from the evaluations of this course has shown that the emergence of the tools it uses is a great story for digital preservation.
The ontology engineering research community has focused for many years on supporting the creation, development and evolution of ontologies. Ontology forecasting, which aims at predicting semantic changes in an ontology, represents instead a new challenge. In this paper, we contribute to this novel endeavour by focusing on the task of forecasting semantic concepts in the research domain. Indeed, ontologies representing scientific disciplines contain only research topics that are already popular enough to be selected by human experts or automatic algorithms. They are thus unfit to support tasks which require the ability of describing and exploring the forefront of research, such as trend detection and horizon scanning. We address this issue by introducing the Semantic Innovation Forecast (SIF) model, which predicts new concepts of an ontology at time t+1, using only data available at time t. Our approach relies on lexical innovation and adoption information extracted from historical data. We evaluated the SIF model on a very large dataset consisting of over one million scientific papers belonging to the Computer Science domain: the outcomes show that the proposed approach offers a competitive boost in mean average precision-at-ten compared to the baselines when forecasting over 5 years.
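The headline metric, mean average precision-at-ten (MAP@10), is worth pinning down; below is a minimal, self-contained sketch of how it is computed over ranked forecast lists. The sample rankings and relevant sets are invented for illustration, and the normalization by min(|relevant|, k) is one common convention.

```python
def average_precision_at_k(ranked, relevant, k=10):
    """AP@k: average of the precision values at each rank where a hit occurs."""
    hits, score = 0, 0.0
    for i, item in enumerate(ranked[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / i  # precision at this rank
    return score / min(len(relevant), k) if relevant else 0.0

def map_at_k(runs, k=10):
    """Mean AP@k over several (ranking, relevant-set) pairs."""
    return sum(average_precision_at_k(r, rel, k) for r, rel in runs) / len(runs)

# Invented example: forecast concept lists vs. concepts that actually emerged.
runs = [
    (["deep learning", "sdn", "mooc", "ldpc"], {"deep learning", "mooc"}),
    (["smart grid", "nfc", "gpu computing"], {"gpu computing"}),
]
print(round(map_at_k(runs, k=10), 3))
```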
Supporting Springer Nature Editors by means of Semantic Technologies (Francesco Osborne)
The Open University and Springer Nature have been collaborating since 2015 in the development of an array of semantically-enhanced solutions supporting editors in i) classifying proceedings and other editorial products with respect to the relevant research areas and ii) taking informed decisions about their marketing strategy. These solutions include i) the Smart Topic API, which automatically maps keywords associated with published papers to semantically characterized topics, which are drawn from a very large and automatically-generated ontology of Computer Science topics; ii) the Smart Topic Miner, which helps editors to associate scholarly metadata to books; and iii) the Smart Book Recommender, which assists editors in deciding which editorial products should be marketed in a specific venue.
exFrame: a Semantic Web Platform for Genomics Experiments (Tim Clark)
Slides from a talk given at Bio-ontologies 2013, Berlin, DE, 20 July 2013.
Emily Merrill*, Stephane Corlosquet*, Paolo Ciccarese†*, Tim Clark*†‡, Sudeshna Das†*
* Massachusetts General Hospital
† Harvard Medical School
‡ School of Computer Science, University of Manchester
FAIRPORT domain-specific metadata using W3C DCAT & SKOS with ontology views (Tim Clark)
FAIRPORT is an international project to develop a lightweight interoperability architecture for biomedical - and potentially other - data repositories.
This slide deck is a presentation to the FAIRPORT technical team. It describes a proposed model for supporting domain-specific search metadata using a common schema model across all repositories.
The proposal makes use of the following existing technologies, with minor extensions; a minimal sketch of how they could fit together follows the list:
- the W3C DCAT model for dataset description
- the W3C SKOS knowledge organization system
- OWL2 Ontology Language
- Dublin Core Vocabulary
- NCBO Bioportal biomedical ontologies collection
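Below is a minimal rdflib sketch of how these pieces could combine for one dataset record. This is not FAIRPORT's actual schema: the repository namespace, concept IRI, and literal values are invented for illustration.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCAT, DCTERMS, RDF, SKOS

g = Graph()
EX = Namespace("http://example.org/repo/")  # invented repository namespace

dataset = EX["dataset/42"]
g.add((dataset, RDF.type, DCAT.Dataset))               # W3C DCAT dataset description
g.add((dataset, DCTERMS.title, Literal("RNA-seq of cortical neurons")))
g.add((dataset, DCTERMS.publisher, Literal("Example Biobank")))

# Domain-specific search metadata as a SKOS concept (e.g. drawn from an
# NCBO BioPortal ontology); the concept IRI below is a placeholder.
concept = URIRef("http://purl.bioontology.org/ontology/EXAMPLE/0001")
g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOS.prefLabel, Literal("cortical neuron")))
g.add((dataset, DCAT.theme, concept))                  # DCAT links themes via SKOS

print(g.serialize(format="turtle"))
```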
Apply Chinese radicals into neural machine translation: deeper than character... (Lifeng (Aaron) Han)
LPRC 2018: Limerick Postgraduate Research Conference
Lifeng Han and Shaohui Kuang. 2018. Apply Chinese radicals into neural machine translation: Deeper than character level. ArXiv pre-print https://arxiv.org/abs/1805.01565v1
Modeling the Macro-Behavior of Learning Object Repositories (Xavier Ochoa)
Presentation at LACLO 2010 on how publication in Learning Object Repositories can be simply modelled from the rate of production, the contributor lifetime, and the user growth.
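The three ingredients named above (production rate, contributor lifetime, user growth) are enough for a toy version of such a model. The sketch below uses invented parameter values and a constant user-arrival assumption; it illustrates the idea only and is not the model actually presented at LACLO.

```python
# Toy macro-model of a Learning Object Repository: new contributors join at a
# constant rate, and each publishes at a fixed rate while "active" for a
# limited lifetime. All parameter values are invented for illustration.

def repository_size(months, new_users_per_month=50,
                    objects_per_user_per_month=0.8, lifetime_months=18):
    total = 0.0
    for joined in range(months):
        active_months = min(months - joined, lifetime_months)
        total += new_users_per_month * objects_per_user_per_month * active_months
    return total

for t in (12, 24, 60):
    print(f"after {t:>2} months: ~{repository_size(t):,.0f} objects")
```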
Future of Metadata and Learning Objects (Wayne Hodgins)
Slides from my keynote presentation at the International Conference on Digital Archives Technology in Taipei on Oct. 19, 2006. See the blog entry for more details and comments at www.autodesk.com/waynehodgins
Local and Unique and Digital: An Evolving Trend for Libraries and Cultural Her... (Peter Murray)
Slides and audio from presentation given at the LOUIS Users Group meeting, 4-Oct-2013, Baton Rouge, LA.
Libraries have been digitizing materials for decades as surrogates for access to physical materials, and in doing so have broadened the range of people and uses for library materials. With projects like HathiTrust and Google Book Search systematically digitizing mass-produced monographs and making them available within the bounds of copyright law, libraries continue the trend of digitizing what is local and unique, and the emergence of projects like the Digital Public Library of America and OCLC's WorldCat Digital Collection Gateway expands discoverability of the local and unique well beyond the library's traditional reach. This presentation provides an overview of this trend, updates on what libraries can do, and describes activities LYRASIS is doing to help libraries and other cultural heritage institutions expand their reach.
TERENA OER portal, metadata extraction analysis, LAK, Leuven @9apr2013 (Ilias Hatzakis)
A presentation on "Extraction and Visualization of Metadata Analytics for Multimedia Learning Object Repositories: The case of TERENA TF-media network OER portal", presented at the LACRO workshop of the LAK Conference on April 9th, 2013.
Although animals do not use language, they are capable of many of the same kinds of cognition as us; much of our experience is at a non-verbal level.
Semantics is the bridge between surface forms used in language and what we do and experience.
Language understanding depends on world knowledge (e.g., “the pig is in the pen” vs. “the ink is in the pen”).
We might not be ready for executives to specify policies themselves, but we can make the process from specification to behavior more automated, linked to precise vocabulary, and more traceable.
Advances such as SBVR and an English serialization for ISO Common Logic mean that executives and line workers can understand why the system does certain things, or verify that policies and regulations are implemented.
Machine Learning Interpretability - Mateusz Dymczyk - H2O AI World London 2018 (Sri Ambati)
This talk was recorded in London on Oct 30, 2018 and can be viewed here: https://youtu.be/p4iAnxwC_Eg
The good news is that building fair, accountable, and transparent machine learning systems is possible. The bad news is that it’s harder than many blogs and software package docs would have you believe. The truth is that nearly all interpretable machine learning techniques generate approximate explanations, that the fields of eXplainable AI (XAI) and Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) are very new, and that few best practices have been widely agreed upon. This combination can lead to some ugly outcomes!
This talk aims to make your interpretable machine learning project a success by describing the fundamental technical challenges you will face in building an interpretable machine learning system, defining the real-world value proposition of approximate explanations for exact models, and then outlining viable techniques for debugging, explaining, and testing machine learning models.
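As one concrete instance of the approximate explanations discussed above, the sketch below fits a global surrogate: a shallow decision tree trained to mimic a black-box model's predictions, with fidelity measured by agreement between the two. It uses scikit-learn on synthetic data and is an illustrative technique, not necessarily one from the talk.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# "Black box" model whose behavior we want to explain.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global surrogate: a shallow tree trained on the black box's *predictions*,
# so its rules approximate the black box rather than the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))
```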
Mateusz is a software developer who loves all things distributed and machine learning, and hates buzzwords. His favourite hobby is data juggling.
He obtained his M.Sc. in Computer Science from AGH UST in Krakow, Poland, during which he did an exchange at L’ECE Paris in France and worked on distributed flight booking systems. After graduation he moved to Tokyo to work as a researcher at Fujitsu Laboratories on machine learning and NLP projects, where he is still based.
Usage of AI and machine learning models is likely to become more commonplace as larger swaths of the economy embrace automation and data-driven decision-making. While these predictive systems can be quite accurate, in the past they have been treated as inscrutable black boxes that produce only numeric predictions with no accompanying explanations. Unfortunately, recent studies and events have drawn attention to mathematical and sociological flaws in prominent weak AI and ML systems, but practitioners usually don’t have the right tools to pry open machine learning black boxes and debug them.
This presentation introduces several new approaches that increase transparency, accountability, and trustworthiness in machine learning models. If you are a data scientist or analyst and you want to explain a machine learning model to your customers or managers (or if you have concerns about documentation, validation, or regulatory requirements), then this presentation is for you!
Finding knowledge, data and answers on the Semantic Web (ebiquity)
Web search engines like Google have made us all smarter by providing ready access to the world's knowledge whenever we need to look up a fact, learn about a topic or evaluate opinions. The W3C's Semantic Web effort aims to make such knowledge more accessible to computer programs by publishing it in machine understandable form.
As the volume of Semantic Web data grows software agents will need their own search engines to help them find the relevant and trustworthy knowledge they need to perform their tasks. We will discuss the general issues underlying the indexing and retrieval of RDF based information and describe Swoogle, a crawler based search engine whose index contains information on over a million RDF documents.
We will illustrate its use in several Semantic Web related research projects at UMBC, including a distributed platform for constructing end-to-end use cases that demonstrate the semantic web’s utility for integrating scientific data. We describe ELVIS (the Ecosystem Location Visualization and Information System), a suite of tools for constructing food webs for a given location, and Triple Shop, a SPARQL query interface which searches the Semantic Web for data relevant to a given query. ELVIS functionality is exposed as a collection of web services, and all input and output data is expressed in OWL, thereby enabling its integration with Triple Shop and other semantic web resources.
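The Triple Shop style of workflow, querying the Semantic Web with SPARQL, can be approximated with the SPARQLWrapper library; since Swoogle's original services are not assumed to be reachable today, the sketch below points at the public DBpedia endpoint as a stand-in.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Any public SPARQL endpoint works; DBpedia stands in here for the kind of
# RDF store a Triple Shop query would run against.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?species ?label WHERE {
        ?species a dbo:Species ;
                 rdfs:label ?label .
        FILTER (lang(?label) = "en")
    } LIMIT 5
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["label"]["value"])
```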
Introduction to multimodal language models with LLaVA: what multimodal models are, how they work, the LLaVA papers and models, and an image-classification experiment.
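A minimal sketch of the kind of image-classification experiment described above, using the Hugging Face transformers LLaVA integration; the checkpoint id and prompt format follow the public llava-hf release, while the image path and label set are placeholders.

```python
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # public LLaVA 1.5 checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id)

image = Image.open("photo.jpg")  # placeholder path
# LLaVA 1.5 chat format: the <image> token marks where vision features go.
prompt = "USER: <image>\nWhich of these labels fits best: cat, dog, car, tree? ASSISTANT:"

inputs = processor(text=prompt, images=image, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(output[0], skip_special_tokens=True))
```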
Keynote presentation for the International Semantic Web Conference in Athens, Greece, on November 9, 2023. The talk addresses the generative AI explosion and its potential impacts on the Semantic Web and Knowledge Graph communities, which may, in fact, spark a research Renaissance.
Abstract:
We are living in an age of rapidly advancing technology. History may view this period as one in which generative artificial intelligence reshaped the landscape and narrative of many technology-based fields of research and application. Times of disruption often present both opportunities and challenges. We will discuss some areas that may be ripe for consideration in the field of Semantic Web research and semantically-enabled applications. Semantic Web research has historically focused on representation and reasoning and on enabling interoperability of data and vocabularies. At the core are ontologies, along with ontology-enabled (or ontology-compatible) knowledge stores such as knowledge graphs. Ontologies are often manually constructed using a process that (1) identifies existing best-practice ontologies (and vocabularies) and (2) generates a plan for how to leverage these ontologies by aligning and augmenting them as needed to address requirements. While semi-automated techniques may help, a significant portion of the work is often best done by humans with domain and ontology expertise. This is an opportune time to rethink how the field generates, evolves, maintains, and evaluates ontologies. We consider how hybrid approaches, i.e., those that leverage generative AI components along with more traditional knowledge representation and reasoning approaches, can create improved processes. The effort to build a robust ontology that meets a use case can be large. Ontologies are not static, however; they need to evolve along with knowledge evolution and expanded usage. There is potential for hybrid approaches to help identify gaps in ontologies and/or refine content. Further, ontologies need to be documented with term definitions and their provenance. Opportunities exist to consider semi-automated techniques for some types of documentation, provenance, and decision-rationale capture for annotating ontologies. The area of human-AI collaboration for population and verification presents a wide range of areas for research collaboration and impact. Ontologies need to be populated with class and relationship content. Knowledge graphs and other knowledge stores need to be populated with instance data in order to be used for question answering and reasoning. Population of large knowledge graphs can be time consuming. Generative AI holds the promise of creating candidate knowledge graphs that are compatible with the ontology schema. The knowledge graph should contain provenance information identifying how the content was populated and its source, and its correctness and currency should be checked. A human-AI assistant approach is presented.
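One concrete reading of the human-AI population loop sketched in the abstract: a generative model proposes candidate triples, a deterministic check keeps only those whose predicates the ontology actually declares, and everything else is routed to a human. Here `llm_propose_triples` is a hypothetical stand-in for an LLM call, and the tiny ontology is invented for illustration; the validation step uses rdflib.

```python
from rdflib import Graph, URIRef
from rdflib.namespace import OWL, RDF

# Tiny stand-in ontology: one declared object property.
ontology = Graph()
CITES = URIRef("http://ex.org/cites")
ontology.add((CITES, RDF.type, OWL.ObjectProperty))

# Predicates the ontology actually defines; anything else is suspect.
known_properties = set(ontology.subjects(RDF.type, OWL.ObjectProperty))

def llm_propose_triples(text):
    """Hypothetical stand-in for a generative-model extraction call."""
    return [  # invented output for illustration
        ("http://ex.org/PaperA", "http://ex.org/cites", "http://ex.org/PaperB"),
        ("http://ex.org/PaperA", "http://ex.org/madeUpRel", "http://ex.org/X"),
    ]

accepted, needs_review = [], []
for s, p, o in llm_propose_triples("…paper text…"):
    (accepted if URIRef(p) in known_properties else needs_review).append((s, p, o))

print("auto-accepted:", accepted)          # schema-compatible candidates
print("for human review:", needs_review)   # anything the ontology cannot vouch for
```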
GPT-2: Language Models are Unsupervised Multitask Learners (Young Seok Kim)
Review of the paper “Language Models are Unsupervised Multitask Learners” (GPT-2) by Alec Radford et al.
Paper link: https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf
YouTube presentation: https://youtu.be/f5zULULWUwM
(Slides are written in English, but the presentation is done in Korean)
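For readers who want to reproduce the paper's zero-shot prompting style, a minimal sketch with the Hugging Face transformers port of GPT-2 follows; the prompt and sampling settings are illustrative, not the paper's exact setup.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # smallest public checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")

# The "unsupervised multitask" claim: a task is induced by the prompt alone,
# with no fine-tuning, e.g. a TL;DR-style summarization cue.
prompt = "The city council rejected the proposal after a long debate. TL;DR:"
inputs = tokenizer(prompt, return_tensors="pt")

output = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_k=40,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```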
Research Objects: more than the sum of the parts (Carole Goble)
Workshop on Managing Digital Research Objects in an Expanding Science Ecosystem, 15 Nov 2017, Bethesda, USA
https://www.rd-alliance.org/managing-digital-research-objects-expanding-science-ecosystem
Research output is more than just the rhetorical narrative. The experimental methods, computational codes, data, algorithms, workflows, Standard Operating Procedures, samples and so on are the objects of research that enable reuse and reproduction of scientific experiments, and they too need to be examined and exchanged as research knowledge.
A first step is to think of Digital Research Objects as a broadening out to embrace these artefacts or assets of research. The next is to recognise that investigations use multiple, interlinked, evolving artefacts. Multiple datasets and multiple models support a study; each model is associated with datasets for construction, validation and prediction; an analytic pipeline has multiple codes and may be made up of nested sub-pipelines, and so on. Research Objects (http://researchobject.org/) is a framework by which the many, nested and contributed components of research can be packaged together in a systematic way, and their context, provenance and relationships richly described.
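To make the packaging idea tangible, here is a simplified, invented manifest shape inspired by the Research Object framework; it is an illustration only, not the actual RO specification (see researchobject.org for the real model), and all file names and roles are placeholders.

```python
import json

# Invented, simplified "research object" manifest: one package aggregating the
# interlinked artefacts of a study, with roles and a relationship annotation.
research_object = {
    "id": "https://example.org/ro/study-17",
    "aggregates": [
        {"uri": "data/raw_counts.csv",   "role": "dataset"},
        {"uri": "workflows/align.cwl",   "role": "workflow"},
        {"uri": "models/prediction.pkl", "role": "model"},
        {"uri": "paper/preprint.pdf",    "role": "narrative"},
    ],
    "annotations": [
        {"about": "models/prediction.pkl",
         "content": "validated against data/raw_counts.csv"},
    ],
}
print(json.dumps(research_object, indent=2))
```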
Poster: Using Open Source Tools to Improve Access to Oral History Collections (Becky Yoose)
Presented at the Library Technology Conference 2011 in St. Paul, MN.
Program Description: Oral history collections provide a wealth of information, yet current practices in metadata creation and access limit the amount of information within the interview transcripts that can be discovered. This poster describes the Miami University Libraries' current project of using Open Source Software to create enhanced access to our Oral History collection. The Oral History Project at Miami University contains over 100 interviews pertaining to experiences at the University, with transcripts for over half of the interviews. The poster will describe the process of batch processing transcripts using OpenCalais, a web service that automates the creation of metadata for content using natural language processing and machine learning, and displaying both the transcripts and metadata in the content management system Drupal using various modules. We will discuss the results from the comparison of machine-generated and human-generated metadata in this project and the benefits and concerns surrounding both methods. Future project developments will also be included.
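The batch step described above amounts to POSTing each transcript to the Calais service and keeping the returned entities. A sketch with the requests library follows; the endpoint URL and header names are assumptions based on how the service has historically been documented (it has changed owners over the years), so verify them against current vendor docs before use.

```python
import requests

# Endpoint URL and header names are assumptions; check current vendor docs.
CALAIS_URL = "https://api-eit.refinitiv.com/permid/calais"
API_KEY = "YOUR-API-KEY"  # placeholder

def extract_entities(transcript_text):
    resp = requests.post(
        CALAIS_URL,
        data=transcript_text.encode("utf-8"),
        headers={
            "X-AG-Access-Token": API_KEY,      # assumed auth header
            "Content-Type": "text/raw",
            "outputFormat": "application/json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    # The response maps entity IDs to metadata dicts; keep named entities only.
    return [v for v in resp.json().values()
            if isinstance(v, dict) and v.get("_typeGroup") == "entities"]

# Batch loop over transcript files (paths are placeholders):
# for path in transcript_paths:
#     entities = extract_entities(open(path, encoding="utf-8").read())
```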
Supporting the Acquisition of 21st Century Skills through Multimodal Learning... (Xavier Ochoa)
Collaboration, communication, creativity, critical thinking and problem-solving are among the skills that are needed to study and work in this 21st century. As important as they are, evaluating, assessing and teaching them in a practical, scalable and efficient way is still a challenge not fully met by current pedagogical-technological practices. Multimodal Learning Analytics (MmLA), the processing and analysis of multiple sources of data to better understand and improve learning processes, has been posed as a possible solution to augment the natural capabilities of both instructors and students to provide and receive feedback to support the development of those skills. During this session, we will explore the affordances that low-cost sensors and current advances in artificial intelligence provide to automatically record and analyze face-to-face, complex learning processes as those involved for the development of 21st-Century Skills. Finally, we will discuss and ideate practical MmLA tools that could be built to augment your current teaching and learning practices.
Talk at EdD week at NYU - January 2020. This talk describes how Learning Analytics and Artificial Intelligence will help to augment teachers and students.
Developing 21st-Century Skills with Multimodal Learning Analytics (Xavier Ochoa)
Collaboration, communication, creativity, critical thinking and problem-solving are among the skills that are needed to study and work in this 21st century. As important as they are, evaluating, assessing and teaching them in a practical, scalable and efficient way is still a challenge not fully met by current pedagogical-technological practices. Multimodal Learning Analytics (MmLA), the processing and analysis of multiple sources of data to better understand and improve learning processes, has been posed as a possible solution to augment the natural capabilities of both instructors and students to provide and receive feedback to support the development of those skills. During this session, we will have a hands-on demo of two systems to automatically generate feedback for communication and collaboration skills; then, we will explore the affordances that low-cost sensors and current advances in artificial intelligence provide to automatically record and analyze face-to-face, complex learning processes as those involved for the development of 21st-Century Skills. Finally, we will discuss and ideate practical MmLA tools that could be built to augment your current teaching and learning practices.
Presentation at NYU - November 2019.
Automatic Feedback for Oral Presentations (Xavier Ochoa)
This presentation, given at the Learning Analytics and Knowledge Conference, describes an automatic-feedback system for oral presentations aimed at entry-level higher-education students.
Education as the meta-problem: Opportunities for Technology R&D (Xavier Ochoa)
Keynote at ECTM 2016: The massification of education at the start of the industrial revolution created an efficient but less effective learning process compared to one-to-one tutoring. The information age has only increased the pressure on the educational system and revealed its shortcomings. However, the same technological advancement can also help the system to be not only more efficient but even more effective than before. During this talk, the impact that applied research in a large array of technological fields (from AI to IoT) could have on understanding and improving the learning process will be discussed.
Medir para Entender y Mejorar: la Analítica del Aprendizaje como nuevo paradi... (Xavier Ochoa)
Keynote at LACLO 2016. Learning Analytics is a new tool that promises to revolutionize educational science and technology. Born at the intersection of Data Science, Computer Science, and Education, Learning Analytics provides a better picture of what happens during the teaching-learning process. But beyond simply improving our understanding of the process, timely feedback to the humans involved (students, teachers, and educational administrators) makes Learning Analytics a vehicle for empowering these actors and thereby improving the educational process from within. In this talk we will examine the most recent developments in the field of Learning Analytics and its opportunities for education in Latin America, as well as the possible pitfalls and problems its implementation might face. This talk is also a proposal and an invitation to incorporate analytics components into the technological tools we develop, in order to measure their true impact on educational improvement.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
- UI automation introduction
- UI automation sample
- Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for technology and making things work, along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Generating a custom Ruby SDK for your web service or Rails API using Smithy (g2nightmarescribd)
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I was wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our lovely cloud native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply AI to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into which approaches I have already gotten working for real.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
GLOBE Metadata Analysis
1. On the use of Learning Object Metadata: the GLOBE experience. Xavier Ochoa, ESPOL; Joris Klerkx, KULeuven; Bram Vandeputte, KULeuven; Erik Duval, KULeuven
3. LOM is known and widely used, but its real use is not known.
4. How much space will 1,000 LOM instances take? What are the most used elements of LOM? Is it really a “standard”? Has it been extended as designed? What information is stored in real instances? Is it better than Dublin Core for educational resources?
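A question like "what are the most used elements?" can be answered mechanically once instances are harvested. Below is a minimal sketch assuming the LOM records sit as XML files in a local directory; it is an illustration only, not the study's actual code (which is linked from slide 44).

```python
import glob
import xml.etree.ElementTree as ET
from collections import Counter

# Count how often each LOM element appears across a directory of XML
# instances (the path and file layout are assumptions for illustration).
usage = Counter()
for path in glob.glob("lom_instances/*.xml"):
    try:
        tree = ET.parse(path)
    except ET.ParseError:
        continue  # loosely implemented instances may not even parse
    for elem in tree.iter():
        tag = elem.tag.split("}")[-1]  # strip XML namespace, keep local name
        usage[tag] += 1

for tag, count in usage.most_common(20):
    print(f"{tag}: {count}")
```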
42. There must be a QA process. That is true for both automatic and manual metadata.
43. Conclusion of the Conclusions: studies of this kind should be made an integral part of the development of LOM (or any other metadata standard) (or any other learning technology).
44. Want the data? Want the code? https://sites.google.com/site/globemetadata/
45. Gracias / Thank you / Grazie Xavier Ochoa xavier@cti.espol.edu.ec http://ariadne.cti.espol.edu.ec/xavier Twitter: @xaoch