Renga: a collaborative data science platform
Renga is a collaborative data science platform developed by the Swiss Data Science Center to:
1. Provide scientists tools to create reproducible data science workflows and share research artifacts.
2. Track research provenance by capturing inputs, outputs, and computational steps in a knowledge graph to allow reproducibility.
3. Enable discovery of relevant data and algorithms by allowing graph-based search of publications and data lineages, and reuse of data and code in new contexts.
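The provenance-tracking idea in point 2 can be sketched as a tiny in-memory graph: artifacts and computational steps are nodes, and edges record which inputs each step used and which outputs it generated. This is a pure-Python illustration of the general pattern, not Renga's actual data model; the class, method, and file names below are made up.

```python
# Minimal sketch of provenance capture as a knowledge graph.
# Edges are (source, relation, target) triples; "used" links a step
# to its inputs, "generatedBy" links an output back to its step.

class ProvenanceGraph:
    def __init__(self):
        self.edges = []  # (source, relation, target)

    def record_step(self, step, inputs, outputs):
        for artifact in inputs:
            self.edges.append((step, "used", artifact))
        for artifact in outputs:
            self.edges.append((artifact, "generatedBy", step))

    def lineage(self, artifact):
        """Walk backwards from an artifact to every upstream input."""
        seen = set()
        frontier = [artifact]
        while frontier:
            node = frontier.pop()
            steps = [t for s, r, t in self.edges
                     if r == "generatedBy" and s == node]
            for step in steps:
                for s, r, t in self.edges:
                    if r == "used" and s == step and t not in seen:
                        seen.add(t)
                        frontier.append(t)
        return seen

g = ProvenanceGraph()
g.record_step("clean.py", inputs=["raw.csv"], outputs=["clean.csv"])
g.record_step("train.py", inputs=["clean.csv"], outputs=["model.pkl"])
g.lineage("model.pkl")  # → {"clean.csv", "raw.csv"}
```

A graph like this is what makes reproducibility queries possible: asking for the lineage of any result yields every dataset it transitively depends on.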
This document describes a semantic-driven complex event processing (CEP) approach for delivering personalized information streams from data-intensive monitoring systems. The approach combines semantic web technologies like ontologies and SPARQL queries with CEP engines to analyze runtime events and match them to predefined patterns and rules. An architecture is proposed that uses an ontological model, a runtime event processor using predefined rules, information source adapters to transform data, and an information dispatcher to deliver personalized data to applications. The implementation separates each layer as a Java application and uses technologies like Jena, Drools Fusion, and ZeroMQ for communication.
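As a rough illustration of the runtime event processor, here is a minimal pure-Python sketch of matching incoming events against predefined rules over a sliding window. The actual system delegates this to Drools Fusion; the `EventProcessor` class, the example rule, and the threshold below are invented for illustration only.

```python
# Minimal complex event processing sketch: each rule is a predicate
# over a bounded window of recent events plus an action to run when
# the pattern completes.
from collections import deque

class EventProcessor:
    def __init__(self, window_size=10):
        self.window = deque(maxlen=window_size)
        self.rules = []  # (name, predicate over window, action)

    def add_rule(self, name, predicate, action):
        self.rules.append((name, predicate, action))

    def process(self, event):
        self.window.append(event)
        fired = []
        for name, predicate, action in self.rules:
            if predicate(list(self.window)):
                action(event)
                fired.append(name)
        return fired

# Example rule: fire when three consecutive readings exceed 30.
alerts = []
cep = EventProcessor(window_size=5)
cep.add_rule(
    "high-temp",
    lambda w: len(w) >= 3 and all(e["temp"] > 30 for e in w[-3:]),
    lambda e: alerts.append(e),
)
for t in (25, 31, 32, 33):
    cep.process({"temp": t})
# alerts now holds the event that completed the pattern
```

The dispatcher layer described above would then forward such fired alerts only to the applications whose profiles subscribe to them.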
This document provides an overview of the NANO53 course at Foothill College. The goals are to learn characterization tools and how industry uses them, and to prepare for integrated nanomaterials engineering. The course covers various characterization techniques using different instruments and industry applications. Students will complete writing assignments, exams, and hands-on lab demonstrations and projects centered around key materials and applications.
This document provides an overview and summary of the MSR 2014 conference program. It includes statistics on paper submissions and acceptance rates by track. A trend of increasing submissions over time is shown. Popular topics of accepted papers are listed by number of accepts over total submissions. The selection process and thanks to reviewers and authors are noted. Details are given on the program including sessions, presentations, discussions and awards.
RINA Korea-EU Workshop 2013 - Sergi Figuerola, i2CAT Foundation
This document summarizes RINA research activities funded by the European Commission, including the IRATI, PRISTINE, and IRINA projects. IRATI developed an initial RINA prototype and specifications and experimented with use cases like corporate VPNs and cloud networking. PRISTINE aims to build an SDK for the prototype to enable programmable DIF policies and trial use cases like distributed cloud and datacenter networking. IRINA performs a comparative analysis of network architectures and trials a use case study involving research network operators. The projects contribute to advancing the RINA reference model, specifications, policies, use cases, and experimentation to develop the technology toward deployment.
Short presentation of RINA and its associated European research activities, with special emphasis on the IRATI project. Presented at the EU-Korea Workshop 2013.
The document discusses big data use cases and requirements. It provides 51 detailed use cases across various domains that generate many terabytes to petabytes of data. It also describes extracting 437 specific requirements from the use cases and analyzing trends. The next steps involve matching requirements to a reference architecture and prioritizing use cases for implementation.
ICT refers to technologies that provide access to telecommunications like the internet, wireless networks, and cell phones. This document discusses how ICT can be used in research to identify information sources, analyze information critically, manage information effectively, and link to specialized databases. ICT also aids in literature searches, data collection, quantitative and qualitative data analysis, and detecting plagiarism. The key uses of ICT in research are to speed up the research process, increase knowledge contribution, and improve research quality through expanded accessibility of data.
Automatic Classification of Springer Nature Proceedings with Smart Topic Miner - Francesco Osborne
The document summarizes research on automatically classifying Springer Nature proceedings using the Smart Topic Miner (STM). STM extracts topics from publications, maps them to a computer science ontology, selects relevant topics using a greedy algorithm, and infers tags. It was tested on 8 Springer Nature editors who found STM accurately classified 75-90% of proceedings and improved their work. However, STM is currently limited to computer science and occasional noisy results were found in books with few chapters. Future work aims to expand STM to characterize topic evolution over time and directly support author tagging.
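The greedy selection step mentioned above can be illustrated with a small set-cover-style sketch: repeatedly pick the topic that covers the most still-uncovered keywords. This is a generic greedy heuristic in the spirit of the summary, not STM's actual algorithm, and the topic index below is made up.

```python
# Greedy topic selection sketch: choose topics one at a time by
# maximum marginal coverage of the publication's keywords.

def select_topics(keywords, topic_index):
    """topic_index maps topic -> set of keywords it covers."""
    uncovered = set(keywords)
    chosen = []
    while uncovered:
        best = max(topic_index,
                   key=lambda t: len(topic_index[t] & uncovered))
        gain = topic_index[best] & uncovered
        if not gain:
            break  # remaining keywords matched by no topic
        chosen.append(best)
        uncovered -= gain
    return chosen

index = {
    "machine learning": {"neural networks", "classification"},
    "semantic web": {"ontology", "rdf"},
    "data mining": {"classification"},
}
select_topics({"ontology", "classification", "rdf"}, index)
# → ["semantic web", "machine learning"]
```

The order of the result reflects marginal gain, so the first topics chosen are the ones an editor would most expect to see as top-level classifications.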
This document announces a conference on Big Data from Space to be held from 12-14 November 2014 in Frascati, Italy. The conference aims to bring together researchers, engineers, and users in the area of Big Data from the space sector to discuss topics across the data lifecycle from acquisition to analysis and exploitation. It seeks to foster networking and synergies across domains like Earth observation, space science, and other fields dealing with large, complex datasets. The call invites abstract submissions on major Big Data topics until July 31, addressing data volume, velocity, variety and veracity across the space domain.
A Two-Fold Quality Assurance Approach for Dynamic Knowledge Bases: The 3cixty... - Nandana Mihindukulasooriya
This document describes a two-fold quality assurance approach for dynamic knowledge bases that was used for the 3cixty knowledge base project. The approach uses exploratory testing with the Loupe tool to find unexpected errors and scripted fine-grained analysis with SPARQL queries tested by the SPARQL Interceptor tool. Both techniques were complementary and helped identify issues like inconsistencies, outliers, missing properties, and constraint violations during knowledge base generation. The approach aims to adapt lessons from software engineering for quality assurance of knowledge bases.
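One of the scripted fine-grained checks, finding instances that lack a required property, can be sketched as follows. The project expresses such checks as SPARQL queries run through SPARQL Interceptor; this pure-Python version over an in-memory triple list only illustrates the same idea, and the `ex:` names are invented.

```python
# "Missing property" quality check over a toy triple set.
# Roughly equivalent SPARQL:
#   SELECT ?s WHERE { ?s rdf:type ex:Hotel .
#                     FILTER NOT EXISTS { ?s ex:name ?n } }

triples = [
    ("ex:Hotel1", "rdf:type", "ex:Hotel"),
    ("ex:Hotel1", "ex:name", "Grand Hotel"),
    ("ex:Hotel2", "rdf:type", "ex:Hotel"),  # no ex:name -> violation
]

def missing_property(triples, cls, prop):
    instances = {s for s, p, o in triples if p == "rdf:type" and o == cls}
    having = {s for s, p, o in triples if p == prop}
    return sorted(instances - having)

missing_property(triples, "ex:Hotel", "ex:name")
# → ["ex:Hotel2"]
```

Running many such targeted queries after each knowledge-base regeneration is what makes the "scripted" half of the approach complementary to exploratory testing.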
Digital preservation faces challenges of scalability, cost, and uncertainty. The thesis proposes applying computational intelligence techniques such as swarm intelligence to develop self-preserving digital objects that can autonomously manage their own preservation through replication and format migration within a social network environment. The research will study appropriate behaviors for self-preserving objects, their architecture as intelligent agents, and how social networks can support preservation. Preliminary work on modeling object behaviors has been done; remaining steps include completing the literature review, implementing simulation platforms, experimenting in the identified research areas, and publishing results, leading to the development and testing of a full prototype.
This document summarizes a presentation on applying nanotechnology principles to cognitive radio. It outlines the project members, problem statement, approach, and current status. The problem is applying nanotechnology to address spectrum sensing requirements for cognitive radio. The approach includes a literature review on integrating nanotechnology into wireless networks and relating nanotechnology, GPUs, and cognitive radio networks. Students studied these topics along with neural network concepts. Results show GPUs can speed up calculations for real-time cognitive radio applications. Current research includes papers and book chapters on related topics. The schedule is outlined through March 2014 to develop a nanocomputing solution for cognitive radio networks.
Technology Analysis Using Patent Citation Network of a Seminal Patent - Shanmukha S. Potti
There exists a need for a quick technology intelligence process to visualize, identify, and track changes in science and technology for technology-driven entities. To fill this gap, a methodology to portray the patent citation network of a seminal patent is proposed here. This methodology can be used primarily to determine which sub-technologies contributed to the technology in question, at what period in time, and with how much impact. It can also be extended to identify which sub-technologies spun out from the technology or invention, as well as its application areas. Patent latent variables such as strength and similarity are also used to illustrate the development and flow of scientific knowledge in the technology considered. The efficacy of the methodology is demonstrated by considering patents related to light-emitting GaN-based compound semiconductor device technology (Blue LED Technology) and analyzing the findings resulting from its use.
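The core of the citation-network methodology can be sketched in a few lines: build a graph of citing-to-cited edges, approximate impact by citation counts, and trace the sub-technologies that spun out of a seminal patent by following citations forward. The patent numbers below are hypothetical, and in-degree is only one possible proxy for the "strength" variable mentioned above.

```python
# Toy patent citation network: each edge points from a citing
# patent to the patent it cites.
from collections import defaultdict

citations = [
    ("P2", "P1"),  # P2 cites P1 (the seminal patent)
    ("P3", "P1"),
    ("P4", "P2"),
    ("P4", "P1"),
]

def impact(citations):
    """Times-cited count per patent, a simple strength proxy."""
    counts = defaultdict(int)
    for citing, cited in citations:
        counts[cited] += 1
    return dict(counts)

def descendants(citations, seed):
    """Patents building directly or transitively on the seed."""
    children = defaultdict(set)
    for citing, cited in citations:
        children[cited].add(citing)
    seen, frontier = set(), [seed]
    while frontier:
        node = frontier.pop()
        for c in children[node]:
            if c not in seen:
                seen.add(c)
                frontier.append(c)
    return seen

impact(citations)             # → {"P1": 3, "P2": 1}
descendants(citations, "P1")  # → {"P2", "P3", "P4"}
```

Reversing the edge direction in `descendants` would instead recover the contributing sub-technologies upstream of the seminal patent.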
Japan and Canada Consortium Models that Work - Takashi Torii
The document summarizes two consortium models - the Okinawa Open Laboratory (OOL) and the Centre of Excellence in Next Generation Networks (CENGN). OOL is located in Okinawa, Japan and aims to attract companies and organizations to conduct international R&D on technologies like SDN and OpenStack. CENGN is a Canadian consortium that brings together industry, government and academia to drive innovation in ICT. It offers testing services and training programs. Both models aim to collaborate on joint proof-of-concepts and intern exchanges to further their goals of advancing next-generation technologies.
Antonio Marcos Alberti is an associate professor at the National Institute of Telecommunications in Brazil. He received his B.S. in electrical engineering from Santa Maria Federal University and his M.Sc. and doctorate from Campinas State University. His research interests include modeling, simulation, performance evaluation, and management of communication networks. He has published papers in several journals and conferences and advises graduate students.
This document provides an overview of an Internet of Things course. It outlines the course details including the instructor, website, schedule, grading scheme, and topics to be covered. The course will introduce IoT concepts and technologies like RFID, wireless communications, sensors, and localization. Students will gain hands-on experience through projects and presentations. They will learn about recent research papers and understand good research practices through teamwork. The course aims to help students appreciate IoT-related research.
As the volume and complexity of data from myriad Earth Observing platforms, both remote sensing and in-situ, increase, so does the demand for access to both the data and the information products derived from them. The audience is no longer restricted to an investigator team with specialist science credentials. Non-specialist users, from scientists in other disciplines and the science-literate public to teachers, the general public, and decision makers, want access. What prevents them from accessing these resources? It is the very complexity of specialist-developed data formats, data set organizations, and specialist terminology. What can be done in response? We must shift the burden from the user to the data provider. To achieve this, our data infrastructures will likely need greater internal code and data-structure complexity in order to achieve (relatively) simpler end-user complexity. Evidence from numerous technical and consumer markets supports this scenario. We will cover the elements of modern data environments, the new use cases, and how we can respond to them.
This document summarizes information about Chiba University's institutional repository, CURATOR. It provides details about Chiba University such as its programs, faculty, students, and research centers. It then describes how CURATOR captures, preserves, and makes publicly available intellectual works from the university, including articles, theses, data, and course materials. CURATOR aims to be the portal for research outputs from Chiba University. The document highlights how CURATOR was an early adopter of institutional repositories in Japan and discusses its current contents and partnerships. It also outlines Chiba University's involvement in projects to improve institutional repositories in Japan.
SemsorGrid4Env is a 36-month European project that aims to develop semantic sensor grids for rapid application development for environmental management. Some of the key technologies being developed include STRABON for spatio-temporal RDF queries, SPARQLSTR for querying sensor data, and SNEE/SNEEql for in-network and out-of-network sensor data processing. The project will also provide REST APIs and is developing sensor network ontologies. Example datasets that will be accessible include the Channel Coastal Observatory and Spanish geographic data transformed into linked data. Components and results from the project can be used openly for building applications that integrate sensor and archive data.
Exposing the hidden caveats of technical debt with machine learning and AI
Leevi Rantala's slides from his talk introducing his research work on technical debt at the University of Oulu.
Paper presented at the 12th International Conference on Digital Preservation, November 2-6, 2015. University of North Carolina at Chapel Hill.
Abstract:
This paper describes how the E-ARK project (European Archival Records and Knowledge Preservation) aims to develop an overarching methodology for curating digital assets. This methodology must address business needs and operational issues, proposing a technical wall-to-wall reference implementation for the core OAIS flow – Ingest, Archival Storage and Access. The focal point of the paper is the Access part of the OAIS flow. The paper first lays out the access vision of the E-ARK project, and secondly describes the method employed to enable information processing and to pinpoint the functional and non-functional requirements. These requirements will allow the E-ARK project to create a standardized format for the Dissemination Information Package (DIP), and to develop the access tools that will process this format. The paper then describes the actual DIP format before detailing what the access solution will look like, which tools will be developed and, not least, why the E-ARK Access system will be used and how it will work.
Semantic Web Methodologies, Best Practices and Ontology Engineering Applied t... - Ghislain ATEMEZING
This talk presents best practices and ontology engineering applied to the Internet of Things. The talk was presented during the 2nd IEEE World Forum on Internet of Things, held in Milan from December 14th to 16th, 2015.
This presentation by Tim Capel, Director of the UK Information Commissioner’s Office Legal Service, was made during the discussion “The Intersection between Competition and Data Privacy” held at the 143rd meeting of the OECD Competition Committee on 13 June 2024. More papers and presentations on the topic can be found at oe.cd/ibcdp.
This presentation was uploaded with the author’s consent.
1.) Introduction
Our Movement is not new; it is the same as it was for Freedom, Justice, and Equality since we were labeled as slaves. However, this movement at its core must entail economics.
2.) Historical Context
This is the same movement because none of the previous movements, such as boycotts, were ever completed. For some, maybe, but for the most part, it’s just a place to keep your stable until you’re ready to assimilate them into your system. The rest of the crabs are left in the world’s worst parts, begging for scraps.
3.) Economic Empowerment
Our Movement aims to show that it is indeed possible for the less fortunate to establish their economic system. Everyone else – Caucasian, Asian, Mexican, Israeli, Jews, etc. – has their systems, and they all set up and usurp money from the less fortunate. So, the less fortunate buy from every one of them, yet none of them buy from the less fortunate. Moreover, the less fortunate really don’t have anything to sell.
4.) Collaboration with Organizations
Our Movement will demonstrate how organizations such as the National Association for the Advancement of Colored People, National Urban League, Black Lives Matter, and others can assist in creating a much more indestructible Black Wall Street.
5.) Vision for the Future
Our Movement will not settle for less than those who came before us and stopped before the rights were equal. The economy, jobs, healthcare, education, housing, incarceration – everything is unfair, and what isn’t is rigged for the less fortunate to fail, as evidenced in society.
6.) Call to Action
Our movement has started and implemented everything needed for the advancement of the economic system. There are positions for only those who understand the importance of this movement, as failure to address it will continue the degradation of the people deemed less fortunate.
No, this isn’t Noah’s Ark, nor am I a Prophet. I’m just a man who wrote a couple of books, created a magnificent website: http://www.thearkproject.llc, and who truly hopes to try and initiate a truly sustainable economic system for deprived people. We may not all have the same beliefs, but if our methods are tried, tested, and proven, we can come together and help others. My website: http://www.thearkproject.llc is very informative and considerably controversial. Please check it out, and if you are afraid, leave immediately; it’s no place for cowards. The last Prophet said: “Whoever among you sees an evil action, then let him change it with his hand [by taking action]; if he cannot, then with his tongue [by speaking out]; and if he cannot, then, with his heart – and that is the weakest of faith.” [Sahih Muslim] If we all, or even some of us, did this, there would be significant change. We are able to witness it on small and grand scales, for example, from climate control to business partnerships. I encourage, invite, and challenge you all to support me by visiting my website.
Gamify it until you make it: Improving Agile Development and Operations with ... (Ben Linders)
So many challenges, so little time. While we’re busy developing software and keeping it operational, we also need to sharpen the saw, but how? Gamification can be a way to look at how you’re doing and find out where to improve. It’s a great way to have everyone involved and get the best out of people.
In this presentation, Ben Linders will show how playing games with the DevOps coaching cards can help to explore your current development and deployment (DevOps) practices and decide as a team what to improve or experiment with.
The games that we play are based on an engagement model. Instead of imposing change, the games enable people to pull in ideas for change and apply those in a way that best suits their collective needs.
By playing games, you can learn from each other. Teams can use games, exercises, and coaching cards to discuss values, principles, and practices, and share their experiences and learnings.
Different game formats can be used to share experiences on DevOps principles and practices and explore how they can be applied effectively. This presentation provides an overview of playing formats and will inspire you to come up with your own formats.
This presentation by Professor Giuseppe Colangelo, Jean Monnet Professor of European Innovation Policy, was made during the discussion “The Intersection between Competition and Data Privacy” held at the 143rd meeting of the OECD Competition Committee on 13 June 2024. More papers and presentations on the topic can be found at oe.cd/ibcdp.
This presentation was uploaded with the author’s consent.
This presentation by Katharine Kemp, Associate Professor at the Faculty of Law & Justice at UNSW Sydney, was made during the discussion “The Intersection between Competition and Data Privacy” held at the 143rd meeting of the OECD Competition Committee on 13 June 2024. More papers and presentations on the topic can be found at oe.cd/ibcdp.
This presentation was uploaded with the author’s consent.
11 June 2024. An online pre-engagement session was organized on Tuesday, June 11, to introduce the Science Policy Lab approach and the main components of the conceptual framework.
About 40 experts from around the globe gathered online for a pre-engagement session, paving the way for the first SASi-SPi Science Policy Lab event scheduled for June 18-19, 2024 in Malmö. The session presented the objectives for the upcoming Science Policy Lab (S-PoL), which featured a role-playing game designed to simulate stakeholder interactions and policy interventions for food systems transitions. Participants called for the sharing of meeting materials and continued collaboration, reflecting a strong commitment to advancing towards sustainable agrifood systems.
This presentation by OECD, OECD Secretariat, was made during the discussion “The Intersection between Competition and Data Privacy” held at the 143rd meeting of the OECD Competition Committee on 13 June 2024. More papers and presentations on the topic can be found at oe.cd/ibcdp.
This presentation was uploaded with the author’s consent.
• For a full set of 530+ questions, go to
https://skillcertpro.com/product/servicenow-cis-itsm-exam-questions/
• SkillCertPro offers detailed explanations for each question, which helps you understand the concepts better.
• It is recommended to score above 85% on SkillCertPro exams before attempting the real exam.
• SkillCertPro updates its exam questions every 2 weeks.
• You will get lifetime access and lifetime free updates.
• SkillCertPro guarantees a 100% pass on the first attempt.
1. Mika Saari*, Ahmad Muzaffar bin Baharudin**, Pekka Sillberg*, Sami Hyrynsalmi*, and Wanglin Yan**
*Tampere University of Technology, Pori, Finland
**Graduate School of Media and Governance, Keio University, Japan
May 21-25, 2018, Opatija, Croatia
LoRa - A Survey of Recent Research Trends
2. Tampere University of Technology (TUT), Pori
• Pori is a town of 85,000 citizens, students, and modern enterprises, and one of the leading Finnish cities for events.
• TUT Pori is located in the University Consortium of Pori (UCPori).
• Laboratory: Pervasive Computing
• Group: SEIntS - Software Engineering and Intelligent Systems
3. LoRa - A Survey of Recent Research Trends
Content of presentation:
• Background
• Research questions
• Literature research
• Findings and conclusions
• Future work
Mika.saari@tut.fi
4. LoRa - A Survey of Recent Research Trends
Background
• Collaboration between Keio University and Tampere University of Technology.
• IoT prototype development and testing within various projects.
• Power consumption is a notable issue in these prototypes.
• This study introduces one Low Power Wide Area Network (LPWAN) technology: LoRa.
5. LoRa - A Survey of Recent Research Trends
The key features of LoRa:
• Operates in the lower Industrial, Scientific, and Medical (ISM) bands (USA: 915 MHz; EU: 433 MHz and 868 MHz).
• Long-range communication, from 10 to 15 kilometers outdoors.
• Very low power consumption.
• Low data throughput.
The LoRaWAN:
• The communication protocol.
• Standardized by the LoRa Alliance.
• Version 1.1 of the LoRaWAN specification was published in 2017.
• Supports data rates from 0.3 kbps up to 50 kbps.
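The data-rate range quoted on the slide follows directly from LoRa's modulation parameters. As an illustrative sketch (the standard raw bit-rate approximation Rb = SF · BW / 2^SF · CR, not something stated on these slides; note that LoRaWAN's 50 kbps upper end is reached with an FSK mode rather than LoRa modulation):

```python
def lora_bit_rate(sf: int, bw_hz: int, cr: float = 4 / 5) -> float:
    """Raw LoRa bit rate in bit/s: Rb = SF * (BW / 2**SF) * CR."""
    return sf * (bw_hz / 2 ** sf) * cr

# Slowest common setting: SF12 on a 125 kHz channel -> ~0.3 kbps.
print(round(lora_bit_rate(12, 125_000)))  # 293
# Fastest pure-LoRa setting: SF7 on a 250 kHz channel -> ~11 kbps.
print(round(lora_bit_rate(7, 250_000)))
```

This is why higher spreading factors extend range but cut throughput: each step up in SF doubles the symbol duration while adding only one bit per symbol.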
6. LoRa - A Survey of Recent Research Trends
The aim was to find out what others have done with LoRa.
Research questions:
1. How to categorize LoRa research papers?
2. What are the application trends in LoRa?
• To answer these questions, we performed a literature study.
• The method: the Systematic Literature Review (SLR) approach.
7. LoRa - A Survey of Recent Research Trends
The databases used:
●
IEEE Xplore Digital Library (IEEE)
●
Multidisciplinary Digital Publishing Institute (MDPI),
●
Association for Computing Machinery Digital library (ACM),
●
ScienceDirect,
●
The others (Google Scholar and similar)
Keywords(in various combinations):
●
IoT, LoRa, LPWAN, Wireless sensor networks
●
Result of combinations: 54 research papers (33 mentioned
in paper)
Mika.saari@tut.fi
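The keyword combinations could be generated systematically before being typed into each database. A purely hypothetical sketch (the actual query strings used in the study are not given on the slides):

```python
from itertools import combinations

# The four keywords reported on the slide; pairwise AND-combination
# is an assumption, not the study's documented search strategy.
keywords = ["IoT", "LoRa", "LPWAN", "wireless sensor networks"]

# Pairwise AND-queries, as one might enter them into IEEE Xplore or the ACM DL.
queries = [" AND ".join(pair) for pair in combinations(keywords, 2)]

for q in queries:
    print(q)  # 6 queries in total, e.g. "IoT AND LoRa"
```

Enumerating the combinations up front makes an SLR search reproducible: the same query list can be re-run against each database.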
8. RQ1: How to categorize LoRa research papers?
a) Analysis/Survey/Factual Discussion
• Introduction of LoRa or LoRaWAN technology.
• Technology-based issues: architecture and protocols, functional components, performance, and security issues such as possible vulnerabilities.
b) Performance/Technical Evaluation
• Performance testing with a real deployment of LoRa in a test environment.
• LoRa technology was the key technology in the experiment.
• LoRa technology was used to test some technological feature, for example, the scalability of the network.
• LoRa signal propagation testing indoors and outdoors.
9. RQ1: How to categorize LoRa research papers?
c) Real Deployment/Experimental/Prototype Implementation
• Environmental sensor data, tracking applications, and a few industry-specific prototypes.
• The reasons to choose LoRa were low-power devices running on batteries or solar power.
• The most commonly used hardware was a Semtech LoRa module with an Arduino or Raspberry Pi.
d) Simulation/Modeling/Networking Stack/Software
• Simulation was a commonly used research method when one or more LoRa technology issues were tested.
• Actual device deployment was not fully implemented.
• A typical simulation tested a large number of simulated devices and connections with some program.
• Modeling used the same technique, without any real devices.
• Research related to networking stacks typically used the network protocol and included some special software combination that was tested or simulated.
• Simulated the presence of a (maximum) number of end devices.
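Simulation studies of the kind described above often model many end devices sharing one channel. A common simplification (an assumption here, not taken from the surveyed papers) treats LoRaWAN uplinks as pure ALOHA: a packet survives only if no other device starts transmitting within one airtime of it. A Monte Carlo sketch of delivery ratio versus node count:

```python
import random

def aloha_delivery_ratio(n_devices: int, airtime_s: float = 1.5,
                         period_s: float = 600.0, trials: int = 2000,
                         seed: int = 42) -> float:
    """Fraction of packets that escape collision under a pure-ALOHA model."""
    rng = random.Random(seed)
    delivered = 0
    for _ in range(trials):
        # Each device transmits once, at a uniformly random time in the period.
        starts = [rng.uniform(0.0, period_s) for _ in range(n_devices)]
        # The tagged packet (device 0) is lost if any other transmission
        # starts within one airtime of it (vulnerable window = 2 * airtime).
        t0 = starts[0]
        if all(abs(t - t0) >= airtime_s for t in starts[1:]):
            delivered += 1
    return delivered / trials

print(aloha_delivery_ratio(100))   # around 0.6: collisions already matter
print(aloha_delivery_ratio(2000))  # near zero as the network scales up
```

The airtime and transmission-period values are illustrative; real simulators also model spreading-factor orthogonality, duty-cycle limits, and capture effects, which this sketch omits.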
10. RQ1: How to categorize LoRa research papers?
e) Application
• Targeted applications aimed to use LoRa as a long-range communication interface for real-time monitoring.
• Monitoring the temperature of blood fridges.
• LoRa for a sensor node powered by vibration from an electromechanical energy harvester on a real bridge, to monitor road conditions such as the temperature of the asphalt and the presence of water or rain.
• Monitoring and controlling the temperature and humidity of different rooms, with the aim of reducing costs related to heating, ventilation, and air conditioning.
• Real-time air quality monitoring using various gas sensors.
• Detecting and preventing destructive landslides.
• Monitoring the environment of sea and sailboats, including the speed and direction of wind and current, and the location and orientation of the sailboat, for sailing sports and races.
• Geo-location tracking applications, such as tracking animals or elderly people.
• LoRa in gateways targeting a vehicle diagnostic system for driving safety.
11. LoRa - A Survey of Recent Research Trends
RQ2: What are the application trends in LoRa?
• (Battery-powered) WSN sensor nodes.
• Data: temperature, humidity, vibration, location and tracking, ... => a small amount of data to be transferred.
• Great potential for observing the physical environment both indoors and outdoors.
• ...
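The trend above — battery-powered sensor nodes sending a small amount of data — is usually served by packing readings into a few bytes before transmission rather than sending text. A minimal sketch (the field layout is an assumption for illustration, not taken from the survey): temperature, humidity, and battery level in five bytes.

```python
import struct

def encode_reading(temp_c: float, humidity_pct: float, battery_pct: int) -> bytes:
    """Pack one reading as: int16 temp*100, uint16 humidity*100, uint8 battery."""
    return struct.pack(">hHB", round(temp_c * 100),
                       round(humidity_pct * 100), battery_pct)

def decode_reading(payload: bytes) -> tuple:
    t, h, b = struct.unpack(">hHB", payload)
    return (t / 100, h / 100, b)

payload = encode_reading(-4.5, 63.2, 87)
print(len(payload))             # 5 bytes on air instead of a JSON string
print(decode_reading(payload))  # (-4.5, 63.2, 87)
```

Keeping payloads this small matters doubly for LoRa: it shortens airtime (saving battery) and stays within the duty-cycle and payload limits of the slower spreading factors.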
12. LoRa - A Survey of Recent Research Trends
Thank you
Questions?
Mika.saari@tut.fi