IJERST is described as a publication dedicated to presenting the results of scientific and technical research in the fields of Engineering, Science, and Technology.
The Internet of Things (IoT) is an emerging technology for the future world. The term IoT encompasses cloud computing, data mining, big data analytics, and hardware boards. Security and interoperability are the main factors influencing IoT, and energy consumption is also a key factor in designing IoT applications. Various protocols, such as MQTT, AMQP, and XMPP, are used in IoT. This paper analyses the various protocols used in the Internet of Things.
F2CDM: Internet of Things for Healthcare Network Based Fog-to-Cloud and Data-... — Istabraq M. Al-Joboury
The Internet of Things (IoT) evolves very rapidly, as sensors and actuators from around the world are linked together through the Internet using ubiquitous computing. These devices have a unique IP address in order to communicate with each other and transmit data over wireless technologies. Fog computing, also called edge computing, brings Cloud features to embedded devices at the edge of the network and adds features to servers, such as pre-storing Cloud data, fast response, and rapid user reporting. The Fog mediates between the Cloud and IoT devices and thus enables new types of computing and services. Future applications take advantage of combining the two concepts in order to provide the low delay of the Fog and the high storage capacity of the Cloud. This paper proposes an IoT architecture for a healthcare network based on Fog to Cloud and Data in Motion (F2CDM). The proposed architecture is designed and implemented over three sites: Site 1 contains the embedded devices layer, Site 2 consists of the Fog network layer, and Site 3 consists of the Cloud network. The Fog layer is represented by a middleware server at Al-Nahrain University with temporary storage, such that data lives inside it for 30 min. During this time, readings selected for abnormal behavior are sent to the Cloud, while the rest of the data is wiped out. The Cloud, on the other hand, permanently stores all incoming data from the Fog. F2CDM uses Message Queue Telemetry Transport (MQTT) for fast response. The results show that all data can be monitored from the Fog in real time, while the critical data can be monitored from the Cloud. In addition, the response time is evaluated using a traffic generator called Tsung. It has been found that the proposed architecture reduces traffic on the Cloud network and provides better data analysis.
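The 30-minute Fog retention rule described in this abstract can be sketched as a small simulation; the temperature band, class names, and in-memory "Cloud" list below are illustrative assumptions, not the paper's actual implementation.

```python
import time

# Assumed values: the paper specifies only the 30-minute retention rule,
# so the "normal" band here is a hypothetical body-temperature range.
RETENTION_SECONDS = 30 * 60          # data lives in the Fog for 30 minutes
NORMAL_RANGE = (36.0, 38.0)          # assumed "normal" reading band

class FogBuffer:
    """Temporary Fog store: abnormal readings are forwarded to the Cloud,
    the rest are wiped out once their retention window expires."""
    def __init__(self, cloud_store):
        self.cloud_store = cloud_store   # permanent Cloud storage (a list here)
        self.readings = []               # (timestamp, value) pairs

    def ingest(self, value, now=None):
        now = time.time() if now is None else now
        self.readings.append((now, value))
        low, high = NORMAL_RANGE
        if not (low <= value <= high):   # abnormal behavior -> send to Cloud
            self.cloud_store.append((now, value))

    def purge_expired(self, now=None):
        now = time.time() if now is None else now
        self.readings = [(t, v) for t, v in self.readings
                         if now - t < RETENTION_SECONDS]

cloud = []
fog = FogBuffer(cloud)
fog.ingest(36.8, now=0)      # normal: stays only in the Fog
fog.ingest(39.5, now=10)     # abnormal: also forwarded to the Cloud
fog.purge_expired(now=2000)  # 2000 s later, expired Fog data is wiped
```

Only the abnormal reading reaches the permanent Cloud store; everything else vanishes from the Fog once the 30-minute window passes.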
Scheduling IoT Applications on Cloud Computing — Eman Ahmed
Resource scheduling considers the execution time of every distinct workload, but most importantly, overall performance also depends on the type of workload, i.e., workloads with different QoS requirements (heterogeneous workloads) versus workloads with similar QoS requirements (homogeneous workloads).
Tiarrah Computing: The Next Generation of Computing — IJECEIAES
The evolution of the Internet of Things (IoT) has brought about several challenges for existing hardware, network, and application development, such as handling real-time streaming and batch big data, real-time event handling, dynamic cluster resource allocation for computation, and wired and wireless networks of things. To combat these technicalities, many new technologies and strategies are being developed. Tiarrah Computing integrates the concepts of Cloud Computing, Fog Computing, and Edge Computing. Its main objectives are to decouple application deployment and achieve high performance, flexible application development, high availability, ease of development, and ease of maintenance. Tiarrah Computing focuses on using existing open-source technologies to overcome the challenges that evolve along with IoT. This paper gives an overview of these technologies and of how to design applications that overcome most of the existing challenges.
Performance Analysis of Internet of Things Protocols Based Fog/Cloud over Hig... — Istabraq M. Al-Joboury
The Internet of Things (IoT) is becoming the future of a global data field in which embedded devices communicate with each other, exchange data, and make decisions through the Internet. IoT can improve the quality of life in smart cities, but a massive amount of data from different smart devices can slow down or crash database systems. In addition, transferring IoT data to the Cloud for monitoring and feedback generation leads to high delay at the infrastructure level. Fog computing can help by offering services closer to the edge devices. In this paper, we propose an efficient system architecture to mitigate the problem of delay. We provide a performance analysis of response time, throughput, and packet loss for the MQTT (Message Queue Telemetry Transport) and HTTP (Hyper Text Transfer Protocol) protocols on Cloud- and Fog-based servers, with a large volume of data from an emulated traffic generator working alongside one real sensor. We implement both protocols in the same architecture, with low-cost embedded devices connected to local and Cloud servers on different platforms. The results show that HTTP response time is 12.1 and 4.76 times higher than MQTT on Fog and Cloud servers located in the same geographical area as the sensors, respectively. The worst performance is observed when the Cloud is public and outside the country region. The throughput results show that MQTT has the capability to carry the data within the available bandwidth and with the lowest percentage of packet loss. We also show that the proposed Fog architecture is an efficient way to reduce latency and enhance performance in Cloud-based IoT.
An Event-based Middleware for Syntactical Interoperability in Internet of Th... — IJECEIAES
The Internet of Things (IoT) connects sensors and devices that record physical observations of the environment with a variety of applications and other Internet services. Along with the increasing number and diversity of connected devices arises a problem called interoperability. One type is syntactical interoperability, where the IoT should be able to connect all devices through various data protocols. To address this problem, we propose a middleware capable of supporting interoperability by providing a multi-protocol gateway between COAP, MQTT, and WebSocket. This middleware is developed using an event-based architecture implementing the publish-subscribe pattern. We also developed a system to test the performance of the middleware in terms of success rate and delay of data delivery. The system consists of temperature and humidity sensors using COAP and MQTT as publishers and a web application using WebSocket as a subscriber. The results show that data transmission from either the COAP or MQTT sensors has a success rate above 90%, the average delay of data delivery from the COAP and MQTT sensors is below 1 second, and the packet loss rate varies between 0% and 25%. Interoperability testing has been carried out using an interoperability assessment methodology, and the middleware was found to qualify.
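The publish-subscribe pattern at the core of such an event-based middleware can be sketched in a few lines; the topic name, payload, and class below are invented for illustration, and the actual protocol gateways (COAP, MQTT, WebSocket) are out of scope here.

```python
from collections import defaultdict

class EventBroker:
    """Minimal in-memory publish-subscribe core of the kind an event-based
    middleware is built on; protocol bridging is not modeled."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # Deliver the event to every subscriber of this topic.
        for callback in self.subscribers[topic]:
            callback(payload)

broker = EventBroker()
received = []
# In the middleware, a WebSocket web application plays the subscriber role.
broker.subscribe("sensors/temperature", received.append)
# A COAP or MQTT sensor plays the publisher role.
broker.publish("sensors/temperature", {"celsius": 27.4})
```

Publishers and subscribers never reference each other directly; the broker's topic table is the only coupling, which is what lets each side speak a different protocol.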
A CLOUD BASED ARCHITECTURE FOR WORKING ON BIG DATA WITH WORKFLOW MANAGEMENT — IJwest
Real environments produce large collections of noisy and vague data, called Big Data. To work on such data, middleware has been developed and is now very widely used. The challenge of working on Big Data is its processing and management. An integrated management system is required to provide a solution for integrating data from multiple sensors and maximizing the chance of success, in a situation where the system has constant time constraints for processing and real-time decision-making. A reliable data fusion model must meet this requirement and steadily let the user monitor the data stream. With the widespread use of workflow interfaces, this requirement can be addressed, but working with Big Data remains challenging. We provide a multi-agent cloud-based architecture with a higher-level vision to solve this problem. This architecture provides the ability to perform Big Data fusion through a workflow management interface. The proposed system is capable of self-repair in the presence of risks, and its risk is low.
The Internet of Things is one of the catchwords of our time, and it promises a great future for the Internet. Today the common types of communication are person to person, machine to person, or person to machine, but the Internet of Things brings a new technology in which the communication is machine to machine. Many technologies and protocols have been studied for this new form of communication. One of the new and emerging technologies is VMware Pulse IoT Center, which provides streamlined IoT device management and serves as the management glue between hardware components. This paper looks at the features, benefits, and working of VMware Pulse IoT Center, including a summary of the IoT solutions it provides.
In recent years, the number of end users connected to the Internet of Things (IoT) has increased, and we have witnessed the emergence of the cloud computing paradigm. These users utilize network resources to meet their quality of service (QoS) requirements, but traditional networks are not configured to support high scalability, real-time data transfer, and dynamism, resulting in numerous challenges. This research presents a new IoT architecture platform that combines the benefits of two new technologies: software-defined networking and the fog paradigm. Software-defined networking (SDN) refers to a centralized control layer of the network that enables sophisticated methods for traffic control and resource allocation, while the fog paradigm allows data to be analyzed and managed at the edge of the network, making it suitable for tasks that require low and predictable delay. The research provides an in-depth view of the platform's organization, the performance of its base components, and the potential uses of the suggested platform in various applications.
A comprehensive IoT ecosystem consists of many different parts. Irrespective of nature and complexity, a holistic IoT system can be understood through this quick guide.
The Internet of Things (IoT) has become a popular buzzword around technology forums and organizations in recent years. But, how many truly understand the IoT system? IoT consists of a complex web of varied parts including electronic circuitry, embedded systems, network protocols, and more. This quick guide helps you classify and understand these various parts of the comprehensive IoT ecosystem.
Smart offload chain: a proposed architecture for blockchain assisted fog off... — IJECEIAES
Blockchain enables smart contracts for secure data transfer, by which fog offloading servers gain trustworthy access control over data execution. When the cloud is used to handle requests from mobile users, an attacker may perform a denial-of-service attack; the same is possible at fog nodes, and both can be handled with the help of blockchain technology. In this paper, a smart city application is discussed as a use case for a blockchain-based fog computing architecture. We propose a novel offload chain architecture for blockchain-based offloading in Internet of Things (IoT) networks, where mobile devices can offload their data to fog servers for computation under an access control mechanism. An offload chain model using deep reinforcement learning (DRL) is proposed to improve the efficiency of blockchain-based fog offloading over existing models.
CONTAINERIZED SERVICES ORCHESTRATION FOR EDGE COMPUTING IN SOFTWARE-DEFINED W... — IJCNCJournal
As SD-WAN disrupts legacy WAN technologies and becomes the preferred WAN technology adopted by corporations, and Kubernetes becomes the de-facto container orchestration tool, the opportunities for deploying edge-computing containerized applications over SD-WAN are vast. Service orchestration in SD-WAN has not received enough attention, resulting in a lack of research focused on service discovery in these scenarios. In this article, an in-house service discovery solution that works alongside the Kubernetes master node is developed, allowing improved traffic handling and a better user experience when running micro-services. The service discovery solution was conceived following a design science research approach. Our research includes the implementation of a proof-of-concept SD-WAN topology alongside a Kubernetes cluster, which allows us to deploy custom services and delimit the necessary characteristics of our in-house solution. The implementation's performance is also tested based on the time required to update the discovery solution in response to service updates. Finally, conclusions and modifications are pointed out based on the results, and possible enhancements are discussed.
Architectural design of IoT-cloud computing integration platform — TELKOMNIKA JOURNAL
An integration between the Internet of Things (IoT) and cloud computing can potentially leverage the strengths of both sides. As an IoT-based system is mostly composed of interconnected pervasive and constrained devices, it can benefit from the virtually unlimited resources of the cloud, i.e., storage and computation services, to store and process its sensed data. On the other hand, the cloud computing system may benefit from IoT by broadening its reach to real-world applications. To realize this idea, a cloud software platform is needed to provide an integration layer between IoT and cloud computing, taking into account the heterogeneity of network communication protocols as well as security and data management issues. In this study, an architectural design of an IoT-cloud platform for IoT and cloud computing integration is presented. The proposed software platform can be decomposed into five main components, namely the cloud-to-device interface, authentication, data management, web console, and cloud-to-user interface components. In general, the cloud-to-device interface acts as a data transmission endpoint between the cloud platform and its IoT devices. Before a data transmission session is established, the communication interface contacts the authentication component to make sure that the corresponding IoT device is legitimate before it is allowed to send sensor data to the cloud environment. A valid IoT device can be registered to the cloud system through the web console component. The received sensor data are then collected in the data storage component, and any stored data can be further analyzed by the data processing component. Users or applications can then retrieve the collected data, either raw or processed, through the API data access and web console.
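The authenticate-before-transmit flow described in this abstract can be sketched as follows; the device IDs, class names, and in-memory storage are simplified assumptions for illustration, not the platform's actual interfaces.

```python
class Authentication:
    """Stands in for the platform's authentication component."""
    def __init__(self):
        self.registered = set()          # device IDs added via the web console

    def register(self, device_id):       # web console registration step
        self.registered.add(device_id)

    def is_legitimate(self, device_id):
        return device_id in self.registered

class CloudToDeviceInterface:
    """Data transmission endpoint between devices and the cloud platform."""
    def __init__(self, auth, storage):
        self.auth = auth
        self.storage = storage           # data storage component (a list here)

    def receive(self, device_id, reading):
        # Contact the authentication component before accepting any data.
        if not self.auth.is_legitimate(device_id):
            return False                 # reject unregistered devices
        self.storage.append((device_id, reading))
        return True

auth = Authentication()
storage = []
interface = CloudToDeviceInterface(auth, storage)
auth.register("sensor-01")                       # registered via web console
ok = interface.receive("sensor-01", 27.4)        # accepted into storage
rejected = interface.receive("rogue-99", 99.9)   # rejected: not registered
```

Only readings from registered devices reach the storage component, mirroring how the cloud-to-device interface consults the authentication component before each session.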
Research articles are typically in-depth papers that present original research findings. Research articles in agricultural science would likely include experimental data, methodology, results, discussion, and conclusions related to agricultural topics.
Alinteri Journal of Agriculture Sciences is focused on creating a platform for researchers in the fields of agriculture, fisheries, veterinary, biology, and related disciplines to share and discuss recent scientific progress.
The primary focus of the journal is on Information Technology and Computer Engineering. This suggests that the journal covers a wide range of topics within these disciplines.
Alinteri Journal of Agriculture Sciences is focused on creating a platform for researchers in the fields of agriculture, fisheries, veterinary, biology, and related disciplines to share and discuss recent scientific progress.
These are typically in-depth papers that present original research findings. Research articles in agriculture science would likely include experimental data, methodology, results, discussion, and conclusions related to agricultural topics.
Agricultural Sciences has been published online biannually since 2007. This indicates a commitment to making agricultural research and knowledge accessible to a broader audience through regular online publications.
Alınteri Journal of Agriculture Science is a peer-reviewed academic journal that focuses on publishing research articles related to agriculture and its interdisciplinary aspects. The fact that it was established in 2007, is published biannually, and follows an open-access policy suggests a commitment to disseminating knowledge widely.
journals like IJERST to provide a valuable resource for researchers, students, and professionals who are interested in the latest developments and insights in the field of Information Technology and Computer Engineering.
Alınteri Journal of Agriculture Science, as you mentioned, is an international peer-reviewed journal established in 2007. This journal is published biannually and follows an open access policy, meaning that its content is freely accessible to anyone interested.
The journal appears to cover a broad spectrum of topics related to agriculture, fisheries, veterinary science, biology, and related disciplines. This indicates that it accepts submissions in these areas and aims to publish research from these fields.
The International Journal of Engineering Research and Science & Technology (IJERST) is a peer-reviewed online journal that focuses on publishing research articles, short communications, and review articles in the fields of Information Technology and Computer Engineering
¬¬International Journal of Engineering Research and Science and Technology (IJERST) offers a fast publication schedule while maintaining rigorous peer review. This is an important feature for many researchers who want to disseminate their work quickly while ensuring the quality and credibility of their research.
Alinteri Journal of Agriculture Sciences is an online academic journal that has been published biannually since 2007. It is an open-access, international journal that follows a double-blind peer-review process.
Alinteri Journal of Agriculture Sciences is dedicated to promoting open access and facilitating the exchange of scientific knowledge among researchers. Open access journals play a crucial role in making research findings more accessible to a wider audience, which can ultimately lead to greater collaboration and innovation in the field of agriculture sciences.
scope of a journal in the field of Information Technology and Computer Engineering. This journal accepts various types of articles, including research articles, review articles, and short communications.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
Immunizing Image Classifiers Against Localized Adversary Attacks
This paper addresses the vulnerability of deep learning models, particularly convolutional neural networks
(CNN)s, to adversarial attacks and presents a proactive training technique designed to counter them. We
introduce a novel volumization algorithm, which transforms 2D images into 3D volumetric representations.
When combined with 3D convolution and deep curriculum learning optimization (CLO), it significantly improves
the immunity of models against localized universal attacks by up to 40%. We evaluate our proposed approach
using contemporary CNN architectures and the modified Canadian Institute for Advanced Research (CIFAR-10
and CIFAR-100) and ImageNet Large Scale Visual Recognition Challenge (ILSVRC12) datasets, showcasing
accuracy improvements over previous techniques. The results indicate that the combination of the volumetric
input and curriculum learning holds significant promise for mitigating adversarial attacks without necessitating
adversary training.
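The paper's volumization algorithm is not reproduced in this abstract. As a purely illustrative stand-in for the general idea of lifting a 2D image into a 3D volumetric representation, one simple construction slices the intensity range into D binary level sets; this is an invented example, not the authors' method.

```typescript
// Hypothetical illustration of turning a 2D grayscale image (values in [0, 255])
// into a 3D volume of D binary slices, where slice k marks pixels whose intensity
// is at or above threshold k * (256 / D). This is NOT the paper's volumization
// algorithm; it only illustrates what a volumetric representation can look like.
type Image2D = number[][];    // [H][W]
type Volume3D = number[][][]; // [D][H][W]

function volumize(img: Image2D, depth: number): Volume3D {
  const step = 256 / depth;
  return Array.from({ length: depth }, (_, k) =>
    img.map(row => row.map(px => (px >= k * step ? 1 : 0)))
  );
}

const img: Image2D = [[0, 128], [255, 64]];
const vol = volumize(img, 4); // 4 slices at thresholds 0, 64, 128, 192
```
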
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible with the IDM8000 CCR. Backplane-mounted serial and TCP/Ethernet communication module for CCR remote access; IDM8000 CCR remote control over serial and TCP protocols.
Key Features
• Remote control: parallel or serial interface.
• Compatible with the MAFI CCR system.
• Compatible with the IDM8000 CCR.
• Compatible with backplane-mount serial communication.
• Compatible with commercial and defence aviation CCR systems.
• Remote control system for accessing the CCR and allied systems over serial or TCP.
• Indigenized local support/presence in India.
• Easy configuration using DIP switches.
Hybrid optimization of pumped hydro system and solar - Engr. Abdul-Azeez.pdf
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
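The study's actual mathematical model is not reproduced in this summary. As a hedged illustration of the kind of sizing arithmetic such an optimization rests on, pumped-hydro storable energy follows from E = ρ·g·h·V·η and PV output from panel area, irradiance, and efficiency; all figures below are made-up example values.

```typescript
// Back-of-the-envelope sizing sketch (illustrative values only; the study's
// actual optimization model is not reproduced here).

// Pumped hydro: E [kWh] = rho * g * head * volume * round-trip efficiency / 3.6e6.
function pumpedHydroEnergyKWh(volumeM3: number, headM: number, eff: number): number {
  const rho = 1000; // water density [kg/m^3]
  const g = 9.81;   // gravitational acceleration [m/s^2]
  return (rho * g * headM * volumeM3 * eff) / 3.6e6; // joules -> kWh
}

// PV: daily energy [kWh] = area * daily irradiance [kWh/m^2/day] * panel efficiency.
function pvDailyEnergyKWh(areaM2: number, irradianceKWhPerM2Day: number, eff: number): number {
  return areaM2 * irradianceKWhPerM2Day * eff;
}

const storageKWh = pumpedHydroEnergyKWh(10000, 50, 0.75); // 10,000 m^3 behind a 50 m head
const pvDayKWh = pvDailyEnergyKWh(200, 5.5, 0.2);         // 200 m^2 of 20%-efficient panels
```
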
Explore the innovative world of trenchless pipe repair with our comprehensive guide, "The Benefits and Techniques of Trenchless Pipe Repair." This document delves into the modern methods of repairing underground pipes without the need for extensive excavation, highlighting the numerous advantages and the latest techniques used in the industry.
Learn about the cost savings, reduced environmental impact, and minimal disruption associated with trenchless technology. Discover detailed explanations of popular techniques such as pipe bursting, cured-in-place pipe (CIPP) lining, and directional drilling. Understand how these methods can be applied to various types of infrastructure, from residential plumbing to large-scale municipal systems.
Ideal for homeowners, contractors, engineers, and anyone interested in modern plumbing solutions, this guide provides valuable insights into why trenchless pipe repair is becoming the preferred choice for pipe rehabilitation. Stay informed about the latest advancements and best practices in the field.
Cosmetic shop management system project report.pdf
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it's tough to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. The project includes various function programs to carry out the tasks mentioned above.
Data file handling has been used effectively in the program.
The automated cosmetic shop management system deals with the automation of the general workflow and administration process of the shop. The main processes of the system focus on customer requests, where the system is able to search for the most appropriate products and deliver them to the customers. It helps employees quickly identify the cosmetic products that have reached the minimum quantity, keeps track of the expiry date of each cosmetic product, and helps employees find the rack number in which a product is placed. It is also a faster and more efficient way of working.
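The stock, expiry, and rack checks described above can be sketched as follows; the types and field names are assumptions for illustration, since the report's actual data-file format is not shown here.

```typescript
// Sketch of the inventory checks described in the report: low-stock listing,
// expiry tracking, and rack lookup. Field names are illustrative assumptions.
interface Product {
  name: string;
  quantity: number;
  minQuantity: number;
  expiryTs: number; // expiry date as a timestamp (ms)
  rack: number;
}

// Products at or below their minimum quantity (need reordering).
const lowStock = (ps: Product[]) => ps.filter(p => p.quantity <= p.minQuantity);
// Products whose expiry date has passed.
const expired = (ps: Product[], nowTs: number) => ps.filter(p => p.expiryTs <= nowTs);
// Rack number for a named product, if stocked.
const rackOf = (ps: Product[], name: string) => ps.find(p => p.name === name)?.rack;

const shelf: Product[] = [
  { name: "rose lip balm", quantity: 2, minQuantity: 5, expiryTs: 2000, rack: 3 },
  { name: "aloe day cream", quantity: 40, minQuantity: 10, expiryTs: 500, rack: 7 },
];
```
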
This article can be downloaded from http://www.ijerst.com/currentissue.php
ISSN 2319-5991 www.ijerst.com
Vol. 8, Issue 3, July 2020
Industrial Internet of Things Environments: Assessing a Semantic
Thing-to-Service Matching Method
L.M.L. NARAYANA REDDY, CHAKKA REDDY USHA
Abstract—
Platforms for the Industrial Internet of Things make it possible to make better decisions based on the data at hand, which in turn boosts efficiency in manufacturing and other commercial operations. However, such platforms are often proprietary and coupled with particular IoT gear, so data interchange and provisioning between the data sources and platform services continue to be an issue. As a result, we propose and describe in depth an open-source software-based solution called Thing-to-Service Matching (TSMatch), which enables semantic matching at a fine-grained level between accessible IoT data and services. The report also includes an assessment of the proposed solution's performance in a testbed setting and details its deployment in two distinct aerospace production scenarios.
Keywords
IoT data sources, semantic matching, use cases, IoT applications.
INTRODUCTION
The ability to use data available in industrial settings
to enhance production and business operations is
only one of the many advantages offered by
Industrial Internet of Things (IoT) systems.
Although data collection has improved, it is still
difficult to analyse and use the information gained.
In order to function properly, IoT systems need
densely populated IoT infrastructures filled with
sensors and actuators. Because IoT platforms are
often vendor-specific, proprietary, and tied to certain
pieces of IoT hardware or cyber-physical systems,
setting them up and ensuring they receive proper
maintenance may be a challenging task. Carrying the
data to the Cloud may be time consuming, error
prone, and/or expensive [1] due to the necessity for
extra, specialised human intervention. Due to the
vendor-based approach, the current IoT ecosystem is
fragmented, making interoperability a crucial issue
to address, whether from a communication or an
application standpoint.
This study is concerned with the second problem and
hopes to develop a solution that will facilitate the
automatic flow of data between IoT Things (data
sources) and the services offered by an IoT platform.
To address this problem, we have developed a piece
of open-source software called Thing to Service
Matching (TSMatch). TSMatch is able to execute
semantic matching between services and acquired
IoT data at a fine granularity. To that end, TSMatch
automates the matching and provisioning process to
reduce the complexity of connecting preexisting IoT
networks to third-party services. The contributions
of this work are (i) a description of the open-source
TSMatch engine,
(ii) proof that deploying TSMatch in manufacturing
scenarios with realistic operating circumstances is
possible, and (iii) an evaluation of TSMatch in a
simulated IIoT setting (testbed). The paper will
proceed as described below. Following this
introductory portion, the associated work is detailed
in section II, and the major software components of
TSMatch are introduced in section III. Section IV
gives a thorough breakdown of the actual
implementation. Additionally, section V of the
article shows that the system may be easily
integrated with other IoT systems such as EFPF. The
additional processing time and time to completion
components of TSMatch are then assessed in an
experimental setting. In section VI, we give the
findings and talk about them. Section VII provides a
summary and directions for moving forward.
Assistant professor1,2
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
P.B.R.VISVODAYA INSTITUTE OF TECHNOLOGY & SCIENCE
S.P.S.R. NELLORE DIST., A.P., INDIA, KAVALI - 524201
RELATED WORK
There are a variety of methods used in service
matching, from the logical [2] to the non-logical [3]
semantic based techniques or a hybrid (combining
logical and non-logical techniques) approach [4].
For instance, in order to realise a global semantic
interoperability solution, Kovacs et al. present the
technological approach and the system architecture,
which involves merging the FIWARE NGSI and the
oneM2M context interfaces. Although this is a
potential technique, it is only discussed in theory and
not tested in any real-world or experimental settings
[5]. Further, Cassar et al. provide a matching
system that incorporates both semantic and
probabilistic matchmaking. The accuracy and
Normalized Discounted Cumulative Gain of the
suggested method were computed taking into
account a dataset of 1007 Web service descriptions
in OWL-S. The notion has been validated using
simulations only; nevertheless, the suggested strategy shows
better performance than previous approaches [6].
We have introduced the approach and the TSMatch
idea in past work. Our previous work laid the
groundwork for our present effort, which focuses on
validating TSMatch in real-world settings once it
has been specified and implemented [7].
TSMATCH ARCHITECTURE
TSMatch is a software platform for semantic
matching of Internet of Things (IoT) data to
services. TSMatch's primary objective is to
automatically deliver data between IoT data sources
and services, while meeting the requirements of the
services. Presently, TSMatch employs a matching
strategy based on semantic similarity to accomplish
this goal. The main actors in TSMatch are shown in
Figure 1. IoT Things are meaningful representations of the many
sensors that make up the Internet of Things. At
present, TSMatch's Engine is a server-based
component that may be hosted anywhere from
the network's edge to an IoT gateway or even in the
cloud. The Thing registry is analogous to a database
where details on Things are saved on a regular basis.
The TSMatch Client is the programme that the end
user downloads and installs on their device, such as
an Android phone. The user is synonymous with the
recipient of an Internet of Things service, which
might be a person, a business, or even a piece of
software.
Fig. 1. Sequence diagram of the various actors involved in
TSMatch.
As seen in Figure 1, the first step involves the IoT
Things broadcasting their descriptions when
activated. TSMatch Engine is able to register things
in the Things registry and subscribe to events that
occur inside them. This is not an internal part of the
TSMatch semantic matching engine, but rather is
provided by Coaty in this instance. The TSMatch
Engine may also approach a third party, such a
broker, for descriptions of IoT devices. That is to
say, it may be set up in the role of a subscriber if
necessary. The TSMatch client receives a service
request from a user and transmits it to the TSMatch
Engine in a separate process. This may be used, for
example, to take a temperature reading in a
particular room. TSMatch uses similarity matching
to determine which IoT Things in the registry best
fit the request and the available descriptions in order
to provide an appropriate response to the service
request. In the event of a successful match, the data
gleaned is sent to the TSMatch Client.
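The advertise/request flow of Figure 1 can be sketched as a minimal in-memory registry. The description format and the keyword-overlap match rule below are simplified assumptions; TSMatch's real engine uses Coaty events and a semantic similarity measure, described later in the paper.

```typescript
// Minimal sketch of the Figure 1 flow: Things advertise descriptions into a
// registry, and a service request is matched against them. The keyword-overlap
// rule is a simplified stand-in for TSMatch's semantic similarity matching.
interface ThingDescription { id: string; description: string; }

class ThingRegistry {
  private things = new Map<string, ThingDescription>();
  advertise(t: ThingDescription): void { this.things.set(t.id, t); }  // Thing announces itself
  deadvertise(id: string): void { this.things.delete(id); }           // Thing goes offline
  match(request: string): ThingDescription[] {
    const words = request.toLowerCase().split(/\s+/);
    return Array.from(this.things.values()).filter(t =>
      words.some(w => t.description.toLowerCase().includes(w))
    );
  }
}

const registry = new ThingRegistry();
registry.advertise({ id: "t1", description: "temperature sensor, room A" });
registry.advertise({ id: "t2", description: "humidity sensor, room B" });
const hits = registry.match("monitor temperature"); // matches t1 only
```
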
TSMATCH IMPLEMENTATION CONSIDERATIONS
In this section, we'll go through how TSMatch's
various parts and interfaces are currently being
implemented. All of TSMatch's parts are now
dockerized since it is built on a micro-service
design. The TSMatch engine in this implementation
is hosted on an Internet of Things (IoT) gateway,
while the client is built for Android. Wi-Fi is used
for all of the interaction between the TSMatch
components and the rest of the IoT architecture.
After that, we'll go through each individual part.
Devices Connected to the Internet of
Things
The OGC SensorThings API [8] has been used to
model the IoT Things. Information on the sensors,
including their attributes and whereabouts, is
included in the description of the Thing. Coaty, an
application-layer communication platform, is used
to spread the word. Coaty allows Internet of Things
devices to broadcast their semantic descriptions via
4. This article can be downloaded from http://www.ijerst.com/currentissue.php
multicast, and it alerts subscribers when the device
goes offline by broadcasting a "deadvertized" event.
In order to find out what's out there, the TSMatch
Engine sends out a "discover" event and receives a
response with the thing's description. Semantic
representations of cyber physical systems using
heterogeneous hardware, such as a sensor, a single-
board computer (SBC), or a programmable logic
controller (PLC) coupled with sensors via its
Input/output interface, relate to IoT Things. The
sensor driver and a Coaty agent are the software
components that make up the IoT cyber-physical
system and are responsible for handling tasks like
publishing information about an IoT Thing.
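A Thing description in the style of the OGC SensorThings model (a Thing with properties and associated datastreams) might look roughly like the object below. The exact fields are a loose illustration, not a normative SensorThings payload.

```typescript
// Loose illustration of an OGC SensorThings-style Thing description as a
// TypeScript object. This is a simplified sketch, not a normative payload.
interface Datastream { name: string; observedProperty: string; unitOfMeasurement: string; }
interface ThingDesc {
  name: string;
  description: string;
  properties: { location: string };
  datastreams: Datastream[];
}

const thing: ThingDesc = {
  name: "env-node-01",
  description: "Environmental sensing node on a single-board computer",
  properties: { location: "Production hall, line 2" },
  datastreams: [
    { name: "air-temp", observedProperty: "Temperature", unitOfMeasurement: "degC" },
    { name: "rel-hum", observedProperty: "Humidity", unitOfMeasurement: "%" },
  ],
};
```
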
TSMatch System
Semantic matching between IoT Thing descriptions
and service descriptions is implemented on the
server side via the TSMatch engine. Requests for
services are processed by the engine after being sent
by the TSMatch Client. It also manages the
matching process between the characteristics and
attributes of Things descriptions and the semantic
description of services based on queries sent to the
TSMatch Thing registry. A Sørensen–Dice
coefficient and term frequency-inverse document
frequency (TF-IDF) are used to determine how similar two
documents are semantically. Once a matched set has
been determined, the accessible Things inside it are
gathered together and an aggregated object
representing this new set is again persisted in the
database. The TSMatch Client receives the matching
result and displays it to the user regardless of
whether a match was found. If a match is detected,
the engine will launch an event that subscribes to
data from the chosen sensors and determines an
average value for that data set based on the chosen
location. The TSMatch Client is then updated with
the new information. In the event that a request is
removed, TSMatch will discontinue subscribing the
TSMatch client to get updates on the IoT Things
observations from the broker. The TSMatch Engine
has been built with Node.js and the TypeScript
programming language. We have
used the string-similarity and natural packages from
npm to implement the
Sørensen–Dice similarity.
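The Sørensen–Dice computation can be illustrated without the npm packages as follows. This standalone character-bigram version is a sketch of the same idea, not the string-similarity package's exact implementation.

```typescript
// Standalone sketch of Sørensen–Dice similarity over character bigrams:
// 2 * |bigrams(a) ∩ bigrams(b)| / (|bigrams(a)| + |bigrams(b)|).
// Illustrative only; not the string-similarity package's implementation.
function bigrams(s: string): Map<string, number> {
  const counts = new Map<string, number>();
  const t = s.toLowerCase();
  for (let i = 0; i < t.length - 1; i++) {
    const b = t.slice(i, i + 2);
    counts.set(b, (counts.get(b) ?? 0) + 1);
  }
  return counts;
}

function diceSimilarity(a: string, b: string): number {
  const ba = bigrams(a), bb = bigrams(b);
  let overlap = 0, totalA = 0, totalB = 0;
  ba.forEach((n, g) => { overlap += Math.min(n, bb.get(g) ?? 0); totalA += n; });
  bb.forEach(n => { totalB += n; });
  return totalA + totalB === 0 ? 0 : (2 * overlap) / (totalA + totalB);
}
```

The score is 1 for identical strings and falls toward 0 as the strings share fewer bigrams, which makes it a convenient symmetric measure for short descriptions.
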
TSMatch Client
The TSMatch Client takes care of the service's
semantic description, either by constructing one in
response to a user's request (such as "monitor
temperature") or by retrieving one from a distant
location. With the help of the React Native
JavaScript framework, we have created a mobile
application called the TSMatch Client. In its present
state, the client only supports Android 4.1 and later.
TSMatch subscribes to an MQTT broker during
launch. Users may browse a catalogue of available
Things along with detailed descriptions, articulate
their needs, and watch as their data is updated in real
time.
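A service request such as "monitor temperature" could be carried as a small MQTT message. The topic layout and payload shape below are illustrative assumptions, since the paper does not specify TSMatch's wire format.

```typescript
// Illustrative construction of a service-request message as the client might
// publish it to the MQTT broker. Topic layout and payload shape are assumptions;
// the paper does not specify TSMatch's actual wire format.
interface ServiceRequest { requestId: string; query: string; location?: string; }

function buildRequestMessage(req: ServiceRequest): { topic: string; payload: string } {
  return {
    topic: `tsmatch/requests/${req.requestId}`, // hypothetical topic layout
    payload: JSON.stringify(req),               // JSON-encoded request body
  };
}

const msg = buildRequestMessage({
  requestId: "r-42",
  query: "monitor temperature",
  location: "room A",
});
```
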
Information Management, Search, and
Sharing Concerning Things
Coaty v2.0 allows IoT Things to register, declare
themselves, be discovered, and communicate with
the end user (the subscriber). An MQTT broker built
on top of Mosquitto v2.0.11 facilitates the
conversation. The IoT descriptions are kept in a
database that uses the JSONB data type for binary
storage and retrieval of JSON objects. This database
is built on PostgreSQL 13.2. Docker images built
from the official PostgreSQL source code have been
utilised.
Using TSMatch on EFPF Applications
Industry 4.0, the Internet of Things (IoT), artificial
intelligence (AI), big data, and digital manufacturing
are all represented in the European Connected
Factory Platform for Agile Manufacturing (EFPF)
ecosystem. The foundation of EFPF is an open,
standardised "Data Spine" that allows for the
seamless integration of various systems, platforms,
tools, and services. The various EFPF parts are
supplied by different companies and coordinate their
efforts through the system-wide Data Spine. As a
result, the EFPF ecosystem is built in a SOA fashion.
The EFPF is comprised of the Data Spine, the EFPF
Web-based platform (which provides unified access
to various tools and services through a Web-based
portal), the base digital platforms (four base
platforms funded by the European Commission's
Horizon 2020 programme), and the external
platforms (platforms connected to the EFPF
ecosystem that address the specific needs of
connected smart factories). Currently, TSMatch is
one of EFPF's integrated components. Together with
the EFPF partner Nextworks' IoT automation
platform Symphony Factory Edition (one of the
External Platforms within the EFPF federation), its
integration has been tested in production scenarios.
Symphony is an end-to-end IoT platform with a
flexible design that allows it to work with a broad
variety of IoT sensors and actuators, among other
types of heterogeneous hardware. Using the EFPF
Data Spine, the IoT Symphony platform has been
integrated with TSMatch for provisioning and data
exchange with the production environments'
available IoT data. With input from Walter Otto
Muller & Co. KG (WOM) and Innovint Aircraft
Interior GmbH (IAI), EFPF has integrated and
deployed TSMatch in three aerospace
manufacturing use-cases. The applications
include: 1) maintaining a constant temperature and
humidity in a factory to meet component tolerances
(WOM); 2) monitoring raw materials in a freezer to
prevent waste from excessive heat (IAI); and 3)
keeping tabs on a vacuum former from afar to take
corrective action as soon as pressure values deviate
from acceptable ranges (IAI).
All three scenarios have aimed to protect the
consistency and high quality of manufacturing
operations by keeping tabs on critical process
variables and sounding alerts if certain limits are
breached. By using IoT Things, we are able to
collect data on the relevant environmental
characteristics, which are then sent to the external
platform Symphony by means of TSMatch. As can
be seen in Figure 2, the Symphony Factory
Connector makes use of the TSMatch Client to
make service requests for IoT Things. TSMatch's
data is consumed by a Cloud instance of the
Symphony IoT automation platform via the EFPF
Data Spine, which provides services with
interoperable security capabilities. The Symphony
HAL (a software module that primarily abstracts the
low-level details of various heterogeneous fieldbus
technologies and provides a common interface to its
users), Symphony Data Storage, and Symphony
Visualization provide the visual monitoring, sensor
data and event storage, signal analysis, and alarm
systems for the Things. Actions, such as control
actions on field-level devices, notifications (emails,
SMS), and alarms (via a stack light, which provides
visual and audible indications), are determined by
the Symphony Event Reactor after merging data
from various sources and data brokers (e.g., TSMatch).
In the Cloud deployment of the platform, the real-
time data and the current state of the thresholds and
alerts may be seen and managed using the
Symphony GUI.
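The limit-checking behaviour described for the use-cases (alerts when a monitored process variable leaves its acceptable range) can be sketched as a simple threshold monitor. The ranges and decision rule below are invented examples, not the Symphony Event Reactor's actual logic.

```typescript
// Simple threshold monitor sketching the alerting behaviour described for the
// use-cases: raise an alarm message when a monitored variable leaves its
// acceptable range. Ranges and the rule are invented examples, not Symphony's logic.
interface Limits { low: number; high: number; }

// Returns an alarm message when the value is out of range, or null when it is fine.
function checkReading(variable: string, value: number, limits: Limits): string | null {
  if (value < limits.low) return `${variable} below ${limits.low}`;
  if (value > limits.high) return `${variable} above ${limits.high}`;
  return null; // within range: no alarm
}

const tempAlarm = checkReading("temperature", 31.2, { low: 18, high: 26 });
const humidityOk = checkReading("humidity", 45, { low: 30, high: 60 });
```
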
Fig. 2. EFPF interconnected components on the
manufacturing use-cases.
As illustrated in Figure 3, a successful deployment
has been done on production sites. The EFPF
components, including TSMatch, have therefore
been validated based on the defined requirements,
usability aspects, as well as ease of
installation/configuration. After six months of
deployment, traceability and detection of
abnormalities have also been achieved, which
increased the reliability of the manufacturing
process, reduced delivery delays, and minimized
rejects/waste occurrences.
Fig. 3. Installation of the use-cases.
PERFORMANCE EVALUATION
AND RESULTS
TSMatch Testbed
Fig. 3. The TSMatch demonstrator at the fortiss IIoT Lab.
We have established a TSMatch testbed in the
fortiss IIoT Lab based on the operational
requirements generated from the TSMatch
integration in real-world industrial use-cases. As
shown in Figure 3, the testbed
consists of the following parts:
Two real Internet of Things (IoT) devices and ten
simulated IoT devices are available, with the former
sporting a total of five sensors (for measuring things
like temperature, humidity, sound, air quality, and
particles in the air). According to section IV, each
virtual IoT thing is linked to a virtual IoT sensor,
each of which is housed in its own Docker container.
• TSMatch Engine, Message Broker, and Database:
the TSMatch Engine is containerized, and the Mosquitto
message bus and PostgreSQL database are also
deployed as containers on the fortiss IoT gateway.
• Using an MQTT client, the IoT service request
simulator mimics third-party IoT services. The
simulator was used instead of the TSMatch Client
because it allows precise regulation
of the inter-request time gap.
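Precise control of the inter-request gap can be sketched by precomputing a send schedule before publishing, as a simulator might do. The shapes and names below are illustrative, not the actual simulator.

```typescript
// Sketch of a request schedule with a fixed inter-request gap, as a service
// request simulator might precompute before publishing over MQTT.
// Types and field names are illustrative assumptions.
interface ScheduledRequest { seq: number; sendAtMs: number; query: string; }

function buildSchedule(count: number, gapMs: number, query: string): ScheduledRequest[] {
  return Array.from({ length: count }, (_, i) => ({
    seq: i,
    sendAtMs: i * gapMs, // constant inter-request time gap
    query,
  }));
}

const plan = buildSchedule(5, 200, "monitor temperature"); // 5 requests, 200 ms apart
```
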
Fig. 4. TSMatch components and experimental
interconnections.
The hardware specifications for each of the devices
that are part of the demonstrator are given in Table I.
TABLE I. MEAN PT AND TTC OF SEQUENTIAL AND SIMULTANEOUS SCENARIOS.
CONCLUSION
TSMatch is a new approach described in this study
for semantically matching IoT data sources and
service descriptions. We outline the TSMatch
software architecture and discuss its practical
applications in the industrial contexts of the
European EFPF project settings. The report also
includes a first performance assessment of TSMatch
and details the existing open-source TSMatch
implementation's processing time and time to
completion of requests. In order to identify
"most" appropriate collection of IoT Things whose
aggregated data may satisfy a certain semantic
request, future work will concentrate on enhancing
the semantic matching engine by including a more
intelligent parsing and also learning.
REFERENCES
[1] A. Verma and S. Kaushal, "Cloud computing security issues and challenges: a survey," in International Conference on Advances in Computing and Communications, Springer, Berlin, pp. 445-454, July 2011.
[2] A. Segev and E. Toch, "Context-Based Matching and Ranking of Web Services for Composition," IEEE Transactions on Services Computing, vol. 2, no. 3, pp. 210-222, July-Sept. 2009.
[3] H. Fethallah, A. Chikh, and A. Belabed, "Automated discovery of web services: an interface matching approach based on similarity measure," in Proceedings of the 1st International Conference on Intelligent Semantic Web-Services and Applications, pp. 1-4, 2010.
[4] M. Klusch and K. Patrick, "iSeM: Approximated reasoning for adaptive hybrid selection of semantic services," in Extended Semantic Web Conference, Springer, Berlin, Heidelberg, pp. 30-44, 2010.
[5] E. Kovacs, M. Bauer, J. Kim, J. Yun, F. Le Gall and M. Zhao, "Standards-Based Worldwide Semantic Interoperability for IoT," IEEE Communications Magazine, vol. 54, no. 12, pp. 40-46, December 2016.
[6] G. Cassar, P. Barnaghi, W. Wang and K. Moessner, "A Hybrid Semantic Matchmaker for IoT Services," 2012 IEEE International Conference on Green Computing and Communications, pp. 210-216, 2012.
[7] N. Bnouhanna, R. C. Sofia, and A. Pretschner, "IoT Thing-to-Service Semantic Matching," 2021 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops), pp. 418-419, March 2021.
[8] S. Liang, C.-Y. Huang, and T. Khalafbeigi, "OGC SensorThings API Part 1: Sensing, Version 1.0," 2016.
[9] I. Martens, D9.1 - Implementation and Validation through Pilot-1. [Deliverable] https://www.efpf.org/deliverables, 2021.
[10] I. Martens, D9.2 - Implementation and Validation through Pilot-2. [Deliverable] https://www.efpf.org/deliverables, 2021.
[11] I. Martens, D9.3 - Implementation and Validation through Pilot-3. [Deliverable] https://www.efpf.org/deliverables, 2021.