The document proposes a feedback-based ranking system to maximize utilization of a testbed. It develops an algorithm that initially assigns all platforms a high score of 1, then updates scores based on user feedback. A decision tree is used to select the highest-scoring available platform for the next user. Data from 500 users on service quality is analyzed to evaluate the testbed's reliability when user feedback scores differ from actual performance logs. Statistical analysis shows the testbed reliably performed at a moderate or high level even when user feedback scores were low or moderate, demonstrating high utilization.
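The scoring scheme described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's actual algorithm: the update rule (a running average of feedback, initialised at the top score of 1) and all names are assumptions.

```python
# Minimal sketch of the feedback-based platform ranking described above.
# The paper's exact update rule is not given here, so a running average of
# feedback scores (each platform initialised at 1.0) is assumed.

def pick_platform(scores, available):
    """Select the highest-scoring available platform for the next user."""
    return max(available, key=lambda p: scores[p])

def update_score(scores, counts, platform, feedback):
    """Fold a new feedback value (0..1) into the platform's running average."""
    counts[platform] += 1
    n = counts[platform]
    scores[platform] += (feedback - scores[platform]) / n

scores = {"A": 1.0, "B": 1.0, "C": 1.0}   # every platform starts at the top score
counts = {p: 0 for p in scores}

update_score(scores, counts, "B", 0.4)     # one poor review drags B down
print(pick_platform(scores, scores))
```

After the single low score, platform B drops to 0.4 and the selector falls back to one of the still-untarnished platforms.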
This document discusses continuous sampling plans (CSP) as an alternative to lot acceptance sampling plans (LASP) for quality control during manufacturing. CSP involves continuously inspecting units at a specified frequency (f) until a clearing number (i) of defect-free units is reached, at which point inspection drops to the specified frequency. If a defect is found, inspection returns to 100% until the clearing number is reached again. The key parameters of a CSP are the frequency, clearing number, and resulting average outgoing quality (AOQ) and average fraction inspected (AFI). An example illustrates how a custom cart builder could use CSP on dynamometer testing to ensure cart specifications are met and analyze quality levels.
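The CSP-1 mechanics described above (100% inspection until `i` consecutive conforming units, then sampling at frequency `f`, reverting on any defect) can be simulated to estimate the average fraction inspected empirically. The parameter values below are illustrative, not from the document.

```python
import random

# Illustrative simulation of a CSP-1 plan: 100% inspection until i consecutive
# conforming units are seen, then sample a fraction f of units; any defect
# found sends the plan back to 100% inspection.

def csp1_inspected_fraction(p, i=10, f=0.1, n=100_000, seed=0):
    """Empirical average fraction inspected (AFI) at incoming defect rate p."""
    rng = random.Random(seed)
    clearing, inspected = 0, 0
    for _ in range(n):
        defective = rng.random() < p
        if clearing < i:                      # 100% inspection phase
            inspected += 1
            clearing = 0 if defective else clearing + 1
        elif rng.random() < f:                # sampling phase
            inspected += 1
            if defective:
                clearing = 0                  # revert to 100% inspection
    return inspected / n

print(round(csp1_inspected_fraction(p=0.02), 3))
```

At good quality the AFI approaches `f`; as the defect rate rises, the plan spends more time in the 100% phase and the AFI climbs toward 1.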
A report on designing a model for improving CPU Scheduling by using Machine L... (MuskanRath1)
Disclaimer: please let me know if any portion of this article matches your research; I will add a link to your research in the description section of the article.
Description:
Our paper proposes a model for improving CPU scheduling on a uniprocessor system. The model is implemented in a low-level (assembly) language on Linux, which was chosen because it is open source and its kernel can be modified.
Several methods exist to predict the length of CPU bursts, such as exponential averaging; however, they may not give accurate or reliable predictions. In this paper we propose a Machine Learning (ML) based approach to estimate the length of CPU bursts for processes. We use Bayesian theory as a classifier that decides which process in the ready queue executes first. The proposed approach selects the most significant attributes of the process using feature-selection techniques and then predicts the CPU burst for the process in the grid. Furthermore, applying attribute selection improves performance in terms of space, time, and estimation accuracy.
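A Bayesian classifier of the kind the abstract describes could look like the following. This is a hedged sketch: a tiny Gaussian naive Bayes stands in for the paper's classifier, and the features (previous burst length, I/O operation count) and the toy data are invented for illustration.

```python
import math

# Sketch of a Bayesian classifier labelling processes by expected CPU-burst
# class ("short" vs "long"). Gaussian naive Bayes is assumed; the paper's
# actual model and features may differ.

def fit(X, y):
    """Per-class prior, feature means, and feature variances."""
    stats = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        vars_ = [sum((v - m) ** 2 for v in col) / n + 1e-9
                 for col, m in zip(zip(*rows), means)]
        stats[c] = (n / len(y), means, vars_)
    return stats

def predict(stats, x):
    """Class with the highest log posterior under the Gaussian model."""
    def log_post(c):
        prior, means, vars_ = stats[c]
        ll = sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
                 for xi, m, v in zip(x, means, vars_))
        return math.log(prior) + ll
    return max(stats, key=log_post)

# toy data: (previous_burst_ms, io_ops) -> burst class
X = [(2, 9), (3, 8), (2, 10), (20, 1), (25, 0), (22, 2)]
y = ["short", "short", "short", "long", "long", "long"]
model = fit(X, y)
print(predict(model, (3, 9)))
```

A scheduler could then favour processes classified as short-burst, approximating shortest-job-first without exact burst knowledge.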
This document defines key concepts in measurement system analysis including accuracy, precision, stability, bias, repeatability, and reproducibility. It provides guidelines for conducting a measurement system analysis, including determining the number of appraisers and parts to measure, ensuring the measurement procedure is documented and followed, and analyzing the results in terms of stability, bias, and gauge R&R to determine if the measurement system is capable and can be used for decision making. The goal is to qualify measurement systems and identify opportunities for improvement.
Aco based solution for tsp model for evaluation of software test suite (IAEME Publication)
The document discusses evaluating software test suites using an ant colony optimization (ACO) approach. It proposes formulating the testing problem as a traveling salesman problem that can be solved using ACO. Specifically:
1) It describes generating test cases using equivalence class partitioning and discusses tools that can automatically generate test cases.
2) It explains how ACO can be used to execute the test suite by modeling it as a TSP and having "ants" find optimal test case execution orders.
3) It provides pseudocode for the ACO algorithm to evaluate the test suite and discusses updating pheromone trails to bias testing toward high quality solutions.
Aco based solution for tsp model for evaluation of software test suite (IAEME Publication)
This document discusses using ant colony optimization (ACO) to evaluate software test suites. It begins by introducing software testing and describing how test cases are generated and test suites created. It then proposes using ACO to execute the test suite by formulating it as a traveling salesman problem (TSP) and having "ants" find optimal paths through test cases. The paper outlines the ACO algorithm and applies it to a sample test suite evaluation. It evaluates the accuracy and efficiency of the approach using metrics like precision, recall, iterative best cost, and average node branching. The technique is shown to evaluate test suites more efficiently than other algorithms like Dijkstra's algorithm.
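The pheromone-guided tour construction the paper applies to test-case ordering follows the standard ACO recipe: ants build tours probabilistically, pheromone evaporates, and good tours deposit more pheromone. The sketch below solves a small symmetric TSP with conventional parameter choices (alpha, beta, rho); it illustrates the algorithm, not the paper's exact variant.

```python
import math
import random

# Minimal ant colony optimization for a symmetric TSP, of the kind the paper
# uses to find test-case execution orders. Parameter values are conventional
# ACO defaults, not taken from the paper.

def aco_tsp(dist, n_ants=10, iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=1):
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]            # pheromone matrix
    rng = random.Random(seed)
    best_len, best_tour = math.inf, None
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:                   # probabilistic next-city choice
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                     for j in cand]
                tour.append(rng.choices(cand, weights=w)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((length, tour))
            if length < best_len:
                best_len, best_tour = length, tour
        tau = [[t * (1 - rho) for t in row] for row in tau]   # evaporation
        for length, tour in tours:                 # deposit: shorter tour, more pheromone
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
    return best_len, best_tour

dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
print(aco_tsp(dist))
```

For test-suite evaluation, the "distance" between two test cases would be some cost of running one after the other; the ACO machinery is unchanged.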
This document discusses measurement system analysis (MSA), which is used to evaluate statistical properties of process measurement systems. MSA determines if current measurement systems provide representative, unbiased and minimal variability measurements. The document outlines the MSA process, including preparing for a study, evaluating stability, accuracy, precision, linearity, and repeatability and reproducibility. Accuracy looks at bias while precision considers repeatability and reproducibility. MSA is required for certification and helps identify process variation sources and minimize defects.
The document summarizes key concepts related to failure and repair rates in manufacturing industries. It defines reliability as the probability a system will perform as intended without failure for a given period of time. Availability accounts for both reliability and how quickly a system can be repaired. It also defines failure rate, repair rate, and different types of availability like point availability and mean availability. Maintainability is defined as how easily and quickly a system can be restored after failure.
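The relationship between the failure rate, repair rate, and availability mentioned above can be made concrete with the standard steady-state formula. The example numbers below are illustrative.

```python
# Steady-state availability from the failure rate (lam) and repair rate (mu)
# defined above: A = mu / (lam + mu), equivalently MTBF / (MTBF + MTTR).

def availability(lam, mu):
    return mu / (lam + mu)

# e.g. one failure per 1000 h (lam = 0.001) repaired in 10 h on average
# (mu = 0.1) gives roughly 99% availability
print(round(availability(0.001, 0.1), 4))
```

Shortening repair time (raising `mu`) improves availability even when reliability itself, governed by `lam`, is unchanged, which is the point the summary makes about maintainability.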
The document discusses process capability and defines key terms related to process capability. It provides the standard formula for process capability using 6 sigma and explains how process capability is compared to specification limits. It then discusses different process capability indices including Cp, Cpk, and Cpm. It explains how these indices measure both potential and actual process capability. The document also discusses limitations of the Cp index and the use of Cpk to address process centering. It describes how to calculate confidence intervals for process capability ratios and discusses some key process performance metrics.
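A worked example of the indices discussed above: Cp compares the specification width to six process standard deviations, while Cpk penalises a process that is off-centre. The data and specification limits below are invented for illustration.

```python
import statistics

# Cp and Cpk for a sample against lower/upper specification limits.
# Cp = (USL - LSL) / (6 sigma); Cpk = min(USL - mu, mu - LSL) / (3 sigma).

def cp_cpk(data, lsl, usl):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)      # sample estimate of process sigma
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1]
cp, cpk = cp_cpk(data, lsl=9.0, usl=11.0)
print(round(cp, 2), round(cpk, 2))
```

Here Cpk comes out slightly below Cp because the sample mean sits a little below the midpoint of the specification; Cpk always satisfies Cpk ≤ Cp, with equality only for a perfectly centred process.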
This document summarizes a knowledge engineering approach using analytic hierarchy process (AHP) to resolve conflicts between experts in risk-related decision making. It proposes using a modified version of AHP to increase transparency in the analysis procedure. This allows identification of major causes of inter-expert discrepancy, which are differences in unstated assumptions and subjective weightings of risk factors. The document demonstrates how AHP can systematically decompose complex decision problems, evaluate alternatives based on multiple criteria, and aggregate results to provide an overall evaluation that incorporates differing expert opinions in a consistent manner.
The statistical Confidence Level (C.L.) is the probability that the corresponding confidence interval covers the true ( but unknown ) value of a population parameter. Such confidence interval is often used as a measure of uncertainty about estimates of population parameters
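The confidence-interval idea stated above, numerically: a 95% interval for a population mean from a sample. For simplicity the normal approximation (z = 1.96) is used rather than the t distribution; the sample data is invented.

```python
import math
import statistics

# 95% confidence interval for a population mean, normal approximation:
# mean +/- z * (s / sqrt(n)).

def ci_mean(sample, z=1.96):
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return m - z * se, m + z * se

sample = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7, 12.0, 12.1]
lo, hi = ci_mean(sample)
print(round(lo, 2), round(hi, 2))
```

The interval is a statement about the procedure, not any single interval: over repeated sampling, about 95% of intervals built this way cover the true mean.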
Atlason et al, 2003 WSC_Subgradient Approximation (Michael Beyer)
This document discusses using simulation to approximate subgradients of convex performance measures in service systems. Specifically, it examines approximating subgradients of a discrete, convex service level function that is evaluated via simulation of a call center model. It considers three existing methods for estimating gradients via simulation - finite differences, likelihood ratio, and infinitesimal perturbation analysis - and how they could be applied to approximate subgradients when the variables are discrete numbers of agents. It provides analysis of each method and a computational study comparing their properties for approximating subgradients of the service level function.
DETECTION OF RELIABLE SOFTWARE USING SPRT ON TIME DOMAIN DATA (IJCSEA Journal)
In classical hypothesis testing, large volumes of data must be collected before conclusions are drawn, which can take considerable time. Sequential analysis can instead be adopted to decide very quickly whether the developed software is reliable or unreliable; the procedure used for this is the Sequential Probability Ratio Test (SPRT). In the present paper we evaluate the performance of SPRT on time-domain data using a Weibull model and analyze the results by applying it to 5 data sets. The parameters are estimated using Maximum Likelihood Estimation.
This document describes a study that uses Gradient Boosted Decision Trees (GBDT) to predict flight delays. The researchers applied GBDT to flight on-time performance data from the US Department of Transportation to predict departure and arrival delays. They preprocessed the data, selected important features, then used GBDT to build a predictive model. The model was more accurate than other methods at predicting delays based on features like day of week, carrier, origin/destination airports, and scheduled departure/arrival times.
A NOVEL APPROACH FOR TEST CASE PRIORITIZATION (IJCSEA Journal)
This paper proposes a novel approach to test case prioritization that calculates the product of a test case's statement coverage and number of function calls to determine priority. The test cases are ordered based on this product metric, with the highest product value first. An algorithm is presented and evaluations show the approach improves fault detection rates over non-prioritized test cases, as measured by the APFD metric. The approach addresses potential ambiguities when multiple test cases have the same product value by further prioritizing based on number of function calls or execution order. The results demonstrate the effectiveness of the proposed prioritization formula and algorithm.
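The product metric described above is straightforward to express in code. The sketch below follows the summary's description — priority is the product of statement coverage and function-call count, with ties broken by function calls — but the test-case data is invented.

```python
# Sketch of the product-metric prioritisation described above: rank each test
# case by (statements covered x function calls), break ties by function calls.

def prioritize(tests):
    # tests: list of (name, statements_covered, function_calls)
    return sorted(tests,
                  key=lambda t: (t[1] * t[2], t[2]),
                  reverse=True)

tests = [("t1", 40, 3), ("t2", 25, 6), ("t3", 30, 5), ("t4", 50, 2)]
order = [name for name, *_ in prioritize(tests)]
print(order)
```

Here t2 and t3 tie on the product (150), so t2 runs first on its higher call count; running high-product tests early is what drives the APFD improvement the paper reports.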
The document provides an overview of measurement system analysis (MSA) techniques for both variable and attribute gages. It describes the average-range method and ANOVA method for analyzing variable gages, and the short method, hypothesis test analysis, and long method for attribute gages. Acceptability criteria are outlined for determining if a measurement system is capable of measuring process variation.
Business Market Research on Instant Messaging - 2013 (Rajib Layek)
The document discusses a social research study conducted to derive a model for measuring loyalty of IIT Masters and PhD students in using instant messaging on mobile. It aimed to verify if the model used in a Malaysian university study could be applied here. The researchers broke down loyalty into usefulness and satisfaction constructs, then further into variables like network size, perceived usefulness, and more. They tested the reliability and independence of questionnaire responses to validate the constructs. Factor and regression analyses showed satisfaction is defined by attention focus and perceived complementarity, while usefulness is defined by network size, perceived enjoyment and complementarity. The final standardized model showed loyalty is defined by satisfaction and usefulness.
090528 Miller Process Forensics Talk @ ASQ (rwmill9716)
Talk presented to a local ASQ chapter. It dealt with process improvement: continuous measurement system validation and the use of capability metrics for process forensics. A program was also introduced that's been used to optimize spare-parts inventory based on a resampling approach to historical data.
This document discusses system reliability. It defines reliability and explains that a system's reliability depends on the reliability of its individual components as well as how those components are configured. Components can be connected in series or parallel. For series connections, the system reliability is the product of the individual reliabilities. For parallel connections, the system reliability is higher than the individual reliabilities. More complex systems can have both series and parallel components. Having redundant parallel components, like standby components, improves reliability over simple parallel systems. Exponential and Weibull distributions are commonly used to model component failure rates and calculate reliability metrics.
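The series and parallel rules stated above translate directly into code: a series system multiplies component reliabilities, while a parallel system multiplies failure probabilities. The component values below are illustrative.

```python
# Series/parallel system reliability, as described above.
# Series: the system works only if every component works.
# Parallel: the system fails only if every component fails.

def series(rs):
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(rs):
    fail = 1.0
    for r in rs:
        fail *= (1.0 - r)
    return 1.0 - fail

# two 0.9 components: series is worse than either alone, parallel is better
print(series([0.9, 0.9]), parallel([0.9, 0.9]))
```

With two 0.9 components the series system drops to 0.81 while the parallel (redundant) system rises to 0.99, which is the redundancy benefit the summary describes.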
What is MSA?
1. Why we need MSA
2. How to use the data
3. Measurement error sources of variation
• Precision (resolution, repeatability, reproducibility)
• Accuracy (bias, stability, linearity)
4. What is Gage R&R?
5. Explanation of the MSA sheet
The document discusses quality assurance and quality control procedures for laboratory experiments and field work. It describes key elements of a quality assurance program such as trained personnel, proper analytical methods, documentation, calibration, and statistical analysis of data. The document provides references and guidelines for sampling, sample custody, analytical methods, detection limits, and using statistical tools like control charts to verify quality control.
A Comparative study of locality Preserving Projection & Principle Component A... (RAHUL WAGAJ)
The document compares the dimensionality reduction techniques of locality preserving projection (LPP) and principal component analysis (PCA) when used with logistic regression for classification. Five public datasets were used to evaluate the techniques. LPP was found to outperform PCA across all datasets and performance metrics by better preserving the local data structure, which is more important for classification than the global structure preserved by PCA. LPP achieved higher accuracy, sensitivity, specificity, precision, F-score, and area under the ROC curve than PCA for all datasets. The results indicate LPP is an effective dimensionality reduction method for classification tasks when local structure is significant.
International Journal of Computational Engineering Research (IJCER) (ijceronline)
The International Journal of Computational Engineering Research (IJCER) is an international, monthly, English-language online journal. It publishes original research that contributes significantly to scientific knowledge in engineering and technology.
Analysis of single server fixed batch service queueing system under multiple ... (Alexander Decker)
This document analyzes a single server queueing system with fixed batch service, multiple vacations, and the possibility of catastrophes. The system uses a Poisson arrival process and exponential service times. The server provides service in batches of size k. If fewer than k customers remain after service, the server takes an exponential vacation. If a catastrophe occurs, all customers are lost and the server vacations. The document derives the generating functions and steady state probabilities for the number of customers when the server is busy or vacationing. It also provides closed form solutions for performance measures like mean number of customers and variance. Numerical studies examine these measures for varying system parameters.
This document describes a proposed methodology for calculating ratings of a 5S system using five criteria for each of the five S's (Seiri, Seiton, Seiso, Seiketsu, Shitsuke). It provides details on the criteria and calculation methods for determining individual ratings for each S and an overall rating of the 5S system. Graphs are used to visualize the ratings over time and identify weak areas for improvement. The methodology is presented as a simple and effective way to evaluate a 5S system, understand its current effectiveness, and focus improvement efforts on low-scoring aspects to increase the overall rating and efficiency of the 5S system.
This document discusses the need for an open source IoT development environment and testbed to allow software developers to create IoT applications without requiring hardware expertise. It notes that existing IoT testbeds often use proprietary hardware and software, limiting interoperability. The proposed solution aims to provide virtual access to sensors and actuators through an API, as well as a microcontroller platform as a service. This would allow developers to write code without worrying about hardware integration and deployment details. The goal is to make IoT development and testing more accessible through an open testbed that addresses issues like sensor availability and cost.
The document discusses the design and implementation of an Internet of Things (IoT) testbed framework with an enhanced performance approach. It aims to create an open IoT testbed that is accessible locally and over the internet for developers to create and test IoT applications and for data engineers to perform analytics on generated data. The testbed will host a range of sensors and be able to interface with microcontrollers like Arduino and Raspberry Pi to account for heterogeneous devices. It seeks to address challenges with proprietary systems like vendor lock-in and provide solutions for insufficient control, lack of concurrency, and diminished reusability.
The document discusses the design and implementation of an Internet of Things (IoT) testbed framework with an enhanced performance approach. It aims to create an open IoT testbed that is accessible locally and over the internet for developers to create and test IoT applications and for data engineers to perform analytics on generated data. The testbed will host a range of sensors and be able to interface with microcontrollers like Arduino and Raspberry Pi to account for heterogeneous devices. It seeks to address challenges with proprietary systems like vendor lock-in and provide solutions for insufficient control, lack of concurrency, and diminished reusability.
The document lists 57 references related to the Internet of Things (IoT). It covers topics such as the evolution of wireless sensor networks towards IoT, future directions for IoT, clustering techniques in wireless sensor networks, applications of wireless sensors, deployment algorithms for sensor networks, energy efficient routing protocols, performance of sensor network motes, adding value to sensor network simulations, overviews and definitions of IoT, enabling technologies and protocols for IoT, applications of IoT such as smart cities and healthcare, security and privacy issues in IoT, IoT testbeds and experimental platforms, middleware for IoT, and data analytics and management for large-scale IoT systems.
This document contains a list of 7 tables across 4 chapters. The tables summarize the differences between CoAP and MQTT protocols, propose a service mapping scheme, provide examples of data formats and sample sensor readings, perform a scenario-based comparison and measure service performance, reliability, and demand through various matrices and probability functions. The tables collectively analyze IoT service architectures, communication protocols, data generation and usage.
This document contains a list of 7 tables across 4 chapters. The tables summarize the differences between CoAP and MQTT protocols, propose a service mapping scheme, provide examples of data formats and sample sensor readings, perform a scenario-based comparison and analyze services feedback through various matrices to measure performance, reliability, and demand.
The document contains the declaration by Kayalvizhi Jayavel that the work presented in her thesis titled "DESIGN AND IMPLEMENTATION OF INTERNET OF THINGS TESTBED FRAMEWORK- A PERFORMANCE ENHANCED APPROACH" was carried out by her under the supervision of Dr. Revathi Venkataraman. She declares that the work has not been submitted for any other degree and that she has properly cited any works of other researchers that were referenced. Dr. Revathi Venkataraman certifies that the candidate's statements are correct and that a plagiarism check found the thesis contents to be free of plagiarism.
The document discusses providing actuator and sensor access as a service over the internet. It proposes an algorithm for resource requisition that creates locks on actuator instances to prevent multiple simultaneous requests. This ensures actuators can only respond to one command at a time. The algorithm also analyzes request volume to optimize traffic to unavailable resources. An API is developed to abstract away hardware details and provide platform-independent parameter retrieval and actuation. This allows developers to focus on application logic rather than hardware integration.
This document provides a list of 57 references related to the Internet of Things (IoT). The references cover topics such as the evolution of wireless sensor networks towards IoT, future Internet and IoT, clustering techniques in wireless sensor networks for IoT scenarios, civil applications of wireless sensors, deployment algorithms for coverage and connectivity in wireless sensor networks, energy efficient routing techniques for wireless sensor networks, performance analysis of sensor motes used in wireless sensor networks, adding value to wireless sensor network simulations using experimental IoT platforms, overviews of IoT, data fusion and IoT for smart environments, challenges of waste management in IoT-enabled smart cities, enabling IoT technologies and protocols, IoT gateways, semantics for IoT,
The document summarizes a research project that designed and implemented an open IoT testbed framework. The framework includes modules for sensor data, actuators, and APIs. It uses open source platforms to achieve interoperability, scalability, and reusability. Algorithms for code compilation and upload showed improved performance even with increased code size. Response times for remote sensors and actuators were only mildly increased compared to local access. A ranking model tested proved able to recommend the best services for different user types. The testbed was found to satisfactorily utilize resources based on user feedback. Future work could extend it to support mobile devices, include security software, test it with more users, and explore other statistical models.
This document contains the declaration by Kayalvizhi Jayavel that the work presented in her thesis titled "Design and Implementation of Internet of Things Testbed Framework- A Performance Enhanced Approach" was carried out by her under the supervision of Dr. Revathi Venkataraman. She declares that the work has not been submitted for any other degree and that she has properly cited all sources. Dr. Revathi Venkataraman certifies that the candidate's statements are correct and that a plagiarism check found the thesis contents to be within permissible limits.
This document contains a list of abbreviations and symbols used in the paper. It includes over 50 common abbreviations related to Internet of Things, wireless sensor networks, communication protocols, and more. It also defines several symbols used to represent concepts in the paper such as sensors, actuators, platforms, reliability, and various users.
The document discusses analyzing testbed utilization through a feedback-based ranking system. It proposes an algorithm that ranks services based on user feedback scores. Services are initially given a high score of 1, which gets replaced by the new user feedback score after each usage. The algorithm aims to enhance testbed utilization, performance, reliability and usability by allocating the highest ranked services to users. An analysis of 500 users' feedback on various services shows the testbed was reliable even when feedback scores were moderate, accurately predicting user satisfaction over 86% of the time. The most demanded services and combination of services and users are also identified to guide testbed scalability.
This document discusses providing sensor data as a service. It proposes an event collaboration model where sensor data is pushed to a database when it changes, rather than requiring polling. This would allow users to access up-to-date data through queries. The system would contain various sensors that store data in a database, and provide an interface for users to access visualizations and downloads of the sensor data in different formats like CSV and JSON.
The document contains a list of 15 figures referenced throughout a thesis on an IoT testbed architecture. Figure 1.1-1.6 describe common sensors and components used in IoT systems. Figures 3.1-3.9 illustrate the proposed testbed architecture and performance results. Figures 4.1-4.14 provide details on the sensor database design and experimental setup. Figures 5.1-5.16 demonstrate the actuator client, platform as a service prototype, APIs, and performance comparisons. Figures 6.1-6.5 analyze service allocation strategies and expected usage patterns by user types.
The document contains a list of 15 figures referenced throughout a thesis on an IoT testbed architecture. Figure 1.1-1.6 describe common sensors and components used in IoT systems. Figures 3.1-3.9 illustrate the proposed testbed architecture and performance results. Figures 4.1-4.14 provide details on the sensor database design and experimental setup. Figures 5.1-5.16 demonstrate the actuator client, platform as a service prototype, API functions and performance tests. Figures 6.1-6.5 analyze the resource allocation and usage patterns for different user types.
This document provides an introduction to Internet of Things (IoT) testbeds. It discusses that testbeds help validate research findings through real experimental setups, as opposed to simulations alone. IoT is described as an extension of wireless sensor networks, but with some prominent differences that demand exclusive IoT testbed models. The document outlines key IoT concepts like ingredients, features, and characteristics. It emphasizes the need for open source IoT testbeds to improve reusability, scalability, and utilization compared to existing proprietary testbeds primarily designed for wireless sensor networks. The goal of this research is to develop an open source heterogeneous IoT testbed framework with enhanced algorithms.
This document contains lists of abbreviations and symbols used throughout a paper on Internet of Things. There are over 50 abbreviations for concepts, protocols, and technologies related to IoT listed, such as IoT, WSN, MQTT, CoAP, HTTP, and more. There are also over 20 symbols defined for variables, parameters, and mathematical terms used in equations and formulas relating to reliability, performance, and probability distributions for IoT systems.
The document summarizes a research project that designed and implemented an open IoT testbed framework. The framework includes modules for sensor data, actuators, and APIs. It uses open source platforms to achieve interoperability, scalability, and reusability. Algorithms for code compilation and upload showed improved performance even with increased code size. Response times for remote sensors and actuators were only mildly increased compared to local access. A ranking model tested proved able to recommend the best services for different user types. The testbed was found to satisfactorily utilize resources based on performance and reliability feedback. Future work could extend the framework to mobile devices, add security software, test with more users, and explore other statistical models.
This document discusses providing sensor data as a service. It proposes an event collaboration model where sensor data is pushed to a database when it changes, rather than requiring polling. This would allow users to access up-to-date data through queries. The system would contain various sensors that store data in a database, and provide an interface for users to access visualizations and downloads of the sensor data in different formats like CSV and JSON.
This document discusses an open IoT testbed and architectural framework. It describes IoT systems as consisting of interconnected devices that can communicate and exchange data. A core component is embedded systems/devices that include sensors to measure the environment and actuators to perform physical actions. Microcontrollers interface with these devices and communicate via various protocols. The document proposes an open IoT testbed with a control plane that can discover resources/services, orchestrate based on user demands, and resolve conflicts through a lock release model. It provides a functional and detailed architecture for the proposed framework.
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024APNIC
Ellisha Heppner, Grant Management Lead, presented an update on APNIC Foundation to the PNG DNS Forum held from 6 to 10 May, 2024 in Port Moresby, Papua New Guinea.
Gen Z and the marketplaces - let's translate their needsLaura Szabó
The product workshop focused on exploring the requirements of Generation Z in relation to marketplace dynamics. We delved into their specific needs, examined the specifics in their shopping preferences, and analyzed their preferred methods for accessing information and making purchases within a marketplace. Through the study of real-life cases , we tried to gain valuable insights into enhancing the marketplace experience for Generation Z.
The workshop was held on the DMA Conference in Vienna June 2024.
Ready to Unlock the Power of Blockchain!Toptal Tech
Imagine a world where data flows freely, yet remains secure. A world where trust is built into the fabric of every transaction. This is the promise of blockchain, a revolutionary technology poised to reshape our digital landscape.
Toptal Tech is at the forefront of this innovation, connecting you with the brightest minds in blockchain development. Together, we can unlock the potential of this transformative technology, building a future of transparency, security, and endless possibilities.
Bridging the Digital Gap Brad Spiegel Macon, GA Initiative.pptxBrad Spiegel Macon GA
Brad Spiegel Macon GA’s journey exemplifies the profound impact that one individual can have on their community. Through his unwavering dedication to digital inclusion, he’s not only bridging the gap in Macon but also setting an example for others to follow.
CHAPTER 6
TESTBED UTILIZATION ANALYSIS - FEEDBACK BASED RANKING SYSTEM
6.1 TESTBED FEEDBACK BASED RANKING SYSTEM
The purpose of the testbed is fully achieved only at its maximum utilization. In
order to achieve this, and to offer the best performance-based service to a user, a
feedback-based ranking algorithm is proposed. The parameters considered are
sustainability/availability, usability, reliability and performance.
6.2 BACKGROUND
The primary goals of a good testbed include research productivity, ease of
management, a minimal learning curve, a user-friendly graphical user interface,
customizability, time sharing, utilization by the maximum number of users, scalability,
support for a variety of experimentation scenarios, space optimization and
near-realistic results [120] [121] [122]. This research proposal has achieved all of the
listed parameters, as satisfactorily demonstrated in earlier chapters. This chapter
attempts to prove the point on utilization and the variety of users.
Earlier works have developed testbeds, but they have not been explored in terms
of the types of users, the most used services or combinations of services. Testbed
performance can be analyzed based on the utilization factor, in other words the number
of users getting successful access per minute. This can be determined from the number
of users served in a given period of time. Also, services that are expected but not
available can be added or scaled by measuring the need and demand over a given span
of time. Moreover, if one could predict or determine the pattern of types of users versus
services and the most demanded services or groups of services, it would provide
valuable feedback closing the loop, leading to better utilization and performance
enhancement.
This chapter explores how the above-mentioned factors are achieved, thereby
enhancing the utilization, performance, reliability and usability of the proposed
testbed.
6.3 PROPOSED ALGORITHM FOR FEEDBACK BASED RANKING
As proposed, a feedback-based ranking algorithm is developed to allocate the
best services to the users. Scores range between High (1), Medium (0.5) and Low (0.25).
Initially, all platforms are assigned the High score (1). After every usage of the
platform, the user gives feedback, and the previous score of the platform is replaced
with the new value from the most recent user feedback.
6.3.1 Algorithm for Feedback Based Ranking
An algorithm is developed in the proposed research work to arrive at a feedback-based
ranking for the services on offer. The available services are looked up based on
dynamic service discovery. If availability is greater than demand, the services are
ready to be offered. All service platforms are given a score of High (1). The services
are then ordered by the feedback scores assigned based on their performance, and the
highest-performing service is offered to the users.
In case of a tie, it is resolved using decision-tree-based ranking. The user provides
the feedback score based on usage satisfaction with the testbed. The parameters adopted
here are Sustainability/Availability, Reliability/Performance, and Usability. These
values are given as input to the decision tree shown in Figure 6.1. The decision tree
provides the best available platform to the next user based on the score card.
Figure 6.1 Allocation decision tree
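As a concrete illustration, the scoring and allocation steps above can be sketched as follows. This is a minimal sketch: the platform names and the order in which the tie-breaking parameters are compared are assumptions for illustration, not details specified in this work.

```python
# Sketch of the feedback-based ranking allocation. Platform names and the
# tie-break ordering of parameters are illustrative assumptions.

SCORES = {"high": 1.0, "medium": 0.5, "low": 0.25}

class Platform:
    def __init__(self, name):
        self.name = name
        self.score = SCORES["high"]    # every platform starts with High (1)
        # per-parameter feedback, used to break ties as in Figure 6.1
        self.params = {"availability": 1.0, "reliability": 1.0, "usability": 1.0}

    def apply_feedback(self, overall, availability, reliability, usability):
        # the previous score is replaced by the most recent user feedback
        self.score = overall
        self.params = {"availability": availability,
                       "reliability": reliability,
                       "usability": usability}

def allocate(platforms):
    """Return the platform with the highest score; ties are broken by
    comparing the feedback parameters in turn (decision-tree style)."""
    return max(platforms,
               key=lambda p: (p.score,
                              p.params["availability"],
                              p.params["reliability"],
                              p.params["usability"]))

p1, p2 = Platform("arduino1"), Platform("pi1")
p1.apply_feedback(0.5, 0.5, 0.5, 1.0)   # p1 degraded after one usage
best = allocate([p1, p2])               # p2 still holds the initial High score
print(best.name)                        # prints "pi1"
```

The tuple key makes `max` compare the overall score first and fall back to the individual parameters only on a tie, mirroring the decision-tree resolution described above.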
Figure 6.2a shows the state machine for the proposed feedback-based testbed
service allocation. Let ti and tj refer to the time between failures and the time to
restore, respectively. S denotes that at least one successful resource is available for
service, and F indicates failure, with degraded or nil resources available to offer. N
denotes the number of platforms, comprising sensors and actuators, offered for service.
Figure 6.2a State machine for testbed service
Figure 6.2b State transition graph for testbed service
Figure 6.2b explains the transition logic between the states based on feedback
obtained from resource availability and operational conditions. There are 5 states
formulated, with 3 indicating states of success and 2 states of failure. The failure
model can be categorized as 1) occurred, detected, recoverable, 2) occurred, latent
detect, recoverable, and 3) occurred, uncertain detect, uncertain recovery. For cases 1
and 2, the probability of becoming operational again is visibly high compared with case
3. Case 3 may lead to failure of all platforms in the worst-case scenario. The proposed
system handles cases 1 and 2 to offer the demanded services to users through the
feedback-based ranking mechanism, classifying between scores of 1, 0.5 and 0.25, which
denote high, medium and low performance/reliability respectively. The initial score
configured for services in this proposed research work is 1, as against the 0.5 observed
in most existing literature. The ideal assumption is that all services are fully
functional to start with and later degrade due to various factors such as wear and tear,
malfunction, natural disasters and many more.
The testbed services are offered to various users, and the usage patterns of the
services, probable service clusters and so on are studied based on 500 users comprising
academicians, industrialists, data analysts, researchers and novice users.
6.3.2 Services vs. User Feedback Matrix
The feedback from 500 users comprising of academicians(Aci),
industrialists(Ii), data analysts(Di), researcher(Ri) and novice user(Ni) is considered for
evaluation. Based on the feedback collected, Table 6.1 depicts the services vs. user’s
feedback table for representational purpose.
Table 6.1 Services vs Users Feedback matrix
Based on the log report, the job completion rate, signal strength and
request/response rate, it was observed that the testbed was reliable (score ~ 1) even
when the user feedback scores were moderate. The utilization score is arrived at based
on the following rule. When scores are low, there are two possibilities: the issue may
be on the provider side or on the receiver/user side. Hence, the job completion, network
and request/response time logs are considered to decide accordingly. If there was a
failure, malfunction or network issue on the system side and the job was not completed,
incomplete or faulty, the system was graded 0.25. If there was a network issue and the
job was completed with a delay, the system was graded justifiably reliable (0.5). A
confusion matrix was constructed as shown in Table 6.2. The number of users was 500.
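The grading rule above can be sketched as follows; the log field names are illustrative assumptions, not the actual log schema of the testbed.

```python
def utilization_score(log):
    """Grade one usage session as 1, 0.5 or 0.25 from the system-side logs."""
    provider_fault = (log["system_failure"] or log["malfunction"]
                      or log["network_issue"])
    if provider_fault and not log["job_completed"]:
        return 0.25      # provider-side fault and the job failed
    if log["network_issue"] and log["job_completed"]:
        return 0.5       # delayed but completed: justifiably reliable
    return 1.0           # completed normally: fully reliable

session = {"system_failure": False, "malfunction": False,
           "network_issue": True, "job_completed": True}
print(utilization_score(session))   # a delayed but completed job scores 0.5
```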
Table 6.2 Confusion Matrix (N = 500)

                      Prediction class
                  User score Low   User score High   Total
Actually Low            40               5             45
Actually High           65             390            455
Total                  105             395            500
True positives: 390, true negatives: 40, false positives: 5, false negatives: 65. This
analysis was performed to determine how often testbed utilization (reliability) was
reported low when the testbed was actually performing fairly high. Based on the
observations, the required parameters are calculated as follows:
Accuracy = (TP + TN)/Total = 86%
Error rate = 1 - Accuracy = 14%
True positive rate (Sensitivity/Recall) = TP/Total actual positives = 86%
False positive rate = FP/Total actual negatives = 11%
Specificity = 1 - FPR = TN/Total actual negatives = 89%
Precision = TP/Total predicted positives = 98%
Prevalence = Actually high/Total = 91%
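These figures can be checked directly from the counts in Table 6.2; the short script below recomputes each measure.

```python
# Counts from Table 6.2 (N = 500)
TP, TN, FP, FN = 390, 40, 5, 65
total = TP + TN + FP + FN

accuracy    = (TP + TN) / total          # 0.86
error_rate  = 1 - accuracy               # 0.14
tpr         = TP / (TP + FN)             # sensitivity/recall, ~0.857
fpr         = FP / (FP + TN)             # ~0.111
specificity = TN / (TN + FP)             # ~0.889
precision   = TP / (TP + FP)             # ~0.987
prevalence  = (TP + FN) / total          # 0.91

print(round(accuracy, 2), round(specificity, 2), round(prevalence, 2))
# prints: 0.86 0.89 0.91
```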
Also, to prove that the proposed testbed is reliable and well utilized, the joint
probability function of the discrete random variables R (Reliability) and P′
(Performance) is used. P(R = ri, P′ = pi′) denotes the probability that R takes the
value ri and P′ takes the value pi′; it is the probability of the intersection of
events. For example, the probability that reliability is low and performance is low is
P(R = rl, P′ = pl′). P(ri, pi′) is the joint probability mass function, where
i = l, m, h. Table 6.3 shows the joint probability function of the discrete random
variables reliability (R) and performance (P′).

Table 6.3 Joint probability function of the discrete random variables R
(Reliability) and P′ (Performance)

  R \ P′       Low     Medium    High    Sum over j of aij
  Low          a11     a12       a13     Sl
  Medium       a21     a22       a23     Sm
  High         a31     a32       a33     Sh
  Sum over i   Tl      Tm        Th      ΣSi = ΣTi

By rule, ΣSi = ΣTi, where i = l, m, h.
Marginal probability function of reliability R:

PR(ri) = P(R = ri)
       = P(R = ri, P′ = pl′) + P(R = ri, P′ = pm′) + P(R = ri, P′ = ph′) = rio    (1)

where R = ri, i = l, m, h. The set (ri, rio) is the marginal distribution of reliability.

Similarly, to find the marginal distribution of performance P′, consider the marginal
probability function of performance P′:

PP′(pi′) = P(P′ = pi′)
         = P(R = rl, P′ = pi′) + P(R = rm, P′ = pi′) + P(R = rh, P′ = pi′) = pio′    (2)

where P′ = pi′, i = l, m, h.
The set (pi′, pio′) is the marginal distribution of performance.
Having found the marginal distributions of R and P′, the conditional probability
distributions are

F(P′/R) = f(R, P′)/f(R)  and  F(R/P′) = f(R, P′)/f(P′)    (3)

The conditional distribution of reliability when performance is low is given as

F(R/P′) = f(R, P′)/f(P′ = low), where f(P′ = low) = Tl    (4)

The conditional distribution of reliability when performance is moderate is given as

F(R/P′) = f(R, P′)/f(P′ = medium), where f(P′ = medium) = Tm    (5)

Thus, the conditional distribution of reliability when performance is low is given as
follows:

P(R = l/P′ = l) = a11/Tl    (6)
P(R = m/P′ = l) = a21/Tl    (7)
P(R = h/P′ = l) = a31/Tl    (8)

Similarly, the conditional distribution of reliability when performance is medium is
given as follows:

P(R = l/P′ = m) = a12/Tm    (9)
P(R = m/P′ = m) = a22/Tm    (10)
P(R = h/P′ = m) = a32/Tm    (11)
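Equations (1) to (11) can be checked numerically. The sketch below uses an assumed 3x3 joint probability table; the aij values are illustrative, not measured data from the testbed.

```python
# Joint probability table: rows = reliability R, columns = performance P',
# both in the order low, medium, high. Values are illustrative assumptions.
joint = [[0.05, 0.03, 0.02],    # R = low
         [0.05, 0.15, 0.10],    # R = medium
         [0.05, 0.20, 0.35]]    # R = high

# Marginals: row sums S_i (equation 1) and column sums T_j (equation 2)
S = [sum(row) for row in joint]
T = [sum(joint[i][j] for i in range(3)) for j in range(3)]
assert abs(sum(S) - 1.0) < 1e-9 and abs(sum(T) - 1.0) < 1e-9   # sum S_i = sum T_j

def cond_R_given_P(j):
    """P(R = r_i | P' = p_j) = a_ij / T_j, equations (6) to (11)."""
    return [joint[i][j] / T[j] for i in range(3)]

# Conditional distribution of reliability when performance is low (j = 0):
p_R_given_low = cond_R_given_P(0)
print([round(p, 3) for p in p_R_given_low])   # prints [0.333, 0.333, 0.333]
```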
As the data set acquired was voluminous, manual calculation would be
infeasible. The data set was therefore run through a statistical tool, which proved
that the proposed testbed most of the time recorded moderate/high performance even
when the scores were low/moderate. The results are discussed in detail in the
subsequent sections.
6.4 PRINCIPAL COMPONENT ANALYSIS / FACTOR ANALYSIS ON
TESTBED DATA
The average performance of each service is shown in Table 6.4, and the most
demanded services are shown in Table 6.5; which combination of sensors is used the
most and which set of users has mostly used the testbed services are also analyzed. The
78 services listed earlier are considered for this proposed testbed framework. This
analysis can greatly help in deciding on scalability needs and new deployments. It also
gives an insight, to some extent, into the probable research trends and domains.
Principal Component Analysis (Factor Analysis) is the approach deployed in this
research.
6.4.1 Average Performance of Each Service
The developed services are offered to the users. 78 services were made available
to varied users over a span of 30 days. This experiment was conducted to understand
two factors, namely: 1. How well are the services performing (utilization)? 2.
Verification of the claim of conditional distribution. Table 6.4 reveals acceptable
scores of 0.5 and above, clearly indicating that the proposed testbed and its offered
services are operating satisfactorily, with performance and reliability well assured.
Similar results were recorded for the other services as well.
Table 6.4 Sample set of performance measures of services

Service:  S1        S2        S3        S4        S5        S6       S7        S8        S9        S10
Score:    0.541244  0.573455  0.564429  0.663641  0.542526  0.63916  0.577314  0.659782  0.626282  0.658494
S1, S2, S3, S4, S5, S6, S7, S8, S9 and S10 refer to temperature, humidity, gas
service raw, gas service leakage, color, getRed, getGreen, getBlue, rainfall intensity
and rainfall raw, respectively.
6.4.2 Most Demanded Services
The most demanded services were calculated using frequency distribution
analysis. Table 6.5 is a representational set of the results. The most demanded
services are used to improve scalability and plan future deployments. They are also
used to determine the score, apart from user scores, in the dynamic resource allocation
process based on user needs.
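The frequency-distribution step can be sketched with a simple counter; the usage log below is a made-up example, not the actual access log of the testbed.

```python
from collections import Counter

# Hypothetical access log: one entry per successful service request
usage_log = ["s1", "s2", "s1", "s4", "s1", "s2", "s9", "s1", "s4", "s2"]

demand = Counter(usage_log)                        # requests per service
most_demanded = [s for s, _ in demand.most_common(3)]
print(most_demanded)                               # prints ['s1', 's2', 's4']
```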
Table 6.5 Sample set of most demanded services
S1, S2, S3, S4, S5, S6, S7, S8, S9 and S10 refer to the services temperature,
humidity, gas service raw, gas service leakage, color, getRed, getGreen, getBlue,
rainfall intensity and rainfall raw, respectively. One can observe that S1, S2, S3,
S4, S5, S9, S16, S20, S34, S36, S60 and S63 are the set of most demanded services,
mapping to temperature, humidity, gas service raw, gas service leakage, color, rainfall
intensity, lcd_display, fan_rotate service, arduino1 service, Pi1 service, arduino2
service and arduino3 service, respectively. The result shows very clearly that usage
spans a variety of services, ranging from sensors to platforms. The research study is
also interested in knowing the types of users who have utilized these services and the
pattern of services used, which in turn provides insight into the utilization pattern
and the need for scalability. The result is based on the experimental setup offered
for access to users over a 30-day period. This is also indicative of the fact that the
services were well utilized.
6.5 RESULTS AND DISCUSSION
6.5.1 Average Performance Measurement of Each Service
The proposed testbed offers 78 services, and the usage pattern of the offered
services can be observed from Figure 6.3. The service usage pattern is derived from the
average performance rating of all users who used the testbed for a month. The same
experiment was repeated for a month with varied users. It can be clearly inferred from
Figure 6.3 that the utilization (successful access) rate of each service varies between
0.25, 0.5 and 1. A score of 1 indicates the service provided a successful result as
expected; 0.5 indicates the service provided a result but with a delay; and 0.25
indicates an incomplete response for reasons including, but not limited to, failure,
malfunction or network issues. The testbed score of 0.25 occurs for at most 21% of
accesses, 0.5 ranges between 54% and 76%, and 1 ranges between 45% and 93%.
Figure 6.3 Service usage pattern and the utilization (successful access) rate in %
6.5.2 Empirical Results of Most Demanded Services
With reference to the service usage pattern, the classification is attempted based
on most demanded services by varied users. The users are classified as novice,
academician, industrialist, data analyst and researcher. Figure 6.4 illustrates the
individual service usage of the testbed. The usage percentage of the testbed as a whole
is arrived based on mean of the usage values. This shows 36% of novice users, 23% of
industrial users, 38% of academicians, 47% of data analyst, 58% researchers have used
the proposed testbed services. This not only proves the suitability of the proposed
testbed for varied users, but also shows improved usage of the services by data
analysts and novice users, which is not the case with existing testbeds [38].
Figure 6.4 User usage rate of the proposed testbed
Figure 6.5a Novice user usage pattern
The graph in Figure 6.5a makes vivid the most frequent services used by normal
users, in other words persons who have zero knowledge of the technology and are simply
interested in using the service at a basic level. These are the kind of users who, with
few exceptions, are only interested in exploring the data as a service. In line with
the expectation of this research, the results show that the services mostly used are
s1 to s10 and s37 to s46, which are data-based services.
Figure 6.5b Academic user usage pattern
It is clear from the graph in Figure 6.5b that almost all the services are
exploited, as the job profile of academicians may demand use of the testbed service for
teaching, research, testing or as a raw data source. The results convincingly match the
expectation.
Figure 6.5c Industrial user usage pattern
Figure 6.5c shows the services used by industrialists. The typical understanding
is that an industrial user is interested mainly in testing algorithms, business logic
or protocols. In this regard, the test results clearly reveal the
service sets s1, s3, s10, s12, s16, s18, s20, s22, s24, s26, s28, s30, s32, s34, s36, s48,
s50, s52, s54, s56, s58, s60, s64, s66, s70, s72, s74, s76, and s77, corresponding to
temperature, gas service raw, rainfall service raw, buzzer off, LCD display, LCD lock, fan
rotate, fan lock, fan release, GSM call, GSM release, RGB LED red, RGB LED lock, Arduino 1,
Pi 1, buzzer off, fan off, fan rotate, GSM SMS, LCD clear, RGB LED green, Arduino 4, buzzer
on, fan off, GSM call, LCD display, RGB LED red, and RGB LED green, are the most used
services among the offered set. This fits the expectation that the most used services are
platform- and actuator-based rather than sensor-based.
Figure 6.5d Data analyst usage pattern
Data analysts are users interested purely in the data and mostly unconcerned
with the infrastructure behind its generation. Most of the existing
testbeds demand at least a minimal skill set to operate them. One example is
FIT-IoT, whose complex access procedure makes it genuinely difficult for non-IoT or
other-field individuals to reap the needed benefit. The proposed system, by providing
a downloadable, platform-independent API, makes it convenient to extract just the needed
detail. A bulk data download option is also provided, making it even simpler
for them to access the needed data set. The data analyst can download the data in the data
format of their choice: XML, CSV, or JSON. As anticipated, the results in Figure 6.5d
show that services s1 to s10 and s37 to s46 are the ones most preferred.
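The bulk-download workflow described above can be sketched as follows. This is a minimal illustration, not the testbed's actual API: the record fields (`service`, `timestamp`, `value`) and the JSON payload are assumptions, and the conversion shown is the JSON-to-CSV case.

```python
import csv
import io
import json

def json_records_to_csv(json_text: str) -> str:
    """Convert a JSON array of flat records, as a bulk download
    might return, into CSV text for the data analyst."""
    records = json.loads(json_text)
    if not records:
        return ""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(records[0]))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

# Hypothetical bulk-download payload for a data-based service (e.g. s1).
payload = json.dumps([
    {"service": "s1", "timestamp": "2020-01-01T00:00:00", "value": 27.4},
    {"service": "s1", "timestamp": "2020-01-01T00:05:00", "value": 27.9},
])
print(json_records_to_csv(payload))
```

An XML export could be built analogously with the standard `xml.etree.ElementTree` module; the point is that format conversion happens client-side, so the analyst never touches the testbed infrastructure.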
Figure 6.5e Researcher usage pattern
It is clear from the graph in Figure 6.5e that almost all the services are
exploited. A researcher working on IoT would be interested in analyzing the data
set; observing, inferring, or deciding based on the results; testing actuation as the
output of a model; or testing a developed algorithm or protocol for successful
implementation. The results show broad coverage across almost all the offered services.
6.6 CONCLUSION
Thus, the above results show that the proposed testbed framework is reliable
even when the score metrics rated performance as moderate or low. Sensor
performance and other measurements were analyzed from the usage patterns of over
500 users. Of the 78 services considered, the analysis successfully identified the most
demanded services as well as the top users of the testbed.
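The tallying step behind this conclusion can be sketched as follows. The log format, a flat list of (user, service) access pairs, and the sample entries are assumptions for illustration only; the thesis does not specify how its usage logs are structured.

```python
from collections import Counter

# Hypothetical usage log: one (user_id, service_id) pair per access.
usage_log = [
    ("u1", "s1"), ("u1", "s37"),
    ("u2", "s1"), ("u2", "s1"), ("u2", "s46"),
    ("u3", "s1"),
]

# Tally accesses per service and per user.
service_counts = Counter(service for _, service in usage_log)
user_counts = Counter(user for user, _ in usage_log)

top_services = service_counts.most_common(2)  # most demanded services
top_users = user_counts.most_common(1)        # top users of the testbed
print(top_services)
print(top_users)
```

With real logs from the 500 users and 78 services, the same two counters would directly yield the rankings reported above.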
[Figure 6.5e chart: testbed usage (%) across services s1 to s77, with linear trend y = -0.0013x + 0.8264, R² = 0.158]