- The document discusses designing a sensor-based experiment using the Brownie framework. It focuses on integrating heart rate biofeedback from sensors into the experiment.
- It provides steps for creating sensor configuration and recorder classes to initialize and record data from a Bioplux sensor, and code examples for configuring sampling rates and storing data files.
- The document aims to teach experimenters how to design experiments that measure and provide real-time biofeedback of physiological signals like heart rate to support research.
Apache Hadoop India Summit 2011 talk "Provisioning Hadoop’s MapReduce in clou..." (Yahoo Developer Network)
This document proposes provisioning Hadoop's MapReduce in the cloud for effective storage as a service. It discusses using MapReduce to parallelize AES encryption for data at rest to provide security. Compression and deduplication techniques are also implemented using MapReduce to improve storage utilization and reduce costs. Performance results show AES-XTS with MapReduce encryption provides better performance than other approaches. Compression reduced storage requirements by a factor of 10 for text data and 2 for images. The proposed techniques aim to provide secure, efficient storage as a service for businesses.
This document describes a study that uses data mining techniques to detect malware. The study involved extracting opcode frequencies from 300 malware samples and 150 benign software samples. The opcode data was analyzed using the WEKA machine learning tool to generate rules for classifying software as malware or benign. Through a recursive process of removing the top predictive opcode and re-analyzing the data, the study identified a set of opcodes that predicted malware versus benign software with 96% accuracy. Testing the rules against noise added to the data showed the classification remained over 91% accurate, demonstrating the robustness of the approach. The document outlines the full methodology used in the study.
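The recursive selection loop described above (remove the top predictive opcode, re-analyse, repeat) can be sketched in Python. This is not the study's WEKA rule generation; the scoring function here — ranking opcodes by the gap in mean relative frequency between the two classes — is a simplified stand-in, and the opcode names in the usage example are invented.

```python
from collections import Counter

def opcode_freqs(samples):
    """Mean relative frequency of each opcode over a list of opcode sequences."""
    totals = Counter()
    for ops in samples:
        if not ops:
            continue
        counts = Counter(ops)
        for op, c in counts.items():
            totals[op] += c / len(ops)
    return {op: t / len(samples) for op, t in totals.items()}

def top_discriminators(malware, benign, k=3):
    """Recursively pick the opcode whose frequency best separates the classes,
    remove it from both sample sets, and repeat (as in the study's methodology)."""
    selected = []
    malware = [list(s) for s in malware]
    benign = [list(s) for s in benign]
    for _ in range(k):
        fm, fb = opcode_freqs(malware), opcode_freqs(benign)
        ops = set(fm) | set(fb)
        best = max(ops, key=lambda op: abs(fm.get(op, 0.0) - fb.get(op, 0.0)))
        selected.append(best)
        # remove the top predictor and re-analyse the remaining opcodes
        malware = [[op for op in s if op != best] for s in malware]
        benign = [[op for op in s if op != best] for s in benign]
    return selected
```

For example, if malware samples are dominated by `xor` and benign ones by `nop`, the first pass selects `xor` and the second pass surfaces the next-best discriminator.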
A cyber physical stream algorithm for intelligent software defined storageMade Artha
The document presents a new Cyber Physical Stream (CPS) algorithm for selecting predominant items from large data streams. The algorithm works well for item frequencies starting from 2%. It is designed for use in intelligent Software-Defined Storage systems combined with fuzzy indexing. Experiments show CPS improves accuracy and efficiency over previous algorithms. CPS is inspired by a brain model and works by incrementing a "voltage" value when items match and decrementing it otherwise, selecting the item with highest voltage. It performs well on both uniform random and Zipf's law distributed streams, with optimal parameter values depending on the distribution.
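The voltage mechanism summarised above can be sketched in Python. The paper's exact update rule is not reproduced here; this is one plausible single-candidate reading of the description (increment on match, decrement otherwise), in the style of a Boyer-Moore majority counter:

```python
def cps_predominant(stream):
    """Select the predominant item by tracking a 'voltage' for the current
    candidate: matching items raise the voltage, others lower it; when the
    voltage reaches zero, the next item becomes the new candidate."""
    candidate, voltage = None, 0
    for item in stream:
        if voltage == 0:
            candidate, voltage = item, 1
        elif item == candidate:
            voltage += 1  # item matches the candidate: raise the voltage
        else:
            voltage -= 1  # item differs: lower the voltage
    return candidate
```

On a stream where one item is genuinely predominant, the surviving candidate is that item; for low-frequency items (the 2% regime the paper targets), the real CPS algorithm presumably maintains more state than this single counter.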
Predicted the activities performed by the user (bending, walking, etc.) using classification techniques in data mining.
Performed data cleaning and pre-processing (removing false predictors, identifying important attributes) in Weka.
Created a model with 77% accuracy by performing activity-recognition classification, cross-checked it via experimental design by adding noise, and used ROC curves and principal component analysis to create new attributes in Weka.
Using Learning Vector Quantization in IDS Alert Management System (CSCJournals)
This document presents a new intrusion detection system (IDS) alert management system that uses learning vector quantization (LVQ) to classify IDS alerts. The proposed system takes in alerts generated by Snort from the DARPA 98 dataset, normalizes and filters the alerts, then trains an LVQ neural network on labeled alert data. The trained LVQ model is used to classify new alerts as either true positives or false positives. The system is shown to achieve a high classification accuracy of 88.75% and a false-positive reduction rate of 88.27%, while taking only 0.000018 seconds on average to classify each alert. This makes the system suitable for active alert management, where alerts must be classified in real time.
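The core LVQ step — move the winning prototype toward a sample with the same label and away from one with a different label — can be sketched in a few lines. This is a generic LVQ1 rule, not the paper's exact network configuration; the learning rate, epoch count, and the true-positive/false-positive labels are illustrative:

```python
def _nearest(x, prototypes):
    """Index of the prototype closest to x (squared Euclidean distance)."""
    return min(range(len(prototypes)),
               key=lambda j: sum((a - p) ** 2 for a, p in zip(x, prototypes[j])))

def lvq_train(samples, labels, prototypes, proto_labels, lr=0.1, epochs=10):
    """LVQ1: attract the winning prototype on a label match, repel otherwise."""
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            i = _nearest(x, prototypes)
            sign = 1.0 if proto_labels[i] == y else -1.0
            prototypes[i] = [p + sign * lr * (a - p)
                             for p, a in zip(prototypes[i], x)]
    return prototypes

def lvq_classify(x, prototypes, proto_labels):
    """Assign x the label of its nearest prototype."""
    return proto_labels[_nearest(x, prototypes)]
```

In the alert-management setting, each sample would be a normalized alert feature vector and the two prototype labels would correspond to true-positive and false-positive alerts.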
USB 3.0 introduces the SuperSpeed protocol, which provides a significant increase in bandwidth over USB 2.0 through a new physical layer capable of 5 Gbps speeds. Key features of SuperSpeed USB include bulk streaming, which allows high-speed transfer of large files without host involvement; improved flow-control mechanisms; and enhanced power management.
Modern software systems now increasingly span cloud and on-premises deployments and remote embedded devices and sensors. These distributed systems bring challenges with data, connectivity, performance, and systems management; to ensure success, you must design and build with operability as a first-class property.
Matthew Skelton shares five practical, tried-and-tested techniques for improving operability with many kinds of software systems, including the cloud, serverless, on-premises, and the IoT: logging as a live diagnostics vector with sparse event IDs; operational checklists and runbook dialog sheets as a discovery mechanism for teams; endpoint health checks as a way to assess runtime dependencies and complexity; correlation IDs beyond simple HTTP calls; and lightweight user personas as drivers for operational dashboards.
These techniques work very differently with different technologies. For instance, an IoT device has limited storage, processing, and I/O, so generating and shipping logs and metrics looks very different from the cloud or serverless cases. However, the principles—logging as a live diagnostics vector, event IDs for discovery, etc.—work remarkably well across very different technologies.
Drawing from his experience helping teams improve the operability of their software systems, Matthew explains what works (and what doesn’t) and how teams can expand their understanding and awareness of operability through these straightforward, team-friendly techniques.
From a talk given by Matthew Skelton at Velocity Conference EU 2017 - https://conferences.oreilly.com/velocity/vl-eu/public/schedule/detail/61954
SplunkLive! Zurich 2018: Integrating Metrics and Logs (Splunk)
This document discusses integrating metrics and logs in Splunk for enhanced troubleshooting and monitoring. It provides an overview of metrics and how they are defined, compared to events. Metrics support in Splunk allows for more efficient aggregation, storage, and analysis of time-series data. Example use cases mentioned include IT operations, application performance monitoring, and IoT. Pricing is still based on uncompressed data volume ingested, with each metrics measurement licensed at around 150 bytes.
Activity Recognition dataset (UCI dataset) consists of various activities performed by the user (bending, walking, lying, etc.). Various data mining methods and classifiers (NaiveBayes, Random Forest, Random Tree, etc.) can be used to evaluate the predictive capability of the classifiers in classifying each activity.
The hidden engineering behind machine learning products at Helixa (Alluxio, Inc.)
Data Orchestration Summit 2020 organized by Alluxio
https://www.alluxio.io/data-orchestration-summit-2020/
The hidden engineering behind machine learning products at Helixa
Gianmario Spacagna, (Helixa)
About Alluxio: alluxio.io
Engage with the open source community on slack: alluxio.io/slack
This presentation describes an intelligent IT monitoring solution that uses Nagios as the source of information, Esper as the CEP engine, and a PCA algorithm.
In this talk, Matthew Skelton (Skelton Thatcher Consulting) explores five practical, tried-and-tested, real-world techniques for improving operability with many kinds of software systems, including cloud, Serverless, on-premise, and IoT.
Logging as a live diagnostics vector with sparse event IDs
Operational checklists and 'run book dialogue sheets' as a discovery mechanism for teams
Endpoint healthchecks as a way to assess runtime dependencies and complexity
Correlation IDs beyond simple HTTP calls
Lightweight 'User Personas' as drivers for operational dashboards
These techniques work very differently with different technologies. For instance, an IoT device has limited storage, processing, and I/O, so generation and shipping of logs and metrics looks very different from the cloud or 'serverless' case. However, the principles - logging as a live diagnostics vector, event IDs for discovery, etc - work remarkably well across very different technologies.
From a talk at Agile in the City Bristol 2017 http://agileinthecity.net/2017/bristol/sessions/index.php?session=44
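Two of the techniques above — sparse, stable event IDs and correlation IDs that travel with every record — can be sketched as a tiny structured-logging helper. The event names, ID values, and field names here are invented for the example; the point is the shape: IDs are deliberately non-contiguous so new events can be added without renumbering, and the correlation ID is a first-class field on every record, not just an HTTP header.

```python
import json
import logging

# Sparse event-ID registry: gaps between IDs leave room for new events
# without renumbering, and the numeric ID stays stable even if the
# human-readable name is later reworded.
EVENT_IDS = {
    "ORDER_RECEIVED": 1000,
    "PAYMENT_AUTHORISED": 1200,
    "ORDER_SHIPPED": 1400,
}

def log_event(logger, event_name, correlation_id, **fields):
    """Emit one structured log record carrying the event ID and correlation ID."""
    record = {
        "event_id": EVENT_IDS[event_name],
        "event": event_name,
        "correlation_id": correlation_id,
        **fields,
    }
    logger.info(json.dumps(record))
    return record
```

Because every record carries the same `correlation_id`, the records for one logical transaction can be joined up across services — including hops that are not simple HTTP calls, such as queues or device uplinks — and the sparse `event_id` values make logs greppable and dashboard-friendly.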
The document introduces Tag.bio as a low-code analytics application platform built from interconnected data products in a data mesh architecture. It consists of data, algorithms, and analysis apps contributed by different groups - data engineers, data scientists, and domain experts. The platform can integrate various data sources and enable collaboration between groups. It then provides demos of the Tag.bio developer studio and data portal. Key capabilities discussed include integration with AWS services like AI/ML and HealthLake, as well as security features like confidential computing. Example use cases presented are for clinical trials, healthcare, life sciences, and universities.
A Federated In-Memory Database Computing Platform Enabling Real-Time Analysis... (Matthieu Schapranow)
1) Dr. Schapranow presents a federated in-memory database computing platform called AnalyzeGenomes.com to enable real-time analysis of big medical data.
2) The platform aims to incorporate all available patient data, reference latest lab results and medical knowledge, and support interactive analysis to help clinicians make treatment decisions.
3) It uses a distributed in-memory database across nodes to combine and link heterogeneous medical data sources while addressing challenges of data privacy, locality, and volume.
The Brain Imaging Data Structure and its use for fNIRS (Robert Oostenveld)
These slides were prepared for the NIRS toolkit course at the Donders, which has been postponed due to the Corona crisis. The slides present BIDS, explain how fNIRS often involves multiple signals, and relate the two to synchronization and data management.
Utilization of hpc in cancer research 2008 ric (BIT002)
The document discusses the use of high performance computing (HPC) for cancer research at Nationwide Children's Hospital. It describes how the Research Informatics Core uses an HPC cluster running Microsoft HPC Server 2008 to perform complex image analysis on whole slide tissue images to enhance cancer pathology review and optimize patient therapy. Challenges in configuring the Microsoft HPC environment are also discussed.
IRJET- IoT based Advanced Healthcare Architecture: A New Approach (IRJET Journal)
This document proposes a new IoT-based advanced healthcare architecture with remote monitoring capabilities. The architecture includes new sensors for data collection, a fog computing layer for efficient storage and real-time analysis, and disease prediction and location tracking features. Sensors continuously generate large amounts of heterogeneous data that is analyzed using machine learning algorithms to predict health conditions. Doctors and authorized users can monitor patients' health data through a website or mobile application. The system aims to improve healthcare quality by enabling real-time remote health monitoring, analysis, and automatic control of home appliances based on patients' needs.
FEATURE EXTRACTION AND FEATURE SELECTION: REDUCING DATA COMPLEXITY WITH APACH... (IJNSA Journal)
Feature extraction and feature selection are the first tasks in pre-processing input logs in order to detect cyber security threats and attacks using machine learning. When it comes to the analysis of heterogeneous data derived from different sources, these tasks are time-consuming and difficult to manage efficiently. In this paper, we present an approach for handling feature extraction and feature selection for security analytics of heterogeneous data derived from different network sensors. The approach is implemented in Apache Spark, using its Python API, pyspark.
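The two pre-processing stages named above can be illustrated without Spark: extract token-count feature vectors from raw log lines, then select features by a simple document-frequency filter. This is a plain-Python sketch, not the paper's pyspark implementation; the log lines and the `min_df` threshold are invented for the example.

```python
from collections import Counter

def extract_features(log_lines):
    """Feature extraction: one token-count dict (feature vector) per log line."""
    return [Counter(line.lower().split()) for line in log_lines]

def select_features(vectors, min_df=2):
    """Feature selection: keep only tokens appearing in at least min_df lines,
    a crude filter that drops one-off tokens (IPs, timestamps, noise)."""
    df = Counter()
    for v in vectors:
        df.update(v.keys())  # count lines containing each token
    keep = {tok for tok, c in df.items() if c >= min_df}
    return [{tok: n for tok, n in v.items() if tok in keep} for v in vectors]
```

In a Spark implementation the same two stages would typically map onto distributed transformations over an RDD or DataFrame of log lines, which is what makes them tractable for heterogeneous multi-sensor data.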
Logging, tracing and metrics: Instrumentation in .NET 5 and Azure (Alex Thissen)
The document discusses instrumentation in .NET 5 and Azure, including logging, tracing, metrics, and health checks. It provides an overview of these concepts and how they can be implemented using built-in .NET APIs and services like Application Insights. The document also discusses how instrumentation data can be collected and correlated to monitor application and cloud resource performance.
MobiDE’2012, Phoenix, AZ, United States, 20 May, 2012 (Charith Perera)
Charith Perera, Arkady Zaslavsky, Peter Christen, Ali Salehi, Dimitrios Georgakopoulos, Connecting Mobile Things to Global Sensor Network Middleware using System-generated Wrappers, Proceedings of the 11th ACM International Workshop on Data Engineering for Wireless and Mobile Access (ACM SIGMOD/PODS-Workshop-MobiDE), Scottsdale, Arizona, USA, May, 2012
HSC-IoT: A Hardware and Software Co-Verification based Authentication Scheme ... (Mahmud Hossain)
This document presents a hardware and software co-verification based authentication scheme for the Internet of Things (IoT). The scheme uses a Physical Uncloneable Function (PUF) for hardware integrity verification and a Hardware Performance Counter (HPC) for software integrity verification to protect against node cloning and reprogramming attacks. It also allows for privacy-aware identity usage to prevent location tracking. The scheme provides resource efficient mutual authentication between IoT devices and an IoT Identity Provider for secure network admission and service access. Security and performance analyses show the scheme reduces computation and communication overhead compared to existing approaches.
The document discusses expert systems, which are computer systems that emulate the decision-making ability of a human expert. It describes the typical architecture of an expert system, which includes a knowledge base, inference engine, user interface, explanation facility, and knowledge acquisition system. It provides details on key components like the knowledge base, which stores rules and data, and the inference engine, which applies rules and reasoning to derive conclusions. Specific expert systems are discussed like MYCIN for medical diagnosis, DART for computer fault diagnosis, and XCON for configuring DEC computer systems. The roles of knowledge engineers and domain experts in developing expert systems are also outlined.
The document discusses how mold plays an important role in the environment but can also cause harm if it grows undetected indoors. It emphasizes the importance of drying wet areas within 24-48 hours to prevent mold growth, as mold needs moisture to develop. Proper ventilation is also recommended to prevent routine indoor activities from causing excess moisture that can encourage mold growth.
The document discusses decision trees and the ID3 algorithm. It provides an overview of data mining techniques, including decision trees. It then describes the ID3 algorithm in detail, including how it uses information gain to build decision trees top-down and recursively to classify data. An example of applying the ID3 algorithm to a sample dataset is also provided to illustrate the step-by-step process.
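The attribute-selection step at the heart of ID3 — compute entropy, then split on the attribute with the highest information gain — can be shown in a short sketch. The dataset in the usage example is invented; the entropy and gain formulas are the standard ones the algorithm uses.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction from splitting the rows on attribute attr."""
    n = len(labels)
    remainder = 0.0
    for value in set(r[attr] for r in rows):
        subset = [y for r, y in zip(rows, labels) if r[attr] == value]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

def best_attribute(rows, labels, attrs):
    """ID3's greedy choice: the attribute with maximum information gain."""
    return max(attrs, key=lambda a: information_gain(rows, labels, a))
```

ID3 applies `best_attribute` at the root, partitions the rows by that attribute's values, and recurses on each partition until the labels in a branch are pure — which is the top-down, recursive construction the document describes.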
This document discusses the need for protective apparatus in power systems. Power systems are designed to generate and distribute electric power continuously to meet demand. To ensure reliable service and maximize investment returns, the system must operate continuously without major breakdowns. This can be achieved by implementing protective devices that detect and quickly clear faults, minimizing disruption. Protective devices are needed to isolate faulty sections and maintain continuity of supply to unaffected sections. The basic requirements of protection systems and their components are also outlined.
Book of abstract volume 8 no 9 ijcsis december 2010 (Oladokun Sulaiman)
The International Journal of Computer Science and Information Security (IJCSIS) is a publication venue for novel research in computer science and information security. This issue from December 2010 contains 5 research papers. The first paper proposes a 128-bit chaotic hash function that uses the logistic map and MD5/SHA-1 hashes. The second paper discusses constructing an ontology for representing human emotions in videos to improve video retrieval. The third paper proposes an intelligent memory controller for H.264 encoders to reduce external memory access. The fourth paper investigates the impact of fragmentation on query performance in distributed databases. The fifth paper examines the effect of guard intervals in a proposed MIMO-OFDM system for wireless communication.
Unlocking the mysteries of reproduction: Exploring fecundity and gonadosomati... (AbdullaAlAsif1)
The pygmy halfbeak, Dermogenys colletei, is known for its viviparous nature; it presents an intriguing case of relatively low fecundity, raising questions about potential compensatory reproductive strategies employed by this species. Our study delves into the examination of fecundity and the Gonadosomatic Index (GSI) in the pygmy halfbeak, D. colletei (Meisner, 2001), an intriguing viviparous fish indigenous to Sarawak, Borneo. We hypothesize that the pygmy halfbeak, D. colletei, may exhibit unique reproductive adaptations to offset its low fecundity, thus enhancing its survival and fitness. To address this, we conducted a comprehensive study utilizing 28 mature female specimens of D. colletei, carefully measuring fecundity and GSI to shed light on the reproductive adaptations of this species. Our findings reveal that D. colletei indeed exhibits low fecundity, with a mean of 16.76 ± 2.01, and a mean GSI of 12.83 ± 1.27, providing crucial insights into the reproductive mechanisms at play in this species. These results underscore the existence of unique reproductive strategies in D. colletei, enabling its adaptation and persistence in Borneo's diverse aquatic ecosystems, and call for further ecological research to elucidate these mechanisms. This study contributes to a better understanding of viviparous fish in Borneo and to the broader field of aquatic ecology, enhancing our knowledge of species adaptations to unique ecological challenges.
Similar to 2018 03 brownie_sensor_integration_and_biofeedback (20)
Unlocking the mysteries of reproduction: Exploring fecundity and gonadosomati...AbdullaAlAsif1
The pygmy halfbeak Dermogenys colletei, is known for its viviparous nature, this presents an intriguing case of relatively low fecundity, raising questions about potential compensatory reproductive strategies employed by this species. Our study delves into the examination of fecundity and the Gonadosomatic Index (GSI) in the Pygmy Halfbeak, D. colletei (Meisner, 2001), an intriguing viviparous fish indigenous to Sarawak, Borneo. We hypothesize that the Pygmy halfbeak, D. colletei, may exhibit unique reproductive adaptations to offset its low fecundity, thus enhancing its survival and fitness. To address this, we conducted a comprehensive study utilizing 28 mature female specimens of D. colletei, carefully measuring fecundity and GSI to shed light on the reproductive adaptations of this species. Our findings reveal that D. colletei indeed exhibits low fecundity, with a mean of 16.76 ± 2.01, and a mean GSI of 12.83 ± 1.27, providing crucial insights into the reproductive mechanisms at play in this species. These results underscore the existence of unique reproductive strategies in D. colletei, enabling its adaptation and persistence in Borneo's diverse aquatic ecosystems, and call for further ecological research to elucidate these mechanisms. This study lends to a better understanding of viviparous fish in Borneo and contributes to the broader field of aquatic ecology, enhancing our knowledge of species adaptations to unique ecological challenges.
The technology uses reclaimed CO₂ as the dyeing medium in a closed loop process. When pressurized, CO₂ becomes supercritical (SC-CO₂). In this state CO₂ has a very high solvent power, allowing the dye to dissolve easily.
Or: Beyond linear.
Abstract: Equivariant neural networks are neural networks that incorporate symmetries. The nonlinear activation functions in these networks result in interesting nonlinear equivariant maps between simple representations, and motivate the key player of this talk: piecewise linear representation theory.
Disclaimer: No one is perfect, so please mind that there might be mistakes and typos.
dtubbenhauer@gmail.com
Corrected slides: dtubbenhauer.com/talks.html
The debris of the ‘last major merger’ is dynamically youngSérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the
‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor
collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the
MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space,
because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia
DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations
at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based
on a simple phase-mixing model, the observed number of caustics are consistent with a merger that occurred 1–2 Gyr ago.
We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative
measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data
1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’
did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within
the last few Gyr, consistent with the body of work surrounding the VRM.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
Authoring a personal GPT for your research and practice: How we created the Q...Leonel Morgado
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants that have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
Describing and Interpreting an Immersive Learning Case with the Immersion Cub...Leonel Morgado
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersion learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
1. KIT – University of the State of Baden-Württemberg and
National Research Center of the Helmholtz Association
INSTITUTE OF INFORMATION SYSTEMS AND MARKETING (IISM)
www.kit.edu
Know your Sensors
Designing your physiological experiment
Dr. Anuja Hariharan
2. Anuja Hariharan – Untersuchungsdesign und Implementierung – Institute of Information Systems and Marketing (IISM)
Learning objectives
Create an experiment with Sensor data
Content:
Design and implement a sensor-based lab experiment with the Java framework BROWNIE.
Integrate the element of heart-rate-based biofeedback in your experiment UI.
Literature:
Jung D, Adam M, Dorner V, Hariharan A (2017): A Practical Guide for Human Lab Experiments in Information
Systems Research: A Tutorial with Brownie. Journal of Systems and Information Technology
(JSIT), Vol. 19 Issue: 3/4, pp.228-256.
Hariharan, A., Adam, M. T. P., Lux, E., Pfeiffer, J., Dorner, V., Müller, M. B., & Weinhardt, C. (2017). Brownie:
A platform for conducting NeuroIS experiments. Journal of the Association for Information Systems, 18(4),
264.
Material:
Download the main repository from Brownie's Bitbucket page
https://bitbucket.org/kit-iism/experimenttool/src
Java 8 JDK and Eclipse Neon
Excursus
Quick survey
Experience
With behavioral experiments…?
With physiological experiments…?
With Brownie…?
Other platforms / programming background…?
What is your intended purpose with Brownie…?
Brownie is just a tool. The research question is the key.
Interest in hands-on part / background?
CASE:
IMPLEMENT THE WEB
SHOPPING EXPERIENCE
Add sensor data measurement to your experiment
The case of the compulsive buyer
In this world of product choices, market designs encourage
emotional buying.
However, this does not always favor the consumer, since it
encourages impulsive purchases.
Can we make consumers aware of their emotions and
support wiser purchase choices?
Enter – the era of Biofeedback
A personal "drive-safe" aide
What building blocks do we need for
such an experiment?
Hardware to measure physiological signals…
Bioplux in this case
Can also be camera, audio, mouse pressure, etc.
Hardware configuration for the sensor…..
How to record the data…
Store the sensor data (if required)….
Process sensor data and Biofeedback display….
Implementation Steps on Brownie
Preparing your experiment for Biofeedback
How to create a SensorConfiguration file
How to create a SensorRecorder file
Specifying Sensor Data storage
Testing start/stop recording on clients
Coding and Configuring Biofeedback
Preparing for trouble: hardware
connectivity and data quality
Use the ISensorRecorderConfiguration
to specify configuration variables
public class BiopluxRecorderConfiguration
extends ISensorRecorderConfiguration {
…….
}
1 We begin by creating a class in the sensor package extending
ISensorRecorderConfiguration
2 Specify variables in the class, which correspond to sensor configuration
parameters, as required by the sensor recording API
Use the ISensorRecorderConfiguration
to specify configuration variables
public class BiopluxRecorderConfiguration extends
ISensorRecorderConfiguration {
public int samplingRate;
public int frameSize = 1000;
public int resolution;
public void setDefaultValues() {
samplingRate = 1000;
resolution = 12;
}
}
Set default values, if
any.
Use the ISensorRecorderConfiguration
to specify configuration variables
public class BiopluxRecorderConfiguration extends
ISensorRecorderConfiguration {
@SensorConfigurationElement(name = "Sampling Rate",
description = "Value in Hz. Max value is 1000.")
public int samplingRate;
@SensorConfigurationElement(name = "Sampling Frame Size",
description = "Frame buffer size that is transferred from the
device to the PC per request (in bits). Max value is 1000.")
public int frameSize = 1000;
@SensorConfigurationElement(name = "Measurement Resolution",
description = "Value range of measurements in bits. E.g. 12 =
12 bit = measurement data range of 4096. Max value is 12.")
public int resolution;
public void setDefaultValues() {
samplingRate = 1000;
resolution = 12;
}
}
3 State the name and description to be displayed for the
experimenter, using @SensorConfigurationElement
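The @SensorConfigurationElement annotation carries the display metadata the experimenter sees. As a self-contained sketch (not Brownie's actual implementation; all class names below are hypothetical stand-ins), reflection can enumerate such annotated fields to build a configuration dialog:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class ConfigIntrospectionDemo {

    // Hypothetical stand-in for Brownie's @SensorConfigurationElement annotation.
    @Retention(RetentionPolicy.RUNTIME)
    public @interface SensorConfigurationElement {
        String name();
        String description();
    }

    public static class DemoConfiguration {
        @SensorConfigurationElement(name = "Sampling Rate",
                description = "Value in Hz. Max value is 1000.")
        public int samplingRate = 1000;

        @SensorConfigurationElement(name = "Measurement Resolution",
                description = "Value range of measurements in bits.")
        public int resolution = 12;
    }

    // Collects a "name = value" label for every annotated public field --
    // the kind of enumeration a server GUI could use to build its dialog.
    public static List<String> describe(Object config) throws IllegalAccessException {
        List<String> labels = new ArrayList<>();
        for (Field f : config.getClass().getFields()) {
            SensorConfigurationElement meta =
                    f.getAnnotation(SensorConfigurationElement.class);
            if (meta != null) {
                labels.add(meta.name() + " = " + f.get(config));
            }
        }
        return labels;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(describe(new DemoConfiguration()));
    }
}
```

The key point is the RUNTIME retention: without it, the annotation is invisible to reflection and the server UI could not render the field names and descriptions.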
View on Brownie UI
The case of Bioplux
How to create a SensorConfiguration file
How to create a SensorRecorder file
Specifying Sensor Data storage
Testing start/stop recording on clients
Coding and Configuring Biofeedback
Preparing for trouble: hardware
connectivity and data quality
Creating your own Sensor Recorder
public class BiopluxRecorder extends
ISensorRecorder<BiopluxRecorderConfiguration> {
…..
}
1 • Specify behavior for standard sensor functions
(start recording, stop recording, specifying sensor configuration, etc. )
• These standard functions are defined in ISensorRecorder
• Create a class which extends this interface, and link up your Sensor
Configuration that you just created
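The lifecycle above (configure, record while capturing is active, clean up) can be sketched with plain Java, independent of Brownie's actual ISensorRecorder API; the names below are simplified stand-ins:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the recorder lifecycle described on the slide:
// configureRecorder -> recording loop (while capturing is active) -> cleanupRecorder.
public class RecorderLifecycleDemo {

    public static abstract class SensorRecorder<C> {
        private volatile boolean capturing;          // toggled from another thread in a real recorder
        public final List<String> log = new ArrayList<>();

        public abstract boolean configureRecorder(C configuration);
        public abstract void recording();            // runs until capturing becomes false
        public abstract void cleanupRecorder();

        public boolean isCapturingActive() { return capturing; }
        public void startCapturing() { capturing = true; }
        public void stopCapturing() { capturing = false; }
    }

    public static class DummyRecorder extends SensorRecorder<Integer> {
        public int samples;

        @Override public boolean configureRecorder(Integer samplingRate) {
            log.add("configured@" + samplingRate + "Hz");
            return true;
        }

        @Override public void recording() {
            while (isCapturingActive()) {
                samples++;                           // stand-in for fetching frames + file write
                if (samples >= 3) stopCapturing();   // simulate the server's stop signal
            }
            log.add("recorded " + samples + " samples");
        }

        @Override public void cleanupRecorder() {
            log.add("cleaned up");
        }
    }

    public static void main(String[] args) {
        DummyRecorder r = new DummyRecorder();
        if (r.configureRecorder(1000)) {
            r.startCapturing();
            r.recording();
        }
        r.cleanupRecorder();
        System.out.println(r.log);
    }
}
```

The volatile flag mirrors the `isCapturingActive()` check in the slides: the recording loop runs on its own thread and stops when the server flips the flag.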
2 Add implementations for the abstract methods
public class BiopluxRecorder extends
ISensorRecorder<BiopluxRecorderConfiguration> {
@Override
public String getMenuText() {
return "Bioplux";
}
}
Creating your own Sensor Recorder
getMenuText():
Provides the menu text for this
sensor, shown in the server GUI
3 Add implementations for the abstract methods (configureRecorder)
public class BiopluxRecorder extends
ISensorRecorder<BiopluxRecorderConfiguration> {
@Override
public boolean
configureRecorder(BiopluxRecorderConfiguration
configuration) {
this.macAddress =
extractMacAddressFromProperties
(Constants.getComputername());
dev = new Device(macAddress);
fileWrite =
FileManager.getInstance().getFileWriter
(getMenuText());
fileWrite.write("ReceiveTime");
…….
fileWrite.write("\n");
return true;
}
}
Creating your own Sensor Recorder
configureRecorder (…):
Configures the sensor recorder with
all parameters required to initiate the
hardware, all related parameters to
hardware, any initialization tests, and
data storage specifications (like file
format derivation, file header).
This example:
• Specifies the MAC address for the
Bluetooth connection.
• Initializes a Bioplux Device object
(from the Bioplux API)
• Creates the file object and file header
for local sensor data storage.
• Returns true if all operations are
successfully completed / initialized.
3 Compare with equivalent code for AudioRecorder
Creating your own Sensor Recorder
public class AudioRecorder extends
ISensorRecorder<AudioRecorderConfiguration> {
@Override
public boolean
configureRecorder(AudioRecorderConfiguration
configuration) {
audioFormat = new AudioFormat
(configuration.sampleRate,
configuration.sampleSize,
configuration.channels, configuration.signed,
configuration.bigEndian);
info = new DataLine.Info(TargetDataLine.class,
audioFormat);
// checks if system supports the audio data line
if (!AudioSystem.isLineSupported(info)) {
// log and throw exception
return false;
}
return true;
}
}
configureRecorder (…):
This example:
• Creates an audioFormat object
from Sensor configuration
information
• Obtains a DataLine for audio
recording.
• Checks if the line is supported
• Returns true if all operations
successfully completed
3 Some tips to know which initializations / configurations are needed:
Creating your own Sensor Recorder
• Find out if your purchased sensor comes with a documented API with
initialization requirements
• Find out if there is an open-source API available for standard/built-in sensors – such as
audio/webcam
• Catch and log all exceptions that might occur, to help troubleshooting
• Log all necessary information to aid troubleshooting, especially when running in the
lab
• Test each step carefully before proceeding
• Think of different hardware scenarios in the lab
• Issues due to 64-bit or 32-bit libraries
• Issues due to Java version requirements
• Differences in OS (developing on macOS, deploying on Windows, etc.)
public class BiopluxRecorder extends ISensorRecorder<BiopluxRecorderConfiguration> {
public void Recording() throws RecordingException {
FileManager.getInstance().updateStartTime(fileWrite);
Device.Frame[] frames = new Device.Frame[config.frameSize];
for (int i = 0; i < config.frameSize; i++) {
frames[i] = new Device.Frame();
}
samplingTime = new Date();
dev.BeginAcq(config.samplingRate, config.channels, config.resolution);
while (this.isCapturingActive()) {
dev.GetFrames(config.frameSize, frames);
receiveTime = new Date();
//ShowConsoleOutput if needed
// convert acquired data in frames [i] into a desired
// string format, and write to File
for (int i = 0; i < config.frameSize; i++) {
samplingTime.setTime(samplingTime.getTime() + 1);
fileWrite.write(getStringFromFrame(frames[i]));
fileWrite.write("\n");
}
fileWrite.flush();
}// end of while loop
} //end of Recording()
} // end of Class
4 Create the logic for starting the recording
Creating your own Sensor Recorder
Recording (…):
This example:
• More initializations
• Saves the start time into a variable
• Begins the acquisition on the initialized
Bioplux Device object (call to the Bioplux
API)
• Runs a loop till the capturing is active
• Fetches each frame of the specified
frameSize and stores it in the “frames”
array.
• Converts each frame to a desired string
format and stores a frame into the created
file.
• Flushes the file writer (forcing buffered
data out to the file) after each write call
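Note that the Recording() loop above advances samplingTime by one millisecond per sample, which is only correct at the 1000 Hz default rate. A rate-independent sketch (all names here are hypothetical, not Brownie API):

```java
public class SampleTimestampDemo {

    // Per-sample interval in milliseconds for a given sampling rate in Hz.
    public static double sampleIntervalMillis(int samplingRateHz) {
        return 1000.0 / samplingRateHz;
    }

    // Timestamp (in ms relative to acquisition start) of the i-th sample.
    public static long timestampOfSample(long acquisitionStartMillis,
                                         int sampleIndex, int samplingRateHz) {
        return acquisitionStartMillis
                + Math.round(sampleIndex * sampleIntervalMillis(samplingRateHz));
    }

    public static void main(String[] args) {
        System.out.println(timestampOfSample(0, 10, 1000)); // 1 ms per sample
        System.out.println(timestampOfSample(0, 10, 500));  // 2 ms per sample
    }
}
```

Deriving the step from the configured sampling rate keeps the file timestamps correct if the experimenter later lowers the rate in the configuration dialog.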
4 Compare with equivalent code for AudioRecorder
Creating your own Sensor Recorder
public class AudioRecorder extends
ISensorRecorder<AudioRecorderConfiguration> {
@Override
public void Recording() throws RecordingException {
targetDataLine = (TargetDataLine)
AudioSystem.getLine(info);
targetDataLine.open(audioFormat);
// start capturing
targetDataLine.start();
AudioInputStream audioInputStream = new
AudioInputStream(targetDataLine);
// start recording
FileOutputStream fileOutputStream =
FileManager.getInstance().getFileOutputStream
(getMenuText());
AudioSystem.write(audioInputStream,
audioFileFormatType, fileOutputStream);
} // end of Recording
} // end of class
Recording (…):
This example:
• Creates and opens an audio
data line in the specified
audio format.
• Starts the acquisition using
the “start” method in the
target dataline.
• Since start() runs capture on
its own thread, no while-loop
check is needed.
• Writes Audio stream output
to a file.
• Also a suitable method to add data quality checks, and store different information in
the file according to the estimated quality of the data!
5 Add implementations for the abstract methods (cleanupRecorder)
public class BiopluxRecorder extends
ISensorRecorder<BiopluxRecorderConfiguration> {
@Override
public void cleanupRecorder() {
dev.Close();
fileWrite.close();
}
}
Creating your own Sensor Recorder
cleanupRecorder():
Closes the Sensor acquisition
objects
Bioplux example:
- Closes the Device Object
Audio Recorder example:
- Stops and Closes the audio
data line
public class AudioRecorder extends
ISensorRecorder<AudioRecorderConfiguration> {
@Override
public void cleanupRecorder() {
targetDataLine.stop();
targetDataLine.close();
}
}
The case of Bioplux
How to create a SensorConfiguration file
How to create a SensorRecorder file
Specifying Sensor Data storage
Testing start/stop recording on clients
Coding and Configuring Biofeedback
Preparing for trouble: hardware
connectivity and data quality
Set up mcaddresses.properties and run the
server
• Browse to the following location and edit the file
ExpSensors\src\edu\kit\expsensor\bioplux\mcaddresses.properties
• Add an entry for each hostname in your experiment and the MAC address it
should connect to (<Hostname> = <MAC address of Bioplux>)
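Reading such a mapping is plain java.util.Properties work. A minimal sketch (class name and hostnames are hypothetical; the format follows the slide's <Hostname> = <MAC address> convention):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class MacAddressLookupDemo {

    // Returns the MAC address configured for the given hostname, or null
    // if the host has no entry in the properties text.
    public static String macAddressFor(String hostname, String propertiesText)
            throws IOException {
        Properties props = new Properties();
        props.load(new StringReader(propertiesText));
        return props.getProperty(hostname);
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical hostnames and MAC addresses, in the slide's format.
        String fileContents =
                "lab-pc-01 = 00:07:80:4B:2A:11\n"
              + "lab-pc-02 = 00:07:80:4B:2A:12\n";
        System.out.println(macAddressFor("lab-pc-01", fileContents));
    }
}
```

The colons in the MAC address are safe: Properties treats ':' as a separator only before the first '=' is found, so the full value is returned intact.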
Configure sensor for entire experiment
• Run Brownie in server mode
• Select the experiment; a list of available sensors appears
• Double-click "Bioplux" to add it to the experiment
Configure Sensor and apply changes
• Click "Configure selected sensor" to edit the configuration
• Modify any parameters, save, and apply the changes to the experiment
Run session & connect clients
View Bioplux connectivity on server side
CASE:
ADD BIOFEEDBACK ELEMENT
TO YOUR EXPERIMENT
The case of Bioplux
Steps
How to create a SensorConfiguration file
How to create a SensorRecorder file
Specifying Sensor Data storage
Testing start/stop recording on clients
Coding and Configuring Biofeedback
Preparing for trouble: hardware
connectivity and data quality
1 Add a UI element to display the Live Biofeedback element on your experiment
screen
public class GaugeMeterScreen extends Screen {
public GaugeMeterScreen() {
JPanel panel = new JPanel();
panel.setBackground(Color.GRAY);
// place the panel in a 250-px column at the right edge of the screen
panel.setBounds((int) bounds.getWidth() - 250, 0,
250, bounds.height);
add(panel);
panel.setLayout(null);
final RadialBargraph gauge = new RadialBargraph();
gauge.setBounds((int) bounds.getWidth() - 200, 300,
180, 200);
gauge.setTitle("Heart Beats");
gauge.setLcdVisible(true);
gauge.setLedVisible(true);
panel.add(gauge);
}
}
In the Screen class
Position the UI
element
appropriately
Configure BiopluxRecorder to send LBF
update events
public class BiopluxRecorder extends ISensorRecorder<BiopluxRecorderConfiguration> {
….
public void Recording() {
……
// make a call to your biosignal processor passing in the data frames
processor(config.frameSize, frames);
double processedValue = 0.0;
//( Result from Heart rate processing module)
// Add Value to LBFManager. Use name of this class as value-key.
LBFManager.getInstance().updateLBFValue(BiopluxRecorder.class.getName(),
processedValue );
……
}
….
}
2
3
Receive updates from LBFManager via the BiopluxRecorder class
Write logic to update the UI element based on received values
public class GaugeMeterScreen extends Screen {
public GaugeMeterScreen() {
LBFManager.getInstance().addLBFUpdateEvent(BiopluxRecorder.class.getName(),
new LBFUpdateEvent() {
@Override
public void LBFValueUpdate(Object value) {
double gaugeValue = ((double) value) * 100;
gauge.setValue(gaugeValue);
if (gaugeValue < 50) {
gauge.setBarGraphColor(ColorDef.GREEN);
} else if (gaugeValue < 80) {
gauge.setBarGraphColor(ColorDef.YELLOW);
} else {
gauge.setBarGraphColor(ColorDef.RED);
}
}
});
}
}
In the Screen class, handle received
events
Complete the code to integrate gauge in
Browser
public class GaugeMeterScreen extends Screen {
public GaugeMeterScreen() {
WebBrowser browser = new WebBrowser(true);
Rectangle bounds = MainFrame.getCurrentScreen().getBounds();
browser.setBounds(new Rectangle((int) bounds.getX(), (int) bounds.getY(),
(int) bounds.getWidth() - 250, (int) bounds.getHeight()));
/* Set screen layout to null => full-screen web browser */
setLayout(null);
add(browser);
/* Load initial web page */
browser.loadURL("https://www.amazon.de/");
/* Send client response to end the experiment after a certain amount of time */
new java.util.Timer().schedule(new java.util.TimerTask() {
@Override
public void run() {
ClientGuiController.getInstance().sendClientResponse(parameter,
gameId, screenId);
}
}, DURATION_OF_SCREEN);
……
}
}
Run the configured Bioplux experiment
again
Basic concepts about Biofeedback
• LBFManager – A singleton class (i.e., only one instance exists across the
application) that contains a list of LBFValues and a map of LBF Events
• LBFUpdateEvent – An abstract Event listener that must be handled in each screen to
specify how to handle incoming values
RecorderClass (method Recording())
• Processes incoming values and raises an event using the updateLBFValue call:
LBFManager.getInstance().updateLBFValue(BiopluxSensorRecorder.class.getName(), dummyValue);
Screen
• Registers the update event, along with the corresponding RecorderClass
• Listener logic to specify how to react to events:
LBFManager.getInstance().addLBFUpdateEvent(BiopluxSensorRecorder.class.getName(), new LBFUpdateEvent())
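The singleton-plus-listener mechanism described above can be sketched in a few lines of plain Java (a simplified stand-in, not Brownie's actual LBFManager API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Simplified stand-in for the LBFManager pattern: one shared instance keeps
// the latest value per key and pushes updates to all registered listeners.
public class LbfManagerDemo {

    private static final LbfManagerDemo INSTANCE = new LbfManagerDemo();
    public static LbfManagerDemo getInstance() { return INSTANCE; }

    private final Map<String, Double> values = new HashMap<>();
    private final Map<String, List<Consumer<Double>>> listeners = new HashMap<>();

    private LbfManagerDemo() { }

    // Screen side: register how to react to values published under a key.
    public synchronized void addUpdateListener(String key, Consumer<Double> listener) {
        listeners.computeIfAbsent(key, k -> new ArrayList<>()).add(listener);
    }

    // Recorder side: publish a processed value and notify all listeners.
    public synchronized void updateValue(String key, double value) {
        values.put(key, value);
        for (Consumer<Double> l : listeners.getOrDefault(key, List.of())) {
            l.accept(value);
        }
    }

    public static void main(String[] args) {
        List<Double> received = new ArrayList<>();
        getInstance().addUpdateListener("BiopluxRecorder", received::add);
        getInstance().updateValue("BiopluxRecorder", 0.72);
        System.out.println(received);
    }
}
```

Using the recorder class name as the key, as in the slides, lets several sensors publish values through the same manager without interfering with each other.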
Data processing considerations
- A baseline recording phase is necessary, with an initial cool-down period of
at least 5 minutes, to calibrate the values of each subject against his/her own
baseline
- Regular baselines (such as at the end of each round) can be considered as
well
- Use of standardized packages (such as xAffect) to process the data
- More flexibility in using your own algorithms for real-time processing
(if this is part of the research focus)
- Specify a time window where LBF value error is tolerable (such as 5
seconds), yet the update frequency is not too high on the screen
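The first and last points can be combined into a small sketch: calibrate against a per-subject baseline mean, then smooth live heart-rate values over a short window before updating the gauge (the class name and the percent-deviation measure are illustrative assumptions, not Brownie code):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BaselineFeedbackDemo {

    private final double baselineMeanHr;
    private final int windowSize;
    private final Deque<Double> window = new ArrayDeque<>();

    // Calibrate against the subject's own resting-phase recording.
    public BaselineFeedbackDemo(double[] baselineSamples, int windowSize) {
        double sum = 0;
        for (double s : baselineSamples) sum += s;
        this.baselineMeanHr = sum / baselineSamples.length;
        this.windowSize = windowSize;
    }

    // Feed one live heart-rate sample; returns the windowed mean deviation
    // from baseline in percent (smoothing keeps the gauge from flickering).
    public double update(double heartRate) {
        window.addLast(heartRate);
        if (window.size() > windowSize) window.removeFirst();
        double sum = 0;
        for (double s : window) sum += s;
        return 100.0 * (sum / window.size() - baselineMeanHr) / baselineMeanHr;
    }

    public static void main(String[] args) {
        BaselineFeedbackDemo fb =
                new BaselineFeedbackDemo(new double[] {60, 62, 58}, 3);
        System.out.println(fb.update(60)); // at baseline
        System.out.println(fb.update(72)); // rising above baseline
    }
}
```

The window size, in samples, is where the "tolerable error window" trade-off from the last bullet lives: a larger window means smoother but slower-reacting feedback.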
How to create a SensorConfiguration file
How to create a SensorRecorder file
Specifying Sensor Data storage
Testing start/stop recording on clients
Coding and Configuring Biofeedback
Preparing for trouble: hardware
connectivity and data quality
Hands-on component
Outlook
Fun future ideas with Brownie
• Camera based
• QR code integration – (ZXing Java library)
• Image recognition – (TensorFlow library)
• EEG based
• Brainathon – brain-wave-controlled 2-player game
• Signal processing
• JMathStudio – useful collection of signal processing functions
Editor's Notes
Note: This is a business administration lecture, not a statistics lecture; the point is to know the most important methods and be able to apply them.
Go through the individual building blocks and present each.
Point out that this unit is lecture AND exercise in one.
Can extend it later
(use it as a way to pass around data)
Purpose is to have a class which might be extended later depending on the hardware changes