This document appears to be an advertisement from a company called Enjoy-AIIA or AIIAPROMOGIFTS that is interested in discussing how to make customers happier. It provides contact information including an email and phone number to get in touch along with their website address so potential clients can learn more.
The document provides a detailed design report for a redesigned hydrogen production plant. The base case design produces 8,000 kg/hr of pure hydrogen gas using steam methane reforming and pressure swing adsorption. The plant can also flexibly operate at 60% capacity. Total capital investment is $112 million over 2 years. The plant is economically viable with a 9% internal rate of return over 30 years of operation. Gaseous emissions are a concern and various abatement methods are discussed to improve sustainability.
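The report's economics rest on an internal-rate-of-return calculation, which can be sketched numerically. The cash flows below are illustrative assumptions, not figures from the report: $112M spent evenly over 2 construction years, followed by an assumed constant net revenue over 30 operating years.

```python
# Sketch: finding an internal rate of return (IRR) by bisection.
# Cash flows are hypothetical stand-ins for the plant's economics.

def npv(rate, cash_flows):
    """Net present value, with cash_flows[t] received at end of year t+1."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0):
    """Rate where NPV crosses zero, by bisection (NPV decreases in rate here)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# -$56M in each of 2 construction years, then 30 years of net revenue.
flows = [-56.0, -56.0] + [11.3] * 30
rate = irr(flows)
```

With these assumed flows the solver lands in the high single digits, the same order as the report's 9% figure; the actual IRR depends entirely on the real cash-flow schedule.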
This document provides a summary of business, economic, and political news from Mongolia in its April 25, 2008 issue. It highlights that a GSM network was supplied to a mobile company in Mongolia. It also notes that a major road project linking western Mongolia to Russia and China was announced. Additionally, it reports that Mongolia's parliament may pass a law allowing the state to own more than half of some mineral projects.
Olympus Global - Multiple Products, One Source (Shaun Jackson)
Olympus Global operates from a 38,000 square foot facility in the Black Country, England and distributes OEM engineered automotive components and assemblies through established manufacturers. Since 1974, Olympus Global has partnered with many well-known automotive companies and their suppliers. They offer access to global suppliers, competitive pricing, quality assurance processes, and various supply methods tailored to their customers' needs. Olympus Global aims to be a market leader through continuous investment and a focus on customer service and quality.
The document summarizes the specifications of several high-end sports cars, such as the Ford Mustang GT, Ford GT Mirage Avro 720, Lamborghini Reventón, Porsche Carrera GT 2010, and BMW Efficient Dynamics Concept, including their top speeds, 0-100 km/h acceleration, and power. It also compares the specifications of a standard economy car against the sports cars.
The document discusses the tools and techniques used to create a film project. A blog was created to share work and a YouTube link. Celtx was used to create a professional script. Final Cut Express was used to edit the film, though it took some time to learn. The final product was uploaded to YouTube to get feedback from a wider audience. Filming went smoothly having previously used the cameras. A zoom recorder captured certain audio that cameras couldn't pick up well.
5 Practical Steps to Successful Deep Learning Research (Brodmann17)
Deep Learning has gained huge popularity over the last several years, especially due to its remarkable progress in many domains.
Many resources are available, including open-source implementations of recent research advances. This abundance is somewhat misleading: when one actually wants to build a Deep Learning based product, they soon realize there is a large gap between these open-source implementations and a real production-grade Deep Learning product. Closing this gap can take months of work and considerable cost, especially in manpower and compute power.
In this talk I draw on my experience leading research at Brodmann17 to discuss several aspects we have found important for building Deep Learning based computer vision products.
Begin at the beginning: Feature selection for Big Data by Amparo Alonso at Bi... (Big Data Spain)
Preprocessing data is one of the most effort-consuming tasks in Machine Learning (ML). In the Big Data context, models automatically derived from data should be as simple, interpretable, and fast as possible, and achieving that requires using the best variables, that is, the best features of the data.
Although several libraries already tackle ML tasks on Big Data, the same is not yet true for feature selection (FS) algorithms or for other preprocessing techniques such as discretization, and the existing FS methods do not scale well when dealing with Big Data. In this presentation, we show our efforts and new ideas for parallelizing standard FS methods for use in Big Data environments.
Session presented at Big Data Spain 2015 Conference
15th Oct 2015
Kinépolis Madrid
http://www.bigdataspain.org
Event promoted by: http://www.paradigmatecnologico.com
Abstract: http://www.bigdataspain.org/program/thu/slot-11.html
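The parallelization idea in the presentation above comes from a simple observation: in filter-style feature selection, each feature's score is independent of the others, so feature blocks can be ranked on separate workers and merged. A minimal sketch, using |Pearson correlation| as an assumed scoring function and threads standing in for cluster workers:

```python
# Filter-style feature selection with per-block scoring in parallel.
# The score (|correlation with the label|) is one common filter
# criterion; a Big Data system would distribute the blocks instead.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def score_block(X, y, cols):
    """|corr(feature, label)| for each feature index in cols."""
    out = {}
    yc = y - y.mean()
    for j in cols:
        xc = X[:, j] - X[:, j].mean()
        denom = np.sqrt((xc ** 2).sum() * (yc ** 2).sum())
        out[j] = abs((xc * yc).sum() / denom) if denom else 0.0
    return out

def select_top_k(X, y, k, n_workers=4):
    # Partition features into independent blocks, score concurrently, merge.
    blocks = np.array_split(np.arange(X.shape[1]), n_workers)
    scores = {}
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for part in pool.map(lambda c: score_block(X, y, c), blocks):
            scores.update(part)
    return sorted(scores, key=scores.get, reverse=True)[:k]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 3 * X[:, 2] - 2 * X[:, 7] + rng.normal(scale=0.1, size=200)
top = select_top_k(X, y, k=2)   # recovers the two informative features
```

The merge step is trivial precisely because the scoring is embarrassingly parallel; multivariate FS criteria (which score feature subsets) are what make scaling genuinely hard.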
PLOTCON NYC: Interactive Visual Statistics on Massive Datasets (Plotly)
Visualization is oftentimes the best way to explore raw data. But as data grows to include millions and billions of points, traditional visualization techniques break down. Whether you're loading the data into limited memory, or separating the signal from the noise when thousands of data points occupy each pixel, as data gets big, visualization gets challenging.
In this talk, Peter will describe an approach called "datashading" that deconstructs the classical infovis pipeline to place statistical processing at the heart of the visualization task. The result is a scalable, interactive system that is easy to use and produces perceptually accurate renderings of extremely large datasets. He will show the open-source Datashader library, which implements these ideas, and makes them available within Jupyter notebooks and Bokeh data applications.
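The core of the datashading pipeline can be sketched in plain NumPy (this is an illustration of the idea, not the Datashader library's API): aggregate raw points onto a fixed pixel grid first, then map the aggregates to intensities, so statistical processing happens before any pixels are drawn and a billion points reduce to a fixed-size image.

```python
# Minimal datashading sketch: aggregate, then shade.
import numpy as np

def datashade(x, y, width=300, height=200):
    # Step 1: aggregate -- count points per pixel cell.
    counts, _, _ = np.histogram2d(y, x, bins=(height, width))
    # Step 2: shade -- log-scale so dense and sparse regions both remain visible.
    img = np.log1p(counts)
    return img / img.max() if img.max() > 0 else img

rng = np.random.default_rng(1)
x = rng.normal(size=1_000_000)
y = rng.normal(size=1_000_000)
img = datashade(x, y)   # a (200, 300) intensity array, however many points went in
```

Because the output size is fixed by the canvas, not the data, interactivity (zooming triggers re-aggregation over the new viewport) stays cheap even for extremely large datasets.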
Optimization Direct: Introduction and recent case studies (Alkis Vazacopoulos)
This document provides an overview of Optimization Direct, an IBM business partner that specializes in optimization software and consulting. It discusses Optimization Direct's experience implementing optimization technology for various industries. The document also summarizes Optimization Direct's product offerings, which focus on IBM ILOG CPLEX Optimization Studio. It then highlights several recent case studies where Optimization Direct helped customers solve scheduling, resource allocation, and pricing problems using analytics and optimization modeling approaches like MIP and heuristic algorithms. Finally, it shares an example of how Optimization Direct helped a retail client optimize markdown pricing and promotions to improve sales and margins.
The Effect of Third Party Implementations on Reproducibility (Balázs Hidasi)
This document examines the reproducibility of implementations of the GRU4Rec recommender algorithm. It analyzes several reimplementations of GRU4Rec in PyTorch, TensorFlow, Keras, and benchmarking frameworks. It finds that while some reimplementations capture the overall architecture, they are missing features and hyperparameters described in the original papers, and some contain outright bugs. Offline experiments show performance degradations in the reimplementations compared to the original, with median total performance losses ranging from 7% to 99% depending on the reimplementation and dataset. Training time comparisons show that versions with missing features train faster than a feature-complete version.
The document discusses a thesis on monitoring networks using Nagios. It outlines the objectives to understand the basic theory behind network monitoring tools, identify the functionalities a monitoring system should have, and examine if Nagios is suitable for small to medium organizations. A mixed methodology is used including literature review and lab experiments. Findings from questionnaires and the experiment show that Nagios complies with industry standards and meets the monitoring needs of small/medium organizations with no significant costs. Future work is proposed to extend Nagios' monitoring capabilities to additional network devices and servers.
Anton Muzhailo - Practical Test Process Improvement using ISTQB (Ievgenii Katsan)
Here are a few potential questions from the document:
- What is the true value of ISTQB certifications beyond just checking a box for management? How can the knowledge be applied practically?
- How can metrics be designed and used effectively to assess quality and test coverage in an agile environment? What are some examples of valid and invalid metrics?
- What artifacts or information are useful to include in a test plan even for agile teams using tools like JIRA? How can a test plan provide value beyond just additional paperwork?
- What techniques can be used to effectively estimate defect severity when multiple testers with different perspectives are involved? How can consistency be achieved?
- How can root cause analysis be applied
The document summarizes a project to build predictive models for early disease detection using machine learning. It aims to streamline prediction of disease among patients to increase chances of survival through early detection. The project involves preprocessing data, analyzing features, training models using machine learning algorithms, testing the models, and creating a user interface for predictions. The models achieved prediction accuracy rates from 74-85% on various disease datasets.
This document outlines an agenda for a RIPE Atlas workshop. The goals are to learn how to use RIPE Atlas measurements for network monitoring and troubleshooting. Attendees will learn how to use the API, command line tool, and code to manipulate Atlas data. The workshop will cover creating measurements, viewing results, using the streaming API for real-time monitoring, and how to get involved with the RIPE Atlas community. Prerequisites include having a RIPE NCC account and credits to run measurements.
Hybrid Solution of the Cold-Start Problem in Context-Aware Recommender Systems (Matthias Braunhofer)
This document summarizes Matthias Braunhofer's doctoral research on addressing the cold-start problem in context-aware recommender systems. It presents basic context-aware rating prediction models like CAMF-CC and SPF, and proposes novel variants that incorporate additional contextual information like item categories or user demographics. It also describes two approaches to building hybrid context-aware recommender systems - heuristic switching and adaptive weighting. An evaluation compares the performance of these models on three datasets in addressing new user, new item, and new context cold-start situations, finding that hybrid models generally outperform basic models.
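The heuristic-switching approach mentioned above can be illustrated with a toy sketch: route a request to a demographics-based model when the user is cold (too few ratings), otherwise to a collaborative model. Everything here is hypothetical, including the threshold and the stubbed model internals; only the switching structure reflects the summary.

```python
# Illustrative heuristic switching between two recommender components.

COLD_START_THRESHOLD = 5  # assumed cutoff, not from the thesis

def demographic_predict(user, item):
    # Stub: rating average for the user's demographic group.
    return user["age_group_avg"].get(item, 3.0)   # midpoint fallback

def collaborative_predict(user, item):
    # Stub: score from a pre-trained collaborative model.
    return user["cf_scores"].get(item, 3.0)

def hybrid_predict(user, item):
    # The switch: cold users get the demographic model.
    if user["n_ratings"] < COLD_START_THRESHOLD:
        return demographic_predict(user, item)
    return collaborative_predict(user, item)

new_user = {"n_ratings": 1, "age_group_avg": {"i1": 4.2}, "cf_scores": {}}
warm_user = {"n_ratings": 42, "age_group_avg": {}, "cf_scores": {"i1": 3.7}}
```

Adaptive weighting, the other hybrid described, would instead blend the two predictions with weights learned per situation rather than hard-switching.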
Improve Estimation maturity using Functional Size Measurement and Historical ... (Harold van Heeringen)
Many software projects, including agile ones, still fail. Improving estimation maturity, so that projects start from a realistic estimate instead of an optimistic one, could save billions of dollars in most local software industries. The Chinese government may now be moving towards an active software estimation maturity improvement strategy in its new five-year plan. Functional size measurement, relevant historical data, and parametric estimation tools are key to such a strategy. This presentation was the keynote speech at the China System and Software Process Improvement Association conference on software estimation, Beijing, China, May 27, 2016.
Lecture slides on the Design Process for Ambient Intelligence Systems.
Design process adopted in the Ambient Intelligence course at Politecnico di Torino, in year 2016.
More info at: http://bit.ly/polito-ami
The document discusses the history and goals of diagnostic engineering. It describes how diagnostic engineering optimizes fault detection and isolation to improve availability, safety, and mission success. Diagnostic engineering intersects with related fields like reliability engineering and uses tools for modeling systems, performing diagnostic analyses, and developing integrated diagnostic strategies. The goal is to effectively transfer diagnostic data and strategies to real-world applications and integrated logistic support systems.
Experimental Design for Distributed Machine Learning with Myles Baker (Databricks)
This document discusses experimental design for distributed machine learning models. It outlines common problems in machine learning modeling like selecting the best algorithm and evaluating a model's expected generalization error. It describes steps in a machine learning study like collecting data, building models, and designing experiments. The goal of experimentation is to understand how model factors affect outcomes and obtain statistically significant conclusions. Techniques discussed for analyzing distributed model outputs include precision-recall curves, confusion matrices, and hypothesis testing methods like the chi-squared test and McNemar's test. The document emphasizes that experimental design for distributed learning poses new challenges around data characteristics, computational complexity, and reproducing results across models.
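Of the hypothesis tests listed above, McNemar's test is the easiest to sketch: when two models are evaluated on the same test set, only the discordant pairs matter — b examples where model A is right and B wrong, c where the reverse holds. The counts below are hypothetical.

```python
# McNemar's test for paired classifier comparison.
from math import erf, sqrt

def mcnemar(b, c):
    """Chi-squared statistic (with continuity correction) and an
    approximate p-value from the chi-squared(1) distribution."""
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    # chi2(1) survival function via the standard normal:
    # P(chi2_1 > s) = 2 * (1 - Phi(sqrt(s)))
    p = 2 * (1 - 0.5 * (1 + erf(sqrt(stat) / sqrt(2))))
    return stat, p

# Hypothetical: A correct where B wrong on 40 examples, reverse on 15.
stat, p = mcnemar(40, 15)   # small p => the accuracy difference is unlikely to be chance
```

Because both models see the identical test items, this paired test is far more sensitive than comparing two raw accuracy numbers, which is exactly why it suits model-vs-model experiments.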
[UPDATE] Udacity webinar on Recommendation Systems (Axel de Romblay)
A one-hour webinar on RecSys for the Udacity Nanodegree program "How to become a Data Scientist": https://in.udacity.com/course/data-scientist-nanodegree--nd025.
The notebook (ipynb): https://www.kaggle.com/axelderomblay/udacity-workshop-on-recommendation-systems
Using Bayesian Optimization to Tune Machine Learning Models (Scott Clark)
1) Bayesian optimization can be used to efficiently tune the hyperparameters of machine learning models, requiring far fewer evaluations than standard random search or grid search methods to find good hyperparameters.
2) It builds a statistical model called a Gaussian process to model the objective function based on previous evaluations, and uses this to select the most promising hyperparameters to evaluate next in order to optimize an objective metric like accuracy.
3) SigOpt is a service that uses Bayesian optimization to tune machine learning models, outperforming expert humans on tasks like classifying images from CIFAR10 and reducing error rates more than standard methods.
Using Bayesian Optimization to Tune Machine Learning Models (SigOpt)
1. Tuning machine learning models is challenging due to the large number of non-intuitive hyperparameters.
2. Traditional tuning methods like grid search are computationally expensive and can find local optima rather than global optima.
3. Bayesian optimization uses Gaussian processes to build statistical models from prior evaluations to determine the most promising hyperparameters to test next, requiring far fewer evaluations than traditional methods to find better performing models.
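The loop described in both talks above — fit a Gaussian process to past evaluations, then pick the next point by an acquisition function — can be sketched compactly. This is a from-scratch illustration, not SigOpt's method: the objective is a stand-in for validation accuracy as a function of one hyperparameter, and expected improvement is the assumed acquisition function.

```python
# Minimal 1-D Bayesian optimization: GP posterior + expected improvement.
import numpy as np
from math import erf

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel between 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and stddev at query points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(rbf(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    z = (mu - best) / sigma
    Phi = 0.5 * (1 + np.vectorize(erf)(z / np.sqrt(2)))   # normal CDF
    phi = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)      # normal PDF
    return (mu - best) * Phi + sigma * phi

def objective(x):                       # stand-in "accuracy" curve, peak at 0.7
    return np.exp(-(x - 0.7) ** 2 / 0.05)

grid = np.linspace(0, 1, 201)
X = np.array([0.1, 0.5, 0.9])           # initial evaluations
y = objective(X)
for _ in range(10):                     # BO loop: model, then choose next point
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))
best_x = X[np.argmax(y)]
```

Thirteen objective evaluations in total locate the peak region — the sample efficiency that makes this approach attractive when each evaluation means training a full model.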
LF Energy Webinar - Unveiling OpenEEMeter 4.0 (DanBrown980551)
LF Energy OpenEEMeter measures the energy impacts of demand-side interventions in buildings. OpenEEMeter 4.0 delivers enhanced performance of the daily model, with dramatically reduced seasonal and weekend/weekday bias and increased computational efficiency.
This webinar explores how OpenEEMeter:
-Reduces seasonal bias in the daily model by 84%
-Reduces weekend/weekday bias in the daily model by 95%
-Runs up to 100x faster with monthly data, and 2 - 10x faster with daily data
Along with this release, the OpenEEMeter community is also publishing a detailed 4.0 model specification and the results of thorough testing conducted across residential and commercial sectors and gas and electric fuels.
Speakers:
-Adam Scheer, Vice President of Applied Data Science, Recurve
-Travis Sikes, Data Science Manager, Recurve
-Jason Chulock, Lead Engineer, Recurve
Improving the COSMIC approximate sizing using the fuzzy logic EPCU model al... (IWSM Mensura)
The document describes an experiment to improve the accuracy of the COSMIC functional size measurement (FSM) method for early stage software projects. It uses a fuzzy logic model called EPCU, which considers two variables - the perceived size of use cases and number of related objects of interest - to estimate functional size. The experiment applied the EPCU model and traditional "equal size bands" approach to estimate sizes for 14 use cases. The EPCU model initially underestimated sizes but accuracy improved when expanding the output range, demonstrating its sensitivity to variable definitions. Further experiments are needed with more test cases to validate the approach.
Evidence of Jet Activity from the Secondary Black Hole in the OJ 287 Binary S... (Sérgio Sacani)
We report the study of a huge optical intraday flare on 2021 November 12 at 2 a.m. UT in the blazar OJ 287. In the binary black hole model, it is associated with an impact of the secondary black hole on the accretion disk of the primary. Our multifrequency observing campaign was set up to search for such a signature of the impact based on a prediction made 8 yr earlier. The first I-band results of the flare have already been reported by Kishore et al. (2024). Here we combine these data with our monitoring in the R band. There is a large change in the R–I spectral index, by 1.0 ± 0.1, between the normal background and the flare, suggesting a new component of radiation; the polarization variation during the rise of the flare suggests the same. The limits on the source size place it most reasonably in the jet of the secondary black hole. We then ask why we have not seen this phenomenon before, and show that OJ 287 was never before observed with sufficient sensitivity on the night when the flare should have happened according to the binary model. We also study the probability that this flare is just an oversized example of intraday variability using the Krakow data set of intense monitoring between 2015 and 2023, and find that the occurrence of a flare of this size and rapidity is unlikely. In machine-readable Tables 1 and 2, we give the full orbit-linked historical light curve of OJ 287 as well as the dense monitoring sample of Krakow.
Similar to: Occupancy level estimation using PIR sensors only
5 Practical Steps to a Successful Deep Learning ResearchBrodmann17
Deep Learning has gained a huge popularity over the last several years. Especially due to its magnificent progress in many domains.
Many resources are out there including open source implementations of recent research advancements. This vast availability is somehow misleading because when one actually wants to create a Deep Learning based product, he soon realizes that there is a large gap between these open source implementations and a real production grade Deep Learning product. Closing this gap can take months of work involving large costs, especially on man power and compute power.
Throughout this talk I will talk based on my experience leading the research at Brodmann17 about several aspects we have found to be important for building Deep Learning based computer vision products.
Begin at the beginning: Feature selection for Big Data by Amparo Alonso at Bi...Big Data Spain
Preprocessing data is one of the most effort consuming tasks in Machine Learning (ML). In the Big Data context, the models automatically derived from data should be as simple as possible, interpretable and fast, and for achieving that we will need to use the best variables, that is, use the best features of such data.
Although there are already several libraries available which approach ML tasks in Big Data, that is not the case for FS algorithms yet, and other preprocessing techniques such as discretization. However, the existing FS methods do not scale well when dealing with Big Data. In this presentation, we show our efforts and new ideas for parallelizing standard FS methods for its use on Big Data environments.
Session presented at Big Data Spain 2015 Conference
15th Oct 2015
Kinépolis Madrid
http://www.bigdataspain.org
Event promoted by: http://www.paradigmatecnologico.com
Abstract: http://www.bigdataspain.org/program/thu/slot-11.html
PLOTCON NYC: Interactive Visual Statistics on Massive DatasetsPlotly
Visualization is oftentimes the best way to explore raw data. But as data grows to include millions and billions of points, traditional visualization techniques break down. Whether you're loading the data into limited memory, or separating the signal from the noise when thousands of data points occupy each pixel, as data gets big, visualization gets challenging.
In this talk, Peter will describe an approach called "datashading" that deconstructs the classical infovis pipeline to place statistical processing at the heart of the visualization task. The result is a scalable, interactive system that is easy to use and produces perceptually accurate renderings of extremely large datasets. He will show the open-source Datashader library, which implements these ideas, and makes them available within Jupyter notebooks and Bokeh data applications.
Optimization Direct: Introduction and recent case studiesAlkis Vazacopoulos
This document provides an overview of Optimization Direct, an IBM business partner that specializes in optimization software and consulting. It discusses Optimization Direct's experience implementing optimization technology for various industries. The document also summarizes Optimization Direct's product offerings, which focus on IBM ILOG CPLEX Optimization Studio. It then highlights several recent case studies where Optimization Direct helped customers solve scheduling, resource allocation, and pricing problems using analytics and optimization modeling approaches like MIP and heuristic algorithms. Finally, it shares an example of how Optimization Direct helped a retail client optimize markdown pricing and promotions to improve sales and margins.
The Effect of Third Party Implementations on ReproducibilityBalázs Hidasi
This document examines the reproducibility of implementations of the GRU4Rec recommender algorithm. It analyzes several reimplementations of GRU4Rec in PyTorch, TensorFlow, Keras and benchmarking frameworks. It finds that while some reimplementations capture the overall architecture, they are missing features and hyperparameters described in the original papers. Some implementations also contain errors in their implementation. Offline experiments show performance degradations in the reimplementations compared to the original implementation, with median total performance losses ranging from 7-99% depending on the reimplementation and dataset. Training time comparisons show that versions with missing features require less time to train than a feature-complete version.
The document discusses a thesis on monitoring networks using Nagios. It outlines the objectives to understand the basic theory behind network monitoring tools, identify the functionalities a monitoring system should have, and examine if Nagios is suitable for small to medium organizations. A mixed methodology is used including literature review and lab experiments. Findings from questionnaires and the experiment show that Nagios complies with industry standards and meets the monitoring needs of small/medium organizations with no significant costs. Future work is proposed to extend Nagios' monitoring capabilities to additional network devices and servers.
Anton Muzhailo - Practical Test Process Improvement using ISTQBIevgenii Katsan
Here are a few potential questions from the document:
- What is the true value of ISTQB certifications beyond just checking a box for management? How can the knowledge be applied practically?
- How can metrics be designed and used effectively to assess quality and test coverage in an agile environment? What are some examples of valid and invalid metrics?
- What artifacts or information are useful to include in a test plan even for agile teams using tools like JIRA? How can a test plan provide value beyond just additional paperwork?
- What techniques can be used to effectively estimate defect severity when multiple testers with different perspectives are involved? How can consistency be achieved?
- How can root cause analysis be applied
The document summarizes a project to build predictive models for early disease detection using machine learning. It aims to streamline prediction of disease among patients to increase chances of survival through early detection. The project involves preprocessing data, analyzing features, training models using machine learning algorithms, testing the models, and creating a user interface for predictions. The models achieved prediction accuracy rates from 74-85% on various disease datasets.
This document outlines an agenda for a RIPE Atlas workshop. The goals are to learn how to use RIPE Atlas measurements for network monitoring and troubleshooting. Attendees will learn how to use the API, command line tool, and code to manipulate Atlas data. The workshop will cover creating measurements, viewing results, using the streaming API for real-time monitoring, and how to get involved with the RIPE Atlas community. Prerequisites include having a RIPE NCC account and credits to run measurements.
Hybrid Solution of the Cold-Start Problem in Context-Aware Recommender SystemsMatthias Braunhofer
This document summarizes Matthias Braunhofer's doctoral research on addressing the cold-start problem in context-aware recommender systems. It presents basic context-aware rating prediction models like CAMF-CC and SPF, and proposes novel variants that incorporate additional contextual information like item categories or user demographics. It also describes two approaches to building hybrid context-aware recommender systems - heuristic switching and adaptive weighting. An evaluation compares the performance of these models on three datasets in addressing new user, new item, and new context cold-start situations, finding that hybrid models generally outperform basic models.
Improve Estimation maturity using Functional Size Measurement and Historical ...Harold van Heeringen
Many software projects still fail in recent years, also agile projects. Improving estimation maturity in order to start with a realistic estimate instead of an optimistic one can really save billions of dollars in most local software industries. The Chinese government may now be moving towards an active software estimation maturity improvement strategy in its new 5-year plan. Functional size measurement and relevant historical data as well as parametric estimation tools are key to such a strategy. This presentation was the key-note speech at the China System and Software Process Improvement Association conference on software estimation, Beijing China, May 27 2016.
Lecture slides on the Design Process for Ambient Intelligence Systems.
Design process adopted in the Ambient Intelligence course at Politecnico di Torino, in year 2016.
More info at: http://bit.ly/polito-ami
The document discusses the history and goals of diagnostic engineering. It describes how diagnostic engineering optimizes fault detection and isolation to improve availability, safety, and mission success. Diagnostic engineering intersects with related fields like reliability engineering and uses tools for modeling systems, performing diagnostic analyses, and developing integrated diagnostic strategies. The goal is to effectively transfer diagnostic data and strategies to real-world applications and integrated logistic support systems.
Experimental Design for Distributed Machine Learning with Myles BakerDatabricks
This document discusses experimental design for distributed machine learning models. It outlines common problems in machine learning modeling like selecting the best algorithm and evaluating a model's expected generalization error. It describes steps in a machine learning study like collecting data, building models, and designing experiments. The goal of experimentation is to understand how model factors affect outcomes and obtain statistically significant conclusions. Techniques discussed for analyzing distributed model outputs include precision-recall curves, confusion matrices, and hypothesis testing methods like the chi-squared test and McNemar's test. The document emphasizes that experimental design for distributed learning poses new challenges around data characteristics, computational complexity, and reproducing results across models.
[UPDATE] Udacity webinar on Recommendation SystemsAxel de Romblay
A 1h webinar on RecSys for the Udacity NanoDegree Program "How to become a Data Scientist" : https://in.udacity.com/course/data-scientist-nanodegree--nd025.
The link to the ipynb : https://www.kaggle.com/axelderomblay/udacity-workshop-on-recommendation-systems
Using Bayesian Optimization to Tune Machine Learning ModelsScott Clark
1) Bayesian optimization can be used to efficiently tune the hyperparameters of machine learning models, requiring far fewer evaluations than standard random search or grid search methods to find good hyperparameters.
2) It builds a statistical model called a Gaussian process to model the objective function based on previous evaluations, and uses this to select the most promising hyperparameters to evaluate next in order to optimize an objective metric like accuracy.
3) SigOpt is a service that uses Bayesian optimization to tune machine learning models, outperforming expert humans on tasks like classifying images from CIFAR10 and reducing error rates more than standard methods.
Using Bayesian Optimization to Tune Machine Learning ModelsSigOpt
1. Tuning machine learning models is challenging due to the large number of non-intuitive hyperparameters.
2. Traditional tuning methods like grid search are computationally expensive and can find local optima rather than global optima.
3. Bayesian optimization uses Gaussian processes to build statistical models from prior evaluations to determine the most promising hyperparameters to test next, requiring far fewer evaluations than traditional methods to find better performing models.
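As a hedged sketch of the loop both summaries describe (not SigOpt's actual implementation), the following minimal Bayesian-optimization example fits a Gaussian-process surrogate to past evaluations and uses expected improvement to pick the next hyperparameter to try; the one-dimensional objective, search range, and all settings are illustrative assumptions:

```python
# Minimal Bayesian-optimization sketch: GP surrogate + expected improvement.
# The objective below stands in for a model's validation error as a function
# of a single hyperparameter (an illustrative assumption, not a real model).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    return (x - 0.65) ** 2 + 0.05 * np.sin(15 * x)

def expected_improvement(candidates, gp, best_y):
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (best_y - mu) / sigma              # we are minimizing the objective
    return (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(3, 1))         # a few random initial evaluations
y = objective(X).ravel()

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    grid = np.linspace(0, 1, 200).reshape(-1, 1)
    nxt = grid[np.argmax(expected_improvement(grid, gp, y.min()))]
    X = np.vstack([X, nxt])
    y = np.append(y, objective(nxt[0]))

best = X[np.argmin(y)][0]                  # best hyperparameter found so far
print(best, y.min())
```

Eighteen objective evaluations suffice here, whereas a grid of comparable resolution would need 200 evaluations; that gap is the efficiency argument both summaries make.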
LF Energy Webinar - Unveiling OpenEEMeter 4.0DanBrown980551
LF Energy OpenEEmeter measures the energy impacts of demand-side interventions in buildings. OpenEEmeter 4.0 provides enhanced performance of the daily model with dramatically reduced seasonal and weekend/weekday bias, along with increased computational efficiency.
This webinar explores how OpenEEMeter:
-Reduces seasonal bias in the daily model by 84%
-Reduces weekend/weekday bias in the daily model by 95%
-Runs up to 100x faster with monthly data, and 2 - 10x faster with daily data
Along with this release, the OpenEEMeter community is also publishing a detailed 4.0 model specification and results of thorough testing conducted across residential and commercial sectors and gas and electric fuels.
Speakers:
-Adam Scheer, Vice President of Applied Data Science, Recurve
-Travis Sikes, Data Science Manager, Recurve
-Jason Chulock, Lead Engineer, Recurve
Improving the cosmic approximate sizing using the fuzzy logic epcu model al...IWSM Mensura
The document describes an experiment to improve the accuracy of the COSMIC functional size measurement (FSM) method for early stage software projects. It uses a fuzzy logic model called EPCU, which considers two variables - the perceived size of use cases and number of related objects of interest - to estimate functional size. The experiment applied the EPCU model and traditional "equal size bands" approach to estimate sizes for 14 use cases. The EPCU model initially underestimated sizes but accuracy improved when expanding the output range, demonstrating its sensitivity to variable definitions. Further experiments are needed with more test cases to validate the approach.
Similar to Occupancy level estimation using pir sensors only (20)
Evidence of Jet Activity from the Secondary Black Hole in the OJ 287 Binary S...Sérgio Sacani
We report the study of a huge optical intraday flare on 2021 November 12 at 2 a.m. UT in the blazar OJ 287. In the binary black hole model, it is associated with an impact of the secondary black hole on the accretion disk of the primary. Our multifrequency observing campaign was set up to search for such a signature of the impact based on a prediction made 8 yr earlier. The first I-band results of the flare have already been reported by Kishore et al. (2024). Here we combine these data with our monitoring in the R-band. There is a big change in the R–I spectral index by 1.0 ± 0.1 between the normal background and the flare, suggesting a new component of radiation. The polarization variation during the rise of the flare suggests the same. The limits on the source size place it most reasonably in the jet of the secondary BH. We then ask why we have not seen this phenomenon before. We show that OJ 287 was never before observed with sufficient sensitivity on the night when the flare should have happened according to the binary model. We also study the probability that this flare is just an oversized example of intraday variability using the Krakow data set of intense monitoring between 2015 and 2023. We find that the occurrence of a flare of this size and rapidity is unlikely. In machine-readable Tables 1 and 2, we give the full orbit-linked historical light curve of OJ 287 as well as the dense monitoring sample of Krakow.
PPT on Alternate Wetting and Drying presented at the three-day 'Training and Validation Workshop on Modules of Climate Smart Agriculture (CSA) Technologies in South Asia' workshop on April 22, 2024.
Immersive Learning That Works: Research Grounding and Paths ForwardLeonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and 'Immersion Cube' frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences, spotlighting research frontiers along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
Travis Hills of MN is Making Clean Water Accessible to All Through High Flux ...Travis Hills MN
By harnessing the power of High Flux Vacuum Membrane Distillation, Travis Hills from MN envisions a future where clean and safe drinking water is accessible to all, regardless of geographical location or economic status.
Sexuality - Issues, Attitude and Behaviour - Applied Social Psychology - Psyc...PsychoTech Services
A proprietary approach developed by bringing together the best of learning theories from Psychology, design principles from the world of visualization, and pedagogical methods from over a decade of training experience, that enables you to: Learn better, faster!
PPT on Direct Seeded Rice presented at the three-day 'Training and Validation Workshop on Modules of Climate Smart Agriculture (CSA) Technologies in South Asia' workshop on April 22, 2024.
Discovery of An Apparent Red, High-Velocity Type Ia Supernova at 𝐳 = 2.9 wi...Sérgio Sacani
We present the JWST discovery of SN 2023adsy, a transient object located in the host galaxy JADES-GS+53.13485−27.82088, with a host spectroscopic redshift of 2.903 ± 0.007. The transient was identified in deep James Webb Space Telescope (JWST)/NIRCam imaging from the JWST Advanced Deep Extragalactic Survey (JADES) program. Photometric and spectroscopic follow-up with NIRCam and NIRSpec, respectively, confirm the redshift and yield UV-NIR light-curve, NIR color, and spectroscopic information all consistent with a Type Ia classification. Despite its classification as a likely SN Ia, SN 2023adsy is both fairly red (E(B−V) ∼ 0.9), despite a host galaxy with low extinction, and has a high Ca II velocity (19,000 ± 2,000 km/s) compared to the general population of SNe Ia. While these characteristics are consistent with some Ca-rich SNe Ia, particularly SN 2016hnk, SN 2023adsy is intrinsically brighter than the low-z Ca-rich population. Although such an object is too red for any low-z cosmological sample, we apply a fiducial standardization approach to SN 2023adsy and find that its luminosity distance measurement is in excellent agreement (≲ 1σ) with ΛCDM. Therefore, unlike low-z Ca-rich SNe Ia, SN 2023adsy is standardizable and gives no indication that SN Ia standardized luminosities change significantly with redshift. A larger sample of distant SNe Ia is required to determine whether SN Ia population characteristics at high-z truly diverge from their low-z counterparts, and to confirm that standardized luminosities nevertheless remain constant with redshift.
Mending Clothing to Support Sustainable Fashion_CIMaR 2024.pdfSelcen Ozturkcan
Ozturkcan, S., Berndt, A., & Angelakis, A. (2024). Mending clothing to support sustainable fashion. Presented at the 31st Annual Conference by the Consortium for International Marketing Research (CIMaR), 10-13 Jun 2024, University of Gävle, Sweden.
The binding of cosmological structures by massless topological defectsSérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is mitigated, at least in part.
Anti-Universe And Emergent Gravity and the Dark UniverseSérgio Sacani
Recent theoretical progress indicates that spacetime and gravity emerge together from the entanglement structure of an underlying microscopic theory. These ideas are best understood in Anti-de Sitter space, where they rely on the area law for entanglement entropy. The extension to de Sitter space requires taking into account the entropy and temperature associated with the cosmological horizon. Using insights from string theory, black hole physics and quantum information theory we argue that the positive dark energy leads to a thermal volume law contribution to the entropy that overtakes the area law precisely at the cosmological horizon. Due to the competition between area and volume law entanglement the microscopic de Sitter states do not thermalise at sub-Hubble scales: they exhibit memory effects in the form of an entropy displacement caused by matter. The emergent laws of gravity contain an additional ‘dark’ gravitational force describing the ‘elastic’ response due to the entropy displacement. We derive an estimate of the strength of this extra force in terms of the baryonic mass, Newton’s constant and the Hubble acceleration scale a0 = cH0, and provide evidence for the fact that this additional ‘dark gravity force’ explains the observed phenomena in galaxies and clusters currently attributed to dark matter.
2. What’s occupancy?
Three dimensions
Most common:
• Binary/Head counts
• At the room level
• Time resolution app dependent
Heisenberg’s principle
• Δoccupants × Δtime × Δspace ≥ min. cost
• Costs: $$$ and privacy
15/02/2016 BASTIEN PIETROPAOLI 2
3. Summary
Existing approaches
• Sensor used
• Example of existing approaches
Our approach
• Seeking a simpler solution
• Small deployment
• Saving energy
Binary occupancy, the classic
Machine learning
• What features?
• Linear regression
• Results
• Exploring the parameters
Pros and cons/Conclusion
5. Sensors used for occupancy detection
CO2 / VOC
• Pros: Detect people independently of their activity
• Cons: Expensive, low time resolution, not suitable for open spaces, highly sensitive to ventilation
PIRs
• Pros: Cheap, already deployed, well-known
• Cons: Binary events, noisy peaks, cannot detect still people
Sound
• Pros: ?
• Cons: Sensitive to external noises
Cameras
• Pros: Highly reliable
• Cons: Privacy concerns, sensitive to obstruction and luminosity changes, computationally demanding
6. Existing approaches (1/3)
Counters at key places
• Pairs of PIR sensors
• Modified PIR sensors
• Cameras
• Wireless sensing
Pros
• Simple in principle
• Cost effective
Cons
• Error prone
• Propagated errors
Zappi et al. 2010 Lin et al. 2011
Erickson et al. 2013
Hutchins et al. 2007
10. Seeking a simpler solution
Required qualities
• Cheap
• Short training set
• Simple models
• Privacy friendly
• Reliable head counts
The solution we need!
11. Our (ridiculously) small deployment
PIR sensors
• One office
• Two sensors
• Four people
First to test binary occupancy
• Integration to Wi-Fi sniffer project
• Indoor localisation improvement
12. Save energy, keep reactivity
Wireless sensor nodes
• Limited batteries
• Wireless com’ consuming too much
Needs
• Reactivity
• All the events
Solution
• Send a sequence start message
• A message every 5s maximum
• Over 11 days: 99727 msgs sent for 198842 events
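The message budget on this slide can be sketched as follows; the batching rule (one sequence-start message for a new burst, then at most one message per 5 s) is my reading of the slide, so treat it as an assumption rather than the deployed firmware logic:

```python
# Sketch of the slide's energy-saving rule (assumed interpretation): a node
# transmits a sequence-start message for a new burst of motion events, then
# at most one message per 5-second window; intervening events ride along.
def messages_needed(event_times, max_interval=5.0):
    """Count radio messages for a sorted list of event timestamps (seconds)."""
    sent = 0
    last_sent = None
    for t in event_times:
        if last_sent is None or t - last_sent >= max_interval:
            sent += 1                      # sequence-start or batch message
            last_sent = t
    return sent

# Eight events collapse to four messages under this rule.
events = [0.0, 1.2, 2.5, 4.9, 5.1, 9.8, 10.3, 30.0]
print(messages_needed(events))  # 4
```

A rule of this shape roughly halves the radio traffic while keeping every event, consistent with the reported 99727 messages sent for 198842 events over 11 days.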
17. What features?
Intuition
• More people = more motion events
• Integration might help
• Take the number of events!
Enough data?
Correlated enough?
Int. time:  15s    30s    60s    90s    300s   900s   1800s
Corr.:      0.741  0.803  0.846  0.866  0.909  0.929  0.928
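The feature and its correlation check can be sketched on synthetic data (the event rates and occupancy schedule below are assumptions, not the deployment data); longer integration windows smooth the sensor noise and raise the correlation, matching the trend in the table:

```python
# Count PIR motion events per integration window and correlate the counts
# with a head-count ground truth. Synthetic data (assumption): occupancy of
# 0-4 people changing every 15 minutes, Poisson motion events per second.
import numpy as np

rng = np.random.default_rng(42)
seconds = 6 * 3600
occupancy = np.repeat(rng.integers(0, 5, seconds // 900), 900)
events = rng.poisson(0.02 * occupancy)     # more people -> more events

def windowed(x, window, reduce):
    """Reshape a per-second series into consecutive windows and reduce them."""
    n = len(x) // window
    return reduce(x[: n * window].reshape(n, window), axis=1)

corrs = {}
for window in (60, 900):
    counts = windowed(events, window, np.sum)    # the feature
    occ = windowed(occupancy, window, np.mean)   # ground truth per window
    corrs[window] = np.corrcoef(counts, occ)[0, 1]
    print(f"{window:>4}s windows: correlation {corrs[window]:.3f}")
```

Even on made-up data the pattern holds: the 900 s window correlates far better with occupancy than the 60 s window does.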
18. Machine learning?
Supervised, unsupervised?
Which method?
Training set?
Enough features?
Let’s test the simplest: linear
regression
My colleagues when I explain how to use it.
Ground truth system made of two buttons.
19. Linear regression, the simplest
A matrix of features: the number of motion events raised to various polynomial degrees
A vector of real measurements: our
occupancy ground truth
A closed-form solution
Super fast prediction!
$$
X = \begin{pmatrix} 1 & x_{1,1} & \cdots & x_{1,n} \\ \vdots & & \ddots & \vdots \\ 1 & x_{m,1} & \cdots & x_{m,n} \end{pmatrix}, \qquad
Y = \begin{pmatrix} y_1 \\ \vdots \\ y_m \end{pmatrix}, \qquad
\Theta = \begin{pmatrix} \theta_0 \\ \vdots \\ \theta_n \end{pmatrix} = (X^T X)^{-1} X^T Y
$$
with $Y = X\Theta$, i.e. $y_i = \theta_0 + \sum_{j=1}^{n} \theta_j \, x_{i,j}$.
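The closed-form fit can be sketched directly in NumPy; the synthetic counts and ground-truth occupancy below are illustrative assumptions, and `solve` replaces the explicit inverse for numerical stability:

```python
# Fit occupancy from motion-event counts via the normal equations:
# Theta = (X^T X)^{-1} X^T Y, with polynomial features [1, x, x^2].
# Counts and ground truth are synthetic stand-ins (assumption).
import numpy as np

rng = np.random.default_rng(1)
counts = rng.uniform(0, 30, 200)                     # events per window
occupancy = 0.1 + 0.15 * counts - 0.001 * counts**2  # hypothetical truth
y = occupancy + rng.normal(0, 0.2, 200)              # noisy ground truth

degree = 2
X = np.column_stack([counts**j for j in range(degree + 1)])

# solve() avoids explicitly inverting X^T X.
theta = np.linalg.solve(X.T @ X, X.T @ y)
y_hat = X @ theta
rmse = np.sqrt(np.mean((y_hat - y) ** 2))
print(np.round(theta, 3), round(rmse, 3))
```

Prediction is a single matrix-vector product, which is why the slide can call it "super fast".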
22. Visualising the results (1/4)
Seeking the best parameter combination
• Integration time
• Degree of the polynomial used
Various criteria
• RMSE
• Accuracy
• Accuracy with tolerance
Results averaged over all the days used to learn
23. Visualising the results (2/4)
RMSE
• Good estimate mixing both mean error
and standard deviation of the error
Best
• Degree 2
• 900s integration
24. Visualising the results (3/4)
Accuracy (when rounded)
• Proportion of correct estimates
Best
• Degree 1
• 900s integration
25. Visualising the results (4/4)
Accuracy with tolerance 1
• Take the floor or the ceiling of the
estimates
• Discriminate binary occupancy
Best
• Degree 2
• 900s integration
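The three criteria from these slides can be sketched as follows; the ground truth and estimates are made-up illustrative numbers, and the tolerance-1 rule follows the floor-or-ceiling reading given in the editor's notes:

```python
# Sketch of the three evaluation criteria: RMSE, accuracy of the rounded
# estimate, and accuracy with tolerance 1 (floor OR ceiling of the estimate
# must match the ground truth). All numbers below are illustrative.
import numpy as np

truth = np.array([0, 0, 1, 2, 3, 2, 1, 4])                 # head counts
est = np.array([0.2, 0.6, 1.3, 1.8, 3.4, 2.9, 0.7, 3.6])   # model output

rmse = np.sqrt(np.mean((est - truth) ** 2))
accuracy = np.mean(np.round(est) == truth)
# Tolerance 1: an estimate of 0.6 still counts for an empty room (floor 0),
# and 2.9 still counts for two people (floor 2).
tol = np.mean((np.floor(est) == truth) | (np.ceil(est) == truth))
print(round(rmse, 3), accuracy, tol)
```

The tolerance criterion is always at least as generous as plain rounding, which is why it is the one used to discriminate binary occupancy.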
26. Does the day matter?
Acceptable difference between worst and best day
The best day tends to be the same for all the
parameter sets
The ordering of parameter sets tends to be respected for the worst, average, and best days
Int. Time Worst Average Best
15s 75.19% 77.08% 77.77%
30s 76.68% 78.97% 80.18%
45s 77.13% 79.49% 81.04%
60s 77.68% 79.83% 81.59%
90s 78.04% 79.99% 81.95%
120s 78.47% 80.18% 82.36%
150s 78.41% 80.29% 82.31%
180s 78.38% 80.33% 82.37%
300s 78.09% 80.43% 82.39%
600s 79.02% 81.30% 83.42%
900s 79.29% 81.56% 83.63%
1200s 79.66% 81.45% 83.42%
1800s 78.86% 81.09% 82.83%
28. Pros and cons
Pros
• Requires only one type of sensor
• Well-known sensors
• Cheap and commonly deployed sensors
• Simple model
• Computationally extra light
• Accurate with a small training set
Cons
• Sensitive to sensor placement
• Model might be specific to the room
• Might not work in all types of environments
29. Conclusion
We did it!
• Small training set
• Computationally light
• Model easily understood
• Cheap sensors
• Acceptable accuracy
30. Thanks for your attention! Questions?
CONTACT: BASTIEN.PIETROPAOLI@GMAIL.COM/@CIT.IE
Editor's Notes
Here, real-time head counts at the room level.
Peaks around 9pm: not noise, but the security guard, who triggers the sensors without tapping the ground-truth buttons!
Increased validity time = increased presence recall, reduced precision
Occulted (0, 0) as it would cover the entire figure.
Supervised or unsupervised: unsupervised might be difficult, impossible to distinguish clear classes.
Which method: certainly not classification, we need to infer a number, not a class
Training set: shouldn’t be too massive, hard to acquire ground truth
Used one day to learn the parameters; here, August 10th.
Overall results not too bad. RMSE of 0.9 over the presence time is equivalent to the best results found in the bibliography.
Nights and days are correctly discriminated.
Peaks when people arrive in the morning. Might be due to sensor placement, with overlapping fields of view.
Criteria: no clear standard criterion in the literature.
If groundtruth == 0 and estimate ceiling == 1, not okay. Needs to be rounded at 0.
If groundtruth >= 1 and estimate floor == 0, not okay. Needs to be rounded at groundtruth.
Table for degree 2
Sensor placement: artificially increased event counts due to overlapping fields of view.
Specific: to be further experimented.
Environments: for instance, in a kitchen, would it work? Different types of activities will induce different levels of counts; it might work, it might not.
Cheap sensors commonly deployed in recent buildings.
Small training set: 1 day is doable, even for each room. Can even be redone if necessary.
Computationally light, scalable, a whole building can be managed by an embedded PC.
Accuracy at least as good as anything else found in the literature.