The multi-object filtering problem is a generalization of the well-known single-object filtering problem. In essence, multi-object filtering is concerned with the joint estimation of the unknown and time-varying number of objects and the state of each of these objects. The filtering problem becomes particularly challenging when the number of objects cannot be inferred from the collected observations and when no association between an observation and an object is possible.
3. Single-Object Filtering
• Estimate the state of a single object given a single measurement
• The Kalman filter is the most common way to approach this problem
• Bayes filtering is a generalization of the Kalman filter
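As an illustration of the single-object case, here is a minimal Kalman filter sketch in Python, assuming a constant-velocity model and a scalar position measurement (the model matrices are hypothetical choices for illustration, not taken from the slides):

```python
import numpy as np

# Minimal linear Kalman filter sketch: state x = [position, velocity].
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (assumed)
H = np.array([[1.0, 0.0]])               # measurement model (assumed)
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # measurement noise covariance

def kalman_step(x, P, z):
    # Prediction: propagate state and covariance through the dynamics
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Correction: fuse the measurement z
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(2), np.eye(2)
for z in [np.array([0.9]), np.array([2.1]), np.array([2.9])]:
    x, P = kalman_step(x, P, z)
```

The same prediction/correction structure reappears below in the multi-object setting.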
4. Multi-Object Filtering
• Joint estimation of the unknown and time-varying number of multiple objects and the state of each, with multiple measurements
• Challenging when
• the number of objects cannot be inferred directly from the number of measurements
• no association between a measurement and an object can be made
• Need to account for additional effects like missed detections or clutter
5. Multi-Object Filtering
• Heuristic approaches exist that leverage classic single-object filters
• Multiple Hypothesis Tracking (MHT),
• Joint Probabilistic Data Association (JPDA)
• …
• Non-heuristic approaches exist that are based on Finite Set Statistics
• Probability Hypothesis Density (PHD) filter,
• Multi-Object Multi-Bernoulli (MeMBer) filter
6. Finite Set Statistics
• The main building block is the random finite set (rfs)
• A finite-set-valued random variable
• Random in both the values of the entries and the number of entries in the set
• The multi-object state can be modeled as a single rfs
• Allows the formulation of the multi-object filtering problem in a mathematically rigorous way
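A minimal sketch of what "random in both the values and the number of entries" means, assuming a Poisson-distributed cardinality and uniform single-object states (both arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rfs(mean_cardinality=3.0, area=4.0):
    """Draw one realization of a simple random finite set:
    both the number of elements and their values are random."""
    n = rng.poisson(mean_cardinality)           # random cardinality
    return rng.uniform(0.0, area, size=(n, 2))  # n random 2D states

X = sample_rfs()
print(len(X), "objects:", X)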
8. Detection-Type Sensors
• Measurements are not directly generated by the sensor
• The raw sensor data arrives as a single measurement
• Measurements are extracted by preprocessing / peak detection
[Figure: Principle of the measurement generation process. 16 measurements are taken in a range of ±90°; here, three measurements would be extracted.]
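A sketch of such a preprocessing step, assuming a Gaussian-shaped response around each of three hypothetical sources and using peak detection via scipy (one common choice; the slides do not prescribe a specific method):

```python
import numpy as np
from scipy.signal import find_peaks

# Illustrative measurement extraction for a detection-type sensor:
# the raw data is a single signal over the angle phi; individual
# measurements are extracted by peak detection.
phi = np.linspace(-np.pi / 4, np.pi / 4, 16)       # 16 raw samples
sources = np.array([-0.5, 0.0, 0.6])               # hypothetical sources
signal = np.exp(-((phi[:, None] - sources) ** 2) / 0.01).sum(axis=1)
peaks, _ = find_peaks(signal, height=0.5)
measurements = phi[peaks]                          # extracted measurements
```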
9. Superposition-Type Sensors
• Governed by the superposition (sps) principle
• The raw signal comprises all the individual measurements that would be generated by each object individually
• The raw data is not easily separable
[Figure: Concept of the sps principle. The raw signal is generated by adding the signals of three sources.]
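A sketch of the sps principle under the same hypothetical Gaussian-shaped per-object response as above: the raw signal is the sum of the individual object contributions plus additive noise.

```python
import numpy as np

rng = np.random.default_rng(1)
phi = np.linspace(-np.pi / 4, np.pi / 4, 64)

def contribution(phi, source):
    # Hypothetical per-object sensor response (Gaussian-shaped lobe).
    return np.exp(-((phi - source) ** 2) / 0.02)

sources = [-0.5, 0.1, 0.6]                         # three object states
raw = sum(contribution(phi, s) for s in sources)   # superposition
raw += 0.05 * rng.standard_normal(phi.size)        # additive sensor noise
# `raw` is what the sps-type sensor delivers: the individual
# contributions are no longer directly separable.
```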
11. Research Question
Can we derive effective and efficient multi-object Bayes filters for superposition-type sensors?
12. Research Methodology
• Derive multi-object Bayes filters for sps-type sensors in a systematic and formal way
• Employ Finite Set Statistics as a systematic technique for modeling the problem
• The multi-object state is modeled as a finite set
15. Bayes Filter Equations
• Recursively estimates the multi-object pdf with each step
• The probability of a state is independent of non-parent states
• Separable into prediction and correction steps
[Diagram: Initialization → Prediction → Correction]
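The equations themselves are not reproduced in this transcript; in standard FISST notation they take the following form, with the set integral made explicit on the Set-Integrals slide below:

```latex
\pi_{k|k-1}(X_k) = \int f_{k|k-1}(X_k \mid X')\,\pi_{k-1}(X')\,\delta X'
\quad \text{(prediction)},
\qquad
\pi_{k}(X_k \mid Z_k) =
\frac{g_k(Z_k \mid X_k)\,\pi_{k|k-1}(X_k)}
     {\int g_k(Z_k \mid X)\,\pi_{k|k-1}(X)\,\delta X}
\quad \text{(correction)}.
```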
16. Bayes Filter Steps
• Prediction of the next multi-object state with the system dynamics
• Not specific to sps-type sensors
• Correction of the multi-object state given the sensor measurements
• Specific to sps-type sensors
➡ Focus on the correction step
17. Set-Integrals
• The complexity is hidden behind the set integral
• Expands to an infinite sum over all sets of cardinality zero to infinity
➡ Set integrals are computationally expensive
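The standard definition of the set integral, which makes the infinite sum explicit:

```latex
\int f(X)\,\delta X \;=\; \sum_{n=0}^{\infty} \frac{1}{n!}
\int f(\{x_1,\dots,x_n\})\,dx_1 \cdots dx_n .
```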
18. Bayes Filter Corrector
• The measurement likelihood is used
• The measurement is used to correct the knowledge about the multi-object state
19. Measurement Likelihood
• Likelihood that the measurement is the result of the additive contributions of the objects
• Sensor noise is assumed to be additive
• Objects might not be visible to the sensor
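Under the zero-mean Gaussian noise assumption made later (slide 38), a likelihood of this additive form might look like the following, where h(x) denotes the contribution of an object with state x and R the noise covariance; the deck's exact expression, which also accounts for objects invisible to the sensor, is not reproduced in this transcript:

```latex
g(z \mid X) \;=\; \mathcal{N}\!\Big(z;\; \textstyle\sum_{x \in X} h(x),\; R\Big).
```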
22. Practical Realization (Σ-MeMBer)
• Predicted and corrected pdfs need to be of the same type to be useful
• The Bayes filter equations need to be solved in closed form
• Popular choices for the initial multi-object probability distribution are
• Poisson
• Multi-Bernoulli
23. Multi-Bernoulli
• Fully described by multiple Bernoulli components, each having two parameters
• Probability of existence and single-object density
• Modelled with a combination of single-object densities
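In the standard parameterization, a single Bernoulli component with existence probability r and single-object density p has the density on the left, and a Multi-Bernoulli with M components has the probability hypothesis density (phd) on the right; this phd is what the approximations on slides 29 and 31 match against:

```latex
\pi(X) =
\begin{cases}
1 - r, & X = \emptyset,\\
r\,p(x), & X = \{x\},\\
0, & |X| \ge 2,
\end{cases}
\qquad\qquad
v(x) = \sum_{i=1}^{M} r_i\,p_i(x).
```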
24. Σ-MeMBer Filter
• Propagates only the Bernoulli parameters over time
• The predictor is shown to be solvable in closed form
• Only the corrector needs to be determined
[Diagram: Initialization → Prediction → Correction]
26. Σ-MeMBer Corrector
• When the predicted distribution is a Multi-Bernoulli
➡ the resulting distribution is not a Multi-Bernoulli
• The filter is not recursively applicable
27. Approximations
Is it possible to reformulate the Σ-MeMBer corrector such that we can derive at least approximate Multi-Bernoulli parameters?
28. Factorization
• Split the Σ-MeMBer corrector into two parts
• A missed part that does not depend on the measurement
➡ results in a true Multi-Bernoulli
• A detected part that depends on the measurement
➡ does not result in a Multi-Bernoulli
[Equations: missed part and detected part]
29. Factorization: Approximation
• Approximate the detected part by its probability hypothesis density (phd)
• Infer the parameters by comparing with the phd of a Multi-Bernoulli
➡ Results in an invalid probability density due to negative values
• Limiting the values to be always positive
➡ Results in an overconfident estimate of the probability density
[Equations: phd of the detected corrector part and phd of a Multi-Bernoulli]
30. Factorization Equation
• Each Bernoulli component is propagated individually
• Each Bernoulli component leads to two new components
[Equations: Bernoulli parameters for the missed part and for the detected part]
31. Intensity Approximation
• Directly approximate the corrector by its probability hypothesis density (phd)
• Infer the parameters by comparing with the phd of a Multi-Bernoulli
• Does not require further approximations
• Does not change the number of components
[Equations: phd of the corrector and phd of a Multi-Bernoulli]
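The generic moment-matching step behind this comparison, as a sketch of the idea only (the deck's exact corrector phd is not reproduced here): assuming the corrector phd decomposes into per-component terms v_i, the matched Bernoulli parameters are

```latex
\tilde{r}_i = \int v_i(x)\,dx,
\qquad
\tilde{p}_i(x) = \frac{v_i(x)}{\tilde{r}_i}.
```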
32. Intensity Equation
• Each Bernoulli component is propagated individually
• The number of components stays constant
33. Results
• Provided two possible ways to approximate the Bernoulli parameters
• Allows the predictor and corrector to be applied recursively
• Potentially effective/usable filters
• What about the efficiency/computational tractability?
34. Pseudo-Likelihood
• Most terms are easy to compute
• The pseudo-likelihood is hard to compute
[Equations: pseudo-likelihoods for the factorization and intensity approximations]
35. Pseudo-Likelihood
• The convolutions lead to many combinations
• Computable, but very demanding
• An approximation is needed to make it computationally tractable
36. Computationally Tractable Approximations
• The pseudo-likelihood is an ordinary probability density function (pdf)
• Approximate the pdf by replacing it with a density that is easier to compute
• Gaussian Mixture
• Gaussian
• Poisson Binomial
37. Gaussian Pseudo-Likelihood
• The pseudo-likelihood is the convolution of individual pdfs
• Determine the mean and variance
38. Gaussian Pseudo-Likelihood
• Assume that the additive noise is zero-mean Gaussian
• Simplifies to a single Gaussian
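The identity behind this simplification: convolving independent Gaussian densities adds their means and covariances, so the convolution of Gaussian per-object terms with zero-mean Gaussian noise collapses to one Gaussian:

```latex
\mathcal{N}(\mu_1, \Sigma_1) * \mathcal{N}(\mu_2, \Sigma_2)
= \mathcal{N}(\mu_1 + \mu_2,\; \Sigma_1 + \Sigma_2).
```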
40. Setup
• Overall comparison of all filter variants
• Up to 6 objects at the same time
• Objects move linearly in a 2D area
• Non-linear superposition-type sensor model
[Figure: Example of object movement in the 2D area (px vs. py). Squares denote the positions where objects enter the area; triangles mark the positions where objects disappear. Sensors are placed in the corners.]
41. Monte Carlo Verification
• Use Sequential Monte Carlo (SMC) implementations
• All filter parameters have been fixed over multiple runs
• Measurements are generated independently for each run
• On average, 5% of objects are assumed to be not visible
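For orientation, here is a generic bootstrap-particle-filter step of the kind an SMC implementation builds on; this is a single-object sketch with hypothetical random-walk dynamics and a Gaussian likelihood, not the Σ-MeMBer implementation used in the study:

```python
import numpy as np

rng = np.random.default_rng(2)

def smc_step(particles, weights, z, propagate, likelihood):
    """One generic SMC (bootstrap particle filter) step:
    predict by sampling the dynamics, correct by reweighting
    with the measurement likelihood, then resample."""
    particles = propagate(particles)                 # prediction
    weights = weights * likelihood(z, particles)     # correction
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Hypothetical models: random-walk dynamics, Gaussian likelihood.
propagate = lambda p: p + 0.1 * rng.standard_normal(p.shape)
likelihood = lambda z, p: np.exp(-0.5 * ((z - p[:, 0]) / 0.3) ** 2)

particles = rng.uniform(0.0, 4.0, size=(500, 2))
weights = np.full(500, 1.0 / 500)
particles, weights = smc_step(particles, weights, 1.5, propagate, likelihood)
```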
46. Summary
• Derived multi-object Bayes filters for sps-type sensors in a mathematically formal way
• Provided practical implementations by choosing the multi-Bernoulli distribution as the underlying distribution
• Provided computationally tractable approximations
• Analyzed the performance in a numerical study
47. Conclusion
• It is possible to derive an effective and efficient multi-object filter for sps-type sensors
• The Σ-MeMBer filter provides a usable and computationally tractable way to estimate the state of multiple objects with sps-type sensors
50. Multi-Object Predictor
• Solely dependent on the system dynamics
• Objects move independently of each other
• Objects may enter the monitored area
• Objects may disappear from the monitored area
51. MeMBer Predictor
• If the initial and birth distributions are Multi-Bernoulli
➡ the resulting distribution is also a Multi-Bernoulli
• Only the predicted Bernoulli components need to be propagated