This document is a resume for Marwin Ko that outlines his education and experience. It summarizes that he received his M.S. in Mechanical Engineering from UC Merced in 2015 and B.S. in Bioengineering from UC Merced in 2013. It also lists his extensive engineering coursework, research experience including publications, technical skills, teaching experience, awards and affiliations.
Alexander Venzin is a statistician currently working for the International Atomic Energy Agency in Vienna, Austria. He has 7 years of experience developing statistical models and algorithms for applications in analytical chemistry, power markets, sampling methodology, and more. His background includes graduate research on quantifying bias in ROC curves and work at Pacific Northwest National Laboratory and the Air Force Institute of Technology.
This project aims to improve the precision of estimates from the National Lakes Assessment (NLA) by combining citizen science lake data with NLA data. The student will identify states with usable citizen science lake monitoring programs and develop models to predict lake conditions using citizen science data. These predictions will be incorporated into model-assisted estimators using NLA data to estimate lake conditions. Standard errors of these estimators will be compared to those from NLA alone to quantify the potential benefit of including citizen science data. The primary outcomes will be a methodology for combining data sources and an assessment of the precision gains from adding citizen science data.
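The model-assisted idea described above can be sketched with a difference estimator: predict the outcome for every lake in the frame from the citizen-science covariate, then correct with the residuals from the probability sample. Everything below is simulated (the covariate, the sample size, the equal-weight design); the real NLA uses a complex survey design with unequal weights.

```python
# Sketch of a model-assisted difference estimator on simulated lake data.
import numpy as np

rng = np.random.default_rng(0)

N = 5000                      # lakes in the frame (citizen data available for all)
x = rng.normal(5.0, 1.0, N)   # hypothetical citizen-science covariate
y = 2.0 + 1.5 * x + rng.normal(0.0, 0.5, N)  # true condition (unknown in practice)

n = 200                                       # probability sample, NLA-style
idx = rng.choice(N, size=n, replace=False)

# Fit a working model on the sample, predict for the whole frame
b1, b0 = np.polyfit(x[idx], y[idx], 1)
y_hat = b0 + b1 * x

# Difference estimator: frame-level mean prediction + sample mean residual
resid = y[idx] - y_hat[idx]
est_assisted = y_hat.mean() + resid.mean()
se_assisted = resid.std(ddof=1) / np.sqrt(n)

# Design-based comparison: plain sample mean and its standard error
est_plain = y[idx].mean()
se_plain = y[idx].std(ddof=1) / np.sqrt(n)
```

When the covariate is informative, the residual variance is smaller than the outcome variance, so `se_assisted` falls below `se_plain` — the precision gain the project aims to quantify.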
Estimators for structural equation models of Likert scale data (Nick Stauner)
- The document compares different estimation methods (ML, WLS, ULS, DWLS, GLS) for confirmatory factor analysis with ordinal/Likert scale data.
- WLS estimates deviated most from the target values and were the least accurate, while ML, ULS, and DWLS estimates were roughly equal.
- For non-normal data, fit statistics from ML and GLS were biased, while ULS and DWLS were considered the most robust and liberal estimators.
This document discusses a study that compares the performance of different statistical learning methods (LASSO, classification trees, random forests, support vector machines) across different sample sizes using a dataset on high school dropout rates. The study finds that the statistical learning methods generally perform better than traditional regression at predicting dropout across different sample sizes. It also finds that the prediction models and errors produced differ between learning methods and across sample sizes for each method. The document outlines how each statistical learning method works in order to help researchers apply these advanced techniques to their own work.
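The study's comparison can be sketched with scikit-learn: fit each learner at several sample sizes and record cross-validated accuracy. The data here are synthetic stand-ins (the dropout dataset is not reproduced), and LASSO is implemented as L1-penalized logistic regression since the outcome is binary.

```python
# Hedged sketch: comparing learners across sample sizes on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)

models = {
    "lasso": LogisticRegression(penalty="l1", solver="liblinear", C=1.0),
    "tree": DecisionTreeClassifier(random_state=0),
    "forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "svm": SVC(kernel="rbf"),
}

scores = {}
for n in (200, 1000, 2000):           # mimic the study's varying sample sizes
    for name, model in models.items():
        acc = cross_val_score(model, X[:n], y[:n], cv=5).mean()
        scores[(name, n)] = acc
```

Inspecting `scores` shows the pattern the study reports: the methods differ from each other, and each method's error changes with sample size.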
An Ontology-underpinned Decision-Support System for Wastewater Management (Luigi Ceccaroni)
The document describes OntoWEDSS, an ontology-based decision support system for wastewater management. OntoWEDSS integrates rule-based and case-based reasoning systems using the WaWO ontology to model the wastewater domain. Evaluation results showed OntoWEDSS had an average of 88% successful outcomes in diagnosing problematic situations like bulking sludge. The authors conclude that OntoWEDSS is a research tool that demonstrates how ontologies can be introduced into decision support systems to improve modeling, reasoning and diagnosis in environmental domains.
The document discusses challenges in analytics for big data. It notes that big data refers to data that exceeds the capabilities of conventional algorithms and techniques to derive useful value. Some key challenges discussed include handling the large volume, high velocity, and variety of data types from different sources. Additional challenges include scalability for hierarchical and temporal data, representing uncertainty, and making the results understandable to users. The document advocates for distributed analytics from the edge to the cloud to help address issues of scale.
The document discusses using buildings and their structural vibrations as sensors for machine learning applications with small datasets. It describes challenges with deploying many sensors that require extensive data collection and maintenance. The presented approach aims to enable "small data" learning by optimizing sensing, integrating physical models to reduce data needs, and adapting data models using physical understanding to transfer learning across applications. Examples are given on using building vibrations to detect footsteps versus non-footsteps with high accuracy, and to identify people by their unique walking patterns. The approach is shown to significantly reduce labeling requirements by transferring models between structures informed by an understanding of physical effects.
This document summarizes a study on using data mining techniques like multiple linear regression and density-based clustering to estimate crop production in East Godavari district of India. Multiple linear regression and density-based clustering were used to model the relationship between crop production and factors like rainfall, area sown, fertilizer use. The estimated values from both techniques were found to have a percentage difference ranging from -14% to 13% when compared to actual production values, indicating the techniques can adequately estimate crop production. Tables of actual versus estimated values using both techniques are provided for comparison.
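The regression half of the study can be sketched as ordinary least squares on the named predictors, followed by the percentage-difference comparison between actual and estimated production. All values below are simulated; the East Godavari figures are not reproduced.

```python
# Illustrative only: synthetic stand-ins for rainfall, area sown and fertilizer use.
import numpy as np

rng = np.random.default_rng(1)
n = 50
rainfall = rng.uniform(800, 1200, n)
area = rng.uniform(10, 100, n)
fertilizer = rng.uniform(1, 5, n)
production = 0.8 * area + 0.01 * rainfall + 2.0 * fertilizer + rng.normal(0, 2, n)

# Ordinary least squares via a design matrix with an intercept column
X = np.column_stack([np.ones(n), rainfall, area, fertilizer])
beta, *_ = np.linalg.lstsq(X, production, rcond=None)
estimated = X @ beta

# Percentage difference between actual and estimated production,
# the comparison tabulated in the study
pct_diff = 100 * (production - estimated) / production
```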
Reduced Order Models for Decision Analysis and Upscaling of Aquifer Heterogeneity (Velimir (monty) Vesselinov)
Vesselinov, V.V., O'Malley, D., Alexandrov, B., Moore, B., Reduced Order Models for Decision Analysis and Upscaling of Aquifer Heterogeneity, AGU Fall Meeting, San Francisco, CA, 2016, (invited).
This document describes a study that used supervised machine learning techniques to predict river water quality parameters. Five machine learning models were applied to a river water quality dataset to predict four parameters: dissolved sodium, dissolved nitrate, gran alkalinity, and electrical conductivity. The best performing algorithm was found to be the decision tree model, which predicted all parameters with 87-98% accuracy. The results of this study could help support inexpensive and fast monitoring of river water quality to improve existing testing systems.
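The modelling step can be sketched with a decision tree regressor on simulated measurements (the river dataset itself is not public here); one target, standing in for dissolved nitrate, is predicted from a few hypothetical covariates.

```python
# Minimal sketch of decision-tree prediction of a water-quality parameter.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 500
ph = rng.uniform(6.0, 8.5, n)
temp = rng.uniform(5.0, 25.0, n)
flow = rng.uniform(1.0, 50.0, n)
nitrate = 0.4 * ph + 0.1 * temp + 0.02 * flow + rng.normal(0, 0.05, n)

X = np.column_stack([ph, temp, flow])
X_tr, X_te, y_tr, y_te = train_test_split(X, nitrate, random_state=0)

model = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)   # coefficient of determination on held-out data
```

The study's four targets would each get a model like this, scored on held-out data.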
May 2015 talk to SW Data Meetup by Professor Hendrik Blockeel from KU Leuven & Leiden University.
With increasing amounts of ever more complex forms of digital data becoming available, the methods for analyzing these data have also become more diverse and sophisticated. With this comes an increased risk of incorrect use of these methods, and a greater burden on the user to be knowledgeable about their assumptions. In addition, the user needs to know about a wide variety of methods to be able to apply the most suitable one to a particular problem. This combination of broad and deep knowledge is not sustainable.
The idea behind declarative data analysis is that the burden of choosing the right statistical methodology for answering a research question should no longer lie with the user, but with the system. The user should be able to simply describe the problem, formulate a question, and let the system take it from there. To achieve this, we need to find answers to questions such as: what languages are suitable for formulating these questions, and what execution mechanisms can we develop for them? In this talk, I will discuss recent and ongoing research in this direction. The talk will touch upon query languages for data mining and for statistical inference, declarative modeling for data mining, meta-learning, and constraint-based data mining. What connects these research threads is that they all strive to put intelligence about data analysis into the system, instead of assuming it resides in the user.
Hendrik Blockeel is a professor of computer science at KU Leuven, Belgium, and part-time associate professor at Leiden University, The Netherlands. His research interests lie mostly in machine learning and data mining. He has made a variety of research contributions in these fields, including work on decision tree learning, inductive logic programming, predictive clustering, probabilistic-logical models, inductive databases, constraint-based data mining, and declarative data analysis. He is an action editor for Machine Learning and serves on the editorial board of several other journals. He has chaired or organized multiple conferences, workshops, and summer schools, including ILP, ECMLPKDD, IDA and ACAI, and he has been vice-chair, area chair, or senior PC member for ECAI, IJCAI, ICML, KDD, ICDM. He was a member of the board of the European Coordinating Committee for Artificial Intelligence from 2004 to 2010, and currently serves as publications chair for the ECMLPKDD steering committee.
Fall Detection System for the Elderly based on the Classification of Shimmer ... (Moiz Ahmed)
The purpose of this research was to use a body sensor network to analyze falls in the elderly. Real-time data from the Shimmer device can be analyzed to detect certain activities of daily living as well as certain types of falls.
For more information read the publication:
http://pdf.medrang.co.kr/Hir/2017/023/Hir023-03-03.pdf
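A common baseline for this kind of accelerometer-based fall detection, not the paper's classifier, is a threshold on total acceleration magnitude: a fall produces an impact spike well above the ~1 g of normal activity. A minimal sketch:

```python
# Threshold baseline for fall detection from tri-axial accelerometer windows.
import numpy as np

def detect_fall(ax, ay, az, impact_g=2.5):
    """Flag a window as a fall if acceleration magnitude exceeds impact_g."""
    magnitude = np.sqrt(np.asarray(ax)**2 + np.asarray(ay)**2 + np.asarray(az)**2)
    return bool(magnitude.max() > impact_g)

# Simulated windows: quiet standing (~1 g) vs. a window with an impact spike
standing = detect_fall([0.0] * 50, [0.0] * 50, [1.0] * 50)
fall = detect_fall([0.0] * 50, [0.0] * 50, [1.0] * 49 + [3.5])
```

A real system like the one in the paper classifies richer features (free-fall dip, posture after impact) rather than a single threshold.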
Decision Support System to Manage Critical Civil Infrastructure Systems for D... (Mike Mahaffie)
The document discusses a spatial decision support system (SDSS) and critical infrastructure resilience decision support system (CIR-DSS) created to help decision makers improve the resilience of critical infrastructure to flooding in Delaware. The SDSS integrates GIS and HAZUS-MH analyses to assess infrastructure vulnerability and risks. A case study on 2006 Delaware flooding is analyzed using the SDSS and other tools. While the SDSS provided useful spatial data and analyses, limitations in HAZUS-MH modeling of transportation infrastructure were identified. The SDSS results could be integrated with other systems to further decision making. Scenarios assessing mitigation strategies found investment in mitigation reduced long-term costs from future flood events.
ExUM - Invited Talk on Nudging in RecSys (Alain Starke)
I present work on using explanatory nudges to support 'better' decision-making in recommender systems. I aim to help people to achieve their behavioral goals by providing relevant options in the short-term that are clearly explained to them.
Users, Applications and the Community of Practice for the Air Quality Scenario (Rudolf Husar)
The document discusses the GEOSS (Global Earth Observation System of Systems) architecture for the air quality community. It proposes an architecture where air quality services could register with the GEOSS registry and be discovered and invoked by users. This would allow data analysts to compose and visualize air quality data workflows to inform decision makers. It also discusses establishing an air quality community of practice to facilitate collaboration.
2008-05-05 GEOSS UIC-ADC AQ Scenario Workshop, Toronto (Rudolf Husar)
The document discusses the GEOSS (Global Earth Observation System of Systems) architecture for the air quality community. It proposes an architecture where air quality services register with the GEOSS registry and are discoverable through the GEOSS clearinghouse. This would allow users to find, select, and link to relevant air quality services. The architecture envisions community air quality catalogs that aggregate catalog listings and allow users to access data and models through composed workflows.
Metabolomics and Beyond: Challenges and Strategies for Next-gen Omic Analyses (Dmitry Grapov)
Dr. Dmitry Grapov gave a webinar on challenges and strategies for next-generation omics analyses. He discussed how large, longitudinal studies integrating multiple omics domains are needed to identify small biological effects. Data normalization strategies must be considered during experimental design to remove analytical batch effects. Quality control-based normalization using analytical replicates can estimate and remove analytical variance from large datasets. Integrating multiple measurement platforms is often required to identify systems of biological changes. Network-based analysis of omics data can help explain more phenotypic variance than single omics approaches alone. Dr. Grapov demonstrated software tools he developed for network analysis, visualization, and integration of multi-omics datasets.
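The QC-based normalization he describes can be sketched in a few lines: scale every sample in an analytical batch by a summary (here the median) of that batch's QC replicates, which removes the between-batch offset. The intensities and batch layout below are synthetic assumptions.

```python
# Sketch of QC-based batch normalization on synthetic metabolite intensities.
import numpy as np

rng = np.random.default_rng(0)

# Two analytical batches of one metabolite; batch 2 reads systematically high
batch1 = rng.normal(100.0, 5.0, 20)
batch2 = rng.normal(150.0, 5.0, 20)
qc1, qc2 = batch1[:5], batch2[:5]      # pooled-QC replicates per batch (assumed)

norm1 = batch1 / np.median(qc1)
norm2 = batch2 / np.median(qc2)

# The batch effect largely disappears after normalization
gap_before = abs(batch1.mean() - batch2.mean())
gap_after = abs(norm1.mean() - norm2.mean())
```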
IJERA (International Journal of Engineering Research and Applications) is an international, online, ... peer-reviewed journal. For more details or to submit an article, visit www.ijera.com
This document summarizes a study that used hyperspectral imagery to classify land cover in the campus of the University Putra Malaysia (UPM) using support vector machine (SVM) and maximum likelihood classification algorithms. The researcher classified the imagery into 9 land cover classes and found that SVM produced a higher overall classification accuracy of 98.23% compared to 90.48% for maximum likelihood classification. The study demonstrated that SVM is better suited than maximum likelihood for classifying hyperspectral data.
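The comparison can be sketched with scikit-learn on synthetic "pixels"; maximum-likelihood classification is approximated here by quadratic discriminant analysis (a per-class Gaussian model), and none of the UPM imagery is reproduced.

```python
# Sketch: SVM vs. Gaussian maximum-likelihood-style classification.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# 40 "bands" and 9 "land-cover classes" to mimic hyperspectral pixels
X, y = make_classification(n_samples=1800, n_features=40, n_informative=20,
                           n_classes=9, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm_acc = SVC(kernel="rbf").fit(X_tr, y_tr).score(X_te, y_te)
mlc_acc = QuadraticDiscriminantAnalysis().fit(X_tr, y_tr).score(X_te, y_te)
```

Which method wins depends on the data; the study's 98.23% vs. 90.48% result is specific to its imagery.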
Visualising Multi-objective Data: From League Tables to Optimisers, and back (djw213)
An Applied Mathematics seminar given at the University of Plymouth on 8th March 2017, discussing approaches for visualising performance data.
Papers describing the work are available here:
http://ieeexplore.ieee.org/abstract/document/5586078/
http://ieeexplore.ieee.org/abstract/document/6342906/
http://www.sciencedirect.com/science/article/pii/S004313541500202X
http://dl.acm.org/citation.cfm?id=2330853
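The structure underneath these visualisations is Pareto dominance: a point in a league table is interesting when no other point beats it on every objective. A minimal sketch for two objectives, both minimised:

```python
# Identify non-dominated (Pareto-optimal) rows; all objectives are minimised.
import numpy as np

def pareto_front(points):
    """Return a boolean mask marking the non-dominated rows."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(pts[j] <= pts[i]) and np.any(pts[j] < pts[i]):
                mask[i] = False   # point j dominates point i
                break
    return mask

points = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
front = pareto_front(points)   # (3, 3) is dominated by (2, 2)
```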
This document discusses different approaches to multivariate data analysis and clustering, including nearest neighbor methods, hierarchical clustering, and k-means clustering. It provides examples of using Ward's method, average linkage, and k-means clustering on poverty data to identify potential clusters of countries based on variables like birth rate, death rate, and infant mortality rate. Key lessons are that different linkage methods, distance measures, and data normalizations should be tested and that higher-dimensional data may require different variable spaces or transformations to identify meaningful clusters.
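The workflow above can be sketched end to end: standardize, then compare hierarchical linkages and k-means. The country indicators below are synthetic stand-ins for the poverty data (birth rate, death rate, infant mortality), built with two obvious groups so the clusterings agree.

```python
# Sketch: Ward vs. average linkage vs. k-means on standardized indicators.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
group_a = rng.normal([15.0, 8.0, 6.0], 1.0, size=(20, 3))    # low-mortality
group_b = rng.normal([40.0, 14.0, 90.0], 1.0, size=(20, 3))  # high-mortality
X = StandardScaler().fit_transform(np.vstack([group_a, group_b]))

# Two linkage choices worth comparing, as the lesson suggests
ward_labels = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")
avg_labels = fcluster(linkage(X, method="average"), t=2, criterion="maxclust")

km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

On real, messier data the methods can disagree, which is exactly why the document recommends testing several linkages, distances, and normalizations.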
This document discusses challenges and case studies in applying predictive analytics to big data. It begins by outlining several application areas that could benefit from big data analytics, such as healthcare, marketing, finance, and transportation. It then discusses challenges like data complexity, privacy concerns, and integrating domain knowledge with data-driven methods. Several case studies are presented applying predictive modeling techniques to healthcare datasets to predict outcomes like mortality. The document advocates moving from predictive to prescriptive decision making for higher financial and human benefits.
This document discusses data mining applications and trends. It covers topics like mining complex data types, other data mining methodologies, and various applications of data mining. Some key applications discussed include using data mining in finance, retail, telecommunications, science/engineering, intrusion detection, and recommender systems. The document also touches on topics like visual data mining, ubiquitous and invisible data mining, and the privacy and social impacts of data mining.
The document discusses a project called Space Evaders that aims to prevent collisions between spacecraft and debris in space. Their team is developing methods to analyze data and find ways to prevent debris-causing collisions, which could eventually make major orbital regions unusable. If collisions and debris continue to increase unchecked, it could lead to a dangerous proliferation of collisions known as a Kessler cascade. The team's goal is to use data-driven approaches to help avoid this scenario and keep space accessible for future use.
Physics-Informed Machine Learning Methods for Data Analytics and Model Diagno... (Velimir (monty) Vesselinov)
This document summarizes research on physics-informed machine learning methods for data and model analysis. Key points include:
1) The methods couple data and model analytics to extract common hidden features using techniques like nonnegative tensor factorization.
2) Physics constraints are incorporated to identify important processes in datasets and model outputs.
3) The methods have been applied to analyze climate model outputs from Europe to identify dominant patterns in air temperature and water tables over time.
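The core mechanism in point 1 can be sketched with plain nonnegative matrix factorization (the slides' tensor factorization generalises this to higher-order arrays): Lee-Seung multiplicative updates factor a nonnegative observation matrix into two nonnegative factors whose product approximates it. The data below are synthetic, not climate model output.

```python
# Lee-Seung multiplicative updates for V ≈ W H with W, H >= 0.
import numpy as np

rng = np.random.default_rng(0)

# Build a rank-2 nonnegative "observation matrix" plus small noise
W_true = rng.uniform(0, 1, (30, 2))
H_true = rng.uniform(0, 1, (2, 40))
V = W_true @ H_true + 0.01 * rng.uniform(0, 1, (30, 40))

k = 2
W = rng.uniform(0.1, 1, (30, k))
H = rng.uniform(0.1, 1, (k, 40))
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update H, keeping it nonnegative
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update W, keeping it nonnegative

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The rows of `H` play the role of the "hidden features"; physics constraints, as in the slides, would be added as extra conditions on the factors.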
Velimir V Vesselinov (monty) 2019
Unsupervised machine learning methods for feature extraction
New Mexico Big Data & Analytics Summit, http://nmbdas.com, Albuquerque, February 2019.
http://tensors.lanl.gov
LA-UR-19-21450
Similar to Model Analyses of Complex Systems Behavior using MADS
Physics-Informed Machine Learning Methods for Data Analytics and Model Diagno... - Velimir (monty) Vesselinov
This document summarizes research on physics-informed machine learning methods for data and model analysis. Key points include:
1) The methods couple data and model analytics to extract common hidden features using techniques like nonnegative tensor factorization.
2) Physics constraints are incorporated to identify important processes in datasets and model outputs.
3) The methods have been applied to analyze climate model outputs from Europe to identify dominant patterns in air temperature and water tables over time.
Velimir V Vesselinov (monty) 2019
Unsupervised machine learning methods for feature extraction
New Mexico Big Data & Analytics Summit, http://nmbdas.com, Albuquerque, February 2019.
http://tensors.lanl.gov
LA-UR-19-21450
Novel Machine Learning Methods for Extraction of Features Characterizing Comp... - Velimir (monty) Vesselinov
1) Unsupervised machine learning methods like non-negative matrix factorization (NMF) and non-negative tensor factorization (NTF) are used to extract hidden features from datasets without prior information or training.
2) NMF/NTF decompose datasets into core tensors and factor matrices to identify dominant patterns and compress large datasets for analysis.
3) The document provides an example of using NTF to analyze simulations of reactive mixing, extracting the main time/space features representing physical processes from over 200GB of model output data.
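NMF itself, the matrix special case of the tensor factorization described above, is compact enough to sketch. This is a generic Lee-Seung multiplicative-update implementation, not the authors' code; the rank, matrix sizes, and data are illustrative.

```python
import numpy as np

def nmf(V, r, iters=1000, seed=0, eps=1e-9):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F,
    keeping both factors non-negative throughout."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 1e-3
    H = rng.random((r, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# synthetic non-negative data with a known rank-2 structure
rng = np.random.default_rng(1)
Wt, Ht = rng.random((30, 2)), rng.random((2, 20))
V = Wt @ Ht
W, H = nmf(V, 2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the synthetic data are exactly rank 2, the relative reconstruction error drops to near zero, which is the sense in which the factorization "compresses" the dataset into a few hidden features.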
Novel Machine Learning Methods for Extraction of Features Characterizing Data... - Velimir (monty) Vesselinov
Vesselinov, V.V., Novel Machine Learning Methods for Extraction of Features Characterizing Datasets and Models, AGU Fall meeting, Washington D.C., 2018.
Data and Model-Driven Decision Support for Environmental Management of a Chro... - Velimir (monty) Vesselinov
Vesselinov, V.V., Katzman, D., Broxton, D., Birdsell, K., Reneau, S., Vaniman, D., Longmire, P., Fabryka-Martin, J., Heikoop, J., Ding, M., Hickmott, D., Jacobs, E., Goering, T., Harp, D., Mishra, P., Data and Model-Driven Decision Support for Environmental Management of a Chromium Plume at Los Alamos National Laboratory (LANL), Waste Management Symposium 2013, Session 109: ER Challenges: Alternative Approaches for Achieving End State, Phoenix, AZ, February 28, 2013.
Environmental Management Modeling Activities at Los Alamos National Laborator... - Velimir (monty) Vesselinov
Vesselinov, V.V., et al., Environmental Management Modeling Activities at Los Alamos National Laboratory (LANL), Department of Energy Technical Exchange Meeting, Performance Assessment Community of Practice, Hanford, April 13-14, 2010.
AGNI: Coupling Model Analysis Tools and High-Performance Subsurface Flow and T... - Velimir (monty) Vesselinov
Vesselinov, V.V., et al., AGNI: Coupling Model Analysis Tools and High-Performance Subsurface Flow and Transport Simulators for Risk and Performance Assessments, XIX International Conference on Computational Methods in Water Resources (CMWR 2012), University of Illinois at Urbana-Champaign, June 17-22, 2012.
Tomographic inverse estimation of aquifer properties based on pressure varia... - Velimir (monty) Vesselinov
Vesselinov, V.V., Harp, D., Koch, R., Birdsell, K., Katzman, K., Tomographic inverse estimation of aquifer properties based on pressure variations caused by transient water-supply pumping, AGU Meeting, San Francisco, CA, December 15-19, 2008.
Vesselinov, V.V., Uncertainties in Transient Capture-Zone Estimates, CMWR 2006 XVI International Conference on Computational Methods in Water Resources, Copenhagen, Denmark, 18-22 June 2006.
Model-driven decision support for monitoring network design based on analysis... - Velimir (monty) Vesselinov
Vesselinov, V.V., Harp, D., Katzman, D., Model-driven decision support for monitoring network design based on analysis of data and model uncertainties: methods and applications, H32F: Uncertainty Quantification and Parameter Estimation: Impacts on Risk and Decision Making, AGU Fall meeting, San Francisco, December 3-7, 2012, LA-UR-13-20189, (invited).
Decision Analyses Related to a Chromium Plume at Los Alamos National Laboratory - Velimir (monty) Vesselinov
Vesselinov, V.V., O'Malley, D., Katzman, D., Model-Assisted Decision Analyses Related to a Chromium Plume at Los Alamos National Laboratory, Waste Management Symposium, Phoenix, AZ, 2015.
Decision Support for Environmental Management of a Chromium Plume at Los Alam... - Velimir (monty) Vesselinov
Vesselinov, V.V., Katzman, D., Broxton, D., Birdsell, K., Reneau, S., Vaniman, D., Longmire, P., Fabryka-Martin, J., Heikoop, J., Ding, M., Hickmott, D., Jacobs, E., Goering, T., Harp, D., Mishra, P., Data and Model-Driven Decision Support for Environmental Management of a Chromium Plume at Los Alamos National Laboratory (LANL), Waste Management Symposium 2013, Session 109: ER Challenges: Alternative Approaches for Achieving End State, Phoenix, AZ, February 28, 2013.
ZEM: Integrated Framework for Real-Time Data and Model Analyses for Robust En... - Velimir (monty) Vesselinov
Vesselinov, V.V., O'Malley, D., Katzman, D., ZEM: Integrated Framework for Real-Time Data and Model Analyses for Robust Environmental Management Decision Making, Waste Management Symposium, Phoenix, AZ, 2016.
Vesselinov, V.V., O'Malley, D., Katzman, D., Decision Analyses for Groundwater Remediation, Waste Management Symposium, Phoenix, AZ, 2017.
LA-UR-17-21909
Vesselinov 2018 Novel machine learning methods for extraction of features cha... - Velimir (monty) Vesselinov
Vesselinov, V.V., Novel Machine Learning Methods for Extraction of Features Characterizing Complex Datasets and Models, Recent Advances in Machine Learning and Computational Methods for Geoscience, Institute for Mathematics and its Applications, University of Minnesota, 2018.
LA-UR-18-30987
We are pleased to share with you the latest VCOSA statistical report on the cotton and yarn industry for the month of May 2024.
Starting from January 2024, the full weekly and monthly reports will only be available for free to VCOSA members. To access the complete weekly report with figures, charts, and detailed analysis of the cotton fiber market in the past week, interested parties are kindly requested to contact VCOSA to subscribe to the newsletter.
Build applications with generative AI on Google Cloud - Márton Kodok
We will explore Vertex AI Model Garden powered experiences and learn more about the integration of these generative AI APIs. We will see in action what the Gemini family of generative models offers developers for building and deploying AI-driven applications. Vertex AI includes a suite of foundation models, referred to as the PaLM and Gemini families of generative AI models, which come in different versions. We will cover how to use the API to: execute prompts in text and chat; cover multimodal use cases with image prompts; fine-tune and distill models to improve knowledge domains; and run function calls with foundation models to optimize them for specific tasks. At the end of the session, developers will understand how to innovate with generative AI and develop apps that follow current generative AI industry trends.
06-20-2024-AI Camp Meetup-Unstructured Data and Vector Databases - Timothy Spann
Tech Talk: Unstructured Data and Vector Databases
Speaker: Tim Spann (Zilliz)
Abstract: In this session, I will discuss unstructured data and the world of vector databases, and we will see how they differ from traditional databases, in which cases you need one, and in which you probably don't. I will also go over similarity search, where you get vectors from, and an example of a vector database architecture, wrapping up with an overview of Milvus.
Introduction
Unstructured data, vector databases, traditional databases, similarity search
Vectors
Where, What, How, Why Vectors? We’ll cover a Vector Database Architecture
Introducing Milvus
What drives Milvus' Emergence as the most widely adopted vector database
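The core operation behind similarity search is simple to sketch without any database: brute-force cosine similarity over stored embeddings. A vector database like Milvus adds indexing, scaling, and filtering on top of this; the embedding dimension and data below are made up.

```python
import numpy as np

def top_k(query, vectors, k=3):
    """Brute-force cosine-similarity search: the operation a vector
    database accelerates with approximate indexes at scale."""
    vn = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    qn = query / np.linalg.norm(query)
    sims = vn @ qn                       # cosine similarity to every vector
    idx = np.argsort(-sims)[:k]          # indices of the k best matches
    return idx, sims[idx]

rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 64))          # stand-in embeddings
q = db[42] + 0.01 * rng.normal(size=64)   # query slightly perturbed from item 42
idx, sims = top_k(q, db, k=3)
```

The query is a lightly perturbed copy of one stored vector, so that vector comes back as the top hit with similarity close to 1.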
Hi Unstructured Data Friends!
I hope this video had all the unstructured data processing, AI, and vector database demos you needed for now. If not, there's a ton more linked below.
My source code is available here
https://github.com/tspannhw/
Let me know in the comments if you liked what you saw, how I can improve, and what I should show next. Thanks, hope to see you soon at a Meetup in Princeton, Philadelphia, New York City, or here in the YouTube Matrix.
Get Milvused!
https://milvus.io/
Read my Newsletter every week!
https://github.com/tspannhw/FLiPStackWeekly/blob/main/141-10June2024.md
For more cool Unstructured Data, AI and Vector Database videos check out the Milvus vector database videos here
https://www.youtube.com/@MilvusVectorDatabase/videos
Unstructured Data Meetups -
https://www.meetup.com/unstructured-data-meetup-new-york/
https://lu.ma/calendar/manage/cal-VNT79trvj0jS8S7
https://www.meetup.com/pro/unstructureddata/
https://zilliz.com/community/unstructured-data-meetup
https://zilliz.com/event
Twitter/X: https://x.com/milvusio https://x.com/paasdev
LinkedIn: https://www.linkedin.com/company/zilliz/ https://www.linkedin.com/in/timothyspann/
GitHub: https://github.com/milvus-io/milvus https://github.com/tspannhw
Invitation to join Discord: https://discord.com/invite/FjCMmaJng6
Blogs: https://milvusio.medium.com/ https://www.opensourcevectordb.cloud/ https://medium.com/@tspann
https://www.meetup.com/unstructured-data-meetup-new-york/events/301383476/?slug=unstructured-data-meetup-new-york&eventId=301383476
https://www.aicamp.ai/event/eventdetails/W2024062014
Open Source Contributions to Postgres: The Basics POSETTE 2024 - ElizabethGarrettChri
Postgres is the most advanced open-source database in the world, and it's supported by a community, not a single company. So how does this work? How does code actually get into Postgres? I recently submitted a patch that got committed, and I want to share what I learned in that process. I'll give you an overview of Postgres versions and how the underlying project codebase functions. I'll also show you the process for submitting a patch and getting it tested and committed.
Discovering Digital Process Twins for What-if Analysis: a Process Mining Appr... - Marlon Dumas
This webinar discusses the limitations of traditional approaches to business process simulation based on hand-crafted models with restrictive assumptions. It shows how process mining techniques can be combined to discover high-fidelity digital twins of end-to-end processes from event data.
Model Analyses of Complex Systems Behavior using MADS
1. Model Analyses of Complex Systems Behavior using
MADS
Velimir V. Vesselinov vvv@lanl.gov
Daniel O’Malley omalled@lanl.gov
Computational Earth Science, Los Alamos National Laboratory, USA
AGU Fall meeting, December, 2016
Unclassified: LA-UR-16-29120
Data-Models-Decisions MADS MADS applications Highlights
3. Our work informs important decisions
Climate Science: Should we cap carbon emissions or not?
Meteorology: Should we evacuate a city due to a hurricane?
Geology: How much should we bid on a fossil fuel play?
Seismology: Should we inject fluids underground (and how do we do it without causing earthquakes and contamination)?
Hydrogeology: How do we provide a clean water supply?
Hydrogeology: Which remediation option will clean up the groundwater?
We rely on data & models to make scientifically defensible decisions
5. How should we support these decisions?
Model → Decision
Build a “representative” model
Use the “representative” model to make a decision
However:
many real-world models cannot be validated (especially in the earth sciences)
data can be highly uncertain
conceptualization can be highly uncertain
model predictions can be highly uncertain
6. How should we support these decisions?
Data → Model → Decision
Use data to calibrate the model
Use the model to make a decision
7. How should we support these decisions?
Data → Model → UQ → Decision
Quantify uncertainty in the data and model
Use estimated uncertainties in model predictions to make a decision
8. How should we support these decisions?
Data → Model → UQ → Decision
Quantify uncertainty in the data and model
Estimate uncertainties in model predictions
Collect data that reduces the prediction uncertainties (optimal experimental design)
Use the new data to quantify uncertainty again
Use the updated uncertainties to make a decision (are we done?)
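The quantify-collect-requantify loop described on this slide can be illustrated with the simplest possible case, a conjugate Gaussian update, where each new observation provably shrinks the posterior variance. This is a toy stand-in for the workflow, not the MADS implementation; the parameter, noise level, and prior are invented for the example.

```python
import numpy as np

def update(mu, var, y, noise_var):
    """Posterior of theta ~ N(mu, var) after observing y = theta + noise,
    with Gaussian noise of variance noise_var (conjugate update)."""
    w = var / (var + noise_var)
    return mu + w * (y - mu), (1 - w) * var

rng = np.random.default_rng(0)
theta_true, noise_var = 2.0, 0.5
mu, var = 0.0, 10.0            # vague prior on the unknown parameter
history = [var]
for _ in range(5):             # "collect data ... quantify uncertainty again"
    y = theta_true + rng.normal(scale=noise_var ** 0.5)
    mu, var = update(mu, var, y, noise_var)
    history.append(var)        # uncertainty shrinks with every observation
```

The recorded variance drops monotonically, which is the quantitative answer to the slide's "are we done?": stop collecting data once the remaining uncertainty no longer changes the decision.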
9. How should we support these decisions?
Data → Model → Decision Analysis
Perform decision analysis coupling UQ with the decision process to evaluate uncertainty in decisions (not uncertainty in model parameters/predictions)
Use the decision analysis to guide collection of data that can influence a better decision (it may not always be feasible)
Use the newly collected data to make a better decision
10. Decision Analysis Methodologies and Tools
We need robust and versatile decision analysis methodologies and tools
Recently, we have developed a series of novel methods and techniques for data- and model-based decision analyses
Most of them are implemented in MADS
Model Analysis & Decision Support
11. MADS: Model Analysis and Decision Support
MADS is a high-performance computational framework
MADS performs a wide range of data- & model-based analyses, including
Sensitivity Analysis
Parameter Estimation, Model Inversion/Calibration
Uncertainty Quantification
Machine Learning Methods
Reduced Order Modeling (ROM)
Optimal Experimental Design (OED)
Decision Analysis
MADS is open source code (GPL3) written in Julia
Julia is a high-level, dynamic programming language for technical computing
Julia has C speed but with MatLab/Python flexibility
Julia provides access to a vast number of mathematical, statistical, and visualization packages
12. MADS: Model Analysis and Decision Support
MADS can be applied to perform analyses using any existing physics simulator
MADS provides tools for model development, integration, and coupling
MADS utilizes advanced code development tools for
version control (git)
https://github.com/madsjulia
continuous integration (Travis-CI)
https://travis-ci.org/madsjulia/Mads.jl
tracking code test coverage (Coveralls)
https://coveralls.io/github/madsjulia/Mads.jl
MADS contributors and developers are welcome
MADS examples, manuals and publications are available at:
https://mads.lanl.gov
https://madsjulia.github.io/Mads.jl
https://mads.readthedocs.io
13. Advanced and novel methods implemented in MADS
Information-Gap Decision Theory (IGDT)
O’Malley, D., Vesselinov, V.V., Groundwater remediation using the information gap decision theory, Water Resources Research, doi:10.1002/2013WR014718, 2014.
Harp, D.R., Vesselinov, V.V., Contaminant remediation decision analysis using information gap theory, Stochastic Environmental Research and Risk Assessment (SERRA), doi:10.1007/s00477-012-0573-1, 2012.
Bayesian-Information-Gap Decision Theory (BIG-DT)
O’Malley, Vesselinov: Groundwater Remediation using Bayesian Information-Gap Decision Theory (West 3024, Thursday, 17:00-17:15, H44E-05)
Grasinger, M., O’Malley, D., Vesselinov, V.V., Karra, S., Decision Analysis for Robust CO2 Injection: Application of Bayesian-Information-Gap Decision Theory, International Journal of Greenhouse Gas Control, doi:10.1016/j.ijggc.2016.02.017, 2016.
O’Malley, D., Vesselinov, V.V., Bayesian-Information-Gap decision theory with an application to CO2 sequestration, Water Resources Research, doi:10.1002/2015WR017413, 2015.
O’Malley, D., Vesselinov, V.V., A combined probabilistic/non-probabilistic decision analysis for contaminant remediation, Journal on Uncertainty Quantification, SIAM/ASA, doi:10.1137/140965132, 2014.
Optimal Experimental Design (OED) driven by decision analysis
O’Malley, D., Vesselinov, V.V., (in preparation).
Measure-theoretic Uncertainty Quantification (UQ)
Dawson, Butler, Mattis, Westerink, Vesselinov, Estep: Parameter Estimation for Geoscience Applications Using a Measure-Theoretic Approach (West 3024, Thursday, 17:30-17:45, H44E-07)
Mattis, S.A., Butler, T.D., Dawson, C.N., Estep, D., Vesselinov, V.V., Parameter estimation and prediction for groundwater contamination based on measure theory, Water Resources Research, doi:10.1002/2015WR017295, 2015.
Novel Levenberg-Marquardt (LM) optimization method using a dimensionality reduction based on the Krylov subspace method
Lin, Y., O’Malley, D., Vesselinov, V.V., A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses, Water Resources Research, doi:10.1002/2016WR019028, 2016.
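For readers unfamiliar with Levenberg-Marquardt, here is a textbook, serial version applied to a toy exponential-decay inverse problem. It is not the Krylov-subspace-reduced parallel algorithm of Lin et al.; only the damped Gauss-Newton core that all LM variants share is sketched, and the model, data, and damping schedule are invented for the example.

```python
import numpy as np

def levenberg_marquardt(f, jac, p0, x, y, iters=50, lam=1e-2):
    """Textbook LM: damped Gauss-Newton steps on the residual y - f(x, p)."""
    p = np.asarray(p0, float)
    for _ in range(iters):
        r = y - f(x, p)
        J = jac(x, p)
        # damped normal equations: (J^T J + lam I) step = J^T r
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), J.T @ r)
        if np.sum((y - f(x, p + step)) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5   # accept step, trust the model more
        else:
            lam *= 2.0                     # reject step, increase damping
    return p

# toy inverse problem: recover (a, b) in y = a * exp(-b * x)
f = lambda x, p: p[0] * np.exp(-p[1] * x)
jac = lambda x, p: np.column_stack([np.exp(-p[1] * x),
                                    -p[0] * x * np.exp(-p[1] * x)])
x = np.linspace(0.0, 4.0, 30)
y = f(x, [2.5, 1.3])                      # noiseless synthetic observations
p = levenberg_marquardt(f, jac, [1.0, 1.0], x, y)
```

On this noiseless two-parameter problem the iteration recovers (2.5, 1.3) from the starting guess (1.0, 1.0); the referenced paper's contribution is making the linear-algebra step scale to hundreds of parameters.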
14. Advanced and novel methods implemented in MADS
Model inversion using modified Total-Variation (TV) regularization
Lin, O’Malley, Vesselinov: Hydraulic Inverse Modeling with Modified Total-Variation Regularization with Relaxed Variable-Splitting (poster, Thursday, 8:00-12:00, H41B-1301)
Model inversion using the Principal Component Geostatistical Approach (PCGA) and the Randomized Geostatistical Approach (RGA)
Lin, Y., Le, E.B., O’Malley, D., Vesselinov, V.V., Bui-Thanh, T., Large-Scale Inverse Model Analyses Employing Fast Randomized Data Reduction, 2016, (submitted).
Blind Source Separation (BSS) using Non-negative Matrix Factorization (NMF)
Vesselinov, V.V., O’Malley, D., Alexandrov, B.S., Source identification of groundwater contamination sources and groundwater types using semi-supervised machine learning, (in preparation).
Iliev, F.L., Stanev, V.G., Vesselinov, V.V., Alexandrov, B.S., Sources identification using shifted non-negative matrix factorization combined with semi-supervised clustering, 2016, (submitted).
Stanev, V.G., Iliev, F.L., Vesselinov, V.V., Alexandrov, B.S., Machine learning approach for identification of release sources in advection-diffusion systems, 2016, (submitted).
Alexandrov, B., Vesselinov, V.V., Blind source separation for groundwater level analysis based on non-negative matrix factorization, Water Resources Research, doi:10.1002/2013WR015037, 2014.
Support Vector Regression (SVR) methods for surrogate modeling
Alexandrov, B.S., O’Malley, D., Vesselinov, V.V., (in preparation).
Vesselinov, O’Malley, Alexandrov, Moore: Reduced Order Models for Decision Analysis and Upscaling of Aquifer Heterogeneity (South 302, Monday, 8:45-9:00, NG11A-04)
15. Advanced and novel methods implemented in MADS
Advanced Monte Carlo methods: Robust Adaptive Metropolis (RAM) and the Affine Invariant Markov Chain Monte Carlo Ensemble Sampler (aka Emcee)
Vihola: Robust adaptive Metropolis algorithm with coerced acceptance rate, Statistics and Computing, 2012.
Goodman, Weare: Ensemble samplers with affine invariance, Communications in Applied Mathematics and Computational Science, 2010.
Extended Fourier Amplitude Sensitivity Testing (eFAST) global sensitivity analysis
Saltelli, et al., Global Sensitivity Analysis, John Wiley & Sons, 2008.
Multifidelity Global Sensitivity Analysis (MFSA) under a given computational budget
Qian, Peherstorfer, O’Malley, Vesselinov, Willcox: Multifidelity Global Sensitivity Analysis, SIAM, 2016, (submitted).
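As background for the samplers listed above, a plain random-walk Metropolis sampler, the simple ancestor of RAM and Emcee, fits in a few lines. This is a generic illustration, not the MADS implementation; the target distribution (a standard normal) and step size are chosen for the example.

```python
import numpy as np

def metropolis(logp, x0, n, step=0.8, seed=0):
    """Random-walk Metropolis: propose a Gaussian jump, accept with
    probability min(1, p(proposal)/p(current))."""
    rng = np.random.default_rng(seed)
    x, lp = x0, logp(x0)
    out = np.empty(n)
    for i in range(n):
        prop = x + step * rng.normal()
        lpp = logp(prop)
        if np.log(rng.random()) < lpp - lp:   # accept/reject in log space
            x, lp = prop, lpp
        out[i] = x
    return out

# sample a standard normal: logp is the log-density up to a constant
samples = metropolis(lambda t: -0.5 * t * t, 0.0, 20000)
```

After discarding a burn-in, the sample mean and standard deviation approach 0 and 1; RAM adapts the step size automatically and Emcee runs an ensemble of such walkers, which is why they are preferred for the high-dimensional posteriors MADS targets.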
16. MADS applications
Groundwater contaminant remediation (LANL Chromium & RDX)
Mattis, S.A., Butler, T.D., Dawson, C.N., Estep, D., Vesselinov, V.V., Parameter estimation and prediction for groundwater contamination based on measure theory, Water Resources Research, doi:10.1002/2015WR017295, 2015.
Vesselinov, V.V., O’Malley, D., Katzman, D., Model-Assisted Decision Analyses Related to a Chromium Plume at Los Alamos National Laboratory, Waste Management, 2015.
O’Malley, D., Vesselinov, V.V., A combined probabilistic/non-probabilistic decision analysis for contaminant remediation, Journal on Uncertainty Quantification, SIAM/ASA, doi:10.1137/140965132, 2014.
O’Malley, D., Vesselinov, V.V., Analytical solutions for anomalous dispersion transport, Advances in Water Resources, doi:10.1016/j.advwatres.2014.02.006, 2014.
Water/Energy/Food Nexus
Zhang, Vesselinov: Bi-Level Decision Making for Supporting Energy and Water Nexus (West 3016: Wednesday, 09:15-09:30, H31J-06)
Zhang, X., Vesselinov, V.V., Integrated Modeling Approach for Optimal Management of Water, Energy and Food Security Nexus, Advances in Water Resources, 2016, (submitted).
Zhang, X., Vesselinov, V.V., Energy-Water Nexus: Balancing the Tradeoffs between Two-Level Decision Makers, Applied Energy, doi:10.1016/j.apenergy.2016.08.156, 2016.
CO2 injection
Grasinger, M., O’Malley, D., Vesselinov, V.V., Karra, S., Decision Analysis for Robust CO2 Injection: Application of Bayesian-Information-Gap Decision Theory, International Journal of Greenhouse Gas Control, doi:10.1016/j.ijggc.2016.02.017, 2016.
O’Malley, D., Vesselinov, V.V., Bayesian-Information-Gap decision theory with an application to CO2 sequestration, Water Resources Research, doi:10.1002/2015WR017413, 2015.
17. LANL Chromium site
[Figure: model-predicted plume shape (~2012), with 50 ppb and 1000 ppb contours (Cr6+ MCL is 50 ppb); Sandia and Mortandad Canyons; ~300 m vadose zone; single-screen and two-screen aquifer monitoring wells]
Groundwater contamination site with high visibility (DOE)
More than 20 wells drilled since 2007 (each well costs $2-3M)
Limited remedial options
Complex uncertainties/unknowns
Plume is located near the LANL boundary and water-supply wells
Modeling accounts for complex biogeochemical processes in highly heterogeneous media
In the last 5 years, we have accumulated close to 2,000 years of computational time on the LANL HPC clusters for various model analyses
Used up to 4,096 processors simultaneously
18. LANL regional aquifer model
Model domain encompasses the regional aquifer beneath LANL (≈ 8 × 4 × 0.3 km)
766,283 nodes / 4,659,062 cells
193 concentration calibration targets (representing annual transients for about 10 years)
182,090 water-level calibration targets (representing daily transients for about 4 years)
Water-level transients represent pumping effects caused by 9 wells (6 water-supply wells and 3 site wells where pumping tests are conducted)
230 unknown model parameters representing groundwater flow and transport (including aquifer heterogeneity and the spatial location/strength of 3 unknown contaminant sources)
Calibration required about 100 years of computational time
... more data and physics/biogeochemistry need to be incorporated in the model soon!
21. MADS development support
LANL ADEM: Los Alamos National Laboratory Environmental Management Directorate
DiaMonD: An Integrated Multifaceted Approach to Mathematics at the Interfaces of Data, Models, and Decisions
23. Related model and decision analyses presentations at AGU 2016
Vesselinov, O’Malley, Alexandrov, Moore: Reduced Order Models for Decision Analysis and Upscaling of Aquifer Heterogeneity (South 302, Monday, 8:45-9:00, NG11A-04, invited)
Lu, Vesselinov, Lei: Identifying Aquifer Heterogeneities using the Level Set Method (poster, Wednesday, 8:00-12:00, H31F-1462)
Zhang, Vesselinov: Bi-Level Decision Making for Supporting Energy and Water Nexus (West 3016: Wednesday, 09:15-09:30, H31J-06)
Vesselinov, O’Malley: Model Analysis of Complex Systems Behavior using MADS (West 3024: Wednesday, 15:06-15:18, H33Q-08)
Hansen, Vesselinov: Analysis of hydrologic time series reconstruction uncertainty due to inverse model inadequacy using the Laguerre expansion method (West 3024: Wednesday, 16:30-16:45, H34E-03)
Lin, O’Malley, Vesselinov: Hydraulic Inverse Modeling with Modified Total-Variation Regularization with Relaxed Variable-Splitting (poster, Thursday, 8:00-12:00, H41B-1301)
Hansen, Haslauer, Cirpka, Vesselinov: Prediction of Breakthrough Curves for Conservative and Reactive Transport from the Structural Parameters of Highly Heterogeneous Media (West 3014, Thursday, 14:25-14:40, H43N-04)
O’Malley, Vesselinov: Groundwater Remediation using Bayesian Information-Gap Decision Theory (West 3024, Thursday, 17:00-17:15, H44E-05)
Dawson, Butler, Mattis, Westerink, Vesselinov, Estep: Parameter Estimation for Geoscience Applications Using a Measure-Theoretic Approach (West 3024, Thursday, 17:30-17:45, H44E-07)