13 January 2023…
Scientific Machine Learning Community (Presentation): Explainable AI for identifying regional climate change patterns, University of Leeds, UK. Remote Presentation.
As AI becomes increasingly prevalent, the decisions it makes on our behalf have a growing impact on our lives and the lives of others.
How can we help people trust the models we're building? The field of Explainable AI focuses on making any machine learning model interpretable by non-experts.
This document discusses dimensionality reduction techniques for data mining. It begins with an introduction to dimensionality reduction and reasons for using it. These include dealing with high-dimensional data issues like the curse of dimensionality. It then covers major dimensionality reduction techniques of feature selection and feature extraction. Feature selection techniques discussed include search strategies, feature ranking, and evaluation measures. Feature extraction maps data to a lower-dimensional space. The document outlines applications of dimensionality reduction like text mining and gene expression analysis. It concludes with trends in the field.
Explainable AI: Building trustworthy AI models? Raheel Ahmad
Building trustworthy, transparent and unbiased machine learning models?
Get started with explainX that brings state-of-the-art explainability techniques under one roof accessible via one-line of code.
Learn the major modules within the explainX explainable AI and model interpretability framework.
These slides are taken from Raheel's presentation at the UnpackAI's forum on Data Ethics in AI.
Artificial intelligence and machine learning can help analyze large amounts of environmental data to better understand climate change and predict future impacts. AI is used to identify patterns in data from sensors monitoring conditions around the world. This data provides insights into vulnerabilities and helps predict extreme weather events. AI technologies can also optimize renewable energy production and design more energy efficient systems, buildings and consumer products to mitigate climate change. However, training AI models also contributes to carbon emissions which must be addressed.
Adversarial machine learning for AV software, Junseok Seo
Introduces practical guidance for developing an adversarial machine learning model for anti-malware software. I haven't used a reinforcement learning model yet; this is just a proof of concept. If you have any questions about my work, email me :)
nababora@naver.com
Measures and mismeasures of algorithmic fairness, Manojit Nandi
This document discusses various measures and challenges of achieving algorithmic fairness. It begins by defining algorithmic fairness and noting it is inherently a social concept. It then covers three main types of algorithmic biases: bias in allocation, representation, and weaponization. It outlines three families of fairness measures: anti-classification, classification parity, and calibration. It notes each approach has dangers and no single definition of fairness exists. The document concludes by discussing proposed standards for documenting datasets and models to improve algorithmic transparency and accountability.
In this presentation, two different datasets are used to apply the machine learning classification techniques introduced in the Introduction to Data Mining and Machine Learning coursework. Both datasets were chosen by analyzing their outputs and the team members' interests. The datasets are the Electricity Grid Stability simulated dataset and the Olivetti faces dataset for face recognition.
This document discusses generative adversarial networks (GANs) and their applications. It begins with an overview of generative models including variational autoencoders and GANs. GANs use two neural networks, a generator and discriminator, that compete against each other in a game theoretic framework. The generator learns to generate fake samples to fool the discriminator, while the discriminator learns to distinguish real and fake samples. Applications discussed include image-to-image translation using conditional GANs to map images from one domain to another, and text-to-image translation using GANs to generate images from text descriptions.
This document discusses time series forecasting models including autoregressive (AR) models, DeepAR, and LSTNet. It provides the following information:
- Autoregressive (AR) models forecast a variable using its past values in a linear combination. The DeepAR model is based on autoregressive RNNs that learn from multiple time series datasets.
- LSTNet is designed to capture both long-term and short-term patterns in multivariate time series using CNNs, RNNs, and an autoregressive component. It combines the outputs from recurrent and recurrent-skip layers.
- The goal of models like DeepAR and LSTNet is to learn from similar time series in order to generalize without overfitting.
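The autoregressive idea underlying these models can be illustrated in a few lines: an AR(p) model can be fit by ordinary least squares on lagged values. This is a minimal numpy sketch with an invented series, not the DeepAR or LSTNet implementation:

```python
import numpy as np

def fit_ar(series, p):
    """Fit an AR(p) model y_t = c + sum_i phi_i * y_{t-i} by least squares."""
    y = np.asarray(series, dtype=float)
    # Design matrix: each row holds the p values preceding the target.
    X = np.column_stack([y[p - i - 1:len(y) - i - 1] for i in range(p)])
    X = np.column_stack([np.ones(len(X)), X])  # intercept term
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef  # [c, phi_1, ..., phi_p]

def forecast_one(series, coef):
    """One-step-ahead forecast from the fitted coefficients."""
    p = len(coef) - 1
    lags = np.asarray(series[-p:][::-1], dtype=float)
    return coef[0] + coef[1:] @ lags

# Example: a noiseless AR(1) process y_t = 0.8 * y_{t-1}
y = [1.0]
for _ in range(50):
    y.append(0.8 * y[-1])
coef = fit_ar(y, p=1)
```

Deep models like DeepAR replace this fixed linear combination with an RNN conditioned on past values, but the lag-based setup is the same.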
Batch normalization is a technique introduced in 2015 by Google researchers to address issues like internal covariate shift and vanishing gradients. It works by normalizing the inputs to each unit to have zero mean and unit variance based on the statistics of the mini-batch. This helps the network train deeper models with higher learning rates and be less sensitive to initialization. Batch normalization is applied before the activation function of each layer during both training and inference.
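The normalization step described above can be sketched in a few lines of numpy (training-time batch statistics only; the learnable scale gamma and shift beta are shown at their usual defaults):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature of a mini-batch to zero mean and unit variance,
    then apply the learnable scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)  # per-feature mean over the batch
    var = x.var(axis=0)    # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# A mini-batch of 4 samples with 3 features each
batch = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],
                  [3.0, 6.0, 9.0],
                  [4.0, 8.0, 12.0]])
out = batch_norm(batch)
```

At inference time, running averages of the mini-batch statistics are used instead, as the summary notes.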
How do we protect privacy of users when building large-scale AI based systems? How do we develop machine learned models and systems taking fairness, accountability, and transparency into account? With the ongoing explosive growth of AI/ML models and systems, these are some of the ethical, legal, and technical challenges encountered by researchers and practitioners alike. In this talk, we will first motivate the need for adopting a "fairness and privacy by design" approach when developing AI/ML models and systems for different consumer and enterprise applications. We will then focus on the application of fairness-aware machine learning and privacy-preserving data mining techniques in practice, by presenting case studies spanning different LinkedIn applications (such as fairness-aware talent search ranking, privacy-preserving analytics, and LinkedIn Salary privacy & security design), and conclude with the key takeaways and open challenges.
[Video recording available at https://www.youtube.com/playlist?list=PLewjn-vrZ7d3x0M4Uu_57oaJPRXkiS221]
Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with proliferation of AI based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to a growing concern regarding potential bias in these models, and a demand for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust and adoption of AI systems in high stakes domains requiring reliability and safety such as healthcare and automated transportation, and critical industrial applications with significant economic implications such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and finally (iii) designing measures for evaluating the performance of models in explainability tasks.
In this tutorial, we present an overview of model interpretability and explainability in AI, key regulations / laws, and techniques / tools for providing explainability as part of AI/ML systems. Then, we focus on the application of explainability techniques in industry, wherein we present practical challenges / guidelines for effectively using explainability techniques and lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We present case studies across different companies, spanning application domains such as search & recommendation systems, hiring, sales, and lending. Finally, based on our experiences in industry, we identify open problems and research directions for the data mining / machine learning community.
IMAGE QUALITY ASSESSMENT: A SURVEY OF RECENT APPROACHES, cscpconf
Image Quality Assessment (IQA) is the process of quantifying degradation in image quality.
With the increase in image-based applications, IQA deserves extensive research. In this paper we present popular IQA methods of the three types, namely Full Reference (FR), No Reference (NR), and Reduced Reference (RR). The paper compares the approaches in terms of the database used, the performance metric, and the methods employed.
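As a concrete example of a full-reference (FR) measure, PSNR scores a distorted image against its pristine reference. This is a minimal sketch (the 8-bit peak value of 255 is an assumption; the flat toy "images" are invented):

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio: a classic full-reference IQA metric."""
    ref = np.asarray(reference, dtype=float)
    dist = np.asarray(distorted, dtype=float)
    mse = np.mean((ref - dist) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy "images": a flat reference and a copy with a uniform error of 1 level
ref = np.full((8, 8), 128.0)
noisy = ref + 1.0
```

No-reference (NR) methods must estimate quality without `ref`, which is what makes them considerably harder.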
Smart Data Slides: Machine Learning - Case StudiesDATAVERSITY
The state of the art and practice for machine learning (ML) has matured rapidly in the past 3 years, making it an ideal time to take a look at what works and what doesn’t.
In this webinar, we will review case studies from 3 industries:
-Insurance
-Healthcare
-Pharma
Participants will learn to look for characteristics of business processes and of data that make them well- or ill-suited to augmentation or automation with ML.
This document introduces classification using linear models and discriminant functions. It discusses:
1) Representing binary and multi-class labels using coding schemes like 1-of-K.
2) Using a generalized linear model framework to map linear discriminant functions to class labels.
3) Solving classification problems using least squares to determine the parameters of linear discriminant functions that minimize error on training data.
However, least squares solutions for classification have deficiencies since discriminant function outputs are unconstrained and not probabilistic.
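The least-squares approach summarized above can be illustrated with a 1-of-K target coding (a toy sketch on synthetic data; the clusters and seed are invented for illustration):

```python
import numpy as np

def fit_least_squares_classifier(X, labels, n_classes):
    """Fit linear discriminants by regressing 1-of-K targets on the inputs."""
    X1 = np.column_stack([np.ones(len(X)), X])  # prepend bias term
    T = np.eye(n_classes)[labels]               # 1-of-K coding of the labels
    W, *_ = np.linalg.lstsq(X1, T, rcond=None)  # minimize ||X1 W - T||^2
    return W

def predict(W, X):
    X1 = np.column_stack([np.ones(len(X)), X])
    # Outputs are unconstrained (not probabilities); pick the largest.
    return np.argmax(X1 @ W, axis=1)

# Two well-separated 2-D clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
W = fit_least_squares_classifier(X, y, n_classes=2)
```

The deficiency noted above is visible in the comment: `X1 @ W` can produce values outside [0, 1], which is why probabilistic alternatives such as logistic regression are usually preferred.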
GANs are the hottest new topic in the ML arena; however, they present a challenge for researchers and engineers alike. Their design and, most importantly, their code implementation have been causing headaches for ML practitioners, especially when moving to production.
Starting from the very basics of what a GAN is, passing through a TensorFlow implementation using the most cutting-edge APIs available in the framework, and finally arriving at production-ready serving at scale using Google Cloud ML Engine.
Slides for the talk: https://www.pycon.it/conference/talks/deep-diving-into-gans-form-theory-to-production
Github repo: https://github.com/zurutech/gans-from-theory-to-production
For just a moment, think of the immense amount of data generated by Earth-observing systems. The sheer volume often makes it impractical for humans alone to perform the analysis, and accordingly, many groups are turning to artificial intelligence (AI) and machine learning (ML) algorithms to support their analysis. We'll hear from Development Seed and EOS about how they are using AI and ML to unlock the power of this planetary-scale data that is becoming increasingly more accessible in the cloud. From open-source libraries and human-in-the-loop initial processing passes, to fully automated pipelines, we'll examine the new capacity for analysis now possible with technology.
“AI is the new electricity” proclaims Andrew Ng, co-founder of Google Brain. Just as we need to know how to safely harness electricity, we also need to know how to securely employ AI to power our businesses. In some scenarios, the security of AI systems can impact human safety. On the flip side, AI can also be misused by cyber-adversaries and so we need to understand how to counter them.
This talk will provide food for thought in 3 areas:
Security of AI systems
Use of AI in cybersecurity
Malicious use of AI
Lecture 4 Decision Trees (2): Entropy, Information Gain, Gain Ratio, Marina Santini
attribute selection, constructing decision trees, decision trees, divide and conquer, entropy, gain ratio, information gain, machine learning, pruning, rules, surprisal
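The three quantities in the lecture title can be computed directly from label counts. This is a minimal sketch; the 10-example split below is an invented illustration, not data from the lecture:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, subsets):
    """Entropy of the parent minus the weighted entropy of the child subsets."""
    n = len(parent)
    remainder = sum(len(s) / n * entropy(s) for s in subsets)
    return entropy(parent) - remainder

def gain_ratio(parent, subsets):
    """Information gain normalized by the split information, which penalizes
    attributes that fragment the data into many small subsets."""
    n = len(parent)
    split_info = -sum(len(s) / n * math.log2(len(s) / n) for s in subsets)
    return information_gain(parent, subsets) / split_info

# Invented split: 10 examples, an attribute divides them into two subsets
parent = ["yes"] * 5 + ["no"] * 5
subsets = [["yes"] * 4 + ["no"], ["no"] * 4 + ["yes"]]
```

A balanced parent has entropy 1 bit, and because this split produces two equal-sized subsets, the split information is also 1 bit, so gain ratio equals information gain here.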
Every single security company is talking about how they are using machine learning—as a security company you have to claim artificial intelligence to be even part of the conversation. However, this approach can be dangerous when we blindly rely on algorithms to do the right thing. Rather than building systems with actual security knowledge, companies are using algorithms that nobody understands and, in turn, discovering wrong insights.
In this session, we will discuss:
• Limitations of machine learning and issues of explainability
• Where deep learning should never be applied
• Examples of how the blind application of algorithms can lead to wrong results
Slides by Víctor Garcia about the paper:
Reed, Scott, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. "Generative adversarial text to image synthesis." ICML 2016.
Logistic Regression in Python | Logistic Regression Example | Machine Learnin..., Edureka!
** Python Data Science Training : https://www.edureka.co/python **
This Edureka video on Logistic Regression in Python will give you a basic understanding of the Logistic Regression machine learning algorithm, with examples. In this video, you will also see a demo of Logistic Regression using Python. Below are the topics covered in this tutorial:
1. What is Regression?
2. What is Logistic Regression?
3. Why use Logistic Regression?
4. Linear vs Logistic Regression
5. Logistic Regression Use Cases
6. Logistic Regression Example Demo in Python
Machine Learning Tutorial Playlist: https://goo.gl/UxjTxm
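For a feel of what such a demo covers, logistic regression can be written from scratch in a few lines of numpy. This toy sketch on synthetic 1-D data is not the Edureka demo itself:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Fit logistic regression by batch gradient descent on the log-loss."""
    X1 = np.column_stack([np.ones(len(X)), X])  # bias column
    w = np.zeros(X1.shape[1])
    for _ in range(epochs):
        p = sigmoid(X1 @ w)                     # predicted probabilities
        w -= lr * X1.T @ (p - y) / len(y)       # gradient of the mean log-loss
    return w

def predict_proba(w, X):
    X1 = np.column_stack([np.ones(len(X)), X])
    return sigmoid(X1 @ w)

# Synthetic 1-D data: class 1 tends to have larger x
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(-2, 1, 50), rng.normal(2, 1, 50)]).reshape(-1, 1)
y = np.array([0] * 50 + [1] * 50)
w = fit_logistic(X, y)
```

Unlike linear regression, the sigmoid keeps the outputs in (0, 1), which is the "linear vs logistic" distinction the tutorial lists.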
Welcome to Supervised Machine Learning and Data Science.
Algorithms for building models: Support Vector Machines.
An explanation of the SVM classification algorithm, with code in Python.
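A linear SVM of the kind mentioned above can be trained with subgradient steps on the regularized hinge loss. This bare-bones numpy sketch on synthetic data is for illustration; a course would typically use a library such as scikit-learn:

```python
import numpy as np

def fit_linear_svm(X, y, lam=0.01, lr=0.01, epochs=300):
    """Minimize the regularized hinge loss, with labels y in {-1, +1}."""
    X1 = np.column_stack([np.ones(len(X)), X])  # bias column
    w = np.zeros(X1.shape[1])
    for _ in range(epochs):
        margins = y * (X1 @ w)
        mask = margins < 1  # only margin-violating points contribute
        hinge_grad = -(y[mask, None] * X1[mask]).sum(axis=0) / len(y)
        w -= lr * (lam * w + hinge_grad)
    return w

def predict_svm(w, X):
    X1 = np.column_stack([np.ones(len(X)), X])
    return np.sign(X1 @ w)

# Two well-separated 2-D clusters with labels -1 and +1
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2, 0.5, (30, 2)), rng.normal(2, 0.5, (30, 2))])
y = np.array([-1] * 30 + [1] * 30)
w = fit_linear_svm(X, y)
```

The `lam * w` term shrinks the weights, which in SVM terms corresponds to maximizing the margin between the two classes.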
Assessing climate variability and change with explainable neural networks, Zachary Labe
I. Explainable neural networks can reveal patterns of climate change in large ensembles simulated with different combinations of external forcing.
II. Neural networks can identify unique model differences and biases between large ensembles and observations.
III. The presentation raises the question of what neural networks can reveal about predictability in the climate system, using the example of the debated "global warming hiatus" period.
Forced climate signals with explainable AI and large ensembles, Zachary Labe
1) The document discusses using artificial neural networks and explainable AI methods to analyze climate model simulations and detect climate signals related to different external forcings like greenhouse gases and aerosols.
2) A case study examines temperature trends in simulations with varying external forcings, and the neural network is able to more accurately predict observations when trained on a simulation with evolving greenhouse gases.
3) Explainable AI methods reveal the neural network relies more on patterns related to greenhouse gas forcing compared to aerosol or preindustrial forcing patterns when making predictions.
Applications of machine learning for climate change and variability, Zachary Labe
23 February 2024…
Department of Environmental Sciences Seminar (Presentation): Applications of machine learning for climate change and variability, Rutgers University, New Brunswick, NJ.
References:
Labe, Z.M. and E.A. Barnes (2021), Detecting climate signals using explainable AI with single-forcing large ensembles. Journal of Advances in Modeling Earth Systems, DOI:10.1029/2021MS002464
Labe, Z.M. and E.A. Barnes (2022), Predicting slowdowns in decadal climate warming trends with explainable neural networks. Geophysical Research Letters, DOI:10.1029/2022GL098173
Labe, Z.M., N.C. Johnson, and T.L. Delworth (2024), Changes in United States summer temperatures revealed by explainable neural networks. Earth's Future, DOI:10.1029/2023EF003981
Exploring climate change signals with explainable AI, Zachary Labe
This document provides an overview of using machine learning and explainable AI techniques to analyze climate model data and make predictions about climate signals and forcings. It discusses:
1) The growing use of machine learning tools like neural networks in weather and climate research to better understand models, find relationships in data, and make predictions.
2) An example of using a neural network to classify the decade from surface temperature maps, and explaining the predictions using Layer-wise Relevance Propagation.
3) Results showing the neural network had greater skill in identifying patterns related to greenhouse gas forcing versus aerosol forcing, and LRP helped identify the relevant regions driving these predictions.
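To make the Layer-wise Relevance Propagation step concrete, here is a minimal LRP backward pass through a two-layer ReLU network. The weights and input are made up for illustration; this is not the network or data used in the study:

```python
import numpy as np

def lrp_linear(a, w, relevance, eps=1e-9):
    """Basic LRP rule for one linear layer: redistribute each output unit's
    relevance to the inputs in proportion to their contributions a_j * w_jk."""
    z = a @ w                                 # pre-activations, shape (K,)
    z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilize the division
    s = relevance / z
    return a * (w @ s)                        # relevance per input unit

# Made-up two-layer ReLU network: 4 inputs -> 3 hidden -> 2 outputs
rng = np.random.default_rng(3)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(3, 2))
x = rng.normal(size=4)
h = np.maximum(0, x @ W1)                     # hidden activations (ReLU)
out = h @ W2

# Start from the predicted class's score and propagate back to the inputs
R_out = np.zeros(2)
R_out[np.argmax(out)] = out[np.argmax(out)]
R_hidden = lrp_linear(h, W2, R_out)
R_input = lrp_linear(x, W1, R_hidden)
```

The key property is conservation: the total relevance at the input approximately equals the output score being explained, so `R_input` can be read as a heatmap over input features (grid cells, in the climate setting).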
Disentangling Climate Forcing in Multi-Model Large Ensembles Using Neural Net...
Zachary Labe
The relative roles of individual forcings on large-scale climate variability remain difficult to disentangle within fully-coupled global climate model simulations. Here, we train an artificial neural network (ANN) to classify the climate forcings of a new set of CESM1 initial-condition large ensembles that are forced by different combinations of aerosol (industrial and biomass burning), greenhouse gas, and land-use/land-cover forcings. As a result of learning the regional responses of internal variability to the different external forcings, the ANN is able to successfully classify the dominant forcing for each model simulation. Using recently developed explainable AI methods, such as layerwise relevance propagation, we then compare the patterns of climate variability identified by the ANN between different external climate forcings that are learned by the neural network. Further, we apply this ANN architecture on additional climate simulations from the multi-model large ensemble archive, which include all anthropogenic and natural radiative forcings. From this collection of initial-condition ensembles, the ANN is also able to detect changes in atmospheric internal variability between the 20th and 21st centuries by training on climate fields after the mean forced signal has already been removed. This ANN framework and its associated visualization tools provide a novel approach to extract complex patterns of observable and projected climate variability and trends in Earth system models. (from https://ams.confex.com/ams/101ANNUAL/meetingapp.cgi/Paper/379553)
Distinguishing the regional emergence of United States summer temperatures be...
Zachary Labe
Labe, Z.M., N.C. Johnson, and T.L. Delworth. Distinguishing the regional emergence of United States summer temperatures between observations and climate model large ensembles, 23rd Conference on Artificial Intelligence for Environmental Science, Baltimore, MD (Jan 2024). https://ams.confex.com/ams/104ANNUAL/meetingapp.cgi/Paper/431288
Climate Signals in CESM1 Single-Forcing Large Ensembles Revealed by Explainab...
Zachary Labe
26th Annual CESM Workshop - Machine Learning: CESM-Related Efforts
In this study, we use an explainable artificial intelligence method to identify climate signals that are found in a new set of single-forcing large ensembles from CESM1. To compare patterns between simulations, we adopt an artificial neural network (ANN) that predicts the year from input maps of near-surface temperature. We find that the North Atlantic Ocean is an important region for the ANN to make its prediction, especially for the simulation forced without time-evolving industrial aerosols.
Revealing climate change signals with explainable AI
Zachary Labe
1. The document discusses using explainable artificial intelligence methods to analyze climate model data and reveal patterns of climate change.
2. An artificial neural network was trained to predict decades from maps of surface temperature and showed greater skill at predicting decades when greenhouse gases were the only forcing compared to when aerosols were the only forcing.
3. Explainable AI techniques showed the North Atlantic region was particularly important for the neural network's predictions in climate model experiments forced by both aerosols and greenhouse gases.
Climate change extremes by season in the United States
Zachary Labe
11 September 2023
Hershey Horticulture Society (Presentation): Climate change extremes by season in the United States, Hershey, PA, USA.
References:
Eischeid, J.K., M.P. Hoerling, X.-W. Quan, A. Kumar, J. Barsugli, Z.M. Labe, K.E. Kunkel, C.J. Schreck III, D.R. Easterling, T. Zhang, J. Uehling, and X. Zhang (2023). Why has the summertime central U.S. warming hole not disappeared? Journal of Climate, DOI:10.1175/JCLI-D-22-0716.1
Labe, Z.M., T.R. Ault, and R. Zurita-Milla (2016), Identifying Anomalously Early Spring Onsets in the CESM Large Ensemble Project. Climate Dynamics, DOI:10.1007/s00382-016-3313-2
Labe, Z.M., N.C. Johnson, and T.L. Delworth (2023). Changes in United States summer temperatures revealed by explainable neural networks. Preprint. DOI:10.22541/essoar.168987129.98069596/v1
Exploring explainable machine learning for detecting changes in climate
Zachary Labe
9 February 2023
Department of Earth, Ocean, and Atmospheric Science Colloquium (Presentation): Exploring explainable machine learning for detecting changes in climate, Florida State University, USA. Remote Presentation.
References:
Labe, Z.M. and E.A. Barnes (2021), Detecting climate signals using explainable AI with single-forcing large ensembles. Journal of Advances in Modeling Earth Systems, DOI:10.1029/2021MS002464
Labe, Z.M. and E.A. Barnes (2022), Predicting slowdowns in decadal climate warming trends with explainable neural networks. Geophysical Research Letters, DOI:10.1029/2022GL098173
Po-Chedley, S., J.T. Fasullo, N. Siler, Z.M. Labe, E.A. Barnes, C.J.W. Bonfils, and B.D. Santer (2022). Internal variability and forcing influence model-satellite differences in the rate of tropical tropospheric warming. Proceedings of the National Academy of Sciences, DOI:10.1073/pnas.2209431119
Using explainable machine learning for evaluating patterns of climate change
Zachary Labe
22 February 2023
Natural Sciences Group Seminar (Presentation): Using explainable machine learning for evaluating patterns of climate change, Washington State University Vancouver, USA. Remote Presentation.
References:
Labe, Z.M. and E.A. Barnes (2021), Detecting climate signals using explainable AI with single-forcing large ensembles. Journal of Advances in Modeling Earth Systems, DOI:10.1029/2021MS002464
Labe, Z.M. and E.A. Barnes (2022), Predicting slowdowns in decadal climate warming trends with explainable neural networks. Geophysical Research Letters, DOI:10.1029/2022GL098173
Evaluating and communicating Arctic climate change projections
Zachary Labe
20 February 2023
Climate Change and Agriculture Guest (Presentation): Evaluating and communicating Arctic climate change projections, Kansas State University, USA.
References:
Delworth, T. L., Cooke, W. F., Adcroft, A., Bushuk, M., Chen, J. H., Dunne, K. A., ... & Zhao, M. (2020). SPEAR: The next generation GFDL modeling system for seasonal to multidecadal prediction and projection. Journal of Advances in Modeling Earth Systems, 12(3), e2019MS001895, https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2019MS001895
Labe, Z.M. and E.A. Barnes (2022), Comparison of climate model large ensembles with observations in the Arctic using simple neural networks. Earth and Space Science, DOI:10.1029/2022EA002348, https://doi.org/10.1029/2022EA002348
Labe, Z.M., Y. Peings, and G. Magnusdottir (2020). Warm Arctic, cold Siberia pattern: role of full Arctic amplification versus sea ice loss alone, Geophysical Research Letters, DOI:10.1029/2020GL088583, https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2020GL088583
Peings, Y., Cattiaux, J., Vavrus, S. J., & Magnusdottir, G. (2018). Projected squeezing of the wintertime North-Atlantic jet. Environmental Research Letters, 13(7), 074016, https://iopscience.iop.org/article/10.1088/1748-9326/aacc79/meta
Learning new climate science by thinking creatively with machine learning
Zachary Labe
Presentation for: GFDL/AOS Summer Internship Lecture Series
The popularity of machine learning is rapidly growing in nearly all areas of Earth science. However, there is also some hesitancy in adopting the use of these methods due to concerns about their reliability, reproducibility, and interpretability – thus, they are often described as “black boxes.”
In this talk, I will introduce a few simple examples from climate science that leverage new visualization methods to peer into the machine learning “black box,” which help us to better understand their predictions while also learning new science. These same machine learning visualization tools can be easily adapted for a wide variety of applications and other scientific fields of study.
Learning new climate science by opening the machine learning black box
Zachary Labe
Department of Psychology (Invited): Cognitive Brownbag Series, Colorado State University, CO.
The popularity of machine learning, specifically neural networks, is rapidly growing in nearly all areas of science. The explosion of these methods also coincides with a growing influx of computationally expensive data sets and the need for high efficiency in solving prediction problems. However, there is also some hesitancy in adopting the use of neural networks due to concerns about their reliability, reproducibility, and interpretability – thus, they are often described as “black boxes.”
In climate science, we often consider signal-to-noise problems to help disentangle human-caused climate change from natural variability. These applications typically involve complicated nonlinear relationships between different feedbacks at play in the ocean, land, and atmosphere. Recent work has shown that neural networks can be a promising tool for solving these types of statistical problems when combined with explainability techniques developed by the fields of computer science and image processing. Interestingly, these methods have revealed that neural networks often leverage regional patterns of climate change indicators in order to make their predictions. In this talk, I will share examples from climate science that use a few of these visualization methods to peer into the “black box” of neural networks, which help us to better understand their decision-making processes while also learning new science. The same machine learning visualization methods can be easily adapted for a wide variety of applications and other scientific fields of study.
Explainable AI approach for evaluating climate models in the Arctic
Zachary Labe
27 March 2024
IARPC Collaborations, Modelers’ Community of Practice (Presentation): Explainable AI approach for evaluating climate models in the Arctic. Remote Presentation.
References:
Labe, Z. M., & Barnes, E. A. (2022). Comparison of climate model large ensembles with observations in the Arctic using simple neural networks. Earth and Space Science, 9(7), e2022EA002348, https://doi.org/10.1029/2022EA002348
Machine learning for evaluating climate model projections
Zachary Labe
IEEE Student Branch, IIT Indore, Tech-Talks 2.0. Remote Presentation (Dec 2022) (Invited).
The popularity of machine learning methods, such as neural networks, is rapidly expanding in nearly all areas of science. The interest in these tools also coincides with a growing influx of big data and the need for high efficiency in solving prediction problems. However, there is also some hesitancy in adopting the use of neural networks due to concerns about their reliability, reproducibility, and interpretability.
In climate science, we often consider signal-to-noise problems to help disentangle human-caused climate change from natural variability. These applications typically involve complicated relationships between different feedbacks at play in the ocean, cryosphere, land, and atmosphere. Recent work has shown that neural networks can be a promising tool for solving these types of statistical problems when combined with explainability techniques developed by the fields of computer science and image processing. Interestingly, these methods have revealed that neural networks often leverage regional patterns of climate change in order to make their predictions. In this webinar, I will share examples from climate science that use a few of these visualization methods to peer into the “black box” of neural networks, which help us to better understand their decision-making process while also learning new science. The same machine learning visualization methods can be easily adapted for a wide variety of applications and other scientific fields of study.
Using artificial neural networks to predict temporary slowdowns in global war...
Zachary Labe
1. Researchers used an artificial neural network to predict slowdowns in the rate of decadal global warming by analyzing patterns in ocean heat content anomalies.
2. Explainable AI techniques showed the neural network was leveraging tropical ocean heat content to make predictions.
3. Transitions between phases of the Interdecadal Pacific Oscillation were often associated with periods of slower warming in climate model simulations, consistent with observed temperature trends.
Explainable neural networks for evaluating patterns of climate change and var...
Zachary Labe
12 March 2024
Sharing Science – North American Webinar, Young Earth System Scientists (YESS) Community (Presentation): Explainable neural networks for evaluating patterns of climate change and variability. Remote Presentation.
References:
Labe, Z.M., E.A. Barnes, and J.W. Hurrell (2023). Identifying the regional emergence of climate patterns in the ARISE-SAI-1.5 simulations. Environmental Research Letters, DOI:10.1088/1748-9326/acc81a
Reexamining future projections of Arctic climate linkages
Zachary Labe
10 May 2024
Atmospheric and Oceanic Sciences Student/Postdoc Seminar (Presentation): Reexamining future projections of Arctic climate linkages, Princeton University, USA.
References:
Labe, Z.M., Y. Peings, and G. Magnusdottir (2018), Contributions of ice thickness to the atmospheric response from projected Arctic sea ice loss, Geophysical Research Letters, DOI:10.1029/2018GL078158
Labe, Z.M., Y. Peings, and G. Magnusdottir (2019). The effect of QBO phase on the atmospheric response to projected Arctic sea ice loss in early winter, Geophysical Research Letters, DOI:10.1029/2019GL083095
Labe, Z.M., Y. Peings, and G. Magnusdottir (2020). Warm Arctic, cold Siberia pattern: role of full Arctic amplification versus sea ice loss alone, Geophysical Research Letters, DOI:10.1029/2020GL088583
Labe, Z.M., May 2020: The effects of Arctic sea-ice thickness loss and stratospheric variability on mid-latitude cold spells. University of California, Irvine. Doctoral Dissertation.
Peings, Y., Z.M. Labe, and G. Magnusdottir (2021), Are 100 ensemble members enough to capture the remote atmospheric response to +2°C Arctic sea ice loss? Journal of Climate, DOI:10.1175/JCLI-D-20-0613.1
Techniques and Considerations for Improving Accessibility in Online Media
Zachary Labe
3 April 2024
United States Association of Polar Early Career Scientists (USAPECS) IDEA Training Course (Presentation): Accessibility and disability in online spaces. Remote Presentation.
An intro to explainable AI for polar climate science
Zachary Labe
26 March 2024
GFDL Polar Climate Interest Group (Presentation): An intro to explainable AI for polar climate science, NOAA GFDL, Princeton, NJ.
References:
Labe, Z.M. and E.A. Barnes (2022), Comparison of climate model large ensembles with observations in the Arctic using simple neural networks. Earth and Space Science, DOI:10.1029/2022EA002348, https://doi.org/10.1029/2022EA002348
Labe, Z.M. and E.A. Barnes (2021), Detecting climate signals using explainable AI with single-forcing large ensembles. Journal of Advances in Modeling Earth Systems, DOI:10.1029/2021MS002464, https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2021MS002464
Using accessible data to communicate global climate change
Zachary Labe
25 March 2024
Climate Communication Workshop: Learn How To Make Your Research Matter (Keynote Presentation): Using accessible data to communicate global climate change, Temple University, Philadelphia, PA.
Water in a Frozen Arctic: Cross-Disciplinary Perspectives
Zachary Labe
14 March 2024
United States Association of Polar Early Career Scientists (USAPECS) Webinar (Host): Water in a Frozen Arctic: Cross-Disciplinary Perspectives. Remote Panel.
Event Page: https://www.usapecs.org/post/webinar-water-frozen-arctic
A data-driven approach to identifying key regions of change associated with fut...
Zachary Labe
Labe, Z.M., T.L. Delworth, N.C. Johnson, and W.F. Cooke. A data-driven approach to identifying key regions of change associated with future climate scenarios, 23rd Conference on Artificial Intelligence for Environmental Science, Baltimore, MD (Jan 2024). https://ams.confex.com/ams/104ANNUAL/meetingapp.cgi/Paper/431300
Researching and Communicating Our Changing Climate
Zachary Labe
Zachary Labe is a postdoc researcher at NOAA GFDL and Princeton University who studies climate variability and change. His research uses tools like artificial intelligence and climate models to disentangle the signal of climate change from natural weather noise. He conducts field work including Arctic expeditions and uses supercomputers to run complex climate models that generate huge amounts of data.
Revisiting projections of Arctic climate change linkages
Zachary Labe
16 November 2023
Department Seminar (Presentation): Revisiting projections of Arctic climate change linkages, Tongji University, Shanghai, China. Remote Presentation.
References:
Labe, Z.M., Y. Peings, and G. Magnusdottir (2018), Contributions of ice thickness to the atmospheric response from projected Arctic sea ice loss, Geophysical Research Letters, DOI:10.1029/2018GL078158
Labe, Z.M., Y. Peings, and G. Magnusdottir (2019). The effect of QBO phase on the atmospheric response to projected Arctic sea ice loss in early winter, Geophysical Research Letters, DOI:10.1029/2019GL083095
Labe, Z.M., Y. Peings, and G. Magnusdottir (2020). Warm Arctic, cold Siberia pattern: role of full Arctic amplification versus sea ice loss alone, Geophysical Research Letters, DOI:10.1029/2020GL088583
Peings, Y., Z.M. Labe, and G. Magnusdottir (2021), Are 100 ensemble members enough to capture the remote atmospheric response to +2°C Arctic sea ice loss? Journal of Climate, DOI:10.1175/JCLI-D-20-0613.1
Labe, Z.M. and E.A. Barnes (2022), Comparison of climate model large ensembles with observations in the Arctic using simple neural networks. Earth and Space Science, DOI:10.1029/2022EA002348
Visualizing climate change through data
Zachary Labe
18 November 2023
NJ State Museum Planetarium (Presentation): Visualizing climate change through data, Trenton, NJ.
References:
Eischeid, J.K., M.P. Hoerling, X.-W. Quan, A. Kumar, J. Barsugli, Z.M. Labe, K.E. Kunkel, C.J. Schreck III, D.R. Easterling, T. Zhang, J. Uehling, and X. Zhang (2023). Why has the summertime central U.S. warming hole not disappeared? Journal of Climate, DOI:10.1175/JCLI-D-22-0716.1, https://journals.ametsoc.org/view/journals/clim/36/20/JCLI-D-22-0716.1.xml
Using explainable machine learning to evaluate climate change projections
Zachary Labe
5 October 2023
Atmosphere and Ocean Climate Dynamics Seminar (Presentation): Using explainable machine learning to evaluate climate change projections, Yale University, New Haven, CT. Remote Presentation.
References:
Labe, Z.M., E.A. Barnes, and J.W. Hurrell (2023). Identifying the regional emergence of climate patterns in the ARISE-SAI-1.5 simulations. Environmental Research Letters, DOI:10.1088/1748-9326/acc81a, https://iopscience.iop.org/article/10.1088/1748-9326/acc81a
Contrasting polar climate change in the past, present, and future
Zachary Labe
28 September 2023
Guest lecture for “Observing and Modeling Climate Change (EES 3506/5506)” (Presentation): Contrasting polar climate change in the past, present, and future, Temple University, Philadelphia, PA. Remote Presentation.
Guest Lecture: Our changing Arctic in the past and future
Zachary Labe
22 August 2023
Guest lecture for “Introduction to Global Climate Change (ESS 15)” (Invited): Our changing Arctic in the past and future, University of California, Irvine, CA. Remote Presentation.
References:
Delworth, T. L., Cooke, W. F., Adcroft, A., Bushuk, M., Chen, J. H., Dunne, K. A., ... & Zhao, M. (2020). SPEAR: The next generation GFDL modeling system for seasonal to multidecadal prediction and projection. Journal of Advances in Modeling Earth Systems, 12(3), e2019MS001895, https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2019MS001895
Labe, Z.M. and E.A. Barnes (2022), Comparison of climate model large ensembles with observations in the Arctic using simple neural networks. Earth and Space Science, DOI:10.1029/2022EA002348, https://doi.org/10.1029/2022EA002348
Labe, Z.M., Y. Peings, and G. Magnusdottir (2020). Warm Arctic, cold Siberia pattern: role of full Arctic amplification versus sea ice loss alone, Geophysical Research Letters, DOI:10.1029/2020GL088583, https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2020GL088583
Monitoring indicators of climate change through data-driven visualization
Zachary Labe
19 June 2023
La Uni Climática - IV Edition (Presentation): Monitoring indicators of climate change through data-driven visualization. Remote Presentation.
Career pathways and research opportunities in the Earth sciences
Zachary Labe
20 April 2023
Mercer County Community College (Presentation): Career pathways and research opportunities in the Earth sciences, West Windsor Township, NJ, USA.
Explainable AI for identifying regional climate change patterns
1. Explainable AI for identifying regional climate change patterns
@ZLabe
Zachary M. Labe
Postdoc at Princeton University and NOAA GFDL
13 January 2023 – University of Leeds
Scientific Machine Learning Community (SciML)
https://zacklabe.com/
2. WHY SHOULD WE CONSIDER MACHINE LEARNING?
• Do it better: e.g., parameterizations in climate models are not perfect; use ML to make them more accurate
• Do it faster: e.g., code in climate models is very slow (but we know the right answer); use ML methods to speed things up
• Do something new: e.g., go looking for non-linear relationships you didn’t know were there
Very relevant for research: may be slower and worse, but can still learn something
5. “Today’s weather or climate scientist is far more likely to be debugging code written in Python… than to be poring over satellite images or releasing radiosondes.”
D. Irving | Bulletin of the American Meteorological Society | 2016
6. Machine learning for weather
IDENTIFYING SEVERE THUNDERSTORMS (Molina et al. 2021)
CLASSIFYING PHASE OF MADDEN-JULIAN OSCILLATION (Toms et al. 2021)
SATELLITE DETECTION (Lee et al. 2021)
DETECTING TORNADOES (McGovern et al. 2019)
7. Machine learning for climate
FINDING FORECASTS OF OPPORTUNITY (Mayer and Barnes, 2021)
PREDICTING CLIMATE MODES OF VARIABILITY (Gordon et al. 2021)
TIMING OF CLIMATE CHANGE (Barnes et al. 2019)
8. Machine learning for oceanography
CLASSIFYING ARCTIC OCEAN ACIDIFICATION (Krasting et al. 2022)
TRACK AND REVEAL DEEP WATER MASSES (Sonnewald and Lguensat, 2021)
ESTIMATING OCEAN SURFACE CURRENTS (Sinha and Abernathey, 2021)
14. We know some metadata…
+ What year is it?
+ Where did it come from?
[Labe and Barnes, 2022; ESS]
TEMPERATURE
15. TEMPERATURE
Neural network learns nonlinear combinations of forced climate patterns to identify the year
We know some metadata…
+ What year is it?
+ Where did it come from?
[Labe and Barnes, 2022; ESS]
16. ----ANN----
2 Hidden Layers
10 Nodes each
Ridge Regularization
Early Stopping
[e.g., Barnes et al. 2019, 2020]
[e.g., Labe and Barnes, 2021]
TIMING OF EMERGENCE
(COMBINED VARIABLES)
RESPONSES TO
EXTERNAL CLIMATE
FORCINGS
PATTERNS OF
CLIMATE INDICATORS
[e.g., Rader et al. 2022]
Surface Temperature Map Precipitation Map
+
TEMPERATURE
We know some metadata…
+ What year is it?
+ Where did it come from?
[Labe and Barnes, 2022; ESS]
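The architecture listed on this slide (2 hidden layers of 10 nodes, ridge/L2 regularization, early stopping) maps naturally onto a small regression network. A sketch using scikit-learn; the grid size, sample count, and synthetic year targets below are invented placeholders, not the actual climate-model data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_gridpoints = 200, 144                   # hypothetical flattened lat-lon map size
X = rng.standard_normal((n_samples, n_gridpoints))   # stand-in temperature maps
years = 1920 + 160 * rng.random(n_samples)           # target: year of each map

model = MLPRegressor(
    hidden_layer_sizes=(10, 10),  # 2 hidden layers, 10 nodes each
    alpha=0.1,                    # L2 (ridge) penalty on the weights
    early_stopping=True,          # hold out validation data; stop when it plateaus
    max_iter=2000,
    random_state=0,
)
model.fit(X, years)
pred = model.predict(X)           # predicted year for each input map
```

The small network is deliberate: with so few nodes, the model is forced to learn broad forced patterns rather than memorize noise.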
19. What is the annual mean temperature of Earth?
THE REAL WORLD
(Observations)
Anomaly is relative to 1951-1980
20. What is the annual mean temperature of Earth?
THE REAL WORLD
(Observations)
Let’s run a
climate model
21. What is the annual mean temperature of Earth?
THE REAL WORLD
(Observations)
Let’s run a
climate model
again
22. What is the annual mean temperature of Earth?
THE REAL WORLD
(Observations)
Let’s run a
climate model
again & again
24. What is the annual mean temperature of Earth?
THE REAL WORLD
(Observations)
CLIMATE MODEL
ENSEMBLES
Range of ensembles
= internal variability (noise)
Mean of ensembles
= forced response (climate change)
25. What is the annual mean temperature of Earth?
But let’s remove climate change…
26. After removing the forced response… anomalies/noise!
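The decomposition above (ensemble mean ≈ forced response, residuals ≈ internal variability) is a one-liner once the ensemble is an array. A toy sketch; the member count, trend, and noise amplitude are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, n_years = 20, 161                 # e.g., 20 members over 1920-2080
years = np.arange(1920, 1920 + n_years)
trend = 0.01 * (years - 1920)                # idealized forced warming signal
# Each member = the same forced signal + its own realization of noise
ensemble = trend + 0.1 * rng.standard_normal((n_members, n_years))

forced = ensemble.mean(axis=0)               # mean of ensembles = forced response
internal = ensemble - forced                 # residuals = internal variability
spread = ensemble.max(axis=0) - ensemble.min(axis=0)  # range of ensembles
```

Subtracting `forced` from each member is exactly the "remove climate change" step: what remains is the anomalies/noise.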
27. What is the annual mean temperature of Earth?
• Increasing greenhouse gases (CO2, CH4, N2O)
• Changes in industrial aerosols (SO4, BC, OC)
• Changes in biomass burning (aerosols)
• Changes in land-use & land-cover (albedo)
28. Plus everything else…
(Natural/internal variability)
30. Single-forcing large ensembles [Deser et al. 2020, JCLI]:
• All forcings (CESM-LE)
• Greenhouse gases fixed to 1920 levels
• Industrial aerosols fixed to 1920 levels
Fully-coupled CESM1.1, 20 ensemble members, run from 1920-2080
Observations
31. So what?
Greenhouse gases = warming
Aerosols = ?? (though mostly cooling)
What are the relative responses between greenhouse gas and aerosol forcing?
36. INPUT LAYER
HIDDEN LAYERS
OUTPUT LAYER
Layer-wise Relevance Propagation
Surface Temperature Map
“2000-2009”
DECADE CLASS
“2070-2079”
“1920-1929”
BACK-PROPAGATE THROUGH NETWORK = EXPLAINABLE AI
ARTIFICIAL NEURAL NETWORK (ANN)
[Barnes et al. 2020, JAMES]
[Labe and Barnes 2021, JAMES]
37. LAYER-WISE RELEVANCE PROPAGATION (LRP)
Image classification examples: Volcano, Great White Shark, Timber Wolf (https://heatmapping.org/)
LRP heatmaps show regions of “relevance” that contribute to the neural network’s decision-making process for a sample belonging to a particular output category
Neural Network → WHY: backpropagation with LRP
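For a single dense layer, this relevance back-propagation can be sketched with the standard LRP epsilon rule, which redistributes each output's relevance to the inputs in proportion to their contribution. A generic NumPy illustration; the specific rule variants used in the cited papers may differ:

```python
import numpy as np

def lrp_epsilon(activations, weights, relevance_out, eps=1e-6):
    """Propagate relevance one layer back with the LRP epsilon rule."""
    z = activations @ weights                     # pre-activations z_k
    z = z + eps * np.where(z >= 0, 1.0, -1.0)     # stabilizer avoids division by zero
    s = relevance_out / z                         # share of each output's relevance
    return activations * (weights @ s)            # R_j = a_j * sum_k w_jk * s_k

# Tiny demo: one dense layer, relevance seeded with the layer's own outputs
a = np.array([0.5, 1.0, 0.2])                     # input activations
W = np.array([[0.3, -0.1],
              [0.2,  0.4],
              [-0.5, 0.1]])                       # weights (3 inputs -> 2 outputs)
R_out = a @ W                                     # start relevance at the output
R_in = lrp_epsilon(a, W, R_out)                   # relevance attributed to each input
```

A defining property of the rule is conservation: total relevance is (approximately) preserved as it propagates from layer to layer, so the final heatmap sums to roughly the network's output score.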
40. LAYER-WISE RELEVANCE PROPAGATION (LRP)
Image classification example: Crock Pot (https://heatmapping.org/)
NOT PERFECT
Neural Network → WHY: backpropagation with LRP
41. [Adapted from Adebayo et al., 2020]
EXPLAINABLE AI IS
NOT PERFECT
THERE ARE MANY
METHODS
44. Input a map of sea surface temperatures
Neural Network
[0] La Niña [1] El Niño
[Toms et al. 2020, JAMES]
45. Visualizing something we already know…
Input maps of sea surface
temperatures to identify
El Niño or La Niña
Use ‘LRP’ to see how the
neural network is making
its decision
[Toms et al. 2020, JAMES]
Layer-wise Relevance Propagation
[Figure: composite observed SST anomalies (°C, -1.5 to 1.5) alongside LRP relevance maps (0.00 to 0.75)]
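The sanity check on this slide, verifying that the attribution highlights a region we already understand physically, can be mimicked with a plain composite difference. In this toy sketch the first ten grid points play the role of the Niño 3.4 region; all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_grid = 300, 60
enso = rng.choice([-1.5, 1.5], n_samples)             # La Niña (-) vs El Niño (+) events
sst = 0.1 * rng.standard_normal((n_samples, n_grid))  # background SST anomalies
sst[:, :10] += enso[:, None]                          # signal confined to "Niño 3.4"

labels = (enso > 0).astype(int)                       # 1 = El Niño, 0 = La Niña
# Composite difference: mean El Niño map minus mean La Niña map
composite = sst[labels == 1].mean(axis=0) - sst[labels == 0].mean(axis=0)
```

A trustworthy attribution map (like the LRP relevance shown on the slide) should concentrate in the same grid points where this composite peaks.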
47. 1960-1999: ANNUAL MEAN TEMPERATURE TRENDS
Greenhouse gases fixed
to 1920 levels
[AEROSOLS PREVAIL]
Industrial aerosols fixed
to 1920 levels
[GREENHOUSE GASES PREVAIL]
All forcings
[STANDARD CESM-LE]
DATA
51. CLIMATE MODEL DATA PREDICT THE YEAR FROM MAPS OF TEMPERATURE
AEROSOLS
PREVAIL
GREENHOUSE GASES
PREVAIL
STANDARD
CLIMATE MODEL
[Labe and Barnes 2021, JAMES]
52. OBSERVATIONS PREDICT THE YEAR FROM MAPS OF TEMPERATURE
AEROSOLS
PREVAIL
GREENHOUSE GASES
PREVAIL
STANDARD
CLIMATE MODEL
[Labe and Barnes 2021, JAMES]
53. OBSERVATIONS
SLOPES
PREDICT THE YEAR FROM MAPS OF TEMPERATURE
AEROSOLS
PREVAIL
GREENHOUSE GASES
PREVAIL
STANDARD
CLIMATE MODEL
[Labe and Barnes 2021, JAMES]
61. Higher LRP values indicate greater relevance
for the ANN’s prediction
AVERAGED OVER 1960-2039
Aerosol-driven
Greenhouse gas-driven
All forcings
Low High
[Labe and Barnes 2021, JAMES]
91. SAI WORLD? [Y or N]
INPUT LAYER: map of near-surface temperature or map of total precipitation
LOGISTIC REGRESSION: input layer → softmax output
ARTIFICIAL NEURAL NETWORK: input layer → hidden layers → softmax output
Years Since SAI Injection
[Labe et al. 2023, EarthArXiv]
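The logistic-regression baseline in this figure, flattened temperature and precipitation maps in, class probabilities out, can be sketched with scikit-learn, where `predict_proba` plays the role of the softmax output. The map sizes, labels, and the toy "SAI cooling" signal below are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_grid = 100, 72                         # hypothetical flattened map size
temp = rng.standard_normal((n_samples, n_grid))     # near-surface temperature maps
precip = rng.standard_normal((n_samples, n_grid))   # total precipitation maps
labels = rng.integers(0, 2, n_samples)              # 1 = SAI world, 0 = not
temp[labels == 1] -= 0.5                            # toy cooling signal in SAI samples

X = np.hstack([temp, precip])                       # concatenate the two input maps
clf = LogisticRegression(max_iter=1000).fit(X, labels)
proba = clf.predict_proba(X)                        # class probabilities (softmax-like)
```

A linear baseline like this is useful alongside the ANN: if logistic regression already detects the SAI world, the signal is simple and spatially linear.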
93. CAN WE DETECT A SAI WORLD?
[Labe et al. 2023, EarthArXiv]
105. 3) WE CAN LEARN NEW SCIENCE FROM EXPLAINABLE AI.
106. KEY POINTS
1. Machine learning is just another tool to add to our scientific workflow
2. We can use explainable AI (XAI) methods to peer into the black box of machine learning
3. We can learn new science by using XAI methods in conjunction with existing statistical tools
Zachary Labe
zachary.labe@noaa.gov
Labe, Z.M. and E.A. Barnes (2021). Detecting climate signals using explainable AI with single-forcing large ensembles. Journal of Advances in Modeling Earth Systems, DOI: 10.1029/2021MS002464
Po-Chedley, S., J.T. Fasullo, N. Siler, Z.M. Labe, E.A. Barnes, C.J.W. Bonfils, and B.D. Santer (2022). Internal variability and forcing influence model-satellite differences in the rate of tropical tropospheric warming. Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.2209431119
Labe, Z.M., E.A. Barnes, and J.W. Hurrell (2023). Identifying the regional emergence of climate patterns in a simulation of stratospheric aerosol injection. EarthArXiv, DOI: 10.31223/X5394Z