The document summarizes an optimization technique used to adjust air pollution emissions rates in an air quality model using data from low-cost air quality sensors. The technique develops an inversion method to automatically adjust emissions inputs to improve model predictions against monitored concentrations. Preliminary tests of the technique in Cambridge, UK, optimized NOx emissions rates from 305 road sources against data from 20 low-cost sensors and 5 reference monitors. The optimization reduced errors between modeled and monitored concentrations and adjusted emissions profiles and rates in a physically reasonable manner.
Amy Stidworthy - Optimising local air quality models with sensor data - DMUG17
1. Amy Stidworthy
DMUG, 6th April 2017, London
Optimising local air quality models with sensor data: examples from Cambridge
2. DMUG, London, 6th April 2017
Acknowledgements
• The work presented here has been done by CERC, based on ADMS-Urban modelling of Cambridge following the deployment of AQMesh sensors in Cambridge. Partners include:
– Rod Jones & Lekan Popoola, Department of Chemistry, University of Cambridge
– Dan Clarke, Cambridgeshire County Council, Cambridge
– Jo Dicks & Anita Lewis, Cambridge City Council, Cambridge
– Ian Leslie, Computer Laboratory, University of Cambridge
– Amanda Randle, AQMesh
3. DMUG, London, 6th April 2017
Outline of presentation
• Motivation
• Optimisation technique
• Preliminary results for Cambridge
• Further work
4. DMUG, London, 6th April 2017
Motivation
• Emissions errors account for a significant proportion of dispersion model error
• Traditionally, dispersion models such as CERC's ADMS-Urban model are validated against data from reference monitors:
– Modellers either use the validation to improve the model setup; or
– Calculate and apply a model adjustment factor to model results
• New low-cost air pollution sensors allow large networks of sensors to be installed across a city
• Their accuracy and reliability are generally lower than those of reference monitors, but larger spatial coverage is possible
• How can we best use these sensor data in modelling?
• If the data are not accurate and reliable enough for model validation, maybe we can use them in a different way...
5. DMUG, London, 6th April 2017
Optimisation technique: Overview
• The aim is to develop an inversion technique that uses monitoring data from a network of sensors to automatically adjust emissions to improve model predictions
• Basic idea:
– Run ADMS-Urban to obtain modelled concentrations at monitor locations in the normal way
– Use these modelled concentrations and their associated emissions as a 'first guess', together with:
a) monitored concentration data
b) information about the error in the monitored data and the proportion of that error that is systematic across all monitors
c) information about the error in the emissions data and the proportion of that error that is systematic across all sources
– Use an inversion technique to calculate an adjusted set of emissions that reduces the error in the modelled concentrations
6. DMUG, London, 6th April 2017
Optimisation technique: Introduction
• There are some conditions that have to be satisfied for such a scheme to work:
a) The modelled concentration must be proportional to the emissions, which means that complex effects like chemistry have to be ignored
b) Any source included must affect at least one receptor (monitor)
c) Any receptor included must have a non-zero concentration
• The technique developed uses a probabilistic approach following work by others, for example as used by the Met Office for estimating volcanic ash source parameters from satellite retrievals [Webster et al., 2016]
7. DMUG, London, 6th April 2017
Optimisation technique: Cost function
We define a cost function J(x) with two terms: one that describes
the error in the modelled concentration (left-hand term) and one
that describes the error in the emissions (right-hand term)
The aim is to minimise J to obtain x, a vector of adjusted
emissions.
Quantity  Definition                                             Dimensions
x         Vector of adjusted emissions (the result)              n
M         Transport matrix relating the source term
          to the observations                                    k by n
y         Vector of observations                                 k
R         Error covariance matrix for the observations           k by k
e         Vector of first-guess emissions                        n
B         Error covariance matrix for the first-guess emissions  n by n

J(x) = (y - Mx)^T R^-1 (y - Mx) + (x - e)^T B^-1 (x - e)
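As a sketch only (not CERC's implementation; names and data are illustrative), the cost function can be evaluated directly with NumPy:

```python
import numpy as np

def cost(x, M, y, R, e, B):
    """J(x) = (y - Mx)^T R^-1 (y - Mx) + (x - e)^T B^-1 (x - e)."""
    r = y - M @ x   # misfit to the k monitored concentrations
    s = x - e       # departure from the n first-guess emissions
    return r @ np.linalg.solve(R, r) + s @ np.linalg.solve(B, s)
```

At x = e the emissions term vanishes, so J measures only the concentration misfit of the first-guess model run.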
Optimisation technique: Least squares problem
• To solve the cost function minimisation problem, we first convert
the problem to a ‘least squares’ problem, which is easier to solve
computationally
• A ‘least squares’ problem finds the best solution to the equation
Ax=f, where x is a vector of size m, f is a vector of size n and A is
a matrix with n rows and m columns.
• The result of solving the least squares problem is the vector x
that gives the minimum value of the sum of the squares of the
elements of (Ax-f)
• So, we need to write the cost function as
J(x) = (Ax - f)^T (Ax - f)
• Fast forward through the maths...
Optimisation technique: Error covariance matrices
• To solve the problem, we need to construct the matrix A and the vector
f, but do we have all the information we need for this?
M is the transport matrix: this represents the contribution of every
source to every receptor given a unit emission rate
y is the vector of monitored concentrations at each receptor
e is the vector of emissions for each source
• What are the matrices T and D?
These are related to the ‘covariance’ matrices R and B that
represent the error in the monitored data and emissions data
respectively.
The diagonal components of the covariance matrices represent the
variance in the data, which is related to the uncertainty in the data;
The off-diagonal components represent how much of the error is ‘co-
varying’, or in other words, systematic.
T and D are chosen so that T^T T = R^-1 and D^T D = B^-1. Stacking
gives

A = | TM |        f = | Ty |
    | D  |            | De |

so that J(x) = (Ax - f)^T (Ax - f).
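To make the ‘fast forward’ concrete, here is a small NumPy check (dimensions and data are invented for illustration) that the stacked system reproduces J when T and D satisfy T^T T = R^-1 and D^T D = B^-1:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 4, 6                        # sources, monitors
M = rng.random((k, n))             # transport matrix
y = rng.random(k)                  # observations
e = rng.random(n)                  # first-guess emissions
R = np.diag(rng.random(k) + 0.1)   # observation error covariance
B = np.diag(rng.random(n) + 0.1)   # emissions error covariance

# Factor the inverse covariances: T^T T = R^-1, D^T D = B^-1
T = np.linalg.inv(np.linalg.cholesky(R))
D = np.linalg.inv(np.linalg.cholesky(B))

# Stack into A and f so that J(x) = (Ax - f)^T (Ax - f)
A = np.vstack([T @ M, D])
f = np.concatenate([T @ y, D @ e])

# Check the identity at an arbitrary x
x = rng.random(n)
lsq = (A @ x - f) @ (A @ x - f)
direct = ((y - M @ x) @ np.linalg.solve(R, y - M @ x)
          + (x - e) @ np.linalg.solve(B, x - e))
assert np.isclose(lsq, direct)
```

With this form, any standard least-squares solver (e.g. `np.linalg.lstsq`) minimises J.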
Optimisation technique: Monitoring data error
• The diagonal components of the monitoring data error
covariance matrix R represent the variance σObs² of the
monitored data, which is the square of the standard deviation
σObs:
– We assume that the standard deviation σObs is equal to the
uncertainty in the monitoring data expressed as a concentration,
i.e. σObs = Uobs x O, where O is the monitored concentration and
Uobs is the uncertainty expressed as a fraction.
• The off-diagonal components represent the error that co-varies
between monitors, i.e. systematic error
– We say that a given proportion of the uncertainty is due to
systematic error
Optimisation technique: Monitoring data error
• So, for any two monitors labelled i and j, their covariance is
defined as
• The factor UfObs represents the fraction of the monitoring data
uncertainty that is due to systematic error.
• This raises questions:
– How much of monitoring data error is systematic?
– Should monitors of different types be treated as independent, with
no co-variance?
– Are there any causes of monitoring data error that affect all
monitors, e.g. temperature, humidity?
– Is there co-variance between sensors for different pollutants?
R_ij = (Uobs y_i)²                          for i = j
R_ij = (Ufobs Uobs y_i)(Ufobs Uobs y_j)     for i ≠ j
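A sketch of how R might be assembled in code (the helper name is ours, not from the presented scheme):

```python
import numpy as np

def obs_covariance(y, u_obs, uf_obs):
    """Monitoring-data error covariance R.

    Diagonal:     (u_obs * y_i)^2 -- total variance at monitor i
    Off-diagonal: (uf_obs * u_obs * y_i)(uf_obs * u_obs * y_j)
                  -- the systematic (co-varying) part of the error
    """
    s = u_obs * np.asarray(y)            # standard deviation per monitor
    R = np.outer(uf_obs * s, uf_obs * s) # systematic error terms
    np.fill_diagonal(R, s ** 2)          # total variance on the diagonal
    return R
```

Monitors of different types treated as independent (as in the Cambridge runs, where reference and AQMesh monitors have no cross-covariance) would be handled by zeroing the corresponding off-diagonal blocks.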
Optimisation technique: Emissions data error
• The diagonal components of the emissions error covariance
matrix B represent the variance σEm² of the emissions data,
which is the square of the standard deviation σEm:
– We assume that the standard deviation σEm is equal to the
uncertainty in the emissions data expressed as an emission rate,
i.e. σEm = Uem x E, where E is the first-guess emission rate and
Uem is the uncertainty expressed as a fraction.
• The off-diagonal components represent the error that co-varies
between sources, i.e. systematic error
– We say that a given proportion of the uncertainty is due to
systematic error, for example traffic emissions factors
Optimisation technique: Emissions data error
• So, for any two sources labelled i and j, their covariance is
defined as
• The factor UfEm represents the fraction of the emissions data
uncertainty that is due to systematic error.
• This also raises questions:
– How much of the emissions data error is systematic? For
example, what proportion of road emissions data is due to errors
in the emission factors (systematic) and how much is due to traffic
counts (non-systematic)
– Is there any co-variance in the emissions data error for different
pollutants? PM10 and PM2.5 – yes, but PM10 and NOX?
B_ij = (Uem e_i)²                        for i = j
B_ij = (Ufem Uem e_i)(Ufem Uem e_j)      for i ≠ j
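The emissions error covariance B has the same structure; a sketch (again, the helper name is ours):

```python
import numpy as np

def em_covariance(e, u_em, uf_em):
    """Emissions error covariance B, same structure as R for monitors."""
    s = u_em * np.asarray(e)             # standard deviation per source
    B = np.outer(uf_em * s, uf_em * s)   # systematic (co-varying) part
    np.fill_diagonal(B, s ** 2)          # total variance per source
    return B
```

Setting uf_em = 0 makes every source's error independent; uf_em close to 1 forces the adjustment to be shared across all sources, e.g. to represent a common error in the emission factors.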
Preliminary results: Cambridge
• CERC have been collaborating on a project to study
ambient air quality across Cambridge using a large
number of sensor nodes and computer modelling.
• 20 AQMesh sensor pods have been placed at key
points around Cambridge, measuring air quality in near
real time.
Preliminary results: Cambridge
• The aim of the preliminary Cambridge tests presented here is
primarily to examine the behaviour of the optimisation scheme
and refine the process, i.e.
– Does it work?!
– Is it practical? If it takes weeks to run then obviously not.
– What effect does the choice of uncertainty parameters have on
outcome?
– How does the validation at the reference monitors change?
– Can we learn anything about emissions?
ADMS-Urban model setup
• One source type: 305 road sources
• One pollutant: NOX
• 25 monitors: 20 AQMesh monitors and 5 reference monitors
• Time-varying emission factors: diurnal profiles for weekdays,
Saturdays and Sundays
• Daylight saving option used to obtain correct emission factors
• 3-month period: 30/06/2016 01:00 to 30/09/2016 23:00
Optimisation process
Step 1: Run ADMS-Urban to obtain modelled
concentrations at monitoring site locations
Step 2: Form the transport matrix, emissions
vector and monitored data vector
Step 3: Run the optimisation scheme
Step 4: Create an hourly factors (.hfc) file
from the adjusted emissions data
Step 5: Re-run ADMS-Urban using the
adjusted emissions .hfc file
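Step 3 can be sketched as one regularised least-squares solve per hour. This is an illustrative reconstruction, not CERC's code; the ADMS-Urban input/output handling of Steps 1, 2, 4 and 5 is omitted:

```python
import numpy as np

def optimise_hour(M, y, e, R, B):
    """One hour's inversion: minimise
    J(x) = (y - Mx)^T R^-1 (y - Mx) + (x - e)^T B^-1 (x - e)
    via the stacked least-squares form."""
    T = np.linalg.inv(np.linalg.cholesky(R))   # T^T T = R^-1
    D = np.linalg.inv(np.linalg.cholesky(B))   # D^T D = B^-1
    A = np.vstack([T @ M, D])
    f = np.concatenate([T @ y, D @ e])
    x, *_ = np.linalg.lstsq(A, f, rcond=None)
    return x                                   # adjusted emissions
```

The per-hour results would then be written out as the hourly factors (.hfc) file used in the re-run.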
Optimisation parameters
• As described previously, we specify the following parameters
in the optimisation:

Parameter      Description
Uobs(ref)      Observation uncertainty (reference monitors)
Uobs(aqmesh)   Observation uncertainty (AQMesh sensors)
Ufobs(ref)     Observation uncertainty covariance factor (reference monitors)
Ufobs(aqmesh)  Observation uncertainty covariance factor (AQMesh sensors)
Uem            Emissions uncertainty
Ufem           Emissions uncertainty covariance factor
Optimisation technique: Effect of uncertainty
• Optimisation is working! J is reduced for all uncertainty
values
• Increasing the uncertainty values relaxes the constraints, so J
is reduced less
[Chart: observed and modelled NOx concentrations (ug/m3, 0-300)
at each monitor, comparing the original emissions with the
adjusted emissions (all sensor data). Less model error is
tolerated at the reference monitors; more is tolerated at the
AQMesh sensors.]
Effect of monitor uncertainty on concentrations
• In these inversion calculations:
– Reference monitor uncertainty set to 10%
– AQMesh sensor uncertainty set to 30%
– Covariance between Reference monitors (systematic error) set to 5%
– Covariance between AQMesh sensors (systematic error) set to 10%
– No covariance between Reference monitors and AQMesh sensors
Example hour: 7am on 5th July
Effect of emissions covariance on adjusted emissions
[Chart: percentage increase in source emission rate with
different emissions error covariance settings (zero vs 75%
emissions error covariance); changes range from -100% to +200%.]
• If emissions error covariance is zero, emissions can
change completely independently
• With non-zero emissions error covariance, emissions
have to change more consistently across all sources
Cambridge: optimisation parameters used
Parameter      Description                                                 Value
Uobs(ref)      Observation uncertainty (reference monitors)                0.1
Uobs(aqmesh)   Observation uncertainty (AQMesh sensors)                    0.3
Ufobs(ref)     Observation uncertainty covariance factor (ref. monitors)   0.05
Ufobs(aqmesh)  Observation uncertainty covariance factor (AQMesh sensors)  0.1
Uem            Emissions uncertainty                                       0.5
Ufem           Emissions uncertainty covariance factor                     0.4
All monitoring data are provisional apart from Gonville Place
reference monitor; AQMesh data were obtained in real time.
Effect of optimisation on model validation
Statistic     1.      2.      3.
Mean (Obs)    31.2    31.2    31.2
Mean (Mod)    34.5    29.3    31.3
StDev (Obs)   27.9    27.9    27.9
StDev (Mod)   31.0    26.0    27.0
MB            3.30    -1.91   0.10
NMSE          0.51    0.05    0.39
R             0.70    0.97    0.75
Fac2          0.71    0.94    0.73
1. Orig_RdsOnly
Base case model output
2. Inv_ReRun_AllSensors
Model output using optimised emissions;
optimisation carried out using all sensor data
3. Inv_ReRun_AQMeshSensorsOnly
Model output using optimised emissions;
optimisation carried out using AQMesh data
only
(Validation at reference sites only; for run 3, these data
points were not included in the inversion.)
Effect of optimisation on diurnal emissions profiles
[Charts: diurnal emission factor profiles (hours 0-22, factors
0-2.5) for weekdays, Saturdays and Sundays, comparing the
original emissions with the adjusted emissions from the
all-sensor and AQMesh-only optimisations.]
Effect of optimisation on mean emission rates
[Charts: mean adjusted emission rate (g/km/s) against mean
first-guess emission rate for each source, for optimisation
using AQMesh sensor data only and for optimisation using AQMesh
sensor data and reference monitor data; both axes 0 to 1.]
Effect of optimisation on mean emission rates
[Charts: percentage change in mean emission rate per road source
(range -60% to +60%). Optimisation using AQMesh sensor data
only: average change -2.8%. Optimisation using AQMesh sensor
data and reference monitor data: average change -3.0%.]
Including reference data causes big changes in just a few sources
Example output at reference monitors: 5th July 2016
[Charts: observed and modelled concentrations through the day at
Montague Rd, Regent St, Gonville Place, Parker St and Newmarket
Rd.]
7-day average concentration: Adjusted - Original
Example of how the optimisation process affects concentration
contours: a general reduction, but an increase in some areas.
[Map: 7-day average NOX concentration difference, adjusted minus
original; colour scale from -30 to +30 ug/m3.]
Discussion and further work
• We have developed an optimisation scheme to use data from a
network of sensors to automatically adjust emissions and thereby
improve model results
• Tests show that the scheme works and initial results are
encouraging, but there is more work to do, for example:
– More than 1 pollutant
– Other source types
• The optimisation scheme run times are also encouraging: approx 15
minutes to run 3 months of hourly data with 305 sources and 25
receptors, carrying out the optimisation for each individual hour
• The values of uncertainty and covariance factors used so far are
largely arbitrary; we need to use realistic values to obtain meaningful
results
• After Cambridge, the next step is to run the scheme with sensor data
collected at Heathrow during the NERC SNAQ project.
Thank you
• Thanks again to CERC’s partners in this work:
– Rod Jones & Lekan Popoola, Department of Chemistry, University
of Cambridge
– Dan Clarke, Cambridgeshire County Council, Cambridge
– Jo Dicks & Anita Lewis, Cambridge City Council, Cambridge
– Ian Leslie, Computer Laboratory, University of Cambridge
– Amanda Randle, AQMesh
• For more information about the ADMS-Urban dispersion
model, see www.cerc.co.uk/Urban
Editor's Notes
Explain that this is ongoing work and these are preliminary results – much more work to do!
Traditionally, dispersion models are validated by comparing measured and modelled concentrations at well-established monitoring sites; at best, modellers manually refine the dispersion modelling to minimise error at these locations; at worst, modellers calculate ‘adjustment factors’ and apply these to modelled concentrations.
Meanwhile, the increasing availability of relatively low cost air pollution sensors that are easy to install and to maintain is allowing networks of such sensors to be installed across urban areas. Although these sensors have reduced reliability and accuracy compared with traditional monitors they allow much greater spatial coverage. A systematic method that integrates data from these low cost sensors with models could deliver real benefits in terms of understanding emissions and improving model estimates.
From Kate:
I picked up an EMIT inventory from Mark Attree (maybe Chetan) from P:\FM\FM1085_Cambridge\EMIT\FM1034\Cambridge2013_20150713.MDB
This database was for the year 2013 and was made by Cambridge City Council, together with our help I believe.
I left the database as it was, other than changing the roads emission factors to be for 2016. The flows and route type were left as they were - the route type was a special one created specifically for Cambridge for 2013 - we thought this would be more accurate than the generic 2016 route type.
The exhaust emission factors used for 2016 were NAEI 2014 Urban for the year 2016.
My EMIT db is here:
P:\IP\IP155 Cambridge sensors\Working\EMIT\Cambridge2013_20150713.MDB
Other emission sources in the inventory include:
Guided buses
Car parks
Addenbrooke's boilers, car parks, bus station and internal roads
Park and ride
Queues
NAEI grid sources and point sources
Including all sensor data results in excellent agreement at the ref monitors, particularly high correlation
Odd results above the y=2x line represent points where the monitored concentration is less than the background concentration, so these data points could not be included in the inversion, i.e. concentrations at these receptors for these hours were not part of the inversion process and so did not constrain the emissions adjustment.
Very encouraging results from the AQMesh data only run: reduced bias and error, improved correlation and fraction within a factor of 2.
Small change in the diurnal profile, particularly weekdays: note the increase in the morning rush hour and the decrease in the evening rush hour.
Very little difference between runs including all sensor data and runs only including AQMesh sensor data.
The sources that change most when reference sensor data are included are those right next to the reference monitors, as you might expect.
These graphs show variation in observed and modelled concentration through the day on one day only: 5th July 2016.
The graphs show that for some sensors, e.g. Regent Street and Montague Rd, if the reference sensors are included in the inversion then the modelled concentration can be made to fit the observed concentration; at these receptors the modelled concentration is dominated by a single source. Where the inversion has a harder job making the modelled concentration fit the observed concentration (e.g. Newmarket Rd), it is because many sources impact on the receptor.