Abstract
The Comprehensive Nuclear-Test-Ban Treaty’s verification regime requires a uniform distribution of monitoring capabilities over the globe. The use of waveform cross correlation as a monitoring technique demands waveform templates from master events outside regions of natural seismicity and test sites. We populated aseismic areas with masters having synthetic templates for predefined sets (from 3 to 10) of primary array stations of the International Monitoring System. Previously, we tested the global set of master events and synthetic templates using IMS seismic data for February 12, 2013 and demonstrated excellent detection and location capability of the matched filter technique. In this study, we test the global grid of synthetic master events using seismic events from the Reviewed Event Bulletin. For detection, we use the standard STA/LTA (SNR) procedure applied to the time series of the cross correlation coefficient (CC). Phase association is based on SNR, CC, and arrival times. Azimuth and slowness estimates based on f-k analysis of cross correlation traces are used to reject false arrivals.
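The detection step described in the abstract, sliding a master template along continuous data to form a cross correlation (CC) trace and then running an STA/LTA detector on that trace, can be sketched as follows. This is a minimal illustration with synthetic data; the window lengths and threshold are assumed values, not the IDC's operational parameters.

```python
import numpy as np

def cc_trace(data, template):
    """Normalized cross correlation coefficient of a template slid along data."""
    n = len(template)
    t = (template - template.mean()) / template.std()
    out = np.empty(len(data) - n + 1)
    for i in range(len(out)):
        w = data[i:i + n]
        s = w.std()
        out[i] = 0.0 if s == 0 else np.dot((w - w.mean()) / s, t) / n
    return out

def sta_lta(x, nsta, nlta):
    """Classic STA/LTA ratio computed on the squared trace."""
    x2 = x * x
    ratio = np.zeros(len(x))
    for i in range(nlta, len(x)):
        sta = x2[i - nsta:i].mean()
        lta = x2[i - nlta:i].mean()
        ratio[i] = sta / lta if lta > 0 else 0.0
    return ratio

# Embed a template in noisy data and detect it on the CC trace.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 8 * np.pi, 100))
data = 0.1 * rng.standard_normal(2000)
data[700:800] += template
cc = cc_trace(data, template)
ratio = sta_lta(np.abs(cc), 20, 200)
```

The CC trace peaks near sample 700 where the template aligns with the buried signal, and the STA/LTA ratio of the CC trace spikes there, which is the detection statistic the abstract refers to.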
Recovery of aftershock sequences using waveform cross correlation: from catas... (Ivan Kitov)
Description of a software package for signal detection and association using waveform cross correlation. Recovery of aftershock sequences of the largest events: Sumatra 2004 and Tohoku 2011. Finding of a small aftershock of the September 9, 2016 DPRK test.
Extreme weather events pose great potential risks to ecosystems, infrastructure, and human health. Analyzing extreme weather in the observed record (satellite, reanalysis products) and characterizing changes in extremes in simulations of future climate regimes is an important task. Thus far, extreme weather events have typically been specified by the community through hand-coded, multi-variate threshold conditions. Such criteria are usually subjective, and often there is little agreement in the community on the specific algorithm that should be used. We propose a different approach: machine learning (and in particular deep learning) for solving this important problem. If human experts can provide spatio-temporal patches of a climate dataset and associated labels, we can turn to a machine learning system to learn the underlying feature representation. The trained Machine Learning (ML) system can then be applied to novel datasets, thereby automating the pattern detection step. Summary statistics, such as the location, intensity, and frequency of such events, can easily be computed as a post-process.
We will report compelling results from our investigations of Deep Learning for the tasks of classifying tropical cyclones, atmospheric rivers, and weather front events. For all of these events, we observe 90-99% classification accuracy. We will also report on progress in localizing such events: namely, drawing a bounding box (of the correct size and scale) around the weather pattern of interest. Both tasks currently use multi-layer convolutional networks in conjunction with hyper-parameter optimization. We use HPC systems at NERSC to perform the optimization across multiple nodes, and highly tuned libraries to exploit multiple cores on a single node. We will conclude with thoughts on the frontier of Deep Learning and the role of humans (vis-a-vis AI) in the scientific discovery process.
Global grid of master events for waveform cross correlation: design and testing (Ivan Kitov)
Seismic monitoring of the Comprehensive Nuclear-Test-Ban Treaty requires a uniform coverage of the earth. The global use of waveform cross correlation for monitoring purposes is hindered by the absence of master events outside the zones of seismic activity. To populate the aseismic areas we have studied two principal approaches. Around the seismically active areas, we replicate real events best representing seismicity in a given region and distribute them over a regular grid to distances ~1000 km. These replicated events are called “grand masters”. For remote aseismic areas, we calculate synthetic seismograms for a regular grid of master events and a predefined set of array stations of the International Monitoring System. Both approaches were tested and showed a resolution similar to the use of real events. Considering three types of master events, we have created a regular and uniform grid with approximately 100 km spacing between nodes as obtained from the equilibrium distribution of charged particles over the earth’s surface. We have created three versions of the grid: v0.1 with only synthetic templates, v0.2 with real masters added where possible, and v0.3 with grand masters added. The performance of v0.1 has been assessed by full processing of a few data days.
Key words: waveform cross correlation, master events, seismic monitoring, array seismology, IDC, CTBT
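The abstract mentions deriving the ~100 km grid from the equilibrium distribution of charged particles over the earth's surface. A toy version of that idea, a simplified Thomson-problem relaxation in which points repel each other like equal charges and are projected back onto the sphere, is sketched below. The step size, iteration count, and point count are arbitrary illustrative choices, not the authors' parameters.

```python
import numpy as np

def repel_on_sphere(n_points, n_iter=300, step=0.005, seed=0):
    """Distribute points quasi-uniformly on the unit sphere by letting them
    repel each other with a Coulomb-like 1/r^2 force (simplified Thomson
    problem); after each step the points are projected back onto the sphere."""
    rng = np.random.default_rng(seed)
    p = rng.standard_normal((n_points, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    for _ in range(n_iter):
        diff = p[:, None, :] - p[None, :, :]                # pairwise vectors
        d2 = (diff ** 2).sum(-1) + np.eye(n_points)          # avoid /0 on diagonal
        force = (diff / d2[..., None] ** 1.5).sum(axis=1)    # sum of 1/r^2 pushes
        p += step * force
        p /= np.linalg.norm(p, axis=1, keepdims=True)        # back onto sphere
    return p

grid = repel_on_sphere(50)
```

Scaling the unit sphere to the earth's radius (~6371 km) and choosing the point count accordingly would give the target inter-node spacing; the relaxation drives the minimum pairwise distance toward the uniform-grid value.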
Physical processes in the earth system are modeled with mathematical representations called parameterizations. This talk will describe some of the conceptual approaches and mathematics used to describe physical parameterizations, focusing on cloud parameterizations. This includes tracing physical laws to discrete representations in coarse-scale models. Clouds illustrate several of the complexities and techniques common to many physical parameterizations, including the problem of multiple scales and sub-grid-scale variability. Mathematical methods for dealing with the sub-grid scale will be discussed. Inexactness and indeterminacy problems for both weather and climate will also be covered, including indeterminate parameterizations and inexact initial conditions. Different mathematical methods, including stochastic methods, will be described and discussed, with examples from contemporary earth system models.
Climate model parameterizations of cumulus convection and other clouds that form due to small-scale turbulent eddies are a leading source of uncertainty in predicting the sensitivity of global warming to greenhouse gas increases. Even though we can write down equations governing the physics of cloud formation and fluid motion, these cloud-forming eddies are not resolved by the grid of a climate model, so the subgrid covariability of cloud processes and turbulence must be parameterized. Many approaches are used, all involving numerous subjective assumptions. Even when optimized to match present-day climate, these approaches produce a broad range of predictions about how clouds will change in a future climate.
High-resolution models, which explicitly simulate clouds and turbulence on a very fine computational grid, reproduce cloud formation more realistically when compared with observations. But it has proved challenging to translate this skill into better climate model parameterizations.
We will present one naturally stochastic approach for this using a computationally expensive approach called ‘superparameterization’ and then we will lay out a vision for how machine learning could be used to do this translation, which amounts to a form of stochastic coarse-graining. Developing the statistical and computational methods to realize this vision is a good challenge for this SAMSI year.
We present a survey of computational and applied mathematical techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties.
Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
Nonlinear filtering approaches to field mapping by sampling using mobile sensors (ijassn)
This work proposes a novel application of existing powerful nonlinear filters, such as the standard Extended Kalman Filter (EKF), some of its variants, and the standard Unscented Kalman Filter (UKF), to the estimation of a continuous spatio-temporal field that is spread over a wide area and hence, when parameterized, is represented by a large number of parameters. We couple these filters with a powerful scheme of adaptive sampling performed by a single mobile sensor, and investigate their performance with a view to significantly improving the speed and accuracy of the overall field estimation. Extensive simulation work was carried out to show that different variants of the standard EKF and the standard UKF can be used to improve the accuracy of the field estimate. This paper also aims to provide some guidelines for users of these filters in reaching a practical trade-off between the desired field estimation accuracy and the required computational load.
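As a concrete illustration of the filter family surveyed in this abstract, here is a minimal scalar Extended Kalman Filter with a nonlinear measurement function. It sketches only the EKF recursion (predict, linearize, update); the measurement model exp(x) is an arbitrary stand-in, not the paper's field-mapping observation model.

```python
import math

def ekf_scalar(zs, h, h_jac, x0=0.0, p0=1.0, q=1e-6, r=1e-4):
    """Minimal scalar EKF: identity state model, nonlinear measurement h(x).

    zs     : iterable of measurements
    h      : measurement function
    h_jac  : derivative of h (used to linearize about the current estimate)
    q, r   : process and measurement noise variances (assumed values)
    """
    x, p = x0, p0
    for z in zs:
        p = p + q                      # predict: state assumed slowly varying
        H = h_jac(x)                   # linearize measurement about estimate
        k = p * H / (H * p * H + r)    # Kalman gain
        x = x + k * (z - h(x))         # update with measurement residual
        p = (1.0 - k * H) * p
    return x

# Recover a hidden state x* = 0.7 observed only through z = exp(x).
zs = [math.exp(0.7)] * 200
estimate = ekf_scalar(zs, math.exp, math.exp)
```

The UKF would replace the explicit Jacobian with sigma-point propagation; the surrounding predict/update structure is the same.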
Multivariate dimensionality reduction in cross-correlation analysis (ivanokitov)
In master event location, a matched-filter-like technique based on cross-correlation with pre-defined waveform templates, template design plays a crucial role. Reducing the number of templates for a given region under monitoring is extremely important for both interactive and real-time processing, as it may dramatically reduce the time to deliver the resulting products and may improve the detection threshold and location of low-magnitude events.
A number of dimensionality reduction methods have been explored to minimize the number of master events needed for cross correlation based seismic event detection and location, including multidimensional data model approaches (hypercomplex and tensorial). The primary method considered is Principal Component Analysis (PCA), computed via a widely accepted method of matrix factorization, the Singular Value Decomposition (SVD). For regional seismic events, Harris (2006) used this in designing a subspace detector for cross correlation based event location. Other methods of dimensionality reduction explored, either theoretically or analytically, included Robust PCA, Kernel PCA, Incremental PCA (IPCA), the Empirical Subspace Detector (SSD) (Barrett and Beroza, 2015), and Independent Component Analysis (ICA).
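The PCA/SVD reduction described above can be sketched as follows: stack similar master templates as rows of a matrix, factor it with the SVD, and keep the few leading principal components that capture a target fraction of the signal energy. The 95% energy threshold is an assumed choice for illustration, not a value from the study.

```python
import numpy as np

def subspace_templates(waveforms, energy=0.95):
    """Compress a set of similar master-event waveforms into a small
    orthonormal basis of principal components computed with the SVD."""
    X = np.asarray(waveforms, dtype=float)
    X = X - X.mean(axis=1, keepdims=True)          # remove per-trace offset
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)      # cumulative energy fraction
    k = int(np.searchsorted(frac, energy)) + 1     # smallest k reaching target
    return Vt[:k]                                  # k orthonormal basis traces

# 20 phase-shifted copies of one waveform collapse to a 2-D subspace.
t = np.linspace(0, 4 * np.pi, 200)
W = np.array([np.cos(t + ph) for ph in np.linspace(0, 3, 20)])
basis = subspace_templates(W, 0.95)
```

Detection then correlates the continuous data against the few basis traces instead of every individual template, which is the core idea of a subspace detector.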
Performance of waveform cross correlation using a global and regular grid of... (Mikhail Rozhkov)
Seismic monitoring of the Comprehensive Nuclear-Test-Ban Treaty (CTBT) requires a uniform distribution of detection threshold over the earth. By design, the Primary Seismic Network of the International Monitoring System (IMS) provides an appropriate coverage. In order to effectively use the cross correlation technique and the archive waveforms from the International Data Centre (IDC) in global monitoring, we have introduced a dense (spacing of ~140 km) and regular grid of master events with high-quality templates at IMS array stations. In seismically active zones, we populated the grid with real masters. For aseismic zones, we developed an extended set of synthetic templates for virtual master events. This set includes a few simple double-couple source mechanisms in 1D and 2D velocity structures, as well as explosion sources of varying size and at different depths of burial. All synthetics were first tested in seismic areas and showed a detection threshold similar to that demonstrated by real templates. We have designed three versions of the grid: V1.0 with only synthetic templates, V2.0 with real masters added where possible, and V3.0 with grand masters added. Their performance has been assessed by full processing of a few data days and a direct comparison with the Reviewed Event Bulletin issued by the IDC.
Performance of waveform cross correlation using a global and regular grid of ... (Ivan Kitov)
Outline
1. Motivation
2. Global seismic monitoring: IMS
3. Global seismicity: IDC view
4. Global cross correlation grid: a design
5. Cross correlation at teleseismic distances
6. Underground nuclear explosions as master events
7. Synthetic master events
8. Principal and Independent Component Analysis
9. Testing with world seismicity of February 12, 2013
10. The DPRK test of February 12, 2013
Synthetics vs. real waveforms from underground nuclear explosions as master t... (Ivan Kitov)
The cross-correlation (CC) and master event technique is efficient in Comprehensive Nuclear-Test-Ban Treaty (CTBT) monitoring. Two primary goals of CTBT monitoring are detection and location of nuclear explosions; therefore, CC monitoring should be focused on finding such events. The use of physically adequate masters may increase the number of valid events in the Reviewed Event Bulletin (REB) of the International Data Centre by a factor of 2. Inadequate master events may increase the number of irrelevant events in the REB and reduce the sensitivity of the CC technique to valid events. In order to cover the entire earth, including vast aseismic territories, with CC based nuclear test monitoring, we conducted thorough research and defined the most appropriate real and synthetic master events representing underground explosion sources. A procedure was developed for optimizing the master event simulation based on principal component analysis with bootstrap aggregation as a dimension reduction technique, narrowing the classes of CC templates used in the detection and location process. Actual waveforms and metadata from the DTRA Verification Database were used to validate our approach. The detection and location results based on real and synthetic master events were compared.
Investigations on real time RSSI based outdoor target tracking using kalman f... (IJECEIAES)
Target tracking is essential for localization and many other applications in Wireless Sensor Networks (WSNs). A Kalman filter is used to reduce measurement noise in target tracking. In this research, TelosB motes are used to measure the Received Signal Strength Indication (RSSI). RSSI measurement doesn't require any external hardware compared to other distance estimation methods such as Time of Arrival (TOA), Time Difference of Arrival (TDoA), and Angle of Arrival (AoA). Distances between beacon and non-anchor nodes are estimated using the measured RSSI values, and the position of the non-anchor node is then estimated from these distances. A new algorithm with a Kalman filter is proposed for location estimation and target tracking in order to improve localization accuracy, called the MoteTrack InOut system. This system is implemented in real time for indoor and outdoor tracking. The localization error reduction obtained in an outdoor environment is 75%.
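The smoothing and ranging steps described here can be sketched with a scalar Kalman filter applied to the RSSI stream, followed by a log-distance path-loss conversion to distance. The noise variances, the reference power p0, and the path-loss exponent n below are assumed illustrative values that would have to be calibrated per environment; they are not figures from the paper.

```python
import math

def kalman_smooth(rssi_series, q=1e-3, r=4.0):
    """Scalar Kalman filter that smooths noisy RSSI readings (dBm)."""
    x, p = rssi_series[0], 1.0
    out = []
    for z in rssi_series:
        p += q                 # predict: true RSSI assumed slowly varying
        k = p / (p + r)        # gain from predicted vs. measurement variance
        x += k * (z - x)       # update toward the new reading
        p *= (1.0 - k)
        out.append(x)
    return out

def rssi_to_distance(rssi_dbm, p0=-40.0, n=2.5):
    """Log-distance path-loss model: p0 is the RSSI at 1 m and n the
    path-loss exponent (both assumed, to be calibrated per environment)."""
    return 10 ** ((p0 - rssi_dbm) / (10 * n))

# Smooth perturbed readings around a true level of -60 dBm, then range.
readings = [-60 + 3 * math.sin(i) for i in range(100)]
smoothed = kalman_smooth(readings)
distance = rssi_to_distance(smoothed[-1])
```

Distances estimated this way from several beacons would then feed a position solver (e.g., trilateration), which is the localization stage the abstract describes.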
RT (Ray Tracing) models are widely used in the RAN for channel modelling. Another possible application is in the processing chain of a base station, for multiple purposes: positioning, channel estimation/prediction, radio resource scheduling, and others. In this paper, an RT positioning technique is addressed for an urban outdoor scenario. The proposed robust approach achieves accuracy of several meters even in NLOS and multipath conditions. The developed RT tracking was used for multiuser (MU) precoder prediction and demonstrated significant capacity gain. The paper also discusses practical aspects of achieving high accuracy.
Adaptive Monitoring and Localization of Faulty Node in a Wireless Sensor Netw... (Onyebuchi Nosiri)
Abstract
This work seeks to solve a problem experienced in most existing remote monitoring systems by developing an enhanced monitoring system based on a Wireless Sensor Network. A Personal Area Network was evolved to increase the coverage area by spatially distributing sensor nodes to capture and transmit physical parameters such as temperature and carbon monoxide in an indoor local cooking environment. Faulty node detection and localization were also realized; this was achieved with an algorithm that takes a received signal strength value of -100 dBm as the threshold. The transmitted data were viewed via a C# interface for adaptive monitoring. The result from the Visual Basic plot shows that the sensor nodes were able to capture a temperature range of 25°C to 51°C, while the result of the CO emission shows an interval of 0.01 g/m3 to 30.0 g/m3. A comparison between data transmitted at the source and data received at the destination (sink) was carried out; a ranking test was used to validate the data received, and a correlation value of 0.9325 was obtained, which shows a high level of integrity of 93.25%.
Adaptive 3D ray tracing approach for indoor radio signal prediction at 3.5 GHz (IJECEIAES)
This paper explains an adaptive ray tracing technique for modelling indoor radio wave propagation. Compared with the conventional ray tracing approach, the presented approach offers an optimized method to trace the travelling radio signal by introducing flexibility and adaptive features into the ray launching algorithm for indoor scenarios. The simulation results were compared with measurement data for verification. The proposed adaptive technique showed improvements in simulation time, power level, and coverage in modelling indoor radio wave propagation, and may benefit the development of signal propagation simulators for future technologies.
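A basic building block behind ray-tracing propagation models of this kind is the image method for specular reflection: mirroring the source across a wall turns a single-bounce path into a straight line. The toy 2-D version below illustrates the geometric principle only; it is unrelated to the paper's specific adaptive ray launching algorithm.

```python
import math

def reflect_point(p, wall_y):
    """Mirror a 2-D point across a horizontal wall y = wall_y (image method)."""
    x, y = p
    return (x, 2 * wall_y - y)

def reflected_path_length(src, rcv, wall_y):
    """Length of the single-bounce ray path off the wall: the reflected path
    from src to rcv equals the straight line from the image source to rcv."""
    ix, iy = reflect_point(src, wall_y)
    rx, ry = rcv
    return math.hypot(rx - ix, ry - iy)

# Transmitter and receiver both 1 m above a floor at y = 0, 4 m apart.
path = reflected_path_length((0.0, 1.0), (4.0, 1.0), 0.0)
```

Path length (and thus delay and free-space loss) for each reflected ray follows directly from this construction; adaptive ray launching essentially decides which such rays are worth tracing.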
Automatic and interactive spot check of the IDC bulletins (REB, SEL3) and XSE... (ivanokitov)
Aimed at improvement of the Reviewed Event Bulletin (REB) quality
Final level of the interactive review hierarchy (independent review)
Provides an extended check of bulletin and analysis quality
Identifying both consistencies and inconsistencies in the REB
Reviewed areas usually include event location anomalies, seriously mislocated events, missing large events, and unqualified or ‘bogus’ events representing incorrect or invalid data associations, incorrect detections, or REB events not meeting the event definition criteria.
One more implementation of the master-event approach
Assessing the consistency, quality, and completeness of the Reviewed Event Bu... (Ivan Kitov)
The Reviewed Event Bulletin (REB) of the IDC includes more than 500,000 events with associated seismic phases. The quality of these events and the completeness of the bulletin depend on multistage automatic processing followed by interactive analysis. The IDC raw data archive makes it possible to apply the method of waveform cross correlation (WCC) to assess the similarity between seismic signals associated with REB events, and thus the overall bulletin consistency. For cross correlation, we create a global set of master events (MEs) in the areas where reliable seismic events are available in the REB. Using only events within 3 degrees of a given ME, we apply Principal Component Analysis to the signals at each associated station. The major components are used to build synthetic MEs. Using real and synthetic MEs, we process continuous data in a specified region with the aim of finding new REB-compatible events that are missing from the REB. Therefore, the developed method allows testing of REB consistency, quality, and completeness in any specified region or globally. It can also be thought of as an alternative to the manual spot check during an independent review of the REB in routine IDC event analysis, or as an additional tool for the independent reviewer.
Detection and location of small aftershocks using waveform cross correlationIvan Kitov
Aftershock sequences of earthquakes with magnitudes 5.0 and lower are difficult to detect and locate by sparse regional networks. Signals from aftershocks with magnitudes 2 to 3 are usually below detection thresholds of standard 3-C seismic stations at near regional distances. For seismic events close in space, the method waveform cross correlation (WCC) allows to reduce detection threshold by at least a unit of magnitude and to improve location precision to a few kilometres. Therefore, the WCC method is directly applicable to weak aftershock sequences. Here, we recover seismic activity after the earthquake near the town of Mariupol (Ukraine) occurred on August 7, 2016. The main shock was detected by many stations of the International monitoring system (IMS), including the closest primary IMS array stations AKASG (6.62 deg.) and BRTR (7.81), as well as 3-C station KBZ (5.00). The International data centre located this event (47.0013N, 37.5427E), estimated its origin time (08:15:4.1 UTC), magnitude (mb=4.5), and depth (6.8 km). This event was also detected by two array stations of the Institute for Dynamics of Geospheres (IDG) of the Russian Academy of Sciences (RAS): portable 3-C array RDON (3.28), which is the closest station, and MHVAR (7.96). Using signals from the main shock at five stations as waveform templates, we calculated continuous traces of cross correlation coefficient (CC) from the 7th to the 11th of August. We found that the best templates should include all regional phases, and thus, have the length from 80 s to 180 s. For detection, we used standard STA/LTA method with threshold depending on station. The accuracy of onset time estimation by the STA/LTA detector based on CC-traces is close to one sample, which varies from 0.05 s at BRTR to 0.005 s for RDON and MHVAR. Arrival times of all detected signals were reduced to origin times using the observed travel times from the main shock. 
Clusters of origin times are considered as event hypotheses in the phase association procedure. As a result, we found 12 aftershocks with magnitudes between 1.5 and 3.5. These small events were detected neither by the IDC nor by the near regional network of the Geophysical Survey of RAS, which has three closest 3-C stations at distances of 2.2 to 3.5 degrees from the studied earthquake. We also applied procedure of relative location and all aftershocks were found within a few km from the main shock.
Waveform cross correlation: coherency of seismic signals estimated from repea...Ivan Kitov
Waveform cross correlation (WCC) is an optimal detection technique for signals from spatially close seismic sources. Observations at various distances from a multitude of sources in a variety of seismotectonic and geological conditions demonstrate that signals from close events recorded at common stations are characterized by high level of similarity. Signals from remote sources are less similar mainly because of the variations in propagation paths. Different parts of a complete seismic wavetrain have different sensitivity to the propagation path. The initial part retains general characteristics of the source time function. The shape of later seismic phases is chiefly defined by propagation path. Here, we investigate the level of similarity between hundreds of signals generated by chemical blasts within a phosphate mine in Jordan and measured by 5 seismic stations at near-regional distances. We have revealed a much higher similarity of the first 3 s to 5 s of signals from different blasts, also at distances of about 20 km, at the same station as well as at different stations. This observation evidences in favour of high coherency in the initial part of signals at all stations. We also demonstrate that the observed coherency allows the use of very short (say, 3 s) waveform templates for detection and further phase association of signals based on cross correlation. Longer templates are characterized by larger overall signal specificity, which may reduce detection threshold and spatial resolution of the WCC method. However, different propagation paths within the same geological province may have similar transfer functions producing regular seismic phases with similar shapes independent on source position. This may increase the number of false detections from remote sources. We compare the performance of short and long waveform templates using detection statistics and the results of event hypotheses creation and further event location.
Remote detection of weak aftershocks of the DPRK underground explosions using...Ivan Kitov
We have estimated the performance of discrimination criterion based on the P/S spectral amplitude ratios obtained from six underground tests conducted by the DPRK since October 2006 and six aftershocks induced by the last two explosions. Two aftershocks were detected in routine processing at the International Data Centre of the Comprehensive Nuclear-Test-Ban Treaty Organization. Three aftershocks were detected by a prototype waveform cross correlation tool with explosions as master events, and one aftershock was found with the aftershocks as master events. Two seismic arrays USRK and KSRS of the International Monitoring System (IMS) and two non-IMS 3-component stations SEHB (South Korea) and MDJ (China) were used. With increasing frequency, all stations demonstrate approximately the same level of deviation between the Pg/Lg spectral amplitude ratios belonging to the DPRK explosions and their aftershocks. For a single station, simple statistical estimates show that the probability of any of six aftershocks not to be a sample from the explosion population is larger than 99.996% at the KSRS and even larger at USRK. The probability of any of the DPRK explosion to be a representative of the aftershock population is extremely small as defined by the distance of 20 and more standard deviations to the mean explosion Pg/Lg value. For network discrimination, we use the Mahalanobis distance combining the Pg/Lg estimates at three stations: USRK, KSRS and MDJ. At frequencies above 4 Hz, the (squared) Mahalanobis distance, D2, between the populations of explosions and aftershocks is larger than 100. In the frequency band between 6 and 12 Hz at USRK, the aftershocks distance from the average explosion D2>21,000. Statistically, the probability to mix up explosions and aftershocks is negligible. These discrimination results are related only to the aftershocks of the DPRK tests and cannot be directly extrapolated to the population of tectonic earthquakes in the same area.
Investigation of repeated events at Jordan phosphate mine with waveform cross...Ivan Kitov
More than 1500 events were measured at 3 seismic stations. Their signals are processed using waveform cross correlation and Principal Component Analysis. The best waveforms and eigenvectors are used for detection.
Investigation of repeated blasts at Aitik mine using waveform cross correlationIvan Kitov
We present results of signal detection from repeated events at the Aitik and Kiruna mines in Sweden as based on waveform cross correlation. Several advanced methods based on tensor Singular Value Decomposition is applied to waveforms measured at seismic array ARCES, which consists of three-component sensors.
The use of waveform cross correlation for creation of an accurate catalogue o...Ivan Kitov
Page 3
In the current study of mining activity within the Russian platform, we use the advantages of location and historical bulletins/catalogues of mining explosions recorded by small-aperture seismic array Mikhnevo (MHVAR). The Institute of Geospheres Dynamics (IDG) of the Russian Academy of Sciences runs seismic array MHVAR (54.950N; 37.767E) since 2004.
Small-aperture seismic array “Mikhnevo” includes ten vertical stations (solid triangles), with one station in the geometrical centre of the array (C00) and other nine stations distributed over three circles with radii of 130 m, 320 m, and 600 m. The array aperture in approximately 1.1 km. Two 3C stations (solid triangles in circles) were added to the outer circle in order to improve the overall stations sensitivity (detection threshold) and resolution. All stations are equipped with short-period seismometers SM3-KV, which are characterized by flat response between 0.8 Hz and 30 Hz and gain of 180,000 [Vs/m]. Later, a 3C broad band station (BB) was installed in the centre of the array for surface wave measurements. The array response function (only for 12 vertical channels) is similar to that for many small-aperture arrays. Such arrays are designed to measure high-frequency signals from regional and near-regional sources with magnitudes above 1.5-2.0.
Page 4
MHVAR detects regional seismic phases (Pn, Sn, Lg, Rg) from various sources. Figure shows some selected waveforms with source-station distance decreasing up-down. Correspondingly the length of records decreases – for the closest mines it’s harder to distinguish between P and S phases.
Page 5
More than 50 areas at regional and near regional distances with different levels of mining activity have been identified by MHVAR. Since 2004, thousands of events have been reported in the IDG seismic catalogue as mining explosions. The IDG publishes this mining event catalogue as a part of the annual issues of “Earthquakes in Russia”, which is available for the broader geophysical community. The map shows several selected mines at near-regional distances where MHVAR successfully detects events with magnitudes 1.0 and lower. We also show a few selected mines at regional distances with the largest events of magnitude (ML) 2.0 and above. Such events should be also detected by IMS arrays. Joint interpretation of signals detected by MHVAR and IMS arrays allows significant improvements in signal detection, location, characterization and identification of events in the IDG catalogue when the historical data are revisited. The work on joint analysis of the IDG and IMS data is possible under the “Contract for limited access to IMS data and IDC products” between the CTBTO and IDG, which allows obtaining data through 2011.
To begin with, we have chosen blasts with larger magnitudes from well-known ironstone mine Mikhailovskiy (red circle), which is situated at regional distances somewhere between MHVAR (~330 km) and IMS array AKASG
Inflation, Unemployment, and Labor Force: The Phillips Curve and Long-term Pr...Ivan Kitov
Inflation, Unemployment, and Labor Force: The Phillips Curve and Long-term Projections for Japan
presented at
Euro Area Business Cycle Network (EABCN)
Inflation Developments after the Great Recession
Eltville (Frankfurt), 6-7 December 2013
Hosted by the Deutsche Bundesbank;
Sponsored by the EABCN
Testing the global grid of master events for waveform cross correlation with the Reviewed Event Bulletin
Dmitry Bobrov (1), Ivan Kitov (2), and Mikhail Rozhkov (1)
(1) International Data Centre, CTBTO; (2) Institute of Geospheres Dynamics, Russian Academy of Sciences
Abstract
The Comprehensive Nuclear-Test-Ban Treaty’s verification regime requires uniform distribution of monitoring capabilities over the globe. The use of waveform cross correlation as a monitoring technique demands waveform templates from master events outside regions of natural seismicity and test sites. We populated aseismic areas with masters having synthetic templates for predefined sets (from 3 to 10) of primary array stations of the International Monitoring System. Previously, we tested the global set of master events and synthetic templates using IMS seismic data for February 12, 2013 and demonstrated excellent detection and location capability of the matched filter technique. In this study, we test the global grid of synthetic master events using seismic events from the Reviewed Event Bulletin. For detection, we use the standard STA/LTA (SNR) procedure applied to the time series of the cross correlation coefficient (CC). Phase association is based on SNR, CC, and arrival times. Azimuth and slowness estimates based on f-k analysis of cross correlation traces are used to reject false arrivals.
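The detection step named in the abstract (correlate a master template against continuous data, then run an STA/LTA detector on the resulting CC trace) can be sketched as follows. This is a minimal illustration, not the IDC implementation; the window lengths and threshold are assumed values, and the function names are our own.

```python
import numpy as np

def cc_trace(data, template):
    """Normalized cross correlation of a waveform template
    against continuous data; returns one CC value per offset."""
    n = len(template)
    t = template - template.mean()
    t /= np.linalg.norm(t) + 1e-12
    out = np.empty(len(data) - n + 1)
    for i in range(len(out)):
        w = data[i:i + n] - data[i:i + n].mean()
        out[i] = np.dot(w, t) / (np.linalg.norm(w) + 1e-12)
    return out

def sta_lta_detect(cc, sta=5, lta=200, threshold=3.0):
    """STA/LTA detector on the |CC| trace: short-term average over
    the current window divided by the long-term average over the
    preceding window; returns indices where SNR exceeds the threshold."""
    x = np.abs(cc)
    c = np.concatenate(([0.0], np.cumsum(x)))
    hits = []
    for i in range(sta + lta, len(x)):
        sta_avg = (c[i] - c[i - sta]) / sta
        lta_avg = (c[i - sta] - c[i - sta - lta]) / lta
        if sta_avg / (lta_avg + 1e-12) > threshold:
            hits.append(i)
    return hits
```

Because the CC trace of a repeating source is sharply peaked at the correct offset, the STA/LTA ratio on |CC| isolates matches even when the raw signal is buried in noise.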
Corresponding author: Ivan Kitov (IDG RAS)
E-mail: ikitov@mail.ru
International Data Centre: http://www.ctbto.org
IDG RAS: http://idg.chph.ras.ru
Disclaimer: The views expressed on this poster are those of the authors and do not necessarily reflect the views of the CTBTO Preparatory Commission and the Institute of Geospheres Dynamics, RAS.
Conclusion
This work extends the research we presented at the Science and Technology 2013 conference regarding the construction of the Global Grid of master events for seismic monitoring with waveform cross correlation.

Using the catalogue of seismic events (REB) measured in the North Atlantic region by the IDC, we have made preliminary estimates of the portion of valid events obtained by waveform cross correlation. Machine learning was a natural choice for the selection of valid events from the extremely large amount of data. As in other areas, we doubled the number of events in the REB, with all new events matching the IDC event definition criteria.

The CC-based Global Grid monitoring system prototype developed at the IDC during the past years is now populated with one synthetic template obtained by PCA (Principal Component Analysis) of real waveforms from about a hundred underground nuclear tests. One template waveform was replicated over all stations and individual channels.

Before one can use machine learning at the global level, it is necessary to create a training dataset populated with valid events created by waveform cross correlation. Here we used the REB for 2013 to build a cross correlation event list and processed only data at relevant stations and in time intervals around known events. We have demonstrated a significant increase in the number of detected arrivals when cross correlation is used. The obtained events and arrivals will be used for training of various classification algorithms applied in the framework of continuous processing with cross correlation.

The Global CC Grid technique gives an opportunity for a complete implementation of CC-based global detection and location. The specific features of the P-waves from underground nuclear tests used in this study can reduce the global detection threshold of seismic monitoring under the CTBT by 0.4 to 0.5 units of magnitude. This corresponds to a reduction in the test yield by a factor of 2 to 3 for any location and depth. Considering the history of seismic monitoring of UNEs, this is a crucial improvement, which can also be enhanced by a more effective use of IMS array stations.
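As a plausibility check on the quoted numbers (our own back-of-the-envelope arithmetic, assuming a magnitude–yield relation of the form mb = a + b·log10(Y) with a slope b close to 1), a detection-threshold gain of Δmb maps onto a yield factor of 10^(Δmb/b):

```python
def yield_factor(delta_mb, b=1.0):
    """Yield ratio equivalent to a detection-threshold gain of
    delta_mb magnitude units, for mb = a + b*log10(Y)."""
    return 10 ** (delta_mb / b)
```

With b ≈ 1, a gain of 0.4 to 0.5 magnitude units gives a factor of about 2.5 to 3.2, consistent with the "factor of 2 to 3" stated above; a different slope would shift this range.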
Figure 6. Real seismograms with varying distance, depth, and UNE source functions which were used in PCA.
An underground nuclear test can be conducted in any place on the planet, not only in seismic regions. Figure 1 presents a map of events from the Reviewed Event Bulletin (REB) as found by the International Data Centre (IDC) in 2013. Figure 2 shows the seismic network of the International Monitoring System (IMS) and the relative positions of selected historical underground nuclear tests. The difference between the geographical distributions of earthquakes and explosions is striking.

The technique of waveform cross correlation (matched filter) is naturally based on high-quality waveform templates from a representative set of master events. In seismic areas, one can use waveform templates from natural events. Similarly, one can use signals from underground nuclear tests where available. For the rest of the world, we need the best possible templates for the purpose of seismic monitoring. Here we present and test the performance of the uniformly distributed Global Grid (GG) of master events designed for the best performance of waveform cross correlation. Figure 3 presents the global coverage by master events. A small segment of the GG is shown in Figure 4. Each node contains a master event with templates at IMS stations. Since the IMS network is sparse, the number of stations associated with a given master may vary from 3 to 10. The quality, sensitivity, and resolution of an array depend on many factors, of which its aperture and number of sensors are likely the most important. Considering the design and historical performance of the primary IMS arrays, we distributed them over three quality groups. For a given master, we use arrays from Groups 2 and 3 only when no array from Group 1 is available. The largest number of IMS stations associated with a master is 10; the smallest is 3, as dictated by the strict IDC requirement for a valid REB event. There are some areas where 3 primary arrays are not available. We are going to use templates from 3-C stations to build master events for nodes in these uncovered areas.

Each node or master event is responsible for a circular footprint of 125 km in radius. The distance between nodes is approximately 140 km. Therefore, the GG covers the whole Earth without any blind areas. It is important that the zones of responsibility of neighboring nodes intersect. Figure 5 presents the detailed design of the nodes. For all 25,000 nodes, cross correlation with templates at each associated station is calculated.
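A near-uniform global grid with ~140 km spacing (about 25,000 nodes for the whole Earth) can be generated, for illustration, with a Fibonacci sphere. This is an assumed construction; the poster does not specify how the GG nodes were actually placed.

```python
import math

def fibonacci_grid(n=25000):
    """Place n near-uniform nodes on a sphere; returns (lat, lon) in degrees."""
    golden = (1 + math.sqrt(5)) / 2
    nodes = []
    for i in range(n):
        z = 1 - (2 * i + 1) / n               # uniform in sin(latitude)
        lat = math.degrees(math.asin(z))
        lon = math.degrees(2 * math.pi * i / golden) % 360 - 180
        nodes.append((lat, lon))
    return nodes

def mean_spacing_km(n, radius_km=6371.0):
    """Equivalent spacing when n nodes share the Earth's surface:
    each node owns an area 4*pi*R^2/n, so spacing ~ sqrt(area)."""
    return math.sqrt(4 * math.pi * radius_km ** 2 / n)
```

For n = 25,000 the equivalent spacing is about 143 km, which matches the ~140 km node separation and the 125 km footprint radius quoted above (neighbouring footprints overlap).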
Figure 2. A map of the IMS seismic network with historical UNEs. Blue circles – primary arrays; blue triangles – primary 3-C stations. Yellow circles – auxiliary arrays; yellow triangles – auxiliary 3-C stations. Red stars – underground nuclear explosions. Only primary arrays are used for cross correlation.
Figure 4. Sample segment of the Global Grid.
• Spacing between grid points (masters) ~140 km
• P-wave templates from three to ten IMS primary arrays per master
• Distance for the P-phase from 6 to 96 degrees
• At least three IMS stations to create an REB event
Feasible templates:
- Real waveforms from events near the GG nodes
- Grand masters – the best events replicated over larger sets of GG nodes
- Synthetic waveforms – any source function, mechanism, depth, location, etc.
Figure 9. Finding the DPRK-2013 with the Global Grid. The number of stations associated with the event hypotheses obtained by local association. Only the hypotheses with 3 or more associated stations are shown. The best hypothesis has 9 associated stations and is the closest to the test position obtained from satellite data. Detections at the array stations were found using the first PCA component shown in Figure 7.
Figure 8. For a set of master events around the DPRK-2013, sample distributions of the cross correlation coefficient between master templates and real waveforms from the DPRK-2013 for array stations AKASG and WRA. Almost all masters have detections at AKASG and WRA. These detections, together with detections at 8 other IMS stations, are used to build event hypotheses.
Figure 1. A map of REB events found during 2013. Red circles – events found by cross correlation (i.e., two or more REB phases are within 4 s). White circles – events not found.
Figure 3. Global Grid for Cross Correlation. There are 25,000 nodes in the GG. Each node represents a master event with 3 to 10 templates at primary IMS array stations.
SEISMIC MONITORING: EXPLOSIONS CAN BE CONDUCTED ANYWHERE
GLOBAL CROSS CORRELATION GRID: DESIGN
Figure 5. Local Association and Conflict Resolution
1. Five circles with ~25 km increment in radius.
2. 91 nodes for origin time calculation (green points)
3. All hypotheses at the outer circle (red points) are
rejected, since they must be created by the neighboring
masters.
Global Grid design, 25000 nodes
Array stations arranged by quality:
Group 1 = WRA, TORD, MKAR, ILAR, GERES, PDAR, CMAR, SONM, AKASG, BRTR, GEYT
Group 2 = ASAR, ZALV, YKA, ARCES, TXAR, KSRS
Group 3 = USRK, FINES, NVAR, NOA, MJAR
GCCG quality criteria:
XSEL event: NSTAmin = 3; dTorigin = 6s; AZGAPmax = 330º
Detections: SNRmin = 0.5; SNR_CCmin = 2.5; |CC|min = 0.2;
Quality check: FKSTATmin = 2.5; AZRESmax = 20.0º; SLORESmax = 2.0 s/º
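The detection and quality thresholds above can be applied as a simple filter. A minimal sketch; the field names of the detection record are illustrative, while the threshold values are the GCCG criteria quoted above.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    snr: float       # standard STA/LTA SNR of the raw trace
    snr_cc: float    # STA/LTA SNR of the cross correlation trace
    cc: float        # cross correlation coefficient at the peak
    fkstat: float    # F-K statistic of the cross correlation traces
    az_res: float    # azimuth residual, degrees
    slo_res: float   # slowness residual, s/degree

def passes_quality(d: Detection) -> bool:
    """Apply the detection and quality-check thresholds listed above."""
    return (d.snr >= 0.5 and d.snr_cc >= 2.5 and abs(d.cc) >= 0.2
            and d.fkstat >= 2.5 and abs(d.az_res) <= 20.0
            and abs(d.slo_res) <= 2.0)

# Example: a detection meeting all criteria
print(passes_quality(Detection(1.2, 3.0, 0.35, 2.8, 5.0, 0.5)))  # True
```

Only detections passing all criteria are kept for phase association; a failure of any single threshold rejects the arrival.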
WAVEFORM TEMPLATES, CROSS CORRELATION, EVENT BUILDING
TESTING GLOBAL GRID WITH THE REVIEWED EVENT BULLETIN
Figure 7. PCA 1st component
Despite the original design aimed at seismic
monitoring of nuclear tests, waveform
templates for the GG can be selected from a
broader set of signals depending on the nature
of the target seismic sources. Here, we follow
the approach known as signal dimensionality
reduction (please also refer to our poster T3.3-
P10).
Dimensionality reduction allows finding the
template that best describes the set of
records of nuclear explosions conducted over a
wide range of epicentral distances, rock types,
yields, and depths of burial. Some examples of
such waveforms are presented in Figure 6.
These real seismograms have different
sampling rates and were recorded by sensors
with different responses; therefore, they were
all reduced to a common sensor response and
sampling rate.
Principal Component Analysis (PCA)
performed via Singular Value Decomposition
(SVD) is used as the tool. Figure 7 shows
the first principal component obtained by PCA.
This waveform was used to build synthetic
templates for all channels of all arrays. The
only difference between the templates is the
sampling rate, which varies from 20 Hz to
80 Hz across IMS array stations. For a given array,
time delays between channels were calculated
from theoretical travel-time curves according
to the master/station positions.
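The extraction of the first principal component can be sketched as follows. This is a minimal illustration with NumPy, assuming the records have already been resampled to a common rate and aligned on the P onset; the normalization and sign conventions are our own choices, not a prescription from the poster.

```python
import numpy as np

def first_principal_component(records: np.ndarray) -> np.ndarray:
    """First PCA component of a set of aligned waveforms.

    records: (n_records, n_samples) array; each row is one explosion
    record, resampled to a common rate and aligned on the P onset.
    """
    # Demean each record, then take the SVD; the first right singular
    # vector is the waveform explaining the most variance in the set.
    x = records - records.mean(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    pc1 = vt[0]
    # Fix the arbitrary SVD sign so the template correlates
    # positively with the stack of the input records.
    if np.dot(pc1, x.mean(axis=0)) < 0:
        pc1 = -pc1
    # Scale to unit peak amplitude for use as a template.
    return pc1 / np.abs(pc1).max()
```

The resulting waveform plays the role of the template in Figure 7: it is replicated across channels with theoretical time delays and resampled to each station's rate.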
Relevant signals are detected by STA/LTA applied to the
trace of the cross correlation coefficient averaged over all
channels without any time shift: for two events close
in space, cross correlation peaks simultaneously at all
channels. The detected signals have to match a number
of quality criteria obtained by statistical and F-K
analysis. For all detections, arrival times are
projected back to the master events using theoretical travel
times. Using the set of origin times for a given master,
we build slave events. In order to better predict the
origin times, each node has five circles of hypothetical
locations for the sought (slave) events, at which travel times
to the stations relevant to the given master event are
predicted (Figure 5); the spacing between circles is 25
km and the outer circle has a radius of 125 km. Because
the outer circle (red points) of a given master intersects
the internal circles of the neighboring masters, we reject
all final event hypotheses built on this outer circle; such
hypotheses must belong to the neighboring masters. Other
conflicts between event hypotheses belonging to
neighboring masters are resolved by the number of
stations and the standard deviation of the origin times of
the associated arrivals.
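The detection step above can be sketched for a single channel. In the array case the CC traces of all channels are averaged without time shift before STA/LTA is applied; the window lengths below are illustrative placeholders, not the operational settings.

```python
import numpy as np

def cc_trace(template: np.ndarray, data: np.ndarray) -> np.ndarray:
    """Normalized cross correlation of a template with a longer trace."""
    n = len(template)
    t = template - template.mean()
    t /= np.linalg.norm(t)
    out = np.empty(len(data) - n + 1)
    for i in range(len(out)):
        w = data[i:i + n] - data[i:i + n].mean()
        denom = np.linalg.norm(w)
        out[i] = np.dot(t, w) / denom if denom > 0 else 0.0
    return out

def sta_lta(x: np.ndarray, nsta: int, nlta: int) -> np.ndarray:
    """Classic STA/LTA ratio on |x| (here, a CC trace)."""
    ax = np.abs(x)
    sta = np.convolve(ax, np.ones(nsta) / nsta, mode="same")
    lta = np.convolve(ax, np.ones(nlta) / nlta, mode="same")
    return np.where(lta > 0, sta / lta, 0.0)
```

A detection is declared where the STA/LTA of the CC trace exceeds the SNR_CC threshold; the peak position then gives the arrival time to be projected back to the master.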
USING THE REB FOR TESTING GG PROCEDURES
Figure 10. Locations of 252 new
qualified event hypotheses, which
match the REB criteria.
In order to assess the portion of correct
hypotheses which can potentially be built by the
GG, we tested a smaller case: seismicity in the
North Atlantic region. Because of the
dimensionality of the task, machine learning was
used, with the MATLAB TreeBagger as the first
choice. Following the general procedure, we
compiled a dataset containing two classes of
arrivals, associated with valid and invalid
(only according to the IDC definition) events.
To populate the set of valid events and
detections, we selected the event hypotheses
obtained by the GG whose origin times are close
to those of the events reported in the REB.
Overall, we selected 258 valid event hypotheses
obtained by the GG prototype for the training
set. The set of invalid events was created by a
random choice of ~3% (1859) of the hypotheses
far enough (600 s) from the REB events. In
total, the class of valid events included 1031
arrivals at 7 IMS array stations, and there were
5631 arrivals associated with the events not
matching the REB criteria.
After training, we classified 177,520 arrivals in
48,443 hypotheses with the MATLAB
TreeBagger. The classification allowed us to
build 252 new valid events defined by three or
more valid arrivals, as required by the IDC event
definition criteria. Figure 10 displays the locations
of these qualified events, which repeat the
pattern of Mid-Atlantic Ridge seismicity.
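MATLAB's TreeBagger is a bagged decision-tree ensemble; a close open-source analogue is scikit-learn's RandomForestClassifier. The sketch below uses synthetic two-feature data (stand-ins for arrival attributes such as SNR_CC and CC) rather than the real training set, purely to illustrate the train/classify workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic training set: valid arrivals have higher SNR_CC and CC
# on average than arrivals of invalid (non-REB) event hypotheses.
n = 1000
X_valid = rng.normal(loc=[3.5, 0.40], scale=0.3, size=(n, 2))
X_invalid = rng.normal(loc=[2.0, 0.15], scale=0.3, size=(n, 2))
X = np.vstack([X_valid, X_invalid])
y = np.array([1] * n + [0] * n)  # 1 = valid, 0 = invalid

# Bagged decision trees, analogous to TreeBagger with 100 trees.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Classify a new arrival described by (SNR_CC, CC).
label = clf.predict([[3.2, 0.35]])
```

In the study itself the trained classifier labels each of the 177,520 arrivals, and hypotheses retaining three or more valid arrivals become candidate events.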
To be included in the REB, all selected
hypotheses have to be confirmed interactively.
As in our previous studies, cross correlation
approximately doubles the number of REB
events and reduces the detection threshold by
0.4 magnitude units.
In this study, we have cross correlated the templates based on the first PCA
component in Figure 7 with the waveforms likely containing signals from the
seismic events included in the 2013 REB. Of 33,710 events, we used the
30,513 which have two or more primary IMS arrays. Only templates from the
master events within 1500 km of the REB events were used. Time windows
spanned -5 min to +5 min around the expected arrival times at the stations
related to each master. There are 22 IMS stations in the study:
AKASG, ARCES, ASAR, BRTR, CMAR, FINES, GERES, GEYT, ILAR,
KSRS, MJAR, MKAR, NOA, NVAR, PDAR, SONM, TORD, TXAR, USRK,
WRA, YKA, and ZALV.
The reduced set of masters (around 300 instead of 25,000 in the GG) and the
short time windows allow easy calculation for a one-year period.
All cross correlation detections for all masters were used to build event
hypotheses and, after local association and conflict resolution, the best events
were saved in the XSEL (cross correlation event list). At this stage, the principal
goal is to find more arrivals at the IMS array stations than are listed in the REB,
for future use in machine learning. We define an REB arrival as found when there
is a cross correlation detection within 4 s. There are 16,318 REB events (of
30,513) having two or more arrivals found by cross correlation. At the same
time, we find many arrivals at other stations of the same master which were
not detected by the standard procedures. Figure 11 shows the numbers of found
REB detections and added detections for the 22 stations. Figures 12 through 15
present some parameters of the found REB and added arrivals. The SNR level
of the added detections falls below that of the found REB arrivals, which explains
why these arrivals are missing from the REB.
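The split into "found REB" and "added" arrivals reduces to a tolerance match of detection times against the REB arrival times. A minimal sketch with the 4 s tolerance stated above; times are in seconds and purely illustrative.

```python
def match_arrivals(reb_times, cc_times, tol=4.0):
    """Split CC detections into 'found REB' (within tol seconds of an
    REB arrival) and 'added' (detections with no REB counterpart)."""
    found, added = [], []
    for t in cc_times:
        if any(abs(t - r) <= tol for r in reb_times):
            found.append(t)
        else:
            added.append(t)
    return found, added

found, added = match_arrivals([100.0, 250.0], [98.5, 130.0, 252.0])
print(len(found), len(added))  # 2 1
```

Counting the "found" lists per station yields the statistics of Figure 11, while the "added" lists supply the candidate arrivals absent from the REB.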
For a synthetic template, theoretical time delays between channels and the
absence of empirical SASCs may lead to poor azimuth and slowness estimates
from the F-K analysis of cross correlation traces. Before building
event hypotheses, we used the closest masters to detect signals near those
existing or expected in the REB. The number of found and added arrivals has
increased. Figures 16 through 18 demonstrate that there are many more
detections with slowness and azimuth residuals beyond the predefined limits.
The best stations, like WRA, have a very low proportion of such detections, as
they provide reliable estimates of the studied parameters. However, judging by
the distributions of CC and SNR_CC for these detections, many of them are of
relatively high quality and thus can be used in local association.
Figure 11. The number of found REB and added detections. The
most effective stations (WRA, ASAR, MKAR, …) have a
relatively low proportion of added events. Poor stations
may add as many arrivals as found in the REB.
Figure 14. Comparison of detection bands
for stations CMAR and ARCES. The best
bands correspond to everyday practice of
interactive analysis.
Figure 12. The number of found REB and added detections
as a function of standard SNR and SNR_CC for station
WRA.
Figure 13. The number of found REB and added
detections as a function of standard CC for station
AKASG.
Figure 15. Azimuth residuals for CC detections
as obtained by FK-analysis for stations ASAR
and MKAR. Two cases are shown – found REB
and added detections.
Figure 18. Probability density functions (PDF) for
the distribution of the number of arrivals found by
CC within the predefined Azres and Slowres limits
and detections out of these limits as a function of
CC.
Figure 17. Probability density functions (PDF) for the
distribution of the number of arrivals found by CC within the
predefined Azres and Slowres limits and detections out of
these limits as a function of standard SNR and SNR_CC.
Figure 16. The number of arrivals found by CC within the
predefined Azres and Slowres limits and added detections out of
these limits. Most effective stations WRA, ASAR, MKAR …
have very low proportions of added arrivals.
SnT2015 Poster No. T3.3-P34