“Accelerating SARS-CoV-2 Molecular Dynamics Studies with Optical Random Features” by Amélie Chatelain, Machine Learning Engineer @ LightOn
Abstract: Viruses -- or more generally, biomolecular systems -- are made of atoms which constantly fluctuate. However, they can occasionally undergo large-scale structural variations, called conformational changes. We explore detecting online these transitions in numerical simulations: this is a key to analyse and improve on drug discovery and design.
15. Sächsisches GI/GIS/GDI Forum und Club of Ossiach Workshops
COPERNICUS PROGRAMME AND SENTINEL DATA FOR AGRICULTURE AND FORESTRY
Lenka Hladíková, CENIA, Czech Environmental Information Agency (CZ)
Sharing the experience and results of using georeferenced 2010 Census data in Mexico and EO to train algorithms in order to detect urban growth and generate useful information for estimating population for non-census years.
FDTD Analysis of the Complex Current Distribution on a circular disk exposed ... (kagikenco)
Rigorous solutions have been suggested for investigating the scattering characteristics of a circular disk of perfect conductor. However, these solutions are effective only for a limited range of disk radii. In our former research, the complex current distributions for radii where the rigorous solutions are ineffective were investigated by FDTD simulation, and the advantage of FDTD simulation was evaluated under normal-incidence conditions. In this paper, the advantage of FDTD simulation is evaluated under oblique-incidence conditions.
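For readers unfamiliar with the method, the core of FDTD is a leapfrog update of interleaved electric and magnetic field samples. A minimal 1D sketch in free space (normalised units, Courant number 0.5, soft Gaussian source; the paper's 3D disk setup is far more involved, and all names and constants here are illustrative):

```python
import numpy as np

def fdtd_1d(nsteps, nz=200, src=100):
    """Minimal 1D Yee/FDTD loop in free space (normalised units)."""
    ex = np.zeros(nz)  # electric field samples
    hy = np.zeros(nz)  # magnetic field samples, staggered half a cell
    for t in range(nsteps):
        # Update E from the spatial difference of H (Courant number 0.5).
        ex[1:] += 0.5 * (hy[:-1] - hy[1:])
        # Soft Gaussian source injected at one cell.
        ex[src] += np.exp(-0.5 * ((t - 30.0) / 8.0) ** 2)
        # Update H from the spatial difference of E.
        hy[:-1] += 0.5 * (ex[:-1] - ex[1:])
    return ex
```

A pulse injected at the source cell splits into two wavefronts that travel outward at half a cell per time step.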
We use the georeferenced results of the 2010 Census in Mexico to train machine learning algorithms to detect urban growth and to contribute new information for estimating the total population.
New features presentation: meteodyn WT 4.8 software - Wind Energy (Jean-Claude, Meteodyn)
New features of meteodyn WT, CFD software for wind resource assessment and wind park optimisation: a worldwide terrain database, convergence improvements, and other enhancements.
Talk on Satellite Technology, Applications & Engineering Standardisation by Abdul M. Ismail, space technology advisor, The Northern Space Consortium. The talk was delivered on the 18th of March at the Cunard Building in Liverpool as part of the inaugural Northern Space Consortium 'A case for space as an economic driver' event.
New approach to calculating the fundamental matrix (IJECEIAES)
The fundamental matrix (F) is estimated to determine the epipolar geometry and to establish a geometric relation between two images of the same scene or between video frames. The literature offers many techniques for robust estimation, such as RANSAC (random sample consensus), least median of squares (LMedS), and M-estimators. This article compares different detectors (Harris, FAST, SIFT, and SURF) in terms of the number of detected points, the number of correct matches, and the speed of computing F. Our method first extracts descriptors with SURF, chosen over the others for its robustness, then sets a uniqueness threshold to retain the best points, normalizes them, and ranks them according to a weighting function over the different image regions; finally, F is estimated with an eight-point M-estimator, and the average error and computation speed of F are measured. Experiments on real images with various viewpoint changes (for example rotation, lighting, and moving objects) show good agreement in terms of the computation speed of the fundamental matrix and an acceptable average error, suggesting that this technique is usable in real-time applications.
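The normalized eight-point algorithm that such pipelines build on can be sketched as follows (a plain least-squares version in numpy; the paper's robust M-estimator weighting and SURF matching stage are not reproduced here, and the function names are illustrative):

```python
import numpy as np

def normalize(pts):
    """Hartley normalisation: centroid at origin, mean distance sqrt(2)."""
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2.0) / d
    T = np.array([[s, 0.0, -s * c[0]],
                  [0.0, s, -s * c[1]],
                  [0.0, 0.0, 1.0]])
    h = np.column_stack([pts, np.ones(len(pts))])
    return (T @ h.T).T, T

def eight_point(x1, x2):
    """Estimate F from >= 8 correspondences (Nx2 arrays), so x2^T F x1 = 0."""
    n1, T1 = normalize(x1)
    n2, T2 = normalize(x2)
    # Each correspondence gives one linear constraint on the 9 entries of F.
    A = np.column_stack([
        n2[:, 0] * n1[:, 0], n2[:, 0] * n1[:, 1], n2[:, 0],
        n2[:, 1] * n1[:, 0], n2[:, 1] * n1[:, 1], n2[:, 1],
        n1[:, 0], n1[:, 1], np.ones(len(n1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # A fundamental matrix is singular: enforce rank 2.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1  # undo the normalisation
    return F / np.linalg.norm(F)
```

With noise-free correspondences the epipolar residuals x2^T F x1 are numerically zero; robust variants (RANSAC, M-estimators) reweight or reject correspondences before this least-squares step.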
Effective Occlusion Handling for Fast Correlation Filter-based Trackers (EECJOURNAL)
Correlation filter-based trackers suffer heavily from multiple peaks in their response maps caused by occlusions. Moreover, the whole tracking pipeline may break down due to the uncertainty introduced by shifting among peaks, which further degrades the correlation filter model. To alleviate the drift problem caused by occlusions, we propose a novel scheme that chooses a specific filter model according to the scenario. Specifically, an effective measurement function is designed to evaluate the quality of the filter response, and a dedicated strategy judges whether an occlusion has occurred and then decides how to update the filter models. In addition, we take advantage of both a log-polar method and a pyramid-like approach to estimate the best scale of the target. We evaluate our approach on the VOT2018 challenge and the OTB100 dataset; the experimental results show that the proposed tracker achieves promising performance compared with state-of-the-art trackers.
Online video-based abnormal detection using highly motion techniques and stat... (TELKOMNIKA JOURNAL)
At the core of video surveillance are abnormality-detection approaches, which have proven substantially effective at detecting abnormal incidents without prior knowledge of them. The state-of-the-art research makes it evident that there is a trade-off between frame processing time and detection accuracy in such approaches. The primary challenge is therefore to balance this trade-off by using few but highly descriptive features, fulfilling online performance while maintaining a high accuracy rate. In this study, we propose a new framework that balances detection accuracy and video processing time by employing two efficient motion techniques: foreground energy and optical-flow energy. Moreover, we use different statistical measures of the motion features to obtain a robust inference method that distinguishes abnormal behaviour incidents from normal ones. The performance of this framework has been extensively evaluated in terms of detection accuracy, area under the curve (AUC), and frame processing time. Simulation results and comparisons with ten relevant online and non-online frameworks demonstrate that our framework achieves superior performance, presenting high accuracy while simultaneously attaining low processing times.
MATRIOSKA: A Multi-level Approach to Fast Tracking by Learning | ICIAP 2013
In this paper we propose a novel framework for the real-time detection and tracking of an unknown object in a video stream. We decompose the problem into two separate modules: detection and learning. The detection module can use multiple keypoint-based methods (ORB, FREAK, BRISK, SIFT, SURF and more) inside a fallback model to correctly localize the object frame by frame, exploiting the strengths of each method. The learning module updates the object model with a growing-and-pruning approach to account for changes in its appearance, and extracts negative samples to further improve the detector's performance. To show the effectiveness of the proposed tracking-by-detection algorithm, we present numerous quantitative results on a number of challenging sequences in which the target object goes through changes of pose, scale and illumination.
A Survey On Tracking Moving Objects Using Various Algorithms (IJMTST Journal)
Sparse representation has been applied to the object tracking problem, and mining the self-similarities between particles via multitask learning can improve tracking performance. However, some particles may differ from others when they are sampled from a large region, and imposing the same structure on all particles may degrade the results. To overcome this problem, we propose a tracking algorithm based on robust multitask sparse representation (RMTT) in this letter. When learning the particle representations, we decompose the sparse coefficient matrix into two parts: joint sparse regularization is imposed on one coefficient matrix, while element-wise sparse regularization is imposed on the other. The former regularization exploits the self-similarities of particles, while the latter accounts for the differences between them.
Automated Piecewise-Linear Fitting of S-Parameters step-response (PWLFIT) for... (Piero Belforte)
An innovative full time-domain macromodeling technique for general, linear multiport systems is described. The methodology is defined in a digital wave framework and time-domain simulations are performed via an efficient method called Segment Fast Convolution (SFC). It is based on a piecewise-constant (PWC) model of the impulse response of scattering parameters, computed starting from a piecewise-linear fitting of their step response (PWLFIT). Such a step response is directly available from time-domain reflectometer measurements (TDR/TDT) or equivalent simulations. The model-building phase is performed in a fast automated framework, and an analytic formulation of the computational efficiency of the SFC with respect to standard time-domain convolution is given. Two application examples are used to verify the PWLFIT performance and to perform a comparison with macromodeling methods defined in the frequency domain, such as Vector Fitting (VF).
Index Terms—Digital wave models, time-domain macromodeling, S-parameters, step response.
Multi Object Tracking Methods Based on Particle Filter and HMM (IJTET Journal)
Abstract – Detection of object movement in video is an important process for various applications, and determining the path of an object as time advances is a tedious step. Many proposals for tracking the movement of multiple objects have been put forward using various sophisticated techniques. This paper analyses in detail recent object trackers based on particle filtering and hidden Markov models. The outcome of the analysis covers computational efficiency, robustness and computational complexity.
An efficient scanning algorithm for photovoltaic systems under partial shading (IJECEIAES)
This paper proposes a new maximum power point tracking (MPPT) technique for a photovoltaic (PV) system connected to a three-phase grid under partial shading conditions (PSC), based on a new perturb and observe (P&O) method combined with a scanning algorithm. The main advantages of this new algorithm are high-speed tracking compared to existing algorithms, high accuracy, and a simplicity that makes it ideal for hardware implementation. Simulations carried out in MATLAB/Simulink show the effectiveness in speed and accuracy of our algorithm over existing ones, both under standard test conditions (STC) and under PSC. Furthermore, conventional direct power control (DPC) was applied to successfully synchronize the injected power with the grid, which makes our algorithm global and efficient under severe conditions.
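The perturb and observe (P&O) half of such a scheme is easy to sketch. Below is a toy version against an illustrative single-peak PV power curve (no partial shading; all constants and names are hypothetical, and the paper's combined P&O-plus-scanning algorithm is not reproduced):

```python
import numpy as np

VOC, ISC, M = 40.0, 8.0, 4.0  # illustrative open-circuit voltage, short-circuit current, shape

def pv_power(v):
    """Toy PV power curve with a single maximum (no partial shading)."""
    return v * ISC * (1.0 - (v / VOC) ** M)

def perturb_and_observe(power, v0, step, iters):
    """Classic P&O: keep perturbing the voltage in the same direction while
    power rises, reverse when it falls; settles into oscillation around the MPP."""
    v, p = v0, power(v0)
    direction = 1.0
    for _ in range(iters):
        v = v + direction * step
        p_new = power(v)
        if p_new < p:
            direction = -direction
        p = p_new
    return v
```

On a multi-peak curve under partial shading, plain P&O can lock onto a local maximum; that failure mode is exactly what an added scanning stage is meant to prevent.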
OPTIMAL GLOBAL THRESHOLD ESTIMATION USING STATISTICAL CHANGE-POINT DETECTION (sipij)
The aim of this paper is the reformulation of the global image thresholding problem as a well-founded statistical method known as change-point detection (CPD). Our proposed CPD thresholding algorithm does not assume any prior statistical distribution of background and object grey levels. Further, the method is less influenced by outliers thanks to our judicious derivation of a robust criterion function based on the Kullback-Leibler (KL) divergence. Experimental results show the efficacy of the proposed method compared to other popular global image thresholding methods. We also propose a performance criterion for comparing thresholding algorithms that does not depend on any ground-truth image, and we use it to compare the results of the proposed algorithm with the most cited global thresholding algorithms in the literature.
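As a point of reference, one of the most cited global-thresholding baselines such work is compared against is Otsu's method, which picks the grey level maximising between-class variance over the histogram (a minimal numpy sketch of that baseline, not the paper's CPD algorithm):

```python
import numpy as np

def otsu_threshold(pixels):
    """Return the grey level (0..255) maximising between-class variance."""
    hist = np.bincount(pixels.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                 # class-0 (background) probability
    mu = np.cumsum(prob * np.arange(256))   # class-0 cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))
```

For a clearly bimodal histogram the maximiser falls between the two modes; distribution-free methods such as the CPD formulation above aim to behave well when the modes are skewed or contaminated by outliers.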
Strategy for Foreground Movement Identification Adaptive to Background Variat... (IJECEIAES)
Video processing has gained significance because of its applications in various areas of research, including monitoring movements in public places for surveillance. Video sequences from standard datasets such as I2R, CAVIAR and UCSD are often used for video processing applications and research. Identification of actors and their movements in video sequences must be accomplished with both static and dynamic backgrounds; the significance of this research lies in identifying the foreground movement of actors and objects. Foreground identification becomes complex when detecting movements against a dynamic background. For identification of foreground movement in video sequences with dynamic backgrounds, two algorithms are proposed in this article, termed Frame Difference between Neighboring Frames using Hue, Saturation and Value (FDNF-HSV) and Frame Difference between Neighboring Frames using Greyscale (FDNF-G). With regard to F-measure, recall and precision, the proposed algorithms are evaluated against state-of-the-art techniques, and the evaluation shows enhanced performance.
Method of optimization of the fundamental matrix by technique speeded up rob... (IJECEIAES)
The purpose of determining the fundamental matrix (F) is to define the epipolar geometry and to relate two 2D images of the same scene or video series in order to recover the 3D scene. The problem we address in this work is the estimation of the localization error and the processing time. We start by comparing the following feature extraction techniques with respect to the number of detected points and correct matches under different image changes: Harris, features from accelerated segment test (FAST), scale-invariant feature transform (SIFT) and speeded-up robust features (SURF). We then merge the best candidates chosen by an objective function that groups the descriptors by image region in order to calculate F, and apply the normalized eight-point algorithm, which also automatically eliminates outliers, to find the optimal F. Our optimization approach is tested on real images with different scene variations. The simulation results are good in terms of accuracy: the computation time of F does not exceed 900 ms, and the projection error is at most 1 pixel, regardless of the modification.
Detecting Moving Objects and Shadows in the HSV Color Space using Dynamic Thresholds (IJECEIAES)
The detection of moving objects in a video sequence is an essential step in almost all computer vision systems. However, because of the dynamic changes in natural scenes, motion detection becomes a difficult task. In this work, we propose a new method for detecting moving objects that is robust to shadows, noise and illumination changes. The detection phase of the proposed method is an adaptation of the MOG approach, in which the foreground is extracted in the HSV color space. To keep shadows out of the detection process, we developed a new shadow removal technique based on dynamic thresholding of the detected foreground pixels. The threshold is computed by two statistical analysis tools that take into account the degree of shadow in the scene and the robustness to noise. Experiments on a set of video sequences showed that the proposed method provides better results than existing methods that are limited to static thresholds.
In this deck from the Stanford HPC Conference, Peter Dueben from the European Centre for Medium-Range Weather Forecasts (ECMWF) presents: Machine Learning for Weather Forecasts.
“I will present recent studies that use deep learning to learn the equations of motion of the atmosphere, to emulate components of weather forecast models, and to enhance the usability of weather forecasts. I will then talk about the main challenges for the application of deep learning in cutting-edge weather forecasts and suggest approaches to improve usability in the future.”
Peter is contributing to the development and optimization of weather and climate models for modern supercomputers. He is focusing on a better understanding of model error and model uncertainty, on the use of reduced numerical precision optimised for a given level of model error, on global cloud-resolving simulations with ECMWF's forecast model, and on the use of machine learning, in particular deep learning, to improve workflows and predictions. Peter graduated in physics and wrote his Ph.D. thesis at the Max Planck Institute for Meteorology in Germany. He worked as a postdoc with Tim Palmer at the University of Oxford and took up a position as a University Research Fellow of the Royal Society at the European Centre for Medium-Range Weather Forecasts (ECMWF) in 2017.
Watch the video: https://youtu.be/ks3fkRj8Iqc
Learn more: https://www.ecmwf.int/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Feature selection for sky image classification based on self adaptive ant col... (IJECEIAES)
Statistical feature extraction has typically been used to obtain the important features from sky images for cloud classification. These extracted features come with many kinds of noisy, redundant and irrelevant components, which can harm classification accuracy and be time consuming. Thus, this paper proposes a new feature selection algorithm that distinguishes significant features from the extracted ones using an ant colony system (ACS). The informative features are extracted from the sky images using a Gaussian smoothness standard deviation and then represented in a directed graph. In the feature selection phase, the self-adaptive ACS (SAACS) algorithm is improved by enhancing the exploration mechanism to select only the significant features. Support vector machine, kernel support vector machine, multilayer perceptron, random forest, k-nearest neighbor, and decision tree classifiers were used to evaluate the algorithms. Four datasets are used to test the proposed model: Kiel, Singapore whole-sky imaging categories, MGC Diagnostics Corporation, and greatest common divisor. The SAACS algorithm is compared with six bio-inspired benchmark feature selection algorithms and achieved a classification accuracy of 95.64%, superior to all the benchmarks. Additionally, the Friedman test and Mann-Whitney U test are employed to statistically evaluate the efficiency of the proposed algorithms.
Techniques for the evaluation of complex polynomials with one and two variables are introduced. Polynomials arise in many areas such as control systems, image and signal processing, coding theory, and electrical networks, and their evaluation is time consuming. This paper introduces new evaluation algorithms that are straightforward, use fewer arithmetic operations, and exploit a fast matrix exponentiation technique.
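Two standard ingredients behind such savings are Horner's rule (a degree-n polynomial in n multiplications) and exponentiation by squaring (O(log n) matrix products). A generic sketch of both, not the paper's specific algorithms:

```python
import numpy as np

def horner(coeffs, x):
    """Evaluate a polynomial with Horner's rule; coeffs[0] is the
    highest-degree coefficient. Degree n costs n multiplies and n adds."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

def mat_pow(A, n):
    """Compute A**n by exponentiation by squaring: O(log n) matrix products."""
    result = np.eye(A.shape[0], dtype=A.dtype)
    base = A.copy()
    while n > 0:
        if n & 1:          # include this power of two in the product
            result = result @ base
        base = base @ base  # square for the next bit
        n >>= 1
    return result
```

For example, `horner([2, 0, 1], 3)` evaluates 2x² + 1 at x = 3, and `mat_pow` raises a matrix to the 1000th power with about ten multiplications instead of a thousand.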
Similar to Accelerating SARS-CoV-2 Molecular Dynamics Studies with Optical Random Features (20)
Seminar of U.V. Spectroscopy by SAMIR PANDA
Spectroscopy is a branch of science dealing with the study of the interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption or reflectance spectroscopy in the UV-VIS spectral region.
It is an analytical method that measures the amount of light absorbed by the analyte.
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt... (Sérgio Sacani)
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io’s surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io’s trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high-resolution imaging of Io’s surface using adaptive optics at visible wavelengths.
Richard's entangled aventures in wonderland (Richard Gill)
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Professional air quality monitoring systems provide immediate, on-site data for analysis, compliance, and decision-making.
They monitor common gases, weather parameters, and particulates.
Richard's aventures in two entangled wonderlands (Richard Gill)
THE IMPORTANCE OF MARTIAN ATMOSPHERE SAMPLE RETURN (Sérgio Sacani)
The return of a sample of near-surface atmosphere from Mars would facilitate answers to several first-order science questions surrounding the formation and evolution of the planet. One of the important aspects of terrestrial planet formation in general is the role that primary atmospheres played in influencing the chemistry and structure of the planets and their antecedents. Studies of the martian atmosphere can be used to investigate the role of a primary atmosphere in its history. Atmosphere samples would also inform our understanding of the near-surface chemistry of the planet, and ultimately the prospects for life. High-precision isotopic analyses of constituent gases are needed to address these questions, requiring that the analyses are made on returned samples rather than in situ.
Multi-source connectivity as the driver of solar wind variability in the heli... (Sérgio Sacani)
The ambient solar wind that fills the heliosphere originates from multiple sources in the solar corona and is highly structured. It is often described as high-speed, relatively homogeneous plasma streams from coronal holes and slow-speed, highly variable streams whose source regions are under debate. A key goal of ESA/NASA’s Solar Orbiter mission is to identify solar wind sources and understand what drives the complexity seen in the heliosphere. By combining magnetic field modelling and spectroscopic techniques with high-resolution observations and measurements, we show that the solar wind variability detected in situ by Solar Orbiter in March 2022 is driven by spatio-temporal changes in the magnetic connectivity to multiple sources in the solar atmosphere. The magnetic field footpoints connected to the spacecraft moved from the boundaries of a coronal hole to one active region (12961) and then across to another region (12957). This is reflected in the in situ measurements, which show the transition from fast to highly Alfvénic and then to slow solar wind that is disrupted by the arrival of a coronal mass ejection. Our results describe solar wind variability at 0.5 au but are applicable to near-Earth observatories.
Astronomy Update- Curiosity’s exploration of Mars _ Local Briefs _ leadertele...
Accelerating SARS-CoV-2 Molecular Dynamics Studies with Optical Random Features
1. Accelerating SARS-CoV-2 Molecular Dynamics Studies with Optical Random Features
Amélie Chatelain
amelie@lighton.ai
Paris - Women in Machine Learning and Data Science - 22.04.2020
2.
My background: from neutrinos to photons
Ph.D. in theoretical physics [Chatelain, Volpe, 2018]
linkedin.com/in/amelie-chatelain/
Travelling around ... Paris rive gauche
Master ICFP in theoretical physics at ENS
LightOn AI Research Team
3.
LightOn AI Research Team
Reducing compute time and energy consumption
Optical Processing Unit (OPU)
4.
Molecular Dynamics (MD) and conformational changes
Molecular Dynamics (MD): follow the trajectories of atoms.
Fluctuations: ~fs. Transitions: ~μs, up to ms.
[Plot: free energy vs. collective variable]
A billion timesteps!
→ Methods to enhance sampling. [Trstanova, Leimkuhler, Lelievre, 2019]
7.
Diffusion Maps – General Method
Nonlinear dimensionality reduction technique: dimension N → dimension k, k < N.
[Coifman, Lafon, Lee, Maggioni, Nadler, Warner, Zucker, 2005]
[Diagram: diffusion (kernel) matrix → normalise → stochastic matrix → diagonalise → diffusion coordinates]
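The pipeline on this slide can be sketched in a few lines of numpy (a minimal version with a Gaussian kernel and plain row normalisation; the density-correcting α-normalisation used in practice is omitted, and the bandwidth `eps` is illustrative):

```python
import numpy as np

def diffusion_maps(X, eps, k):
    """Minimal diffusion maps: Gaussian kernel -> row-stochastic matrix ->
    eigendecomposition -> first k non-trivial diffusion coordinates."""
    # Diffusion (kernel) matrix from pairwise squared distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-d2 / eps)
    # Normalise rows: row-stochastic (Markov) matrix.
    P = K / K.sum(axis=1, keepdims=True)
    # Diagonalise; right eigenvectors give the diffusion coordinates.
    vals, vecs = np.linalg.eig(P)
    vals, vecs = vals.real, vecs.real
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    # Drop the trivial constant eigenvector (eigenvalue 1); scale by eigenvalues.
    return vecs[:, 1:k + 1] * vals[1:k + 1]
```

For points sampled along a one-dimensional curve, the first non-trivial diffusion coordinate orders the points along the curve, which is what makes the method useful for extracting slow collective variables.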
8.
Diffusion Maps – Illustration: the swiss-roll
Bonus: Pearson’s correlation coefficients → relevant physical coordinates
● Diffusion Coordinate 2 → ϕ
● Diffusion Coordinate 3 → z
[Marsland, 2009]
[Plot: swiss-roll data in (x, y, z) and its embedding in Diffusion Coordinates 2 and 3]
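The "bonus" trick of matching diffusion coordinates to physical coordinates via Pearson's correlation amounts to picking, for each diffusion coordinate, the physical coordinate with the largest absolute correlation (an illustrative helper, not code from the deck):

```python
import numpy as np

def best_physical_match(diff_coord, physical):
    """Index of the column of `physical` whose Pearson correlation with
    the given diffusion coordinate is largest in absolute value."""
    corrs = [abs(np.corrcoef(diff_coord, physical[:, j])[0, 1])
             for j in range(physical.shape[1])]
    return int(np.argmax(corrs))
```

On the swiss-roll this is how Diffusion Coordinate 2 is identified with the angle ϕ and Diffusion Coordinate 3 with the height z.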
9.
Diffusion Maps: application to MD trajectories
[Trstanova, Leimkuhler, Lelievre, 2019]
Conformational changes
Issues:
(1) Memory footprint
(2) Hyperparameters
(3) User-defined threshold
(4) Compute time
[Diagram: trajectory produced by MD → Diffusion Maps algorithm → diffusion coordinates; change in eigenvalues → change of conformation; collective variables → metadynamics (or other)]
10. NEWMA to Detect Conformational Changes in Molecular Dynamics
11.
Online change-point detection – EWMA
Exponentially Weighted Moving Average for a series of points (x_t): the statistic z_t = (1 − Λ) z_{t−1} + Λ x_t is a function of the time series.
→ Change point if ‖z_t − m‖ > τ, where m is the in-control value and τ a threshold.
→ Requires prior knowledge of the dataset.
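A minimal EWMA detector in this spirit (the in-control value `m`, forgetting factor and threshold must all be chosen from prior knowledge of the data, which is precisely the limitation NEWMA removes; names are illustrative):

```python
import numpy as np

def ewma_detect(stream, lam, m, tau):
    """Flag the first index t where the EWMA statistic leaves the
    in-control region, i.e. ||z_t - m|| > tau; return -1 otherwise."""
    z = np.array(m, dtype=float)
    for t, x in enumerate(stream):
        # z_t = (1 - lam) z_{t-1} + lam x_t
        z = (1.0 - lam) * z + lam * np.asarray(x, dtype=float)
        if np.linalg.norm(z - m) > tau:
            return t
    return -1
```

On a stream whose mean shifts away from `m`, the statistic crosses the threshold a few samples after the shift; on in-control data it stays near `m`.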
12.
Introducing NEWMA [Keriven, Garreau, Poli, 2018]
→ No prior knowledge of the dataset required.
Idea: track two EWMAs of random features ψ(x_t) with different forgetting factors; flag a change point if the distance between them exceeds an adaptive threshold.
ψ: random features
→ on CPU: Random Fourier Features (RFF) [Rahimi, Recht, 2007] or FastFood (FF) [Sarlós, Smola, 2013]
→ optically: random projections (RP) on the Aurora OPU
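Following [Keriven, Garreau, Poli, 2018], NEWMA compares two EWMAs of the same random features computed with different forgetting factors, so no in-control value is needed. A minimal CPU sketch with Random Fourier Features (the paper's adaptive threshold is left to the caller; the constants are illustrative, and the deck's optical variant replaces the `W @ x` projection with the Aurora OPU):

```python
import numpy as np

def random_fourier_features(x, W, b):
    """RFF map approximating a Gaussian kernel [Rahimi, Recht, 2007]."""
    return np.sqrt(2.0 / W.shape[0]) * np.cos(W @ x + b)

def newma(stream, W, b, lam_fast=0.20, lam_slow=0.05):
    """Return the distance ||z_fast - z_slow|| at each time step.

    A change point is flagged when this distance exceeds an adaptive
    threshold (omitted here; the paper derives one)."""
    m = W.shape[0]
    z_fast = np.zeros(m)
    z_slow = np.zeros(m)
    dists = []
    for x in stream:
        phi = random_fourier_features(np.asarray(x, dtype=float), W, b)
        # Two EWMAs of the same features, with different forgetting factors.
        z_fast = (1.0 - lam_fast) * z_fast + lam_fast * phi
        z_slow = (1.0 - lam_slow) * z_slow + lam_slow * phi
        dists.append(np.linalg.norm(z_fast - z_slow))
    return np.array(dists)
```

The returned distance stays near its noise floor while the stream is stationary and spikes just after a distribution change, because the fast average adapts to the new distribution before the slow one does.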
14.
Applying NEWMA to MD trajectories: SARS-CoV-2 [DE Shaw Research, 2020]
Comparison with the changes observed in the video produced by Anton; these match the changes observed using the diffusion maps algorithm.
15.
Applying NEWMA to MD: performance comparison
OPU vs. CPU for random projections: faster, with a lower memory footprint.
16.
Take away message
● NEWMA: a great way to detect conformational changes in molecular dynamics simulations.
● Optical random features: particularly well adapted to this task.
● Future work: reinforcement learning for molecular dynamics. [Shin, Tran, Takemura, Kitao, Terayama, Tsuda, 2019]