This document summarizes a study that used live cell imaging and time series clustering to analyze heterogeneity in cell motility at the subcellular level. The study developed a method called HACKS to extract local velocity and fluorescence intensity time series from imaging data. Time series clustering identified distinct protrusion phenotypes ("fluctuating", "periodic", "accelerating"). Molecular dynamics analysis associated the "accelerating" phenotype with temporally ordered recruitment of the actin regulator VASP. Drug inhibition experiments confirmed VASP promotes the "accelerating" protrusion phenotype.
The document discusses the generation of high resolution digital surface models (DSMs) from ALOS PRISM stereo imagery and efforts to create seamless DSM mosaics at the global scale. It provides an overview of the software used to generate DSMs from PRISM triplet images and details on archiving over 5,000 processed scene DSMs. It then describes the process used to create 1° x 1° tile mosaics, including correcting for biases between relative and absolute DSMs. Examples of mosaics created for Japan and Bhutan are shown and their contributions to studies on glacial lake outburst floods in Bhutan are discussed. The goal is to complete a global 10m resolution DSM mosaic.
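The bias correction between relative and absolute DSMs mentioned above can be sketched as a constant-offset removal over the overlap of the two grids. This is a minimal illustration with made-up heights and a hypothetical `NODATA` convention, not the actual mosaicking software:

```python
# Hypothetical sketch of constant-bias removal between a relative DSM tile
# and an absolute reference over their common valid cells. Names and the
# NODATA convention are illustrative, not from the actual pipeline.

NODATA = None

def estimate_bias(relative_dsm, absolute_dsm):
    """Mean elevation difference over cells valid in both grids."""
    diffs = [r - a
             for r, a in zip(relative_dsm, absolute_dsm)
             if r is not NODATA and a is not NODATA]
    return sum(diffs) / len(diffs)

def correct(relative_dsm, bias):
    """Subtract the estimated bias, leaving NODATA cells untouched."""
    return [h - bias if h is not NODATA else NODATA for h in relative_dsm]

rel = [105.2, 98.7, None, 112.4]   # relative DSM heights (m), one gap
ref = [100.1, 93.8, 95.0, 107.5]   # absolute reference heights (m)
bias = estimate_bias(rel, ref)
corrected = correct(rel, bias)
```

In practice the bias would be estimated per tile over many thousands of overlap cells, but the arithmetic is the same.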
Fast Sparse 2-D DFT Computation using Sparse-Graph Alias Codes (Frank Ong)
This document presents a method called 2D-FFAST (Fast Fourier-Aliasing-based Sparse Transform) that enables fast computation of sparse 2D discrete Fourier transforms (DFTs). It generalizes a previous 1D method to exploit sparsity and allow sub-Nyquist sampling rates. The key ideas are: 1) aliasing patterns from different subsampling reveal sparse entries, 2) choosing co-prime subsampling factors provides diverse patterns, and 3) combining patterns recovers the sparse spectrum. Simulations demonstrate reconstruction of medical images from highly subsampled measurements in both ideal sparse models and more realistic settings.
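The aliasing property that 2D-FFAST builds on can be checked directly on a tiny example: the DFT of a subsampled signal equals a scaled sum of folded bins of the full DFT. The sizes, the co-prime factors 2 and 3, and the sparse spectrum below are illustrative, and the direct-sum DFT here is only for verifying the identity, not a fast transform:

```python
import cmath

# Pure-Python check of the aliasing identity that 2D-FFAST exploits:
# the DFT of a subsampled signal is a scaled sum of folded bins of the
# full DFT. Everything here is a toy instance, not the paper's algorithm.

def dft2(x):
    n1, n2 = len(x), len(x[0])
    return [[sum(x[a][b] * cmath.exp(-2j * cmath.pi * (k1 * a / n1 + k2 * b / n2))
                 for a in range(n1) for b in range(n2))
             for k2 in range(n2)] for k1 in range(n1)]

def idft2(X):
    n1, n2 = len(X), len(X[0])
    return [[sum(X[k1][k2] * cmath.exp(2j * cmath.pi * (k1 * a / n1 + k2 * b / n2))
                 for k1 in range(n1) for k2 in range(n2)) / (n1 * n2)
             for b in range(n2)] for a in range(n1)]

N1, N2, d1, d2 = 6, 6, 2, 3             # co-prime subsampling factors 2 and 3
X = [[0.0] * N2 for _ in range(N1)]
X[1][4] = 1.0                           # a sparse spectrum: two nonzero bins
X[3][2] = 2.0
x = idft2(X)

# Subsample the signal, then take the small DFT.
y = [[x[a * d1][b * d2] for b in range(N2 // d2)] for a in range(N1 // d1)]
Y = dft2(y)

# Aliasing identity: Y[k1][k2] == (1/(d1*d2)) * sum over folded bins of X.
for k1 in range(N1 // d1):
    for k2 in range(N2 // d2):
        folded = sum(X[k1 + r1 * (N1 // d1)][k2 + r2 * (N2 // d2)]
                     for r1 in range(d1) for r2 in range(d2)) / (d1 * d2)
        assert abs(Y[k1][k2] - folded) < 1e-9
```

Because different co-prime factors fold the spectrum in different ways, a sparse spectrum leaves distinct fingerprints in each subsampled DFT, which is what allows the sparse entries to be recovered.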
This document compares geometry-based Doppler ambiguity resolution methods for squint synthetic aperture radar (SAR) and presents an indirect scheme for estimating Doppler rate in low-contrast scenes. It introduces squint SAR geometry and the effects of incorrect Doppler parameters. It then describes conventional, iterative, and improved Radon transform geometry-based methods for resolving Doppler ambiguity, noting the improved schemes are faster. Finally, it presents a method to indirectly estimate Doppler rate in low-contrast scenes by first estimating it in high-contrast areas and using the inverse relationship between Doppler rate and range.
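The indirect estimation step above reduces to a simple scaling, assuming the inverse relationship between Doppler rate and slant range that the summary states (for broadside SAR, f_dr = -2v²/(λR)). The numbers below are illustrative, not from the paper:

```python
# Minimal sketch of the indirect Doppler-rate estimation idea, assuming
# f_dr is inversely proportional to slant range R. Values are hypothetical.

def scale_doppler_rate(f_dr_ref, r_ref, r_target):
    """Transfer a Doppler rate estimated at range r_ref to range r_target."""
    return f_dr_ref * r_ref / r_target

# Doppler rate estimated by autofocus in a high-contrast area at 20 km,
# transferred to a low-contrast area at 25 km slant range.
f_dr_high = -1500.0                                   # Hz/s, assumed estimate
f_dr_low = scale_doppler_rate(f_dr_high, 20e3, 25e3)  # -> -1200.0 Hz/s
```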
Smart mm-Wave Beam Steering Algorithm for Fast Link Re-Establishment under No... (Avishek Patra)
Millimeter-wave (mm-wave) wireless local area networks (WLANs) are expected to provide multi-Gbps connectivity by exploiting the large amount of unoccupied spectrum available, e.g., in the unlicensed 60 GHz band. However, to overcome the high path loss inherent at these high frequencies, mm-wave networks must employ highly directional beamforming antennas, which make link establishment and maintenance much more challenging than in traditional omnidirectional networks. In particular, maintaining connectivity under node mobility necessitates frequent re-steering of the transmit and receive antenna beams to re-establish a directional mm-wave link. A simple exhaustive sequential scan to search for new feasible antenna sector pairs may introduce excessive delay, potentially disrupting communication and lowering the QoS. In this paper, we propose a smart beam steering algorithm for fast 60 GHz link re-establishment under node mobility, which uses knowledge of previously feasible sector pairs to narrow the sector search space, thereby reducing the associated latency overhead. We evaluate the performance of our algorithm in several representative indoor scenarios, based on detailed simulations of signal propagation in a 60 GHz WLAN in WinProp with realistic building materials. We study the effect of indoor layout, antenna sector beamwidth, node mobility pattern, and device orientation awareness. Our results show that the smart beam steering algorithm achieves a 7-fold reduction of the sector search space on average, which directly translates into lower 60 GHz link re-establishment latency. Our results also show that our fast search algorithm selects the near-optimal antenna sector pair for link re-establishment.
Experimental Evaluation of a Novel Fast Beamsteering Algorithm for Link Re-Es... (Avishek Patra)
The millimeter-wave (mm-wave) bands are currently being explored for multi-Gbps wireless local area networks (WLANs). Directional antennas are required to overcome the high attenuation inherent at mm-wave frequencies. However, directionality makes link establishment and maintenance complex, especially under node mobility, as slight misalignment of antenna beams between nodes leads to link disruption. Consequently, low-latency beamsteering algorithms are needed for fast link re-establishment to support seamless data provisioning. Solutions based on exhaustive sequential scanning induce high latency, thereby disrupting communication. On the other hand, existing low-latency proposals typically consider only static links, depend on additional hardware, or require a priori information about the network environment. In this paper, we propose a generic, fast mm-wave beamsteering algorithm that uses the previous valid link information to initiate the feasible antenna sector pair search and adaptively increases the sector search space around it to re-establish a link. Additionally, we experimentally evaluate the performance of our algorithm through measurements conducted in a real indoor environment using 60 GHz packet radio transceivers. The results show that, compared to exhaustive sequential scanning, our algorithm reduces the required sector search space, and thereby the link re-establishment latency, by 89% on average.
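The search-space reduction described in both abstracts can be illustrated with a toy model: probe sector pairs in rings of growing distance around the last valid pair instead of scanning all pairs. This is an assumed 16-sector setup and a hypothetical mobility event, not the authors' exact algorithm:

```python
# Toy sketch of adaptive sector search around the last valid (tx, rx) pair,
# illustrating the idea, not reproducing the published algorithm.

N_SECTORS = 16  # hypothetical number of antenna sectors per node

def ring(center, radius, n=N_SECTORS):
    """Sector indices at a given circular distance from center."""
    if radius == 0:
        return [center % n]
    return [(center - radius) % n, (center + radius) % n]

def smart_search(last_pair, link_ok):
    """Probe (tx, rx) pairs outward from the last valid pair; count probes."""
    probes = 0
    for r_tx in range(N_SECTORS // 2 + 1):
        for r_rx in range(N_SECTORS // 2 + 1):
            for tx in ring(last_pair[0], r_tx):
                for rx in ring(last_pair[1], r_rx):
                    probes += 1
                    if link_ok(tx, rx):
                        return (tx, rx), probes
    return None, probes

# After a small rotation, the feasible pair moved two sectors from (5, 9).
feasible = {(7, 11)}
pair, probes = smart_search((5, 9), lambda tx, rx: (tx, rx) in feasible)
exhaustive = N_SECTORS * N_SECTORS  # 256 probes for a full sequential scan
```

Because node mobility usually displaces the beam direction only slightly between re-alignments, the feasible pair tends to sit near the previous one, which is why this kind of local search probes far fewer pairs than an exhaustive scan.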
Representing and Querying Geospatial Information in the Semantic Web (Kostis Kyzirakos)
The document discusses representing and querying geospatial information in the semantic web. It introduces stRDF, an extension of RDF that adds spatial literals and valid time to triples. It also introduces stSPARQL, an extension of SPARQL with functions for querying spatial data based on Open Geospatial Consortium standards. The document describes the Strabon system, which uses stRDF and supports both stSPARQL and the OGC standard GeoSPARQL for querying geospatial data stored in RDF graphs.
presentationsample_KAKE_cosmicraytelescope_final-presentation_REVISED_3D_26JU... (C. David Kearsley)
The document discusses an experiment to measure the angular distribution of cosmic ray muons. It describes how a telescope consisting of three plastic scintillator detectors was constructed and deployed on a university roof to detect and measure muon fluxes from different angles. Experimental data was collected over 24 hours and separately for day and night periods. The data showed that the muon flux ratio as a function of zenith angle followed a cos² distribution, as expected from cosmic ray shower theory.
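The cos² law referred to above is easy to state concretely: the muon flux at zenith angle θ, normalized to the vertical flux, should go as cos²θ. The table below is computed from the law itself for illustration; it is not the experiment's data:

```python
import math

# Expected relative muon flux vs zenith angle under the cos² law.
# These are theoretical values for illustration, not measured counts.

angles_deg = [0, 15, 30, 45, 60]
flux_ratio = [math.cos(math.radians(a)) ** 2 for a in angles_deg]
# e.g. at 45 degrees the expected ratio is cos²(45°) = 0.5
```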
Jonathan Lefman presents his work on superresolution chemical microscopy.
This document discusses several microscopy techniques including structured illumination fluorescence microscopy, time-of-flight secondary ion mass spectrometry, coherent anti-Stokes Raman scattering microscopy, photoactivated localization microscopy, stimulated emission depletion microscopy, and 4Pi microscopy. It focuses on describing improvements made to structured illumination fluorescence microscopy including parallel GPU processing to accelerate image analysis and a new automated imaging framework. Time-of-flight secondary ion mass spectrometry imaging is discussed with applications to iterative clustering and classification analysis.
The document discusses fast factorized back projection (FFBP) for processing circular synthetic aperture radar (CSAR) data. FFBP was adapted for CSAR by modifying the orientation of the polar grids used at each subaperture to follow the circular trajectory. Experimental results using real CSAR data from Germany's E-SAR system validated the FFBP-CSAR algorithm, showing high accuracy and significant speed improvements over conventional backprojection. The algorithm is now being used to process data from new multi-circular flight campaigns.
This document describes three atmospheric correction algorithms for the Geostationary Ocean Color Imager (GOCI): the Standard NASA algorithm, the Spectral Shape Matching Method (SSMM), and the Sun-Glint Correction Algorithm (SGCA). It outlines the processing steps for each algorithm, including radiometric calibration, removal of Rayleigh and aerosol scattering, and derivation of remote sensing reflectance. Validation results show SSMM and SGCA provide reasonable matches to NASA standard processing of MODIS data, though all three GOCI algorithms could be improved by updating aerosol and ocean models. The document concludes the algorithms capture the essential ocean color measurement but would benefit from further refinement.
Using Derivation-Free Optimization Methods in the Hadoop Cluster with Terasort (Anhanguera Educacional S/A)
The document discusses using derivative-free optimization (DFO) methods such as BOBYQA and COBYLA to optimize configuration parameters for the TeraSort benchmark on a Hadoop cluster. It describes running TeraSort with different DFO methods and analyzing how configuration parameters and execution times change over iterations. The COBYLA algorithm showed more stable convergence and an average 21.15% speedup over the initial Hadoop settings, outperforming BOBYQA, which achieved a 12% average speedup.
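The essence of derivative-free tuning, evaluating only the black-box cost and never its gradient, can be sketched with a simple pattern (compass) search. The cost function below is a stand-in for a TeraSort run time and its parameters are hypothetical; this is not BOBYQA or COBYLA, just the simplest member of the same family:

```python
# Minimal derivative-free (pattern) search. The "runtime" function is a
# made-up stand-in for a benchmark run; parameter names are hypothetical.

def runtime(params):
    """Pretend benchmark: unknown smooth cost with an optimum at (4, 0.6)."""
    mappers, io_factor = params
    return (mappers - 4.0) ** 2 + 10.0 * (io_factor - 0.6) ** 2 + 30.0

def pattern_search(f, x0, step=1.0, shrink=0.5, tol=1e-3):
    """Move to any improving axis-aligned neighbor; if none improves,
    shrink the step. Uses function values only, no derivatives."""
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for delta in (+step, -step):
                trial = list(x)
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink
    return x, fx

best, cost = pattern_search(runtime, [1.0, 1.0])
```

Each "evaluation" here corresponds to one full benchmark run in the study, which is why the number of iterations these methods need matters so much in practice.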
This document discusses seismic data processing workflows. It begins with an introduction and agenda. The general workflow includes reformatting, trace editing, geometry handling, amplitude recovery, noise attenuation through techniques like frequency and FK filtering, deconvolution, multiple removal, migration, velocity analysis, NMO correction, muting, stacking, and post-stack filtering and amplitude scaling to produce a final image for geological interpretation. The document emphasizes that the proper workflow selection depends on processing environment, targets, costs, and client preferences. It concludes with time for questions.
Google Sky aims to provide an online sky map with data from various surveys like SDSS and DSS. It has ingested over 200 square degrees of SDSS data so far and is working to optimize the processing pipeline to ingest more data. Key challenges include balancing image quality, processing time, and data storage requirements at scale.
Presentation of the paper:
Szymon Klarman and Thomas Meyer. Querying Temporal Databases via OWL 2 QL (with appendix). In Proceedings of the 8th International Conference on Web Reasoning and Rule Systems (RR-14), 2014.
Slide set presented for the Wireless Communication module at Jacobs University Bremen, Fall 2015.
Teacher: Dr. Stefano Severi, assistant: Andrei Stoica
Object Detection Classification, tracking and Counting (Shounak Mitra)
This document summarizes an object detection, tracking, classification, and counting project. The project involved using video from cameras to:
1) Detect objects in video frames using background subtraction and blob analysis. Kalman filters were then used to track objects across frames and reduce noise.
2) Classify objects by color and count them. Shadow detection methods like Gaussian smoothing and thresholding were also applied to filter out shadows.
3) The project aimed to synchronize object counts passing over a bridge with strain gauge and accelerometer readings, to study pedestrian impacts. The document outlines the full algorithm and issues like noise, shadows and tracking.
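The Kalman filtering step above can be shown in miniature with a 1-D constant-velocity filter smoothing a noisy blob centroid across frames. The matrices are hand-unrolled for the two-state case [position, velocity], and the noise levels and centroid track are illustrative, not the project's actual parameters:

```python
# Minimal 1-D constant-velocity Kalman filter for smoothing noisy blob
# centroids across frames. Noise levels (q, r) and data are illustrative.

def kalman_track(measurements, q=1e-3, r=4.0):
    x, v = measurements[0], 0.0          # state: position, velocity
    p = [[1.0, 0.0], [0.0, 1.0]]         # state covariance
    out = []
    for z in measurements[1:]:
        # predict (dt = 1 frame): position advances by one velocity step
        x, v = x + v, v
        p = [[p[0][0] + p[0][1] + p[1][0] + p[1][1] + q, p[0][1] + p[1][1]],
             [p[1][0] + p[1][1], p[1][1] + q]]
        # update with measured centroid z (measurement matrix H = [1, 0])
        s = p[0][0] + r
        k0, k1 = p[0][0] / s, p[1][0] / s
        y = z - x
        x, v = x + k0 * y, v + k1 * y
        p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
             [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
        out.append(x)
    return out

# Noisy centroids of an object moving roughly 2 px per frame:
zs = [0.0, 2.4, 3.7, 6.3, 8.1, 9.8, 12.2, 14.0]
smoothed = kalman_track(zs)
```

In the full project the same filter runs in two dimensions per tracked blob, which also lets the tracker coast through frames where background subtraction momentarily loses the object.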
An explanation of very simple methods for atmospheric correction, with an example adapted from a paper by the Department of Thermodynamics at the University of Valencia, Spain.
1) The document summarizes an airborne demonstration of SweepSAR technology at Ka-band using an array-fed reflector and digital beamforming with 8 receive beams.
2) The demonstration was successful, with over 200 GB of data collected across two test flights. Beamforming algorithms were able to generate mosaicked imagery.
3) The results validate the SweepSAR concept and provide evidence that the system architecture planned for DESDynI is feasible for achieving wide swaths with fine azimuth resolution.
The Raytheon UMass-Lowell Research Institute (RURI) constructed an anechoic chamber and positioning system to test electronic devices from 8-12 GHz. Ray tracing simulations predicted the chamber would meet specifications with amplitude taper below 1 dB and phase taper below 22.5 degrees within the quiet zone. Measurements confirmed the quiet zone performance matched expectations. The completed anechoic chamber and positioner will allow RURI to fully characterize electronic devices from design to testing.
The document discusses a method called time-warp for estimating nonlinear multi-component motion in differential SAR tomography (D-TomoSAR). It presents the D-TomoSAR system model which accounts for general nonlinear and multi-component displacement. The time-warp method warps the temporal axis to accommodate linear and seasonal motion parameters. It is shown to work on both single and double scatterers for a test site in Las Vegas exhibiting linear and seasonal deformation patterns.
This document summarizes a simulation of HAPS (high-altitude platform station) systems performance at 28 GHz using LDPC coding and M-APSK modulation over a Ka band channel model. A digital coded transmission scheme for HAPS in the Ka band is proposed and its performance is evaluated via simulation. The simulation considers a time series generator to model short-term rain fading and frequency scaling to adapt attenuation measurements from 40 GHz to 28 GHz. The simulation results show that the best performing system is LDPC coding with a rate of 1/3, 16-APSK modulation, a codeword length of 16,200 bits, and 50 iterations for decoding over the rainy channel model.
Presents tracking methods for moving targets using sensors (radar, electro-optics, etc.).
For comments please contact me at solo.hermelin@gmail.com.
For more presentations visit my website at http://www.solohermelin.com.
I recommend viewing this presentation on my website, in the RADAR folder, Tracking Systems subfolder.
This document describes a study conducted by NASA and the 45th Weather Squadron to develop new statistical methods for predicting the end of lightning in thunderstorms near Cape Canaveral. 58 local thunderstorms were analyzed to create models of lightning flash rates over time. Two approaches were taken: 1) analyzing the distribution of times between the last two lightning flashes, and 2) curve-fitting slowing flash rates in decaying storms. The goal is to predict the probability of additional lightning flashes to help forecasters end advisories more quickly while maintaining safety. Future work includes expanding the analysis and developing software to automatically predict lightning cessation in real-time.
Interferogram Stacking for GB-InSAR-based Measurement of Displacement Velocit... (Simone Baffelli)
1) The document describes the use of ground-based differential interferometry to monitor the fast-moving Bisgletscher glacier in Switzerland.
2) Radar data was collected using the KAPRI system and interferograms were generated from subsequent image pairs to measure glacier displacement with millimeter precision.
3) Atmospheric phase screen effects were mitigated using a height-dependent fit and polynomial fitting, and displacements were estimated using a moving window stacking approach.
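The stacking step above amounts to averaging the phase of several short-interval interferograms and converting the mean phase per interval into a line-of-sight velocity via d = λφ/(4π). The wavelength, acquisition interval, and phase values below are assumed for illustration, not taken from the campaign:

```python
import math

# Toy version of moving-window interferogram stacking. Wavelength and
# repeat interval are assumed illustrative values, not the system's specs.

WAVELENGTH = 0.0174   # m, assumed Ku-band wavelength
DT = 60.0             # s between subsequent acquisitions (assumed)

def stacked_velocity(phases_rad, wavelength=WAVELENGTH, dt=DT):
    """Mean interferometric phase per interval -> LOS velocity (m/s).
    Displacement per interval is d = wavelength * phi / (4 * pi)."""
    mean_phi = sum(phases_rad) / len(phases_rad)
    return wavelength * mean_phi / (4.0 * math.pi) / dt

# Five noisy single-interval interferogram phases (rad) in one window:
window = [0.52, 0.48, 0.55, 0.47, 0.50]
v_los = stacked_velocity(window)   # m/s; on the order of metres per day
```

Averaging N interferograms reduces the random (e.g. atmospheric residual) phase noise roughly as 1/sqrt(N), which is what makes millimeter-precision displacement estimates feasible.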
Efficient Implementation of Self-Organizing Map for Sparse Input Data (ymelka)
This document describes improvements made to the self-organizing map (SOM) algorithm to make it more efficient for sparse, high-dimensional input data. The key contributions are a sparse SOM (Sparse-Som) and sparse batch SOM (Sparse-BSom) algorithm that exploit the sparseness of the data to reduce computational complexity from O(TMD) to O(TMd), where d is the number of non-zero dimensions. Sparse-Som speeds up the BMU search and weight update phases, while Sparse-BSom further allows for efficient parallelization. Experiments show Sparse-Som and Sparse-BSom train significantly faster than standard SOM on sparse datasets, with comparable or better quality.
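The sparsity trick behind the fast BMU search can be sketched as follows: with cached squared norms ||w||², the distance ||w − x||² = ||w||² − 2·w·x + ||x||² needs only the d non-zero components of x rather than all D dimensions. The tiny map and input below are illustrative, not the paper's implementation:

```python
# Sketch of a BMU (best-matching unit) search that touches only the
# non-zero dimensions of a sparse input. Map weights are illustrative.

def bmu_sparse(weights, sq_norms, x_sparse):
    """weights: dense unit vectors; sq_norms: cached ||w||^2 per unit;
    x_sparse: dict {dim: value} with d << D non-zero entries."""
    x_sq = sum(v * v for v in x_sparse.values())
    best, best_dist = None, float("inf")
    for i, w in enumerate(weights):
        dot = sum(w[j] * v for j, v in x_sparse.items())   # O(d) per unit
        dist = sq_norms[i] - 2.0 * dot + x_sq
        if dist < best_dist:
            best, best_dist = i, dist
    return best

weights = [[0.0, 1.0, 0.0, 0.0],
           [0.0, 0.0, 0.9, 0.1],
           [0.5, 0.5, 0.5, 0.5]]
sq_norms = [sum(c * c for c in w) for w in weights]
bmu = bmu_sparse(weights, sq_norms, {2: 1.0})   # input has one non-zero dim
```

The cached norms must be kept up to date as weights change, which is the part the batch variant handles more cheaply.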
The document discusses time series analysis techniques in R, including decomposition, forecasting, clustering, and classification. It provides an overview of methods such as ARIMA modeling, dynamic time warping, discrete wavelet transforms, and decision trees. Examples are shown applying these techniques to air passenger data and synthetic control chart time series data, including decomposing, forecasting, hierarchical clustering with Euclidean and DTW distances, and classifying with decision trees using DWT features. Accuracy of over 80% is achieved on the classification tasks.
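The dynamic time warping distance used above for clustering is a short dynamic program; here is a minimal pure-Python version (the classic O(n·m) formulation, not R's `dtw` package), with two illustrative series:

```python
# Minimal dynamic time warping (DTW) distance via the classic DP recurrence.

def dtw(a, b):
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# The same bump shifted in time: DTW is 0 where the pointwise
# Euclidean distance would be large.
s1 = [0, 0, 1, 2, 1, 0, 0]
s2 = [0, 1, 2, 1, 0, 0, 0]
```

This is why hierarchical clustering with a DTW distance groups time-shifted versions of the same shape together, while a Euclidean distance separates them.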
Getting started with chemometric classification (Alex Henderson)
The document provides an overview of chemometric classification and resources for working with spectroscopic data. It discusses key terminology like variables, observations, and vector space. It also covers important preprocessing steps like normalization, mean centering, and principal components analysis (PCA). PCA finds orthogonal principal components that maximize the explained variance in the data in a lower dimensional space.
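The mean-centering and PCA steps mentioned above can be worked end to end on a small two-variable dataset: center the data, form the covariance matrix, and take its eigen-decomposition (closed-form here for the 2x2 case). The data are illustrative:

```python
import math

# Worked PCA example: mean-center, build the 2x2 sample covariance matrix,
# and get principal-component variances from its closed-form eigenvalues.
# The dataset is illustrative, not from the document.

data = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2), (3.1, 3.0),
        (2.3, 2.7), (2.0, 1.6), (1.0, 1.1), (1.5, 1.6), (1.1, 0.9)]

n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
centered = [(x - mx, y - my) for x, y in data]

# Sample covariance matrix [[a, b], [b, c]]
a = sum(x * x for x, _ in centered) / (n - 1)
b = sum(x * y for x, y in centered) / (n - 1)
c = sum(y * y for _, y in centered) / (n - 1)

# Eigenvalues of a symmetric 2x2 matrix = variance along each PC
mean_l = (a + c) / 2.0
delta = math.sqrt(((a - c) / 2.0) ** 2 + b ** 2)
l1, l2 = mean_l + delta, mean_l - delta
explained = l1 / (l1 + l2)   # fraction of variance on the first PC
```

For spectroscopic data with thousands of variables the same construction runs on the full covariance (or via SVD), but the interpretation is identical: the leading components capture most of the variance in a much lower-dimensional space.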
Blind separation of complex-valued satellite-AIS data for marine surveillance... (IJECEIAES)
In this paper, the problem of blind separation of complex-valued Satellite-AIS data for marine surveillance is addressed. Because the sources under consideration are cyclo-stationary signals with two close cyclic frequencies, spatial quadratic time-frequency domain methods are adopted. An additional diversity, the time delay, is used to make it possible to undo the mixing of signals at the multi-sensor receiver. The suggested method involves three main stages. First, the spatial generalized mean ambiguity function of the observations across the array is constructed. Second, in the ambiguity plane, Delay-Doppler regions of high magnitude are determined and Delay-Doppler points with peaky values are selected. Third, the mixing matrix is estimated from these Delay-Doppler regions using our proposed non-unitary joint zero-(block) diagonalization algorithms to perform the separation.
The document discusses fast factorized back projection (FFBP) for processing circular synthetic aperture radar (CSAR) data. FFBP was adapted for CSAR by modifying the orientation of the polar grids used at each subaperture to follow the circular trajectory. Experimental results using real CSAR data from Germany's E-SAR system validated the FFBP-CSAR algorithm, showing high accuracy and significant speed improvements over conventional backprojection. The algorithm is now being used to process data from new multi-circular flight campaigns.
This document describes three atmospheric correction algorithms for the Geostationary Ocean Color Imager (GOCI): the Standard NASA algorithm, the Spectral Shape Matching Method (SSMM), and the Sun-Glint Correction Algorithm (SGCA). It outlines the processing steps for each algorithm, including radiometric calibration, removal of Rayleigh and aerosol scattering, and derivation of remote sensing reflectance. Validation results show SSMM and SGCA provide reasonable matches to NASA standard processing of MODIS data, though all three GOCI algorithms could be improved by updating aerosol and ocean models. The document concludes the algorithms capture the essential ocean color measurement but would benefit from further refinement.
Using Derivation-Free Optimization Methods in the Hadoop Cluster with TerasortAnhanguera Educacional S/A
The document discusses using derivation-free optimization (DFO) methods like BOBYQA and COBYLA to optimize configuration parameters for the TeraSort benchmark on a Hadoop cluster. It describes running TeraSort with different DFO methods and analyzing how configuration parameters and execution times change over iterations. The COBYLA algorithm showed more stable convergence and an average 21.15% speedup over initial Hadoop settings, performing better than BOBYQA which achieved 12% average speedup.
This document discusses seismic data processing workflows. It begins with an introduction and agenda. The general workflow includes reformatting, trace editing, geometry handling, amplitude recovery, noise attenuation through techniques like frequency and FK filtering, deconvolution, multiple removal, migration, velocity analysis, NMO correction, muting, stacking, and post-stack filtering and amplitude scaling to produce a final image for geological interpretation. The document emphasizes that the proper workflow selection depends on processing environment, targets, costs, and client preferences. It concludes with time for questions.
Google Sky aims to provide an online sky map with data from various surveys like SDSS and DSS. It has ingested over 200 square degrees of SDSS data so far and is working to optimize the processing pipeline to ingest more data. Key challenges include balancing image quality, processing time, and data storage requirements at scale.
Presentation of the paper:
Szymon Klarman and Thomas Meyer. Querying Temporal Databases via OWL 2 QL (with appendix). In Proceedings of the 8th International Conference on Web Reasoning and Rule Systems (RR-14), 2014.
Slide set presented for the Wireless Communication module at Jacobs University Bremen, Fall 2015.
Teacher: Dr. Stefano Severi, assistant: Andrei Stoica
Object Detection Classification, tracking and CountingShounak Mitra
This document summarizes an object detection, tracking, classification, and counting project. The project involved using video from cameras to:
1) Detect objects in video frames using background subtraction and blob analysis. Kalman filters were then used to track objects across frames and reduce noise.
2) Classify objects by color and count them. Shadow detection methods like Gaussian smoothing and thresholding were also applied to filter out shadows.
3) The project aimed to synchronize object counts passing over a bridge with strain gauge and accelerometer readings, to study pedestrian impacts. The document outlines the full algorithm and issues like noise, shadows and tracking.
Explanation of very simple methods for atmospheric corrections and an example adapted from a paper of the Dept. of Thermodynamics, University of Valencia, Spain.
1) The document summarizes an airborne demonstration of SweepSAR technology at Ka-band using an array-fed reflector and digital beamforming with 8 receive beams.
2) The demonstration was successful, with over 200GB of data collected across two test flights. Beamforming algorithms were able to generate mosaicked imagery.
3) The results validate the SweepSAR concept and provide evidence that the system architecture planned for DESDynI is feasible for achieving wide swaths with fine azimuth resolution.
The Raytheon UMass-Lowell Research Institute (RURI) constructed an anechoic chamber and positioning system to test electronic devices from 8-12 GHz. Ray tracing simulations predicted the chamber would meet specifications with amplitude taper below 1 dB and phase taper below 22.5 degrees within the quiet zone. Measurements confirmed the quiet zone performance matched expectations. The completed anechoic chamber and positioner will allow RURI to fully characterize electronic devices from design to testing.
The document discusses a method called time-warp for estimating nonlinear multi-component motion in differential SAR tomography (D-TomoSAR). It presents the D-TomoSAR system model which accounts for general nonlinear and multi-component displacement. The time-warp method warps the temporal axis to accommodate linear and seasonal motion parameters. It is shown to work on both single and double scatterers for a test site in Las Vegas exhibiting linear and seasonal deformation patterns.
This document summarizes a simulation of HAPS (high-altitude platform station) systems performance at 28 GHz using LDPC coding and M-APSK modulation over a Ka band channel model. A digital coded transmission scheme for HAPS in the Ka band is proposed and its performance is evaluated via simulation. The simulation considers a time series generator to model short-term rain fading and frequency scaling to adapt attenuation measurements from 40 GHz to 28 GHz. The simulation results show that the best performing system is LDPC coding with a rate of 1/3, 16-APSK modulation, a codeword length of 16,200 bits, and 50 iterations for decoding over the rainy channel model.
Presents tracking methods for moving targets using sensors (radar, electro-optics, etc.).
For comments, please contact me at solo.hermelin@gmail.com.
For more presentations, visit my website at http://www.solohermelin.com.
I recommend viewing this presentation on my website, in the RADAR folder, Tracking Systems subfolder.
This document describes a study conducted by NASA and the 45th Weather Squadron to develop new statistical methods for predicting the end of lightning in thunderstorms near Cape Canaveral. 58 local thunderstorms were analyzed to create models of lightning flash rates over time. Two approaches were taken: 1) analyzing the distribution of times between the last two lightning flashes, and 2) curve-fitting slowing flash rates in decaying storms. The goal is to predict the probability of additional lightning flashes to help forecasters end advisories more quickly while maintaining safety. Future work includes expanding the analysis and developing software to automatically predict lightning cessation in real-time.
Interferogram Stacking for GB-InSAR-based Measurement of Displacement Velocit... (Simone Baffelli)
1) The document describes the use of ground-based differential interferometry to monitor the fast-moving Bisgletscher glacier in Switzerland.
2) Radar data was collected using the KAPRI system and interferograms were generated from subsequent image pairs to measure glacier displacement with millimeter precision.
3) Atmospheric phase screen effects were mitigated using a height-dependent fit and polynomial fitting, and displacements were estimated using a moving window stacking approach.
Efficient Implementation of Self-Organizing Map for Sparse Input Data (ymelka)
This document describes improvements made to the self-organizing map (SOM) algorithm to make it more efficient for sparse, high-dimensional input data. The key contributions are a sparse SOM (Sparse-Som) and sparse batch SOM (Sparse-BSom) algorithm that exploit the sparseness of the data to reduce computational complexity from O(TMD) to O(TMd), where d is the number of non-zero dimensions. Sparse-Som speeds up the BMU search and weight update phases, while Sparse-BSom further allows for efficient parallelization. Experiments show Sparse-Som and Sparse-BSom train significantly faster than standard SOM on sparse datasets, with comparable or better quality.
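The complexity reduction can be illustrated with a sparse best-matching-unit (BMU) search. This is a sketch of the idea, not the authors' implementation; `sparse_bmu` and its arguments are illustrative names. Since ||x − w||² = ||x||² − 2x·w + ||w||² and ||x||² is constant across units, units can be ranked touching only the d non-zero dimensions of x:

```python
import numpy as np

def sparse_bmu(indices, values, weights, weight_sq_norms):
    """Best-matching unit for a sparse input given as (indices, values).
    Ranks units by ||w||^2 - 2 x.w, so the dot product only visits the
    d non-zero coordinates of x instead of all D dimensions."""
    dots = weights[:, indices] @ values          # O(M*d) instead of O(M*D)
    scores = weight_sq_norms - 2.0 * dots
    return int(np.argmin(scores))

# Toy usage: 4 units, 6-dimensional weights, input with 2 non-zeros
rng = np.random.default_rng(0)
W = rng.random((4, 6))
sq = np.einsum('md,md->m', W, W)                 # precomputed ||w||^2 per unit
idx, val = np.array([1, 4]), np.array([0.5, 2.0])
bmu = sparse_bmu(idx, val, W, sq)

# Sanity check against the dense O(M*D) computation
x = np.zeros(6); x[idx] = val
assert bmu == int(np.argmin(((W - x) ** 2).sum(axis=1)))
```

The ||w||² terms can be maintained incrementally across weight updates, which is where the batch variant gains its parallelism.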
The document discusses time series analysis techniques in R, including decomposition, forecasting, clustering, and classification. It provides an overview of methods such as ARIMA modeling, dynamic time warping, discrete wavelet transforms, and decision trees. Examples are shown applying these techniques to air passenger data and synthetic control chart time series data, including decomposing, forecasting, hierarchical clustering with Euclidean and DTW distances, and classifying with decision trees using DWT features. Accuracy of over 80% is achieved on the classification tasks.
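Dynamic time warping, one of the distances mentioned for clustering, can be sketched with the standard dynamic program (a generic illustration, not the tutorial's R code):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D series via the
    standard O(len(a)*len(b)) dynamic program; it allows nonlinear time
    alignments that plain Euclidean distance cannot."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Warping absorbs the repeated sample, so the distance is zero even
# though the two series have different lengths
print(dtw_distance([0, 0, 1, 0], [0, 1, 0]))  # → 0.0
```

Hierarchical clustering can then be run on a matrix of pairwise DTW distances in place of Euclidean ones.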
Getting started with chemometric classification (Alex Henderson)
The document provides an overview of chemometric classification and resources for working with spectroscopic data. It discusses key terminology like variables, observations, and vector space. It also covers important preprocessing steps like normalization, mean centering, and principal components analysis (PCA). PCA finds orthogonal principal components that maximize the explained variance in the data in a lower dimensional space.
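The preprocessing chain described (mean centring, then PCA) can be sketched via an SVD. This is a standard construction, not the tutorial's own code; the toy "spectra" are synthetic:

```python
import numpy as np

def pca(X, k):
    """Project mean-centred data onto its first k principal components.
    The rows of Vt are orthonormal directions; the first k of them span
    the k-dimensional subspace that maximizes explained variance."""
    Xc = X - X.mean(axis=0)              # mean centring, as in the text
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T               # observations in PC space
    explained = (s ** 2) / (s ** 2).sum()
    return scores, explained[:k]

rng = np.random.default_rng(1)
# 100 synthetic "spectra" whose variance is dominated by one direction
X = rng.normal(size=(100, 1)) @ rng.normal(size=(1, 20)) \
    + 0.05 * rng.normal(size=(100, 20))
scores, ratio = pca(X, 2)
print(ratio[0] > 0.9)  # first PC captures almost all the variance → True
```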
Blind separation of complex-valued satellite-AIS data for marine surveillance... (IJECEIAES)
In this paper, the problem of the blind separation of complex-valued Satellite-AIS data for marine surveillance is addressed. Because the sources under consideration are cyclo-stationary signals with two close cyclic frequencies, we opt for spatial quadratic time-frequency domain methods. The use of an additional diversity, the time delay, is aimed at making it possible to undo the mixing of signals at the multi-sensor receiver. The suggested method involves three main stages. First, the spatial generalized mean Ambiguity function of the observations across the array is constructed. Second, in the Ambiguity plane, Delay-Doppler regions of high magnitude are determined and Delay-Doppler points of peaky values are selected. Third, the mixing matrix is estimated from these Delay-Doppler regions using our proposed non-unitary joint zero-(block) diagonalization algorithms so as to perform separation.
SEQUENTIAL CLUSTERING-BASED EVENT DETECTION FOR NONINTRUSIVE LOAD MONITORING (cscpconf)
The problem of change-point detection has been well studied and adopted in many signal processing applications. In such applications, the informative segments of the signal are the stationary ones before and after the change-point. However, for some novel signal processing and machine learning applications such as Non-Intrusive Load Monitoring (NILM), the information contained in the non-stationary transient intervals is of equal or even more importance to the recognition process. In this paper, we introduce a novel clustering-based sequential detection of abrupt changes in an aggregate electricity consumption profile with the accurate decomposition of the input signal into stationary and non-stationary segments. We also introduce various event models in the context of clustering analysis. The proposed algorithm is applied to building-level energy profiles with promising results for the residential BLUED power dataset.
This document describes an algorithm to classify music genres and bands based on time-frequency signatures extracted using the Gábor transform. The algorithm uses singular value decomposition to identify dominant modes in the time-frequency data, and linear discriminant analysis to classify new tracks. Testing showed the algorithm was most effective at classifying bands from different genres, achieving over 80% accuracy. Classification of bands within the same genre or different genres showed lower accuracy rates under 70%. Around ten dominant modes provided the most effective classification.
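The pipeline described (time-frequency signatures, SVD for dominant modes, LDA for classification) can be sketched end to end. This is a numpy-only illustration on synthetic tones, not the study's code: the Gaussian-windowed short-time FFT stands in for the Gábor transform, and a two-class Fisher discriminant stands in for the LDA step.

```python
import numpy as np

def gabor_features(signal, win=64, hop=32):
    """Magnitude spectrogram from a Gaussian-windowed short-time FFT
    (a discrete Gábor transform), flattened into a feature vector."""
    w = np.exp(-0.5 * ((np.arange(win) - win / 2) / (win / 6)) ** 2)
    frames = [signal[i:i + win] * w
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).ravel()

def fisher_lda_direction(A, B):
    """Two-class Fisher discriminant direction w ∝ Sw^-1 (mu_A - mu_B)."""
    mu_a, mu_b = A.mean(axis=0), B.mean(axis=0)
    Sw = np.cov(A.T) + np.cov(B.T) + 1e-6 * np.eye(A.shape[1])
    return np.linalg.solve(Sw, mu_a - mu_b)

rng = np.random.default_rng(2)
t = np.arange(1024)
# Two synthetic "bands": tones near 0.05 and 0.15 cycles/sample, in noise
make = lambda f: gabor_features(np.sin(2 * np.pi * f * t)
                                + 0.3 * rng.normal(size=t.size))
X_a = np.array([make(0.05) for _ in range(10)])
X_b = np.array([make(0.15) for _ in range(10)])

# SVD of the stacked spectrograms: keep ~ten dominant modes, as in the text
X = np.vstack([X_a, X_b])
U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
modes = Vt[:10]
A, B = X_a @ modes.T, X_b @ modes.T        # project onto the modes

w = fisher_lda_direction(A, B)
thresh = (A.mean(axis=0) @ w + B.mean(axis=0) @ w) / 2
correct = np.sum(A @ w > thresh) + np.sum(B @ w < thresh)
print(correct / 20)
```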
The document discusses experiments performed using TerraSAR-X (TSX) and TanDEM-X (TDX) satellites to demonstrate capabilities of distributed imaging with bi-static SAR systems. Three key experiments are described:
1) Super resolution in range was achieved through step-frequency acquisitions from both satellites, combining the signals coherently to increase range resolution beyond the individual satellite limitations.
2) Super resolution in azimuth used the satellites' Doppler offsets to synthesize a signal with twice the azimuth resolution of either satellite alone.
3) Quad-polarized images were synthesized from dual-polarized acquisitions from each satellite, using one polarization for imaging and the other for calibration.
This document discusses bistatic scatter radio systems for wireless sensor networks. It begins with an introduction and motivation for using bistatic scatter radio to enable low-power and low-cost dense sensor networks. It then provides an overview of the system model, including the fading characteristics, carrier emission, tag scattering, and reception at the SDR reader. Methods for demodulating FSK signals in bistatic scatter radio systems are presented, including optimal correlator-based demodulation. Performance analysis is conducted for noncoherent reception under Rayleigh fading conditions. The document concludes by mentioning the introduction of channel coding at tags to provide redundancy.
This document describes a machine learning framework for improving turbulence modelling in computational fluid dynamics simulations. The framework uses an unsupervised clustering algorithm to group flow field data into partitions, each representing a particular type of turbulence physics. Feature selection is applied to identify the most important variables for clustering. Models are then assigned to clusters using decision rules to enhance interpretability. The framework is tested on cases from an open turbulence dataset using k-means clustering, SHAP feature selection, and Skope-Rules induction. Visualizations of the clustering and feature importance are shown to validate the physics represented by each cluster.
Anomaly Detection in Sequences of Short Text Using Iterative Language Models (Cynthia Freeman)
The document discusses various methods for anomaly detection in time series data. It begins by defining time series and anomalies, noting that anomaly detection is challenging due to issues like lack of labeled data and data imbalance. It then covers characteristics of time series like seasonality, trends, and concept drift, and how to detect them. Various anomaly detection methods are outlined, including STL, SARIMA, Prophet, Gaussian processes, and RNNs. Evaluation methods and factors to consider in choosing a detection method are also discussed. The document provides an overview of approaches to determining the optimal anomaly detection model for a given time series and application.
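The common structure behind the decomposition-based detectors surveyed (STL, SARIMA, Prophet, ...) is: model the expected value, then threshold the residuals. A deliberately simple stand-in, not any specific method from the talk:

```python
import numpy as np

def residual_anomalies(x, period, z_thresh=4.0):
    """Flag points whose deviation from a per-phase seasonal-median
    baseline is extreme under a robust (MAD-based) z-score."""
    x = np.asarray(x, dtype=float)
    # Baseline: the median of all samples sharing each point's phase
    seasonal = np.array([np.median(x[i % period::period])
                         for i in range(len(x))])
    resid = x - seasonal
    mad = np.median(np.abs(resid - np.median(resid))) + 1e-12
    z = 0.6745 * (resid - np.median(resid)) / mad    # robust z-score
    return np.nonzero(np.abs(z) > z_thresh)[0]

# Daily-period toy series with one injected spike
t = np.arange(14 * 24)
series = np.sin(2 * np.pi * t / 24)
series[100] += 5.0
print(residual_anomalies(series, period=24))  # → [100]
```

The median baseline makes the detector robust to the very anomalies it is hunting, which is why it finds the spike without the spike contaminating the model.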
Scattering Model for Vegetation Canopies and Simulation of Satellite Navigati... (Frank Schubert)
This document summarizes Frank Schubert's Ph.D. defense on developing a scattering model for vegetation canopies and simulating satellite navigation channels. Schubert's research involved institutions including Aalborg University, the German Aerospace Center, and the European Space Agency. The research aims to analyze wave scattering by trees and evaluate signal tracking in multipath-prone environments through simulation. Previous work on scattering models is reviewed. The contents of Schubert's thesis are outlined, including developing a wideband channel model and performing measurements. Simulation results using the developed Satellite Navigation Channel Simulator are presented for different scenarios. The scattering model treats vegetation as scattering volumes filled with point scatterers. Time-variant channel responses and transfer functions are derived.
Compressed learning for time series classification (學翰 施)
This document proposes a compressed learning framework for time series classification using sparse envelope representations. It introduces compressed sensing concepts and describes creating a sparse envelope for time series by thresholding around the mean and standard deviation. A classification framework is developed using linear SVMs in the compressed domain. Experimental results on benchmark datasets demonstrate effectiveness of the envelope representations compared to state-of-the-art methods, as well as efficiency gains from compression. Real-world case studies on smart home applications show promising identification performance from envelope-based classifiers on sensor time series data.
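The envelope construction can be sketched as follows. This is a hypothetical simplification of the paper's representation (keep only samples beyond k standard deviations of the mean); the paper's exact thresholding may differ.

```python
import numpy as np

def sparse_envelope(x, k=1.0):
    """Illustrative sparse envelope: zero out every sample within
    k standard deviations of the series mean, keeping only the
    extreme excursions. The result is sparse, hence compressible."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    return np.where(np.abs(x - mu) > k * sigma, x - mu, 0.0)

rng = np.random.default_rng(3)
x = rng.normal(size=1000)
env = sparse_envelope(x, k=1.0)
sparsity = np.mean(env == 0.0)
print(round(sparsity, 2))  # roughly 0.68 for Gaussian data (within ±1σ)
```

A linear SVM can then be trained on random projections of such envelopes, which is the compressed-domain classification idea the paper builds on.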
International Journal of Computer Vision 71(2), 127–141, 2007.docx (vrickens)
This document proposes a kernel spectral matched filter for target detection in hyperspectral imagery. It begins with an overview of linear matched filtering and introduces the concept of implementing algorithms in kernel feature spaces. It then defines a nonlinear matched filter model in a kernel feature space, which is equivalent to a nonlinear matched filter in the original input space. Finally, it derives an expression for the kernel spectral matched filter by rewriting the matched filter defined in the kernel feature space in terms of kernel functions using the kernel trick. Simulation results on hyperspectral imagery show the kernel spectral matched filter outperforms the conventional linear matched filter.
This document provides an overview of time series data mining. It begins with an introduction to time series data and examples of time series similarity search tasks. It then discusses major time series mining tasks like indexing, clustering, classification, prediction and anomaly detection. Distance measures for time series similarity search are explained, including Dynamic Time Warping which allows for nonlinear time alignments. Dimensionality reduction techniques like Fourier analysis and discretization using Symbolic Aggregate Approximation are also summarized. The document is presented as an introduction to key concepts and techniques in time series data mining.
The document introduces two approaches to chemical prediction: quantum simulation based on density functional theory and machine learning based on data. It then discusses using graph-structured neural networks for chemical prediction on datasets like QM9. It presents Neural Fingerprint (NFP) and Gated Graph Neural Network (GGNN) models for predicting molecular properties from graph-structured data. Chainer Chemistry is introduced as a library for chemical and biological machine learning that implements these graph convolutional networks.
IRJET- A Novel Adaptive Sub-Band Filter Design with BD-VSS using Particle Swa... (IRJET Journal)
This document discusses a novel adaptive sub-band filter design using particle swarm optimization. It begins by reviewing related work on sub-band adaptive filtering techniques for noise cancellation, including sign sub-band adaptive filters and variable step sizes. It then describes the sign sub-band adaptive filter algorithm with individual weighting factors to improve convergence rate. The proposed method applies particle swarm optimization to the delayless closed-loop individual weighting factor sign sub-band adaptive filter with band-dependent variable step sizes. This achieves better convergence performance through 1-norm minimization in sub-bands and the decorrelating properties of sub-band adaptive filtering, with improved computational efficiency from the particle swarm optimization algorithm. The experimental results show the proposed method outperforms conventional sub-band adaptive filtering approaches.
A Novel CAZAC Sequence Based Timing Synchronization Scheme for OFDM System (IJAAS Team)
Several classical timing synchronization schemes have been proposed for timing synchronization in OFDM systems based on the correlation between identical parts of the OFDM symbol. These schemes show poor performance due to the presence of a plateau and significant side lobes in the timing metric. In this paper we present a timing synchronization scheme whose timing metric is based on a Constant Amplitude Zero Auto Correlation (CAZAC) sequence. The performance of the proposed timing synchronization scheme is better than that of the classical techniques.
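The CAZAC property that removes the correlation plateau can be seen with a Zadoff–Chu sequence, the canonical CAZAC family (the paper's exact sequence and timing metric may differ; the length and root below are arbitrary choices):

```python
import numpy as np

def zadoff_chu(u, N):
    """Zadoff–Chu sequence of odd length N and root u coprime with N:
    constant amplitude and zero cyclic autocorrelation (CAZAC)."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

N = 63
z = zadoff_chu(u=25, N=N)          # gcd(25, 63) = 1

# Constant amplitude
assert np.allclose(np.abs(z), 1.0)

# Zero cyclic autocorrelation at every non-zero lag: a single sharp
# correlation peak, with no plateau and no significant side lobes
acf = np.array([np.vdot(z, np.roll(z, lag)) for lag in range(N)])
assert np.isclose(abs(acf[0]), N)
assert np.all(np.abs(acf[1:]) < 1e-8)
```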
The document discusses methods to mitigate atmospheric delays in InSAR measurements. It presents experiments comparing atmospheric phase screens (APS) derived from InSAR to integrated water vapor (IWV) from numerical weather prediction models, IWV from MERIS satellite images, and GPS measurements. The results show NWP models capture large spatial scales but have random errors, MERIS is limited by cloud cover, and GPS provides the most accurate validation when stations are closely spaced.
Similar to HACKSing heterogeneity in cell motility
Anti-Universe And Emergent Gravity and the Dark Universe (Sérgio Sacani)
Recent theoretical progress indicates that spacetime and gravity emerge together from the entanglement structure of an underlying microscopic theory. These ideas are best understood in Anti-de Sitter space, where they rely on the area law for entanglement entropy. The extension to de Sitter space requires taking into account the entropy and temperature associated with the cosmological horizon. Using insights from string theory, black hole physics and quantum information theory we argue that the positive dark energy leads to a thermal volume law contribution to the entropy that overtakes the area law precisely at the cosmological horizon. Due to the competition between area and volume law entanglement the microscopic de Sitter states do not thermalise at sub-Hubble scales: they exhibit memory effects in the form of an entropy displacement caused by matter. The emergent laws of gravity contain an additional ‘dark’ gravitational force describing the ‘elastic’ response due to the entropy displacement. We derive an estimate of the strength of this extra force in terms of the baryonic mass, Newton’s constant and the Hubble acceleration scale a0 = cH0, and provide evidence for the fact that this additional ‘dark gravity force’ explains the observed phenomena in galaxies and clusters currently attributed to dark matter.
Presentation of our paper, "Towards Quantitative Evaluation of Explainable AI Methods for Deepfake Detection", by K. Tsigos, E. Apostolidis, S. Baxevanakis, S. Papadopoulos, V. Mezaris. Presented at the ACM Int. Workshop on Multimedia AI against Disinformation (MAD’24) of the ACM Int. Conf. on Multimedia Retrieval (ICMR’24), Thailand, June 2024. https://doi.org/10.1145/3643491.3660292 https://arxiv.org/abs/2404.18649
Software available at https://github.com/IDT-ITI/XAI-Deepfakes
Sexuality - Issues, Attitude and Behaviour - Applied Social Psychology - Psyc... (PsychoTech Services)
A proprietary approach developed by bringing together the best of learning theories from psychology, design principles from the world of visualization, and pedagogical methods from over a decade of training experience, enabling you to learn better, faster!
Mechanics:- Simple and Compound Pendulum (PravinHudge1)
A compound pendulum is a physical system with a more complex structure than a simple pendulum, incorporating its mass distribution and dimensions into its oscillatory motion around a fixed axis. Understanding its dynamics involves principles of rotational mechanics and the interplay between gravitational potential energy and kinetic energy. Compound pendulums are used in various scientific and engineering applications, such as seismology for measuring earthquakes, in clocks to maintain accurate timekeeping, and in mechanical systems to study oscillatory motion dynamics.
Embracing Deep Variability For Reproducibility and Replicability
Abstract: Reproducibility (aka determinism in some cases) constitutes a fundamental aspect in various fields of computer science, such as floating-point computations in numerical analysis and simulation, concurrency models in parallelism, reproducible builds for third parties integration and packaging, and containerization for execution environments. These concepts, while pervasive across diverse concerns, often exhibit intricate inter-dependencies, making it challenging to achieve a comprehensive understanding. In this short and vision paper we delve into the application of software engineering techniques, specifically variability management, to systematically identify and explicit points of variability that may give rise to reproducibility issues (eg language, libraries, compiler, virtual machine, OS, environment variables, etc). The primary objectives are: i) gaining insights into the variability layers and their possible interactions, ii) capturing and documenting configurations for the sake of reproducibility, and iii) exploring diverse configurations to replicate, and hence validate and ensure the robustness of results. By adopting these methodologies, we aim to address the complexities associated with reproducibility and replicability in modern software systems and environments, facilitating a more comprehensive and nuanced perspective on these critical aspects.
https://hal.science/hal-04582287
This presentation offers a general idea of seed structure, seed production, seed management, and allied technologies. It also covers the concept of gene erosion and the practices used to control it. Nursery and gardening are explored along with their importance in the related domain.
Discovery of An Apparent Red, High-Velocity Type Ia Supernova at 𝐳 = 2.9 wi... (Sérgio Sacani)
We present the JWST discovery of SN 2023adsy, a transient object located in host galaxy JADES-GS+53.13485−27.82088 with a host spectroscopic redshift of 2.903 ± 0.007. The transient was identified in deep James Webb Space Telescope (JWST)/NIRCam imaging from the JWST Advanced Deep Extragalactic Survey (JADES) program. Photometric and spectroscopic followup with NIRCam and NIRSpec, respectively, confirm the redshift and yield UV-NIR light-curve, NIR color, and spectroscopic information all consistent with a Type Ia classification. Despite its classification as a likely SN Ia, SN 2023adsy is both fairly red (E(B−V) ∼ 0.9) despite a host galaxy with low extinction and has a high Ca II velocity (19,000 ± 2,000 km/s) compared to the general population of SNe Ia. While these characteristics are consistent with some Ca-rich SNe Ia, particularly SN 2016hnk, SN 2023adsy is intrinsically brighter than the low-z Ca-rich population. Although such an object is too red for any low-z cosmological sample, we apply a fiducial standardization approach to SN 2023adsy and find that the SN 2023adsy luminosity distance measurement is in excellent agreement (≲ 1σ) with ΛCDM. Therefore, unlike low-z Ca-rich SNe Ia, SN 2023adsy is standardizable and gives no indication that SN Ia standardized luminosities change significantly with redshift. A larger sample of distant SNe Ia is required to determine if SN Ia population characteristics at high-z truly diverge from their low-z counterparts, and to confirm that standardized luminosities nevertheless remain constant with redshift.
SDSS1335+0728: The awakening of a ∼10⁶ M⊙ black hole (Sérgio Sacani)
Context. The early-type galaxy SDSS J133519.91+072807.4 (hereafter SDSS1335+0728), which had exhibited no prior optical variations during the preceding two decades, began showing significant nuclear variability in the Zwicky Transient Facility (ZTF) alert stream from December 2019 (as ZTF19acnskyy). This variability behaviour, coupled with the host-galaxy properties, suggests that SDSS1335+0728 hosts a ∼10⁶ M⊙ black hole (BH) that is currently in the process of ‘turning on’. Aims. We present a multi-wavelength photometric analysis and spectroscopic follow-up performed with the aim of better understanding the origin of the nuclear variations detected in SDSS1335+0728. Methods. We used archival photometry (from WISE, 2MASS, SDSS, GALEX, eROSITA) and spectroscopic data (from SDSS and LAMOST) to study the state of SDSS1335+0728 prior to December 2019, and new observations from Swift, SOAR/Goodman, VLT/X-shooter, and Keck/LRIS taken after its turn-on to characterise its current state. We analysed the variability of SDSS1335+0728 in the X-ray/UV/optical/mid-infrared range, modelled its spectral energy distribution prior to and after December 2019, and studied the evolution of its UV/optical spectra. Results. From our multi-wavelength photometric analysis, we find that: (a) since 2021, the UV flux (from Swift/UVOT observations) is four times brighter than the flux reported by GALEX in 2004; (b) since June 2022, the mid-infrared flux has risen more than two times, and the W1−W2 WISE colour has become redder; and (c) since February 2024, the source has begun showing X-ray emission. From our spectroscopic follow-up, we see that (i) the narrow emission line ratios are now consistent with a more energetic ionising continuum; (ii) broad emission lines are not detected; and (iii) the [OIII] line increased its flux ∼3.6 years after the first ZTF alert, which implies a relatively compact narrow-line-emitting region. Conclusions.
We conclude that the variations observed in SDSS1335+0728 could be explained either by a ∼10⁶ M⊙ AGN that is just turning on or by an exotic tidal disruption event (TDE). If the former is true, SDSS1335+0728 is one of the strongest cases of an AGN observed in the process of activating. If the latter were found to be the case, it would correspond to the longest and faintest TDE ever observed (or another class of still-unknown nuclear transient). Future observations of SDSS1335+0728 are crucial to further understand its behaviour. Key words. galaxies: active – accretion, accretion discs – galaxies: individual: SDSS J133519.91+072807.4
Evidence of Jet Activity from the Secondary Black Hole in the OJ 287 Binary S... (Sérgio Sacani)
We report the study of a huge optical intraday flare on 2021 November 12 at 2 a.m. UT in the blazar OJ 287. In the binary black hole model, it is associated with an impact of the secondary black hole on the accretion disk of the primary. Our multifrequency observing campaign was set up to search for such a signature of the impact based on a prediction made 8 yr earlier. The first I-band results of the flare have already been reported by Kishore et al. (2024). Here we combine these data with our monitoring in the R-band. There is a big change in the R–I spectral index by 1.0 ± 0.1 between the normal background and the flare, suggesting a new component of radiation. The polarization variation during the rise of the flare suggests the same. The limits on the source size place it most reasonably in the jet of the secondary BH. We then ask why we have not seen this phenomenon before. We show that OJ 287 was never before observed with sufficient sensitivity on the night when the flare should have happened according to the binary model. We also study the probability that this flare is just an oversized example of intraday variability using the Krakow data set of intense monitoring between 2015 and 2023. We find that the occurrence of a flare of this size and rapidity is unlikely. In machine-readable Tables 1 and 2, we give the full orbit-linked historical light curve of OJ 287 as well as the dense monitoring sample of Krakow.
Mechanisms and Applications of Antiviral Neutralizing Antibodies - Creative B... (Creative-Biolabs)
Neutralizing antibodies, pivotal in immune defense, specifically bind and inhibit viral pathogens, thereby playing a crucial role in protecting against and mitigating infectious diseases. In this slide, we will introduce what antibodies and neutralizing antibodies are, the production and regulation of neutralizing antibodies, their mechanisms of action, classification and applications, as well as the challenges they face.
7. Heterogeneity in protrusion activities: complex multi-dimensional time-series data with heterogeneous hidden patterns
[Figure: actin regulator dynamics, ensemble averaged]
8. HACKS: Deconvolution of Heterogeneous Activity in Coordination of cytosKeleton at the Subcellular level
(1) Live cell imaging: cells expressing fluorescently tagged actin, Arp2/3 & VASP
(2) Pre-processing: extract local velocity & intensity time series
(3) Time-series clustering to identify subcellular protrusion phenotypes: dimensional reduction (SAX), feature extraction (ACF), distance calculation, clustering (DP)
(4) Characterize the molecular dynamics associated with the phenotype
(5) Functional validation of the associated molecules by drug tests
Chuangqi Wang*, Hee June Choi*, …, Kwonmoo Lee (2018) Nat. Comms. (* = equal contribution)
9. Local sampling from live cell imaging
[Figure: leading edge of a cell expressing fluorescently tagged actin, with a sampling window; scale bar 5 μm. Image: Svitkina, T. M. et al., J Cell Biol (1997)]
20. Exemplary aligned velocity time series
Time-series clustering, step (1) dimensional reduction:
dimensional reduction (SAX) → feature extraction (ACF) → distance calculation (ED) → density peak clustering
21. Symbolic Aggregate approXimation (SAX)
[Figure: a velocity time series discretized over SAX time intervals into a four-symbol SAX representation]
Proposed dissimilarity measure of two velocity time series in SAX = approximate Euclidean distance of the SAX words
Time-series clustering, step (1) dimensional reduction:
dimensional reduction (SAX) → feature extraction (ACF) → distance calculation (ED) → density peak clustering
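A minimal SAX sketch, assuming a four-letter alphabet with the standard Gaussian breakpoints and Lin et al.'s MINDIST as the "approximate Euclidean distance" (the presentation does not specify its exact parameters):

```python
import numpy as np

# Breakpoints splitting N(0,1) into four equiprobable regions
BREAKPOINTS = np.array([-0.6745, 0.0, 0.6745])

def sax(ts, n_segments):
    """SAX word for a series: z-normalize, piecewise-aggregate into
    n_segments means (PAA), then map each mean to the index of the
    Gaussian-equiprobable region it falls in (symbols 0..3)."""
    ts = np.asarray(ts, dtype=float)
    z = (ts - ts.mean()) / (ts.std() + 1e-12)
    paa = np.array([seg.mean() for seg in np.array_split(z, n_segments)])
    return np.searchsorted(BREAKPOINTS, paa)

def sax_mindist(sa, sb, n_original):
    """Approximate (lower-bounding) Euclidean distance between two SAX
    words: adjacent symbols cost nothing, distant symbols cost the gap
    between their nearest breakpoints (Lin et al.'s MINDIST)."""
    def cell(i, j):
        if abs(i - j) <= 1:
            return 0.0
        return BREAKPOINTS[max(i, j) - 1] - BREAKPOINTS[min(i, j)]
    scale = np.sqrt(n_original / len(sa))
    return scale * np.sqrt(sum(cell(i, j) ** 2 for i, j in zip(sa, sb)))

t = np.linspace(0.0, 1.0, 64)
accelerating = t ** 2                  # velocity steadily speeding up
fluctuating = np.sin(8 * np.pi * t)    # oscillating velocity
word_a = sax(accelerating, 8)
word_f = sax(fluctuating, 8)
print(word_a)                          # symbols rise monotonically
print(sax_mindist(word_a, word_f, len(t)) >= 0.0)  # → True
```

The coarse symbolic words make the subsequent clustering robust to sample-level noise while preserving the overall velocity shape.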
22. Exemplary autocorrelation coefficients (ACF)
Time-series clustering, step (2) feature extraction:
dimensional reduction (SAX) → feature extraction (ACF) → distance calculation (ED) → density peak clustering
Dissimilarity measure of two velocity time series in SAX = approximate Euclidean distance of the SAX words
Dissimilarity measure of two velocity time series = squared Euclidean distance between their autocorrelation coefficients
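The ACF feature and the squared-Euclidean dissimilarity built on it can be sketched as follows (illustrative lag count; the study's exact choice is not given here):

```python
import numpy as np

def acf_features(ts, n_lags=20):
    """Autocorrelation coefficients at lags 1..n_lags, used as shape
    features: a periodic series yields an oscillating ACF, while a
    steadily accelerating one yields a slowly decaying ACF."""
    x = np.asarray(ts, dtype=float)
    x = x - x.mean()
    denom = float(np.dot(x, x)) + 1e-12
    return np.array([np.dot(x[:-k], x[k:]) / denom
                     for k in range(1, n_lags + 1)])

def acf_dissimilarity(a, b, n_lags=20):
    """Squared Euclidean distance between ACF vectors, matching the
    dissimilarity used at this stage of the pipeline."""
    return float(np.sum((acf_features(a, n_lags)
                         - acf_features(b, n_lags)) ** 2))

t = np.linspace(0.0, 4.0 * np.pi, 200)
periodic = np.sin(4.0 * t)
accelerating = t ** 2
print(acf_dissimilarity(periodic, periodic))          # → 0.0
print(acf_dissimilarity(periodic, accelerating) > 0)  # → True
```

Because the ACF is amplitude-normalized, this distance compares the temporal shape of protrusion velocity rather than its magnitude.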
40. Subcellular protrusion phenotypes and associated molecular dynamics
[Figure: cell edge before HACKS; scale bar 5 μm. Bar chart: proportion of clusters per sample (% of total) for Actin, Arp3, VASP, and HaloTag probes.]
[Figure: registered velocity time series (velocity in μm/min, registered time −100 to 250 s) and per-sample velocity maps for Cluster I (n=764), Cluster II-1 (n=367), Cluster II-2 (n=625), Cluster II-3 (n=674), and Cluster III (n=326).]
Subcellular protrusion phenotypes: “fluctuating”, “periodic”, and “accelerating”; Cluster III (n=326) is the “accelerating” phenotype.
[Figure: ensemble-average velocity profile (n=2756) compared with the Cluster III velocity profile (n=326).]
[Figure: registered, normalized intensity time series of Actin (n=85 and n=934), Arp3 (n=102 and n=757), and VASP (n=101 and n=682) alongside the velocity profiles.]
VASP promotes the “accelerating” protrusion phenotype.
Kinetics also serves as information: using a machine learning approach, differential coordination of actin regulators was found to generate heterogeneity in subcellular motility.
41. Conclusion
1. We developed an analysis pipeline, HACKS, to dissect protrusion heterogeneity at the subcellular level.
2. HACKS identified hidden patterns in complex and heterogeneous velocity time-series data.
3. HACKS provided mechanistic details of the molecular dynamics associated with protrusion phenotypes.
4. HACKS revealed subtle specificity of the drug target, which can potentially be applied in the clinic.
5. HACKS can potentially be applied to other types of time-series data in cell biological studies.
42. Further Study: DeepHACKS
DeepHACKS dissects the heterogeneity of subcellular time-series datasets using bidirectional LSTM (Long Short-Term Memory) neural networks: an autoencoder extracts fine-grained temporal features, which are then integrated with the outcomes of the traditional machine learning pipeline.
43. Acknowledgments
Lee Lab: Prof. Kwonmoo Lee; Chauncey Wang, M.S.; Sungjin Kim, Ph.D. (former)
Collaborators: Prof. Doohoon Kim (UMMS); Namgyu Lee, Ph.D.; Prof. Yongho Bae (SUNY Buffalo); Aesha Desai, Ph.D.
Editor's Notes
As you may all agree, cell motility is a fundamentally important property of biological systems. If cells are not moving at all, even at the slightest level, then they are dead. Cell motility is critical for organisms to survive and thrive, as it underlies developmental processes and immune responses. If it goes wrong, we can get metastatic cancer. So understanding the precise mechanism of cell motility is important.
However, as you saw in the background movie from the previous slide, the motility of a cell or a population of cells exhibits a significant level of heterogeneity, stochasticity, and plasticity, which is not surprising considering that heterogeneity is a fundamental and prevalent property of biological systems.
◦ But, methods to identify, quantify and characterize heterogeneity have been lacking and mostly limited to isolated single-cell studies.
◦ So far, the mechanism of cell protrusion has been understood based on the ensemble average of actin regulator dynamics, which could lead to the loss of critical information.
Therefore, we developed a machine learning approach, HACKS.
Local sampling at the mesoscopic scale.
Generates clusters of arbitrary shapes.
Robust against noise.
No K value required in advance.
Somewhat similar to human vision.
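The density peak (DP) clustering step with the properties noted above can be sketched as follows. This is a simplified reimplementation of Rodriguez & Laio's algorithm, not the authors' code: picking centers by the rho*delta score is a shortcut for the decision-graph step, where centers (and hence K) are read off rather than fixed in advance.

```python
import numpy as np

def density_peak_cluster(X, d_c, n_centers):
    """Sketch of density-peak clustering (Rodriguez & Laio, 2014).
    rho_i counts neighbours within cutoff d_c; delta_i is the distance
    to the nearest denser point. Centers combine high rho and high
    delta; every other point inherits the label of its nearest denser
    neighbour, which allows arbitrarily shaped clusters and tolerates
    noise."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    rho = (D < d_c).sum(axis=1) - 1          # exclude the point itself
    order = np.argsort(-rho)                 # densest first
    n = len(X)
    delta = np.empty(n)
    parent = np.full(n, -1)
    delta[order[0]] = D[order[0]].max()      # densest point: max distance
    for rank in range(1, n):
        i = order[rank]
        denser = order[:rank]
        j = denser[np.argmin(D[i, denser])]
        delta[i], parent[i] = D[i, j], j
    centers = np.argsort(-(rho * delta))[:n_centers]
    labels = np.full(n, -1)
    labels[centers] = np.arange(n_centers)
    for i in order:                          # parents are labelled first
        if labels[i] == -1:
            labels[i] = labels[parent[i]] if parent[i] >= 0 else 0
    return labels

# Two well-separated blobs: each should come out as a single cluster
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)),
               rng.normal(5.0, 0.3, (30, 2))])
labels = density_peak_cluster(X, d_c=1.0, n_centers=2)
print(len(set(labels[:30])), len(set(labels[30:])))
```

In the decision-graph form, points with simultaneously high rho and high delta stand out visually as centers, which is why no K needs to be supplied and why the selection feels "somewhat similar to human vision".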