This document provides information about Petteri Teikari, PhD, including his educational background and affiliation with the Singapore Eye Research Institute, followed by a listing of his slide decks and related resources. Topics range from ophthalmic imaging hardware and deep learning for biomedical data to chronobiology and wearable sensing, with links to slides, code implementations, and alternative download locations.
From traditional desktop to novel optical designs in small form factors. Towards portable low-cost fundus imaging designs with computational imaging techniques for image quality improvement.
Possible future avenues for ophthalmic imaging combining advanced techniques and deep learning. "Bubbling under the surface, and inspiration from ‘bioimaging’ in general"
Case for chronobiological experiments for rodents and non-human primates.
Practical guide for selecting components for your lighting system design to be used with other behavioral measures.
Alternative download link:
https://www.dropbox.com/s/34j71dchhstvy0d/animal_lighting.pdf?dl=0
Hardware landscape from computer vision to wearable sensors, and a light intro for UX requirements to ensure adherence and engagement.
At the intersection of new sensors, big data, deep learning, gamification, behavioral medicine and human factors.
Applications benefiting from "quantitative sensorimotor training", "precision exercise", or "precision physiotherapy" include weight and strength training, powerlifting, bodybuilding, martial arts, yoga, dance, musical instrument training, and post-surgery rehabilitation (e.g. after ACL tears).
Alternative download link:
https://www.dropbox.com/s/wcfrzdjkn58xjdq/physio_pipeline_hw.pdf?dl=0
Design to accommodate “intelligent adaptive experiments” with future-proof hardware for deep learning-enabled imaging and neuroscience.
In other words, how to design future-proof measurement systems that are both easy to set up and scalable to the more advanced measurement paradigms of the future, and how you might want to structure your data acquisition so it can be used efficiently with deep learning in neuroscience.
Alternative download link:
https://www.dropbox.com/s/j5r8vifvh6e7bfp/animal_instrumentation.pdf?dl=0
From unimodal image classification to integrative multimodal deep learning pipelines in disease classification, disease management and predictive personalised healthcare.
Purkinje imaging for crystalline lens density measurement | Petteri Teikari, PhD
Brief introduction to the non-invasive, inexpensive and fast Purkinje image-based method for measuring the spectral transmittance (and hence the density) of the human crystalline lens in vivo.
Alternative download link:
https://www.dropbox.com/s/588y7epy13n34xo/purkinje_imaging.pdf?dl=0
Shallow introduction for Deep Learning Retinal Image Analysis | Petteri Teikari, PhD
Overview of retinal imaging techniques such as fundus photography and optical coherence tomography (OCT), along with future upgrades such as multispectral imaging, OCT angiography, adaptive optics imaging and polarization-sensitive OCT. This is followed by an overview of deep learning image analysis methods suitable for use with these retinal imaging techniques.
Alternative download link: https://www.dropbox.com/s/n01w02cjaf68vbo/retina_deepLearning_pipeline.pdf?dl=0
Deep Learning for Biomedical Unstructured Time Series | Petteri Teikari, PhD
1D convolutional neural networks (CNNs) for time series analysis, with inspiration from beyond the biomedical field. Short intro to the various steps involved in time series analysis, including outlier detection, imputation, denoising, segmentation, classification and forecasting.
Available also from:
https://www.dropbox.com/s/cql2jhrt5mdyxne/timeSeries_deepLearning.pdf?dl=0
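As a minimal illustration of the preprocessing steps named above (outlier detection, imputation, denoising), here is a plain-Python sketch; the function names and the simple z-score / moving-average choices are illustrative assumptions, not code from the slides:

```python
from statistics import mean, stdev

def detect_outliers(series, z_thresh=3.0):
    """Flag samples whose global z-score exceeds z_thresh."""
    mu, sigma = mean(series), stdev(series)
    return [abs(x - mu) / sigma > z_thresh for x in series]

def impute_linear(series, mask):
    """Replace flagged samples by linear interpolation between valid neighbours."""
    out = list(series)
    valid = [i for i, bad in enumerate(mask) if not bad]
    for i, bad in enumerate(mask):
        if not bad:
            continue
        left = max((j for j in valid if j < i), default=None)
        right = min((j for j in valid if j > i), default=None)
        if left is None:
            out[i] = out[right]
        elif right is None:
            out[i] = out[left]
        else:
            t = (i - left) / (right - left)
            out[i] = out[left] + t * (out[right] - out[left])
    return out

def moving_average(series, window=3):
    """Denoise with a centred moving average (edges use a shrunken window)."""
    half = window // 2
    return [mean(series[max(0, i - half):i + half + 1])
            for i in range(len(series))]
```

A 1D CNN would then consume the cleaned series for segmentation, classification or forecasting.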
High-level overview of the specifications for a portable multispectral/hyperspectral fundus imager built around the custom MEMS filter developed by VTT, Finland.
Additionally, one could do multispectral Purkinje imaging for a quick estimation of spectral ocular media transmittance.
The device could be used for structural imaging ("classical fundus") as well as functional vessel response imaging with oximetry.
Alternative download link:
https://www.dropbox.com/s/g0ladhqw075w59j/multispectral_fundus_camera_rev2.pdf?dl=0
Practical Considerations in the design of Embedded Ophthalmic Devices | Petteri Teikari, PhD
Practical level introduction for ophthalmic device design.
How, in the future, multimodal measurement devices will replace unimodal devices with their simple decision-tree indices.
Some examples of embedded cameras, measurement illumination and stimulus presentation illumination are shown.
Alternative download link:
https://www.dropbox.com/s/lt76ohoeusopkoo/practicalConsiderations_embeddedOphthalmicDevices.pdf?dl=0
Time-resolved biomedical sensing through scattering medium | Petteri Teikari, PhD
Case study with pupillometry through closed eyelids for neurological monitoring.
Download link: https://www.dropbox.com/s/x0f5q6cz5ax33s4/timeResolvedSensing.pdf?dl=0
Using physics-based OCT Monte Carlo simulation and wave optics models for synthesising new OCT volumes for ophthalmic deep learning.
Alternative download link:
https://www.dropbox.com/s/ax15qy47yi76eex/OCT_MonteCarlo.pdf?dl=0
Developing an inexpensive optical measurement device to estimate ocular media density.
Selection of an HDR camera, a deep learning software stack for Purkinje image detection and segmentation, and an NVIDIA Jetson as the embedded computer triggering either LED or laser light to be projected onto the cornea.
Alternative download link:
https://www.dropbox.com/s/fh7r8szuc2pctfr/purkinje_imaging_inPractice.pdf?dl=0
Multimodal RGB-D+RF-based sensing for human movement analysis | Petteri Teikari, PhD
Combining RGB-D based computer vision with commodity Wi-Fi for pose estimation and human movement analysis for action recognition.
Think of applications especially in healthcare settings, where Wi-Fi access points already exist and adding USB Wi-Fi dongles to a Raspberry Pi (or dedicated chips) is an easy way to create "operational awareness" of all your patients.
Alternative download link:
https://www.dropbox.com/s/awkqqfhibesjcb9/multimodal_remote_MovementSensing.pdf?dl=0
Short intro to some design considerations around hyperspectral retinal imaging, both for research-grade desktop setups built around a supercontinuum laser and an AOTF tunable filter, and for mobile low-cost retinal imagers.
Available also from:
https://www.dropbox.com/s/5brchl9ntqno0i9/hyperspectral_retinal_imaging.pdf?dl=0
Using off-the-shelf ultrasound imagers, and the transition to portable system-on-chip ultrasound imagers such as the Butterfly IQ.
Embedded devices such as the Butterfly IQ can be further improved by integrating deep learning / artificial intelligence at the device level, and naturally at the post-processing and analysis levels.
Alternative download link:
https://www.dropbox.com/s/rlwv7m29mh6y2w6/pupillometry_throughTheEyelids.pdf?dl=0
For non-grid 3D data such as point clouds and meshes, and for inherently graph-based data.
Inherently graph-based data include, for example, brain connectivity analysis, scientific article citation networks, and (social) network analysis.
Alternative download link:
https://www.dropbox.com/s/2o3cofcd6d6e2qt/geometricGraph_deepLearning.pdf?dl=0
High-level concepts for applications such as:
1) Myopia, 2) Jetlag, 3) Seasonal Affective Disorder (SAD)
If you want to add some tech to eyewear / glasses / sunglasses design projects, this slideshow serves as a high-level introduction to the technical details.
Alternative download link:
https://www.dropbox.com/s/qe8dpji6gwh1s8v/lightTreatmentGlasses_concepts.pdf?dl=0
Instead of talking about artificial intelligence at the organizational level in hospitals and research laboratories, the focus for non-machine-learning practitioners should be on understanding the data pipes and what is involved around model training.
Alternative download link:
https://www.dropbox.com/s/9tv673sxkxcnojj/dataStrategyForOphthalmology.pdf?dl=0
Image restoration techniques covered include denoising, deblurring and super-resolution for 3D images and models.
From classical computer vision techniques to contemporary deep learning based processing for both ordered and unordered point clouds, depth maps and meshes.
IMPROVED NEURAL NETWORK PREDICTION PERFORMANCES OF ELECTRICITY DEMAND: MODIFY... | csandit
Accurate prediction of electricity demand can bring extensive benefits to any country, as the forecast values help the relevant authorities take decisions regarding electricity generation, transmission and distribution more appropriately. The literature reveals that, compared to conventional time series techniques, artificial intelligence approaches provide better prediction accuracies. However, the accuracy of predictions using intelligent approaches such as neural networks is strongly influenced by the correct selection of inputs and the number of neuro-forecasters used for prediction. This research shows how a cluster analysis performed to group similar day types can contribute towards selecting a better set of neuro-forecasters. Daily total electricity demands for five years were considered for the analysis, and each date was assigned to one of thirteen day-types in a Sri Lankan context. As a stochastic trend could be seen over the years, the trend was removed by taking the first difference of the series prior to performing the k-means clustering. Three different clusters were found using silhouette plots, and thus three neuro-forecasters were used for predictions. This paper illustrates the proposed modified neural network procedure using electricity demand data.
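The detrending and clustering steps described in the abstract can be sketched in a few lines of Python; the scalar day features and the quantile initialisation are simplifying assumptions for illustration, not the paper's exact procedure:

```python
def first_difference(series):
    """Remove a stochastic trend by differencing: d[t] = x[t+1] - x[t]."""
    return [b - a for a, b in zip(series, series[1:])]

def kmeans_1d(points, k, iters=50):
    """Plain Lloyd's k-means on scalar features (e.g. mean detrended demand
    per day-type); centroids start at evenly spaced quantiles (assumes k >= 2)."""
    srt = sorted(points)
    centroids = [srt[(i * (len(srt) - 1)) // (k - 1)] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid per point
        labels = [min(range(k), key=lambda c: abs(p - centroids[c]))
                  for p in points]
        # update step: centroid = mean of its members
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, labels
```

One neuro-forecaster would then be trained per cluster, with the cluster count chosen by, e.g., silhouette analysis.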
AN EFFICIENT WAVELET BASED FEATURE REDUCTION AND CLASSIFICATION TECHNIQUE FOR... | ijcseit
This research paper proposes an improved feature reduction and classification technique to identify mild and severe dementia from brain MRI data. Manual interpretation of changes in brain volume based on visual examination by a radiologist or physician may lead to missed diagnoses when a large number of MRIs are analyzed. To avoid this human error, an automated intelligent classification system is proposed, which caters to the need for classification of brain MRI after identifying abnormal MRI volume, for the diagnosis of dementia. In this research work, advanced classification techniques using Support Vector Machines (SVM) based on Particle Swarm Optimisation (PSO) and a Genetic Algorithm (GA) are compared, and feature reduction by wavelets and PCA is analysed. From this analysis, it is observed that the proposed PSO-based SVM classification is more efficient than SVM trained with GA, and that wavelet-based feature reduction yields better results than PCA.
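A toy sketch of wavelet-based feature reduction of the kind compared here: a single-level Haar transform halves the feature count while retaining the coarse signal shape. This is a generic illustration, not the paper's implementation, which would typically use a wavelet library on 2D/3D MRI volumes:

```python
import math

def haar_step(signal):
    """One level of the Haar wavelet transform; returns (approximation,
    detail) coefficients, each half the length of the input."""
    assert len(signal) % 2 == 0, "signal length must be even"
    approx = [(a + b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def reduce_features(signal, levels=1):
    """Keep only the approximation coefficients after `levels` Haar steps,
    reducing dimensionality by a factor of 2**levels."""
    out = list(signal)
    for _ in range(levels):
        out, _ = haar_step(out)
    return out
```

The reduced feature vectors would then feed the SVM classifier; PCA would instead project onto the leading eigenvectors of the feature covariance.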
A Review on Classification Based Approaches for Steganalysis Detection | Editor IJCATR
This paper presents two scenarios of image steganalysis. The first scenario is an alternative feature set for steganalysis based on the rate-distortion characteristics of images. The features rest on two key observations: i) data embedding typically increases the image entropy in order to encode the hidden messages; ii) data embedding methods are limited to the set of small, imperceptible distortions. The proposed feature set is used as the basis of a steganalysis algorithm, and its performance is investigated using different data hiding methods. The second scenario is a new blind approach to image steganalysis based on the contourlet transform and a nonlinear support vector machine. Properties of the contourlet transform are used to extract image features; an important aspect of this paper is that it uses a minimal number of features in the transform domain and gives better accuracy than many existing steganalysis methods. The efficiency of the proposed method is demonstrated through experimental results, and its performance is compared with the contourlet-based steganalyzer (WBS). Finally, the results show that the proposed method is very efficient in terms of detection accuracy and computational cost.
Application of Medical Image Fusion models, to OCT and Fundus Photographic Im... | Ben Adekunle
Dissertation on medical image fusion models. One area of particular interest to anyone undertaking a research project in image fusion will be the comprehensive breakdown of the fast discrete curvelet transform and the preceding section on wavelets.
A Survey On Tracking Moving Objects Using Various Algorithms | IJMTST Journal
Sparse representation has been applied to the object tracking problem. Mining the self-similarities between particles via multitask learning can improve tracking performance. However, some particles may differ from the others when they are sampled from a large region, and imposing the same structure on all particles may degrade the results. To overcome this problem, we propose a tracking algorithm based on robust multitask sparse representation (RMTT) in this letter. When learning the particle representations, our algorithm decomposes the sparse coefficient matrix into two parts: joint sparse regularization is imposed on one coefficient matrix, while element-wise sparse regularization is imposed on the other. The former regularization exploits the self-similarities of particles, while the latter considers the differences between them.
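The decomposition described in the abstract can be written compactly. With D a dictionary of target templates, X the observed particles, and the coefficient matrix split as P + Q (the symbols here are assumed notation for illustration, not taken from the paper), the objective has the form:

```latex
\min_{P,\,Q}\; \lVert X - D\,(P + Q) \rVert_F^2
  \;+\; \lambda_1 \lVert P \rVert_{2,1}
  \;+\; \lambda_2 \lVert Q \rVert_{1,1}
```

Here the row-wise $\ell_{2,1}$ norm on $P$ enforces the joint sparsity shared across particles, while the element-wise $\ell_{1,1}$ norm on $Q$ lets individual particles deviate from the shared structure.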
A Novel Feature Selection with Annealing For Computer Vision And Big Data Lea... | theijes
Many computer vision and medical imaging problems are faced with learning from large-scale datasets, with millions of observations and features. A novel efficient learning scheme tightens a sparsity constraint by gradually removing variables based on a criterion and a schedule. The attractive fact that the problem size keeps dropping throughout the iterations makes it particularly suitable for big data learning. The method applies generically to the optimization of any differentiable loss function, and finds applications in regression, classification and ranking. The resulting algorithms build variable screening into estimation and are extremely simple to implement. Theoretical guarantees of convergence and selection consistency are provided. Experiments on real and synthetic data show that the proposed method compares very favourably with other state-of-the-art methods in regression, classification and ranking while being computationally very efficient and scalable.
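The annealing idea, as described, is simple enough to sketch: run gradient descent on a differentiable loss while removing the weakest variables on a schedule until only the desired number remain. Below is a minimal plain-Python sketch on a linear regression loss; the linear schedule and learning rate are illustrative assumptions, not the paper's exact settings:

```python
import random

def fsa_linear(X, y, n_keep, epochs=400, lr=0.2):
    """Feature-selection-with-annealing sketch: batch gradient descent on a
    squared loss, shrinking the active feature set linearly towards n_keep
    during the first half of training."""
    n, p = len(X), len(X[0])
    active = list(range(p))
    w = [0.0] * p
    ramp = max(1, epochs // 2)
    for epoch in range(epochs):
        # batch gradient of the mean squared error over active features only
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            err = sum(w[j] * xi[j] for j in active) - yi
            for j in active:
                grad[j] += err * xi[j] / n
        for j in active:
            w[j] -= lr * grad[j]
        # annealing schedule: the problem size keeps dropping as training runs
        target = max(n_keep, p - (p - n_keep) * (epoch + 1) // ramp)
        if len(active) > target:
            active = sorted(active, key=lambda j: -abs(w[j]))[:target]
            w = [w[j] if j in active else 0.0 for j in range(p)]
    return w, sorted(active)
```

Because screening happens inside the descent loop, each removal makes subsequent iterations cheaper, which is the property the abstract highlights for big-data learning.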
Maxillofacial Pathology Detection Using an Extended a Contrario Approach Comb... | sipij
In this work, we develop a method for maxillofacial pathology detection in 3D CT medical images. Our method is based on the "a contrario" approach, a statistical region-based method. To do this, we detect significant changes in regions previously delineated by a physician anatomist. We then apply a fuzzy-logic decision algorithm to the detected changes and obtain a decision with a degree of uncertainty. Tests conducted on a database of real images have shown the performance and robustness of the proposed approach.
From lung/heart/ambient source separation to clinical unimodal classification.
Alternative download link:
https://www.dropbox.com/scl/fi/8s7uq4h0fi8lgqbzqwg83/wearableMic_signal.pdf?rlkey=l2tqg5yffd4e0w224g3cs6pfl&dl=0
Next Gen Ophthalmic Imaging for Neurodegenerative Diseases and Oculomics | Petteri Teikari, PhD
Shallow literature analysis on recent trends in (multimodal) ophthalmic imaging with focus on neurodegenerative disease imaging / oculomics. Open-ended literature review on what you could be building next.
#1/2: Hardware
#2/2: Computational imaging (coming)
Alternative download link:
https://www.dropbox.com/scl/fi/ebp5xkhm3ngfu80hw0lvo/retina_imaging_2024.pdf?rlkey=eeikf3ewxdb481v06wxm34mqu&dl=0
Next Gen Computational Ophthalmic Imaging for Neurodegenerative Diseases and ... | Petteri Teikari, PhD
Shallow literature analysis on recent trends in computational ophthalmic imaging with focus on neurodegenerative disease imaging / oculomics.
Open-ended literature review on what you could be building next.
#1/2: Hardware
#2/2: Computational imaging
Alternative download link:
https://www.dropbox.com/scl/fi/d34pgi3xopfjbrcqj2lvi/retina_imaging_2024_computational.pdf?rlkey=xnt1dbe8rafyowocl9cbgjh3p&dl=0
Skin temperature as a proxy for core body temperature (CBT) and circadian phase | Petteri Teikari, PhD
Using distal temperature (wrist temperature with a smartwatch, or finger temperature with a smart ring such as the Oura) to estimate core body temperature (CBT).
We can then use wrist temperature shifts as circadian phase shift estimates in circadian phase management, for example when prescribing melatonin and/or light exposure to mitigate the effects of jet lag.
Alternative download link:
https://www.dropbox.com/scl/fi/es7174291yws262rhr568/cbt_estimation.pdf?rlkey=846yeed1wrqsjgkx7kp8ccc2y&dl=0
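As a toy illustration of the phase-shift idea (the function names are assumptions; a real pipeline would fit, e.g., a cosinor model to artefact-cleaned wearable data rather than take a raw minimum):

```python
def daily_minimum_time(hours, temps):
    """Clock time (hours, 0-24) at which the distal skin temperature is
    lowest; the nocturnal temperature minimum is a common circadian
    phase marker."""
    i = min(range(len(temps)), key=lambda k: temps[k])
    return hours[i] % 24

def phase_shift_estimate(baseline_min_h, current_min_h):
    """Signed shift in hours between two daily temperature minima, wrapped
    to [-12, +12); positive values indicate a phase delay."""
    return (current_min_h - baseline_min_h + 12) % 24 - 12
```

The estimated shift could then drive the timing of melatonin or light exposure recommendations.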
Summary of "Precision strength training: The future of strength training with... | Petteri Teikari, PhD
Short visual summary of the preprint:
Petteri Teikari and Aleksandra Pietrusz (2021). "Precision Strength Training: Data-driven Artificial Intelligence Approach to Strength and Conditioning." SportRxiv, May 20. https://doi.org/10.31236/osf.io/w734a
Precision strength training: The future of strength training with data-driven... | Petteri Teikari, PhD
Visual presentation of the preprint:
Petteri Teikari and Aleksandra Pietrusz (2021). "Precision Strength Training: Data-driven Artificial Intelligence Approach to Strength and Conditioning." SportRxiv, May 20. https://doi.org/10.31236/osf.io/w734a
Alternative download link:
https://www.dropbox.com/scl/fi/47nqp579t1b4m1zs0irhw/precision_strength_training.pdf?rlkey=05mzzw2ep8id71mq86936hvfi&dl=0
Intracerebral Hemorrhage (ICH): Understanding the CT imaging features | Petteri Teikari, PhD
Overview of CT basics and deep learning literature mostly focused on the analysis of ICH.
Intracerebral hemorrhage (ICH), also known as cerebral bleed, is a type of intracranial bleed that occurs within the brain tissue or ventricles. Intracerebral bleeds are the second most common cause of stroke, accounting for 10% of hospital admissions for stroke.
For spontaneous ICH seen on CT scan, the death rate (mortality) is 34–50% by 30 days after the insult, and half of the deaths occur in the first 2 days. Even though the majority of deaths occur in the first days after ICH, survivors have a long-term excess mortality of 27% compared to the general population.
Deep learning and computational steps roughly can be categorized to 1) Preprocessing, 2) Image Restoration (denoising, deblurring, inpainting, reconstruction), 3) Diffeomorphic registration for spatial normalization, 4) Hand-crafted radiomics and texture analysis, 5) Hemorrhage segmentation, among other relevant head CT issues
Alternative download link: https://www.dropbox.com/s/8l2h93cl2pmle4g/CT_hemorrhage.pdf?dl=0
Quick overview of hand pose tracking for rheumatoid arthritis (RA) management, with a focus on clinical applications.
For best clinical outcome, you might want to think how to integrate additional modalities like surface electromyography (sEMG) and hand function assessments (like hand grip strength, and finger extension strength) to the clinical prognostics model.
Alternative download link:
https://www.dropbox.com/s/rexzt3d5tsm1vgc/hand_tracking_arthritis_management.pdf?dl=0
Creativity as Science: What designers can learn from science and technology | Petteri Teikari, PhD
What personality traits do creative people share? Is creativity skill like any other? Is creativity suppressed in our world, is creativity misunderstood by "dinosaur companies" stuck with their legacy systems? Are "creatives" actually that creative in the end? Can fashion design exist in some romantic old school silo where no tech understanding is needed?
Alternative download link:
https://www.dropbox.com/s/ghiyeo3nyrtutzt/RCA_creativity.pdf?dl=0
Novel deep learning-powered diagnostics hardware for assessing retinal health.
The impact of deep learning and artificial intelligence for the design practice itself is covered better in https://algorithms.design/ and the focus of this presentation is in the visual function diagnostics.
What does the future look like for the high-street optician's (e.g. Specsavers, Boots) vision exam beyond simple refraction correction, and how might AR glasses eventually allow the design of "smarter" everyday eyewear for health monitoring as well?
Talk given for “Future of Eyecare: How we see and how we want to be seen” organized by Flora McLean.
Royal College of Art - London UK
Outlining the common challenges encountered when structuring clinical and research datasets for deep learning training.
Typically the datasets are so unstructured that deep learning practitioners cannot analyze them as-is, and the cleaning and data wrangling end up taking most of the time; this could have been planned properly even before the clinical data acquisition.
One could argue that especially for medical data, the annotated data is the new gold, not just the Big Data scattered all over the place. In practice this translates to efforts to design as intelligent as possible data labelling pipelines for efficient use of expert clinician annotation work.
Alternative download link:
https://www.dropbox.com/s/bbgc21yc86h0t14/Efficient_Ocular_Data_Labelling.pdf?dl=0
Interactive business intelligence visualizations with R Shiny and beyond with scalable big data architectures. Going beyond MS Excel and other non-scalable proprietary solutions.
Practical steps for non-machine learners on how to prepare your medical image dataset for deep learning modelling.
Here we use a fundus image dataset as an example that might have controls (healthy eyes) and glaucomatous fundus images with three different severities. In glaucoma, the optic disc is of a special interest so we want to annotate that from the images using a bounding box to help the deep learning training.
Beyond Broken Stick Modeling: R Tutorial for interpretable multivariate analysis
1. Petteri Teikari
MSc Electrical Engineering
PhD, Neuroscience
http://petteri-teikari.com/
Singapore Eye Research Institute (SERI)
Visual Neurosciences group
version Tue 28 August 2018
Beyond BROKEN STICK: “R Tutorial” for interpretable multivariate analysis with t-SNE and Random Forests, etc.
http://doi.org/10.1098/rsif.2014.0672
https://youtu.be/nS1X5OEulDY
https://doi.org/10.1016/j.jbi.2017.10.006
3. Broken Stick Model
Bruch's membrane opening–minimum rim width and visual field loss in glaucoma: a broken stick analysis
https://dx.doi.org/10.18240%2Fijo.2018.05.19
Measurement of macular structure-function relationships using spectral domain-optical coherence tomography (SD-OCT) and pattern electroretinograms (PERG).
http://dx.plos.org/10.1371/journal.pone.0178004
Figure annotations: the “normal” region; eyes that still appear normal (“false negatives”); “glaucomatous” patients (“true positives”); the point where the “stick breaks”.
Scatter plots showing relationship of central visual field sensitivities (A, B, C) or visual field mean deviation (D, E, F) with ganglion cell/inner plexiform layer (GCIPL) thickness.
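The broken-stick idea above can be sketched in base R by profiling a grid of candidate breakpoints, fitting a hinge model at each, and keeping the one with the lowest residual sum of squares. The data and variable names below are simulated and illustrative, not from the cited papers:

```r
# Broken-stick ("hockey stick") fit by profiling the breakpoint over a grid.
# Simulated structure-function data: sensitivity is flat above the breakpoint
# and declines linearly below it, as in the glaucoma examples.
set.seed(1)
gcipl <- runif(200, 40, 100)                      # GCIPL thickness (um), simulated
sens  <- 30 + 0.6 * pmin(gcipl - 70, 0) + rnorm(200, sd = 1.5)

fit_at <- function(bp) lm(sens ~ pmin(gcipl - bp, 0))  # hinge at candidate bp
grid   <- seq(45, 95, by = 0.5)
rss    <- sapply(grid, function(bp) sum(resid(fit_at(bp))^2))
bp_hat <- grid[which.min(rss)]                    # the point where the "stick breaks"
```

Dedicated packages such as `segmented` also estimate the breakpoint with a standard error, but the grid profile makes the mechanics explicit.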
6. You also want to know whether a given measure for a given patient is reliable
https://www.eyeworld.org/don-t-be-fooled-artifacts-retinal-nerve-fiber-layer-oct
Quality assessment for spectral domain optical coherence tomography (OCT) images
https://arxiv.org/abs/1703.04977
If the image quality is bad, the derived OCT measurements have increased uncertainty as well.
7. Artifacts might tell you something about the disease as well
Prevalences of segmentation errors and motion artifacts in OCT-angiography differ among retinal diseases
J. L. Lauermann, A. K. Woetzel, M. Treder, M. Alnawaiseh, C. R. Clemens, N. Eter, Florian Alten
Graefe's Archive for Clinical and Experimental Ophthalmology (07 July 2018)
https://doi.org/10.1007/s00417-018-4053-2
Spectral domain OCT-A device (Optovue Angiovue) showing different degrees of motion artifacts. a Motion artifact score (MAS). b, c Manifestation of different motion artifacts, including strong quilting, partly with incipient expression of black lining (white asterisk), stretching (white arrows), and displacements (white circles) in different parts of the image.
In the future, deep learning software applications might not only be able to distinguish between different retinal diseases but also to detect specific artifacts and to warn the user in case of insufficient image quality. Today, multimodal imaging leads to an overwhelmingly large amount of image information that has to be reviewed by ophthalmologists in daily clinical routine. Thus, software assistance in image grading appears mandatory to manage the growing amount of image data and to avoid useless image data of insufficient quality. In the future, segmentation will move forward through redefinitions of segmentation boundaries and refinements of algorithm strategies in pathologic maculae [de Sisternes et al. 2017]. In conclusion, OCT-A image quality including motion artifacts and segmentation errors must be assessed prior to a detailed qualitative or quantitative analysis to warrant meaningful and reliable results.
8. Myopia and RNFL thickness and image quality?
Qiu et al. 2018:
“This study aimed to determine the influence of the optic disc-fovea distance (DFD) on macular thickness in myopic eyes. ... Our findings indicate that eyes with a greater DFD have a lower macular thickness.”
“It has been reported that the image quality of OCT scans affects the observed retinal layer thicknesses [Huang et al. 2011; Darma et al. 2015; Jansonius et al. 2016], and image quality decreases with an increase in myopia [Lee et al. 2018].”
Zha et al. 2017: “In conclusion, myopia did have special influence on RNFL thickness, which was not related to sex or age.”
With the nice “diurnal profile”:
https://doi.org/10.3341/jkos.2009.50.12.1840
9. Evaluation of a Myopic Normative Database for Analysis of Retinal Nerve Fiber Layer Thickness
http://doi.org/10.1001/jamaophthalmol.2016.2343
10. Clinical modelling is often not very linear, though
High-level motivation to go beyond the Broken Stick
11. Multivariate Examples #1
Dynamic Modeling of Multivariate Latent Processes and Their Temporal Relationships: Application to Alzheimer's Disease
Bachirou O. Taddé, Hélène Jacqmin-Gadda, Jean-François Dartigues, Daniel Commenges, Cécile Proust-Lima | INSERM, UMR 1219, Univ. Bordeaux
https://arxiv.org/abs/1806.03659
Alzheimer's disease gradually affects several components including the cerebral dimension with brain atrophies, the cognitive dimension with a decline in various functions and the functional dimension with impairment in the daily living activities. Understanding how such dimensions interconnect is crucial for AD research. However it requires to simultaneously capture the dynamic and multidimensional aspects, and to explore temporal relationships between dimensions. We propose an original dynamic model that accounts for all these features. The model defines dimensions as latent processes and combines a multivariate linear mixed model and a system of difference equations to model trajectories and temporal relationships between latent processes in finely discrete time.
We demonstrate in a simulation study that this dynamic model in discrete time benefits the same causal interpretation of temporal relationships as mechanistic models defined in continuous time.
12. Nonlinear Multivariate Examples #1
Multivariate joint frailty model for the analysis of nonlinear tumor kinetics and dynamic predictions of death
Statistics in Medicine, Volume 37, Issue 13, 15 June 2018, Pages 2148-2161
Agnieszka Król et al. (2018)
https://doi.org/10.1002/sim.7640
Usually, a biomarker is analyzed with a linear mixed model; a more flexible trajectory can be obtained using different forms of the parametric approach. The longitudinal trajectory of a biomarker can also be approximated using splines, e.g., B-splines. More flexible and sophisticated analyses are provided by approaches that assume mechanistic models for biomarker dynamics. These models could provide more accurate modeling of the biological process and account at the same time for the heterogeneity in the data and prognostic factors.
Here, we propose a multivariate joint frailty model for longitudinal data represented by a solution to ordinary differential equations (ODE), recurrent events, and a terminal event. We perform a simulation study in which we compare this model to a joint model with a 2-phase linear mixed-effects model for the sum of the longest diameters (SLD) and to a joint model applying B-splines for the SLD trajectory in terms of goodness of fit and predictive accuracy.
All the models were estimated using the extensions of the R package frailtypack (https://www.jstatsoft.org/article/view/v081i03), the mechanistic joint frailty model using the analytical solution to the ODE.
Estimating nonlinear effects in the presence of cure fraction using a semi-parametric regression model
Computational Statistics, June 2018, Volume 33, Issue 2, pp 709–730
Thiago G. Ramires et al. (2017)
https://doi.org/10.1007/s00180-017-0781-8
The proposed model is based on the generalized additive models (GAMs) for location, scale and shape, for which any or all parameters of the distribution are parametric linear and/or nonparametric smooth functions of explanatory variables. The new model is used to fit the nonlinear behavior between explanatory variables and cure rate. The biases of the cure rate parameter estimates caused by not incorporating such nonlinear effects in the model are investigated using Monte Carlo simulations. We discuss diagnostic measures and methods to select additive terms and their computational implementation.
Code is implemented in the GAMLSS package in R.
13. Nonlinear Multivariate Examples #2
Data-adaptive additive modeling
Statistics in Medicine, https://doi.org/10.1002/sim.7859
Ashley Petersen and Daniela Witten (2018)
For instance, the sparse additive model makes it possible to adaptively determine which features should be included in the fitted model, the sparse partially linear additive model allows each feature in the fitted model to take either a linear or a nonlinear functional form, and the recent fused lasso additive model and additive trend filtering proposals allow the knots in each nonlinear function fit to be selected from the data.
In this paper, we combine the strengths of each of these recent proposals into a single proposal that uses the data to determine which features to include in the model, whether to model each feature linearly or nonlinearly, and what form to use for the nonlinear functions.
Modeling decisions required when fitting an additive model of the form in Equation 1. We seek to develop an estimator that can make all three decisions in a data-adaptive way.
14. Nonlinear Multivariate Examples #3
Estimating individualized optimal combination therapies through outcome weighted deep learning algorithms
Statistics in Medicine, https://doi.org/10.1002/sim.7902
https://tensorflow.rstudio.com/ (Keras / TensorFlow in R)
Muxuan Liang, Ting Ye, Haoda Fu (2018)
With the advancement in drug development, multiple treatments are available for a single disease. Patients can often benefit from taking multiple treatments simultaneously. For example, patients in Clinical Practice Research Datalink with chronic diseases such as type 2 diabetes can receive multiple treatments simultaneously. Therefore, it is important to estimate from which combination therapy patients can benefit the most. However, recommending the best treatment combination is not a single-label but a multilabel classification problem. In this paper, we propose a novel outcome weighted deep learning algorithm to estimate individualized optimal combination therapy.
The Fisher consistency of the proposed loss function under certain conditions is also provided. In addition, we extend our method to a family of loss functions, which allows adaptive changes based on treatment interactions. We demonstrate the performance of our methods through simulations and real data analysis.
15. Nonlinear Multivariate Examples #4
Personalized Risk Prediction in Clinical Oncology Research: Applications and Practical Issues Using Survival Trees and Random Forests
Chen Hu & Jon Arni Steingrimsson
Journal of Biopharmaceutical Statistics, Volume 28, 2018, Issue 2: Special Issue: Precision Medicine in Cancer Research
https://doi.org/10.1080/10543406.2017.1377730
Risk prediction models used in clinical oncology commonly use both traditional demographic and tumor pathological factors as well as high-dimensional genetic markers and treatment parameters from multimodality treatments. In this article, we describe the most commonly used extensions of the Classification and Regression Tree (CART) and random forest algorithms to right-censored outcomes.
We focus on how they differ from the methods for noncensored outcomes, and how the different splitting rules and methods for cost-complexity pruning impact these algorithms. These simulation studies aim to evaluate how sensitive the prediction accuracy is to the underlying model specifications, the choice of tuning parameters, and the degrees of missing covariates.
16. Nonlinear Multivariate Examples #5
Utilization of Low Dimensional Structure to Improve the Performance of Nonparametric Estimation in High Dimensions
eScholarship - UCLA Electronic Theses and Dissertations
Conn, Daniel Joshua
https://escholarship.org/uc/item/9z03w0zk
First, we have developed a variant of random forests, called fuzzy forests. Fuzzy forests reduce the bias observed in random forest variable importance measures by clustering covariates into distinct groups such that the correlation of covariates within a group is high and the correlation between groups is low. Fuzzy forests is expected to work well when the true regression function exhibits an additive structure.
Flow chart of the fuzzy forests algorithm.
18. Random Forest VIMP: the easiest route
https://dinsdalelab.sdsu.edu/metag.stats/code/randomforest.html
library(randomForest)
https://www.biostars.org/p/86981/
Tutorial: Machine Learning For Cancer Classification
library(randomForest)
library(ROCR)
library(genefilter)
library(Hmisc)
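A minimal quick-start in the same spirit, using the built-in iris data as a stand-in for your own clinical feature table (the dataset choice is illustrative):

```r
# Permutation variable importance (VIMP) with the randomForest package.
library(randomForest)

set.seed(42)
rf <- randomForest(Species ~ ., data = iris,
                   importance = TRUE,   # also compute permutation importance
                   ntree = 500)

vimp <- importance(rf, type = 1)   # mean decrease in accuracy per predictor
print(vimp)
varImpPlot(rf)                     # quick visual ranking of the predictors
```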
19. Variable Importance #1
Correlation and variable importance in random forests
Statistics and Computing, May 2017, Volume 27, Issue 3, pp 659–678
https://doi.org/10.1007/s11222-016-9646-1
Baptiste Gregorutti, Bertrand Michel, Philippe Saint-Pierre
Our results motivate the use of the recursive feature elimination (RFE) algorithm for variable selection in this context. This algorithm recursively eliminates the variables using the permutation importance measure as a ranking criterion. Next, various simulation experiments illustrate the efficiency of the RFE algorithm for selecting a small number of variables together with a good prediction error.
In future works, the algorithm could be adapted by combining a non-recursive strategy at the first steps and a recursive strategy at the end of the algorithm.
Boxplots of the initial permutation importance measures. In each group, only the ten variables with the highest importances are displayed.
Boxplots of the initial permutation importance measures. For each group, only the predictive variable (dashed lines) and the two variables with the highest importance in the same group (solid lines) are displayed.
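The RFE loop described above can be sketched as follows; this is a bare-bones illustration with randomForest on iris, not the authors' implementation:

```r
# Recursive feature elimination: repeatedly fit a forest, record the OOB error,
# and drop the variable with the lowest permutation importance.
library(randomForest)

set.seed(42)
X <- iris[, 1:4]
y <- iris$Species
vars <- names(X)
oob  <- numeric(0)

while (length(vars) > 1) {
  rf   <- randomForest(X[, vars, drop = FALSE], y, importance = TRUE)
  oob  <- c(oob, rf$err.rate[rf$ntree, "OOB"])     # OOB error at this step
  vimp <- importance(rf, type = 1)[, 1]            # permutation importance
  vars <- setdiff(vars, names(which.min(vimp)))    # eliminate the weakest variable
}
vars   # last surviving variable; in practice keep the subset with the lowest OOB error
```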
20. Variable Importance #2
Standard errors and confidence intervals for variable importance in random forest regression, classification, and survival
Statistics in Medicine, https://doi.org/10.1002/sim.7803
Hemant Ishwaran and Min Lu (2018)
Random forests are a popular nonparametric tree ensemble procedure with broad applications to data analysis. While its widespread popularity stems from its prediction performance, an equally important feature is that it provides a fully nonparametric measure of variable importance (VIMP).
We propose a subsampling approach that can be used to estimate the variance of VIMP and for constructing confidence intervals. Using extensive simulations, we demonstrate the effectiveness of the subsampling estimator and in particular find that the delete-d jackknife variance estimator, a close cousin, is especially effective under low subsampling rates due to its bias correction properties.
All RF calculations were implemented using the randomForestSRC R package. The package runs in OpenMP parallel processing mode, which allows for parallel processing on user desktops, as well as large-scale computing clusters. The package now includes a dedicated function, “subsample”, which implements the 3 methods studied here.
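The “subsample” workflow the paper describes can be sketched like this; a minimal example on iris, where the dataset and the number of subsamples `B` are illustrative choices:

```r
# VIMP confidence intervals via subsampling with randomForestSRC.
library(randomForestSRC)

set.seed(42)
rf  <- rfsrc(Species ~ ., data = iris, importance = TRUE)
smp <- subsample(rf, B = 25)   # subsampling / delete-d jackknife variance estimates
print(smp)                     # VIMP with standard errors
plot(smp)                      # point estimates with confidence intervals
```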
21. Variable Importance #3
From Hu, C., & Steingrimsson, J. A. (2017):
“The package ggRandomForests (Ehrlinger, 2015) offers nice visual tools for intermediate data objects from randomForestSRC, including permutation VIMPs, minimal depth VIMPs, and various variable dependency plots. The package party provides a unified random forest implementation for categorical, continuous, and survival outcomes, based on conditional inference trees (Hothorn et al., 2006) as the base learners.”
22. Variable Importance #4
Model Class Reliance: Variable Importance Measures for any Machine Learning Model Class, from the “Rashomon” Perspective
Aaron Fisher, Cynthia Rudin, Francesca Dominici (2018)
https://arxiv.org/abs/1801.01489
https://github.com/aaronjfisher/mcr
Variable importance (VI) tools are typically used to examine the inner workings of prediction models. However, many existing VI measures are not comparable across model types, can obscure implicit assumptions about the data generating distribution, or can give seemingly incoherent results when multiple prediction models fit the data well.
In this paper we propose a framework of VI measures for describing how much any model class (e.g. all linear models of dimension p), any model-fitting algorithm (e.g. Ridge regression with fixed regularization parameter), or any individual prediction model (e.g. a single linear model with fixed coefficient vector), relies on covariate(s) of interest.
The building block of our approach, Model Reliance (MR), compares a prediction model's expected loss with that model's expected loss on a pair of observations in which the value of the covariate of interest has been switched. Expanding on MR, we propose Model Class Reliance (MCR) as the upper and lower bounds on the degree to which any well-performing prediction model within a class may rely on a variable of interest, or set of variables of interest.
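The MR building block reduces, in its simplest permutation form, to a ratio of losses; a self-contained base-R sketch on simulated data (all names and the linear model are illustrative, not the authors' code):

```r
# Model Reliance (MR) sketch: expected loss with the covariate of interest
# permuted, divided by the original expected loss. MR far above 1 means the
# model relies heavily on that covariate.
set.seed(42)
n  <- 500
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 2 * x1 + 0.1 * x2 + rnorm(n)
dat <- data.frame(y, x1, x2)

fit <- lm(y ~ x1 + x2, data = dat)
mse <- function(pred) mean((dat$y - pred)^2)

perm    <- dat
perm$x1 <- sample(perm$x1)                       # break x1's association with y
mr_x1   <- mse(predict(fit, newdata = perm)) / mse(predict(fit))
mr_x1                                            # here much larger than 1
```

Model Class Reliance then asks for the range of such ratios over all well-performing models in a class, which is what the authors' mcr code computes.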
23. No shortage, then, of alternative interpretation strategies
Leveraging uncertainty information from deep neural networks for disease detection
https://doi.org/10.1038/s41598-017-17876-z
Uncertainty-Aware Attention for Reliable Interpretation and Prediction
https://arxiv.org/abs/1805.09653
“Prediction with an “I don't know” option: We further evaluate the reliability of our predictive model by allowing it to say I don't know (IDK), where the model can refrain from making a hard decision of yes or no when it is uncertain about its prediction. This ability to defer decision is crucial for predictive tasks in clinical environments, since those deferred patient records could be given a second round examination by human clinicians to ensure safety in its decision. To this end, we measure the uncertainty of each prediction by sampling the variance of the prediction using both MC-dropout and stochastic Gaussian noise over 30 runs, and simply predict the label for the instances with standard deviation larger than some set threshold as IDK.”
25. t-SNE as exploratory first method
Visualising high-dimensional datasets using PCA and t-SNE in Python
Exploring nonlinear feature space dimension reduction and data representation in breast CADx with Laplacian eigenmaps and t-SNE
Visualizing time-dependent data using dynamic t-SNE
Visualization of disease relationships by multiple maps t-SNE regularization based on Nesterov accelerated gradient
Visualization of genetic disease-phenotype similarities by multiple maps t-SNE with Laplacian regularization
Map 6 from regularized multiple maps t-SNE. The results based on regularized mm-tSNE reveal one of the ten maps which contains our selected phenotypes in examples. Each text corresponds to a specific OMIM ID. The size of each text corresponds to its importance weights in the map. The colours of each text indicate which disease categories a phenotype belongs to.
We magnify the neighbourhood details of one point, Melnick-Needles syndrome (MNS, OMIM: 309350), and its two other neighbours, Campomelic dysplasia (CD, OMIM ID: 114290) and Antley-Bixler syndrome (ABS, OMIM ID: 207410).
26. t-SNE Low-Dimensional Visualization of features
Data-driven identification of prognostic tumor subpopulations using spatially mapped t-SNE of mass spectrometry imaging data
https://doi.org/10.1073/pnas.1510227113
27. t-SNE R Implementation
t-SNE (Cited by 5394, in Python, in R)
Now that you have an understanding of what dimensionality reduction is, let's look at how we can use the t-SNE algorithm for reducing dimensions. Following are a few dimensionality reduction algorithms that you can check out:
● PCA (linear)
● t-SNE (non-parametric / nonlinear)
● Sammon mapping (nonlinear)
● Isomap (nonlinear)
● LLE (nonlinear)
● CCA (nonlinear)
● SNE (nonlinear)
● MVU (nonlinear)
It's quite simple, actually: t-SNE, a non-linear dimensionality reduction algorithm, finds patterns in the data by identifying observed clusters based on similarity of data points with multiple features. But it is not a clustering algorithm; it is a dimensionality reduction algorithm. Because it maps the multi-dimensional data to a lower dimensional space, the input features are no longer identifiable, so you cannot make any inference based only on the output of t-SNE. Essentially it is mainly a data exploration and visualization technique.
But t-SNE can be used in the process of classification and clustering by using its output as the input feature for other classification algorithms.
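A minimal Rtsne run on the built-in iris data; the dataset, perplexity and seed are illustrative choices:

```r
# t-SNE in R with the Rtsne package: scale features, remove exact duplicates
# (Rtsne errors on duplicate rows), then embed into 2-D and plot.
library(Rtsne)

set.seed(42)                         # t-SNE is stochastic: fix the seed
keep <- !duplicated(iris[, 1:4])     # iris contains one duplicated row
X    <- scale(as.matrix(iris[keep, 1:4]))

tsne <- Rtsne(X, dims = 2, perplexity = 30, max_iter = 1000)

plot(tsne$Y, col = iris$Species[keep], pch = 19,
     xlab = "t-SNE 1", ylab = "t-SNE 2")
```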
29. t-SNE: fix the random seed for reproducibility
set.seed(42) # Sets seed for reproducibility
https://cran.r-project.org/web/packages/Rtsne/README.html
https://doi.org/10.1016/j.celrep.2015.12.082: “used a fixed random seed to make sure the t-SNE plot would be reproducible (parameter random_state = 254 in the scikit-learn implementation of t-SNE).”
30. t-SNE How to use
Check out the excellent demo:
https://distill.pub/2016/misread-tsne/
31. t-SNE Parameters in Demo https://distill.pub/2016/misread-tsne/
Perplexity
A tuneable parameter, “perplexity,” says (loosely) how to balance attention between local and global aspects of your data. The parameter is, in a sense, a guess about the number of close neighbors each point has. A low perplexity means we care about local scale and focus on the closest other points. High perplexity takes more of a “big picture” approach.
But the story is more nuanced than that. Getting the most from t-SNE may mean analyzing multiple plots with different perplexities.
32. t-SNE Parameters in Demo https://distill.pub/2016/misread-tsne/
Number of iterations
If you see a t-SNE plot with strange “pinched” shapes, chances are the process was stopped too early.
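Both caveats can be checked by scripting the sweep the distill.pub authors recommend (scikit-learn assumed; the iteration count is left at the implementation default here):

```python
from sklearn.datasets import load_iris
from sklearn.manifold import TSNE

X, _ = load_iris(return_X_y=True)

# One embedding per perplexity; inspect every plot, not just one
embeddings = {
    p: TSNE(n_components=2, perplexity=p, random_state=42).fit_transform(X)
    for p in (5, 30, 50)
}
for p, emb in embeddings.items():
    print(p, emb.shape)  # each is (150, 2)
```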
33. t-SNE How about preprocessing/transformation?
Bits from:
https://www.reddit.com/r/MachineLearning/comments/5ygh1q/d_data_preprocessing_tips_for_tsne/
JamesLi2017
t-SNE is sensitive to feature-wise normalization, and no theory says that such normalization will in general improve or degrade results; it fully depends on your data and expectations. If you can make more sense of maps from un-normalized data, then that indicates normalization is not good for your study.
Would you mind elaborating just a bit as to what exactly you mean by ranking pairwise distances?
Compute the pairwise Euclidean distances before normalization and rank the values (let’s call these indices x). Then compute the pairwise Euclidean distances after normalization, and rank them by x (let’s call these values y). If you plot(x, y) you will see that the resulting function is not monotonic, i.e. rank order was not preserved.
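That rank-order check can be sketched with NumPy alone (hypothetical data with two features on very different scales):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two features on very different scales (made-up data)
X = np.column_stack([rng.normal(0, 1, 30), rng.normal(0, 1000, 30)])
Xn = (X - X.mean(axis=0)) / X.std(axis=0)  # feature-wise z-score

def pairwise_dists(A):
    # Upper-triangle (condensed) Euclidean distances between all rows
    d = np.sqrt(((A[:, None, :] - A[None, :, :]) ** 2).sum(-1))
    iu = np.triu_indices(len(A), k=1)
    return d[iu]

def ranks(v):
    return np.argsort(np.argsort(v))

# If normalization preserved rank order, these rank vectors would match
same_order = np.array_equal(ranks(pairwise_dists(X)), ranks(pairwise_dists(Xn)))
print(same_order)  # False: the neighbourhood structure changed
```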
Features might have quite different ranges
Dimensionality reduction (PCA, t-SNE)
- Let’s encode the categorical variables and try again.
Encoding categorical variables: one-hot and beyond
34. Preprocessing in R
https://www.displayr.com/using-t-sne-to-visualize-data-before-prediction/
Because the distributions are distance-based, all the data must be numeric. You should convert categorical variables to numeric ones by binary encoding or a similar method. It is also often useful to normalize the data, so each variable is on the same scale. This avoids variables with a larger numeric range dominating the analysis.
Feature-wise Z-score normalization, for example:
https://www.r-bloggers.com/r-tutorial-series-centering-variables-and-generating-z-scores-with-the-scale-function/
R Tutorial Series: Centering Variables and Generating Z-Scores with the Scale() Function
https://stackoverflow.com/questions/15215457/standardize-data-columns-in-r
COLUMN NORMALIZATION
library(caret)
# Assuming goal class is column 10
preObj <- preProcess(data[, -10], method=c("center", "scale"))
newData <- predict(preObj, data[, -10])
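A hedged Python counterpart of the caret snippet above, using scikit-learn’s StandardScaler on made-up data (the last column stands in for the goal class in column 10):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
data = rng.normal(loc=5.0, scale=3.0, size=(100, 10))  # last column = goal class

# Center and scale every predictor column, like preProcess(..., c("center", "scale"))
X = data[:, :-1]                       # drop the goal-class column
new_data = StandardScaler().fit_transform(X)

print(np.allclose(new_data.mean(axis=0), 0.0))  # True: columns centered
print(np.allclose(new_data.std(axis=0), 1.0))   # True: columns scaled
```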
What is the purpose of row normalization?
The main point is that normalizing rows can be detrimental to any subsequent analysis, such as nearest-neighbor or k-means. For simplicity, I will keep the answer specific to mean-centering the rows.
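A tiny made-up illustration of why row mean-centering can hurt: two clearly different samples collapse onto the same point, so a later nearest-neighbor or k-means step cannot tell them apart:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])        # sample with small values
b = np.array([101.0, 102.0, 103.0])  # a very different sample, offset by +100

# Mean-center each row (sample) independently
a_c = a - a.mean()
b_c = b - b.mean()

print(np.array_equal(a_c, b_c))  # True: both rows became [-1, 0, 1]
```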
36. t-SNE use as input for a classifier
t-SNE (Cited by 5394, in Python, in R)
Experimental results showed that the proposed new algorithm applied to facial recognition achieved better performance than traditional algorithms such as PCA, LDA, LLE and SNE. [Yi et al. 2013]
The flowchart for implementing such a combination on the data could be as follows:
Preprocessing (transform ordinal to numerical? normalization? whitening for t-SNE?) → t-SNE → classification algorithm

            PCA     LDA     LLE     SNE     t-SNE
SVM         73.5%   74.3%   84.7%   89.6%   90.3%
AdaboostM2  75.4%   75.9%   87.7%   90.6%   94.5%
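A sketch of that t-SNE → classifier combination on toy data (scikit-learn assumed, with an SVM standing in for the classifiers in the table). Note that t-SNE has no out-of-sample transform, so the embedding is computed once on all the data and then split, which limits this to exploratory use:

```python
from sklearn.datasets import load_iris
from sklearn.manifold import TSNE
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Step 1: reduce the features with t-SNE (whole dataset at once)
emb = TSNE(n_components=2, random_state=42).fit_transform(X)

# Step 2: feed the embedding to a classifier, here an SVM
emb_tr, emb_te, y_tr, y_te = train_test_split(emb, y, random_state=0)
acc = SVC().fit(emb_tr, y_tr).score(emb_te, y_te)
print(acc > 0.8)
```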
adam2: Implementation of AdaBoost.M2 (in ebmc: Ensemble-Based Methods for Class Imbalance Problem)
A survey of several of the Adaboost variants can be found in the paper “Survey on Boosting Algorithms for Supervised and Semi-supervised Learning,” Artur Ferreira.
pat: “The other variants are covered in the paper, but less frequently mentioned in common literature. As I understand it, Gentle Adaboost produces a more stable ensemble model. The AdaBoost.M1 and AdaBoost.M2 models are extensions to multi-class classifications (with M2 overcoming a restriction on the maximum error weights of classifiers from M1).”
“Experiments with a New Boosting Algorithm,” Yoav Freund and Robert E. Schapire.
37. t-SNE tweaks for classification
Set the output dimensionality to 3:
might be better for classification?
Worse for visualization
DATA ANALYSIS RESOURCES
MACHINE LEARNING
The Comprehensive Guide for Feature Engineering
Posted August 28, 2016, by Piush Vaish
More advanced feature selection algorithms may search subsets of features by trial and error, creating and evaluating models automatically in pursuit of the objectively most predictive sub-group of features. Regularization methods like LASSO and ridge regression may also be considered algorithms with feature selection baked in, as they actively seek to remove or discount the contribution of features as part of the model building process.
38. Feature Selection for classification
Maybe your visual field measurements are just very noisy and do not correlate with anything?
Feature Selection with the Caret R Package
EFS: an ensemble feature selection tool implemented as R-package
Easy feature selection for beginners in R
Venn diagram for the results obtained by different Feature Selection methods (from the R library VennDiagram)
https://doi.org/10.1038/srep19256
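For readers working in Python rather than R, a univariate filter is one hedged analogue of these feature-selection tools (scikit-learn’s SelectKBest; iris stands in for the visual-field features):

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)

# Keep the 2 features with the strongest ANOVA F-score against the class
selector = SelectKBest(score_func=f_classif, k=2).fit(X, y)
X_sel = selector.transform(X)

print(X_sel.shape)             # (150, 2)
print(selector.get_support())  # boolean mask over the 4 original features
```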
39. Classifiers you can try some other multiclass classifiers
RPubs - Intro to Machine Learning for Classification: Random Forests
Random Forest (RF) classification was performed in the R environment using the randomForest package.
https://doi.org/10.1038/srep31479
A Simple XGBoost Tutorial Using the Iris Dataset
KDnuggets
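A minimal multiclass sketch mirroring the randomForest pointer above, on the same iris data the XGBoost tutorial uses (scikit-learn’s RandomForestClassifier; xgboost itself is not assumed here):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 100 trees; multiclass handling is built in
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = rf.score(X_te, y_te)
print(acc > 0.9)  # held-out accuracy on the three iris classes
```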
43. Outlier Forests identify outlier patients
An anomaly detection approach for the identification of DME patients using spectral domain optical coherence tomography images
Désiré Sidibé et al. (2017)
Computer Methods and Programs in Biomedicine, Volume 139, February 2017, Pages 109-117
https://doi.org/10.1016/j.cmpb.2016.11.001
Isolation-based anomaly detection using nearest-neighbour ensembles
Tharindu R. Bandaragoda, Kai Ming Ting, David Albrecht, Fei Tony Liu, Ye Zhu, Jonathan R. Wells
Computational Intelligence (2018)
https://doi.org/10.1111/coin.12156 ← Isolation forest
Outlier detection for patient monitoring and alerting
Milos Hauskrecht, Iyad Batal, Michal Valko, Shyam Visweswaran, Gregory F. Cooper, Gilles Clermont
Journal of Biomedical Informatics, Volume 46, Issue 1, February 2013, Pages 47-55
https://doi.org/10.1016/j.jbi.2012.08.004
The investigation of new, more suitable feature sets that characterize complex time-series data may lead to further improvements and better coverage of patient management actions with such models.
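In the spirit of these papers, an isolation-forest sketch that flags synthetic outlier “patients” (scikit-learn’s IsolationForest on made-up data):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 4))   # bulk of the cohort
outliers = rng.normal(8.0, 1.0, size=(5, 4))   # a few far-away "patients"
X = np.vstack([normal, outliers])

# fit_predict labels inliers as +1 and outliers as -1
labels = IsolationForest(contamination=0.05, random_state=0).fit_predict(X)
print((labels[-5:] == -1).all())  # the injected outliers are flagged
```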
44. Outlier Forests identify outlier patients 2#
A parameter-based growing ensemble of self-organizing maps for outlier detection in healthcare
Samir Elmougy, M. Shamim Hossain, Ahmed S. Tolba, Mohammed F. Alhamid, Ghulam Muhammad
Cluster Computing (2017)
https://doi.org/10.1007/s10586-017-1327-0
46. Clustering electronic phenotyping 1#
Semi-supervised learning of the electronic
health record for phenotype stratification
Brett K. Beaulieu-Jones, Casey S. Greene, the Pooled Resource Open-Access ALS Clinical Trials Consortium
Journal of Biomedical Informatics, Volume 64, December 2016, Pages 168-178
https://doi.org/10.1016/j.jbi.2016.10.007
Patient interactions with health care providers result in entries to electronic health records (EHRs). EHRs were built for clinical and billing purposes but contain many data points about an individual. Mining these records provides opportunities to extract electronic phenotypes, which can be paired with genetic data to identify genes underlying common human diseases. This task remains challenging: high quality phenotyping is costly and requires physician review; many fields in the records are sparsely filled; and our definitions of diseases are continuing to improve over time.
Future work will focus on developing tools to examine and interpret constructed phenotypes (hidden nodes) and clusters. In addition, we will develop a framework for evaluating the significance of constructed clusters for genotype-to-phenotype association. We anticipate high weights indicate important contributors to node construction, revealing relevant combinations of input features. Finally we will construct a scheme for determining optimal hyperparameter (i.e. hidden node count) selection.
Case vs. Control clustering via principal components analysis
and t-distributed stochastic neighbor embedding
47. Clustering electronic phenotyping 2#
Flexible, cluster-based analysis of
the electronic medical record of
sepsis with composite mixture
models
Michael B. Mayhew, Brenden K. Petersen, Ana Paula Sales, John D. Greene, Vincent X. Liu, Todd S. Wasson
Journal of Biomedical Informatics, Volume 78, February 2018, Pages 33-42
https://doi.org/10.1016/j.jbi.2017.11.015
In the case of sepsis, a debilitating, dysregulated host response to infection, extracting subtle, uncataloged clinical phenotypes from the EMR with statistical machine learning methods has the potential to impact patient diagnosis and treatment early in the course of their hospitalization.
Here, we describe an unsupervised, probabilistic framework called a composite mixture model that can simultaneously accommodate the wide variety of observations frequently observed in EMR datasets, characterize heterogeneous clinical populations, and handle missing observations. We demonstrate the efficacy of our approach on a large-scale sepsis cohort, developing novel techniques built on our model-based clusters to track patient mortality risk over time and identify physiological trends and distinct subgroups of the dataset associated with elevated risk of mortality during hospitalization.
Estimated differences between cluster-specific and population values of each feature. The dots represent the estimated difference while whiskers represent the 95% confidence interval as computed by Wilcoxon rank-sum tests. The lack of an overlap between these intervals and a difference of 0 (gray line) reflects a significant difference in cluster-specific values of the feature as compared to the overall population.
49. Visual Fields Deep-learnified already 1#
2846 — B0449 A deep-learning based automatic glaucoma identification
Serife Seda S. Kucur, M. Abegg, S. Wolf, R. Sznitman
1 ARTORG Center, University of Bern, Bern, Switzerland;
2 Department of Ophthalmology, Inselspital Bern, Bern, Switzerland
ARVO 2017 poster for deep learning in visual field
assessment
Şerife Seda Kucur Ergünay
Designing machine learning algorithms to improve the acquisition and diagnosis tools for Glaucoma management. The following are included within the PhD study:
- Designing new perimetry strategies for accurate visual field acquisition, leveraging tools from reinforcement learning, sparse approximation, graphical models, auto-encoders
- Early Glaucoma detection from visual fields using a deep learning approach
- Predicting visual fields / Glaucoma progression from patient history via LSTMs
Topic: lstm-neural-networks · GitHub
umbertogriffo / Predictive-Maintenance-using-LSTM
Example of Multiple Multivariate Time Series Prediction with LSTM
Recurrent Neural Networks in Python with Keras.
kinect59/Spatio-Temporal-LSTM - GitHub
Spatio-Temporal LSTM with Trust Gates for 3D Human Action
Recognition. – J Liu - 2016 - Cited by 179
blackeagle01 / Abnormal_Event_Detection
Abnormal Event Detection in Videos using SpatioTemporal AutoEncoder
Structural-RNN: Deep Learning on Spatio-Temporal Graphs
– Cited by 126
Videos as Space-Time Region Graphs
X Wang, A Gupta - arXiv preprint arXiv:1806.01810, 2018. “How do humans recognize the action ‘opening a book’? We argue that there are two important cues: modeling temporal shape dynamics and modeling functional relationships between humans and objects. In this paper, we propose to represent videos as space-time …”
50. Visual Fields Deep-learnified already 2#
Forecasting Future Humphrey Visual
Fields Using Deep Learning
Joanne C. Wen, Cecilia S. Lee, Pearse A. Keane, Sa Xiao, Yue Wu, Ariel Rokem, Philip P. Chen, Aaron Y. Lee
(Submitted on 2 Apr 2018)
https://arxiv.org/abs/1804.04543
All datapoints from consecutive 24-2 HVFs from 1998 to 2018 were extracted from a University of Washington database. More than 1.7 million perimetry points were extracted to the hundredth decibel from 32,443 24-2 HVFs.
Clinical dataset combination selection was done by taking the best performing model in the previous step and testing every possible combination of clinical predictor variables. Categorical variables such as eye and gender were appended in a one-hot vector format to the input tensor, and continuous variables were encoded as a single additional tensor face with every cell encoded as the continuous value.
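The encoding described above can be sketched with NumPy (hypothetical eye/gender codes and ages; not the authors’ actual pipeline):

```python
import numpy as np

# Hypothetical categorical clinical variables for 4 patients
eye = np.array([0, 1, 1, 0])     # e.g. 0 = right eye, 1 = left eye
gender = np.array([1, 0, 1, 1])

def one_hot(codes, n_classes):
    # Row i gets a 1 in column codes[i], zeros elsewhere
    return np.eye(n_classes)[codes]

# Continuous variable kept as a single raw-valued column
age = np.array([[63.0], [71.0], [55.0], [68.0]])

# Append the one-hot vectors to the continuous input
features = np.hstack([age, one_hot(eye, 2), one_hot(gender, 2)])
print(features.shape)  # (4, 5)
```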
51. Clinical Variables Combine with Images 1#
Glaucoma diagnosis based on both hidden features and domain knowledge through deep …
Non-image features from case report
Features such as age, intraocular pressure, eyesight and symptoms are also extracted. We regard them as non-image features for simplicity. For age, intraocular pressure and eyesight features, we just use the raw numerical values from case reports.
52. Clinical Variables Combine with Images 2#
An Ophthalmology Clinical Decision Support System Based on Clinical Annotations, Ontologies and Images (23 July 2018)
José N. Galveia; Antonio Travassos; Luis A. da Silva Cruz
https://doi.org/10.1109/CBMS.2018.00024
We explore a multimodal electronic health record dataset (n = 2348 cases and n = 2348 controls) and propose a new classifier model based on Random Forest classifiers for recommendation of an ophthalmic procedure (intravitreal injection of bevacizumab). The dataset comprises structured demographic data, unstructured textual annotations, and clinical images (optical coherence tomography, slit lamp photography and scanning laser ophthalmoscopy).
54. Deep Forecast Spatiotemporal Forecasting
Deep Forecast:
Deep Learning-based Spatio-
Temporal Forecasting
Amir Ghaderi, Borhan M. Sanandaji, Faezeh Ghaderi
(2017)
https://arxiv.org/pdf/1707.08110.pdf
https://github.com/amirstar/Deep-Forecast
Deep-Forecast/multiLSTM.py
55. Deep Forecast Spatiotemporal Forecasting
Deep Learning for Spatio-Temporal
Modeling: Dynamic Traffic Flows
and High Frequency Trading
Matthew F. Dixon, Nicholas G. Polson, Vadim O. Sokolov
(revised 7 May 2018)
https://arxiv.org/abs/1705.09851