Innovative Payloads for Small Unmanned Aerial System-Based Personal Remote Sensing and Applications - Austin Jensen
This document describes Austin Jensen's doctoral dissertation, "Innovative Payloads for Small Unmanned Aerial System-Based Personal Remote Sensing and Applications". The dissertation focuses on using small unmanned aerial systems (UAS) as remote sensing platforms to collect aerial imagery and track radio-tagged fish. It presents novel methods for calibrating imagery from commercial cameras on UAS and for estimating the location of radio-tagged fish using multiple UAS. The calibration methods allow imagery to be orthorectified and used to deliver actionable information to end users, while simulations evaluate the fish-tracking methods and predict their real-world performance. The dissertation advances UAS-based remote sensing for applications such as precision agriculture, vegetation mapping, and fish tracking.
This thesis examines the sound reception mechanism of Cuvier's beaked whales through finite element analysis. The thesis:
1) Develops a finite element model of the whale's tympanoperiotic complex to simulate sound reception.
2) Applies boundary conditions representing bone conduction via the ossicular chain and pressure loading through fat bodies.
3) Varies mesh discretization, loading parameters, and damping to evaluate their effects on the structure's vibration response.
The results provide insight into the hearing capabilities of Cuvier's beaked whales and how their auditory system compares to other marine mammals.
The document is a thesis submitted by Aaron Croasmun to the Graduate School of the Pennsylvania State University in partial fulfillment of the requirements for a Master of Science degree in Computer Science. The thesis proposes a novel and efficient method for skeletonization of blood vessel networks in medical images. Existing skeletonization methods often require human intervention, prior knowledge of vessel boundaries, or post-processing to extract morphological parameters from the skeletons. The proposed method aims to automatically detect vessel centerlines in an image and represent them as a graph structure to facilitate measurement of parameters like branch length and branching points. Promising results are shown when applying the method to complex retinal vessel networks.
Using Artificial Neural Networks to Detect Multiple Cancers from a Blood Test - StevenQu1
Research paper drafted during my two-year internship with Oak Ridge National Laboratory, illustrating the potential of artificial intelligence in cancer research.
This document summarizes Fabio Forcolin's 2015 master's thesis which studied how heart rate variability (HRV) relates to driver fatigue. The study analyzed ECG data from around 80 drivers who drove for 90 minutes in the morning, afternoon and night. HRV indices were derived after applying outlier detection and spectral transformation methods to the data. Two outlier detection methods and three spectral transformation methods were compared to evaluate their influence on the HRV indices. Results showed that outlier detection had little influence while spectral transformation methods had a higher impact. HRV indices were also able to distinguish fatigue levels between morning, afternoon and night drives, though the spectral transformation method had some effect on this capability.
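The spectral step described above can be sketched with plain NumPy. This is an illustrative reconstruction, not the thesis's pipeline; the band limits (LF 0.04-0.15 Hz, HF 0.15-0.40 Hz) and the 4 Hz resampling rate follow common HRV conventions and are assumptions here.

```python
import numpy as np

def lf_hf_ratio(rr_ms, fs=4.0):
    """Frequency-domain HRV index: ratio of LF (0.04-0.15 Hz) to HF
    (0.15-0.40 Hz) power. rr_ms holds successive RR intervals in ms;
    the tachogram is resampled to an even grid (fs Hz) before the FFT."""
    t = np.cumsum(rr_ms) / 1000.0                 # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)       # uniform time grid
    rr_even = np.interp(grid, t, rr_ms)           # linear resampling
    rr_even -= rr_even.mean()                     # remove the DC component
    psd = np.abs(np.fft.rfft(rr_even)) ** 2 / len(rr_even)
    freqs = np.fft.rfftfreq(len(rr_even), d=1.0 / fs)
    lf = psd[(freqs >= 0.04) & (freqs < 0.15)].sum()
    hf = psd[(freqs >= 0.15) & (freqs < 0.40)].sum()
    return lf / hf

# Synthetic tachogram: 1000 beats around 800 ms with a 0.25 Hz (HF-band)
# oscillation mimicking respiratory sinus arrhythmia; HF power should dominate.
rng = np.random.default_rng(0)
n = 1000
t_beats = np.cumsum(np.full(n, 0.8))
rr = 800 + 50 * np.sin(2 * np.pi * 0.25 * t_beats) + rng.normal(0, 5, n)
ratio = lf_hf_ratio(rr)
print(ratio)
```

A real analysis would first apply the outlier-detection step the thesis compares, since a single ectopic beat can distort the spectrum far more than the choice of transform.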
This document is a thesis submitted by Tokelo Khalema to the University of the Free State in partial fulfillment of the requirements for a B.Sc. Honors degree in Mathematical Statistics. The thesis compares the Gaussian linear model, two Bayesian Student-t regression models, and the method of least absolute deviations through a Monte Carlo simulation study. The study aims to evaluate how soon and how severely the least squares regression model starts to lose optimality against these robust alternatives under violations of its assumptions. The document includes sections on robust statistical procedures, literature review of the models considered, research methodology, results and applications of the simulation study, and closing remarks.
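The kind of Monte Carlo comparison described can be sketched as follows. The heavy-tailed error model (Student-t with 2 degrees of freedom) and the IRLS solver for least absolute deviations are illustrative choices, not the thesis's exact setup.

```python
import numpy as np

rng = np.random.default_rng(42)

def lad_fit(X, y, iters=30, eps=1e-6):
    """Least absolute deviations via iteratively reweighted least squares:
    weight each residual by 1/|r| so the weighted LS objective approximates
    the L1 objective."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS starting point
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)
        beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
    return beta

# Monte Carlo: y = 1 + 2x + e, with heavy-tailed t(2) errors violating the
# Gaussian assumption under which least squares is optimal.
n, reps, true_slope = 100, 200, 2.0
err_ols, err_lad = [], []
for _ in range(reps):
    x = rng.uniform(-1, 1, n)
    X = np.column_stack([np.ones(n), x])
    y = 1.0 + true_slope * x + rng.standard_t(df=2, size=n)
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    b_lad = lad_fit(X, y)
    err_ols.append((b_ols[1] - true_slope) ** 2)
    err_lad.append((b_lad[1] - true_slope) ** 2)

mse_ols, mse_lad = np.mean(err_ols), np.mean(err_lad)
print(mse_ols, mse_lad)
```

With such heavy tails the slope MSE of the robust fit should come out clearly below that of least squares, which is the qualitative pattern the thesis quantifies.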
This masters project report describes molecular dynamics simulations performed to study liquid crystals and their potential use in biosensors. Over 50 nanoseconds of simulations were carried out using different atomistic force fields to model the liquid crystal 5CB. The force fields were characterized and compared to experimental data on the phase behavior and properties of bulk 5CB. Simulations were also conducted to study 5CB interactions across water interfaces and observe conformational changes in polypeptides to assess the ability to predict protein structure changes relevant for biosensor applications. Issues encountered are discussed. The work aims to further understanding of liquid crystals at the molecular level to aid the development of liquid crystal-based biosensing technologies.
This document discusses various techniques for improving the efficiency of Monte Carlo simulations used in radiation therapy treatment planning and dose calculations. It describes the condensed history technique, which groups many small electron interactions into single simulation steps, allowing efficient simulation of electron transport. The document then outlines several variance reduction techniques and approximate efficiency improving techniques used in simulations of medical linear accelerators and dose calculations in patients. These include range rejection, transport cutoffs, splitting, Russian roulette, and other methods. It concludes by noting the condensed history technique and choice of electron transport algorithm strongly influence simulation speed and accuracy.
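Russian roulette and splitting, two of the variance reduction techniques named above, can be illustrated in a few lines. This toy sketch tracks only statistical weights, not actual particle transport, and the survival probability and splitting factor are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(7)

def russian_roulette(weights, survival_prob=0.25):
    """Kill each particle with probability 1 - p; each survivor's weight is
    divided by p, so the expected total weight is unchanged (unbiased)."""
    survives = rng.random(weights.size) < survival_prob
    return weights[survives] / survival_prob

def split(weights, factor=4):
    """Replace each particle by `factor` copies carrying weight / factor,
    increasing sampling density in important regions without bias."""
    return np.repeat(weights / factor, factor)

w = np.ones(100_000)          # 100k unit-weight particles in an unimportant region
w_rr = russian_roulette(w)    # far fewer particles left to transport...
print(w_rr.size, w_rr.sum())  # ...but total weight stays ~100k in expectation

w_split = split(np.full(10, 8.0))
print(w_split.size, w_split.sum())  # 40 particles, total weight still 80
```

The efficiency gain comes from spending transport time where it matters: roulette thins the population where contributions are unlikely, splitting densifies it where they are likely, and both preserve the expected score.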
Magnetic Targeted Drug Delivery (Master's thesis) - Jeff Leach
This thesis examines methods for guiding magnetic particles through the arterial system using external magnetic fields for targeted drug delivery. It reviews past work in magnetic drug delivery and models hemodynamic forces and magnetic forces on particles. The author designed and built a magnetic guidance system and characterized its field and gradient properties. Magnetic microspheres were also characterized. Experiments were conducted to study microspheres under flow conditions in vitro. The system successfully demonstrated magnetic guidance of microspheres within limiting assumptions of field, gradient, particle properties, and flow conditions.
This document summarizes a master's thesis on optimizing quantum states for phase sensitivity in quantum interference measurements. The thesis investigates the fundamental sensitivity limits of two-mode interferometry in the presence of photon loss. It develops a computer algorithm to find the optimal quantum states for N=2 and 3 photons and compares their performance to the standard quantum limit. While focusing on practical implementation possibilities, it provides a comprehensive discussion of statistical methods and their potential for realizing optimized measurements.
This master's thesis describes the development of an interferometric biosensor. The biosensor uses an optical chip with a waveguide and double slits to create an interferometer based on Young's theory. Light is coupled into the chip and splits into sensing and reference beams that travel parallel paths. Changes in the sensing path due to biomolecular interactions are detected as phase changes in the interference pattern. The author aims to improve the optical system, characterize the chip coupling, develop measurement software, implement signal processing algorithms, and conduct test measurements to evaluate parameters like detection limit and drift. The interferometric biosensor allows label-free and real-time detection of biochemical reactions for applications in pharmaceutical and medical research.
My own Machine Learning project - Breast Cancer Prediction - Gabriele Mineo
This document describes a project to classify breast cancer cell samples as benign or malignant using machine learning models. It analyzes a dataset containing characteristics of cell nuclei images from 569 breast cancer cases. The dataset has 30 variables describing features like radius, texture, and perimeter. The project aims to train models and compare their performance on accuracy, sensitivity and other metrics to identify the best model for cancer prediction. Several supervised learning algorithms will be tested including naive Bayes, logistic regression, random forest, KNN, and neural networks.
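The dataset described (569 cases, 30 nuclei features) matches the Wisconsin diagnostic breast cancer dataset shipped with scikit-learn, so the comparison can be sketched directly. This is an illustrative baseline, not the project's actual code; the model hyperparameters are assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# 569 samples, 30 features, binary benign/malignant labels.
X, y = load_breast_cancer(return_X_y=True)

models = {
    "naive Bayes": GaussianNB(),
    "logistic regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=5000)),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
}

# 5-fold cross-validated accuracy for each candidate model.
scores = {}
for name, model in models.items():
    scores[name] = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {scores[name]:.3f}")
```

A fuller evaluation along the project's lines would also report sensitivity (recall on the malignant class), since false negatives are the costly error in cancer screening.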
This thesis investigates the evolution of a turbulent patch created by an oscillating grid in water and dilute polymer solutions. Particle image velocimetry is used to measure the velocity fields. The addition of polymers is found to reduce the growth and decay rates of the patch compared to water. An algorithm is developed to detect the turbulent/non-turbulent interface from vorticity fields. Analysis of the interface, patch area, kinetic energy and entrainment rate coefficient reveals that polymers create a smaller, smoother patch with less energy transfer across the interface. The results provide insight into how polymers modify turbulent entrainment and mixing, with potential applications where localized mixing control is important.
This thesis investigates methods for integrative analysis of multiple data types. It extends the Joint and Individual Variation Explained (JIVE) method by incorporating a fused lasso penalty. A novel rank selection algorithm is also proposed. The methods are evaluated on simulated data and applied to analyze The Cancer Genome Atlas glioblastoma data to identify shared mutational processes between chromosomes.
This document is a thesis that investigates using higher-order statistics (HOS) based on sequential testing for spectrum sensing in cognitive radio, especially for low signal-to-noise ratio (SNR) applications. It first provides background on cognitive radio and discusses common spectrum sensing techniques. It then introduces higher-order statistics and cumulants, which can help overcome Gaussian noise effects and improve sensing reliability. The thesis proposes using cumulants to formulate a binary hypothesis testing problem and develop a low-complexity sequential probability ratio test (SPRT) for efficiently detecting underutilized spectrum. Simulation results show the proposed HOS detector outperforms energy detectors by more than 10dB in detection probability for low SNR regimes.
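The sequential test at the heart of the proposal can be illustrated with Wald's classical SPRT. For simplicity this sketch tests a Gaussian mean shift rather than the cumulant-based statistic the thesis develops; the signal level and target error rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sprt(samples, mu1=0.5, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test of H0: N(0,1) vs H1: N(mu1,1).

    The log-likelihood ratio is accumulated sample by sample and compared
    against thresholds set by the target false-alarm (alpha) and miss (beta)
    rates. Returns (decision, samples used)."""
    upper = np.log((1 - beta) / alpha)     # accept H1 above this
    lower = np.log(beta / (1 - alpha))     # accept H0 below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += mu1 * x - 0.5 * mu1 ** 2    # per-sample Gaussian log-LR
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

# Under H1 (signal present) the test should stop early and declare the
# band occupied, using far fewer samples than a fixed-size test would.
decision, n_used = sprt(rng.normal(0.5, 1.0, 10_000))
print(decision, n_used)
```

The appeal for spectrum sensing is exactly this early stopping: at low SNR a fixed-sample energy detector needs a long observation window, whereas a sequential test adapts its sample size to the evidence.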
Thesis: A comparison between some generative and discriminative classifiers - Pedro Ernesto Alonso
For classification purposes, this thesis compares naive Bayes, full Bayesian network, artificial neural network, support vector machine, and logistic regression classifiers.
This document describes the development of a phase contrast imaging simulator in GATE (Geant4 Application for Tomographic Emission). The implementation combines Monte Carlo simulation of x-ray attenuation with an analytical model for Fresnel diffraction. The program was written in C++ and validated through simulations comparing results to theoretical values and real x-ray images. Limitations of the current implementation are discussed along with potential solutions.
This thesis examines using a data-driven sample generator model to augment real data for improved classification of schizophrenia patients and healthy controls from structural magnetic resonance images (SMRIs). A three-way ANOVA analysis of SMRIs found significant differences between groups based on diagnosis, age, and gender. Various machine learning classifiers were tested on raw data, reduced data, and augmented data. A neural network trained exclusively on synthetic data produced the best classification results, demonstrating that the generator model can produce realistic training data that improves generalization.
This document is a thesis on statistical approaches to solving the inverse problem in scatterometry. Scatterometry is used to characterize nanostructures by measuring diffraction patterns and reconstructing critical dimensions. The thesis covers:
1) Using maximum likelihood estimation and least squares to reconstruct critical dimensions and estimate measurement variances from simulated and measured data.
2) Investigating the effects of systematic errors like line roughness and multilayer variations on diffraction patterns.
3) Including these systematic errors in the reconstruction model and showing it reduces estimated variances and improves consistency with other measurement methods.
4) Outlining a Bayesian approach that incorporates prior knowledge to further reduce uncertainties in the reconstructed critical dimensions.
This document presents a study on extracting a heartbeat signal from speckle pattern analysis of light scattered off moving red blood cells. The author, F.J. Brull, conducted a thesis project building on light-scattering simulations performed by previous students. Brull analyzed time series of properties such as fractality, correlations, and speckle contrast of simulated speckle patterns to retrieve an artificially introduced periodicity meant to mimic a heartbeat. The analysis was done in both the time and frequency domains using techniques such as the discrete Fourier transform. While the results did not allow direct retrieval of the heartbeat frequency due to limitations of the simulation, Brull concluded that with adjustments to parameters such as camera size and particle density, retrieval may be possible, in line with previous experimental work.
Algorithms for Sparse Signal Recovery in Compressed Sensing - Aqib Ejaz
This thesis examines algorithms for recovering sparse signals from compressed measurements. It reviews fundamental compressed sensing theory and several non-Bayesian greedy algorithms such as orthogonal matching pursuit (OMP) and iterative hard thresholding (IHT). It also develops Bayesian algorithms, such as randomized OMP (RandOMP) and randomized IHT (RandIHT), that incorporate a sparsity-inducing prior. The thesis extends these algorithms to the multichannel case and derives a minimum mean squared error (MMSE) estimator for jointly recovering multiple sparse signals. It then proposes a novel algorithm called RandSOMP that approximates the MMSE estimator. Empirical results show RandSOMP outperforms the other algorithms in applications such as direction-of-arrival estimation and image denoising.
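OMP, the first greedy algorithm named above, fits in a few lines of NumPy. This is a textbook single-channel sketch, not the thesis's multichannel or randomized variants; the problem sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily add the column of A most
    correlated with the current residual, then re-fit y on all selected
    columns by least squares."""
    m, n = A.shape
    support, residual = [], y.copy()
    x = np.zeros(n)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Recover a 5-sparse signal in R^200 from 80 Gaussian random measurements.
m, n, k = 80, 200, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x_true[idx] = rng.choice([-1.0, 1.0], k) * (1.0 + rng.random(k))
y = A @ x_true
x_hat = omp(A, y, k)
recovery_error = float(np.linalg.norm(x_hat - x_true))
print(recovery_error)  # essentially zero in this noiseless setting
```

The Bayesian variants the thesis develops differ mainly in the selection step: instead of deterministically taking the best-correlated atom, they draw it at random with probability tied to the posterior, and average several runs.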
Backtesting Value at Risk and Expected Shortfall with Underlying Fat Tails an... - Stefano Bochicchio
This thesis investigates the effect of volatility clustering and fat tails on the backtesting of Value at Risk (VaR) and Expected Shortfall (ES) using a GARCH(1,1) model and Student's t distribution. The key results are:
1) For the GARCH model, VaR violations were overestimated and not independent due to volatility clustering. ES overestimated the true risk.
2) For the Student's t distribution, VaR and ES estimates passed backtesting and were not affected by fat tails.
3) Further research is recommended to study this topic with a different focus.
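The violation-counting step behind result 1 can be sketched with Kupiec's proportion-of-failures test. The return model below (standardized Student-t with 4 degrees of freedom) and the sample sizes are illustrative assumptions, not the thesis's data.

```python
import numpy as np

rng = np.random.default_rng(11)

def kupiec_pof(n_obs, n_viol, p=0.01):
    """Kupiec's proportion-of-failures likelihood-ratio statistic for a VaR
    backtest. Under a correctly specified model it is asymptotically
    chi-squared with 1 degree of freedom (5% critical value: 3.841)."""
    phat = n_viol / n_obs
    if n_viol == 0:
        return -2 * n_obs * np.log(1 - p)
    return -2 * (n_viol * np.log(p / phat)
                 + (n_obs - n_viol) * np.log((1 - p) / (1 - phat)))

# Fat-tailed daily "returns": standardized Student-t with 4 dof.
n = 25_000
ret = rng.standard_t(4, n) / np.sqrt(2.0)   # t(4) variance is 2 -> unit variance

# A Gaussian 99% VaR ignores the fat tails and is violated too often...
var_normal = -2.326 * ret.std()
viol_normal = int((ret < var_normal).sum())

# ...while a quantile matched to the t distribution should not be.
t_sample = rng.standard_t(4, 1_000_000) / np.sqrt(2.0)
var_t = float(np.quantile(t_sample, 0.01))
viol_t = int((ret < var_t).sum())

print(viol_normal, kupiec_pof(n, viol_normal))  # inflated count, LR rejects
print(viol_t, kupiec_pof(n, viol_t))            # should sit near the expected 250
```

Kupiec's test only checks the violation count; the independence failure the thesis finds under GARCH volatility clustering needs a conditional test (e.g. Christoffersen's) that also looks at whether violations arrive in clusters.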
Evolutionary optimization of bio-inspired controllers for modular soft robots... - GiorgiaNadizar
This document is a thesis on evolutionary optimization of bio-inspired controllers for modular soft robots. It contains two main parts that explore different bio-inspired features for neural network controllers of voxel-based soft robots (VSRs). The first part analyzes the effects of synaptic pruning on evolved neural network controllers. The second part experiments with evolved spiking neural networks (SNNs) as controllers, which are a more biologically plausible model compared to traditional artificial neural networks. The results show that both synaptic pruning and SNN controllers can positively impact the performance and adaptability of VSRs compared to traditional neural network controllers, especially when faced with unforeseen circumstances.
This document summarizes the findings of Task Group 176 regarding the dosimetric effects of external devices used in radiation therapy, such as treatment couches and immobilization devices. It was found that these devices can cause increased skin dose, reduced tumor dose, and alterations to the overall dose distribution, with effects that are often underestimated or ignored. Recommendations are provided to improve measurement and modeling of these devices in treatment planning systems in order to more accurately account for their dosimetric impact.
This thesis examines pursuit formations of autonomous multi-vehicle systems. Pursuit problems from mathematics and physics are used as inspiration. The thesis focuses on systems of vehicles pursuing each other in a cyclic fashion. It is shown that equilibrium formations are generalized regular polygons and the global behavior can be shaped by controller gains. Local stability of equilibrium polygons is examined to determine which formations are asymptotically stable. Experiments with physical vehicles demonstrate the practicality of distributed pursuit control strategies. The last part of the thesis explores how network topology influences symmetry in multi-agent system trajectories.
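The cyclic pursuit dynamics described can be demonstrated with a minimal linear toy model: each planar agent steers directly toward the next one in the ring. This is a simplification of the thesis's setting (which concerns constant-speed vehicle models whose equilibria are generalized regular polygons); with a plain linear gain the formation instead contracts to its centroid.

```python
import numpy as np

# Cyclic pursuit: agent i chases agent i+1 (mod n). Forward-Euler
# integration of pos_i' = pos_{i+1} - pos_i.
n, dt, steps = 5, 0.01, 5000
rng = np.random.default_rng(2)
pos = rng.normal(size=(n, 2))           # random initial positions in the plane
centroid = pos.mean(axis=0)             # invariant of the dynamics

for _ in range(steps):
    pos = pos + dt * (np.roll(pos, -1, axis=0) - pos)   # i chases i+1

# The agents spiral in toward the (fixed) centroid.
spread = float(np.linalg.norm(pos - pos.mean(axis=0), axis=1).max())
print(spread)
```

The contraction follows from the circulant structure: the eigenvalues of the shift-minus-identity matrix all have negative real part except the one associated with the centroid, which is exactly the mode controller gains can reshape to hold a polygon instead of collapsing.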
The analysis of doubly censored survival data - Longhow Lam
This document describes methods for analyzing doubly censored survival data, where the time of infection is interval censored and the time of disease onset or death may be right censored. It applies these methods to data from Amsterdam Cohort Studies on HIV infection. Specifically, it 1) introduces nonparametric models for the infection and incubation time distributions that use maximum likelihood estimation on interval-censored data, 2) applies these methods to data from three cohort studies, estimating the seroconversion and incubation time distributions, and 3) explores extensions including incorporating covariates and using marker data to estimate distributions for prevalently infected individuals.
Classification of squamous cell cervical cytology - karthigailakshmi
This document presents a thesis submitted for the degree of Magister en Ingeniería Biomédica (Master's in Biomedical Engineering). The thesis aims to classify squamous cervical cells using color and texture descriptors defined in the MPEG-7 standard. The author first characterizes the transformation zone of cervical smear images using MPEG-7 descriptors such as color layout, scalable color, and edge histogram. These descriptors are then used as inputs to binary classifiers, achieving a precision of 90% and a sensitivity of 83% for cell classification. Unlike traditional approaches that require cell segmentation, the proposed method is independent of cell shape, making it applicable for pre-screening cervical smear images even in the presence of random noise that could mislead segmentation.
Jeff Leach Master's thesis- Magnetic Targeted Drug DeliveryJeff Leach
This thesis examines methods for guiding magnetic particles through the arterial system using external magnetic fields for targeted drug delivery. It reviews past work in magnetic drug delivery and models hemodynamic forces and magnetic forces on particles. The author designed and built a magnetic guidance system and characterized its field and gradient properties. Magnetic microspheres were also characterized. Experiments were conducted to study microspheres under flow conditions in vitro. The system successfully demonstrated magnetic guidance of microspheres within limiting assumptions of field, gradient, particle properties, and flow conditions.
This document summarizes a master's thesis on optimizing quantum states for phase sensitivity in quantum interference measurements. The thesis investigates the fundamental sensitivity limits of two-mode interferometry in the presence of photon loss. It develops a computer algorithm to find the optimal quantum states for N=2 and 3 photons and compares their performance to the standard quantum limit. While focusing on practical implementation possibilities, it provides a comprehensive discussion of statistical methods and their potential for realizing optimized measurements.
This master's thesis describes the development of an interferometric biosensor. The biosensor uses an optical chip with a waveguide and double slits to create an interferometer based on Young's theory. Light is coupled into the chip and splits into sensing and reference beams that travel parallel paths. Changes in the sensing path due to biomolecular interactions are detected as phase changes in the interference pattern. The author aims to improve the optical system, characterize the chip coupling, develop measurement software, implement signal processing algorithms, and conduct test measurements to evaluate parameters like detection limit and drift. The interferometric biosensor allows label-free and real-time detection of biochemical reactions for applications in pharmaceutical and medical research.
My own Machine Learning project - Breast Cancer PredictionGabriele Mineo
This document describes a project to classify breast cancer cell samples as benign or malignant using machine learning models. It analyzes a dataset containing characteristics of cell nuclei images from 569 breast cancer cases. The dataset has 30 variables describing features like radius, texture, and perimeter. The project aims to train models and compare their performance on accuracy, sensitivity and other metrics to identify the best model for cancer prediction. Several supervised learning algorithms will be tested including naive Bayes, logistic regression, random forest, KNN, and neural networks.
This thesis investigates the evolution of a turbulent patch created by an oscillating grid in water and dilute polymer solutions. Particle image velocimetry is used to measure the velocity fields. The addition of polymers is found to reduce the growth and decay rates of the patch compared to water. An algorithm is developed to detect the turbulent/non-turbulent interface from vorticity fields. Analysis of the interface, patch area, kinetic energy and entrainment rate coefficient reveals that polymers create a smaller, smoother patch with less energy transfer across the interface. The results provide insight into how polymers modify turbulent entrainment and mixing, with potential applications where localized mixing control is important.
This thesis investigates methods for integrative analysis of multiple data types. It extends the Joint and Individual Variation Explained (JIVE) method by incorporating a fused lasso penalty. A novel rank selection algorithm is also proposed. The methods are evaluated on simulated data and applied to analyze The Cancer Genome Atlas glioblastoma data to identify shared mutational processes between chromosomes.
This document is a thesis that investigates using higher-order statistics (HOS) based on sequential testing for spectrum sensing in cognitive radio, especially for low signal-to-noise ratio (SNR) applications. It first provides background on cognitive radio and discusses common spectrum sensing techniques. It then introduces higher-order statistics and cumulants, which can help overcome Gaussian noise effects and improve sensing reliability. The thesis proposes using cumulants to formulate a binary hypothesis testing problem and develop a low-complexity sequential probability ratio test (SPRT) for efficiently detecting underutilized spectrum. Simulation results show the proposed HOS detector outperforms energy detectors by more than 10dB in detection probability for low SNR regimes.
Thesis. A comparison between some generative and discriminative classifiers.Pedro Ernesto Alonso
This thesis comprises Naive Bayes, Full Bayesian Network, Artificial Neural Networks, Support Vector Machines and Logistic Regression. For classification purposes.
This document describes the development of a phase contrast imaging simulator in GATE (Gate Application Toolkit for Emission Tomography). The implementation includes Monte Carlo simulation of x-ray attenuation and an analytical model for Fresnel diffraction. The program was written in C++ and validated through simulations comparing results to theoretical values and real x-ray images. Limitations of the current implementation are discussed along with potential solutions.
This thesis examines using a data-driven sample generator model to augment real data for improved classification of schizophrenia patients and healthy controls from structural magnetic resonance images (SMRIs). A three-way ANOVA analysis of SMRIs found significant differences between groups based on diagnosis, age, and gender. Various machine learning classifiers were tested on raw data, reduced data, and augmented data. A neural network trained exclusively on synthetic data produced the best classification results, demonstrating that the generator model can produce realistic training data that improves generalization.
This document is a thesis on statistical approaches to solving the inverse problem in scatterometry. Scatterometry is used to characterize nanostructures by measuring diffraction patterns and reconstructing critical dimensions. The thesis covers:
1) Using maximum likelihood estimation and least squares to reconstruct critical dimensions and estimate measurement variances from simulated and measured data.
2) Investigating the effects of systematic errors like line roughness and multilayer variations on diffraction patterns.
3) Including these systematic errors in the reconstruction model and showing it reduces estimated variances and improves consistency with other measurement methods.
4) Outlining a Bayesian approach that incorporates prior knowledge to further reduce uncertainties in the reconstructed critical dimensions.
This document presents a study on extracting a heartbeat signal from speckle pattern analysis of light scattering off moving red blood cells. The author F.J. Brull conducted a thesis project using simulations of light scattering performed by previous students. Brull analyzed time series of properties like fractality, correlations, and speckle contrast of simulated speckle patterns to retrieve an artificially introduced periodicity meant to mimic a heartbeat. The analysis was done both in the time and frequency domains using techniques like the discrete Fourier transform. While the results did not allow direct retrieval of the heartbeat frequency due to limitations of the simulation, Brull concluded that with adjustments to parameters like camera size and particle density, retrieval may be possible based on previous experimental
Algorithms for Sparse Signal Recovery in Compressed Sensing (Aqib Ejaz)
This thesis examines algorithms for recovering sparse signals from compressed measurements. It reviews fundamental compressed sensing theory and several non-Bayesian greedy algorithms such as orthogonal matching pursuit (OMP) and iterative hard thresholding (IHT). It also develops Bayesian algorithms like randomized OMP (RandOMP) and randomized IHT (RandIHT) that incorporate a sparsity-inducing prior. The thesis extends these algorithms to the multichannel case and derives a minimum mean squared error (MMSE) estimator for jointly recovering multiple sparse signals. It then proposes a novel algorithm called RandSOMP that approximates the MMSE estimator. Empirical results show RandSOMP outperforms other algorithms in applications like direction of arrival estimation and image
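The greedy recovery idea reviewed in that thesis can be illustrated with a minimal numpy sketch of plain orthogonal matching pursuit. This is a textbook version on assumed toy data, not the randomized RandOMP/RandSOMP variants the thesis develops; the matrix sizes and the test signal are illustrative assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Plain orthogonal matching pursuit: greedily pick k atoms of A,
    refitting a least-squares solution on the selected support."""
    residual = y.copy()
    support = []
    x_s = np.zeros(0)
    for _ in range(k):
        # atom most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit restricted to the chosen support
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

# Toy check: a 2-sparse signal measured through orthonormal columns
# is recovered exactly in two greedy steps.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(6, 6)))
x_true = np.zeros(6)
x_true[1], x_true[4] = 2.0, -1.0
x_hat = omp(Q, Q @ x_true, k=2)
```

With orthonormal columns the correlation step reads off the true coefficients directly, which is why exact recovery is guaranteed in this toy case; for general sensing matrices recovery depends on sparsity and coherence.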
Backtesting Value at Risk and Expected Shortfall with Underlying Fat Tails an... (Stefano Bochicchio)
This thesis investigates the effect of volatility clustering and fat tails on the backtesting of Value at Risk (VaR) and Expected Shortfall (ES) using a GARCH(1,1) model and Student's t distribution. The key results are:
1) For the GARCH model, VaR violations were overestimated and not independent due to volatility clustering. ES overestimated the true risk.
2) For the Student's t distribution, VaR and ES estimates passed backtesting and were not affected by fat tails.
3) Further research is recommended to study this topic with a different focus.
Evolutionary optimization of bio-inspired controllers for modular soft robots... (Giorgia Nadizar)
This document is a thesis on evolutionary optimization of bio-inspired controllers for modular soft robots. It contains two main parts that explore different bio-inspired features for neural network controllers of voxel-based soft robots (VSRs). The first part analyzes the effects of synaptic pruning on evolved neural network controllers. The second part experiments with evolved spiking neural networks (SNNs) as controllers, which are a more biologically plausible model compared to traditional artificial neural networks. The results show that both synaptic pruning and SNN controllers can positively impact the performance and adaptability of VSRs compared to traditional neural network controllers, especially when faced with unforeseen circumstances.
This document summarizes the findings of Task Group 176 regarding the dosimetric effects of external devices used in radiation therapy, such as treatment couches and immobilization devices. It was found that these devices can cause increased skin dose, reduced tumor dose, and alterations to the overall dose distribution, with effects that are often underestimated or ignored. Recommendations are provided to improve measurement and modeling of these devices in treatment planning systems in order to more accurately account for their dosimetric impact.
This thesis examines pursuit formations of autonomous multi-vehicle systems. Pursuit problems from mathematics and physics are used as inspiration. The thesis focuses on systems of vehicles pursuing each other in a cyclic fashion. It is shown that equilibrium formations are generalized regular polygons and the global behavior can be shaped by controller gains. Local stability of equilibrium polygons is examined to determine which formations are asymptotically stable. Experiments with physical vehicles demonstrate the practicality of distributed pursuit control strategies. The last part of the thesis explores how network topology influences symmetry in multi-agent system trajectories.
The analysis of doubly censored survival data (Longhow Lam)
This document describes methods for analyzing doubly censored survival data, where the time of infection is interval censored and the time of disease onset or death may be right censored. It applies these methods to data from Amsterdam Cohort Studies on HIV infection. Specifically, it 1) introduces nonparametric models for the infection and incubation time distributions that use maximum likelihood estimation on interval-censored data, 2) applies these methods to data from three cohort studies, estimating the seroconversion and incubation time distributions, and 3) explores extensions including incorporating covariates and using marker data to estimate distributions for prevalently infected individuals.
Classification of squamous cell cervical cytology (karthigailakshmi)
This document presents a thesis submitted for the degree of Magister en Ingeniería Biomédica (Master's in Biomedical Engineering). The thesis aims to classify squamous cervical cells using color and texture descriptors defined in the MPEG-7 standard. The author first characterizes the transformation zone of cervical smear images using MPEG-7 descriptors like color layout, scalable color, and edge histogram. These descriptors are then used as inputs to binary classifiers, obtaining a precision of 90% and a sensitivity of 83% for cell classification. Unlike traditional approaches requiring cell segmentation, the proposed method is independent of cell shape. The thesis finds this strategy applicable for pre-screening cervical smear images in conditions with random noise factors that could mislead segmentation.
Build the Next Generation of Apps with the Einstein 1 Platform.
Join Philippe Ozil for a workshop session that will guide you through the details of the Einstein 1 platform, the importance of data for building artificial intelligence applications, and the different tools and technologies Salesforce offers to bring you the full benefits of AI.
Generative AI Use cases applications solutions and implementation.pdf (mahaffeycheryld)
Generative AI solutions encompass a range of capabilities from content creation to complex problem-solving across industries. Implementing generative AI involves identifying specific business needs, developing tailored AI models using techniques like GANs and VAEs, and integrating these models into existing workflows. Data quality and continuous model refinement are crucial for effective implementation. Businesses must also consider ethical implications and ensure transparency in AI decision-making. Generative AI's implementation aims to enhance efficiency, creativity, and innovation by leveraging autonomous generation and sophisticated learning algorithms to meet diverse business challenges.
https://www.leewayhertz.com/generative-ai-use-cases-and-applications/
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024 (Sinan KOZAK)
Sinan, from the Delivery Hero mobile infrastructure engineering team, shares a deep dive into performance acceleration through Gradle build cache optimizations. The talk recounts the team's journey in solving complex build-cache problems that affect Gradle builds, using the challenges and solutions encountered along the way to demonstrate what faster builds make possible. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up to numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Height and depth gauge linear metrology.pdf (q30122000)
Height gauges may also be used to measure the height of an object by using the underside of the scriber as the datum. The datum may be permanently fixed, or the height gauge may have provision to adjust the scale. This adjustment is made by sliding the scale vertically along the body of the height gauge with a fine feed screw at the top of the gauge; with the scriber set to the same level as the base, the scale can then be matched to it. The adjustment allows different scribers or probes to be used, and compensates for errors in a damaged or resharpened probe.
Blood finder application project report (1).pdf (Kamal Acharya)
Blood Finder is an emergency-time app that lets a user search for blood banks and registered blood donors around Mumbai. The application also gives users the opportunity to become registered donors by enrolling through a donor request in the app itself; the admin can then register the user after some formalities with the organization are completed. A distinguishing feature of the application is that no registration or sign-in is required to search for blood banks and donors: installing the application on a mobile device is enough.
The purpose of the application is to save the user's time when searching for blood of the needed blood group during an emergency.
Blood Finder is an Android application developed in Java and XML with SQLite database connectivity, and it provides most of the basic functionality required of an emergency-time application. All details of blood banks and blood donors, such as name, number, address, and blood group, are stored in the SQLite database, so the user can retrieve this information directly rather than wasting precious time searching different websites. The application is effective and user friendly.
Open Channel Flow: fluid flow with a free surface (Indrajeet sahu)
Open Channel Flow: This topic focuses on fluid flow with a free surface, such as in rivers, canals, and drainage ditches. Key concepts include the classification of flow types (steady vs. unsteady, uniform vs. non-uniform), hydraulic radius, flow resistance, Manning's equation, critical flow conditions, and energy and momentum principles. It also covers flow measurement techniques, gradually varied flow analysis, and the design of open channels. Understanding these principles is vital for effective water resource management and engineering applications.
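One of the listed concepts, Manning's equation, lends itself to a short worked example. The sketch below is illustrative only: the channel geometry, roughness coefficient n, and bed slope are assumed values, not figures from the document.

```python
def manning_velocity(n, R, S):
    """Mean velocity V (m/s) from Manning's equation in SI units:
    V = (1/n) * R**(2/3) * S**(1/2),
    where n is the roughness coefficient, R the hydraulic radius (m),
    and S the channel bed slope (dimensionless)."""
    return (1.0 / n) * R ** (2.0 / 3.0) * S ** 0.5

# Illustrative rectangular channel: width 3.0 m, flow depth 1.2 m,
# concrete lining (n ~ 0.013), bed slope 0.001.
b, y = 3.0, 1.2
area = b * y                   # flow area (m^2)
wetted_perimeter = b + 2 * y   # wetted perimeter (m)
R = area / wetted_perimeter    # hydraulic radius (m)
V = manning_velocity(0.013, R, 0.001)  # mean velocity (m/s)
Q = V * area                   # discharge (m^3/s)
```

For this assumed channel the mean velocity works out to roughly 1.86 m/s, giving a discharge of about 6.7 m³/s.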
Software Engineering and Project Management - Software Testing + Agile Method... (Prakhyath Rai)
Software Testing: A Strategic Approach to Software Testing, Strategic Issues, Test Strategies for Conventional Software, Test Strategies for Object-Oriented Software, Validation Testing, System Testing, The Art of Debugging.
Agile Methodology: Before Agile – Waterfall, Agile Development.
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac... (Priyanka Kilaniya)
Energy efficiency has been important since the latter part of the last century. The main objective of this survey is to determine the level of energy efficiency knowledge among consumers. Households, showrooms, and sellers in two districts of Bangladesh were surveyed, and the data were used to fit regression equations from which energy efficiency knowledge can be predicted. The data were analyzed against five criteria, with the initial target of identifying factors that help predict a person's energy efficiency knowledge. The survey finds that energy efficiency awareness in the country is very low. Relationships between household energy use behaviors are estimated using a unique dataset of about 40 households and 20 showrooms in Bangladesh's Chapainawabganj and Bagerhat districts. Knowledge of energy consumption and of energy-efficient technology options is found to be associated with household use of energy conservation practices, and household characteristics also influence energy use behavior. Younger household cohorts are more likely to adopt energy-efficient technologies and conservation practices, and to place primary importance on saving energy for environmental reasons. Education also influences attitudes toward energy conservation: both low- and high-education households report that environmental concerns motivate them to save electricity.
Mechanical Engineering on AAI Summer Training Report-003.pdf
Master's Thesis (Andrew Boppart)
A Novel Method for Quantifying Uncertainty in the Unified
Envelope of Constraint in the Passive Knee Joint
By
Andrew M. Boppart
Submitted to the graduate degree program in Bioengineering and the Graduate Faculty of the
University of Kansas in partial fulfillment of the requirements for the degree of Master of
Science
Dr. Lorin Maletsky, Chairperson
Dr. Lisa Friis, Committee Member
Dr. Sara Wilson, Committee Member
Date Defended: 27 November 2018
The Thesis Committee for Andrew M. Boppart
certifies that this is the approved version of the following thesis
A Novel Method for Quantifying Uncertainty in the Unified
Envelope of Constraint in the Passive Knee Joint
Dr. Lorin Maletsky, Chairperson
Date Approved: 30 November 2018
Abstract
The Experimental Joint Biomechanics Research Lab at the University of Kansas created
the unified envelope (UE) of constraint as a means for describing the overall laxity of the passive
knee joint. The UE is currently calculated with the use of a radial basis function (RBF), which
has been shown to provide a useful approximation of the multidimensional relationship between
applied forces and observed kinematics. However, the UE does not provide any information
regarding its certainty in the approximation at any given position, making comparisons between
UEs more difficult. The main objective of this thesis was to create an estimate for the uncertainty
of the UE, specific to each point within the UE. Secondary objectives included reducing the
effect of the investigator on the UE and determining the optimal protocol for loading during
laxity evaluations. To determine if the proposed method was able to meet the objectives, three
different investigators performed manual laxity evaluations on one cadaveric specimen. Data
from these evaluations were sequentially downsampled and used to create many slightly different
UEs. The variance of these UEs at each point was combined with a general measure of variance,
found by calculating the variance of the error when RBFs use the sequentially downsampled data
to approximate known data. This estimate appeared to perform well throughout the envelope,
providing a standard deviation consistent with measurements from previous studies. Results also
showed a decrease in the median absolute difference between investigators of 18.3% in VV°,
15.6% in IE°, and 16.2% in AP position (mm) when compared to the previous method. The
importance of collecting both uniaxial and multiaxial loading trials during the laxity evaluation
was also verified. The following chapters provide further information on methods and results,
along with future applications.
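The procedure summarized above, approximating the force-to-kinematics mapping with an RBF and estimating pointwise uncertainty from sequentially downsampled refits, can be sketched in a few lines of numpy. This is a hedged illustration only: the Gaussian kernel, the shape parameter `eps`, the 1-D test data, and the every-k-th-point split are assumptions for the sketch, not the EJBRL implementation.

```python
import numpy as np

def fit_rbf(X, y, eps=6.0):
    """Solve for Gaussian-RBF weights w so that sum_j w_j * phi(|x - X_j|)
    interpolates y at the training points X (n x d)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = np.exp(-(eps * d) ** 2)
    # tiny ridge term for numerical safety; the kernel matrix is SPD
    return np.linalg.solve(Phi + 1e-9 * np.eye(len(X)), y)

def rbf_predict(X_train, w, X_query, eps=6.0):
    """Evaluate the fitted RBF surface at arbitrary query points."""
    d = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=-1)
    return np.exp(-(eps * d) ** 2) @ w

def sds_uncertainty(X, y, X_query, k=2, eps=6.0):
    """Sequential-downsample (SDS) idea: refit the RBF on every-k-th
    subsets of the data and use the spread of the refit predictions
    as a pointwise uncertainty estimate."""
    preds = []
    for offset in range(k):
        Xs, ys = X[offset::k], y[offset::k]
        w = fit_rbf(Xs, ys, eps)
        preds.append(rbf_predict(Xs, w, X_query, eps))
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)

# Demo on smooth 1-D data: query two points, report mean and spread.
X = np.linspace(0.0, 1.0, 8)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
mean, spread = sds_uncertainty(X, y, np.array([[0.25], [0.75]]), k=2)
```

In the thesis the inputs are multidimensional (flexion angle and applied loads and torques) and the refit spread is combined with a general variance term from the RBF prediction error; the sketch shows only the refit spread.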
Table of Contents
Abstract
List of Tables
List of Figures
Chapter 1: Introduction
Chapter 2: Literature Review
Chapter 3: A Novel Method for Quantifying Uncertainty in the Unified Envelope of Constraint in the Passive Knee Joint
3.1 Abstract
3.2 Introduction
3.3 Methods
3.4 Results
3.4.1 Visualizing the UE in 3D
3.4.2 Uniaxial Loading vs Multiaxial Loading
3.4.3 Comparing investigators and methods
3.4.4 Standard Deviation and Significant Differences
3.5 Discussion
3.6 Tables
3.7 Figures
Chapter 4: Conclusion
References
List of Tables
Table 1: Number of paths and data points by each investigator and loading type
Table 2: Values for the calculated variance from the RBFs, the observed variance from the difference between predicted and measured values, and the difference between the two used to uniformly shift the variance from the RBFs up to match the observed variance
Table 3: Average MAD of SDS-UEs and average percent of the SDS-UE that is significantly different (two-tailed t-test, p<0.05) between investigators, separated by dependent variable and type of loading used to train the RBF
Table 4: Median absolute difference of UEs between investigators at every grid point of the UE when using both the previous method and the SDS method, along with the relative percent decrease in median absolute difference from the previous method to the SDS method
Table 5: Minimum (0th percentile), median (50th percentile), and maximum (100th percentile) values for standard deviation for each investigator and output
List of Figures
Figure 1: Manual laxity evaluation setup [35]
Figure 2: LabVIEW interface providing near real-time feedback for laxity evaluations. Square color changes as more data are collected within each square; black means no data has been collected in that region
Figure 3: RBF flowchart. Experimental data are used to determine the weight of each experimental point, then the weights are used to estimate dependent variables through a generic, evenly spaced grid [26]
Figure 4: Example of predictions made for one experimental point at 30.1° flexion, -1.9 Nm VV torque, -5.6 Nm IE torque, and 6.2 N AP force. All four downsampling cases are included, with the 5 predictions made from every 5th point in magenta, 10 predictions from every 10th point in blue, 20 predictions from every 20th point in cyan, and 40 predictions from every 40th point in green. Included are the four means ± 1 standard deviation of each downsampling case. The red dot is the mean of the four downsampling case means ± 1 standard deviation, and the black dot is the measured value that was being predicted
Figure 5: Power spectral density and cumulative power spectral density for one laxity evaluation trial with VV torque and IE torque applied by Investigator 1. The cumulative power spectral density chart shows cumulative power from 0.9 to 1 and frequency from 0 to 5 Hz to help differentiate between variables. Two other lines are included, one horizontal line at 99% cumulative power and one vertical line at a frequency of 2.5 Hz, to show that over 99% of the power in all signals is contained in frequencies below 2.5 Hz (every 40th point from a 100 Hz signal)
Figure 6: Investigator 1's UE calculated using the SDS method, represented by isosurface renderings at constant load magnitudes scaled by their respective maximums (16 Nm of VV torque and 6 Nm of IE torque). The blue surface is at 100% magnitude and the red surface is at 50%. A: Data between 30° and 50° flexion is omitted for visualization. B: Data with an external load is omitted for visualization. C: Data with a valgus load is omitted for visualization
Figure 7: Investigator 1's SDS-UE trained with only uniaxial loading trials shown in red, and trained with only multiaxial loading trials shown in black. Top row has VV kinematics vs flexion angle at -100% (left), 0% (center), and 100% (right) applied IE torque (-6 Nm, 0 Nm, 6 Nm). The lines in the top row correspond to applied VV torques at -100% (bottom line, -16 Nm), -50%, 0% (center line, 0 Nm), 50%, and 100% (top line, 16 Nm). Bottom row shows IE kinematics vs flexion angle at -100% (left), 0% (center), and 100% (right) applied VV torque (-16 Nm, 0 Nm, 16 Nm). The lines in the bottom row correspond to applied IE torques at -100% (bottom line, -6 Nm), -50%, 0% (center line, 0 Nm), 50%, and 100% (top line, 6 Nm)
Figure 8: Range of all investigators' UEs calculated with the previous method shown in red and the SDS method in blue; areas of overlap are colored purple. Top row has VV kinematics vs flexion angle at -100% (left), 0% (center), and 100% (right) applied IE torque (-6 Nm, 0 Nm, 6 Nm). The lines in the top row correspond to applied VV torques at -100% (bottom line, -16 Nm), -50%, 0% (center line, 0 Nm), 50%, and 100% (top line, 16 Nm). Bottom row shows IE kinematics vs flexion angle at -100% (left), 0% (center), and 100% (right) applied VV torque (-16 Nm, 0 Nm, 16 Nm). The lines in the bottom row correspond to applied IE torques at -100% (bottom line, -6 Nm), -50%, 0% (center line, 0 Nm), 50%, and 100% (top line, 6 Nm)
Figure 9: Average of all investigators' SDS-UEs at 100% load magnitude, colored by the overall standard deviation quartile of all variables and investigators (i.e., the percentile of VV standard deviation, IE standard deviation, and AP standard deviation for I1, I2, and I3 was found at each grid point of the SDS-UE; the overall standard deviation quartile was found by averaging these nine standard deviation percentiles)
Figure 10: Investigator 1's UE at 100% load magnitude colored by how much it differs in VV° from Investigator 2's UE at 100% load magnitude. A: The previous method for the UE was used for the plot and in the difference calculation for color. B: The SDS method for the UE was used for the plot, in the difference calculation, and to determine significance; only significant differences are colored (one-tailed t-test, p<0.2). C: Same as plot B with p<0.05 rather than p<0.2
Figure 11: Investigator 1's UE at 100% load magnitude colored by how much it differs in IE° from Investigator 2's UE at 100% load magnitude. A: The previous method for the UE was used for the plot and in the difference calculation for color. B: The SDS method for the UE was used for the plot, in the difference calculation, and to determine significance; only significant differences are colored (one-tailed t-test, p<0.2). C: Same as plot B with p<0.05 rather than p<0.2
List of Abbreviations
ACL Anterior Cruciate Ligament
AP Anterior-Posterior
DOFs Degrees of Freedom
EJBRL Experimental Joint Biomechanics Research Lab
I1 Investigator 1
I2 Investigator 2
I3 Investigator 3
IE Internal-External
MAD Median Absolute Difference
RBF Radial Basis Function
SDS Sequential Downsample
SDS-UE Unified Envelope found from sequentially downsampled data
UE Unified Envelope
VV Varus-Valgus
Chapter 1: Introduction
The passive knee joint has been studied extensively in the past to form a better
understanding of the complex relationship between the different anatomical structures of the
knee joint [1]. The constraint of the knee is a result of multiple anatomical structures working
together to allow the knee both translational and rotational freedom of motion [2]. In the past,
methods for measuring this constraint have generally been limited to the application of a single
force followed by a measurement of relative displacement in the direction the force was applied.
More recent work, performed in the Experimental Joint Biomechanics Research Lab at
the University of Kansas, has attempted to describe multiple degrees of freedom (DOF) at the
same time, using radial basis functions to develop the unified envelope (UE) of constraint. The
UE has been shown to be a valuable measure, and has been applied to research involving the
contribution of individual ligaments to overall constraint and has helped develop and verify
computational models of the knee. Currently, the UE describes the knee with exact positions at
each applied force and is limited by its inability to comment on confidence in the reported
position. This limits its application and makes comparisons between UEs more difficult.
The main goal of this research was to develop a measure of the uncertainty in the unified
envelope. There were also two secondary goals that focused on more general improvements to
the unified envelope. The first of these was to make the UE more robust by reducing the effect
the investigator has on the final result. The second goal was to determine the best protocol for
data collection during laxity evaluations by comparing UEs calculated from different loading
protocols.
Chapter 2 provides a review of the current literature, providing background on joint laxity
assessments, radial basis functions, and the work done by the Experimental Joint Biomechanics
Research Lab at the University of Kansas. Chapter 3 presents a study with a method for
quantifying uncertainty in the UE and provides information on how well the method works.
Chapter 4 has the conclusions of the research and applications for the knowledge gained from
this thesis.
Chapter 2: Literature Review
The laxity of the passive knee joint has been studied for decades by dozens of authors to
help understand how the anatomy of the knee affects overall joint constraint [1]. Laxity of the
knee is defined as the displacement of the tibia relative to the femur when transitioning from an
unloaded to a loaded state, and the knee joint is considered passive when there is no muscle
activation affecting motion. Laxity evaluations have been commonly performed both clinically
and in research labs to describe the behavior of the passive knee joint. A laxity evaluation is
performed by measuring the change in position of the tibia relative to the femur when a load or
torque is applied. Research involving laxity evaluations are generally performed in vitro to
completely remove muscle activation, but clinical laxity evaluations must be in vivo. Clinical
laxity evaluations are usually performed with the goal of diagnosing injuries, like torn ligaments
[3,4], or when balancing the ligaments after a total knee replacement [5]. Early methods for
evaluating knee laxity were performed manually and were more qualitative than many methods
used today [6]. This review describes methods for performing laxity evaluations along with the
accuracy and consistency of some of these methods.
The first manual method for evaluating knee laxity was the anterior drawer test [7],
which was first described as early as 1879 [8]. This method was used to diagnose a torn ACL by
qualitatively comparing the motion of the injured knee to the healthy knee. Another qualitative
test, the Lachman's test, came into use in the 1970s. The Lachman's test has been
shown to be more sensitive and specific than the anterior drawer test [9], but it still relies on
qualitatively comparing the injured knee to the healthy knee. Later clinical devices, such as the
Rolimeter, KT-1000, and KT-2000, have attempted to quantify the Lachman’s test, but these
devices are primarily used as diagnostic tools rather than for full laxity evaluations [10,11].
An objective measurement of the properties of the passive knee joint has long been a goal
of researchers. Methods for controlling loads have included direct manipulation [12],
instrumented handlebars [13], and Instron loading devices [14]. The motions of the knee have
been measured with Roentgen stereophotogrammetry [15], triaxial electrogoniometers [15], and
6 degree of freedom (DOF) instrumented spatial linkages [16]. These methods, along with the
introduction of the Grood and Suntay joint coordinate system as a standard for reporting
motions, greatly improved the understanding of the passive knee joint in vitro [17].
Most research performed before the 1980s focused on measuring a specific behavior for
many specimens, rather than measuring a range of behaviors for an individual specimen. To
address this limitation, Blankevoort et al. developed a rig in 1988 that would give the knee 6
DOFs while applying several combinations of external loads and torques at different flexion
angles [1]. Blankevoort called this the passive envelope of motion, and it provided a quantitative
way to consistently and repeatedly evaluate knee laxity under a variety of different conditions.
This method could be used outside of the clinic to compare differences between knees and
quantify the contribution of different ligaments to the overall constraint of the knee [18].
Blankevoort’s method was an improvement over other knee laxity evaluation tools at the time,
but operation was very time consuming and required a complicated rig to be built before any
knee laxity evaluations could be performed.
Many devices to measure knee laxity have been introduced since Blankevoort’s work on
the passive envelope. The most common device currently used in both research and the clinic is
the KT-2000 [19]. The KT-2000 is both simple to use and well-studied, but it only measures AP
motion at one flexion angle. To fully understand the complex joint structures and ligament
interactions, laxity must be evaluated in multiple DOFs [2]. A new device was designed to
measure the knee in four DOFs (flexion angle, internal-external rotation, medial-lateral
translation, and anterior-posterior translation) in vivo, and preliminary cadaveric studies have
demonstrated proficiency in measuring clinical laxity tests [20]. However, this device has a
relatively high cost and would require more research before it is considered a reliable tool.
A protocol was developed at the University of Kansas Experimental Joint Biomechanics
Research Laboratory (EJBRL) to address limitations found in other knee laxity studies. The
protocol was designed to allow for loading and movement in 6 DOF with the goal of describing
how the knee responds to multidirectional loading throughout the range of flexion. This protocol
was performed by rigidly mounting the femur in an inverted-vertical position, and then manually
applying external loads and torques, measured by a load cell, to the distal end of the tibia [21].
Like Blankevoort’s method, this protocol allowed the knee to move in 6 DOFs and measured
performance with multiaxial loading. Blankevoort’s rig, however, was limited to only measuring
a few different load magnitudes at specific and controlled flexion angles, while this new protocol
allowed for the application of almost any load or combination of loads to be measured and
approximated throughout the full range of flexion. This protocol provided a more detailed
description of the passive knee joint [22], but was mostly limited to visualizing the effect of a
load or torque in only one direction with respect to flexion angle, and couldn’t interpolate very
well if the data set was too scattered or sparse. To address these limitations, radial basis functions
(RBF) were introduced to describe multiple DOFs simultaneously [23]. RBFs have been shown
to outperform other approximation methods, such as splines or polynomials, when data sets have
many dimensions or when the distribution is non-uniform or scattered [24,25]. This method
provided a way to approximate how the knee would behave with any kind of loading at any
flexion angle, or under specific loading conditions without the need to directly measure those
conditions. For example, one of the loading configurations from Blankevoort’s rig could be
approximated with an RBF without the need to build the rig and collect data from that exact
configuration. This ability to predict any position from a given set of loads made it possible to
compare knees through any range of flexion angles and applied loads, a task that was previously
only possible with direct measurements of both knees. Descriptions and visualizations of the
behavior of the knee in more than two dimensions were now possible as well [26].
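The kind of scattered multidimensional approximation described above can be sketched with an off-the-shelf RBF interpolator. The data, kernel choice, and variable names below are illustrative assumptions, not the study's actual data or implementation:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Synthetic stand-in for a laxity data set: scattered 4-D inputs
# [flexion angle, VV torque, IE torque, AP force] (already scaled) and
# 3-D outputs [VV angle, IE angle, AP position]. The linear relationship
# below is purely illustrative.
loads = rng.uniform(-1.0, 1.0, size=(500, 4))
positions = np.column_stack([
    2.0 * loads[:, 1] + 0.1 * loads[:, 0],  # VV angle, driven mostly by VV torque
    5.0 * loads[:, 2],                      # IE angle, driven by IE torque
    0.5 * loads[:, 3],                      # AP position, driven by AP force
])

# Fit one RBF mapping loads/flexion to positions, then query a loading
# configuration that was never directly measured.
rbf = RBFInterpolator(loads, positions, kernel="thin_plate_spline")
query = np.array([[0.2, 0.5, -0.3, 0.1]])
predicted = rbf(query)  # approximate [VV angle, IE angle, AP position]
```

The key property used throughout this chapter is in the last two lines: once fit, the RBF can be evaluated at any loading configuration, not just the ones that were measured.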
While RBFs allowed for more comparisons to be made between the passive envelope of
motion of different knees, it was unclear whether these observed differences were meaningful or
not. To comment on whether or not observed differences may be significant, some measure of
uncertainty in the RBF approximation is necessary. Some methods have been proposed for
estimating the uncertainty in the approximations of RBFs, such as counting the number of
training points within an n-dimensional sphere of the point being approximated [27], but these
methods are primarily measures of how much the RBF is extrapolating, rather than how accurate
the extrapolation is. When estimating uncertainty, there are many different steps in the
process where it can arise. First, there is uncertainty in the tools used for data collection. The
Optotrak 3020 camera system has been shown to have a standard deviation of 0.10 mm when
measuring a 10 mm translation [28]. There is also uncertainty in the calculation of Grood and
Suntay kinematics based on the position of probed bony anatomical landmarks [29]. Uncertainty
in the repeatability of measurements between investigators has been estimated to be 0.1°-0.2° for
varus-valgus rotations and 0.3°-0.4° for internal-external rotations [22]. Finally, the RBF
approximation has been shown to be robust to both random and clustered decimation of training
data, with an average normalized root mean squared error below 2% and 2.6% respectively for
all DOFs [26].
Along with the lack of a measure of uncertainty, one of the biggest limitations still present in the
work done by the EJBRL is the lack of compressive load control. Compressive loads have been
shown to significantly affect measurements of laxity [30], and incorporating a compressive load
into the laxity evaluations or into the calculation of the unified envelope (UE) could help decrease uncertainty.
However, a measure for uncertainty would need to be developed before research could be done
on how to minimize it. The uncertainty in the path dependency of the knee and in the ability of
the RBF to approximate known data along each loading path has yet to be studied.
Chapter 3: A Novel Method for Quantifying Uncertainty in the Unified Envelope of
Constraint in the Passive Knee Joint
3.1 Abstract
The unified envelope (UE) of constraint has been proposed as a way to describe the
overall laxity of the passive knee joint. The UE is currently calculated with the use of a radial
basis function, which has been shown to provide a useful approximation of the multidimensional
relationship between applied forces and observed kinematics. However, the UE does not provide
any information regarding its certainty in the approximation at any given position, making
comparisons between UEs more difficult. The primary goal of this paper was to develop a
measure of uncertainty in the UE, specific to each point within the UE, that could be used to
estimate how significant any observed differences may be. Other objectives included reducing
the investigator’s effect on the UE and determining the best protocol for loading during laxity
evaluations. Manual laxity evaluations were performed by three investigators on one cadaveric
specimen and used to calculate an estimate for the variation in calculated position throughout the
envelope. Depending on investigator, applied forces, and flexion angle, the estimate for standard
deviation ranged from 0.24°-0.58° VV, 0.92°-2.57° IE, and 0.31mm-0.98mm AP. Results
showed a decrease in the median absolute difference between investigators of 18.3% in VV
angle difference, 15.6% in IE angle difference, and 16.2% in AP position when compared to the
previous method. The importance of collecting both uniaxial and multiaxial loading trials during
the laxity evaluation was also verified. Future studies could apply this method to further research
on the knee, the constraint of other joints, or incorporating a measure of uncertainty into RBFs.
3.2 Introduction
The performance and stability of the active knee joint has been shown to be a function of
both the anatomy of the knee and the applied forces. To improve the understanding of the
constraint provided by the anatomical structures, past research has focused on studying the laxity
of the passive knee joint [1]. Knee laxity is essentially how much the knee joint is displaced
under a load at a flexion angle, without muscle activation. The displacement observed with the
application of a load is a function of both the articular geometry and the soft tissue structures
around the knee, and excessive knee laxity has been shown to be associated with less stability in
the joint, recurrent dislocations, and inflammatory arthritis [31]. Laxity evaluations are
commonly used in academic research, but also help inform models of the joint [32,33] and can
be used clinically [5]. Clinical laxity evaluations have traditionally been qualitative comparisons
between an affected knee and healthy knee for the same subject with the primary goal of
diagnosing injuries, usually involving torn ligaments [7,34]. Newer devices are more
quantitative, and generally work by applying a load in one direction, and then measuring the
displacement in that direction [6,19]. These devices are still used mainly for diagnoses [9] and
usually simplify their description of the knee to only one or two degrees of freedom (DOF) [11].
Describing the laxity of the knee in only two dimensions (i.e. flexion angle vs IE
rotation) does not fully capture the complex, multidimensional constraint of the joint [2]. To
combat this problem, radial basis functions (RBF) have been proposed as a method for
calculating the approximate position of the knee in multiple dimensions, when multiple loads are
acting on it [26]. RBFs provide a means for making approximations from multivariate data sets
based on the distance the point to approximate is from the experimental data. RBFs have been
used in the past to describe data sets with high dimensionality, and can reliably describe data sets
with a scattered or unknown distribution [24,25].
The introduction of RBFs to calculate the total constraint of the passive knee joint has
allowed for a unified envelope (UE) description of the knee to be identified, with a continuous
relationship between the independent (applied loads and flexion angles) and dependent
(positions) variables [35]. Using RBFs to approximate the multidimensional load-displacement
response of the knee makes comparisons between knees much easier, because RBFs allow for
the approximation of multidimensional loading configurations to be calculated for any knee,
without the need to collect data around that specific loading configuration. The UE can be
calculated across a common grid space of applied loads and flexion angles, allowing for the
comparison of many different loading configurations at once.
Unfortunately, the UE will only provide a single estimate for the position of the knee at
any given combination of loads and flexion angle, without a measure of how confident it is in
that estimate. This means that the current methods for using RBFs to describe the UE assume
that a given loading condition has a single position, regardless of the loading path taken to get
there. In other words, RBFs can help calculate a multidimensional surface of best fit, but the
accuracy of this surface isn’t clear [27].
The primary objective of this study was to develop a measure of uncertainty in the
calculated UE without sacrificing any of the benefits that have been associated with using RBFs
to describe the data. There are also two secondary objectives in this study: the first is to make the
UE more robust to changing investigators. A decrease in the difference between two
investigators' UEs on the same knee should increase confidence in the ability of both UEs to
describe the knee in question. The last objective is to determine the best protocol for loading
when collecting data, specifically whether multiaxial loading is necessary to collect, or if
uniaxial loading alone is able to adequately describe the knee.
3.3 Methods
Experimental data were collected on one fresh-frozen cadaveric leg (Male, Age: 67 years,
BMI: 19). The specimen was thawed for 24 hours before tissue farther than approximately 20 cm
from the epicondylar axis (both proximal and distal) was removed to allow for cylindrical
aluminum fixtures to be attached with bone cement to both the femur and tibia. The fibula was
secured to the tibial fixture to prevent any relative motion. The femoral fixture was mounted in
an inverted-vertical position (Fig. 1) while the tibial fixture was attached to a 6-DOF tri-axial
load cell (JR3 Inc., USA) and an analog foot. Both the femoral and tibial fixtures had rigid body
markers attached to them, and positions were tracked using an Optotrak Certus infrared camera
system (NDI, Canada). Both loads and kinematics were collected at 100Hz. After the laxity
evaluations were performed, the specimen was dissected down further to identify and digitize
bony landmarks on both the femur and tibia. These points were used to set up the coordinate
system to calculate relative motion between the tibia and femur based on Grood and Suntay’s 3-
cylindrical open-chain coordinate system [17].
The specimen then underwent a full laxity evaluation. A full laxity evaluation involves
manually applying a range of loads to the distal end of the tibia in one primary DOF at a time
(uniaxial loading) throughout the full flexion range of the specimen. The three primary loads and
torques were varus-valgus torque (VV) (±16 Nm), internal-external torque (IE) (±6 Nm), and
anterior-posterior force (AP) (±60 N), and they were applied by smoothly alternating between
positive and negative loads and torques at different flexion angles. Along with the three uniaxial
loading conditions, multiaxial loading evaluations were also performed with loading applied in
two of the three axes at a time. Full laxity evaluations were performed by three different
investigators, each with a different level of experience performing laxity evaluations ranging
from very experienced to a first-time investigator, to determine the effect of investigator and
experience level on the final UE. The first investigator performed their laxity evaluation for
approximately three times as long as the others to determine if the collection of more data
resulted in a meaningful change to the UE. All three laxity evaluations were performed within 2
hours of each other to minimize the effect of tissue degradation. Each evaluation was assisted by
a near real-time LabVIEW interface that gave visual feedback for the applied loading at different
flexion angles (Fig. 2). This helped ensure that applied loading was consistent and that
experimental data were collected across all desired conditions by all investigators.
After data collection, the kinematics and loads were transformed into the Grood and
Suntay coordinate system for analysis. Then data from the load cell were used to separate the
experimental data into loading and unloading portions. Data that were collected while the overall
applied load was increasing were retained, while data collected while the overall applied load was
decreasing were removed. The loading data were then used as training data to develop multiple
radial basis functions (RBF) for each investigator’s laxity evaluation.
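The loading/unloading split can be sketched as follows, assuming the split is made on the resultant load magnitude; the thesis does not specify the exact criterion, so this is one plausible implementation:

```python
import numpy as np

def loading_mask(loads: np.ndarray) -> np.ndarray:
    """Boolean mask keeping samples collected while the overall applied
    load magnitude was increasing (the loading portion); samples from
    the unloading portion are dropped. `loads` is (n_samples, n_channels)
    from the load cell."""
    magnitude = np.linalg.norm(loads, axis=1)
    increasing = np.diff(magnitude) > 0
    # np.diff shortens the array by one; label the first sample like its
    # immediate transition.
    return np.concatenate(([increasing[0]], increasing))

# Toy ramp: load rises to a peak and falls back; only the rise is kept.
ramp = np.array([[0.0], [1.0], [2.0], [3.0], [2.0], [1.0], [0.0]])
mask = loading_mask(ramp)
loading_data = ramp[mask]
```

A real implementation would likely also smooth the magnitude signal before differencing, since raw 100 Hz load-cell data is noisy.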
For this study, four independent variables [knee flexion angle (°), VV torque (Nm), IE
torque (Nm), and AP force (N)] were used to solve for three dependent variables [VV angle (°),
IE angle (°), and AP position (mm)] (Fig. 3). The three loads and the flexion angle were each
scaled by 95% of their respective ranges to get all variables on approximately the same scale. In
this study, RBFs were applied in two steps of the analysis.
First, in order to determine how well RBFs can approximate known data, they were trained with
some of the experimental data withheld and then used to predict that withheld data. Then RBFs
were used to calculate the UE with three dependent variables across a generic, evenly spaced
grid of independent variables to allow for comparisons between knees.
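The 95%-of-range scaling of the independent variables mentioned above can be sketched as a simple column-wise division; whether the original analysis also centered the variables is not stated, so no centering is applied here:

```python
import numpy as np

def scale_by_95_range(x: np.ndarray) -> np.ndarray:
    """Divide each column by 95% of its observed range so that all
    independent variables sit on approximately the same scale."""
    span = 0.95 * (x.max(axis=0) - x.min(axis=0))
    return x / span

# e.g. flexion angle (deg) and VV torque (Nm) columns
x = np.array([[0.0, -16.0], [60.0, 0.0], [120.0, 16.0]])
x_scaled = scale_by_95_range(x)
```

Scaling matters for RBFs because the kernel depends on Euclidean distance in the input space, so a variable with a much larger numeric range would otherwise dominate the fit.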
First, in order to estimate how well RBFs can approximate known positions of the knee,
each individual loading path was removed from the data set, and then approximated using an
RBF trained with the remaining data. Kinematics of the knee depend on the forces applied to it,
but also on the previous position the knee was in. In other words, observed positions of the knee
are both load and path dependent. Therefore, in order to determine how well RBFs are able to
predict observed data, random data removal is not ideal because the RBF can use nearby points
along each loading path to reliably interpolate the randomly removed points. However, each
loading path, when treated as a whole, is independent from every other loading path. By
removing one full path from the data set and using an RBF fit to all other paths to predict it, it is
possible to see how well the RBF can predict independent points, like the grid points of the UE.
The number of loading paths for each investigator and loading type is shown in Table 1, along
with the total number of data points collected.
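The path-wise hold-out described above can be sketched as a leave-one-group-out loop; the kernel choice and synthetic data are assumptions for illustration:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def leave_one_path_out(inputs, outputs, path_ids):
    """For each loading path, fit an RBF to every other path and predict
    the held-out path. Returns predictions aligned with `inputs`."""
    predictions = np.empty_like(outputs)
    for pid in np.unique(path_ids):
        held = path_ids == pid
        rbf = RBFInterpolator(inputs[~held], outputs[~held],
                              kernel="thin_plate_spline")
        predictions[held] = rbf(inputs[held])
    return predictions

# Synthetic check: 10 "paths" of 20 points each with a smooth response.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 4))
y = 2.0 * X[:, 0] + X[:, 1]
paths = np.repeat(np.arange(10), 20)
pred = leave_one_path_out(X, y, paths)
```

Removing a whole path at a time, rather than random points, is what makes the held-out points genuinely independent of the training data, as the surrounding text explains.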
Once a path was removed, the remaining data were sequentially downsampled (SDS) by
taking every fifth data point collected starting at the first data point, then the second, third,
fourth, and finally fifth data point. This results in five nearly equal data sets that can be used to
train five different RBFs, which can then be used to get five slightly different predictions for
each point along the removed path (Fig. 4). The data were downsampled because otherwise, the
RBF puts too much weight into the closest group of points, creating a UE that is locally
overfitting to the data, rather than picking up on the broader patterns of the knee. After the five
RBFs were used to get five predictions, the same downsampling, training, and prediction process
was repeated 10 times using every 10th point, 20 times using every 20th point, and 40 times
using every 40th point to get a total of four downsampling cases with a total of 75 different
predictions (5 + 10 + 20 + 40 = 75) for each point along the path (Fig. 4). The RBF is robust to
this kind of downsampling because, based on the cumulative power spectral density, over 99% of
the power in both the input and output signals is contained in frequencies less than 2.5 Hz (every
40th point of a 100 Hz signal) (Fig. 5), and RBFs have been shown to approximate the UE well
with fewer than 900 well-spaced points [26].
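The sequential downsampling scheme can be sketched directly; the sample count below is hypothetical:

```python
import numpy as np

def sequential_downsample(n_samples: int, step: int):
    """Split sample indices into `step` nearly equal subsets by taking
    every `step`-th sample starting at each possible offset."""
    return [np.arange(offset, n_samples, step) for offset in range(step)]

# Every 5th, 10th, 20th, and 40th point yields 5 + 10 + 20 + 40 = 75
# training subsets, hence 75 RBF predictions per point.
n = 1000  # hypothetical number of retained samples
subsets_per_case = [sequential_downsample(n, k) for k in (5, 10, 20, 40)]
total_models = sum(len(case) for case in subsets_per_case)
print(total_models)  # 75
```

Note that within each case the subsets are disjoint and together cover every retained sample exactly once, so no data are discarded; they are only redistributed across models.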
The mean and variance of the predictions from each of the four downsampling cases were
calculated (i.e. the mean and variance of the results from the 5 RBFs trained with every 5th
point, 10 RBFs trained with every 10th point, 20 RBFs trained with every 20th point, and 40
RBFs trained with every 40th point) and then an overall mean and variance were found (Fig. 4)
by weighting each of the four downsampling cases equally, rather than weighting each of the 75
predictions equally (this gives more weight to the results from the RBFs that were trained with
more data). This gave a single prediction for every point, along with the variance for that
prediction. Unfortunately, the variance from the RBF predictions alone was too small to
accurately describe the observed uncertainty. To correct for this, the mean variance of each
dependent variable (VV angle, IE angle, AP position) was compared to the variance seen in the
difference between the observed data and predicted points for that dependent variable. This gave
a variance difference for each dependent variable that, when added to the variance calculated
from the RBFs, uniformly shifted each prediction’s variance up to more closely match the
observed variance (Table 2). This shift is also used to increase the variance for each dependent
variable in the generic UE.
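One plausible reading of the pooling described above is sketched below: the overall mean and variance are taken as the means of the per-case means and variances, and the variance shift is then added uniformly. The exact pooling formula and the shift value are assumptions for illustration:

```python
import numpy as np

def combine_cases(case_predictions):
    """Combine RBF predictions for one point across the four downsampling
    cases, weighting each case equally rather than each of the 75
    predictions equally."""
    means = np.array([p.mean() for p in case_predictions])
    variances = np.array([p.var(ddof=1) for p in case_predictions])
    return means.mean(), variances.mean()

# Hypothetical predictions for a single grid point from the 5/10/20/40
# cases (shortened lists for illustration).
cases = [np.array([1.0, 1.2]),
         np.array([0.9, 1.1, 1.0]),
         np.array([1.0, 1.0, 1.05, 0.95]),
         np.array([1.1, 0.9, 1.0, 1.0, 1.0])]
mean, var = combine_cases(cases)

# The variance shift (an assumed value here) is added uniformly so the
# prediction variance more closely matches the observed variance.
variance_shift = 0.02
shifted_var = var + variance_shift
```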
RBFs were also used to calculate the UE across a generic grid of evenly spaced
independent variables, rather than observed independent variables, to allow for comparisons
between knees and investigators. The grid was designed to stay within the range of
experimentally collected data, and included 13 flexion angles (every 10° from 0° to 120°), 17
VV torques (every 2 Nm from -16 Nm to 16 Nm), 13 IE torques (every 1 Nm from -6 Nm to 6
Nm), and 13 AP forces (every 10 N from -60 N to 60 N) to give 37,349 evenly spaced points.
The same sequential downsampling process used to predict points along loading paths was also
used here to calculate 75 different UEs. The means and variances of each downsampling case
were found and used to find an overall mean and variance for the SDS-UE (SDS will be used to
differentiate when the UE is calculated using the method proposed in this paper, rather than the
previous method), in the same way an overall mean and variance were found earlier when
predicting paths. The final step was to increase the variance for each dependent variable
throughout the SDS-UE by the variance shift that was used to fit predicted data to observed data.
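The generic grid of independent variables described above can be constructed directly, and its size checked:

```python
import numpy as np

flexion = np.arange(0, 121, 10)   # 13 flexion angles (deg)
vv = np.arange(-16, 17, 2)        # 17 VV torques (Nm)
ie = np.arange(-6, 7, 1)          # 13 IE torques (Nm)
ap = np.arange(-60, 61, 10)       # 13 AP forces (N)

grid = np.stack(np.meshgrid(flexion, vv, ie, ap, indexing="ij"), axis=-1)
grid = grid.reshape(-1, 4)
print(grid.shape[0])  # 13 * 17 * 13 * 13 = 37349 evenly spaced points
```

Evaluating every investigator's RBF over this same grid is what makes the resulting SDS-UEs directly comparable, point by point.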
Each of the three investigator’s laxity evaluation data were kept separate through every
step and were used to calculate four different UEs used for data analysis. The first UE was
calculated using the previous method for each investigator. This was to identify the differences
between the previous method for calculating the UE and the method proposed in this paper, or
the SDS-UE. Along with the main SDS-UE for each investigator, calculated with all available
loading data collected by an individual investigator, two other SDS-UEs were calculated, giving
a total of 9 SDS-UEs and 3 UEs from the previous method. One SDS-UE was trained with only
the uniaxial loading trials and the other was trained with only multiaxial loading trials. By using
each SDS-UE’s overall mean and shifted variance, it was possible to directly compare
differences between knee envelopes and comment on whether or not the difference was
significant.
3.4 Results
3.4.1 Visualizing the UE in 3D
To visualize the UE, isosurface renderings of the first investigator’s SDS-UE were
created from 0° to 120° flexion (Fig. 6) at 50% and 100% of the largest magnitude of calculated
IE and VV torques, normalized to their respective maximums (16 Nm of VV torque and 6 Nm of
IE torque). As the flexion angle increases the plot appears to become wider along both the VV
and IE axes. This means the knee has a larger range of motion at 120° flexion than it does at 0°
flexion for both IE and VV motion. Notice that the UE is showing a pure external torque along the
top spine of the plot. This top spine is the motion of the knee at 100% of the applied IE torque (6
Nm) and 0% of the applied VV torque while the bottom spine is the motion at 100% of the IE
torque applied internally (-6 Nm). Similarly, the top and bottom spine of the red isosurface is the
motion of the knee at 50% IE torque (3 Nm). When the data with a valgus load is omitted (Fig.
6C), these spines are represented more clearly as the edges of the UE at 50% and 100% IE
torque. The left and right spines of the UE should be thought of in the same way, but with an
applied VV torque rather than IE torque (Fig. 6B).
3.4.2 Uniaxial Loading vs Multiaxial Loading
Along with the main SDS-UE, found by training the RBF with all the loading trials
collected by a single investigator, two other SDS-UEs were calculated. One was calculated using
only the uniaxial loading trials, and the other with only the multiaxial loading trials. To visualize
how the SDS-UE changes when different loading data is used, Figure 7 has these two cases
plotted together for investigator 1. The center column of plots shows the UE with only one torque
applied at 5 magnitudes [100% (top line), 50%, 0% (center line), -50%, and -100% (bottom
line)] in the direction of each plot’s Y-axis (i.e. lines in top row of plots show different VV
torques while lines in the bottom row show different IE torques) while the outside plots show the
UE at ±100% IE (top row) or VV (bottom row) torque with the same 5 load magnitudes. The top
center plot in Figure 7 should be thought of as a top view of the edges in plot B of Figure 6 with
an extra line at 0 Nm of applied torque, while the bottom center plot in Figure 7 should be
thought of as a side view of the edges in plot C of Figure 6. Notice that the uniaxial and
multiaxial UEs perform very similarly when looking at only one applied torque at a time (center
column of plots and middle lines of outside plots), but there are large differences whenever
multiaxial loading is applied.
To numerically compare each type of loading, Table 3 shows the median absolute
difference (MAD) between all investigators for each variable and type of loading, along with the
percent of grid points that are significantly different (two-tailed t-test, p<0.05) for each variable
between. Note that Table 3 is comparing all investigators, so the values shown are the averages
from all investigator comparisons (I1 vs I2, I1 vs I3, and I2 vs I3). There is a decrease in average
MAD for all variables when comparing SDS-UEs trained with only uniaxial loading data to
SDS-UEs trained with only multiaxial loads, and another decrease when the SDS-UEs have
access to both types of loading data for training. The trend for significant differences (two-tailed
t-test, p<0.05) between investigators is similar with almost all variables showing a decrease in
significant differences from only uniaxial loading to only multiaxial loading, and again from
only multiaxial loading to both types of loading.
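One way to implement the per-grid-point two-tailed t-test is Welch's test computed from each investigator's SDS mean and variance at that point; the exact test and the effective sample sizes used in the thesis are assumptions here:

```python
import numpy as np
from scipy import stats

def welch_test(m1, v1, n1, m2, v2, n2):
    """Two-tailed Welch t-test from the per-grid-point mean and variance
    of each investigator's UE (summary-statistics form)."""
    se2 = v1 / n1 + v2 / n2
    t = (m1 - m2) / np.sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    p = 2.0 * stats.t.sf(abs(t), df)
    return t, p

# A clearly different pair of grid-point estimates...
_, p_diff = welch_test(1.0, 0.04, 75, 1.3, 0.05, 75)
# ...and a pair within noise of each other.
_, p_same = welch_test(1.0, 0.04, 75, 1.02, 0.05, 75)
```

Running this at every grid point and counting p < 0.05 gives the percentages reported in Table 3.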
3.4.3 Comparing investigators and methods
The median absolute difference was calculated between each investigator’s UE using
both the previous method and SDS method at all grid points in the UE (Table 4). A lower median
absolute difference suggests the UEs are more similar to each other, and that the method for
calculating the UE is less dependent on who the investigator is. On average, the MAD decreased
by 18.3% in VV angle difference, 15.6% in IE angle difference, and 16.2% in AP position with
the SDS method. The largest MAD between UEs is found when comparing investigator 1 to
investigator 3 for all variables and both methods.
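The MAD metric used in Table 4 reduces to a one-line computation over the shared grid; the toy values below are illustrative:

```python
import numpy as np

def median_absolute_difference(ue_a: np.ndarray, ue_b: np.ndarray) -> float:
    """Median absolute difference (MAD) between two UEs evaluated over
    the same grid of loads and flexion angles."""
    return float(np.median(np.abs(ue_a - ue_b)))

# Toy values for one dependent variable at four grid points.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([1.0, 3.0, 3.0, 6.0])
print(median_absolute_difference(a, b))  # 0.5
```

The median, rather than the mean, keeps a few extreme grid points from dominating the comparison between investigators.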
To visualize the differences between methods, Figure 8 shows the range of all
investigators' UEs when calculated using both the previous method (red) and the SDS method
(blue). It is clear from the figure that the SDS method produces much smoother lines; this is most
noticeable at the 0 Nm IE and 0 Nm VV lines where the previous method shows sharp points,
rather than gradual changes. The previous method also has a larger range between investigators
than the SDS method at nearly every torque and flexion angle, leading to only a few places
where the range of the SDS method isn’t already covered by the range of the previous method.
3.4.4 Standard Deviation and Significant Differences
As a result of the way variance is uniformly shifted, the SDS method for estimating the
standard deviation of the UE produced values that are skewed to the right, or positively skewed,
for all variables and investigators (Table 5). It also produced standard deviations that are
investigator dependent, sometimes leading to little overlap in the range of standard deviation. For
example, the minimum standard deviation found for the VV angle of investigator 2 is 0.33°
while the maximum for investigator 3 is 0.36°. These factors make comparisons between
absolute standard deviation difficult, but by making comparisons with the percentile of standard
deviation, regions of more or less certainty become clearer.
By averaging the percentile of standard deviation found for each variable and
investigator, relative comparisons of overall uncertainty can be made. Figure 9 shows the 100%
load magnitude surface of the average SDS-UE from all investigators, colored by the average
quartile of standard deviation for all variables [VV (°), IE (°), and AP (mm)] and investigators
(I1, I2, and I3) in that region. The colors should be interpreted as regions of higher (red) and
lower (blue) relative uncertainty in the ability of the RBF to approximate the SDS-UE. The areas
of above average standard deviation (colored yellow for the 50th to 75th percentile, and red for
anything above the 75th percentile) are concentrated near 0° and 120° flexion, and areas where
both IE and VV torques are being applied (multiaxial loading). The lowest standard deviations
appear near the four spines of the SDS-UE (i.e. areas of either maximum or minimum VV or IE
angles). Generally, these are the areas where only one torque is acting on the knee (uniaxial
loading).
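Converting absolute standard deviations to within-distribution percentiles, as used for the coloring above, can be sketched with a rank transform; the exact percentile convention used in the thesis is an assumption:

```python
import numpy as np
from scipy.stats import rankdata

def sd_percentiles(sd: np.ndarray) -> np.ndarray:
    """Convert absolute standard deviations into percentiles within their
    own distribution, so investigators whose absolute SD ranges barely
    overlap can still be compared on a common relative scale."""
    return (rankdata(sd) - 0.5) / len(sd) * 100.0

sd_vv = np.array([0.24, 0.30, 0.33, 0.40, 0.58])
print(sd_percentiles(sd_vv))  # [10. 30. 50. 70. 90.]
```

Averaging these percentiles across variables and investigators gives the relative-uncertainty quartiles plotted in Figure 9.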
The benefits of using the SDS method to calculate the UE over the previous method are
best on display in Figures 10 and 11. Both figures are comparing investigator 1’s UE to
investigator 2’s UE, but Figure 10 is colored by VV° difference while Figure 11 is comparing
IE°. The first plot, A, in both figures was made using the previous method for calculating the
UE. In both figures, plot A shows the largest differences and the most area with differences. Plots
B and C were created using only the SDS method and had only significant differences colored
(one-tailed t-test). In plot B the significance level was set at p<0.2 while plot C had p<0.05. Plot
C should be thought of as the better comparison for both figures, showing almost no significant
differences in either figure. Plot B in both figures demonstrates how the SDS-UE performs in
areas where the previous method showed the largest differences. Figure 10 shows the previous
method had a large area of difference, from roughly the 20° to 70° flexion range, of about 0.6°
VV. The same place in plot B has a much smaller area, and the difference decreased to around
0.4° VV. Figure 11 has an area in about the same place that shows a difference of up to -6° IE
when using the previous method. In plot B, the same place has a difference of less than half the
magnitude of plot A, over much less area. Overall, the SDS method shows smaller differences
between investigators and produces easier to interpret plots.
3.5 Discussion
The results from this study suggest that the SDS method is less dependent on the
investigator collecting the data, while providing a method for estimating whether observed
differences between knees were significant, based on the uncertainty in the RBF predictions. The
data show less variation investigator-to-investigator with the SDS method for each dependent
variable. The SDS method also provides a smoother, more continuous relationship between
independent (loads and flexion angle) and dependent variables (positions) than the previous
method. These factors all point to the SDS method producing more consistent and accurate UEs
than the previous method.
The main goal of this paper, however, was not to improve accuracy or consistency of the
UE, but rather to determine how well the calculated UE describes the knee in question, and
which conditions cause an increase or decrease in the uncertainty of the predicted position of the
knee. To summarize how this was accomplished, it was first determined how well the RBF could
predict real-world data. This information was used to calculate an average observed variance for
RBF approximations, which was compared to the variance of the RBF predictions from the
decimation cases. The difference in these variances, called the variance shift, was used to
uniformly shift up the variance for the grid points in the SDS-UE. This method resulted in the
heavy right skew in the distribution of standard deviations. It also means that there is a minimum
standard deviation for each variable that is determined by the variance shift calculation. For
example, if all 75 RBF predictions for a single point were the exact same, then the variance of
the RBFs prediction would be 0, so the standard deviation at that point would simply be the
square root of the variance shift. This method appears to perform well by limiting areas of
overconfidence with the variance shift, while also allowing for larger standard deviations where
appropriate, such as regions of high flexion or multiaxial loading.
The final objective of this paper was to determine the best protocol for evaluating laxity
during data collection. It was found that SDS-UEs calculated with only uniaxial loading data had
the largest MAD between investigators for all variables, along with the most significant
differences. There was a large improvement when using only multiaxial loading data, likely
because the majority of the grid points for the UE have multiaxial loading, but using all loading
data was the most consistent. The difference in the total amount of data collected appeared to
have little effect on the final envelope. Investigator 2 and 3 each collected approximately 10
minutes of laxity evaluations, and their final envelopes were very similar to Investigator 1’s, who
collected closer to 30 minutes of data. The viscoelastic properties of the ligaments in the knee
can influence measurements of laxity, with studies showing a decrease in ACL stress by 50%
over 2 hours of constant strain [36]. However, stress relaxation is slower during cyclic loading,
and the time scales used in this study make it unlikely that the speed of the loading path had a
meaningful effect on the final envelope: the average path duration was 0.77 seconds for
investigator 1, 0.60 seconds for investigator 2, and 1.01 seconds for investigator 3. Therefore, the most
important aspect of data collection is collecting both kinds of loading data for approximately 10
minutes total, with the speed of loading path likely having a relatively minor effect.
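The consistency metric used throughout this comparison, the median absolute difference between two investigators' envelopes over all grid points, can be sketched in a few lines (toy arrays, not the thesis data; the function name is hypothetical):

```python
import statistics

def median_absolute_difference(ue_a, ue_b):
    """MAD for one dependent variable: median over all grid points of
    the absolute difference between two envelope predictions."""
    return statistics.median(abs(a - b) for a, b in zip(ue_a, ue_b))

# Toy VV predictions (degrees) at five shared grid points
inv1 = [1.2, -0.4, 3.1, 0.0, 2.2]
inv3 = [1.0, -0.1, 3.5, 0.2, 2.1]
print(median_absolute_difference(inv1, inv3))  # -> 0.2
```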
To assess whether an investigator’s previous experience performing laxity evaluations had
an effect, each investigator in this study had a different level of experience. Investigator 1 had
the most experience performing laxity evaluations and needed no instruction, investigator 2 had
limited experience and required some instruction, and investigator 3 had no prior experience. It
is difficult to determine exactly how much of a factor experience was, but the largest MAD was
found between investigators 1 and 3. However, this difference could also have been influenced
by how long the knee had been sitting out: investigator 1 performed the first laxity evaluation,
followed by investigator 2, and last was investigator 3.
Regardless of how much experience makes a difference, the SDS method produced more
consistent UEs, which reduces the need for experience relative to the previous method.
This method can benefit a variety of future work involving the passive envelope of
constraint in the knee. It allows for more consistent envelopes between investigators, reducing
the need to control for investigator. This method can theoretically be applied to ligament testing,
helping to identify regions in the UE with more or less confidence in observed differences. The
UE has been used in the past to identify changes in constraint due to full ligament tears [37], but
due to the increased robustness it may now be able to reliably pick up on smaller changes in
constraint, perhaps from partial ligament tears or meniscal injuries. It can also be applied when
researching total knee replacements, helping to identify how different components can constrain
movement relative to both the natural knee and to other knee replacement components.
In conclusion, the method presented in this paper for calculating UEs has been shown to
be more consistent between investigators, and more useful when comparing different UEs. It can
be applied to a wide range of topics involving knee laxity and what contributes to the constraints
of the knee in multiple DOFs. There are also likely to be applications in measuring and
comparing the constraint of other joints, for example when comparing hip replacements to the
natural hip, or to other hip replacement options. Applications could also be found in other work
involving quantifying uncertainty, or specifically in quantifying uncertainty from RBFs. Overall,
this method helps to create a more complete understanding of the passive knee envelope and how
different degrees of freedom in the knee are related.
3.6 Tables
Table 1: Number of paths and data points by each investigator and loading type
Investigator   Uniaxial Loading            Multiaxial Loading          Total
1              298 paths, 20,972 points    767 paths, 61,471 points    1,069 paths, 82,694 points
2              197 paths, 10,210 points    233 paths, 15,710 points    431 paths, 25,942 points
3              106 paths, 10,957 points    145 paths, 14,395 points    251 paths, 25,317 points
Table 2: Values for the calculated variance from the RBFs, the observed variance from the
difference between predicted and measured values, and the difference between the two used to
uniformly shift the variance from the RBFs up to match the observed variance.
Varus-Valgus (°) Internal-External (°) Anterior-Posterior (mm)
RBF Observed Shift RBF Observed Shift RBF Observed Shift
I1 0.009 0.07 0.06 0.15 1.08 0.93 0.02 0.11 0.09
I2 0.013 0.12 0.11 0.26 2.06 1.80 0.03 0.26 0.23
I3 0.005 0.07 0.07 0.06 0.91 0.84 0.02 0.22 0.20
Table 3: Average MAD of SDS-UEs and average percent of the SDS-UE that is significantly
different (two-tailed t-test, p<0.05) between investigators separated by dependent variable and
type of loading used to train the RBFs.
Average MAD % Significantly Different
Uniaxial Multiaxial Both Uniaxial Multiaxial Both
VV (°) 0.32 0.29 0.24 17.9% 7.6% 6.3%
IE (°) 1.05 0.81 0.72 18.4% 3.5% 3.7%
AP (mm) 0.99 0.48 0.39 36.7% 10.0% 9.0%
Table 4: Median absolute difference of UEs between investigators at every grid point of the UE
when using both the previous method and the SDS method, along with the relative percent
decrease in median absolute difference from the previous method to the SDS method.
                   Median Absolute Difference                Relative Decrease in
             Previous Method           SDS Method          Median Absolute Difference
          VV (°)  IE (°)  AP (mm)  VV (°)  IE (°)  AP (mm)  VV (°)   IE (°)   AP (mm)
I1 vs I2 0.25 0.85 0.36 0.20 0.66 0.31 21.7% 22.4% 12.0%
I1 vs I3 0.35 0.97 0.57 0.30 0.80 0.50 14.6% 17.0% 13.4%
I2 vs I3 0.29 0.75 0.48 0.23 0.70 0.37 19.8% 6.1% 22.6%
Average 0.30 0.86 0.47 0.24 0.72 0.39 18.3% 15.6% 16.2%
Table 5: Minimum (0th percentile), median (50th percentile), and maximum (100th percentile)
values for standard deviation for each investigator and output.
Varus-Valgus (°) Internal-External (°) Anterior-Posterior (mm)
Min Median Max Min Median Max Min Median Max
I1 0.24 0.25 0.56 0.97 1.00 1.95 0.31 0.32 0.62
I2 0.33 0.34 0.58 1.34 1.37 2.57 0.48 0.50 0.98
I3 0.26 0.27 0.36 0.92 0.93 1.47 0.45 0.46 0.74
Figure 2: LabVIEW interface providing near real-time feedback for laxity evaluations. Square
color changed as more data were collected within each square; black indicates that no data had
been collected in that region.
Figure 3: RBF flowchart. Experimental data is used to determine the weight of each
experimental point, then the weights are used to estimate dependent variables through a generic,
evenly spaced grid [26].
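The two steps in the flowchart are standard RBF interpolation: solve for one weight per experimental point, then evaluate the weighted kernel sum on a regular grid. A sketch with SciPy's general-purpose interpolator (the toy data and default kernel are assumptions, not the thesis' configuration):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Toy experimental points: (flexion, VV torque, IE torque, AP force), normalized
rng = np.random.default_rng(0)
loads = rng.uniform(-1.0, 1.0, size=(200, 4))
vv_kinematics = 2.0 * loads[:, 1] + 0.1 * loads[:, 0]  # toy VV response (deg)

# Step 1: fitting determines the weight of each experimental point
rbf = RBFInterpolator(loads, vv_kinematics)

# Step 2: estimate the dependent variable on a generic, evenly spaced grid
axes = [np.linspace(-1.0, 1.0, 5)] * 4
grid = np.stack(np.meshgrid(*axes), axis=-1).reshape(-1, 4)
predictions = rbf(grid)
print(predictions.shape)  # one VV estimate per grid point -> (625,)
```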
Figure 4: Example of predictions made for one experimental point at 30.1° flexion, -1.9 Nm VV
torque, -5.6 Nm IE torque, and 6.2 N AP force. All four downsampling cases are included, with
the 5 predictions made from every 5th point in magenta, 10 predictions from every 10th point in
blue, 20 predictions from every 20th point in cyan, and 40 predictions from every 40th point in
green. Included are the four means ± 1 standard deviation of each downsampling case. The red
dot is the mean of the four downsampling-case means ± 1 standard deviation, and the black dot
is the measured value that was being predicted.
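The 75 predictions in this figure follow from a counting detail: decimating a record by n produces n distinct subsets, one per starting offset, so the four cases yield 5 + 10 + 20 + 40 = 75 trained models. A sketch of the subset generation only (the RBF fits themselves are omitted):

```python
def sequential_downsample(points, step):
    """Return the `step` distinct subsets formed by keeping every
    `step`-th point, one subset per starting offset."""
    return [points[offset::step] for offset in range(step)]

path = list(range(100))  # stand-in for one trial's time-ordered data points
subsets = [s for step in (5, 10, 20, 40)
           for s in sequential_downsample(path, step)]
print(len(subsets))  # -> 75, hence 75 predictions per point
```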
Figure 5: Power spectral density and cumulative power spectral density for one laxity evaluation
trial with VV torque and IE torque applied by investigator 1. The cumulative power spectral
density chart shows cumulative power from 0.9 to 1 and frequency from 0 to 5 Hz to help
differentiate between variables. Two other lines are included, one horizontal line at 99%
cumulative power and one vertical line at a frequency of 2.5 Hz, to show that over 99% of the
power in all signals is contained in frequencies below 2.5 Hz (every 40th point from a 100 Hz
signal).
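The 99%-power check in this figure can be reproduced for any sampled signal with a plain FFT; the toy signal below (not the laxity data) is band-limited by construction:

```python
import numpy as np

fs = 100.0                     # original sampling rate, Hz
t = np.arange(0, 60, 1 / fs)   # 60 s record
signal = np.sin(2 * np.pi * 0.5 * t) + 0.2 * np.sin(2 * np.pi * 1.5 * t)

# One-sided power spectrum and cumulative power fraction
power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
cumulative = np.cumsum(power) / power.sum()

# Lowest frequency bin containing 99% of the total power
f99 = freqs[np.searchsorted(cumulative, 0.99)]
print(f99 <= 2.5)  # -> True for this toy signal
```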
Figure 6: Investigator 1’s UE calculated using the SDS method, represented by isosurface
renderings at constant load magnitudes, scaled by their respective maximums (16 Nm of VV
torque and 6 Nm of IE torque). The blue surface is at 100% magnitude and the red surface is at
50%. A: Data between 30° and 50° flexion are omitted for visualization. B: Data with an external
load are omitted for visualization. C: Data with a valgus load are omitted for visualization.
Figure 7: Investigator 1’s SDS-UE trained with only uniaxial loading trials shown in red, and
trained with only multiaxial loading trials shown in black. Top row has VV kinematics vs flexion
angle at -100% (left), 0% (center), and 100% (right) applied IE torque (-6 Nm, 0 Nm, 6 Nm).
The lines in the top row correspond to applied VV torques at -100% (bottom line, -16 Nm), -
50%, 0% (center line, 0 Nm), 50%, and 100% (top line, 16 Nm). Bottom row shows IE
kinematics vs flexion angle at -100% (left), 0% (center), and 100% (right) applied VV torque (-
16 Nm, 0 Nm, 16 Nm). The lines in the bottom row correspond to applied IE torques at -100%
(bottom line, -6 Nm), -50%, 0% (center line, 0 Nm), 50%, and 100% (top line, 6 Nm).
Figure 8: Range of all investigators’ UEs calculated with the previous method shown in red and
the SDS method in blue; areas of overlap are colored purple. Top row has VV kinematics vs
flexion angle at -100% (left), 0% (center), and 100% (right) applied IE torque (-6 Nm, 0 Nm, 6
Nm). The lines in the top row correspond to applied VV torques at -100% (bottom line, -16 Nm),
-50%, 0% (center line, 0 Nm), 50%, and 100% (top line, 16 Nm). Bottom row shows IE
kinematics vs flexion angle at -100% (left), 0% (center), and 100% (right) applied VV torque (-
16 Nm, 0 Nm, 16 Nm). The lines in the bottom row correspond to applied IE torques at -100%
(bottom line, -6 Nm), -50%, 0% (center line, 0 Nm), 50%, and 100% (top line, 6 Nm).
Figure 9: Average of all investigators’ SDS-UEs at 100% load magnitude, colored by overall
standard deviation quartile of all variables and investigators (i.e., the percentile of VV standard
deviation, IE standard deviation, and AP standard deviation for I1, I2, and I3 was found at each
grid point of the SDS-UE; the overall standard deviation quartile was found by averaging these
nine standard deviation percentiles).
Figure 10: Investigator 1’s UE at 100% load magnitude colored by how much it differs in VV°
from Investigator 2’s UE at 100% load magnitude. A: Previous method for the UE was used for
the plot and in the difference calculation for color. B: SDS method for the UE was used for the
plot, in the difference calculation, and to determine significance. Only significant differences are
colored (one-tailed t-test, p<0.2). C: Same as plot B with p<0.05 rather than p<0.2.
Figure 11: Investigator 1’s UE at 100% load magnitude colored by how much it differs in IE°
from Investigator 2’s UE at 100% load magnitude. A: Previous method for the UE was used for
the plot and in the difference calculation for color. B: SDS method for the UE was used for the
plot, in the difference calculation, and to determine significance. Only significant differences are
colored (one-tailed t-test, p<0.2). C: Same as plot B with p<0.05 rather than p<0.2.
Chapter 4: Conclusion
The primary objective of this thesis was to incorporate a measure of uncertainty into the
UE. This measure could ideally be applied to any comparison between UEs, and allow for
comments on the significance of observed differences, rather than only reporting on the
magnitude of observed differences. This measure needed to be created in a way that did not
sacrifice any of the previously reported benefits of using RBFs to describe the UE, and would
ideally show improvements in objective measures. Therefore, the secondary objectives were to
reduce the difference seen between investigators and to determine the optimal protocol for laxity
evaluation.
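The significance comparisons that this objective enables reduce, at each grid point, to a two-sample t-test built from the two envelopes' means and standard deviations. A sketch with SciPy (the numbers and the use of Welch's test here are illustrative, not the thesis' exact configuration):

```python
from scipy.stats import ttest_ind_from_stats

# Hypothetical summaries at one grid point: mean VV prediction (deg),
# standard deviation, and the 75 downsampled predictions behind each mean
n = 75
stat, p = ttest_ind_from_stats(mean1=1.30, std1=0.25, nobs1=n,
                               mean2=1.18, std2=0.33, nobs2=n,
                               equal_var=False)  # Welch's t-test
print(p < 0.05)  # -> True: the difference is significant at this point
```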
The introduction of the sequential downsampling method in Chapter 3 provided the means
for all objectives to be met. The variance estimate was created in a two-step process that utilized
sequential downsampling in both steps. First, there was the specific variance, calculated at each
grid point of the UE and for each variable. This was based on how much variation was seen in
the UEs trained with different sets of sequentially downsampled data. Second, there was a
general variance, determined by how well the RBFs could predict known data that was withheld
from training. These two values were combined to provide a standard deviation that was specific
to each knee, investigator, and variable, while also matching what has been observed by previous
studies [38].
Improvements related to the secondary objectives were also found. On average, the
median absolute difference between investigators was reduced by 18.3% in VV°, 15.6% in
IE°, and 16.2% in AP position. This reduces the effect the researcher has on the UE and reduces
the dependence on experience, which lowers the barrier to entry for other labs to start performing
this kind of analysis. This research also verified the importance of collecting laxity evaluations
with both uniaxial and multiaxial loading. UEs calculated from only uniaxial loading trials were
found to have a much larger MAD than those calculated from multiaxial trials. However, using
both types of loading trials was found to be the most consistent between investigators. Future
testing procedures should include a variety of both uniaxial and multiaxial loading trials; an
example protocol could be as follows:
1) Varus-Valgus torque (applied until data has been collected through the full range of
flexion; this is assumed for all trials and takes approximately 60 to 90 seconds)
2) Internal-External torque
3) Anterior-Posterior force
4) Varus-Valgus + Internal torques (loads should be applied smoothly and together, rather
than one after the other; this is assumed for all multiaxial trials)
5) Varus-Valgus + External torques
6) Varus-Valgus torque + Anterior force
7) Varus-Valgus torque + Posterior force
8) Internal-External torque + Anterior force
9) Internal-External torque + Posterior force
The SDS method for calculating the UE has been shown to be an improvement over the
previous method according to a variety of metrics. This method can be applied to future research
involving the laxity of the passive knee joint and is likely general enough to be applied to
research on the constraint of any joint. Applications may also be found in other work involving
RBFs, or the uncertainty associated with their approximations. Future work in this area could
focus on creating total knee replacements that feel more like the natural knee, or a clinical
device that estimates the UE in vivo.
References
[1] Blankevoort, L., Huiskes, R., and de Lange, A., 1988, “The Envelope of Passive Knee
Joint Motion,” J. Biomech., 21(9), pp. 705–720.
[2] Hirschmann, M. T., and Müller, W., 2015, “Complex Function of the Knee Joint: The
Current Understanding of the Knee,” Knee Surgery, Sport. Traumatol. Arthrosc., 23(10),
pp. 2780–2788.
[3] Mouton, C., Theisen, D., Meyer, T., Agostinis, H., Nührenbörger, C., Pape, D., and Seil,
R., 2015, “Combined Anterior and Rotational Knee Laxity Measurements Improve the
Diagnosis of Anterior Cruciate Ligament Injuries,” Knee Surgery, Sport. Traumatol.
Arthrosc., 23(10), pp. 2859–2867.
[4] Noyes, F. R., Cummings, J. F., Grood, E. S., Walz-Hasselfeld, K. A., and Wroble, R. R.,
1991, “The Diagnosis of Knee Motion Limits, Subluxations, and Ligament Injury.,” Am.
J. Sports Med., 19(2), pp. 163–71.
[5] Bellemans, J., D’Hooghe, P., Vandenneucker, H., Damme, G. Van, and Victor, J., 2006,
“Soft Tissue Balance in Total Knee Arthroplasty: Does Stress Relaxation Occur
Perioperatively?,” Clin. Orthop. Relat. Res., 452(452), pp. 49–52.
[6] Pugh, L., Mascarenhas, R., Arneja, S., Chin, P. Y. K., and Leith, J. M., 2009, “Current
Concepts in Instrumented Knee-Laxity Testing,” Am. J. Sports Med., 37(1), pp. 199–210.
[7] Torg, J. S., Conrad, W., and Kalen, V., 1976, “Clinical Diagnosis of Anterior Cruciate
Ligament Instability in the Athlete,” Am. J. Sports Med., 4(2), pp. 84–93.
[8] Paessler, H. H., and Michel, D., 1992, “How New Is the Lachman Test?,” Am. J. Sport.
Med., 20(1), pp. 95–98.
[9] Benjaminse, A., Gokeler, A., and van der Schans, C. P., 2006, “Clinical Diagnosis of an
Anterior Cruciate Ligament Rupture: A Meta-Analysis,” J. Orthop. Sport. Phys. Ther.,
36(5), pp. 267–288.
[10] Daniel, D. M., 1991, “Assessing the Limits of Knee Motion,” Am. J. Sports Med., 19(2),
pp. 139–147.
[11] Senioris, A., Rousseau, T., L’Hermette, M., Gouzy, S., Duparc, F., and Dujardin, F., 2017,
“Validity of Rotational Laxity Coupled with Anterior Translation of the Knee: A
Cadaveric Study Comparing Radiostereometry and the Rotab®,” Knee, 24(2), pp. 289–
294.
[12] Hallén, L. G., and Lindahl, O., 1966, “The Screw-Home Movement in the Knee-Joint,”
Acta Orthop., 37(1), pp. 97–106.
[13] Markolf, K. L., Mensch, J. S., and Amstutz, H. C., 1976, “Stiffness and Laxity of the
Knee--the Contributions of the Supporting Structures. A Quantitative in Vitro Study.,” J.
Bone Joint Surg. Am., 58(5), pp. 583–94.
[14] Hsieh, H. H., and Walker, P. S., 1976, “Stabilizing Mechanisms of the Loaded and
Unloaded Knee Joint,” J. Bone Jt. Surg. - Ser. A, 58(1), pp. 87–93.
[15] Chao, E. Y., Laughman, R. K., Schneider, E., and Stauffer, R. N., 1983, “Normative Data
of Knee Joint Motion and Ground Reaction Forces in Adult Level Walking,” J. Biomech.,
16(3), pp. 219–233.
[16] Suntay, W. J., Grood, E. S., Hefzy, M. S., Butler, D. L., and Noyes, F. R., 1983, “Error
Analysis of a System for Measuring Three-Dimensional Joint Motion,” J. Biomech. Eng.,
105(2), p. 127.
[17] Grood, E. S., and Suntay, W. J., 1983, “A Joint Coordinate System for the Clinical
Description of Three-Dimensional Motions: Application to the Knee.,” J. Biomech. Eng.,
105(2), pp. 136–44.
[18] Blankevoort, L., Huiskes, R., and de Lange, A., 1991, “Recruitment of Knee Joint
Ligaments.,” J. Biomech. Eng., 113(1), pp. 94–103.
[19] Küpper, J. C., Loitz-Ramage, B., Corr, D. T., Hart, D. A., and Ronsky, J. L., 2007,
“Measuring Knee Joint Laxity: A Review of Applicable Models and the Need for New
Approaches to Minimize Variability,” Clin. Biomech., 22(1), pp. 1–13.
[20] Pedersen, D., Vanheule, V., Wirix-Speetjens, R., Taylan, O., Delport, H. P., Scheys, L.,
and Andersen, M. S., 2018, “A Novel Non-Invasive Method for Measuring Knee Joint
Laxity in 6-DOF: In Vitro Proof-of-Concept and Validation.”
[21] Mane, A., 2007, “Combined Experimental and Statistical Model to Understand the Role of
Anatomical and Implant Alignment Variables in Guiding Knee Joint Motion Combined
Experimental and Statistical Model to Understand the Role of Anatomical and Implant
Alignment Variables,” University of Kansas.
[22] Clary, C. W., 2009, “A Combined Experimental-Computational Method to Generate
Reliable Subject Specific Models of the Knee’s Ligamentous Constraint,” University of
Kansas.
[23] Cyr, A. J., 2014, “Quantification of the Relative Contribution of Individual Soft Tissue
Structures to Total Joint Constraint,” University of Kansas.
[24] Buhmann, M. D., and Levesley, J., 2004, Radial Basis Functions: Theory and
Implementations.
[25] Ahmed, S. G., and Mekey, M. L., 2010, “A Collocation and Cartesian Grid Methods
Using New Radial Basis Function to Solve Class of Partial Differential Equations,” Int. J.
Comput. Math., 87(6), pp. 1349–1362.
[26] Cyr, A. J., and Maletsky, L. P., 2015, “Technical Note: A Multi-Dimensional Description
of Knee Laxity Using Radial Basis Functions,” Comput. Methods Biomech. Biomed.
Engin., 18(15), pp. 1674–1679.
[27] Leonard, J. A., Kramer, M. A., and Ungar, L. H., 1992, “Using Radial Basis Functions to
Approximate a Function and Its Error Bounds,” IEEE Trans. Neural Networks, 3(4), pp.
624–627.
[28] Maletsky, L. P., Sun, J., and Morton, N. A., 2007, “Accuracy of an Optical Active-Marker
System to Track the Relative Motion of Rigid Bodies,” J. Biomech., 40(3), pp. 682–685.
[29] Lenz, N. M., 2008, “The Effects of Femoral Fixed Body Coordinate System Definition on
Knee Kinematic Description,” J. Biomech. Eng., 130(2), p. 021014.
[30] Torzilli, P. A., Deng, X., and Warren, R. F., 1994, “The Effect of Joint-
Compressive Load and Quadriceps Muscle Force on Knee Motion in the Intact and
Anterior Cruciate Ligament-Sectioned Knee,” Am. J. Sports Med., 22(1), pp. 105–112.
[31] Lewkonia, R. M., 1993, “The Biology and Clinical Consequences of Articular
Hypermobility.,” J. Rheumatol., 20(2), pp. 220–2.
[32] Blankevoort, L., and Huiskes, R., 1996, “Validation of a Three-Dimensional Model of the
Knee,” J. Biomech., 29(7), pp. 955–961.
[33] Mommersteeg, T. J. A., Huiskes, R., Blankevoort, L., Kooloos, J. G. M., Kauer, J. M. G.,
and Maathuis, P. G. M., 1996, “A Global Verification Study of a Quasi-Static Knee Model
with Multi-Bundle Ligaments,” J. Biomech., 29(12), pp. 1659–1664.
[34] Malanga, G. A., Andrus, S., Nadler, S. F., and McLean, J., 2003, “Physical Examination
of the Knee: A Review of the Original Test Description and Scientific Validity of
Common Orthopedic Tests,” Arch. Phys. Med. Rehabil., 84(4), pp. 592–603.
[35] Cyr, A. J., and Maletsky, L. P., 2014, “Unified Quantification of Variation in Passive
Knee Joint Constraint,” Proc. Inst. Mech. Eng. Part H J. Eng. Med., 228(5), pp. 494–500.
[36] Kwan, M. K., Lin, T. H.-C., and Woo, S. L.-Y., 1993, “On the Viscoelastic Properties of
the Anteromedial Bundle of the Anterior Cruciate Ligament,” J. Biomech., 26(4–5), pp.
447–452.
[37] Cyr, A. J., Shalhoub, S. S., Fitzwater, F. G., Ferris, L. A., and Maletsky, L. P., 2015,
“Mapping of Contributions From Collateral Ligaments to Overall Knee Joint Constraint:
An Experimental Cadaveric Study,” J. Biomech. Eng., 137(6), p. 061006.
[38] Shultz, S. J., Shimokochi, Y., Nguyen, A. D., Schmitz, R. J., Beynnon, B. D., and Perrin,
D. H., 2007, “Measurement of Varus-Valgus and Internal-External Rotational Knee
Laxities in Vivo - Part I: Assessment of Measurement Reliability and Bilateral
Asymmetry,” J. Orthop. Res., 25(8), pp. 981–988.