This document compares dimension reduction techniques for survival analysis: principal component analysis (PCA), partial least squares (PLS), and random matrix approaches. Simulated data with 100 observations and 1000 covariates were generated to test each method's ability to minimize bias and mean squared error in estimating survival functions. PCA and PLS captured 50% of the variance with 37 components. The estimated survival functions were compared to the true function over 5000 iterations. PLS had the lowest bias and mean squared error, followed by PCA, with the random matrix approaches performing worse.
1. Survival Analysis
Dimension Reduction Techniques
Claressa Ullmayer and Iván Rodríguez
The University of Alaska, Fairbanks
The University of Arizona
30 July 2015
2. Background
Given a dataset, we want to estimate the true survival function.
Complications:
Data Dimensionality
Data Censoring
Unknown True Survival Curve
We want to minimize bias and mean-squared error (MSE)
3. Applications
Our running example: microarray gene expression datasets
with n patients and p genes such that n ≪ p
However, there are many other areas of application:
Engineering
Business
Public Health
Security
Biostatistics
4. The Survival Function
A survival function S(t) describes the probability that a subject
experiences the event of interest after a particular time:

S(t) := P(T > t) = ∫ₜ^∞ f(τ) dτ = 1 − F(t),

where t is the specific time, T is a random variable, f(τ) is the
PDF of T, and F(t) is the CDF of T
In our running example:
event of interest = death
S(t) = probability that a cancer patient survives (death
not observed) beyond a particular time
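For concreteness (an illustration added here, not from the original slides): when T is exponential with rate λ, the survival function has the closed form S(t) = e^(−λt). A minimal R sketch, with a made-up rate λ = 0.5:

# Survival function of an Exponential(lambda) lifetime: S(t) = P(T > t) = 1 - F(t)
lambda <- 0.5                         # hypothetical hazard rate
t      <- seq(0, 10, by = 0.1)
S      <- exp(-lambda * t)            # closed form for the exponential case
S_alt  <- 1 - pexp(t, rate = lambda)  # same curve via the CDF F(t)
stopifnot(isTRUE(all.equal(S, S_alt)))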
5. An Example of Survival Curves
[Figure: four survival arms demonstrating the efficacy of different
drug choices for a particular cancer]
6. The Accelerated Failure Time (AFT) Model
Two classic models are used to estimate the survival function:
Cox Proportional Hazards (CPH)
Accelerated Failure Time (AFT)
Chief differences:
Ease of interpretation: survivorship versus hazard
AFT directly models survival times
AFT assumes covariates produce a constant
acceleration or deceleration of the 'disease' life course
CPH makes no assumption about the baseline hazard function
7. The Accelerated Failure Time (AFT) Model Cont.
Underlying formula:
ln(T_i) = μ + z_i β + e_i,  i = 1, . . . , n total observations

T_i is the ith observation's survival time
the parameter μ is the theoretical mean
the vector z_i denotes the data covariates
the vector β holds the covariate or 'regression' coefficients
e_i designates the random error for the ith observation
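A minimal R sketch of this model (an illustration with made-up dimensions and parameters, not the authors' code):

# Simulate log survival times from the AFT model ln(T_i) = mu + z_i beta + e_i,
# assuming a standard-normal error term
set.seed(1)
n <- 100; p <- 5                    # hypothetical dimensions
mu <- 2                             # theoretical mean
z  <- matrix(rnorm(n * p), n, p)    # covariates
beta <- runif(p, -0.5, 0.5)         # regression coefficients
e  <- rnorm(n)                      # random errors
logT <- mu + z %*% beta + e         # linear predictor on the log scale
T_surv <- exp(logT)                 # survival times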
8. Dimension Reduction Techniques
Three dimension reduction techniques are compared given
predictors in X and responses in Y:
Principal Component Analysis (PCA)
Partial Least Squares (PLS)
Johnson-Lindenstrauss inspired Random Matrices (RM)
9. Principal Component Analysis (PCA)
PCA obtains orthogonal variance-maximized components in X
PCA is used when
X is highly collinear
covariates outnumber observations
Model: T = XW
X (n × p) is related to the 'loadings' W (p × p) and the 'scores' T (n × p)
The columns of W are the eigenvectors of XᵀX
The desired 'principal' components are retained
These have maximal variability in their respective directions
Note: the response variable Y is disregarded
Thus, PCA is known as an 'unsupervised' method
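A minimal R sketch of this step using base R's prcomp (one standard implementation; the console output later in the deck suggests the authors used a different package, so treat this as illustrative):

# PCA on a hypothetical X: the rotation matrix holds the loadings W
# (eigenvectors of the covariance of X) and pc$x holds the scores T = XW
set.seed(1)
X  <- matrix(rnorm(100 * 1000), 100, 1000)
pc <- prcomp(X, center = TRUE)
W  <- pc$rotation                          # loadings
T_scores <- pc$x                           # scores
var_expl <- pc$sdev^2 / sum(pc$sdev^2)     # proportion of variance per component
k <- which(cumsum(var_expl) >= 0.5)[1]     # smallest k capturing 50% of variance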
10. Partial Least Squares (PLS)
PLS analyzes linear combinations of X and Y
PLS is used when
X is highly collinear
covariates vastly outnumber observations
Y is multidimensional
Model: X = TP + E and Y = UQ + F
X is related to the 'scores' T, 'loadings' P, and error E
Y is related to the 'scores' U, 'loadings' Q, and error F
PLS is iterative:
the covariance between T and U is maximized
the resulting 'latent vectors' are retained and subtracted from X and Y
the process repeats until X is a null matrix
Note: PLS performs singular value decompositions of XᵀY
Hence, PLS is known as a 'supervised' method
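A minimal R sketch using the pls package (one common implementation, not necessarily the one the authors used):

# Supervised reduction: PLS scores maximize covariance with the response y
library(pls)
set.seed(1)
X <- matrix(rnorm(100 * 1000), 100, 1000)
y <- rnorm(100)                   # placeholder response, e.g. log survival time
fit <- plsr(y ~ X, ncomp = 37)    # NIPALS-type PLS with 37 components
T_pls <- scores(fit)              # latent scores, analogous to PCA's T
W_pls <- loading.weights(fit)     # weights on the original covariates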
11. Johnson-Lindenstrauss Lemma
Random Matrices inspired by the Johnson-Lindenstrauss
Lemma are the third dimension reduction technique

The Johnson-Lindenstrauss Lemma: For any ε ∈ (0, 1) and any
positive integer n, let k be a positive integer with

k ≥ 4 ln(n) / (ε²/2 − ε³/3).

Then, for any set S of n points in Rᵈ, there exists a mapping
f : Rᵈ → Rᵏ such that, for all points u, v ∈ S,

(1 − ε) ‖u − v‖² ≤ ‖f(u) − f(v)‖² ≤ (1 + ε) ‖u − v‖².
12. Generating Random Matrices
Three Random Matrices were generated according to the
papers of Achlioptas and Dasgupta-Gupta
Properties of the Achlioptas matrices:

R_ij = (1/√k) × { +1 with probability 1/2; −1 with probability 1/2 }

R_ij = √(3/k) × { +1 with probability 1/6; 0 with probability 2/3; −1 with probability 1/6 }

Properties of the Dasgupta-Gupta matrix:
entries drawn from the N(0, 1) distribution, with rows normalized to unit length
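Minimal R sketches of the three generators (with the scalings as reconstructed above; a k × p matrix R projects the data via X %*% t(R)):

achlioptas_dense <- function(k, p)    # +/-1 entries, scaled by 1/sqrt(k)
  matrix(sample(c(-1, 1), k * p, replace = TRUE), k, p) / sqrt(k)

achlioptas_sparse <- function(k, p)   # sparse {+1, 0, -1} entries
  matrix(sample(c(-1, 0, 1), k * p, replace = TRUE,
                prob = c(1/6, 2/3, 1/6)), k, p) * sqrt(3 / k)

dasgupta_gupta <- function(k, p) {    # N(0,1) entries, rows scaled to unit length
  R <- matrix(rnorm(k * p), k, p)
  R / sqrt(rowSums(R^2))
}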
13. Johnson-Lindenstrauss Success Simulations
The accuracy of the Johnson-Lindenstrauss Lemma was tested
with the three matrices over varying values of ε and k
The Johnson-Lindenstrauss Lemma passes 100% of the time
under the constraints on k and ε
To reduce X to 100 × 37, ε ≈ 0.65 is needed to satisfy the
Johnson-Lindenstrauss Lemma
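A minimal R sketch of such a check (illustrative only, using the hypothetical generators sketched above):

# Empirically verify the JL bounds for one projection with eps = 0.65, k = 37
set.seed(1)
n <- 100; p <- 1000; k <- 37; eps <- 0.65
X <- matrix(rnorm(n * p), n, p)
Y <- X %*% t(achlioptas_dense(k, p))
ratio <- as.numeric(dist(Y)^2 / dist(X)^2)  # squared-distance distortion per pair
mean(ratio >= 1 - eps & ratio <= 1 + eps)   # fraction of pairs within the bounds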
14. Simulating Data
Data were simulated to test which method best minimizes bias and MSE:

dimensions of 100 × 1000 (observations × covariates)
β (1 × 1000): random regression coefficients from U(−1.0 × 10⁻⁷, 1.0 × 10⁻⁷)
z: covariates for each of the 100 observations, drawn from N(0, 1)
z was exponentiated so that all values are log-normally distributed
T_i, the survival times, are exponentially distributed with rate λ_i = e^(−z_i β)
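A minimal R sketch of this simulation design (the uniform bounds for β are as reconstructed above, since the slide's exponents were garbled in extraction):

set.seed(1)
n <- 100; p <- 1000
beta <- runif(p, -1e-7, 1e-7)         # small fixed regression coefficients
z <- exp(matrix(rnorm(n * p), n, p))  # exponentiated N(0,1): log-normal covariates
lambda <- exp(-(z %*% beta))          # subject-specific rates lambda_i = exp(-z_i beta)
T_surv <- rexp(n, rate = lambda)      # exponential survival times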
15. Applying PCA
Now that we had our data simulated, our next goal was to apply
our dimension reduction techniques.
First, we implemented PCA and obtained 99 components,
each associated with an eigenvalue of the variance-covariance
matrix
The components are linear combinations of the original
covariates (genes)
Below are the first ten components:
> z_star_PCA$eig
           eigenvalue  % of variance  cumulative %
comp 1        16.8252         1.6825        1.6825
comp 2        16.4938         1.6494        3.3319
comp 3        16.3426         1.6343        4.9662
comp 4        16.1520         1.6152        6.5814
comp 5        15.6148         1.5615        8.1428
comp 6        15.4337         1.5434        9.6862
comp 7        15.1537         1.5154       11.2016
comp 8        15.0194         1.5019       12.7035
comp 9        14.9575         1.4958       14.1993
comp 10       14.8715         1.4872       15.6864
We decided to retain enough components to capture 50% of the
overall variance
Hence, we chose to reduce the data from 99 to 37 components
16. Applying PLS and AFT
Next, we implemented PLS using the same number of
components that we chose for PCA
From both PCA and PLS, we obtained the weights on all
the genes for each component
We then multiplied our original 100 × 1000 matrix by the
resulting 1000 × 37 matrix of weights to get a 100 × 37
reduced matrix for both PLS and PCA
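In R, this projection step is a single matrix product (a sketch, assuming Z is the 100 × 1000 data matrix and W the 1000 × 37 weight matrix from PCA or PLS):

Z_reduced <- Z %*% W    # 100 x 37 reduced design matrix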
17. Estimating the Survival Function
We took these new matrices and fed them into the AFT
model to get our estimated regression coefficients
We can then estimate the survival function, defined as

Ŝ₀(t) = exp(−t e^(−z̄* β̂*)),

where z̄* is the column-centered original matrix of observations
and covariates,
β̂* is a matrix produced by multiplying our original simulated
regression coefficients by our matrix of obtained weights,
and the product −z̄* β̂* reduces to a scalar
We know the true survival function is S₀(t) = e^(−λ̄t)
We repeated this procedure for 5000 iterations
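A minimal R sketch of this comparison (assuming z_bar_star, beta_hat_star, and lambda_bar are available from the steps above; the variable names are ours, not the authors'):

t_grid <- seq(0, 10, length.out = 200)
s_scalar <- exp(-sum(z_bar_star * beta_hat_star))  # e^(-z_bar* beta_hat*), a scalar
S_hat  <- exp(-t_grid * s_scalar)                  # estimated survival curve
S_true <- exp(-lambda_bar * t_grid)                # true survival curve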
18. Results
Since we have the estimated and the real survival function,
we can estimate the bias and MSE for PCA, PLS, and the
three random matrices
To compare the performance of the dimension reduction
techniques, we first partitioned the y-axis of the survival
curve into equally spaced levels u_i for i = 1, . . . , 20
Then, we found the corresponding t_i on the x-axis of the
survival curve
For each of the 20 t_i, we summed the bias and MSE at
each point to get the distribution of the errors after 5000
iterations
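A minimal R sketch of this error summary (assuming S_hat_mat is a 5000 × 20 matrix of estimated survival values at the 20 time points and lambda_bar is as above; names are ours):

u <- (1:20) / 21                       # equally spaced survival levels on the y-axis
t_pts <- -log(u) / lambda_bar          # corresponding times, from S_0(t) = exp(-lambda*t)
S_at  <- exp(-lambda_bar * t_pts)      # true survival at those times (equals u)
bias  <- colMeans(S_hat_mat) - S_at    # average error at each time point
mse   <- colMeans(sweep(S_hat_mat, 2, S_at)^2)  # mean squared error at each point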
19. Bias Plot, PCA and PLS
20. Mean-Squared Error Plot, PCA and PLS
21. Bias Plot, Random Matrices
22. Mean-Squared Error Plot, Random Matrices
23. Bias Plot, All Methods
24. Mean-Squared Error Plot, All Methods
25. Discussion
Censoring occurs when the event of interest for a given subject
is not observed for some extraneous reason
Censoring is naturally a problem in real-life investigations
and studies; unfortunately, we did not have time to
incorporate the effect of censoring in our data simulations
Furthermore, a complication arose in generating the
fixed β coefficients: R necessitated generating grossly
smaller βs due to the exponent in the survival curve
estimate Ŝ₀(t) = exp(−t e^(−z̄* β̂*))
An initial goal was to apply our findings to real microarray
gene datasets; due to time constraints, this objective was
not fulfilled
26. References
Cox, D.R. Regression models and life tables (with discussion).
Journal of the Royal Statistical Society, Series B 34: 187-220, 1972.
Johnson, W.B. and J. Lindenstrauss. Extensions of Lipschitz
mappings into a Hilbert space. Contemporary Mathematics 26: 189-206, 1984.
Pearson, K. On lines and planes of closest fit to systems of
points in space. Philosophical Magazine 2: 559-572, 1901.
Wold, H. Estimation of principal components and related
models by iterative least squares. In P.R. Krishnaiah (ed.),
Multivariate Analysis: 391-420, 1966.
27. References Cont.
Achlioptas, D. Database-friendly random projections:
Johnson-Lindenstrauss with binary coins. Journal of Computer
and System Sciences 66(4): 671-687, 2003.
Dasgupta, S. and A. Gupta. An elementary proof of a theorem
of Johnson and Lindenstrauss. Random Structures and
Algorithms 22(1): 60-65, 2003.
Nguyen, D.V. Partial least squares dimension reduction for
microarray gene expression data with a censored response.
Mathematical Biosciences 193: 119-137, 2005.
28. References Cont.
Nguyen, D.V. and D.M. Rocke. On partial least squares
dimension reduction for microarray-based classification: A
simulation study. Computational Statistics & Data Analysis 46:
407-425, 2004.
Nguyen, Tuan S. and Javier Rojo. Dimension Reduction of
Microarray Gene Expression Data: The Accelerated Failure
Time Model. Journal of Bioinformatics and Computational
Biology 7(6): 939-954, 2009.
Nguyen, Tuan S. and Javier Rojo. Dimension Reduction of
Microarray Data in the Presence of a Censored Survival
Response: A Simulation Study. Statistical Applications in
Genetics and Molecular Biology 8(1), 2009.
29. Thank You
This research was supported by the National Security Agency
through REU Grant H98230-15-1-0048 to the University of
Nevada, Reno, Javier Rojo, PI.
We greatly thank the NSA for funding our research this
summer.
Thank you all for taking the time to attend our
presentation.