Split population models are also known as mixture models. The data used in this paper are the Stanford Heart Transplant data: survival times of potential heart transplant recipients from their date of acceptance into the Stanford Heart Transplant program [3]. This set consists of the survival times, in days, both uncensored and censored, for 103 patients, with three covariates considered: age of the patient in years, surgery, and transplant; failure for these individuals is death. Covariate methods have been examined quite extensively in the context of parametric survival models, in which the distribution of the survival times depends on the vector of covariates associated with each individual. See [6] for approaches that accommodate censoring and covariates in the ordinary exponential model for survival. Such mixture models with immunes and covariates are currently in use in many areas, such as medicine and criminology; see [4], [5], [7] for examples. In our formulation, the covariates are incorporated into a split log-logistic model by allowing the proportion of ultimate failures and the rate of failure to depend on the covariates and the unknown parameter vectors via a logistic model. Within this setup, we provide simple sufficient conditions for the existence, consistency, and asymptotic normality of a maximum likelihood estimator for the parameters involved. As an application of this theory, the likelihood ratio test for a difference in immune proportions is shown to have an asymptotic chi-square distribution. These results allow immediate practical applications and also provide some insight into the assumptions on the covariates and the censoring mechanism that are likely to be needed in practice. Our models and analysis are described in Section 5.
A Cox model is a statistical technique used to analyze survival data with several explanatory variables. It allows estimation of the hazard or risk of an event like death for an individual based on prognostic factors. A Cox model expresses the hazard as an exponential function of the explanatory variables. Interpreting a Cox model involves examining the regression coefficients - a positive coefficient means a higher hazard/worse prognosis, while a negative coefficient implies a better prognosis. The model from a study of melanoma patients' survival found age and cancer type increased hazard, while male sex decreased it, and interferon treatment did not significantly impact survival.
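The exponential hazard form described above can be sketched in a few lines of Python. The coefficient values below are hypothetical, chosen only to mirror the signs described for the melanoma study, not taken from it:

```python
import math

def cox_relative_hazard(coeffs, covariates):
    """Relative hazard exp(beta . x) for one individual under a Cox model.

    The baseline hazard h0(t) cancels when two individuals are compared,
    so only the exponential term is needed for a hazard ratio."""
    return math.exp(sum(b * x for b, x in zip(coeffs, covariates)))

# Hypothetical coefficients: age in years (+0.03 per year, higher hazard)
# and male sex (-0.2, lower hazard), matching the signs described above.
beta = [0.03, -0.2]

# Hazard ratio: 60-year-old male versus 50-year-old female.
hr = cox_relative_hazard(beta, [60, 1]) / cox_relative_hazard(beta, [50, 0])
print(round(hr, 3))  # exp(0.03*10 - 0.2) = exp(0.1), about 1.105
```

Because the baseline hazard cancels in the ratio, a positive coefficient multiplies the hazard by more than 1 (worse prognosis) and a negative one by less than 1, which is exactly the interpretation rule stated above.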
Use Proportional Hazards Regression Method To Analyze The Survival of Patient... (Waqas Tariq)
The Kaplan-Meier method is used to analyze data based on survival time. This paper uses the Kaplan-Meier procedure and Cox regression with the following objectives: finding the percentage of survival at any time of interest, comparing the survival times of two studied groups, and examining the effect of continuous covariates on the relationship between an event and possible explanatory variables. The variables (Age, Gender, Weight, Drinking, Smoking, District, Employer, Blood Group) are used to study the survival of patients with stomach cancer. The data in this study were taken from Hiwa Hospital in Sulaymaniyah governorate over a period of 48 months, from 1/1/2010 to 31/12/2013. After applying the Cox model and checking its assumptions, we estimated the parameters of the model using the partial likelihood method and then tested the variables using the Wald test; the results show that the variables age and weight influence survival time.
This document outlines topics related to survival analysis, including its objectives and key methods. Survival analysis is used to analyze longitudinal data on events like death or disease onset over time. It accounts for censoring of data. The Kaplan-Meier method estimates survival rates without dividing time into intervals like life tables do. The log-rank test statistically compares survival curves between groups. Cox regression analysis examines the relationship between covariates and survival while allowing hazards to vary over time.
An illustration of the usefulness of the multi-state model survival analysis ... (cheweb1)
This seminar will demonstrate the potential of multi-state survival modeling (MSM) as a tool for decision analytic modelling and compare it to the usual Markov transition modelling approach. After briefly reviewing examples of MSM in the health economics literature, a technology appraisal submitted to NICE evaluating the cost effectiveness of Rituximab for first line treatment of chronic lymphocytic leukaemia will be used for illustration purposes. Finally, areas of future research will be outlined.
A gentle introduction to survival analysis (Angelo Tinazzi)
This document provides an introduction to survival analysis techniques for statistical programmers. It discusses key concepts in survival analysis including censoring, the Kaplan-Meier method for estimating survival probabilities, and assumptions of survival models. Programming aspects like creating time-to-event datasets and using SAS procedures for survival analysis are also covered.
This document discusses survival analysis and Cox regression for cancer clinical trials. It begins with an introduction to Cox regression analysis and how it can be used to analyze the effects of covariates on survival rates in cancer trials. The document then provides examples of Cox regression outputs and how to interpret the results, including checking the proportional hazards assumption. It cautions against some invalid methods of survival analysis that do not properly account for censored or time-dependent data.
This document discusses survival analysis techniques. It defines key survival analysis concepts like survival time, censoring, survival curves, and hazard functions. It describes several approaches to expressing prognosis like case-fatality rates, 5-year survival, median survival time, and relative survival. Life table analysis and the Kaplan-Meier method are introduced as nonparametric techniques for analyzing censored survival data.
The document discusses survival analysis and Cox regression for cancer clinical trials. It begins with an overview of clinical trials for cancer, noting their complexity, long duration, high costs, and ethical concerns. It then covers survival analysis, describing key concepts like survival curves, hazard functions, and the Kaplan-Meier method for estimating survival when there is censoring. The document provides an example of survival data from a cancer study and discusses assumptions and parameters used in survival analysis like median and mean survival times.
Probability Models for Estimating Haplotype Frequencies and Bayesian Survival... (Université de Dschang)
Mr. Kum Cletus Kwa defended a Doctorat/PhD thesis in mathematics on 14 June 2016 at the Université de Dschang. At the end of the discussion, the jury awarded him the distinction "très honorable".
This document provides an overview of survival analysis. It defines key terms like survival, censoring, and hazard functions. It describes the Kaplan-Meier method for estimating survival functions from censored data and comparing survival curves between groups using the log-rank test. Censoring occurs when subjects are lost to follow-up before the event of interest. The Kaplan-Meier method accounts for censoring to calculate the probability of surviving up to different time points.
This document provides an overview of survival analysis. It defines survival analysis as statistical methods for analyzing longitudinal data on the occurrence of events over time. Key features include events that may or may not occur for subjects and the length of time until an event can vary. Censoring, where subjects drop out before an event, is accommodated. The objectives, terms, and reasons for using survival analysis are described. Key concepts like hazard rates, survival functions, and the Kaplan-Meier estimate are also introduced.
This document discusses survival analysis techniques. It begins with an overview of survival, censoring, and the need for survival analysis when not all patients have died or had the event of interest. It then describes the key techniques of life tables/actuarial analysis and the Kaplan-Meier method. Life tables involve constructing a hypothetical cohort and estimating survival at different ages based on mortality rates. The Kaplan-Meier method is commonly used to illustrate survival curves and gives partial credit to censored observations. A modified life table is also presented to analyze survival outcomes in different treatment groups.
Survival analysis is a branch of statistics used to analyze time-to-event data, such as time until death or failure. It estimates the probability that an individual survives past a given time and compares survival times between groups. Objectives include estimating survival probabilities, comparing survival between groups, and assessing how covariates relate to survival time. Survival data can be complete or censored. The Kaplan-Meier estimator is used to estimate survival when there is censoring. The log-rank test compares survival curves between treatment groups, and Cox regression incorporates covariates to predict survival probabilities.
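The Kaplan-Meier estimator mentioned above can be sketched in pure Python: at each observed event time, survival is multiplied by (1 - deaths/at-risk), while censored subjects simply leave the risk set. The data below are invented toy values:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate; events[i] is 1 for an observed
    failure, 0 for right-censoring. Returns (time, S(t)) at each event time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    s, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = c = 0                      # deaths / censorings at time t
        while i < len(data) and data[i][0] == t:
            if data[i][1]:
                d += 1
            else:
                c += 1
            i += 1
        if d:
            s *= 1 - d / at_risk       # survival drops only at event times
            curve.append((t, s))
        at_risk -= d + c               # censored subjects leave the risk set
    return curve

# Toy data: deaths at t=2 and t=5, one subject censored at t=3.
km = kaplan_meier([2, 3, 5], [1, 0, 1])
print(km)  # survival falls to 2/3 at t=2, then to 0 at t=5
```

Note how the subject censored at t=3 contributes no drop in the curve but thins the risk set, which is precisely how the method "accounts for censoring".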
1. Sampling error refers to the sample means not being equal to the population mean and differing from each other.
2. The distribution of sample means follows certain patterns. If drawn from a normal population, the sample mean follows a normal distribution. If from a non-normal population, the distribution approximates normal as sample size increases.
3. Confidence intervals provide a range that is likely to include the true population parameter. Formulas are given for calculating confidence intervals for a population mean, single or two population probabilities, and the difference between two population means or probabilities. Sample size considerations are also discussed.
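Point 2 can be illustrated by simulation: means of samples drawn from a skewed (exponential) population still cluster around the population mean, with spread near sigma/sqrt(n). The seed and parameters below are arbitrary choices for the demonstration:

```python
import random
import statistics

random.seed(42)  # arbitrary seed, for reproducibility only

def sample_means(draw, n, reps):
    """Means of `reps` samples of size n from the population sampled by `draw`."""
    return [statistics.mean(draw() for _ in range(n)) for _ in range(reps)]

# Skewed exponential population with mean 1 and standard deviation 1.
means = sample_means(lambda: random.expovariate(1.0), n=50, reps=2000)

# The average of the sample means approaches the population mean (1),
# and their spread approaches sigma / sqrt(n) = 1 / sqrt(50), about 0.14.
print(round(statistics.mean(means), 2))
print(round(statistics.stdev(means), 2))
```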
Survival Data Analysis for Sekolah Tinggi Ilmu Statistik Jakarta (Setia Pramana)
This document outlines a course on survival data analysis. It provides an overview of the course content which includes introduction to survival data, Kaplan-Meier survival curves, Cox proportional hazards models, parametric survival functions, and competing risks. It also describes the course workload which is 40% theory and 60% practice, including a group project, weekly presentations, and using R for analysis. Reference books and materials for learning R for survival analysis are also provided.
Stability criterion of periodic oscillations in a (1) (Alexander Decker)
This document summarizes a mathematical model of the interaction between HIV, immune cells (CTLs), and antiretroviral drug treatment. The model accounts for time delays between viral infection of cells and viral production. It consists of 4 equations tracking healthy CD4+ T-cells, latently infected cells, productively infected cells, and free virus over time. The paper analyzes the model's equilibrium states and how time delays impact the stability of periodic oscillations. It finds that oscillations are stable if the time delay is within certain bounds, and drug efficacy can lead to oscillation death at a critical delay value. This threshold could help control HIV dynamics through treatment intervention.
This document analyzes breast cancer survival rates using survival analysis. It provides background on breast cancer as a leading cause of death for women, arising from abnormal breast cell growth. Survival analysis is used to determine how treatment accuracy affects survival time. The study analyzed 15 breast cancer patients diagnosed from 1989-1995, finding 60% at stage 2, 40% alive at last contact, and 53% dying of breast cancer. The key challenge in survival analysis is censoring, which occurs when the exact survival time is unknown. There are two types: left censoring, when the actual time is less than the observed time, and right censoring, when the actual time is at least as long as observed. Examples given are the unknown onset of breast cancer and deaths not
- The document describes two Markov models (Muenz-Rubinstein and Azzalini) for analyzing health condition data from the Health and Retirement Survey (HRS)
- It fits both models to HRS data to predict health conditions (dependent variable) based on age, gender, and BMI (independent variables)
- Results show Azzalini's model was more efficient at predicting health conditions compared to Muenz-Rubinstein based on the estimated efficiencies of each model
This document provides an overview of key concepts in statistics, including:
1. Statistics involves the systematic presentation of numerical data to minimize erroneous conclusions when information is incomplete. Induction and deduction are two main methods of assessment, and samples are used to make reasonable conclusions about whole populations.
2. For a sample to be representative, it should be randomly selected, large in size, and stratified if necessary to account for subgroups. Random allocation in experiments helps ensure intervention and control groups are similar.
3. Common statistical terms are defined, including mean, median, mode, range, and standard deviation. Normal distribution and confidence intervals are also explained.
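The terms in point 3 can all be computed with Python's standard library; the data values below are invented for illustration:

```python
import math
import statistics

data = [4, 8, 6, 5, 3, 8]             # invented sample

mean = statistics.mean(data)          # arithmetic average, about 5.67
median = statistics.median(data)      # middle of the sorted values: 5.5
mode = statistics.mode(data)          # most frequent value: 8
spread = max(data) - min(data)        # range: 5
sd = statistics.stdev(data)           # sample standard deviation

# 95% confidence interval for the mean (normal approximation):
half_width = 1.96 * sd / math.sqrt(len(data))
ci = (mean - half_width, mean + half_width)
print(round(mean, 2), median, mode, spread)
```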
Three statistical methods were used to address class imbalance in predicting rare events like CLABSI infections: exact logistic regression, rare events logistic regression, and Firth's penalized likelihood method. Gradient boosting machines using SMOTE oversampling of the rare positive cases achieved the best performance on the validation data with a sensitivity of 72.7%, specificity of 84.8%, and AUC of 85.3%. The optimal model was integrated into the hospital system to generate real-time alerts when a patient's infection risk score exceeded 70%.
This document discusses analyzing data from a study where rats were exposed to different doses of a potential carcinogen. Rats were followed until a tumor developed or 104 days. The study aims to determine if exposed rats develop tumors faster than unexposed rats. It discusses challenges in analyzing time-to-event data like censoring and lack of symmetry. Key methods covered include hazard ratios, log-rank tests, and comparing tumor rates between groups while accounting for varying follow-up times.
This chapter provides a historical introduction to mathematical modeling of epidemics and rumors. It discusses early empirical modeling from the 1700s, the first deterministic SIR model from the early 1900s, the development of homogeneous mixing models in the mid-1900s, and early stochastic models. The chapter outlines different modeling approaches and terminology. It concludes that modeling has progressed from curve-fitting empirical data to developing deterministic and stochastic models in both continuous and discrete time to better understand disease transmission dynamics.
This editorial discusses a study that uses survival modeling to predict the benefits of adjuvant chemoradiotherapy for patients with resected gallbladder cancer. The editorial makes three key points:
1) Computer modeling can help address gaps in clinical cancer research when randomized controlled trial data is limited, by using existing data to generate individualized survival predictions and treatment benefit estimates.
2) The gallbladder cancer study demonstrates how modeling can provide guidance for clinical decisions and research directions when high-quality evidence is lacking.
3) Models must be rigorously developed and validated, but can complement clinical trials by extending findings to new populations and longer time horizons not feasible in trials.
MATHEMATICAL MODELLING OF EPIDEMIOLOGY IN PRESENCE OF VACCINATION AND DELAY (cscpconf)
Mathematical modeling of infectious disease is currently a major research topic in the public health domain. In some cases, infected individuals may not be infectious at the time of infection; to become infectious, they take some time, known as the latent period or delay. Here, two SIR models are considered in which newly entering individuals are vaccinated at a specific rate. The analysis of these models shows that if vaccination is administered to the newly entering individuals, then the system will be asymptotically stable in both cases, i.e. with and without delay.
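A heavily simplified sketch of the vaccination effect can be written as a closed-population SIR model, with a share of the population immunised at the start. This omits the recruitment of new individuals and the latency delay that the paper's models include, so it only illustrates the qualitative point that vaccination blunts or suppresses the outbreak:

```python
def sir_peak(beta, gamma, vacc_frac, days=300, dt=0.05):
    """Peak infectious fraction from Euler steps of a closed SIR model,
    with a share vacc_frac of the population immunised at t=0.
    Simplified sketch: no recruitment, no latency delay."""
    i = 0.001                  # initial infectious fraction
    s = 1.0 - vacc_frac - i    # susceptibles after vaccination
    peak = i
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - recoveries
        peak = max(peak, i)
    return peak

# With R0 = beta/gamma = 3, vaccinating 40% lowers the epidemic peak,
# and vaccinating 70% (above 1 - 1/R0) suppresses the outbreak entirely.
print(sir_peak(0.6, 0.2, 0.0) > sir_peak(0.6, 0.2, 0.4))   # True
print(sir_peak(0.6, 0.2, 0.7) < 0.01)                      # True
```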
MEDICAL IMAGING MULTIFRACTAL ANALYSIS IN PREDICTION OF EFFICIENCY OF CANCER TH... (csandit)
Multifractal analysis of breast tumor tissue prior to chemotherapy can predict chemotherapy response with high accuracy. It was shown to distinguish histological images of different response groups with over 82% accuracy for pathological complete response and progressive/stable disease. The maximum of the multifractal spectrum parameter f(α)max provided the most important predictive value, suggesting it may detect unknown structural clues related to drug resistance. Further investigation of f(α)max could help characterize its predictive potential.
This document compares several dimension reduction techniques for survival analysis when there are many covariates: principal component analysis (PCA), partial least squares (PLS), and three variants of random matrices (RM) based on Johnson-Lindenstrauss embeddings. It simulates 5,000 datasets using the accelerated failure time model and determines the total bias error and mean-squared error between the true and estimated survivor curves for each method. The results indicate that PCA outperforms PLS, the RMs are comparable, and the RMs outdo both PCA and PLS.
This document discusses different methods for analyzing survival data in clinical trials, including Kaplan-Meier survival analysis and restricted mean survival time (RMST) analysis. It reviews literature on survival analysis concepts and applications. The document also notes limitations of Kaplan-Meier analysis when data does not satisfy proportional hazards assumptions or when patients are lost to follow up. RMST is presented as an alternative to estimate mean survival times without these limitations. The document then applies different survival analysis methods to a dataset to compare results.
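The RMST idea, the area under the survival curve up to a cutoff time tau, can be sketched as follows. The step-function input is a toy curve, not the paper's dataset:

```python
def rmst(curve, tau):
    """Restricted mean survival time: area under a step survival curve up to tau.
    `curve` is a list of (event_time, survival_prob) pairs, with S(t) = 1
    before the first event (the shape a Kaplan-Meier estimate produces)."""
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in curve:
        if t >= tau:
            break
        area += prev_s * (t - prev_t)  # survival is flat between events
        prev_t, prev_s = t, s
    area += prev_s * (tau - prev_t)    # last flat segment out to tau
    return area

# Toy curve: survival drops to 0.8 at t=2 and to 0.5 at t=5; RMST over [0, 6]:
value = rmst([(2, 0.8), (5, 0.5)], 6)
print(round(value, 2))  # 1*2 + 0.8*3 + 0.5*1 = 4.9
```

Because it is just an area, the RMST is well defined even when the proportional hazards assumption fails, which is the advantage the document describes.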
This document presents a Bayesian semiparametric framework for analyzing semicompeting risks data where the observation of time to a non-terminal event (e.g. hospital readmission) is subject to a terminal event (e.g. death). The framework models the hazards of the non-terminal and terminal events using a shared frailty illness-death model, accounting for dependence between events. It allows researchers to estimate regression parameters, characterize event dependence, and predict outcomes. The framework is applied to Medicare data on pancreatic cancer patients to investigate risks of readmission and death.
Abstract- Statistical models address issues such as the statistical characterization of numerical data, estimating the probable future behaviour of a system based on past behaviour, extrapolation or interpolation of data based on some best fit, and error estimates of observations or model-generated output. When a statistical model is used to analyse survival data, it is known as a statistical model in survival analysis. There are different kinds of statistical data; censored data is one of them. Censoring means the actual survival time is unknown. Censoring may occur when a person does not experience the event before the study ends, is lost to follow-up during the study period, or withdraws from the study. For this type of censored data, the suitable models are survival models, which are classified as non-parametric, semi-parametric, and parametric; the survival probability can be obtained using these models. Using health data from the cancer registry in Tiruchirappalli, Tamil Nadu, the survival pattern of cancer patients was explored: the non-parametric Kaplan-Meier method was used to estimate the survival probability, and the survival probabilities obtained by the life table and Kaplan-Meier methods were compared for each stage of the disease. The log-rank test was used to compare the estimates obtained at the different stages of the disease.
Clinical Trials Versus Health Outcomes Research: SAS/STAT Versus SAS Enterpri... (cambridgeWD)
Clinical trials and health outcomes research differ in important ways that impact statistical modeling approaches. Clinical trials typically use homogeneous samples and focus on a single endpoint, while health outcomes data is heterogeneous with multiple endpoints. Predictive modeling techniques used in health outcomes research, like those in SAS Enterprise Miner, are better suited than traditional methods as they can handle complex real-world data without strong assumptions and more accurately predict rare events. Validation of models on separate test data is also important for generalizing results.
Probability Models for Estimating Haplotype Frequencies and Bayesian Survival...Université de Dschang
M. Kum Cletus Kwa a soutenu une thèse de Doctorat/Phd en mathématiques ce 14 juin 2016 à l'Université de Dschang. Le jury lui a décerné à l'issue des échanges la mention très honorable.
This document provides an overview of survival analysis. It defines key terms like survival, censoring, and hazard functions. It describes the Kaplan-Meier method for estimating survival functions from censored data and comparing survival curves between groups using the log-rank test. Censoring occurs when subjects are lost to follow-up before the event of interest. The Kaplan-Meier method accounts for censoring to calculate the probability of surviving up to different time points.
This document provides an overview of survival analysis. It defines survival analysis as statistical methods for analyzing longitudinal data on the occurrence of events over time. Key features include events that may or may not occur for subjects and the length of time until an event can vary. Censoring, where subjects drop out before an event, is accommodated. The objectives, terms, and reasons for using survival analysis are described. Key concepts like hazard rates, survival functions, and the Kaplan-Meier estimate are also introduced.
This document discusses survival analysis techniques. It begins with an overview of survival, censoring, and the need for survival analysis when not all patients have died or had the event of interest. It then describes the key techniques of life tables/actuarial analysis and the Kaplan-Meier method. Life tables involve constructing a hypothetical cohort and estimating survival at different ages based on mortality rates. The Kaplan-Meier method is commonly used to illustrate survival curves and gives partial credit to censored observations. A modified life table is also presented to analyze survival outcomes in different treatment groups.
Survival analysis is a branch of statistics used to analyze time-to-event data, such as time until death or failure. It estimates the probability that an individual survives past a given time and compares survival times between groups. Objectives include estimating survival probabilities, comparing survival between groups, and assessing how covariates relate to survival time. Survival data can be complete or censored. The Kaplan-Meier estimator is used to estimate survival when there is censoring. The log-rank test compares survival curves between treatment groups, and Cox regression incorporates covariates to predict survival probabilities.
1. Sampling error refers to the sample means not being equal to the population mean and differing from each other.
2. The distribution of sample means follows certain patterns. If drawn from a normal population, the sample mean follows a normal distribution. If from a non-normal population, the distribution approximates normal as sample size increases.
3. Confidence intervals provide a range that is likely to include the true population parameter. Formulas are given for calculating confidence intervals for a population mean, single or two population probabilities, and the difference between two population means or probabilities. Sample size considerations are also discussed.
Survival Data Analysis for Sekolah Tinggi Ilmu Statistik JakartaSetia Pramana
This document outlines a course on survival data analysis. It provides an overview of the course content which includes introduction to survival data, Kaplan-Meier survival curves, Cox proportional hazards models, parametric survival functions, and competing risks. It also describes the course workload which is 40% theory and 60% practice, including a group project, weekly presentations, and using R for analysis. Reference books and materials for learning R for survival analysis are also provided.
Stability criterion of periodic oscillations in a (1)Alexander Decker
This document summarizes a mathematical model of the interaction between HIV, immune cells (CTLs), and antiretroviral drug treatment. The model accounts for time delays between viral infection of cells and viral production. It consists of 4 equations tracking healthy CD4+ T-cells, latently infected cells, productively infected cells, and free virus over time. The paper analyzes the model's equilibrium states and how time delays impact the stability of periodic oscillations. It finds that oscillations are stable if the time delay is within certain bounds, and drug efficacy can lead to oscillation death at a critical delay value. This threshold could help control HIV dynamics through treatment intervention.
This document analyzes breast cancer survival rates using survival analysis. It provides background on breast cancer being a leading cause of death for women due to abnormal breast cell growth. Survival analysis is used to determine how treatment accuracy affects survival time. The study analyzed 15 breast cancer patients diagnosed from 1989-1995, finding 60% at stage 2, 40% alive at last contact, and 53% dying of breast cancer. The key challenge in survival analysis is censoring, which occurs when the exact survival time is unknown. There are two types - left censoring when the actual time is less than observed, and right censoring when the actual time is at least as long as observed. Examples given are unknown start of breast cancer infection and deaths not
- The document describes two Markov models (Muenz-Rubinstein and Azzalini) for analyzing health condition data from the Health and Retirement Survey (HRS)
- It fits both models to HRS data to predict health conditions (dependent variable) based on age, gender, and BMI (independent variables)
- Results show Azzalini's model was more efficient at predicting health conditions compared to Muenz-Rubinstein based on the estimated efficiencies of each model
This document provides an overview of key concepts in statistics, including:
1. Statistics involves the systematic presentation of numerical data to minimize erroneous conclusions when information is incomplete. Induction and deduction are two main methods of assessment, and samples are used to make reasonable conclusions about whole populations.
2. For a sample to be representative, it should be randomly selected, large in size, and stratified if necessary to account for subgroups. Random allocation in experiments helps ensure intervention and control groups are similar.
3. Common statistical terms are defined, including mean, median, mode, range, and standard deviation. Normal distribution and confidence intervals are also explained.
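The terms in point 3 can be computed directly; a minimal Python sketch, using the standard library and a hypothetical sample of blood-pressure readings (both the data and the normal-approximation confidence interval are illustrative assumptions, not taken from the document):

```python
import math
import statistics

# Hypothetical sample of 11 systolic blood pressure readings (mmHg).
data = [118, 122, 125, 118, 130, 135, 121, 118, 128, 124, 133]

mean = statistics.mean(data)          # arithmetic average
median = statistics.median(data)      # middle value of the sorted sample
mode = statistics.mode(data)          # most frequent value
value_range = max(data) - min(data)   # spread between the extremes
sd = statistics.stdev(data)           # sample standard deviation

# Approximate 95% confidence interval for the mean, assuming roughly
# normal data: mean +/- 1.96 * SD / sqrt(n).
half_width = 1.96 * sd / math.sqrt(len(data))
ci = (mean - half_width, mean + half_width)

print(f"mean={mean:.1f} median={median} mode={mode} "
      f"range={value_range} sd={sd:.2f} ci=({ci[0]:.1f}, {ci[1]:.1f})")
```

The interval here uses the crude z-based formula; with a sample this small a t-based interval would be slightly wider.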
Three statistical methods were used to address class imbalance in predicting rare events like CLABSI infections: exact logistic regression, rare events logistic regression, and Firth's penalized likelihood method. Gradient boosting machines using SMOTE oversampling of the rare positive cases achieved the best performance on the validation data with a sensitivity of 72.7%, specificity of 84.8%, and AUC of 85.3%. The optimal model was integrated into the hospital system to generate real-time alerts when a patient's infection risk score exceeded 70%.
This document discusses analyzing data from a study where rats were exposed to different doses of a potential carcinogen. Rats were followed until a tumor developed or 104 days. The study aims to determine if exposed rats develop tumors faster than unexposed rats. It discusses challenges in analyzing time-to-event data like censoring and lack of symmetry. Key methods covered include hazard ratios, log-rank tests, and comparing tumor rates between groups while accounting for varying follow-up times.
This chapter provides a historical introduction to mathematical modeling of epidemics and rumors. It discusses early empirical modeling from the 1700s, the first deterministic SIR model from the early 1900s, the development of homogeneous mixing models in the mid-1900s, and early stochastic models. The chapter outlines different modeling approaches and terminology. It concludes that modeling has progressed from curve-fitting empirical data to developing deterministic and stochastic models in both continuous and discrete time to better understand disease transmission dynamics.
This editorial discusses a study that uses survival modeling to predict the benefits of adjuvant chemoradiotherapy for patients with resected gallbladder cancer. The editorial makes three key points:
1) Computer modeling can help address gaps in clinical cancer research when randomized controlled trial data is limited, by using existing data to generate individualized survival predictions and treatment benefit estimates.
2) The gallbladder cancer study demonstrates how modeling can provide guidance for clinical decisions and research directions when high-quality evidence is lacking.
3) Models must be rigorously developed and validated, but can complement clinical trials by extending findings to new populations and longer time horizons not feasible in trials.
MATHEMATICAL MODELLING OF EPIDEMIOLOGY IN PRESENCE OF VACCINATION AND DELAY
Mathematical modeling of infectious disease is currently a major research topic in the public health domain. In some cases the infected individuals may not be infectious at the time of infection. To become infectious, infected individuals take some time, known as the latent period or delay. Here two SIR models are considered for the present analysis, in which newly entering individuals are vaccinated at a specific rate. The analysis of these models shows that if vaccination is administered to newly entering individuals, then the system will be asymptotically stable in both cases, i.e. with and without delay.
MEDICAL IMAGING MULTIFRACTAL ANALYSIS IN PREDICTION OF EFFICIENCY OF CANCER TH...
Multifractal analysis of breast tumor tissue prior to chemotherapy can predict chemotherapy response with high accuracy. It was shown to distinguish histological images of different response groups with over 82% accuracy for pathological complete response and progressive/stable disease. The maximum of the multifractal spectrum parameter f(α)max provided the most important predictive value, suggesting it may detect unknown structural clues related to drug resistance. Further investigation of f(α)max could help characterize its predictive potential.
This document compares several dimension reduction techniques for survival analysis when there are many covariates: principal component analysis (PCA), partial least squares (PLS), and three variants of random matrices (RM) based on Johnson-Lindenstrauss embeddings. It simulates 5,000 datasets using the accelerated failure time model and determines the total bias error and mean-squared error between the true and estimated survivor curves for each method. The results indicate that PCA outperforms PLS, the RMs are comparable, and the RMs outdo both PCA and PLS.
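The random-matrix reduction described above rests on Johnson-Lindenstrauss embeddings: a random Gaussian matrix, suitably scaled, approximately preserves pairwise distances when projecting to a lower dimension. A minimal NumPy sketch (the dimensions and the Gaussian construction are illustrative assumptions, not the paper's exact variants):

```python
import numpy as np

gen = np.random.default_rng(0)

# Hypothetical sizes: n subjects, p covariates, target dimension k.
n, p, k = 200, 1000, 50
X = gen.standard_normal((n, p))

# Johnson-Lindenstrauss random projection: a k x p Gaussian matrix
# scaled by 1/sqrt(k) approximately preserves pairwise distances.
R = gen.standard_normal((k, p)) / np.sqrt(k)
X_reduced = X @ R.T                    # n x k reduced design matrix

# Distance-preservation check for one pair of rows: the projected
# distance should be close to the original, up to random distortion.
d_orig = float(np.linalg.norm(X[0] - X[1]))
d_proj = float(np.linalg.norm(X_reduced[0] - X_reduced[1]))
print(X_reduced.shape, round(d_proj / d_orig, 3))
```

The reduced matrix could then be fed to any survival model in place of the original covariates; the distortion shrinks as k grows.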
This document discusses different methods for analyzing survival data in clinical trials, including Kaplan-Meier survival analysis and restricted mean survival time (RMST) analysis. It reviews literature on survival analysis concepts and applications. The document also notes limitations of Kaplan-Meier analysis when data does not satisfy proportional hazards assumptions or when patients are lost to follow up. RMST is presented as an alternative to estimate mean survival times without these limitations. The document then applies different survival analysis methods to a dataset to compare results.
This document presents a Bayesian semiparametric framework for analyzing semicompeting risks data where the observation of time to a non-terminal event (e.g. hospital readmission) is subject to a terminal event (e.g. death). The framework models the hazards of the non-terminal and terminal events using a shared frailty illness-death model, accounting for dependence between events. It allows researchers to estimate regression parameters, characterize event dependence, and predict outcomes. The framework is applied to Medicare data on pancreatic cancer patients to investigate risks of readmission and death.
Abstract: Statistical models address issues such as the statistical characterization of numerical data, estimating the probabilistic future behaviour of a system from its past behaviour, extrapolation or interpolation of data based on some best fit, and error estimates for observations or model-generated output. When a statistical model is used to analyse survival data, it is known as a statistical model in survival analysis. There are different kinds of statistical data; censored data is one of them. Censoring means the actual survival time is unknown: it may occur when a person does not experience the event before the study ends, is lost to follow-up during the study period, or withdraws from the study. For this type of censored data the suitable models are survival models, which are classified as non-parametric, semi-parametric, and parametric; the survival probability can be obtained using any of them. Using health data from the cancer registry in Tiruchirappalli, Tamil Nadu, the survival pattern of cancer patients was explored: the non-parametric Kaplan-Meier method was used to estimate the survival probability, the survival probabilities obtained by the life-table and Kaplan-Meier methods were compared for each stage of the disease, and the log-rank test was used to compare the estimates obtained at the different stages of the disease.
Clinical Trials Versus Health Outcomes Research: SAS/STAT Versus SAS Enterpri...
Clinical trials and health outcomes research differ in important ways that impact statistical modeling approaches. Clinical trials typically use homogeneous samples and focus on a single endpoint, while health outcomes data is heterogeneous with multiple endpoints. Predictive modeling techniques used in health outcomes research, like those in SAS Enterprise Miner, are better suited than traditional methods as they can handle complex real-world data without strong assumptions and more accurately predict rare events. Validation of models on separate test data is also important for generalizing results.
Clinical Trials Versus Health Outcomes Research: SAS/STAT Versus SAS Enterpri...
This document discusses the differences between clinical trials and health outcomes research. Clinical trials use homogeneous samples, surrogate endpoints, and focus on a single outcome. They are also typically underpowered for rare events. Health outcomes research uses heterogeneous data from the general population to examine multiple real endpoints simultaneously. It has larger samples and data that allow analysis of rare occurrences. Predictive modeling is better suited than traditional statistical methods for analyzing heterogeneous health outcomes data due to relaxed assumptions like normality.
International Journal of Mathematics and Statistics Invention (IJMSI) is an international journal intended for professionals and researchers in all fields of mathematics and statistics. IJMSI publishes research articles and reviews within the whole field of Mathematics and Statistics, including new teaching methods, assessment, validation, and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. Articles published in the journal can be accessed online.
This document presents a sequential probit model analysis of infant mortality in Nigeria using 2003 Nigeria Demographic and Health Survey data. The analysis examined factors affecting an infant's survival (stage 1) and age at death (stage 2). For stage 1, results showed infant mortality was positively affected by birth order and breastfeeding duration, and negatively affected by mother's education, number of total children born, and place of delivery. For stage 2, only place of delivery significantly affected infant's age at death. The error terms between stages were found to be significantly correlated.
This document summarizes a study on analyzing survival times of prostate cancer patients in Ilorin, Nigeria. The Kaplan-Meier method was used to estimate median survival times and the survival function for patients who underwent surgery (19 days) or chemotherapy (40 days). A log-rank test found no significant difference in survival between treatment groups. Cox regression estimated a hazards ratio of 0.84, indicating chemotherapy carried a lower risk of death than surgery. The study aimed to analyze and compare survival functions, hazards, and fit survival models to the data.
Estimating the proportion cured of cancer: Some practical advice for users
Cure models can provide improved possibilities for inference if used appropriately, but there is potential for misleading results if care is not taken. In this study, we compared five commonly used approaches for modelling cure in a relative survival framework and provide some practical advice on the use of these approaches.
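The cure (split population) models discussed here share the basic mixture form S(t) = π + (1 − π)·S_u(t), where π is the cured (immune) fraction and S_u the survival of the susceptibles. A minimal sketch with a log-logistic susceptible distribution, as in the split log-logistic formulation; the parameter values are invented for illustration, not estimated from any dataset:

```python
def loglogistic_survival(t, alpha, beta):
    """Survival of susceptibles under a log-logistic distribution:
    S_u(t) = 1 / (1 + (t/alpha)**beta)."""
    return 1.0 / (1.0 + (t / alpha) ** beta)

def split_population_survival(t, pi, alpha, beta):
    """Mixture (cure) survival: a fraction pi never fails (immune),
    the rest follow the log-logistic: S(t) = pi + (1 - pi) * S_u(t)."""
    return pi + (1.0 - pi) * loglogistic_survival(t, alpha, beta)

# Illustrative values: 30% long-term survivors, scale 100 days, shape 2.
for t in (0, 50, 100, 500):
    print(t, round(split_population_survival(t, 0.30, 100.0, 2.0), 3))
```

Note that S(t) tends to π rather than zero as t grows, which is exactly the plateau that distinguishes a cure curve from an ordinary survival curve.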
This document describes a mathematical model that was developed to test the hypothesis of tumor self-seeding. Specifically:
1) The model represents the human circulatory system as a directed network with organs as nodes and arteries/veins as edges.
2) Circulating tumor cells are introduced into the model, with heterogeneity defined by phenotypes from the primary tumor.
3) The model incorporates the biological processes of shedding rate from the primary tumor, CTC heterogeneity, organ-specific filtration fractions, and clearance rates from blood and organs.
4) The goals of the model are to define CTC concentrations and phenotypes over time, calculate the average CTC lifespan in circulation, and better understand metastasis patterns based
This paper proposes a vaccine-dependent mathematical model to study the transmission dynamics of tuberculosis (TB) epidemics at the population level. The model divides the population into susceptible, latently infected unvaccinated, latently infected vaccinated, actively infected, recovered, and vaccinated classes. The paper proves the existence and uniqueness of a solution to the system of equations that defines the model. It also shows that the infection will die out if the basic reproduction number is less than one. The model could be used to estimate new TB infections and help design prevention and intervention strategies.
Module 13
Kaplan-Meier plots: The goal is to estimate a population survival curve from a sample.
i) If every patient is followed until death, the curve may be estimated simply by computing the fraction surviving at each time.
ii) However, in most studies patients tend to drop out, become lost to follow-up, move away, etc.
iii) A Kaplan-Meier analysis allows estimation of survival over time, even when patients drop out or are studied for different lengths of time.
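The estimation idea in (i)-(iii) is a product-limit calculation: at each death time, multiply the running survival estimate by the fraction of those still at risk who survive, while censored patients simply leave the risk set. A minimal Python version (the toy follow-up times are hypothetical):

```python
from itertools import groupby

def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) estimate of the survival curve.
    times:  follow-up time for each patient
    events: 1 = death observed, 0 = censored (dropout, lost to follow-up)
    Returns a list of (time, estimated survival) at each death time."""
    at_risk = len(times)
    survival = 1.0
    curve = []
    for t, group in groupby(sorted(zip(times, events)), key=lambda p: p[0]):
        group = list(group)
        deaths = sum(e for _, e in group)
        if deaths:
            survival *= (at_risk - deaths) / at_risk
            curve.append((t, survival))
        at_risk -= len(group)   # deaths and censorings both leave the risk set
    return curve

# Hypothetical follow-up times in days; days 7 and 15 are censored.
times  = [6, 7, 10, 15, 19, 25]
events = [1, 0, 1,  0,  1,  1]
curve = kaplan_meier(times, events)
for t, s in curve:
    print(t, round(s, 4))
```

Censoring only changes the denominator at later event times, which is precisely why the method remains valid when patients are followed for different lengths of time.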
H0 (null hypothesis): Survival time is not related to the patient treatment group.
H1 (alternative hypothesis): Survival time is related to the patient treatment group.
[Baseline survivor function (at predictor means): table layout not recoverable; the reported values were 0.0000, 0.8465, 1.0000, 0.3114.]
Interpretation of the curve:
• The vertical axis represents the estimated probability of survival for a hypothetical cohort, not the actual percentage surviving.
• The precision of the estimates depends on the number of observations; estimates at the left-hand side of the curve are therefore more precise than those at the right-hand side (where numbers are small because of deaths and dropouts).
• Curves may give the impression that a given event occurs more frequently early than late, because of the high survival rate and the large number of people at the beginning.
Yes, Kaplan-Meier survival curves can serve as a prognostic indicator of UCEC, because curves can be generated by selecting the cancer type, survival type, and protein(s) of interest. The technique may also be used for breast, colon, and other cancers, since it is a statistical method that gives proper estimates of survival for patients who have undergone cancer or other treatment.
References
Hazra, A., & Gogtay, N. (2016). Biostatistics series module 6: Correlation and linear regression. Indian Journal of Dermatology, 61(6), 593-601.
1- Why was the Tomasetti et al article so misinterpreted by th...
1- Why was the Tomasetti et al article so misinterpreted by the media? Draw from your own personal experiences to discuss other scientific phenomena that have been similarly misinterpreted.
2- What might be the consequences of scientific misreporting of this article?
3- What is the overall value of the Tomasetti et al paper? Cite two reasons and be specific in your rationale for selecting them.
4- Cite three criticisms of this paper. Be specific in the rationale for your criticism.
5- Design an experiment which might distinguish between environmental effects and replicative errors in cancer etiology. You may assume you have access to all resources currently available to scientists today. Be specific in elucidating the application of the scientific method to your experiment:
1) Observation
2) Hypothesis
3) Predictions
4) Experiment: independent and dependent variables, controls, uncontrolled variables, sample size
5) Results (predicted): how would you analyze your data
6) Conclusions: predict conclusions for the various outcomes of your experiment
REPORT: CANCER ETIOLOGY
Stem cell divisions, somatic mutations, cancer etiology, and cancer prevention
Cristian Tomasetti, Lu Li, Bert Vogelstein
Cancers are caused by mutations that may be inherited, induced by environmental factors, or result from DNA replication errors (R). We studied the relationship between the number of normal stem cell divisions and the risk of 17 cancer types in 69 countries throughout the world. The data revealed a strong correlation (median = 0.80) between cancer incidence and normal stem cell divisions in all countries, regardless of their environment. The major role of R mutations in cancer etiology was supported by an independent approach, based solely on cancer genome sequencing and epidemiological data, which suggested that R mutations are responsible for two-thirds of the mutations in human cancers. All of these results are consistent with epidemiological estimates of the fraction of cancers that can be prevented by changes in the environment. Moreover, they accentuate the importance of early detection and intervention to reduce deaths from the many cancers arising from unavoidable R mutations.
It is now widely accepted that cancer is the result of the gradual accumulation of driver gene mutations that successively increase cell proliferation (1-3). But what causes these mutations? The role of environmental factors (E) in cancer development has long been evident from epidemiological studies, and this has fundamental implications for primary prevention. The role of heredity (H) has been conclusively demonstrated from both twin studies (4) and the identification of the genes responsible for cancer predisposition syndromes (3, 5). We recently hypothesized that a third source, mutations due to the random mistakes made during normal DNA replication (R), can explain why cancers occur much more commonly in som ...
This document discusses a systematic review and meta-analysis on the relationship between dietary fat intake and breast cancer risk. The meta-analysis included 45 studies with over 25,000 breast cancer patients. It found a small increased risk of breast cancer associated with higher total fat intake. The review also discusses terms related to systematic reviews and meta-analysis, such as heterogeneity statistics, I², and the Q statistic.
This document provides an introduction and tables for determining sample sizes in various health studies. It covers one-sample situations like estimating a population proportion with absolute or relative precision and hypothesis tests for a population proportion. Two-sample situations covered include estimating the difference between two population proportions and hypothesis tests for two population proportions. It also addresses case-control studies, cohort studies, lot quality assurance sampling, and incidence-rate studies. Tables of minimum sample sizes are provided for each situation.
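For the simplest one-sample situation above, estimating a population proportion with absolute precision, the standard formula is n = z²·p·(1 − p)/d². A small sketch (the planning values are illustrative; the tabulated values in the document may differ by rounding convention):

```python
import math

def sample_size_proportion(p, d, z=1.96):
    """Minimum sample size to estimate a population proportion with
    absolute precision d at ~95% confidence: n = z^2 * p * (1-p) / d^2.
    p is the anticipated proportion (0.5 gives the worst case)."""
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

# Worst-case planning value p = 0.5, within +/- 5 percentage points.
print(sample_size_proportion(0.5, 0.05))   # expected: 385
```

Because p(1 − p) is maximized at p = 0.5, that choice is the conservative default when no prior estimate of the proportion exists.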
A SEIR MODEL FOR CONTROL OF INFECTIOUS DISEASES
This document presents a SEIR model for controlling infectious diseases with constraints. It begins with an introduction to SEIR models and their use in modeling disease transmission and testing control strategies. It then motivates the study by discussing the COVID-19 pandemic. The document outlines the basic ideas of the SEIR model and describes the compartments and parameters. It presents the optimal control problem formulated to determine vaccination strategies over time. Potential application areas and future research scope are discussed before concluding with references.
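The SEIR compartments and transitions outlined above can be sketched with a simple forward-Euler integration; the transition rates below are illustrative assumptions, not parameter values from the document:

```python
def seir(beta=0.5, sigma=0.2, gamma=0.1, days=200, dt=0.1):
    """Forward-Euler integration of a basic SEIR model.
    beta: transmission rate, sigma: 1/latent period, gamma: recovery rate.
    State variables are fractions of a closed population."""
    s, e, i, r = 0.99, 0.01, 0.0, 0.0
    for _ in range(int(days / dt)):
        new_exposed    = beta * s * i * dt   # S -> E (infection)
        new_infectious = sigma * e * dt      # E -> I (end of latency)
        new_recovered  = gamma * i * dt      # I -> R (recovery)
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
    return s, e, i, r

s, e, i, r = seir()
print(round(s, 3), round(e, 5), round(i, 5), round(r, 3))
```

A control strategy such as vaccination would be modeled as an extra outflow from S; the optimal-control formulation in the document chooses that outflow over time subject to constraints.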
The document discusses different types of epidemiological studies used to study disease distribution and risk factors. Observational studies include cross-sectional and longitudinal studies. Cross-sectional studies provide prevalence data but no time sequence, while longitudinal studies allow studying disease progression over time. Analytical studies include case-control and cohort studies. Case-control studies compare exposures in cases vs controls to identify risk factors, while cohort studies follow groups over time to determine incidence. Experimental studies like randomized controlled trials are used to evaluate new treatments by randomly assigning participants to intervention or control groups.
This document provides an overview of cross-sectional studies. It defines cross-sectional studies as studies that measure prevalence by observing exposures and outcomes in a population at a single point in time. It discusses key aspects of cross-sectional study design such as sampling, data collection methods, analysis of prevalence data, and potential biases like selection bias.
Similar to Logistic Loglogistic With Long Term Survivors For Split Population Model (20)
The Use of Java Swing's Components to Develop a Widget
A widget is a kind of application that provides a single service, such as a map, news feed, simple clock, or battery-life indicator. This kind of interactive software object has been developed to facilitate user interface (UI) design, and a given UI function may be implemented using different widgets. In this article, we present the widget as a platform used in various applications, such as desktop, web browser, and mobile phone. We also describe a visual menu of Java Swing's components that will be used to build widgets, assuming the reader has successfully compiled and run a program that uses Swing components.
3D Human Hand Posture Reconstruction Using a Single 2D Image
Passive sensing of the 3D geometric posture of the human hand has been studied extensively over the past decade. However, these research efforts have been hampered by the computational complexity caused by inverse kinematics and 3D reconstruction. In this paper, we focus on 3D hand posture estimation based on a single 2D image, with the aim of robotic applications. We introduce a human hand model with 27 degrees of freedom (DOFs) and analyze some of its constraints to reduce the DOFs without any significant degradation of performance. A novel algorithm to estimate the 3D hand posture from eight 2D projected feature points is proposed. Experimental results using real images confirm that our algorithm gives good estimates of the 3D hand pose. Keywords: 3D hand posture estimation; model-based approach; gesture recognition; human-computer interface; machine vision.
Camera as Mouse and Keyboard for Handicap Person with Troubleshooting Ability...
A camera mouse has been widely used to let handicapped persons interact with a computer. Most importantly, a camera mouse must be able to replace all roles of a typical mouse and keyboard: it must provide all mouse click events and keyboard functions (including all shortcut keys), it must allow users to troubleshoot by themselves, and it must eliminate neck fatigue when used over long periods. In this paper, we propose a camera mouse system with a timer as the left-click event and blinking as the right-click event. We also modify the original on-screen keyboard layout by adding two buttons (a "drag/drop" button for mouse drag-and-drop events, and a button to call the task manager for troubleshooting) and by changing the behavior of the CTRL, ALT, SHIFT, and CAPS LOCK keys to provide keyboard shortcuts. We further develop a recovery method that allows users to leave the camera and come back again, eliminating neck fatigue. Experiments involving several users were performed in our laboratory. The results show that our camera mouse allows users to type, perform left- and right-click events, drag and drop, and troubleshoot without using their hands. With this system, handicapped persons can use a computer more comfortably and with reduced dryness of the eyes.
A Proposed Web Accessibility Framework for the Arab Disabled
The Web is providing unprecedented access to information and interaction for people with disabilities. This paper presents a Web accessibility framework which offers ease of Web access for disabled Arab users and facilitates their lifelong learning. The proposed framework provides disabled Arab users with an easy means of access in their mother language, so they do not have to overcome the barrier of learning the target spoken language. The framework is based on analyzing the web page's meta-language, extracting its content, and reformulating it in a format suitable for disabled users. Its basic objective is to support the equal right of disabled Arab people to access education and training alongside non-disabled people. Keywords: Arabic Moon code, Arabic Sign Language, deaf, deaf-blind, e-learning interactivity, Moon code, Web accessibility, Web framework, Web system, WWW.
Real Time Blinking Detection Based on Gabor Filter
The document proposes a new method for real-time blinking detection based on Gabor filters. It begins by reviewing existing methods and their limitations in dealing with noise, variations in eye shape, and blinking speed. The proposed method uses a Gabor filter to extract the top and bottom arcs of the eye from an image. It then measures the distance between these arcs and compares it to a threshold: a distance below the threshold indicates a closed eye, while a distance above indicates an open eye. The document claims this Gabor filter-based approach is robust to noise, variations in eye shape and blinking speed. It presents experimental results showing the method can accurately detect blinking across different users.
Computer Input with Human Eyes-Only Using Two Purkinje Images Which Works in ...Waqas Tariq
A method for computer input with human eyes-only using two Purkinje images which works in a real time basis without calibration is proposed. Experimental results shows that cornea curvature can be estimated by using two light sources derived Purkinje images so that no calibration for reducing person-to-person difference of cornea curvature. It is found that the proposed system allows usersf movements of 30 degrees in roll direction and 15 degrees in pitch direction utilizing detected face attitude which is derived from the face plane consisting three feature points on the face, two eyes and nose or mouth. Also it is found that the proposed system does work in a real time basis.
Toward a More Robust Usability concept with Perceived Enjoyment in the contex...Waqas Tariq
Mobile multimedia service is relatively new but has quickly dominated people¡¯s lives, especially among young people. To explain this popularity, this study applies and modifies the Technology Acceptance Model (TAM) to propose a research model and conduct an empirical study. The goal of study is to examine the role of Perceived Enjoyment (PE) and what determinants can contribute to PE in the context of using mobile multimedia service. The result indicates that PE is influencing on Perceived Usefulness (PU) and Perceived Ease of Use (PEOU) and directly Behavior Intention (BI). Aesthetics and flow are key determinants to explain Perceived Enjoyment (PE) in mobile multimedia usage.
Collaborative Learning of Organisational KnolwedgeWaqas Tariq
This paper presents recent research into methods used in Australian Indigenous Knowledge sharing and looks at how these can support the creation of suitable collaborative envi- ronments for timely organisational learning. The protocols and practices as used today and in the past by Indigenous communities are presented and discussed in relation to their relevance to a personalised system of knowledge sharing in modern organisational cultures. This research focuses on user models, knowledge acquisition and integration of data for constructivist learning in a networked repository of or- ganisational knowledge. The data collected in the repository is searched to provide collections of up-to-date and relevant material for training in a work environment. The aim is to improve knowledge collection and sharing in a team envi- ronment. This knowledge can then be collated into a story or workflow that represents the present knowledge in the organisation.
Our research aims to propose a global approach for specification, design and verification of context awareness Human Computer Interface (HCI). This is a Model Based Design approach (MBD). This methodology describes the ubiquitous environment by ontologies. OWL is the standard used for this purpose. The specification and modeling of Human-Computer Interaction are based on Petri nets (PN). This raises the question of representation of Petri nets with XML. We use for this purpose, the standard of modeling PNML. In this paper, we propose an extension of this standard for specification, generation and verification of HCI. This extension is a methodological approach for the construction of PNML with Petri nets. The design principle uses the concept of composition of elementary structures of Petri nets as PNML Modular. The objective is to obtain a valid interface through verification of properties of elementary Petri nets represented with PNML.
Development of Sign Signal Translation System Based on Altera’s FPGA DE2 BoardWaqas Tariq
The main aim of this paper is to build a system that is capable of detecting and recognizing the hand gesture in an image captured by using a camera. The system is built based on Altera’s FPGA DE2 board, which contains a Nios II soft core processor. Image processing techniques and a simple but effective algorithm are implemented to achieve this purpose. Image processing techniques are used to smooth the image in order to ease the subsequent processes in translating the hand sign signal. The algorithm is built for translating the numerical hand sign signal and the result are displayed on the seven segment display. Altera’s Quartus II, SOPC Builder and Nios II EDS software are used to construct the system. By using SOPC Builder, the related components on the DE2 board can be interconnected easily and orderly compared to traditional method that requires lengthy source code and time consuming. Quartus II is used to compile and download the design to the DE2 board. Then, under Nios II EDS, C programming language is used to code the hand sign translation algorithm. Being able to recognize the hand sign signal from images can helps human in controlling a robot and other applications which require only a simple set of instructions provided a CMOS sensor is included in the system.
An overview on Advanced Research Works on Brain-Computer InterfaceWaqas Tariq
A brain–computer interface (BCI) is a proficient result in the research field of human- computer synergy, where direct articulation between brain and an external device occurs resulting in augmenting, assisting and repairing human cognitive. Advanced works like generating brain-computer interface switch technologies for intermittent (or asynchronous) control in natural environments or developing brain-computer interface by Fuzzy logic Systems or by implementing wavelet theory to drive its efficacies are still going on and some useful results has also been found out. The requirements to develop this brain machine interface is also growing day by day i.e. like neuropsychological rehabilitation, emotion control, etc. An overview on the control theory and some advanced works on the field of brain machine interface are shown in this paper.
Exploring the Relationship Between Mobile Phone and Senior Citizens: A Malays...Waqas Tariq
There is growing ageing phenomena with the rise of ageing population throughout the world. According to the World Health Organization (2002), the growing ageing population indicates 694 million, or 223% is expected for people aged 60 and over, since 1970 and 2025.The growth is especially significant in some advanced countries such as North America, Japan, Italy, Germany, United Kingdom and so forth. This growing older adult population has significantly impact the social-culture, lifestyle, healthcare system, economy, infrastructure and government policy of a nation. However, there are limited research studies on the perception and usage of a mobile phone and its service for senior citizens in a developing nation like Malaysia. This paper explores the relationship between mobile phones and senior citizens in Malaysia from the perspective of a developing country. We conducted an exploratory study using contextual interviews with 5 senior citizens of how they perceive their mobile phones. This paper reveals 4 interesting themes from this preliminary study, in addition to the findings of the desirable mobile requirements for local senior citizens with respect of health, safety and communication purposes. The findings of this study bring interesting insight to local telecommunication industries as a whole, and will also serve as groundwork for more in-depth study in the future.
Principles of Good Screen Design in WebsitesWaqas Tariq
Visual techniques for proper arrangement of the elements on the user screen have helped the designers to make the screen look good and attractive. Several visual techniques emphasize the arrangement and ordering of the screen elements based on particular criteria for best appearance of the screen. This paper investigates few significant visual techniques in various web user interfaces and showcases the results for better understanding and their presence.
This document discusses the progress of virtual teams in Albania. It provides context on virtual teams and how they differ from traditional teams in their reliance on technology for communication across distances. The document then examines the use of virtual teams in Albania, noting the growing infrastructure and technology usage that enables virtual collaboration. It highlights some virtual team examples in Albanian government and academic projects.
Cognitive Approach Towards the Maintenance of Web-Sites Through Quality Evalu...Waqas Tariq
It is a well established fact that the Web-Applications require frequent maintenance because of cutting– edge business competitions. The authors have worked on quality evaluation of web-site of Indian ecommerce domain. As a result of that work they have made a quality-wise ranking of these sites. According to their work and also the survey done by various other groups Futurebazaar web-site is considered to be one of the best Indian e-shopping sites. In this research paper the authors are assessing the maintenance of the same site by incorporating the problems incurred during this evaluation. This exercise gives a real world maintainability problem of web-sites. This work will give a clear picture of all the quality metrics which are directly or indirectly related with the maintainability of the web-site.
USEFul: A Framework to Mainstream Web Site Usability through Automated Evalua...Waqas Tariq
A paradox has been observed whereby web site usability is proven to be an essential element in a web site, yet at the same time there exist an abundance of web pages with poor usability. This discrepancy is the result of limitations that are currently preventing web developers in the commercial sector from producing usable web sites. In this paper we propose a framework whose objective is to alleviate this problem by automating certain aspects of the usability evaluation process. Mainstreaming comes as a result of automation, therefore enabling a non-expert in the field of usability to conduct the evaluation. This results in reducing the costs associated with such evaluation. Additionally, the framework allows the flexibility of adding, modifying or deleting guidelines without altering the code that references them since the guidelines and the code are two separate components. A comparison of the evaluation results carried out using the framework against published evaluations of web sites carried out by web site usability professionals reveals that the framework is able to automatically identify the majority of usability violations. Due to the consistency with which it evaluates, it identified additional guideline-related violations that were not identified by the human evaluators.
Robot Arm Utilized Having Meal Support System Based on Computer Input by Huma...Waqas Tariq
A robot arm utilized having meal support system based on computer input by human eyes only is proposed. The proposed system is developed for handicap/disabled persons as well as elderly persons and tested with able persons with several shapes and size of eyes under a variety of illumination conditions. The test results with normal persons show the proposed system does work well for selection of the desired foods and for retrieve the foods as appropriate as usersf requirements. It is found that the proposed system is 21% much faster than the manually controlled robotics.
Dynamic Construction of Telugu Speech Corpus for Voice Enabled Text EditorWaqas Tariq
In recent decades speech interactive systems have gained increasing importance. Performance of an ASR system mainly depends on the availability of large corpus of speech. The conventional method of building a large vocabulary speech recognizer for any language uses a top-down approach to speech. This approach requires large speech corpus with sentence or phoneme level transcription of the speech utterances. The transcriptions must also include different speech order so that the recognizer can build models for all the sounds present. But, for Telugu language, because of its complex nature, a very large, well annotated speech database is very difficult to build. It is very difficult, if not impossible, to cover all the words of any Indian language, where each word may have thousands and millions of word forms. A significant part of grammar that is handled by syntax in English (and other similar languages) is handled within morphology in Telugu. Phrases including several words (that is, tokens) in English would be mapped on to a single word in Telugu.Telugu language is phonetic in nature in addition to rich in morphology. That is why the speech technology developed for English cannot be applied to Telugu language. This paper highlights the work carried out in an attempt to build a voice enabled text editor with capability of automatic term suggestion. Main claim of the paper is the recognition enhancement process developed by us for suitability of highly inflecting, rich morphological languages. This method results in increased speech recognition accuracy with very much reduction in corpus size. It also adapts Telugu words to the database dynamically, resulting in growth of the corpus.
An Improved Approach for Word Ambiguity RemovalWaqas Tariq
Word ambiguity removal is a task of removing ambiguity from a word, i.e. correct sense of word is identified from ambiguous sentences. This paper describes a model that uses Part of Speech tagger and three categories for word sense disambiguation (WSD). Human Computer Interaction is very needful to improve interactions between users and computers. For this, the Supervised and Unsupervised methods are combined. The WSD algorithm is used to find the efficient and accurate sense of a word based on domain information. The accuracy of this work is evaluated with the aim of finding best suitable domain of word. Keywords: Human Computer Interaction, Supervised Training, Unsupervised Learning, Word Ambiguity, Word sense disambiguation
Parameters Optimization for Improving ASR Performance in Adverse Real World N...Waqas Tariq
From the existing research it has been observed that many techniques and methodologies are available for performing every step of Automatic Speech Recognition (ASR) system, but the performance (Minimization of Word Error Recognition-WER and Maximization of Word Accuracy Rate- WAR) of the methodology is not dependent on the only technique applied in that method. The research work indicates that, performance mainly depends on the category of the noise, the level of the noise and the variable size of the window, frame, frame overlap etc is considered in the existing methods. The main aim of the work presented in this paper is to use variable size of parameters like window size, frame size and frame overlap percentage to observe the performance of algorithms for various categories of noise with different levels and also train the system for all size of parameters and category of real world noisy environment to improve the performance of the speech recognition system. This paper presents the results of Signal-to-Noise Ratio (SNR) and Accuracy test by applying variable size of parameters. It is observed that, it is really very hard to evaluate test results and decide parameter size for ASR performance improvement for its resultant optimization. Hence, this study further suggests the feasible and optimum parameter size using Fuzzy Inference System (FIS) for enhancing resultant accuracy in adverse real world noisy environmental conditions. This work will be helpful to give discriminative training of ubiquitous ASR system for better Human Computer Interaction (HCI). Keywords: ASR Performance, ASR Parameters Optimization, Multi-Environmental Training, Fuzzy Inference System for ASR, ubiquitous ASR system, Human Computer Interaction (HCI)
Main Java[All of the Base Concepts}.docxadhitya5119
This is part 1 of my Java Learning Journey. This Contains Custom methods, classes, constructors, packages, multithreading , try- catch block, finally block and more.
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: Regeneration and Repair
There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc.
Complications of wound healing like infection, hyperpigmentation of scar, contractures, and keloid formation.
हिंदी वर्णमाला पीपीटी, hindi alphabet PPT presentation, hindi varnamala PPT, Hindi Varnamala pdf, हिंदी स्वर, हिंदी व्यंजन, sikhiye hindi varnmala, dr. mulla adam ali, hindi language and literature, hindi alphabet with drawing, hindi alphabet pdf, hindi varnamala for childrens, hindi language, hindi varnamala practice for kids, https://www.drmullaadamali.com
How to Make a Field Mandatory in Odoo 17Celine George
In Odoo, making a field required can be done through both Python code and XML views. When you set the required attribute to True in Python code, it makes the field required across all views where it's used. Conversely, when you set the required attribute in XML views, it makes the field required only in the context of that particular view.
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...PECB
Denis is a dynamic and results-driven Chief Information Officer (CIO) with a distinguished career spanning information systems analysis and technical project management. With a proven track record of spearheading the design and delivery of cutting-edge Information Management solutions, he has consistently elevated business operations, streamlined reporting functions, and maximized process efficiency.
Certified as an ISO/IEC 27001: Information Security Management Systems (ISMS) Lead Implementer, Data Protection Officer, and Cyber Risks Analyst, Denis brings a heightened focus on data security, privacy, and cyber resilience to every endeavor.
His expertise extends across a diverse spectrum of reporting, database, and web development applications, underpinned by an exceptional grasp of data storage and virtualization technologies. His proficiency in application testing, database administration, and data cleansing ensures seamless execution of complex projects.
What sets Denis apart is his comprehensive understanding of Business and Systems Analysis technologies, honed through involvement in all phases of the Software Development Lifecycle (SDLC). From meticulous requirements gathering to precise analysis, innovative design, rigorous development, thorough testing, and successful implementation, he has consistently delivered exceptional results.
Throughout his career, he has taken on multifaceted roles, from leading technical project management teams to owning solutions that drive operational excellence. His conscientious and proactive approach is unwavering, whether he is working independently or collaboratively within a team. His ability to connect with colleagues on a personal level underscores his commitment to fostering a harmonious and productive workplace environment.
Date: May 29, 2024
Tags: Information Security, ISO/IEC 27001, ISO/IEC 42001, Artificial Intelligence, GDPR
-------------------------------------------------------------------------------
Find out more about ISO training and certification services
Training: ISO/IEC 27001 Information Security Management System - EN | PECB
ISO/IEC 42001 Artificial Intelligence Management System - EN | PECB
General Data Protection Regulation (GDPR) - Training Courses - EN | PECB
Webinars: https://pecb.com/webinars
Article: https://pecb.com/article
-------------------------------------------------------------------------------
For more information about PECB:
Website: https://pecb.com/
LinkedIn: https://www.linkedin.com/company/pecb/
Facebook: https://www.facebook.com/PECBInternational/
Slideshare: http://www.slideshare.net/PECBCERTIFICATION
Philippine Edukasyong Pantahanan at Pangkabuhayan (EPP) CurriculumMJDuyan
(𝐓𝐋𝐄 𝟏𝟎𝟎) (𝐋𝐞𝐬𝐬𝐨𝐧 𝟏)-𝐏𝐫𝐞𝐥𝐢𝐦𝐬
𝐃𝐢𝐬𝐜𝐮𝐬𝐬 𝐭𝐡𝐞 𝐄𝐏𝐏 𝐂𝐮𝐫𝐫𝐢𝐜𝐮𝐥𝐮𝐦 𝐢𝐧 𝐭𝐡𝐞 𝐏𝐡𝐢𝐥𝐢𝐩𝐩𝐢𝐧𝐞𝐬:
- Understand the goals and objectives of the Edukasyong Pantahanan at Pangkabuhayan (EPP) curriculum, recognizing its importance in fostering practical life skills and values among students. Students will also be able to identify the key components and subjects covered, such as agriculture, home economics, industrial arts, and information and communication technology.
𝐄𝐱𝐩𝐥𝐚𝐢𝐧 𝐭𝐡𝐞 𝐍𝐚𝐭𝐮𝐫𝐞 𝐚𝐧𝐝 𝐒𝐜𝐨𝐩𝐞 𝐨𝐟 𝐚𝐧 𝐄𝐧𝐭𝐫𝐞𝐩𝐫𝐞𝐧𝐞𝐮𝐫:
-Define entrepreneurship, distinguishing it from general business activities by emphasizing its focus on innovation, risk-taking, and value creation. Students will describe the characteristics and traits of successful entrepreneurs, including their roles and responsibilities, and discuss the broader economic and social impacts of entrepreneurial activities on both local and global scales.
Strategies for Effective Upskilling is a presentation by Chinwendu Peace in a Your Skill Boost Masterclass organisation by the Excellence Foundation for South Sudan on 08th and 09th June 2024 from 1 PM to 3 PM on each day.
LAND USE LAND COVER AND NDVI OF MIRZAPUR DISTRICT, UPRAHUL
This Dissertation explores the particular circumstances of Mirzapur, a region located in the
core of India. Mirzapur, with its varied terrains and abundant biodiversity, offers an optimal
environment for investigating the changes in vegetation cover dynamics. Our study utilizes
advanced technologies such as GIS (Geographic Information Systems) and Remote sensing to
analyze the transformations that have taken place over the course of a decade.
The complex relationship between human activities and the environment has been the focus
of extensive research and worry. As the global community grapples with swift urbanization,
population expansion, and economic progress, the effects on natural ecosystems are becoming
more evident. A crucial element of this impact is the alteration of vegetation cover, which plays a
significant role in maintaining the ecological equilibrium of our planet.Land serves as the foundation for all human activities and provides the necessary materials for
these activities. As the most crucial natural resource, its utilization by humans results in different
'Land uses,' which are determined by both human activities and the physical characteristics of the
land.
The utilization of land is impacted by human needs and environmental factors. In countries
like India, rapid population growth and the emphasis on extensive resource exploitation can lead
to significant land degradation, adversely affecting the region's land cover.
Therefore, human intervention has significantly influenced land use patterns over many
centuries, evolving its structure over time and space. In the present era, these changes have
accelerated due to factors such as agriculture and urbanization. Information regarding land use and
cover is essential for various planning and management tasks related to the Earth's surface,
providing crucial environmental data for scientific, resource management, policy purposes, and
diverse human activities.
Accurate understanding of land use and cover is imperative for the development planning
of any area. Consequently, a wide range of professionals, including earth system scientists, land
and water managers, and urban planners, are interested in obtaining data on land use and cover
changes, conversion trends, and other related patterns. The spatial dimensions of land use and
cover support policymakers and scientists in making well-informed decisions, as alterations in
these patterns indicate shifts in economic and social conditions. Monitoring such changes with the
help of Advanced technologies like Remote Sensing and Geographic Information Systems is
crucial for coordinated efforts across different administrative levels. Advanced technologies like
Remote Sensing and Geographic Information Systems
9
Changes in vegetation cover refer to variations in the distribution, composition, and overall
structure of plant communities across different temporal and spatial scales. These changes can
occur natural.
Liberal Approach to the Study of Indian Politics.pdf
Logistic Loglogistic With Long Term Survivors For Split Population Model

International Journal of Scientific and Statistical Computing (IJSSC), Volume (4) : Issue (1) : 2013

Desi Rahmatina (desirahmatina@gmail.com)
Department of Accounting, Universitas Maritim Raja Ali Haji, Tanjungpinang, Indonesia.
Abstract
The split population model postulates a mixed population with two types of individuals, the susceptibles and the long-term survivors. The susceptibles are at risk of developing the event under consideration, and the event would be observed with certainty if complete follow-up were possible. The long-term survivors, however, will never experience the event. Immunes are known to be present in the Stanford Heart Transplant data. This paper focuses on the case where the long-term survivor probability varies from individual to individual, using a logistic model for the loglogistic survival distribution. In addition, the parameters of the split population model are estimated by maximum likelihood using the Newton-Raphson iterative method.
Keywords: Split Population Model, Logistic Loglogistic Model, Split Loglogistic Model.
1. INTRODUCTION
Split population models are also known as mixture models. The data used in this paper are the Stanford Heart Transplant data: survival times of potential heart transplant recipients from their date of acceptance into the Stanford Heart Transplant program [3]. The set consists of the survival times, in days, both uncensored and censored, for 103 patients, and three covariates are considered: age of the patient in years, surgery, and transplant. Failure for these individuals is death. Covariate methods have been examined quite extensively in the context of parametric survival models, for which the distribution of the survival times depends on the vector of covariates associated with each individual. See [6] for approaches which accommodate censoring and covariates in the ordinary exponential model for survival.
Currently, such mixture models with immunes and covariates are in use in many areas, such as medicine and criminology; see, for example, [4], [5] and [7]. In our formulation, the covariates are incorporated into a split loglogistic model by allowing the proportion of ultimate failures and the rate of failure to depend on the covariates and the unknown parameter vectors via a logistic model. Within this setup, we provide simple sufficient conditions for the existence, consistency, and asymptotic normality of a maximum likelihood estimator for the parameters involved. As an application of this theory, the likelihood ratio test for a difference in immune proportions is shown to have an asymptotic chi-square distribution. These results allow immediate practical applications on the covariates and also provide some insight into the assumptions on the covariates and the censoring mechanism that are likely to be needed in practice. Our models and analysis are described in Section 5.
2. PREVIOUS METHODOLOGY AND DISCUSSION
[2] was the first to publish on this topic, in a discussion paper of the Royal Statistical Society. He used the method of maximum likelihood to estimate the proportion of cured breast cancer patients in a population represented by a data set of 121 women from an English hospital. The follow-up time for each woman varied up to a maximum of 14 years. The approach of [2] was to assume a lognormal distribution as the survival distribution of the susceptibles (although, curiously, he noted that an exponential distribution is in fact a better fit to the particular set of data analyzed) and to treat deaths from causes other than the cancer under consideration as a censoring mechanism.

[8] used a model consisting of a mixture of the exponential distribution and a degenerate distribution to allow for a cured proportion. They fitted this model to a large data set consisting of 2682 patients from the Mayo Clinic who suffered from cancer of the stomach. The follow-up time on some of their patients was as much as 15 years. They noted that a correct interpretation of the existence of patients cured of the disease should be that the death rates for individuals with long follow-up drop to the baseline death rate of the population.

[9] applied a Weibull mixture model with allowance for immunes to a prospective study on breast cancer. Information on various factors was collected at a certain time for approximately 5000 women, of whom 48 subsequently developed breast cancer. [9] wished to estimate that proportion and to investigate how it may be influenced by risk factors, as well as to investigate how risk factors might affect the time to development of the cancer, if this occurred.

Similarly, in what follows we allow covariates to be associated with individuals, relate them to the loglogistic distribution with long-term survivors, and relate the probability of being immune to the covariates through a logistic model.
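As a sketch of this logistic specification, the snippet below maps a covariate vector to the eventual-failure probability ω and hence to the immune probability 1 − ω. The covariate values and coefficients are purely illustrative assumptions, not estimates from the Stanford data.

```python
import math

def immune_probability(x, beta):
    """Logistic model linking covariates to the probability of being immune.

    The eventual-failure probability is omega = exp(eta) / (1 + exp(eta)),
    where eta = beta' x, so the immune probability is 1 - omega.
    """
    eta = sum(b * xi for b, xi in zip(beta, x))
    omega = math.exp(eta) / (1.0 + math.exp(eta))
    return 1.0 - omega

# Hypothetical covariate vector (intercept, age, surgery, transplant)
# and coefficients -- illustrative values only, not fitted quantities.
x = [1.0, 45.0, 0.0, 1.0]
beta = [-2.0, 0.03, -0.5, 0.8]
p_immune = immune_probability(x, beta)
assert 0.0 < p_immune < 1.0  # a valid probability for any finite eta
```

Whatever the covariate values, the logistic link keeps the immune proportion strictly between zero and one, which is what allows it to enter the mixture likelihood directly.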
3. SPLIT MODELS
In this section, we consider 'split population models' (or simply 'split models'), in which the probability of eventual death is an additional parameter to be estimated and may be less than one. Split models appear in the biometrics literature as models in which part of the population is cured and will never experience the event; they have both a long history [2] and widespread applications and extensions in recent years [4]. The intuition behind these models is that, while standard duration models require a proper distribution for the density which makes up the hazard (i.e., one which integrates to one, meaning that all subjects in the study will eventually fail), split population models allow for a subpopulation which never experiences the event of interest. This is typically accomplished through a mixture of a standard hazard density and a point mass at zero [6]. That is, split population models estimate an additional parameter (or parameters) for the probability of eventual failure, which can be less than one for some portion of the data. In contrast, standard event history models assume that eventually all observations will fail, a strong and often unrealistic assumption.
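This mixture structure implies a survivor function of the form S(t) = (1 − ω) + ω S_R(t), which plateaus at the immune proportion instead of decaying to zero. The sketch below uses a loglogistic S_R; the parameter values (omega, alpha, gamma) are illustrative assumptions, not fitted quantities.

```python
def loglogistic_survival(t, alpha, gamma):
    """Loglogistic survival function S_R(t) = 1 / (1 + (t / alpha) ** gamma)."""
    return 1.0 / (1.0 + (t / alpha) ** gamma)

def split_survival(t, omega, alpha, gamma):
    """Split-population survivor function: a mixture of the susceptible
    survival curve (weight omega) and immunes who never fail (weight 1 - omega)."""
    return (1.0 - omega) + omega * loglogistic_survival(t, alpha, gamma)

omega, alpha, gamma = 0.7, 100.0, 1.5  # illustrative parameter values

assert split_survival(0.0, omega, alpha, gamma) == 1.0
# Unlike a proper survival curve, S(t) levels off at 1 - omega as t grows.
assert abs(split_survival(1e9, omega, alpha, gamma) - (1.0 - omega)) < 1e-6
```

The plateau at 1 − ω is exactly the feature that lets the model represent a subpopulation that never experiences the event.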
In standard survival analysis, data come in the form of failure times that are possibly censored, along with covariate information on each individual. It is also assumed that if complete follow-up were possible for all individuals, each would eventually experience the event. Sometimes, however, the failure time data come from a population in which a substantial proportion of the individuals do not experience the event by the end of the observation period. In some situations, there is reason to believe that some of these survivors are actually "cured" or "long-term survivors", in the sense that even after an extended follow-up, no further events are observed on these individuals. Long-term survivors are those who are not subject to the event of interest. For example, in a medical study involving patients with a fatal disease, the patients would be expected to die of the disease sooner or later, and all deaths could be observed if the patients had been followed long enough. However, when considering endpoints other than death, this assumption may not be sustainable if long-term survivors are present in the population. The remaining individuals, in contrast, are at risk of developing the event and are therefore called susceptibles.
Using the notation of [7], we can express a split model as follows. Suppose that F_R(t) is the usual cumulative distribution function of the failure time (here, time until death), and ω is the probability of being subject to the event, usually known as the eventual death rate. The probability of being immune is (1 − ω), which is sometimes described as the rate of termination. This second group of
3. Desi Rahmatina
International Journal of Scientific and Statistical Computing (IJSSC), Volume (4) : Issue (1) 21
immune individuals will never fail. Therefore their survival times are infinite (with probability one), and so their associated cumulative distribution function is identically zero for all finite t > 0. If we now define F_S(t) = ω F_R(t) as the new cumulative distribution function of failure for the split population, then this is an improper distribution, in the sense that, for 0 < ω < 1, F_S(∞) = ω < 1.
Let Y_i be an indicator variable, such that

$$Y_i = \begin{cases} 1, & \text{if the } i\text{th individual will eventually fail} \\ 0, & \text{if the } i\text{th individual will never fail,} \end{cases}$$

which follows the discrete probability distribution Pr[Y_i = 1] = ω and Pr[Y_i = 0] = 1 − ω.
For any individual belonging to the death group, we define the density function of eventual failure as f_R(t), with corresponding survival function S_R(t), while for an individual belonging to the other (immune) group, the density function of failure is identically zero and the survival function is identically one, for all finite time t.
Suppose the conditional probability density function for those who will eventually fail (die) is

$$f(t \mid Y = 1) = f_R(t) = F_R'(t)$$

wherever F_R(t) is differentiable. The unconditional probability density function of the failure time is then given by

$$f_S(t) = f(t \mid Y = 0)\Pr[Y = 0] + f(t \mid Y = 1)\Pr[Y = 1] = 0 \cdot (1 - \omega) + \omega f_R(t) = \omega f_R(t).$$
Similarly, the survival function for the susceptible group is defined as

$$S_R(t) = \Pr[T > t \mid Y = 1] = \int_t^{\infty} f(u \mid Y = 1)\,du = \int_t^{\infty} f_R(u)\,du = 1 - F_R(t).$$

The unconditional survival function is then defined for the split population as

$$S_S(t) = \Pr[T > t] = \int_t^{\infty} \big\{ f(u \mid Y = 0)\Pr[Y = 0] + f(u \mid Y = 1)\Pr[Y = 1] \big\}\,du = (1 - \omega) + \omega S_R(t),$$

which corresponds to the probability of being a long-term survivor plus the probability of being a susceptible who fails at some time beyond t.
In this case,

$$F_S(t) = \omega F_R(t)$$

is again an improper distribution function for ω < 1.
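As a quick numerical illustration of this improper distribution (a sketch only; the exponential form for F_R is assumed here purely for concreteness, not taken from the data analysis):

```python
import math

def split_surv(t, omega, lam):
    """Unconditional split-population survival S_S(t) = (1 - omega) + omega * S_R(t),
    assuming (for illustration) an exponential susceptible distribution."""
    s_r = math.exp(-lam * t)          # S_R(t) for an Exp(lam) susceptible group
    return (1.0 - omega) + omega * s_r

def split_cdf(t, omega, lam):
    """Improper CDF F_S(t) = omega * F_R(t); note F_S(infinity) = omega < 1."""
    return omega * (1.0 - math.exp(-lam * t))

# For 0 < omega < 1 the CDF plateaus at omega, never reaching 1:
print(round(split_cdf(1e9, omega=0.8, lam=0.01), 6))   # prints 0.8
print(round(split_surv(1e9, omega=0.8, lam=0.01), 6))  # prints 0.2
```

The plateau at ω is exactly what distinguishes a split-population model from a standard survival model, whose CDF must reach 1.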
4. THE LIKELIHOOD FUNCTION
The likelihood function can then be written as

$$L(\omega, \theta) = \prod_{i=1}^{n} \big[\omega f_R(t_i)\big]^{\delta_i} \big[(1 - \omega) + \omega S_R(t_i)\big]^{1 - \delta_i},$$

and the log-likelihood function becomes

$$l(\omega, \theta) = \ln L(\omega, \theta) = \sum_{i=1}^{n} \Big\{ \delta_i \ln\big[\omega f_R(t_i)\big] + (1 - \delta_i) \ln\big[(1 - \omega) + \omega S_R(t_i)\big] \Big\},$$
where δ_i is an indicator of the censoring status of observation t_i, and θ is the vector of all unknown parameters of f_R(t) and S_R(t). The existence of these two types of individuals, one type that never fails and another that eventually fails according to some distribution, leads to what may be described as a simple split model. When we modify both f_R(t) and S_R(t) to include covariate effects, giving f_R(t | z) and S_R(t | z) respectively, these will be referred to as split models.
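As a sketch of how this log-likelihood is evaluated (assuming, purely for illustration, an exponential susceptible distribution; the function name and toy data are hypothetical):

```python
import math

def split_loglik(times, deltas, omega, lam):
    """Log-likelihood of the simple split model, assuming an exponential
    susceptible distribution f_R(t) = lam*exp(-lam*t), S_R(t) = exp(-lam*t).
    delta_i = 1 for an observed failure, 0 for a censored observation."""
    ll = 0.0
    for t, d in zip(times, deltas):
        f_r = lam * math.exp(-lam * t)
        s_r = math.exp(-lam * t)
        if d == 1:    # observed failure contributes omega * f_R(t)
            ll += math.log(omega * f_r)
        else:         # censored observation contributes (1-omega) + omega * S_R(t)
            ll += math.log((1.0 - omega) + omega * s_r)
    return ll

# toy data: two observed failures and one censored time
print(split_loglik([1.0, 2.0, 5.0], [1, 1, 0], omega=0.8, lam=0.5))
```

Setting ω = 1 recovers the ordinary censored-data log-likelihood, which is one way to test a split-model implementation.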
We fit split models to our data using the same three distributions as were considered in the previous section (exponential, Weibull and loglogistic). The log-likelihood values achieved were -511.21, -495.60 and -489.17, respectively. The loglogistic model therefore fits better than the other two distributions, and the Weibull model fits better than the exponential model. The values of the 'splitting parameter' ω implied by our models were 0.81, 0.84 and 0.78 for the exponential, Weibull and loglogistic distributions, respectively.
5. MODEL WITH EXPLANATORY VARIABLES
We now consider models with explanatory variables. This is obviously necessary if we are to make predictions for individuals, or even if we are to make potentially accurate predictions for groups which differ systematically from our original sample. Furthermore, in many applications in economics or criminology the coefficients of the explanatory variables may be of obvious interest.
We begin by fitting a parametric model based on the loglogistic distribution. The model in its most general form is a split model in which the probability of eventual death follows a logistic model, while the distribution of the time until death is loglogistic, with its scale parameter depending on explanatory variables. The estimates are based on the usual maximum likelihood method.
To be more explicit, we follow the notation of section 3. For individual i, there is an unobservable variable Y_i which indicates whether or not individual i will eventually fail (die). The probability of eventual failure for individual i will be denoted ω_i, so that P(Y_i = 1) = ω_i. Let z_i be a (row) vector of individual characteristics (explanatory variables), and let α be the corresponding vector of parameters. Then we assume a logistic model for eventual death:
$$\omega_i = \frac{\exp(z_i^T \alpha)}{1 + \exp(z_i^T \alpha)}.$$
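A minimal numerical sketch of this logistic link (the covariate and coefficient values below are made up purely for illustration):

```python
import math

def eventual_failure_prob(z, alpha):
    """omega_i = exp(z^T alpha) / (1 + exp(z^T alpha)): the logistic model
    for the probability of eventual failure."""
    eta = sum(zj * aj for zj, aj in zip(z, alpha))  # linear predictor z_i^T alpha
    return math.exp(eta) / (1.0 + math.exp(eta))

# hypothetical covariates (intercept, age, surgery, transplant) and coefficients
print(round(eventual_failure_prob([1.0, 45.0, 0.0, 1.0], [-0.5, 0.05, 0.1, -2.0]), 4))  # prints 0.4378
```

The logistic form guarantees 0 < ω_i < 1 for any finite linear predictor, which is what makes it a natural choice for a probability.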
Next, we assume that the distribution of the time until death is loglogistic, with scale parameter λ and shape parameter κ.
The log-likelihood function for this model is

$$l(\omega, \theta) = \ln L(\omega, \theta) = \sum_{i=1}^{n} \Big\{ \delta_i \ln\big[\omega_i f_R(t_i)\big] + (1 - \delta_i) \ln\big[(1 - \omega_i) + \omega_i S_R(t_i)\big] \Big\}.$$
We can now define special cases of this general model. First, the model in which ω_i = 1 for all i (so everyone eventually fails), but in which the scale parameter depends on individual characteristics, will be called the loglogistic model (with explanatory variables); it is not a split model. Second, the model in which ω_i is replaced by a single parameter ω will be referred to as the split loglogistic model (with explanatory variables). In this model the probability of eventual death is a constant, though not necessarily equal to one, while the scale parameter of the distribution of time until death varies over individuals, depending on the individual characteristics z_i, so that λ_i = exp(z_i^T β).
The log-likelihood function for this model is

$$l(\omega, \beta, \kappa) = \sum_{i=1}^{n} \Bigg( \delta_i \Big\{ \ln \omega + \ln \kappa + z_i^T \beta + (\kappa - 1) \ln\!\big(t_i \exp(z_i^T \beta)\big) - 2 \ln\!\Big[1 + \big(t_i \exp(z_i^T \beta)\big)^{\kappa}\Big] \Big\} + (1 - \delta_i) \ln \frac{(1 - \omega)\Big[1 + \big(t_i \exp(z_i^T \beta)\big)^{\kappa}\Big] + \omega}{1 + \big(t_i \exp(z_i^T \beta)\big)^{\kappa}} \Bigg).$$
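One term of this log-likelihood can be sketched as follows. The parameterization S_R(t) = 1/[1 + (t·exp(zᵀβ))^κ] matches the equation above; the numeric inputs are hypothetical:

```python
import math

def split_loglogistic_term(t, delta, z, beta, omega, kappa):
    """One observation's contribution to the split-loglogistic log-likelihood,
    with scale lambda_i = exp(z^T beta) entering through (lambda_i * t)^kappa."""
    eta = sum(zj * bj for zj, bj in zip(z, beta))  # z_i^T beta
    u = (t * math.exp(eta)) ** kappa               # (lambda_i * t)^kappa
    if delta == 1:   # uncensored: log of omega * f_R(t_i)
        return (math.log(omega) + math.log(kappa) + eta
                + (kappa - 1.0) * math.log(t * math.exp(eta))
                - 2.0 * math.log(1.0 + u))
    else:            # censored: log of (1 - omega) + omega * S_R(t_i)
        return math.log(((1.0 - omega) * (1.0 + u) + omega) / (1.0 + u))

# hypothetical values, purely to exercise the function
print(split_loglogistic_term(2.0, 0, [1.0, 0.5], [0.2, -0.4], 0.8, 1.2))
```

The censored branch is algebraically identical to ln[(1 − ω) + ω S_R(t)]; the equation above simply puts it over a common denominator.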
Third, the model in which λ_i is replaced by a single parameter λ will be called the logistic loglogistic model. In this model the probability of eventual death varies over individuals, while the distribution of time until death (for those who eventually die) does not depend on individual characteristics. The log-likelihood function for this model is
$$l(\alpha, \lambda, \kappa) = \sum_{i=1}^{n} \Bigg( \delta_i \Big\{ \ln \frac{\exp(z_i^T \alpha)}{1 + \exp(z_i^T \alpha)} + \ln \kappa + \ln \lambda + (\kappa - 1) \ln(\lambda t_i) - 2 \ln\big[1 + (\lambda t_i)^{\kappa}\big] \Big\} + (1 - \delta_i) \ln \frac{\big[1 + (\lambda t_i)^{\kappa}\big] + \exp(z_i^T \alpha)}{\big[1 + \exp(z_i^T \alpha)\big]\big[1 + (\lambda t_i)^{\kappa}\big]} \Bigg).$$
Finally, the general model as presented above will be called the logistic/individual loglogistic model. In this model both the probability of eventual death and the distribution of time until death vary over individuals. The log-likelihood function for this model is
$$l(\alpha, \beta, \kappa) = \sum_{i=1}^{n} \Bigg( \delta_i \Big\{ \ln \frac{\exp(z_i^T \alpha)}{1 + \exp(z_i^T \alpha)} + \ln \kappa + z_i^T \beta + (\kappa - 1) \ln\!\big(t_i \exp(z_i^T \beta)\big) - 2 \ln\!\Big[1 + \big(t_i \exp(z_i^T \beta)\big)^{\kappa}\Big] \Big\} + (1 - \delta_i) \ln \frac{\Big[1 + \big(t_i \exp(z_i^T \beta)\big)^{\kappa}\Big] + \exp(z_i^T \alpha)}{\big[1 + \exp(z_i^T \alpha)\big]\Big[1 + \big(t_i \exp(z_i^T \beta)\big)^{\kappa}\Big]} \Bigg).$$
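Combining the logistic link for ω_i with the individual-level loglogistic scale, the general model's log-likelihood can be sketched as follows (a simplified illustration, not the estimation code used for the reported results; the toy data are hypothetical):

```python
import math

def logistic_individual_loglik(times, deltas, zs, alpha, beta, kappa):
    """Log-likelihood of the logistic/individual loglogistic model:
    omega_i follows a logistic model in z_i, and the loglogistic scale
    is lambda_i = exp(z_i^T beta)."""
    ll = 0.0
    for t, d, z in zip(times, deltas, zs):
        a = sum(zj * aj for zj, aj in zip(z, alpha))   # z_i^T alpha
        b = sum(zj * bj for zj, bj in zip(z, beta))    # z_i^T beta
        omega = math.exp(a) / (1.0 + math.exp(a))      # P(eventual failure)
        u = (t * math.exp(b)) ** kappa                 # (lambda_i * t)^kappa
        f_r = kappa * math.exp(b) * (t * math.exp(b)) ** (kappa - 1.0) / (1.0 + u) ** 2
        s_r = 1.0 / (1.0 + u)
        ll += math.log(omega * f_r) if d == 1 else math.log((1.0 - omega) + omega * s_r)
    return ll

# hypothetical toy data: times, censoring indicators, covariate vectors
ll = logistic_individual_loglik([1.0, 3.0], [1, 0], [[1.0, 0.2], [1.0, -0.1]],
                                alpha=[0.5, 1.0], beta=[-0.3, 0.4], kappa=1.1)
print(round(ll, 4))
```

In practice this function would be handed to a numerical optimizer over (α, β, κ) to produce maximum likelihood estimates like those in Table 2.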
Table 1 gives the results for the split loglogistic model and the logistic loglogistic model. The split loglogistic model dominates the logistic loglogistic model: for example, the likelihood value of -18.2007 for the split loglogistic model is noticeably higher than the value of -19.7716 for the logistic loglogistic model. We now turn to the logistic/individual loglogistic model, in which both the probability of eventual death and the distribution of time until death vary according to individual characteristics. These parameter estimates are given in Table 2. They are somewhat more complicated to discuss than the results from our other models, in part because there are simply more parameters, and some of them turn out to be statistically insignificant.
In Table 2, we can see that two covariates have a significant effect on the probability of being immune, Age and Transplant (p-values 0.0081 and 0.031, respectively), but the pattern differs in the loglogistic regression: there, Age fails to be significant (p-value 0.1932), while Transplant is significant (p-value 0.0002). Surgery fails to be significant for the probability of being immune (p-value 0.9249), but is significant in the loglogistic regression (p-value 0.0077).
Furthermore, these results are reasonably similar to the results we obtained using a logistic/individual exponential model [1]. For the probability of being immune, Age and Transplant are significant in both models (p-values 0.0081 and 0.031, respectively, for the logistic/individual loglogistic model; 0.0359 and 0.000 for the logistic/individual exponential model), while Surgery is not significant in either model (p-values 0.9249 and 0.0662, respectively). Next, we examine statistical significance for the distribution of time until death under both the logistic/individual loglogistic and exponential models. Age is not significant for the loglogistic model (p-value 0.1932) but is significant for the exponential model (p-value 0.0184); Surgery is significant for the loglogistic model (p-value 0.0077) but fails to be significant for the exponential model (p-value 0.8793); and finally, Transplant is significant for the loglogistic model (p-value 0.0002) but only marginally significant for the exponential model (p-value 0.0655).
TABLE 1: Split Loglogistic Model and Logistic Loglogistic Model.
TABLE 2: Logistic / Individual Loglogistic Model.
6. CONCLUSION
In this section, we summarize the results above concerning the significant covariates in the data for the models presented in section 5, as shown in Table 3. As we can see in Table 3, the split loglogistic and logistic/individual loglogistic models agree that Age is significant (p-values 0.0000 and 0.0081, respectively) and that Surgery just fails to be significant (p-values 0.2153 and 0.9249, respectively), while they differ on Transplant, which is not significant (p-value 0.1429) for the split loglogistic model but is significant (p-value 0.031) for the logistic/individual loglogistic model.
Variable      Split loglogistic           Logistic loglogistic
              Coefficient    p-value      Coefficient    p-value
intercept       9.683928     0.0000        -0.207918     0.8972
Age            -2.240139     0.0000         0.097019     0.0039
Surgery        -8.762389     0.2153        -0.983625     0.1870
Transplant     -6.673942     0.1429        -3.074790     0.0468

              κ = 0.094981                λ = 0.021881
              ω = 0.808882                κ = 0.566241
              ln L = -18.2007             ln L = -19.7716
Variable      Equation for                Equation for duration, given
              Pr(never fail)              eventual failure (loglogistic regression)
              Coefficient    p-value      Coefficient    p-value
intercept      -0.529806     0.6852        -4.387064     0.0012
Age             0.087290     0.0081         0.037826     0.1932
Surgery         0.130583     0.9249        -2.250424     0.0077
Transplant     -2.254443     0.031         -2.064279     0.0002

              κ = 0.767896
              ln L = -468.533
* not relevant
TABLE 3: Significant Covariates for the Stanford Heart Transplant Data.
Now we can see that Age and Surgery show different patterns of significance across the logistic loglogistic and logistic/individual loglogistic models. Age is significant, with a p-value of 0.0039, for the logistic loglogistic model, but not for the logistic/individual loglogistic model, where the p-value is 0.1932. Surgery just fails to be significant for the logistic loglogistic model (p-value 0.1870) but is significant for the logistic/individual loglogistic model (p-value 0.0077). Finally, Transplant is significant in both the logistic loglogistic and logistic/individual loglogistic models (p-values 0.0468 and 0.0002, respectively).
7. REFERENCES
[1] M. R. Bakar and D. R. Tina. "Split Population Model for Survival Analysis". Seminar Kebangsaan Sains Kumulatif XIII, Universiti Utara Malaysia, pp. 51-54. 2005.
[2] J. W. Boag. "Maximum Likelihood Estimates of the Proportion of Patients Cured by Cancer Therapy". Journal of the Royal Statistical Society, Series B, vol. 11(1), pp. 15-44. 1949.
[3] J. Crowley and M. Hu. "Covariance Analysis of Heart Transplant Survival Data". Journal of the American Statistical Association, vol. 72, pp. 27-36. 1977.
[4] V. T. Farewell. "The use of mixture models for the analysis of survival data with long-term survivors". Biometrics, vol. 38, pp. 1041-1046. 1982.
[5] D. F. Greenberg. "Modeling criminal careers". Criminology, vol. 29, pp. 17-46. 1991.
[6] R. Maller and X. Zhou. Survival Analysis with Long-Term Survivors, 1st ed., New York: Wiley, 1996, pp. 112-125.
[7] P. Schmidt and A. D. Witte. Predicting Recidivism Using Survival Models. Research in Criminology, Springer-Verlag, New York. 1988.
                              p-value
Variable              Split         Logistic      Logistic/individual
                      loglogistic   loglogistic   loglogistic
β0 (intercept)        0.0000        *             0.6852
β1 (Age)              0.0000        *             0.0081
β2 (Surgery)          0.2153        *             0.9249
β3 (Transplant)       0.1429        *             0.031
α0 (intercept)        *             0.8972        0.0012
α1 (Age)              *             0.0039        0.1932
α2 (Surgery)          *             0.1870        0.0077
α3 (Transplant)       *             0.0468        0.0002
λ                     *             0.0044        -
κ                     0.0000        0.0000        0.0000
ω (population split)  0.0000        -             -
[8] J. Berkson and R. P. Gage. "Survival Curve for Cancer Patients Following Treatment". Journal of the American Statistical Association, vol. 47, pp. 501-515. 1952.
[9] V. T. Farewell. "A model for a binary variable with time censored observations". Biometrika, vol. 64, pp. 43-46. 1977.