The document presents a new capture-recapture model for estimating the size of elusive epidemiologic events using two lists or data sources. It compares three estimators: the proposed estimator N, the traditional Petersen estimator Ns, and another estimator Nq. In simulation studies, both the Akaike Information Criterion and the Mean Absolute Deviation showed that the proposed estimator N is more consistent and accurate than the other estimators, especially when recaptures/relistings are low. For elusive populations, where recaptures tend to be low, the Petersen estimator breaks down. The proposed model uses a "coverage probability" that accounts for heterogeneity in capture probabilities across individuals.
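The Petersen estimator mentioned above has the well-known closed form N = n1*n2/m. A minimal sketch of that textbook formula (not the paper's proposed coverage-probability estimator) shows why it breaks down for elusive populations: it is undefined when there are no recaptures and unstable when there are few. The function name and counts are illustrative.

```python
# Classical Lincoln-Petersen estimator for a two-list capture-recapture study.
# This is the textbook formula, not the proposed estimator from the paper.

def petersen_estimate(n1, n2, m):
    """Estimate population size from two lists.

    n1, n2 -- number of individuals on list 1 and list 2
    m      -- number appearing on both lists (recaptures/relistings)
    """
    if m == 0:
        # The failure mode the paper targets: no overlap, no estimate.
        raise ValueError("Petersen estimator is undefined with zero recaptures")
    return n1 * n2 / m

# Illustrative counts: 200 cases on list 1, 150 on list 2, 30 on both.
print(petersen_estimate(200, 150, 30))  # 1000.0
```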
This document provides an overview of statistical concepts relevant to data mining. It discusses data mining tools and techniques for revealing patterns in data. Key concepts covered include supervised vs unsupervised learning, classification vs regression, measures of central tendency, variance, standard deviation, hypothesis testing, and the normal distribution. Examples are provided to illustrate calculating averages, variance, hypothesis testing, and interpreting normal distributions. Exercises with solutions are included to demonstrate applying statistical concepts.
Ars medicina: the art of healing through social mediaMike Pascoe
Medicine is the art of healing, and like all forms of art, medicine is practiced through a variety of techniques. In the spirit of enhancing the old art with new science, medical practitioners are encouraged to add social media (Twitter) to their repertoire. This talk will examine current ways health care providers are leveraging social media, as well as provide you with the technical skills required to unleash your inner creativity.
Minds&More is a consulting firm that supports clients' business growth through capabilities in marketing, sales, and transformation. They offer proven methodologies and flexible solutions including consulting, program/project management, and interim management. Their team of over 30 seasoned professionals helps clients through excellence in execution to achieve tangible results. Minds&More focuses on enabling positive growth through marketing, sales, and transformation services. They build long-term relationships with clients through active listening, structuring problems collaboratively, defining solution paths and metrics, and implementing and tracking solutions.
Mortgage Alliance collects personal information from clients to provide mortgage services and inform clients of new offerings. They only share this information with partners to fulfill requests and as required by law. The privacy policy outlines how information is collected and used, including sharing with lenders and insurers as needed. Clients consent to Mortgage Alliance obtaining their credit report and communicating financially relevant information via email. The policy is available online and clients can contact the Compliance Director with any questions.
The document outlines the agenda for a webinar series hosted by Cleantech Open. The webinar series provides mentorship and guidance to cleantech startups. The agenda lists the dates and topics that will be covered in the summer webinar program, including sessions on business models, markets, fundraising, and preparing for investor presentations. It also outlines the agenda for the September 11th national webinar, which will include a session on perfecting the startup pitch as well as an overview of the next steps in Cleantech Open's program.
Mehak Handloom Industries is a leading manufacturer and exporter of blankets and bed sheets established in 2009 in Panipat, Haryana, India. They produce a wide range of blankets using high quality materials and advanced weaving machinery. The company exports its products to South Africa, the USA, and European countries, and offers multiple payment options for its customers.
OwnCloud is a software suite focused on location-independent data storage services (cloud storage) and falls into the Infrastructure as a Service (IaaS) category of cloud computing.
This document provides information about direct investing in oil wells through Crude Energy. It discusses how advances in drilling technology like horizontal drilling and fracking have led to a boom in US oil production. Private investors can now directly invest in oil wells to participate in revenues. Crude Energy offers opportunities for accredited investors to invest in drilling projects and receive a portion of well income and tax benefits. The document provides details on various US shale formations with oil potential and how direct investing works.
The XIV Festival del Habano opened with the launch of the new Cohiba Pirámides Extra cigar. More than 1,500 delegates from 70 countries attended the event and enjoyed a concert. Habanos S.A. posted record sales of $401 million in 2011, a 9% increase that maintains its leadership in the worldwide premium cigar market. The trade fair also opened with representatives of Habanos S.A. and Tabacuba in attendance.
CAA2014 Community Archaeology and Technology: The ACCORD project: Archaeology...Nicole Beale
Stuart Jeffrey and Sian Jones
Paper presented at Computer Applications in Archaeology Conference 2014, 22nd - 25th April 2014, Université Paris 1 Panthéon-Sorbonne, Paris as part of Session 12: Community Archaeology and Technology. Session organisers: Nicole Beale and Eleonora Gandolfi. Session blog: http://blog.soton.ac.uk/comarch/
This document proposes a modified Mann-Whitney U test that adjusts for tied observations between two sampled populations. The modification develops a test statistic (W) that intrinsically accounts for ties in its expected value and variance. W is calculated based on the ranks assigned to observations from each population when pooled and ranked together. The null hypothesis is that the expected value of W is equal to 0, indicating the populations have equal medians. This can be tested using a chi-square distribution with the test statistic comparing W to its variance, which is also adjusted for ties. The method allows analysis when populations are on an ordinal scale and avoids problems with ties encountered in traditional methods.
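The pooled-ranking step described above can be sketched in a few lines. The midrank assignment below (averaging ranks across ties) is the standard ingredient any tie-adjusted rank test relies on; the U statistic shown is the classical Mann-Whitney form, given only for contrast, since the paper's W statistic and its tie-adjusted variance are defined in the paper itself.

```python
# Midrank assignment for pooled samples with ties, plus the classical
# Mann-Whitney U for contrast with the paper's W statistic.

def midranks(values):
    """Return 1-based ranks for values, averaging ranks across ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # Extend j over the run of tied values.
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

x = [1, 2, 2, 3]  # sample from population 1 (illustrative)
y = [2, 3, 3, 4]  # sample from population 2 (illustrative)
r = midranks(x + y)
rank_sum_x = sum(r[:len(x)])
U = rank_sum_x - len(x) * (len(x) + 1) / 2
print(U)  # 3.0
```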
Entropy Nucleus and Use in Waste Disposal Policies ijitjournal
The central theme of this article is that the usual Shannon entropy [1] is not sufficient to address the unknown Gaussian population average. A remedy is necessary. A refined version is introduced and named nucleus entropy in this article. Statistical properties and advantages of the Gaussian nucleus entropy are derived and utilized to interpret 2005 and 2007 waste disposals (in 1,000 tons) by fifty-one states (including the District of Columbia) in the USA. Each state generates its own waste, imports waste from other states for revenue, and exports waste to other states with a payment [2]. Nucleus entropy is large when the population average is large and/or when the population variance is smaller. Nucleus entropy assesses the significance of the waste policies under four scenarios: (1) keep only generated, (2) keep generated with receiving in and shipping out, (3) without receiving in, and (4) without shipping out. In the end, a few recommendations are suggested for waste management policy makers.
Computational Pool-Testing with Retesting StrategyWaqas Tariq
Pool testing is a cost-effective procedure for identifying defective items in a large population. It also improves the efficiency of the testing procedure when imperfect tests are employed. This study develops a computational pool-testing strategy based on a proposed pool testing with re-testing design. Statistical moments based on this applied design have been generated. With the advent of computers in the 1980s, the pool-testing with re-testing strategy under discussion is handled in the context of computational statistics. From this study, it has been established that re-testing reduces misclassifications significantly compared to the Dorfman procedure, although re-testing comes at a cost, i.e., an increase in the number of tests. The re-testing considered improves the sensitivity and specificity of the testing scheme.
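The cost-saving logic of the baseline (Dorfman) two-stage scheme that re-testing designs build on can be made concrete: test each pool once, and only test items individually when their pool is positive. The sketch below assumes a perfect test and illustrative parameters; the re-testing variant and its moments are as defined in the study itself.

```python
# Expected number of tests per item under the classical Dorfman two-stage
# pooling scheme with a perfect test (baseline for re-testing designs).

def dorfman_tests_per_item(p, k):
    """Expected tests per item for prevalence p and pool size k.

    One pool test is shared by k items (1/k per item), and all k items
    are retested individually whenever the pool contains a defective.
    """
    prob_pool_positive = 1 - (1 - p) ** k
    return 1 / k + prob_pool_positive

# At 1% prevalence, pools of 10 need about 0.196 tests per item,
# versus 1 test per item for individual testing.
print(round(dorfman_tests_per_item(0.01, 10), 3))  # 0.196
```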
This thesis examines methods for estimating extinction times of species on the Mascarene Islands using occupancy data. Maximum likelihood estimation is used to fit population decline models, including exponential, linear, and changepoint models. Issues with sparse data and parameter redundancy make extinction time estimation difficult without external population information. Simulations show linear decline models can estimate extinction times well when the true population decline is linear, but tend to underestimate extinction times.
OPTIMAL PREDICTION OF THE EXPECTED VALUE OF ASSETS UNDER FRACTAL SCALING EXPO...mathsjournal
In this paper, the optimal prediction of the expected value of assets under the fractal scaling exponent is considered. We first obtain a fractal exponent, then derive a Black-Scholes-type parabolic equation. We further obtain its solutions under given conditions for the prediction of the expected value of assets given the fractal exponent.
Eva BONDA et al. (1995) Neural correlates of mental transformations of the bo...Dr Eva Bonda, PhD
This study used positron emission tomography (PET) to identify brain regions involved in mental rotation of body parts. In a preliminary experiment, reaction times when judging left vs. right hands in different orientations identified the most difficult orientations. In the PET study, activity was measured while subjects viewed these challenging hand stimuli and mentally rotated their own hand to match the orientation. Compared to a control task, superior parietal cortex, intraparietal sulcus, and adjacent inferior parietal lobule showed increased blood flow, indicating these areas are involved in mental transformations of the body.
1) The document presents a mathematical model for analyzing lung function measurements in asthmatic children under different stresses like histamine challenge, exercise challenge, and on a control day.
2) The inverse Gaussian distribution and its hazard rate function are utilized to model the critical maximum points of the lung function curves.
3) The results from applying the mathematical model show that the failure rate curve has an upside down bathtub shape, with the maximum hazard rate highest for exercise challenges and lowest for the control day.
Inferring Primary Extinction in Late Permian Food Webs using Approximate Baye...Brandon Chow
The end-Permian extinction was the most severe mass extinction in the history of life, yet its causes are not well understood. Here, we use probabilistic food web models to explore how disruption of primary production could have caused the collapse of end-Permian terrestrial ecosystems. This poster was presented in the Geological Society of America's Annual Conference (Baltimore, 2015)
This document describes an analysis of count data from a study on the detection of anthelmintic resistance in gastrointestinal nematodes of small ruminants. The data consists of egg counts from 30 goats and 30 sheep that were grouped into Albendazole, Ivermectin, and control groups. The data was analyzed using Poisson and negative binomial regression models in R software. The Poisson model did not fit the data well due to overdispersion. However, the negative binomial regression model provided a better fit for the overdispersed data. Key findings from the negative binomial regression analysis are summarized.
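The overdispersion diagnosis described above (the study's analysis was done in R) rests on a simple fact: Poisson data should have variance close to the mean, while parasite egg counts typically have variance far above it. The sketch below simulates made-up counts from a gamma-mixed Poisson (i.e., negative binomial) model; these are not the goat/sheep data from the study.

```python
# Overdispersion check of the kind that motivates swapping a Poisson model
# for a negative binomial one. Counts are simulated, not the study's data.
import math
import random

def poisson_draw(lam, rng):
    """Knuth's multiplication method for a Poisson(lam) draw."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(42)
# Gamma-mixed Poisson = negative binomial: each individual gets its own mean,
# mimicking heterogeneous infection intensity across animals.
counts = [poisson_draw(rng.gammavariate(0.5, 40.0), rng) for _ in range(2000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
print(var > mean)  # True: variance far exceeds the mean, so Poisson fits poorly
```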
This document proposes a new estimator for estimating the mean of a finite population using stratified random sampling when auxiliary attribute information is available and there is non-response. The proposed estimator combines two existing estimators from Singh et al. (2007) and Malik and Singh (2013). Expressions for the bias and mean squared error of the proposed estimator are derived up to the first degree of approximation, and the estimator is shown to be more efficient than traditional estimators. The theoretical properties are verified with a numerical example.
Mathematics has successfully been applied to understand physics, but its application to biology and medicine is still developing. While early attempts at mathematical formalization of biology lacked biological substance, the situation has improved in recent decades. In medical imaging, mathematical approaches can be used to understand image data and make inferences about organs. One promising approach models growth as random iterated diffeomorphisms in biologically meaningful "darcyan coordinates". This allows modeling of growth through discrete cellular decisions over time and derivation of differential equations describing growth in the limit. Further developing the biological basis and applying these methods to real medical data offers opportunities to advance the role of mathematics in understanding biology and medicine.
Classification of aedes adults mosquitoes in two distinct groups based on fis...Alexander Decker
This document describes a study that used Fisher linear discriminant analysis and FZOARO classification techniques to classify Aedes mosquitoes into male and female groups based on measurements of their wing lengths. The researchers bred two groups of Aedes mosquitoes with different feeding amounts. They then measured the wing lengths of 15 randomly selected mosquitoes from each group and categorized them as having either large or small body sizes. Both classification techniques were applied and correctly classified around 85% of mosquitoes into their proper gender groups, indicating that wing length can be used to predict the gender of Aedes mosquitoes.
The document discusses various methods for constructing confidence intervals for estimating multinomial proportions. It aims to analyze the propensity for aberrations (i.e., unrealistic bounds such as negative values) in the interval estimates across different classical and Bayesian methods. Specifically, it provides the mathematical conditions under which each method may produce aberrant interval limits, such as zero-width intervals or bounds falling below 0 or above 1, especially for small sample counts. The document also develops an R program to facilitate computational implementation of the various methods for applied analysis of multinomial data.
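One such aberration is easy to reproduce: the naive Wald interval for a cell proportion can drop below zero for small counts. A minimal Python illustration follows (the paper's own program is in R); the counts are made up for the example.

```python
# The simple Wald interval for a multinomial cell proportion can produce
# an impossible negative lower bound when the cell count is small.
import math

def wald_interval(count, n, z=1.96):
    """Naive Wald 95% interval for a single cell proportion count/n."""
    p = count / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# 1 observation out of 30 in this cell: the lower limit is negative.
low, high = wald_interval(1, 30)
print(low < 0)  # True: an aberrant bound, since proportions cannot be negative
```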
COMPARISON OF THE RATIO ESTIMATE TO THE LOCAL LINEAR POLYNOMIAL ESTIMATE OF F...IJESM JOURNAL
In this paper, we study the effects of extreme observations on two estimators of the finite population total, both theoretically and by simulation. We compare the ratio estimate with the local linear polynomial estimate of the finite population total for different finite populations. Both the classical estimator and the nonparametric estimator based on the local linear polynomial produce good results when the auxiliary and study variables are highly correlated. It is noted, however, that in the presence of outlying observations the local linear polynomial estimator performs better with respect to design mean square error (MSE) in all the artificial populations generated.
36086 Topic Discussion3Number of Pages 2 (Double Spaced).docxrhetttrevannion
36086 Topic: Discussion3
Number of Pages: 2 (Double Spaced)
Number of sources: 1
Writing Style: APA
Type of document: Essay
Academic Level:Master
Category: Psychology
Language Style: English (U.S.)
Order Instructions: Attached
I will upload the instructions
Reference/Module
Learning Objectives
•Explain what the χ2 goodness-of-fit test is and what it does.
•Calculate a χ2 goodness-of-fit test.
•List the assumptions of the χ2 goodness-of-fit test.
•Calculate the χ2 test of independence.
•Interpret the χ2 test of independence.
•Explain the assumptions of the χ2 test of independence.
The Chi-Square (χ2) Goodness-of-Fit Test: What It Is and What It Does
The chi-square (χ2) goodness-of-fit test is used for comparing categorical information against what we would expect based on previous knowledge. As such, it tests what are called observed frequencies (the frequency with which participants fall into a category) against expected frequencies (the frequency expected in a category if the sample data represent the population). It is a nondirectional test, meaning that the alternative hypothesis is neither one-tailed nor two-tailed. The alternative hypothesis for a χ2 goodness-of-fit test is that the observed data do not fit the expected frequencies for the population, and the null hypothesis is that they do fit the expected frequencies for the population. There is no conventional way to write these hypotheses in symbols, as we have done with the previous statistical tests. To illustrate the χ2 goodness-of-fit test, let's look at a situation in which its use would be appropriate.
chi-square (χ2) goodness-of-fit test A nonparametric inferential procedure that determines how well an observed frequency distribution fits an expected distribution.
observed frequencies The frequency with which participants fall into a category.
expected frequencies The frequency expected in a category if the sample data represent the population.
Calculations for the χ2 Goodness-of-Fit Test
Suppose that a researcher is interested in determining whether the teenage pregnancy rate at a particular high school is different from the rate statewide. Assume that the rate statewide is 17%. A random sample of 80 female students is selected from the target high school. Seven of the students are either pregnant now or have been pregnant previously. The χ2 goodness-of-fit test measures the observed frequencies against the expected frequencies. The observed and expected frequencies are presented in Table 21.1.

TABLE 21.1 Observed and expected frequencies for χ2 goodness-of-fit example

FREQUENCIES    PREGNANT    NOT PREGNANT
Observed           7            73
Expected          14            66
As can be seen in the table, the observed frequencies represent the number of high school females in the sample of 80 who were pregnant versus not pregnant. The expected frequencies represent what we would expect based on chance, given what is known about the population. In this case, we would expect 17% of the females, or 0.17 × 80 = 13.6 (rounded to 14 in the table), to be pregnant.
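The computation for this example can be carried out in a few lines. The sketch below uses the exact expected counts 0.17 × 80 = 13.6 and 0.83 × 80 = 66.4 (the table rounds these to whole students, which changes the result slightly).

```python
# Worked chi-square goodness-of-fit computation for the pregnancy example.

def chi_square_gof(observed, expected):
    """Chi-square goodness-of-fit statistic: sum of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [7, 73]                 # pregnant, not pregnant
expected = [0.17 * 80, 0.83 * 80]  # 13.6, 66.4
chi2 = chi_square_gof(observed, expected)
print(round(chi2, 3))  # 3.859, just above the df = 1, alpha = .05 cutoff of 3.841
```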
Pierre E. Jacob gave a presentation on Bayesian inference with models made of modules. He discussed several issues that can arise when using a joint modeling approach, including computational challenges as more modules are added and parameters becoming harder to interpret. He proposed two alternative approaches: the plug-in approach, which ignores uncertainty about the first module, and the cut approach, which propagates uncertainty between modules but prevents feedback. The cut approach defines a cut distribution that samples from the posterior of the first module and then the second module conditioned on the first, cutting off feedback in the Bayesian graph.
Computation of Moments in Group Testing with Re-testing and with Errors in In...CSCJournals
Screening of pooled urine samples was suggested during the Second World War as a method for reducing the cost of detecting syphilis in U.S. soldiers. In recent studies, grouping has been used in epidemiological screening for human immunodeficiency virus (HIV/AIDS) antibodies to help curb the spread of the virus. It reduces the cost of testing and, more importantly, offers a feasible way to lower the misclassifications associated with labeling samples when imperfect tests are used. Furthermore, misclassifications can be reduced by employing a re-testing design in a group testing procedure. This study develops a computational statistical model for classifying a large sample of interest based on a proposed design of group testing with re-testing. The model permits computation of moments on the number of tests and misclassifications arising in this design. Simulated data from a multinomial distribution (specifically a trinomial distribution) is used to illustrate these computations. The study establishes that re-testing reduces misclassifications significantly and remains stable at high incidence probabilities compared with the Dorfman procedure, although re-testing comes at a cost, i.e., an increase in the number of tests. The re-testing considered reduces the sensitivity of the testing scheme but at the same time improves its specificity.
This paper presents concepts of the Bernoulli distribution and how it can be used as an approximation of the binomial, Poisson, and Gaussian distributions, with a different approach from the earlier existing literature. Due to the discrete nature of the random variable X, the principle of mathematical induction (PMI) is used as a more appropriate alternative approach to the limiting behavior of the binomial random variable. The study proves the de Moivre–Laplace theorem (convergence of the binomial distribution to the Gaussian distribution) for all values of p such that p ≠ 0 and p ≠ 1, using a direct approach that opposes the popular and most widely used indirect method of moment generating functions.
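The de Moivre–Laplace limit can be checked numerically: for large n, the Binomial(n, p) probability at k approaches the Gaussian density with mean np and variance np(1 − p). This only illustrates the classical statement, not the paper's PMI-based proof; the parameters n, p, k below are arbitrary.

```python
# Numeric check of the de Moivre-Laplace approximation at the mean.
import math

def binom_pmf(n, k, p):
    """Exact Binomial(n, p) probability of k successes."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def normal_pdf(x, mu, sigma):
    """Gaussian density with mean mu and standard deviation sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

n, p = 1000, 0.3
k = 300  # evaluate at the mean n*p
ratio = binom_pmf(n, k, p) / normal_pdf(k, n * p, math.sqrt(n * p * (1 - p)))
print(ratio)  # very close to 1 for n this large
```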
Bipolar Disorder Investigation Using Modified Logistic Ridge EstimatorIOSR Journals
This study investigates factors contributing to bipolar disorder among 109 teaching and non-teaching university staff using a modified logistic ridge estimator. Sex, age, occupation, and body mass index were analyzed as factors beyond traditional genetic, environmental, and neurochemical factors. The modified logistic ridge estimator was found to have smaller standard errors than the standard logistic ridge estimator or the logistic estimator. The results found that sex, age, and body mass index significantly contribute to bipolar disorder, with men and older adults (≥40 years) being more predisposed, and higher body mass index positively correlated with bipolar disorder. The probability of bipolar tendency was highest for non-teaching males aged 40 or older with the highest recorded body mass index.
Similar to Capture recapture estimation for elusive events with two lists (20)
Abnormalities of hormones and inflammatory cytokines in women affected with p...Alexander Decker
Women with polycystic ovary syndrome (PCOS) have elevated levels of hormones like luteinizing hormone and testosterone, as well as higher levels of insulin and insulin resistance compared to healthy women. They also have increased levels of inflammatory markers like C-reactive protein, interleukin-6, and leptin. This study found these abnormalities in the hormones and inflammatory cytokines of women with PCOS ages 23-40, indicating that hormone imbalances associated with insulin resistance and elevated inflammatory markers may worsen infertility in women with PCOS.
A usability evaluation framework for b2 c e commerce websitesAlexander Decker
This document presents a framework for evaluating the usability of B2C e-commerce websites. It involves user testing methods like usability testing and interviews to identify usability problems in areas like navigation, design, purchasing processes, and customer service. The framework specifies goals for the evaluation, determines which website aspects to evaluate, and identifies target users. It then describes collecting data through user testing and analyzing the results to identify usability problems and suggest improvements.
A universal model for managing the marketing executives in nigerian banksAlexander Decker
This document discusses a study that aimed to synthesize motivation theories into a universal model for managing marketing executives in Nigerian banks. The study was guided by Maslow and McGregor's theories. A sample of 303 marketing executives was used. The results showed that managers will be most effective at motivating marketing executives if they consider individual needs and create challenging but attainable goals. The emerged model suggests managers should provide job satisfaction by tailoring assignments to abilities and monitoring performance with feedback. This addresses confusion faced by Nigerian bank managers in determining effective motivation strategies.
A unique common fixed point theorems in generalized dAlexander Decker
This document presents definitions and properties related to generalized D*-metric spaces and establishes some common fixed point theorems for contractive type mappings in these spaces. It begins by introducing D*-metric spaces and generalized D*-metric spaces, defines concepts like convergence and Cauchy sequences. It presents lemmas showing the uniqueness of limits in these spaces and the equivalence of different definitions of convergence. The goal of the paper is then stated as obtaining a unique common fixed point theorem for generalized D*-metric spaces.
A trends of salmonella and antibiotic resistanceAlexander Decker
This document provides a review of trends in Salmonella and antibiotic resistance. It begins with an introduction to Salmonella as a facultative anaerobe that causes nontyphoidal salmonellosis. The emergence of antimicrobial-resistant Salmonella is then discussed. The document proceeds to cover the historical perspective and classification of Salmonella, definitions of antimicrobials and antibiotic resistance, and mechanisms of antibiotic resistance in Salmonella including modification or destruction of antimicrobial agents, efflux pumps, modification of antibiotic targets, and decreased membrane permeability. Specific resistance mechanisms are discussed for several classes of antimicrobials.
A transformational generative approach towards understanding al-istifhamAlexander Decker
This document discusses a transformational-generative approach to understanding Al-Istifham, which refers to interrogative sentences in Arabic. It begins with an introduction to the origin and development of Arabic grammar. The paper then explains the theoretical framework of transformational-generative grammar that is used. Basic linguistic concepts and terms related to Arabic grammar are defined. The document analyzes how interrogative sentences in Arabic can be derived and transformed via tools from transformational-generative grammar, categorizing Al-Istifham into linguistic and literary questions.
A time series analysis of the determinants of savings in namibiaAlexander Decker
This document summarizes a study on the determinants of savings in Namibia from 1991 to 2012. It reviews previous literature on savings determinants in developing countries. The study uses time series analysis including unit root tests, cointegration, and error correction models to analyze the relationship between savings and variables like income, inflation, population growth, deposit rates, and financial deepening in Namibia. The results found inflation and income have a positive impact on savings, while population growth negatively impacts savings. Deposit rates and financial deepening were found to have no significant impact. The study reinforces previous work and emphasizes the importance of improving income levels to achieve higher savings rates in Namibia.
A therapy for physical and mental fitness of school childrenAlexander Decker
This document summarizes a study on the importance of exercise in maintaining physical and mental fitness for school children. It discusses how physical and mental fitness are developed through participation in regular physical exercises and cannot be achieved solely through classroom learning. The document outlines different types and components of fitness and argues that developing fitness should be a key objective of education systems. It recommends that schools ensure pupils engage in graded physical activities and exercises to support their overall development.
A theory of efficiency for managing the marketing executives in nigerian banksAlexander Decker
This document summarizes a study examining efficiency in managing marketing executives in Nigerian banks. The study was examined through the lenses of Kaizen theory (continuous improvement) and efficiency theory. A survey of 303 marketing executives from Nigerian banks found that management plays a key role in identifying and implementing efficiency improvements. The document recommends adopting a "3H grand strategy" to improve the heads, hearts, and hands of management and marketing executives by enhancing their knowledge, attitudes, and tools.
This document discusses evaluating the link budget for effective 900MHz GSM communication. It describes the basic parameters needed for a high-level link budget calculation, including transmitter power, antenna gains, path loss, and propagation models. Common propagation models for 900MHz that are described include Okumura model for urban areas and Hata model for urban, suburban, and open areas. Rain attenuation is also incorporated using the updated ITU model to improve communication during rainfall.
A synthetic review of contraceptive supplies in punjabAlexander Decker
This document discusses contraceptive use in Punjab, Pakistan. It begins by providing background on the benefits of family planning and contraceptive use for maternal and child health. It then analyzes contraceptive commodity data from Punjab, finding that use is still low despite efforts to improve access. The document concludes by emphasizing the need for strategies to bridge gaps and meet the unmet need for effective and affordable contraceptive methods and supplies in Punjab in order to improve health outcomes.
A synthesis of taylor’s and fayol’s management approaches for managing market...Alexander Decker
1) The document discusses synthesizing Taylor's scientific management approach and Fayol's process management approach to identify an effective way to manage marketing executives in Nigerian banks.
2) It reviews Taylor's emphasis on efficiency and breaking tasks into small parts, and Fayol's focus on developing general management principles.
3) The study administered a survey to 303 marketing executives in Nigerian banks to test if combining elements of Taylor and Fayol's approaches would help manage their performance through clear roles, accountability, and motivation. Statistical analysis supported combining the two approaches.
A survey paper on sequence pattern mining with incrementalAlexander Decker
This document summarizes four algorithms for sequential pattern mining: GSP, ISM, FreeSpan, and PrefixSpan. GSP is an Apriori-based algorithm that incorporates time constraints. ISM extends SPADE to incrementally update patterns after database changes. FreeSpan uses frequent items to recursively project databases and grow subsequences. PrefixSpan also uses projection but claims to not require candidate generation. It recursively projects databases based on short prefix patterns. The document concludes by stating the goal was to find an efficient scheme for extracting sequential patterns from transactional datasets.
A survey on live virtual machine migrations and its techniquesAlexander Decker
This document summarizes several techniques for live virtual machine migration in cloud computing. It discusses works that have proposed affinity-aware migration models to improve resource utilization, energy efficient migration approaches using storage migration and live VM migration, and a dynamic consolidation technique using migration control to avoid unnecessary migrations. The document also summarizes works that have designed methods to minimize migration downtime and network traffic, proposed a resource reservation framework for efficient migration of multiple VMs, and addressed real-time issues in live migration. Finally, it provides a table summarizing the techniques, tools used, and potential future work or gaps identified for each discussed work.
A survey on data mining and analysis in hadoop and mongo dbAlexander Decker
This document discusses data mining of big data using Hadoop and MongoDB. It provides an overview of Hadoop and MongoDB and their uses in big data analysis. Specifically, it proposes using Hadoop for distributed processing and MongoDB for data storage and input. The document reviews several related works that discuss big data analysis using these tools, as well as their capabilities for scalable data storage and mining. It aims to improve computational time and fault tolerance for big data analysis by mining data stored in Hadoop using MongoDB and MapReduce.
1. The document discusses several challenges for integrating media with cloud computing including media content convergence, scalability and expandability, finding appropriate applications, and reliability.
2. Media content convergence challenges include dealing with the heterogeneity of media types, services, networks, devices, and quality of service requirements as well as integrating technologies used by media providers and consumers.
3. Scalability and expandability challenges involve adapting to the increasing volume of media content and being able to support new media formats and outlets over time.
This document surveys trust architectures that leverage provenance in wireless sensor networks. It begins with background on provenance, which refers to the documented history or derivation of data. Provenance can be used to assess trust by providing metadata about how data was processed. The document then discusses challenges for using provenance to establish trust in wireless sensor networks, which have constraints on energy and computation. Finally, it provides background on trust, which is the subjective probability that a node will behave dependably. Trust architectures need to be lightweight to account for the constraints of wireless sensor networks.
This document discusses private equity investments in Kenya. It provides background on private equity and discusses trends in various regions. The objectives of the study discussed are to establish the extent of private equity adoption in Kenya, identify common forms of private equity utilized, and determine typical exit strategies. Private equity can involve venture capital, leveraged buyouts, or mezzanine financing. Exits allow recycling of capital into new opportunities. The document provides context on private equity globally and in developing markets like Africa to frame the goals of the study.
This document discusses a study that analyzes the financial health of the Indian logistics industry from 2005-2012 using Altman's Z-score model. The study finds that the average Z-score for selected logistics firms was in the healthy to very healthy range during the study period. The average Z-score increased from 2006 to 2010 when the Indian economy was hit by the global recession, indicating the overall performance of the Indian logistics industry was good. The document reviews previous literature on measuring financial performance and distress using ratios and Z-scores, and outlines the objectives and methodology used in the current study.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Mathematical Theory and Modeling www.iiste.org
ISSN 2224-5804 (Paper) ISSN 2225-0522 (Online)
Vol.2, No.8, 2012
Capture-Recapture Estimation for Elusive Events with Two Lists
Danjuma Jibasen1, Waheed Babatunde Yahya2*, Emanuel Teju Jolayemi2
1. Department of Statistics and Operations Research, Modibbo Adama University of Technology,
Yola, Nigeria.
2. Department of Statistics, University of Ilorin, P.M.B. 1515, Ilorin, Nigeria.
*E-mail of the corresponding author: wb_yahya@daad-alumni.de
Abstract
The application of capture-recapture methods to the estimation of population parameters for epidemiologic and demographic events has been growing recently. This paper presents a robust capture-recapture model for estimating the size of elusive epidemiologic events. The proposed estimator N̂, the Petersen estimator N̂s and another estimator N̂q were compared using the Akaike Information Criterion (AIC) and the Mean Absolute Deviation (MAD) through simulation studies. Both the AIC and the MAD revealed that N̂ is the better and more robust estimator. It was discovered that N̂s underestimates the total elusive population N and N̂q overestimates N, while N̂ was always consistent and performed better than the other two. N̂ is particularly better with lower recaptures ('relistings') n11, which is the case with elusive events. The Petersen estimator N̂s breaks down in the presence of elusiveness. The other estimator, N̂q, always overestimates, except in a few cases. It is therefore recommended that the proposed estimator N̂ be used for estimating dual-system elusive events.
Keywords: Trap Response; Dual System Estimation; Elusive events; Petersen estimator; capture-recapture.
1. Introduction
Dual System Estimation (DSE) is the nomenclature given to two samples Capture-Recapture Method when
applied to populations other than animals (Seber, 1965 and IWGDMF, 1990). The classical method of estimation
for this type of experiment was developed and applied to ecological problem by Petersen in 1896, using tagged
plaice, and Lincoln in 1930 who used band returns to estimate the size of the North American waterfowl
population as reported by Seber (1982a). Thus, the two-sample capture-recapture method is tagged
Lincoln-Petersen methods in some literature (Seber, 1982a; Pollock, 1991; Haines et al., 2000). Since the work of
Petersen and Lincoln, several authors have applied their model and its modified form under varying situations.
Sekar and Deming (1949), for instance, estimated birth and death rates using two lists: the registrars' list (R) and the interviewers' list (I) obtained from a complete house-to-house canvass. They also discussed theoretically that stratification can be used to improve the Petersen estimates when heterogeneity is thought to affect the estimates. Shapiro (1949) used the Petersen estimates to estimate birth registration completeness in the United States. Birth records on file for infants born during the period December 1, 1939 to March 31, 1940 were matched against infant cards filled out by enumerators and a set of death records on file for children whose birthdates were within the test period but who died before the census date of April 1, 1940.
Alho (1990) introduced an estimator of the unknown population size in a dual registration process based on a
logistic regression model. His model allows different capture probabilities across individuals and across capture
times. The probabilities are estimated from the observed data using conditional maximum likelihood. Alho
developed his method because the classical estimator is known to be biased under population heterogeneity (Seber,
1982b; Burnham and Overton, 1978; Alho, 1990).
A Bayesian modification of the Lincoln index was given by Gaskell and George (1972). They observed that when the recaptures (n11) are small, the intervals between possible values of N̂ for fixed n1. and n.1 are large, and that the formula (2.2) is at best a poor estimator of the total animal population N unless n11 > 10. They incorporated a prior based on the belief that the experimenter begins with an idea about the value of N; this may only be possible in animal experiments. It is not possible in demographic or epidemiologic elusive events. For other applications and modifications of the Petersen estimator for estimating animal populations, see Schwarz and Seber (1999).
It was discovered that the Petersen Method is sensitive to the marginal totals and that it depends so much on
the sampling effort (Jibasen, 2011). That is, if the sampling effort is poor yielding fewer n11, the Petersen
estimator gets poorer, in line with the observation given by Gaskell and George (1972).
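This sensitivity to low recaptures can be seen in a short Monte Carlo sketch of the Petersen estimator N̂ = n1.·n.1/n11 (the population size, sample sizes, and seed below are hypothetical illustrations, not values from the study):

```python
import random
from statistics import pstdev

def simulate_petersen(N, n1, n2, reps, seed=0):
    """Draw two samples without replacement from a closed population of
    size N and collect the Petersen estimates n1*n2/n11, skipping the
    undefined case n11 = 0."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(reps):
        first = set(rng.sample(range(N), n1))       # marked on first capture
        second = rng.sample(range(N), n2)           # second, independent sample
        n11 = sum(1 for i in second if i in first)  # recaptures
        if n11 > 0:
            estimates.append(n1 * n2 / n11)
    return estimates

# Small samples yield few expected recaptures (E[n11] = 50*40/2000 = 1)
# and wildly unstable estimates; larger samples stabilize them near N.
low = simulate_petersen(N=2000, n1=50, n2=40, reps=500)
high = simulate_petersen(N=2000, n1=500, n2=400, reps=500)
print(pstdev(low) > pstdev(high))  # True
```

With so few recaptures the estimate can only take the widely spaced values n1·n2/1, n1·n2/2, …, which is exactly the coarse-grained behaviour Gaskell and George noted.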
It is known in capture-recapture experiments that whether an individual is caught or not depends on a variety of circumstances. One of these is trap response, in which individuals may exhibit an enduring behavioral response to the first capture. That is, after an individual has been captured once, the individual has a long
memory of its first-capture experience and the effect lasts for the remainder of the experiment, leading to a
higher (trap-happy) or lower (trap-shy) capture probability for all subsequent recaptures. Behavior of individuals
towards the trap will usually give wrong population estimates: trap-shyness results in overestimation of the population size, while trap-happiness results in underestimation. Trap response will generally affect the size of the second catch (marginal total). In applying this method to elusive populations, individuals always hide or run once arrested, or even on hearing about the presence of law enforcement agents. This is similar to trap-shyness; that is, they do everything to avoid re-arrest. Thus an individual may not be easily re-arrested, leading to a lower recapture ('relisting') probability. In this work, therefore, a novel capture-recapture estimation procedure for an elusive population is proposed.
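The trap-shyness effect described above can be sketched numerically. In this illustration (the population size, listing probabilities, and the `shy_factor` parameter are all hypothetical choices, not part of the proposed model), previously listed individuals reappear with a reduced probability, and the Petersen estimate inflates well above the true N:

```python
import random
from statistics import mean

def petersen_with_trap_response(N, n1, p2, shy_factor, reps, seed=1):
    """Mean Petersen estimate when previously listed ('marked') individuals
    appear on the second list with probability p2 * shy_factor instead of p2."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(reps):
        marked = set(rng.sample(range(N), n1))  # listed by system 1
        n11 = n21 = 0
        for i in range(N):
            caught_before = i in marked
            p = p2 * shy_factor if caught_before else p2
            if rng.random() < p:                # listed by system 2
                if caught_before:
                    n11 += 1                    # recaptured ('relisted')
                else:
                    n21 += 1                    # newly listed
        if n11 > 0:
            estimates.append(n1 * (n11 + n21) / n11)
    return mean(estimates)

no_response = petersen_with_trap_response(1000, 300, 0.4, 1.0, 200)  # centres near N = 1000
trap_shy = petersen_with_trap_response(1000, 300, 0.4, 0.5, 200)     # well above N
print(round(no_response), round(trap_shy))
```

Halving the recapture probability shrinks n11 relative to the second-list total, so the ratio n1.·n.1/n11 overshoots, matching the overestimation attributed to trap-shyness.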
2. Methodology
Consider a closed population of N individuals. Let a sample of size n1. be drawn from N without replacement, marked and returned to the population. For simplicity, we call this system 1, as presented in Table 1 in the appendix. Here, elements of the n1. sample may represent a set of drug addicts that were caught by narcotics officials (system 1), possibly reformed, and released back into the population. If, at a second time, another sample of size n.1 is drawn from the same closed population without replacement using system 2, the interest is to examine which members of the first sample n1. (say, of drug addicts) were caught the second time (i.e. n11, recaptured) by system 2.
The Lincoln-Petersen experiment is the simplest capture-recapture method for estimating the size N of a closed population. It consists of catching, marking, and releasing a sample (sample 1) of n1. animals. After allowing the marked animals to mix with the unmarked, a second sample of size n.1 is taken from the population. In demography, this is known as dual system estimation (Ericksen and Kadane, 1985). Equating the proportion of the marked recovered in the second sample (n11) to the population proportion n1./N leads to an estimate of N.
Mathematically, we have that

n11 / n.1 = n1. / N    (2.1)

which simply leads to the estimate

N̂ = (n1. × n.1) / n11    (2.2)
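The estimator in (2.2) is a one-line computation; a minimal sketch, with hypothetical dual-system counts chosen purely for illustration:

```python
def petersen_estimate(n1_dot, n_dot1, n11):
    """Lincoln-Petersen estimate from (2.2): N_hat = n1. * n.1 / n11."""
    if n11 == 0:
        raise ValueError("the Petersen estimate is undefined with no recaptures")
    return n1_dot * n_dot1 / n11

# Hypothetical counts: 200 individuals on list 1, 150 on list 2,
# and 30 appearing on both lists.
print(petersen_estimate(200, 150, 30))  # 1000.0
```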
If n1. and n.1 are regarded as constants (that is, a random sample is taken without replacement), then from Table 1, n11 has a hypergeometric distribution of the form

P(n11) = C(n1., n11) C(N − n1., n.1 − n11) / C(N, n.1),   n11 = max(0, n1. + n.1 − N), …, min(n1., n.1)    (2.3)

and P(n11) = 0 otherwise, where C(a, b) denotes the binomial coefficient "a choose b".
Sometimes sample 2 is taken with replacement; in this case n11 has a binomial distribution. According to Seber (1982a), the use of the hypergeometric distribution emphasizes the fact that it is basically the activity of the experimenter that brings about randomness. However, another approach, in which randomness is related to the activity of the animals, considers the N animals in the population as N independent multinomial trials, each with the same probability of belonging to a given capture-recapture category. These categories are: caught in the first sample only (n12), caught in the second sample only (n21), caught in both samples (n11), and caught in neither sample (n22).
The assumptions of the Petersen estimator are well known (Seber, 1982a; Pollock, 1991). These are:
i.) The population size is closed so that N is constant.
ii.) For each sample, each individual has the same probability of being included in the sample.
iii.) Marking does not affect the catchability of animals.
iv.) Animals do not lose their marks between samples.
v.) All marks (or tags) are reported on recovery in the second sample.
vi.) The animals are independent of one another as far as catching is concerned.
The third assumption above does not hold in the presence of trap response, which is similar to elusiveness in human populations.
The assumptions for the elusive events can be reformulated as follows:
i.) The population size is closed, that is, no addition is allowed at the time of data collection.
ii.) Individuals can be matched from list to list; that is, individuals will not change their names or identities when moving from one system to another.
iii.) Catching (listing) does affect the catchability (“listability”) of individuals.
iv.) The listing systems may not be independent.
Solving (2.2) for n11 yields n11 = n1. n.1 / N̂, or more generally,

E(n_{11}) = \frac{n_{1.}\, n_{.1}}{N}.    (2.4)

It can be seen that (2.4) is the expected value of the hypergeometric random variable in (2.3); intuitively, then, method (2.2) is at its "best" when the recaptures n11 (those re-listed, in epidemiologic terms) follow the hypergeometric distribution.
For demographic and epidemiologic elusive events, one cannot influence the recaptures, since lists form the sampling occasions. Moreover, for such events the recaptures are relatively small, yielding low recapture (re-listing) probabilities. Hence, we seek to estimate N via a Horvitz-Thompson method, in which the listing probabilities ('listabilities') of all individuals are used. This approach was used by Huggins (1991), who proposed modeling the capture probabilities of animals. Huggins used a form of the Horvitz-Thompson estimator in which the capture probabilities \hat{p}_k of the animals are estimated, for which

\hat{N} = \sum_{k=1}^{r} \frac{1}{\hat{p}_k}.    (2.5)

For dual-system estimates (single recaptures), this yields exactly the Petersen estimator (see Jibasen, 2011). That is,

\hat{p}_k = 1 - \prod_{s=1}^{2} (1 - \hat{p}_s) = 1 - (1 - \hat{p}_{1.})(1 - \hat{p}_{.1}).    (2.6)

It can be shown that \hat{p}_k = r/\hat{N}, where r = n1. + n.1 - n11 is the number of distinct individuals listed and \hat{N} is the Petersen estimator (denoted \hat{N}_s in this work). That is, the Huggins (1991) method is related to the Petersen estimator by

\hat{N}_s = \frac{r}{\hat{p}_k}.
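This relation is easy to confirm numerically. In the sketch below (ours; taking the system-wise listabilities as p̂1. = n1./N̂s and p̂.1 = n.1/N̂s is an assumption consistent with (2.6), not code from the paper), the complement rule reproduces p̂_k = r/N̂_s and hence recovers the Petersen estimate as r/p̂_k:

```python
def huggins_vs_petersen(n1_dot: int, n_dot1: int, n11: int):
    """Check eq. (2.6): 1 - (1 - p1.)(1 - p.1) equals r / Ns,
    so that r / p_hat recovers the Petersen estimate."""
    Ns = n1_dot * n_dot1 / n11          # Petersen estimate, eq. (2.2)
    p1 = n1_dot / Ns                    # listability under system 1
    p2 = n_dot1 / Ns                    # listability under system 2
    p_hat = 1 - (1 - p1) * (1 - p2)     # eq. (2.6)
    r = n1_dot + n_dot1 - n11           # distinct individuals listed
    return p_hat, r / Ns, r / p_hat

p_hat, r_over_Ns, Ns_back = huggins_vs_petersen(50, 10, 7)
print(abs(p_hat - r_over_Ns) < 1e-12, round(Ns_back))  # True 71
```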
Here we replace \hat{p}_k with another probability, which we call the 'coverage probability', given as

\hat{p}_c = \frac{\sum_{x=1}^{s} x f_x}{s \sum_{x=1}^{s} f_x}    (2.7)

(see Seber, 1982b), where x = 0, 1, ..., s is the number of times an individual is listed, f_x is the frequency of individuals occurring x times, and s is the number of systems (sources).

For a two-system formulation,

\hat{p}_c = \frac{\sum_{x=1}^{s} x f_x}{s \sum_{x=1}^{s} f_x} = \frac{n}{sr}    (2.8)

where r is the number of distinct individuals listed (caught), n is the total number of individuals on both lists, and s is the number of systems. Following the multinomial setting, the joint probability density function for this model is given by Jibasen (2011) as

f(n_{1.}, n_{.1}, n_{11}) = \binom{N}{n_{1.}} \binom{N-n_{1.}}{n_{.1}-n_{11}} \binom{n_{1.}}{n_{11}} \, p_c^{\,n_{1.}+n_{.1}} (1-p_c)^{\,sN-n_{1.}-n_{.1}}.    (2.9)
The maximum likelihood estimator (MLE) of p_c is \hat{p}_c = n/(sr), which finally leads to our proposed estimator

\hat{N}_c = \sum_{k=1}^{r} \frac{1}{\hat{p}_c} = \frac{r}{\hat{p}_c} = \frac{sr^2}{n};    (2.10)

that is, \hat{N}_c is based on all listed individuals. The listing probabilities ('listabilities') for this model are those of a random sample of all individuals in the population.
Another estimator used in this work is the no-factor model estimator \hat{N}_q (Jibasen, 2011), given as

\hat{N}_q = \frac{n^2}{4 n_{11}}.    (2.11)

Under this model it is assumed that all individuals have an equal chance of being listed (caught) regardless of system, occasion, or individual heterogeneity. This assumption is unrealistic, especially for elusive events, in which a drug addict caught the first time will develop measures to avoid being caught the second or subsequent times, and will therefore have different probabilities of being caught at different times and by different systems. The newly proposed estimator \hat{N}_c caters for this more realistic situation.
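A small sketch (ours) of the two closed-form estimators above, evaluated on the first row of Table 2 (n1. = 50, n.1 = 10, n11 = 7), reproduces the tabulated values:

```python
def n_c(n1_dot: int, n_dot1: int, n11: int, s: int = 2) -> float:
    """Proposed estimator, eq. (2.10): Nc = r / p_c with p_c = n / (s r)."""
    n = n1_dot + n_dot1        # total listings on both lists
    r = n - n11                # distinct individuals listed
    return s * r * r / n       # equivalently r / (n / (s * r))

def n_q(n1_dot: int, n_dot1: int, n11: int) -> float:
    """No-factor model estimator, eq. (2.11): Nq = n**2 / (4 * n11)."""
    n = n1_dot + n_dot1
    return n * n / (4 * n11)

print(round(n_c(50, 10, 7)), round(n_q(50, 10, 7)))  # 94 129 (cf. Table 2)
```

Note that \hat{N}_c depends on n11 only through r = n - n11, which is why it varies far less than the other two estimators when the recaptures are few.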
3. Simulation Studies
Simulation was carried out under the hypergeometric setting (2.3) using equation (2.2), where the population size N was assumed, the marginal totals n1. and n.1 were fixed, and the recaptures n11 were generated randomly. The values assumed for N were 90, 100, 300, 500, 1000, 2000 and 3000, with the corresponding values of n1. and n.1 fixed at 50, 50, 150, 260, 450, 900, 1500 and 10, 10, 80, 30, 40, 80, 100 respectively. From the above, the values of n2. needed to simulate the number of recaptures n11 are obtained simply as the difference N - n1.. For each triplet (N, n1., n.1) as specified above, the simulation scheme was repeated ten times and the size of the elusive population N was estimated using the two existing estimators N̂_q and N̂_s and the newly proposed estimator N̂_c considered in this study. The performance of each estimator was assessed using the Akaike Information Criterion (AIC) and the Mean Absolute Deviation (MAD). All simulations and data analyses were performed in the R statistical environment (R Development Core Team, 2011; http://www.R-project.org).
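The scheme can be reproduced end to end; the sketch below is our stand-alone Python rendering (the paper's own code was in R, and the fixed seed and the redraw rule for the rare n11 = 0 case are our assumptions). It generates n11 by drawing the second list without replacement, computes the three estimates, and scores each estimator by its MAD.

```python
import random

def estimates(n1_dot: int, n_dot1: int, n11: int):
    """(Nq, Ns, Nc) from eqs. (2.11), (2.2) and (2.10) with s = 2 systems."""
    n = n1_dot + n_dot1                  # total listings on both lists
    r = n - n11                          # distinct individuals listed
    return n * n / (4 * n11), n1_dot * n_dot1 / n11, 2 * r * r / n

def simulate(N: int, n1_dot: int, n_dot1: int, reps: int = 10, seed: int = 1):
    """Hypergeometric scheme of Section 3: individuals 0..n1.-1 form the
    marked first list; each repetition draws the second list of size n.1
    without replacement and counts the recaptures n11."""
    rng = random.Random(seed)
    rows = []
    for _ in range(reps):
        n11 = 0
        while n11 == 0:                  # redraw in the rare event of no
            second = rng.sample(range(N), n_dot1)       # recaptures, where
            n11 = sum(1 for i in second if i < n1_dot)  # Petersen is undefined
        rows.append((n11,) + estimates(n1_dot, n_dot1, n11))
    return rows

def mad(values, N):
    """Mean absolute deviation of a column of estimates from the true N."""
    return sum(abs(v - N) for v in values) / len(values)

rows = simulate(90, 50, 10)
for j, name in enumerate(("Nq", "Ns", "Nc"), start=1):
    print(name, round(mad([row[j] for row in rows], 90), 1))
```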
4. Results
It is observed from Table 2 that our new proposed estimator ܰ provides better estimates of N than either the
Various results from the simulations carried out are presented in Table 2 and Tables A1 to A6 in the appendix.
ܰ or ܰ௦ estimator as evident from the results of the AIC and MAD. However, at the seventh iteration of the
simulation (Table 2) where ݊. ൌ ݊ଵଵ , the ܰ yielded a perfect estimate of N like ܰ in many instances. This
better performance of ܰ at this iteration level could be attributed to chance factor since it is just one out of
several results obtained. The performances of the three estimators based on their AIC values are presented by the
in Figure 1. The superiority of ܰ over the other two estimators is clearly shown by this plot.
plot of their respective AIC estimates versus the number of times the simulation scheme was repeated as shown
Appendix indicated that the new estimator ܰ consistently performed better than both the Petersen estimator
When the size of the elusive population N increases from 90 to 100, results of AIC and MAD in Table 3 in the
ܰ௦ and ܰ . More specifically, it is observed that as the number of recapture (relisting) ݊ଵଵ gets fewer, both the
ܰ௦ and ܰ estimators get poorer while the ܰ estimator continues to yield better estimates of the intended size
Without loss of generality, various results obtained in this work showed that the new proposed estimator ܰ of
of the elusive population.
the size of elusive population N is more efficient and robust than the other two (ܰ ௦ and ܰ ) based on the AIC
For better understanding of the performances of the three estimators ܰ௦ , ܰ and ܰ under different
and MAD criteria employed as evident from various tables of results (Tables 2 to 8 in the appendix).
simulation schemes as defined by the triplet (ܰ, ݊ଵ. , ݊.ଵ ), we plotted their respective AIC values against the
number of times the simulation was performed (iterations) as presented in Figure 2 in the appendix. Only the
graphs for (300, 150, 80), (500, 260, 30), (1000, 450, 40) and (3000, 1500, 100) schemes based on the results in
in Figure 2 that the AIC values of our new estimator ܰ is relatively smaller and more stable (less variable) than
Tables 4, 5, 6 and 8 respectively are presented in Figure 2 due to space. It is easily observed from the four plots
that of ܰ
௦ and ܰ across all the ten repetitions irrespective the size of the elusive population.
5. Discussion and Conclusion
This work presents a novel estimator for capture-recapture events in elusive populations. A thorough comparison of our proposed estimator with two existing ones through simulation studies clearly shows the superiority of our estimator at estimating the size of the elusive population N. The various results in Table 2 and Tables 3 to 8 in the appendix show that the estimator N̂_q always overestimates the size of the elusive population N and N̂_s underestimates N, while our newly proposed estimator N̂_c always performs better in terms of the closeness of its estimates to the target population size. Further, the results show that N̂_c is a better estimator of the targeted elusive population irrespective of the population size. Our proposed estimator also performs better with fewer 're-listings' n11, which is the situation of elusiveness.

In other words, the Petersen estimator is at its best when the expected value of n11 follows the hypergeometric distribution; but the use of the hypergeometric distribution emphasizes the fact that it is basically the activity of the experimenter that brings about randomness. Elusive events are such that the experimenter cannot influence randomness. Thus, the Petersen estimator performs very poorly with fewer 're-listings' n11, which is the case for elusive populations.

Results from this work suggest that neither N̂_q nor N̂_s can be used to estimate the size of elusive populations. The work established that the newly proposed N̂_c is a better model compared with N̂_q and N̂_s (the Petersen estimator); this suggests that for elusive epidemiologic populations the popular Petersen model cannot be used, and in such cases our proposed estimator N̂_c is the better estimator. In the literature, the estimation of these types of events (with N̂_q and N̂_s) has focused on multiple recaptures, but some demographic events are such that multiple recaptures may not be possible. This is the case with elusive populations; hence the need for the proposed method for estimating the size of such events.
References
Alho, J.M. (1990). Logistic regression in capture-recapture models. Biometrics 46, 623-635.
Ericksen, E.P. & Kadane, J.B. (1985). Estimating the population in a census year. Journal of the American Statistical Association 80, 98-109.
Haines, D.E., Pollock, K.H. & Pantula, S.G. (2000). Population size and total estimation when sampling from incomplete list frames with heterogeneous inclusion probabilities. Biometrics 26(2), 121-129.
Huggins, R. (1991). Some practical aspects of a conditional likelihood approach to capture experiments. Biometrics 47, 725-732.
International Working Group for Disease Monitoring and Forecasting (IWGDMF) (1995a). Capture-recapture and multiple-record systems estimation I: History and theoretical development. American Journal of Epidemiology 142, 1047-1058.
Jibasen, D. (2011). Capture-recapture Type Models for Estimating the Size of an Elusive Population. Unpublished PhD thesis, University of Ilorin, Ilorin, Nigeria.
R Development Core Team (2011). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org
Schwarz, C.J. & Seber, G.A.F. (1999). Estimating animal abundance: Review III. Statistical Science 14, 427-456.
Seber, G.A.F. (1965). A note on the multiple-recapture census. Biometrika 52, 249-259.
Seber, G.A.F. (1982a). Capture-recapture methods. In: Encyclopedia of Statistical Sciences, Vol. 1, 367-374. Edited by Kotz, S. and Johnson, N.L. New York: John Wiley and Sons.
Seber, G.A.F. (1982b). Estimating Animal Abundance and Related Parameters, 2nd edition. London: Charles Griffin & Sons.
Sekar, C.C. & Deming, W.E. (1949). On a method of estimating birth and death rates and the extent of registration. Journal of the American Statistical Association 44, 101-115.
Shapiro, S. (1949). Estimating birth registration completeness. Journal of the American Statistical Association 45, 261-264.
Danjuma Jibasen. Dr. Danjuma Jibasen obtained his B.Tech. (Hons.) Statistics at Federal University of
Technology, Yola, Nigeria in 1997. He had his M.Sc. Statistics(with specialty in Biostatistics) in 2003 from
University of Ilorin, Ilorin, Nigeria from where he obtained his Ph.D. in Statistics under the supervision of
Professor E. T. Jolayemi. He is currently a lecturer in Statistics at Modibbo Adama University of Technology,
Yola, Nigeria. He has a number of scientific publications in reputable academic journals.
Waheed Babatunde Yahya. Dr. Waheed Babatunde Yahya obtained his B.Sc. (Hons.) Statistics and M.Sc.
Statistics degrees from University of Ilorin, Ilorin, Nigeria in 2001 and 2003 respectively. He obtained his Ph.D.
in Statistics in 2009 at the Ludwig-Maximilians University of Munich, Munich, Germany under the supervision
of Prof. Dr. Kurt Ulm. His research interests include Categorical data analysis, Biostatistics and analysis of
high-throughput genomic and proteomic data. He taught undergraduate students of Medical Statistics and
Bioinformatics at the Department of Medical Statistics and Epidemiology, Technical University of Munich,
Munich, Germany during the 2007/2008 academic session. He has been teaching Statistics and Biostatistics at
various levels for over a decade at University of Ilorin, Ilorin, Nigeria. He is a Deutscher Akademischer
Austauschdienst (DAAD) scholar and has published a number of peer-reviewed academic papers in
reputed national and international journals.
Emanuel Teju Jolayemi. Professor Emanuel Teju Jolayemi had his B.Sc. (Hons.) Statistics from Ahmadu Bello
University, Zaria, Nigeria in 1976. He obtained his M.Sc. Statistics in 1979 and Ph.D. Biostatistics in 1982
(under the supervision of Professor M. B. Brown) both from University of Michigan, Ann Arbor, U.S.A. He has
been teaching Statistics and Biostatistics at various levels since 1977. He has more than fifty academic
publications in reputed national and international journals to his credit.
Appendix
Table 1: A 2×2 contingency table illustrating the capture-recapture scheme from a closed population of size N individuals using two systems.

                          System 2
                     Caught   Uncaught   Total
System 1   Caught     n11       n12       n1.
           Uncaught   n21       n22       n2.
           Total      n.1       n.2       N
Table 2: Results of the three estimators (N̂_q, N̂_s, N̂_c) of the elusive population N for the simulation scheme with N = 90, n1. = 50 and n.1 = 10. The AIC values of the estimators at each simulation (iteration) as well as their respective computed MADs are also reported.
Iteration   n11   N̂_q    AIC     N̂_s    AIC     N̂_c    AIC
1 7 129 7.141 71 5.366 94 4.272
2 7 129 7.141 71 5.366 94 4.272
3 6 150 9.137 83 4.490 97 4.543
4 8 113 5.760 63 6.112 90 4.010
5 8 113 5.760 63 6.112 90 4.010
6 8 113 5.760 63 6.112 90 4.010
7 10 90 4.000 50 7.124 83 4.490
8 7 129 7.141 71 5.366 94 4.272
9 8 113 5.760 63 6.112 90 4.010
10 5 180 12.197 100 4.759 101 4.824
MAD 44 28 5
Table 3: Results of the three estimators (N̂_q, N̂_s, N̂_c) of the elusive population N for the simulation scheme with N = 100, n1. = 50 and n.1 = 10. The AIC values of the estimators at each simulation (iteration) as well as their respective computed MADs are also reported.
Iteration   n11   N̂_q    AIC     N̂_s    AIC     N̂_c    AIC
1 4 225 16.29 125 6.018 105 4.348
2 6 150 8.281 83 5.225 97 4.211
3 6 150 8.281 83 5.225 97 4.211
4 7 129 6.327 71 6.101 94 4.476
5 7 129 6.327 71 6.101 94 4.476
6 7 129 6.327 71 6.101 94 4.476
7 7 129 6.327 71 6.101 94 4.476
8 8 113 4.978 63 6.880 90 4.733
9 6 150 8.281 83 5.225 97 4.211
10 5 180 11.29 100 4.000 101 4.063
MAD 48 23 5
Table 4: Results of the three estimators (N̂_q, N̂_s, N̂_c) of the elusive population N for the simulation scheme with N = 300, n1. = 150 and n.1 = 80. The AIC values of the estimators at each simulation (iteration) as well as their respective computed MADs are also reported.
Iteration   n11   N̂_q    AIC     N̂_s    AIC     N̂_c    AIC
1 52 254 6.698 231 8.106 276 5.456
2 55 240 7.519 218 8.906 266 5.996
3 54 245 7.253 222 8.643 269 5.818
4 49 270 5.786 245 7.254 285 4.904
5 51 259 6.405 235 7.830 279 5.274
6 53 250 6.980 226 8.377 272 5.638
7 57 232 8.030 211 9.424 260 6.351
8 53 250 6.980 226 8.377 272 5.638
9 55 240 7.519 218 8.906 266 5.996
10 55 240 7.519 218 8.906 266 5.996
MAD 52 75 29
Table 5: Results of the three estimators (N̂_q, N̂_s, N̂_c) of the elusive population N for the simulation scheme with N = 500, n1. = 260 and n.1 = 30. The AIC values of the estimators at each simulation (iteration) as well as their respective computed MADs are also reported.
Iteration   n11   N̂_q    AIC     N̂_s    AIC     N̂_c    AIC
1 17 1237 85.62 459 7.46 514 5.20
2 19 1107 68.92 411 11.43 506 4.55
3 23 914 45.76 339 17.64 492 4.71
4 16 1314 95.88 488 5.06 518 5.53
5 22 956 50.59 355 16.19 495 4.40
6 20 1051 62.06 390 13.13 503 4.24
7 22 956 50.59 355 16.19 495 4.40
8 16 1314 95.88 488 5.06 518 5.53
9 18 1168 76.71 433 9.56 510 4.88
10 20 1051 62.06 390 13.13 503 4.24
MAD 607 89 9
Table 6: Results of the three estimators (N̂_q, N̂_s, N̂_c) of the elusive population N for the simulation scheme with N = 1000, n1. = 450 and n.1 = 40. The AIC values of the estimators at each simulation (iteration) as well as their respective computed MADs are also reported.
Iteration   n11   N̂_q    AIC     N̂_s    AIC     N̂_c    AIC
1 30 2001 117.7 600 38.61 864 16.29
2 29 2070 126.8 621 36.54 867 15.96
3 29 2070 126.8 621 36.54 867 15.96
4 28 2144 136.6 643 34.44 871 15.63
5 24 2501 186.1 750 25.07 886 14.30
6 33 1819 94.4 545 45.04 852 17.27
7 31 1936 109.3 581 40.68 860 16.62
8 22 2728 219.0 818 19.24 894 13.62
9 27 2223 147.3 667 32.28 875 15.30
10 23 2610 201.7 783 22.28 890 13.96
MAD 1210 337 127
Table 7: Results of the three estimators (N̂_q, N̂_s, N̂_c) of the elusive population N for the simulation scheme with N = 2000, n1. = 900 and n.1 = 80. The AIC values of the estimators at each simulation (iteration) as well as their respective computed MADs are also reported.
Iteration   n11   N̂_q    AIC     N̂_s    AIC     N̂_c    AIC
1 49 4900 353.7 1469 49.74 1769 24.93
2 48 5002 368.2 1500 47.13 1773 24.60
3 57 4212 259.1 1263 67.99 1739 27.59
4 63 3811 206.8 1143 80.48 1716 29.56
5 52 4617 314.2 1385 57.05 1758 25.94
6 60 4002 231.4 1200 74.22 1727 28.58
7 59 4069 240.3 1220 72.15 1731 28.25
8 56 4288 269.2 1286 65.88 1742 27.26
9 55 4365 279.7 1309 63.74 1746 26.93
10 60 4002 231.4 1200 74.22 1727 28.58
MAD 2327 702 257
Table 8: Results of the three estimators (N̂_q, N̂_s, N̂_c) of the elusive population N for the simulation scheme with N = 3000, n1. = 1500 and n.1 = 100. The AIC values of the estimators at each simulation (iteration) as well as their respective computed MADs are also reported.
Iteration   n11   N̂_q    AIC     N̂_s    AIC     N̂_c    AIC
1 71 9014 784.4 2113 86.04 2922 11.45
2 71 9014 784.4 2113 86.04 2922 11.45
3 65 9846 914.1 2308 67.57 2945 9.26
4 76 8421 694.3 1974 100.21 2903 13.27
5 74 8649 728.6 2027 94.61 2911 12.55
6 67 9552 867.8 2239 73.99 2938 9.99
7 71 9014 784.4 2113 86.04 2922 11.45
8 64 10000 938.5 2344 64.24 2949 8.89
9 55 11636 1205 2727 28.85 2984 5.56
10 64 10000 938.5 2344 64.24 2949 8.89
MAD 6515 770 65
[Figure 1 plot: AIC values (y-axis, 0 to 14) versus number of repetitions (x-axis, 1 to 10); series AIC_N0, AIC_Ns, AIC_Nc.]
Figure 1: The plot of the Akaike Information Criterion (AIC) values of the three estimators N̂_s, N̂_q and N̂_c of the elusive population N. The plot shows the better performance (lower AIC values) of our newly proposed estimator N̂_c over the others.
Figure 2: The plots of the AIC values of each of the estimators N̂_s, N̂_q and N̂_c against the number of times the simulation scheme was repeated, for the selected schemes (N, n1., n.1) = {(300, 150, 80), (500, 260, 30), (1000, 450, 40), (3000, 1500, 100)}, where N is the assumed size of the elusive population and n1. and n.1 are the total numbers of persons (drug addicts) caught by systems 1 and 2 respectively.