Derivative of sine function: A graphical explanationHideo Hirose
The derivative of the sine function can be derived using the limit sin x / x → 1 (x → 0). However, this transformation seems difficult to understand, so I have drawn a figure that expresses the differentiation graphically.
The derivative of the sine function, d sin x / dx = cos x, is usually explained by converting the difference formula for sine into a product and then using sin x / x → 1 (x → 0).
Here, I have shown it graphically. The limit sin x / x → 1 (x → 0) is merely hidden from view, but in the end...
Success/Failure Prediction for Final Examination using the Trend of Weekly On...Hideo Hirose
Attendance to Lectures is Crucial in Order Not to Drop OutHideo Hirose
H. Hirose, Attendance to Lectures is Crucial in Order Not to Drop Out, 7th International Conference on Learning Technologies and Learning Environments (LTLE2018), pp.194-198, July 8-12, 2018.
How many times, on average, do we toss a coin until we observe the pattern head, tail, head? The answer is ten, not eight. This is an intriguing result that runs against our intuition.
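The expected waiting times quoted here can be checked with Conway's leading-number formula for pattern waiting times under a fair coin; this is a supplementary sketch, not part of the original note.

```python
def expected_wait(pattern: str) -> int:
    """Expected number of fair-coin tosses until `pattern` (a string of
    'H'/'T') first appears, via Conway's leading-number formula:
    sum 2**k over every k where the length-k prefix equals the
    length-k suffix (i.e. the pattern overlaps itself)."""
    return sum(2 ** k
               for k in range(1, len(pattern) + 1)
               if pattern[:k] == pattern[-k:])

print(expected_wait("HTH"))  # 10: HTH overlaps itself at lengths 1 and 3
print(expected_wait("HTT"))  # 8: HTT only overlaps itself at length 3
```

The self-overlap of HTH (a final H can restart the pattern) is exactly what makes it slower to appear than HTT.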
Different classification results under different criteria, distance and proba...Hideo Hirose
Interesting but difficult problem: find the optimum saury layout on a gridiro...Hideo Hirose
Even though we use a simple but ridiculous problem, finding the optimum layout for baking saury on a fish gridiron heated by Joule heat, we can spark interest in science by combining viewpoints from electrical engineering, linear algebra, and probability. These elements are the solution of linear equations and Poisson's equation, and the application of the central limit theorem to this situation. In addition, by removing the constraints, we can create a new problem free from our common sense. Presenting funny but essential problems could be another approach to active learning through interdisciplinary scientific methods.
With an 80-step Galton board, we can see the binomial distribution approximating the normal distribution.
YouTube: https://www.youtube.com/watch?v=3w4e1RQTAB8
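A Galton board is easy to simulate: each ball makes one left/right bounce per step, so its final bin is Binomial(steps, 1/2), which is close to Normal(steps/2, steps/4) for 80 steps. A minimal sketch (the parameters here are illustrative, not from the video):

```python
import random

def galton_counts(steps: int = 80, balls: int = 10_000, seed: int = 0):
    """Simulate a Galton board: each ball takes `steps` fair left/right
    bounces; the final bin index is the number of rightward bounces,
    so it follows Binomial(steps, 1/2)."""
    rng = random.Random(seed)
    bins = [0] * (steps + 1)
    for _ in range(balls):
        bins[sum(rng.randint(0, 1) for _ in range(steps))] += 1
    return bins

bins = galton_counts()
n = sum(bins)
mean = sum(i * c for i, c in enumerate(bins)) / n
var = sum(c * (i - mean) ** 2 for i, c in enumerate(bins)) / n
# By the central limit theorem, mean ≈ 40 and variance ≈ 20.
print(mean, var)
```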
The cumulative exposure model (CEM) is often used to express the failure probability model in the step-up test method, in which the step-up procedure continues until a breakdown occurs. This probability model is widely accepted in reliability fields because accumulation of fatigue is considered reasonable. In contrast, the memoryless model (MM) is also used in electrical engineering because accumulation of fatigue is not observed in some cases. We propose here a new model, the extended cumulative exposure model (ECEM), which includes features of both models. A simulation study and an application to an actual oil insulation test support the validity of the proposed model. The independence model (IM) is also discussed.
Parameter estimation for the truncated weibull model using the ordinary diffe...Hideo Hirose
In estimating the number of failures from truncated data under the Weibull model, we often encounter cases in which the estimate is smaller than the true value when we apply the likelihood principle to the conditional probability. In infectious disease prediction, the SIR model, described by simultaneous ordinary differential equations, is often used; this model can predict the final-stage condition, i.e., the total number of infected patients, well even when the number of observed data points is small. The two models share the same condition on the observed data: truncation to the right. Thus, we have investigated whether the number of failures in the Weibull model can be estimated accurately using an ordinary differential equation. Positive results for this conjecture are shown.
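For reference, the SIR model mentioned above can be integrated with a few lines of forward-Euler stepping; this is a generic sketch of the model with illustrative parameter values, not the estimation method of the paper.

```python
def sir_euler(beta=0.3, gamma=0.1, s0=0.999, i0=0.001, days=200, dt=0.1):
    """Forward-Euler integration of the SIR equations
    dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I,
    with S, I, R as population fractions (S + I + R = 1)."""
    s, i, r = s0, i0, 0.0
    for _ in range(int(days / dt)):
        ds = -beta * s * i
        dr = gamma * i
        di = -ds - dr  # conserves s + i + r exactly
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
    return s, i, r

# With beta/gamma = 3, the epidemic eventually infects most of the
# population; the final value of r is the predicted total attack rate.
print(sir_euler())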
This document summarizes Hideo Hirose's presentation on trunsored data analysis at the IEEE Reliability Society Japan Chapter Annual Meeting. Hirose discusses different types of incomplete data including censored, truncated, and trunsored (mixture) data. He presents likelihood functions for censored, truncated, and mixture models. Hirose also provides an example analysis of failure time data using censored, truncated, and mixture models. He notes that while parameter estimates can be obtained from the mixture model, confidence intervals are not straightforward, especially when p is near 1. Hirose discusses approaches for hypothesis testing when p is near 1 where the data could be assumed censored.
An accurate ability evaluation method for every student with small problem it...Hideo Hirose
To promote the use of item response theory (IRT) in universities, we developed a Web-based test evaluation system for university teachers, and we have been evaluating students' abilities with the IRT system in midterm and final examinations for two years.
We show a surprising aspect of adopting the IRT system in university tests. That is, IRT not only gives us problem-difficulty information but also provides accurate student ability evaluation, even when the number of problems is small. Therefore, we can include high-level and low-level test items together, so that we can assess a variety of students' abilities accurately and fairly; we need not worry that providing easier problems will lower the lecture level; in other words, we need not find the most appropriate problem level for each student. We can provide problems of all levels, uniformly distributed, to all students and still assess their abilities accurately. Consequently, students do not raise complaints about their scores; they seem to be satisfied.
We show these results in this paper through a theoretical background, a simulation study, and our empirical results.
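IRT models the probability that an examinee answers an item correctly as a function of ability and item parameters. As a minimal sketch, here is the two-parameter logistic (2PL) item response function commonly used in such systems (the paper does not specify its exact model here, so treat this as an illustration):

```python
import math

def p_correct_2pl(theta: float, a: float, b: float) -> float:
    """2PL item response function: probability that an examinee with
    ability theta answers correctly an item with discrimination a
    and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An examinee whose ability equals the item difficulty has a 50% chance,
# regardless of the discrimination parameter.
print(p_correct_2pl(1.0, 1.5, 1.0))  # 0.5
```

Because each item carries its own difficulty and discrimination, ability estimates remain comparable even when students face items of very different levels, which is the property the abstract relies on.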
A successful maximum likelihood parameter estimation in skewed distributions ...Hideo Hirose
A successful maximum likelihood parameter estimation scheme using the continuation method (homotopy method) is introduced. This algorithm is particularly useful for the three-parameter skewed distributions including thresholds. Such three-parameter distributions are, for example, the Weibull, log-normal, gamma, and inverse Gaussian distributions. As the proposed algorithm can almost always obtain the local maximum likelihood estimates automatically, it is of considerable practical value. A Monte Carlo simulation study shows the effectiveness of the proposed method.
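The continuation idea can be illustrated on a single nonlinear equation: deform from a trivially solvable problem to the target one, tracking the root with Newton corrections at each step. This is a generic sketch of the homotopy technique, not the paper's estimation algorithm (which applies it to likelihood equations of three-parameter distributions).

```python
import math

def newton(f, df, x, tol=1e-12, iters=50):
    """Basic Newton iteration for a scalar equation f(x) = 0."""
    for _ in range(iters):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def continuation_root(f, df, x0, steps=20):
    """Convex homotopy H(x, t) = t*f(x) + (1 - t)*(x - x0):
    at t = 0 the root is trivially x0; step t toward 1, using the
    previous root as the Newton starting point each time."""
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        h = lambda x, t=t: t * f(x) + (1 - t) * (x - x0)
        dh = lambda x, t=t: t * df(x) + (1 - t)
        x = newton(h, dh, x)
    return x

# Example: solve x = cos(x), starting from the trivial problem x = 0.
f = lambda x: x - math.cos(x)
df = lambda x: 1 + math.sin(x)
print(continuation_root(f, df, 0.0))
```

The benefit, as in the abstract, is robustness: each subproblem starts from an excellent initial guess, so the path of solutions is followed almost automatically.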
Estimation for the number of fragile samples in the trunsored and truncated m...Hideo Hirose
A method to obtain the estimate and its confidence interval for the number of fragile samples in mixed populations of fragile and durable samples, i.e., in the trunsored model, is introduced. The confidence interval in the trunsored model is compared with that in the truncated model. Although the maximum likelihood estimates for the parameters of the underlying probability distribution are the same in both models, the confidence interval for the estimated number of samples in the trunsored model differs from that in the truncated model. When the censoring time goes to infinity, the confidence interval in the truncated model converges to zero, whereas the confidence interval in the trunsored model converges to a positive constant value.
The error for the number of fragile samples in the trunsored model is affected by two kinds of fluctuation due to the censoring time: one is the fluctuation of the parameter estimates, and the other is the ratio of the number of fragile samples to the total number of samples. In the truncated model, however, the fluctuation depends only on the parameter estimates, and the error from this effect vanishes as the censoring time goes to infinity.
As a typical example, the method is applied to the case fatality ratio of infectious diseases such as SARS.
In difficult classification problems of z-dimensional points into two groups having 0-1 responses due to a messy data structure, it is more favorable to search for denser regions of the response-1 points than to find boundaries that separate the two groups. For such problems, often seen in customer databases, we have developed a bump hunting method using probabilistic and statistical methods. By specifying a pureness rate in advance, a maximum capture rate will be obtained. Then, a trade-off curve between the pureness rate and the capture rate can be constructed. In finding the maximum capture rate, we have used the decision tree method combined with the genetic algorithm. We first give a brief introduction to our research: what bump hunting is, the trade-off curve between the pureness rate and the capture rate, bump hunting using the tree genetic algorithm, and the upper bounds for the trade-off curve using extreme-value statistics. Then, the assessment of the accuracy of the trade-off curve is tackled from the genetic algorithm procedure viewpoint. Using the newly proposed genetic algorithm procedure, we can obtain the accuracy of the upper bound for the trade-off curve. Then, we may estimate the actually attainable upper bound of the trade-off curve. The bootstrapped hold-out method, as well as the cross-validation method, is used in assessing the accuracy of the trade-off curve.
Accuracy assessment for the trade off curve and its upper curve in the bump h...Hideo Hirose
Suppose that we are interested in classifying n points in a z-dimensional space into two groups having response 1 and response 0 as the target variable. In some real data cases in customer classification, it is difficult to discriminate the favorable customers showing response 1 from the others because many response-1 points and response-0 points are closely located. In such a case, finding the regions denser in favorable customers is considered an alternative. Such regions are called bumps, and finding them is called bump hunting. By pre-specifying a pureness rate p in advance, a maximum capture rate c can be obtained; the pureness rate is the ratio of the number of response-1 points to the total number of points in the target region; the capture rate is the ratio of the number of response-1 points in the target region to the total number of response-1 points. Then, a trade-off curve between p and c can be constructed. Thus, bump hunting is equivalent to constructing the trade-off curve. In order to make future actions easier, we adopt simpler boundary shapes for the bumps, such as the union of z-dimensional boxes parallel to some explanatory-variable axes; this means that we adopt the binary decision tree. Since the conventional binary decision tree will not provide the maximum capture rates because of its local-optimizer property, some probabilistic methods are required. Here, we use a genetic algorithm (GA) specialized to the tree structure to accomplish this; we call this the tree GA. The tree GA has a tendency to provide many local maxima of the capture rates, unlike the ordinary GA. Owing to this property, we can estimate the upper bound curve for the trade-off curve by using extreme-value statistics. However, these curves could be optimistic if they are constructed using the training data alone. We should be careful in assessing the accuracy of these curves.
By applying the test data, the accuracy of the trade-off curve itself can easily be assessed. However, the property of the local maxima would not be preserved. In this paper, we have developed a new tree GA that preserves the property of the local maxima of the capture rates by assessing the test-data results in each evolution step. Then, the accuracy of the trade-off curve and of its upper bound curve is assessed.
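The two rates for a candidate box-shaped bump can be computed directly. A minimal sketch with hypothetical 2-D data follows; the tree GA search itself is not reproduced, and the capture rate is interpreted here as captured response-1 points over all response-1 points.

```python
def pureness_and_capture(points, labels, lo, hi):
    """For an axis-parallel box with corner coordinates `lo`/`hi`:
    pureness = fraction of points inside the box with response 1;
    capture  = fraction of all response-1 points that fall inside
    (one common reading of the capture rate)."""
    inside = [y for p, y in zip(points, labels)
              if all(l <= c <= h for c, l, h in zip(p, lo, hi))]
    ones_in, total_ones = sum(inside), sum(labels)
    pureness = ones_in / len(inside) if inside else 0.0
    capture = ones_in / total_ones if total_ones else 0.0
    return pureness, capture

# Hypothetical data: two response-1 points near the origin.
pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]
ys = [1, 1, 0, 0]
print(pureness_and_capture(pts, ys, (0.0, 0.0), (2.5, 2.5)))
```

Shrinking the box raises pureness and lowers capture, which is exactly the trade-off the curve describes.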
Random number generation for the generalized normal distribution using the re...Hideo Hirose
When we want to grasp the characteristics of time-series signals emitted massively from electric power apparatuses or from electroencephalograms, and want to make diagnoses about the apparatuses or human brains, we may use statistical distribution functions. In such cases, the generalized normal distribution is frequently used in pattern analysis. To assess accurately the correctness of the estimates of the shape of the distribution function, we often use a Monte Carlo simulation study; thus, a fast and efficient random number generation method for the distribution function is needed. However, generating random numbers from this distribution does not seem easy, and no such method appears to have been developed yet. In this paper, we propose a random number generation method for the distribution function using the rejection method. A newly developed modified adaptive rejection method works well in the case of log-convex density functions.
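The paper's modified adaptive rejection method is not reproduced here, but the basic rejection idea for the generalized normal density f(x) ∝ exp(-|x|^β) can be sketched with a plain Laplace envelope, assuming shape β ≥ 1 so that the density ratio is bounded:

```python
import math
import random

def sample_gnd(beta=4.0, n=10_000, seed=0):
    """Plain rejection sampling for the generalized normal density
    f(x) ∝ exp(-|x|**beta) with a standard Laplace envelope
    g(x) = 0.5*exp(-|x|).  Assumes beta >= 1 so sup f/g is finite."""
    rng = random.Random(seed)
    # Envelope constant: sup_x f(x)/g(x) = 2*exp(max_u (u - u**beta));
    # for beta >= 1 the maximiser lies in [0, 1], so a fine grid suffices.
    m_const = 2.0 * max(math.exp(u - u ** beta)
                        for u in (i / 1000.0 for i in range(1001)))
    out = []
    while len(out) < n:
        # Laplace proposal: exponential magnitude with a random sign.
        mag = -math.log(1.0 - rng.random())
        x = mag if rng.random() < 0.5 else -mag
        # Accept with probability f(x) / (m_const * g(x)).
        if rng.random() * m_const * 0.5 * math.exp(-mag) <= math.exp(-mag ** beta):
            out.append(x)
    return out
```

For β = 4 the acceptance rate is roughly one in two proposals; the adaptive method of the paper improves on this by tightening the envelope as samples accumulate.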