We present a linear regression method for predictions on a small data set making use of a second possibly biased data set that may be much larger. Our method fits linear regressions to the two data sets while penalizing the difference between predictions made by those two models.
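To make the idea concrete, here is a minimal NumPy sketch of that kind of joint fit: two coefficient vectors are estimated, one per data set, with a quadratic penalty on the difference between the predictions the two models make on the small-sample design matrix. The variable names, the normal-equation solve, and the exact placement of the penalty are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def joint_fit(X_s, y_s, X_l, y_l, lam=1.0):
    """Sketch: fit one coefficient vector per data set while penalizing the
    squared difference between the two models' predictions on the small set."""
    p = X_s.shape[1]
    G_s, G_l = X_s.T @ X_s, X_l.T @ X_l
    # Normal equations of
    #   ||X_s b_s - y_s||^2 + ||X_l b_l - y_l||^2 + lam * ||X_s (b_s - b_l)||^2
    A = np.block([[G_s + lam * G_s, -lam * G_s],
                  [-lam * G_s, G_l + lam * G_s]])
    rhs = np.concatenate([X_s.T @ y_s, X_l.T @ y_l])
    beta = np.linalg.solve(A, rhs)
    return beta[:p], beta[p:]        # (small-set model, large-set model)

# Tiny usage example with synthetic, deliberately biased large data.
rng = np.random.default_rng(0)
X_s = rng.normal(size=(30, 3))
y_s = X_s @ np.array([1.0, 2.0, -1.0]) + rng.normal(0, 0.5, 30)
X_l = rng.normal(size=(3000, 3))
y_l = X_l @ np.array([1.3, 1.7, -0.7]) + rng.normal(0, 0.5, 3000)
b_small, b_large = joint_fit(X_s, y_s, X_l, y_l, lam=5.0)
print(np.round(b_small, 2), np.round(b_large, 2))
```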
The document discusses three perspectives on predicting sets of items rather than single items. It describes how sets are common in data such as team formations, medical codes, and online purchases. It then discusses three specific approaches to set prediction: (1) predicting which sets an individual will interact with based on their history, (2) modeling sequences of sets using a generative model, and (3) understanding characteristics of set-based data like subsets and repeats. Applications include prediction, analysis, and simulation.
Link prediction in networks with core-fringe structure (Austin Benson)
1. The document discusses link prediction in networks with a core-fringe structure. It examines how including connections from fringe nodes affects the performance of link prediction algorithms on the core nodes.
2. An experiment was conducted where a link prediction algorithm was run multiple times, each time including more fringe nodes and connections in order to measure the effect on link prediction accuracy for the core nodes.
3. The results showed that including more information from the fringe helped improve the link prediction performance on the core nodes.
K-NN Classifier Performs Better Than K-Means Clustering in Missing Value Imp... (IOSR Journals)
This document compares the performance of K-means clustering and K-nearest neighbor (K-NN) classification for imputing missing values. It finds that K-NN performs better than K-means clustering in terms of accuracy when imputing missing values at rates from 2% to 20%. The document simulates datasets with various missing value rates, uses each method to group the data and impute missing values via mean substitution, and compares the results to the original complete dataset to calculate accuracy. K-NN achieved an average accuracy of 67% compared to 62% for K-means clustering across the different missing value rates tested.
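A rough scikit-learn sketch of the kind of comparison described is below; the synthetic data, the 10% missing rate, and the use of RMSE on the imputed cells (rather than the paper's accuracy measure) are assumptions made only for illustration.

```python
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # hypothetical complete data set
X_missing = X.copy()
mask = rng.random(X.shape) < 0.10      # knock out ~10% of entries
X_missing[mask] = np.nan

# K-NN imputation: each missing entry is filled from its k nearest rows.
knn_filled = KNNImputer(n_neighbors=5).fit_transform(X_missing)

# K-means-style imputation: cluster on column-mean-filled data, then replace
# each missing entry with its cluster's mean in that column.
col_mean_filled = np.where(mask, np.nanmean(X_missing, axis=0), X_missing)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(col_mean_filled)
kmeans_filled = col_mean_filled.copy()
for c in range(5):
    rows = labels == c
    cluster_means = col_mean_filled[rows].mean(axis=0)
    kmeans_filled[rows] = np.where(mask[rows], cluster_means, X_missing[rows])

# Compare imputed entries against the original values.
for name, filled in [("K-NN", knn_filled), ("K-means", kmeans_filled)]:
    rmse = np.sqrt(np.mean((filled[mask] - X[mask]) ** 2))
    print(name, "RMSE on imputed cells:", round(rmse, 3))
```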
Simplicial closure and higher-order link prediction --- SIAMNS18 (Austin Benson)
The document discusses higher-order link prediction, which aims to predict the formation of new groups or "simplices" containing more than two nodes, based on structural properties in timestamped simplex data from various domains. It finds that predicting the closure of open triangles (where a pair of nodes have interacted but not with the third) performs well, and that simply averaging the edge weights in a triangle is often a good predictor. Predicting new structures in communication, collaboration and proximity networks can provide insights beyond classical link prediction.
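As a toy illustration of the "average the edge weights in a triangle" predictor mentioned above, the following sketch scores every triangle of a small weighted projected graph; the example graph and the plain-dictionary representation are assumptions, not the authors' code.

```python
from itertools import combinations

# Hypothetical projected graph: each undirected edge maps to its weight
# (e.g. how many past group interactions contained that pair of nodes).
W = {
    frozenset(e): w
    for e, w in {("a", "b"): 4, ("b", "c"): 2, ("a", "c"): 1,
                 ("c", "d"): 3, ("b", "d"): 5, ("a", "d"): 1}.items()
}
nodes = sorted({v for e in W for v in e})

def score_triangles(W, nodes):
    """Score every triangle in the projected graph by the mean of its three
    edge weights; higher scores are predicted to close into a simplex first."""
    scores = {}
    for u, v, x in combinations(nodes, 3):
        edges = [frozenset((u, v)), frozenset((v, x)), frozenset((u, x))]
        if all(e in W for e in edges):
            scores[(u, v, x)] = sum(W[e] for e in edges) / 3.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(score_triangles(W, nodes))
```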
Three hypergraph eigenvector centralities (Austin Benson)
Three hypergraph eigenvector centralities are proposed to measure the importance of nodes in complex systems modeled as hypergraphs. Hypergraphs generalize graphs by allowing edges to connect any number of nodes. The proposed centralities are adaptations of the standard graph eigenvector centrality to hypergraphs. They measure a node's centrality based on 1) the centralities of its neighbors, 2) being positive values, and 3) being the principal eigenvector of the hypergraph adjacency matrix.
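A minimal sketch of one such centrality is given below: it builds a co-membership adjacency matrix from a hypothetical hypergraph and runs power iteration to obtain a positive principal eigenvector. The particular matrix construction is an assumption; the paper defines three distinct variants.

```python
import numpy as np

# Hypothetical hypergraph: each hyperedge is a set of node indices.
hyperedges = [{0, 1, 2}, {1, 2, 3}, {2, 3, 4}, {0, 4}]
n = 5

# One simple adjacency: A[i, j] = number of hyperedges containing both i and j.
A = np.zeros((n, n))
for e in hyperedges:
    for i in e:
        for j in e:
            if i != j:
                A[i, j] += 1

# Power iteration: the entrywise-positive principal eigenvector assigns each
# node a centrality proportional to the summed centrality of its neighbors.
c = np.ones(n)
for _ in range(200):
    c = A @ c
    c /= np.linalg.norm(c)
print(np.round(c, 3))
```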
Wisconsin Breast Cancer Diagnostic Classification using KNN and Random Forest (Sheing Jing Ng)
This document discusses using K-Nearest Neighbors (KNN) and Random Forest classifiers to classify breast cancer diagnoses as benign or malignant using a dataset from the University of Wisconsin Hospitals. KNN achieved an accuracy of 95-97% while Random Forest achieved 96-98% accuracy. Both performed well but Random Forest had a slight advantage due to its ability to handle noise and randomness in data better than KNN. The classifiers show potential to help physicians make more accurate diagnosis decisions.
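The comparison can be reproduced in spirit with scikit-learn, whose load_breast_cancer dataset is the Wisconsin diagnostic data; the split, the k = 5 and 200-tree settings, and the single train/test split are illustrative choices, not the document's protocol.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)   # Wisconsin diagnostic data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
rf = RandomForestClassifier(n_estimators=200, random_state=0)

for name, clf in [("KNN", knn), ("Random Forest", rf)]:
    clf.fit(X_tr, y_tr)
    print(name, "test accuracy:", round(clf.score(X_te, y_te), 3))
```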
The document discusses Naive Bayes classifiers, which are a family of algorithms for classification that are based on Bayes' theorem and assume independence between features. It provides definitions of key terms like conditional probability and Bayes' theorem. It then derives the Naive Bayes classifier equation and discusses how it works, including an example of classifying whether to play golf based on weather conditions. The document also covers advantages like speed, disadvantages like the independence assumption, and applications like spam filtering.
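A small self-contained sketch of the Naive Bayes computation on a weather-style table is shown below; the toy rows and the Laplace smoothing are assumptions added for illustration, not the document's exact example.

```python
from collections import Counter, defaultdict

# Illustrative weather table (not the document's exact rows): (outlook, play_golf)
data = [("sunny", "no"), ("sunny", "no"), ("overcast", "yes"), ("rainy", "yes"),
        ("rainy", "yes"), ("rainy", "no"), ("overcast", "yes"), ("sunny", "yes"),
        ("sunny", "yes"), ("rainy", "yes")]

prior = Counter(label for _, label in data)
likelihood = defaultdict(Counter)
for outlook, label in data:
    likelihood[label][outlook] += 1

def posterior(outlook):
    """P(label | outlook) is proportional to P(outlook | label) * P(label),
    here with Laplace smoothing on the likelihood counts."""
    vocab = {o for o, _ in data}
    scores = {}
    for label, n_label in prior.items():
        p_outlook = (likelihood[label][outlook] + 1) / (n_label + len(vocab))
        scores[label] = p_outlook * (n_label / len(data))
    total = sum(scores.values())
    return {label: s / total for label, s in scores.items()}

print(posterior("sunny"))
```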
This document discusses research on modeling and predicting higher-order interactions in networks beyond pairwise connections. The researchers collected datasets containing time-stamped groups or "simplices" of nodes and analyzed properties like triangle closure. They propose "higher-order link prediction" to predict which new simplices will form based on structural features like edge weights between nodes. Scoring functions were tested and averages of edge weights often performed well, differing from classical link prediction methods.
Simplicial closure and higher-order link prediction (SIAMNS18) (Austin Benson)
This document summarizes research on modeling and predicting the formation of higher-order relationships or interactions between nodes in network datasets. It introduces the concept of "simplicial closure" to describe how groups of nodes interact over time until forming a simplex or higher-order relationship. The researchers propose "higher-order link prediction" as a framework to evaluate models for predicting the formation of new simplices. They test various methods for scoring open triangles based on edge weights and other structural properties to predict which will become closed triangles. The results show these approaches can significantly outperform random prediction, with simply averaging edge weights often performing well.
This document discusses several methods for preparing data before analysis, including handling outliers, missing data, duplicated data, and heterogeneous data formats. For outliers, it describes techniques like trimming, winsorizing, and changing regression models. For missing data, it covers identifying patterns, assessing causes, and handling techniques like listwise deletion, imputation, and multiple imputations. It also addresses detecting and removing duplicate records based on field similarities, as well as standardizing heterogeneous data formats.
This document summarizes a study analyzing the effectiveness of data breaches in 2013 and 2014. The study uses data on the number of files breached from the Privacy Rights Clearinghouse database for 2013 and 2014 breaches. Statistical tests, including two-sample t-tests with and without outliers, are conducted to determine if the difference in the mean number of files breached between the two years is statistically significant. The results of the tests fail to reject the null hypothesis that the means are equal, indicating the number of files breached in 2013 and 2014 are not significantly different. Therefore, the study finds that the effectiveness of data breaches, as measured by the number of compromised files, was statistically similar between 2013 and 2014.
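For readers who want to see the mechanics, here is a hedged SciPy sketch of a two-sample t-test on two years of breach sizes; the lognormal toy data stand in for the Privacy Rights Clearinghouse records, which are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical breach sizes (files exposed) for two years; not the PRC data.
files_2013 = rng.lognormal(mean=10.0, sigma=2.0, size=40)
files_2014 = rng.lognormal(mean=10.2, sigma=2.0, size=45)

# Welch's two-sample t-test (unequal variances) on the mean number of files.
t_stat, p_value = stats.ttest_ind(files_2013, files_2014, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("Fail to reject H0: no significant difference in mean files breached")
```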
The independent variable in the Stroop Effect experiment is whether the color word matches or mismatches the color of the ink. The dependent variable is reaction time. It is hypothesized that reaction time will be lower for color-word matches than mismatches. Descriptive statistics on the dataset show higher mean and variability for the incongruent condition. A t-test was conducted and found a statistically significant difference between conditions, rejecting the null hypothesis. This matches expectations that it takes longer to name a color when it mismatches the written word.
Types of analytics & the structures of data (Rupak Roy)
Get to know more about Prescriptive and Predictive analytics, such as market basket analysis, plus the data structures and variables needed to apply the analytics.
for more info you can ping me at google #bobrupakroy
Machine Learning for the System Administrator (butest)
This document discusses how machine learning techniques can be applied to system monitoring tasks performed by system administrators. It argues that machine learning can help improve the accuracy of monitoring by detecting complex relationships between system measurements that would be difficult for humans to specify. The document provides examples of how machine learning can be used to identify normal and abnormal system behavior based on the covariance, contravariance, or independence of measurement pairs, without needing explicit thresholds. It suggests this approach could provide more specific and sensitive monitoring than traditional threshold-based methods.
Simplicial closure & higher-order link prediction (Austin Benson)
The document discusses higher-order link prediction in networks. It summarizes previous work representing higher-order interactions as tensors, hypergraphs, etc. It then proposes evaluating models of higher-order data using "higher-order link prediction" to predict which groups of more than two nodes will interact based on past data. The authors analyze dynamics of triadic closure in several real-world networks and propose methods to predict closure based on structural properties like edge weights.
Data Preparation with the help of Analytics Methodology (Rupak Roy)
Get involved with the steps of data preparation and data assessment using widely used methodologies for machine learning data science modeling.
Let me know if anything is required, ping me at google #bobrupakroy
V.8.0-Emerging Frontiers and Future Directions for Predictive Analytics (Elinor Velasquez)
This document proposes a novel methodology for predictive analytics based on topological-geometric-analytic-algebraic principles. It views the universe as a canonical heat bath partitioned into components that act as restricted thermal reservoirs. Each component has a well-defined structure and invariant that allows for new predictions. The methodology generalizes concepts like entropy and reinterprets prediction in terms of biological form and function. This provides a new framework for predictive modeling, especially with big data.
This chapter discusses methods for hypothesis testing and constructing confidence intervals for two populations or groups. It provides examples comparing testosterone levels before and after having children, weight loss from a diet, and approval ratings between age groups. The chapter explores the processes and formulas for hypothesis tests and confidence intervals involving two proportions, including a worked example comparing reported rates of cheating between husbands and wives.
This document summarizes a talk about higher-order link prediction in networks. It discusses organizational principles of systems with higher-order interactions, how they evolve over time through simplicial closure events, and how insights can be used to create effective higher-order link prediction methods. Key points include that simplicial closure depends on the structure and strength of ties in the projected graph, and this closure process is similar for 3 and 4 nodes.
A Time Series Analysis for Predicting Basketball Statistics (Joseph DeLay)
This document summarizes a time series analysis of points scored by NBA player Derrick Rose. The analysis found that an IMA(1,1) model best fit the data. When used to forecast future points, the model predictions narrowed to Rose's average points per game due to the limited data points. Adding more seasons of data would improve the model's accuracy for long-term predictions.
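An IMA(1,1) model is an ARIMA model with no autoregressive terms, one difference, and one moving-average term, so it can be sketched with statsmodels as below; the simulated points series is a stand-in, not Rose's actual game log.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
# Hypothetical per-game points series (not Derrick Rose's actual numbers).
points = 20 + 0.1 * np.cumsum(rng.normal(0, 1, 60)) + rng.normal(0, 4, 60)

# IMA(1,1) == ARIMA(p=0, d=1, q=1): difference once, one moving-average term.
model = ARIMA(points, order=(0, 1, 1)).fit()
forecast = model.forecast(steps=10)
print(forecast)   # long-horizon forecasts flatten toward a constant level
```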
Distinction between outliers and influential data points w out hyp test (Aditya Praveen Kumar)
This document distinguishes between outliers and influential data points in regression analysis. An outlier is a data point whose response y does not follow the general trend of other y values, while an influential point unduly influences the regression results. Through four examples, it shows that outliers may or may not be influential. Example 1 has no outliers or influential points. Example 2 has an outlier but not an influential point. Example 3 has neither. Example 4 has an outlier and influential point that significantly changes the regression slope. Outliers can influence results, but not all do; it is important to check for influential points.
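A common way to make the distinction operational is to look at studentized residuals (outlyingness in y) alongside Cook's distance (influence on the fit), as in this hedged statsmodels sketch; the data and the cutoffs are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 20)
y = 2 * x + 1 + rng.normal(0, 1, 20)
x[-1], y[-1] = 18.0, 5.0     # one hypothetical high-leverage outlier

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()
influence = fit.get_influence()

# Studentized residuals flag outliers in y; Cook's distance flags influence.
stud_resid = influence.resid_studentized_external
cooks_d, _ = influence.cooks_distance
for i in range(len(x)):
    if abs(stud_resid[i]) > 3 or cooks_d[i] > 4 / len(x):
        print(f"point {i}: studentized residual={stud_resid[i]:.2f}, "
              f"Cook's D={cooks_d[i]:.2f}")
```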
Hypothesis testing involves developing a null hypothesis (H0) and an alternative hypothesis (Ha) to test a given situation. H0 states there is no difference, while Ha states there is a difference. Tests can be one-tailed or two-tailed. A two-tailed test rejects H0 if the sample mean is significantly different in either direction, while a one-tailed test only rejects if the difference is in the direction specified by Ha. When conducting a test, there is a risk of making a Type I error by rejecting a true H0, or a Type II error by failing to reject a false H0. The significance level determines the probability of a Type I error.
The document discusses algorithms for tree data structures. It describes AVL trees as self-balancing binary search trees where the heights of subtrees can differ by at most one. It also describes red-black trees, noting they are similar to AVL trees but use color attributes (red or black) to balance the tree. The key differences are that AVL trees are generally faster for lookup-intensive applications as they are more rigidly balanced, while red-black tree insertions and deletions may require fewer rotations than AVL trees.
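To ground the comparison, here is a compact sketch of AVL insertion with the rebalancing rotations that keep subtree heights within one of each other; it is a teaching-style implementation (duplicates go right, no deletion), not a production tree.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    key: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None
    height: int = 1

def h(n):
    return n.height if n else 0

def update(n):
    n.height = 1 + max(h(n.left), h(n.right))

def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    update(y); update(x)
    return x

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    update(x); update(y)
    return y

def insert(node, key):
    if node is None:
        return Node(key)
    if key < node.key:
        node.left = insert(node.left, key)
    else:
        node.right = insert(node.right, key)
    update(node)
    balance = h(node.left) - h(node.right)
    if balance > 1:                      # left-heavy
        if key >= node.left.key:         # left-right case
            node.left = rotate_left(node.left)
        return rotate_right(node)
    if balance < -1:                     # right-heavy
        if key < node.right.key:         # right-left case
            node.right = rotate_right(node.right)
        return rotate_left(node)
    return node

root = None
for k in [10, 20, 30, 40, 50, 25]:
    root = insert(root, k)
print(root.key, h(root))   # balanced root, height stays logarithmic
```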
Possibility Theory versus Probability Theory in Fuzzy Measure Theory (IJERA Editor)
The purpose of this paper is to compare probability theory with possibility theory, and to use this comparison in comparing probability theory with fuzzy set theory. The best way of comparing probabilistic and possibilistic conceptualizations of uncertainty is to examine the two theories from a broader perspective. Such a perspective is offered by evidence theory, within which probability theory and possibility theory are recognized as special branches. While the various characteristics of possibility theory within the broader framework of evidence theory are expounded in this paper, we need to introduce their probabilistic counterparts to facilitate our discussion.
Mayo: 2nd half “Frequentist Statistics as a Theory of Inductive Inference” (S... (jemille6)
This document summarizes issues related to data-dependent selections and hypothesis testing. It discusses how preliminary inspection of data can influence test statistics and null hypotheses, potentially altering a test's ability to reliably detect discrepancies from the null. Two examples are provided:
1) "Hunting" through multiple independent tests and only reporting the most statistically significant result can incorrectly estimate the actual error rate as being much higher than the nominal rate of 5%.
2) Searching a DNA database and declaring a match with the first individual is different, as each non-match strengthens evidence for the inferred match. Adjusting is not needed as in the statistical "hunting" case.
Selection of cut-offs or model
A Preference Model on Adaptive Affinity Propagation (IJECEIAES)
In recent years, two new data clustering algorithms have been proposed. One of them is Affinity Propagation (AP). AP is a data clustering technique that uses iterative message passing and considers all data points as potential exemplars. Two important inputs of AP are a similarity matrix (SM) of the data and the parameter "preference" p. Although the original AP algorithm has shown much success in data clustering, it still suffers from one limitation: it is not easy to determine the value of the parameter "preference" p that yields an optimal clustering solution. To resolve this limitation, we propose a new model of the parameter "preference" p, i.e. it is modeled based on the similarity distribution. Given the SM and p, the Modified Adaptive AP (MAAP) procedure is run. The MAAP procedure means that we omit the adaptive p-scanning algorithm of the original Adaptive-AP (AAP) procedure. Experimental results on random non-partition and partition data sets show that (i) the proposed algorithm, MAAP-DDP, is slower than original AP for the random non-partition dataset, and (ii) for the random 4-partition dataset and real datasets the proposed algorithm succeeds in identifying clusters matching the number of the datasets' true labels, with execution times comparable to those of original AP. Besides that, the MAAP-DDP algorithm proves more feasible and effective than the original AAP procedure.
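A hedged scikit-learn sketch of the general idea, i.e. deriving the preference p from the distribution of similarities rather than scanning for it, is shown below; the negative-squared-distance similarities and the 10th-percentile rule are assumptions, not the paper's MAAP-DDP formula.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.metrics import pairwise_distances

rng = np.random.default_rng(4)
# Hypothetical 4-partition data set, 50 points per cluster.
centers = np.array([[0, 0], [5, 5], [0, 5], [5, 0]])
X = np.vstack([c + rng.normal(0, 0.5, (50, 2)) for c in centers])

# Similarity matrix: negative squared Euclidean distance (the usual AP choice).
S = -pairwise_distances(X, metric="sqeuclidean")

# Model the preference p from the similarity distribution, e.g. a low quantile
# of the off-diagonal similarities (an assumption, not the paper's exact rule).
off_diag = S[~np.eye(len(X), dtype=bool)]
p = np.quantile(off_diag, 0.10)

ap = AffinityPropagation(affinity="precomputed", preference=p, random_state=0)
labels = ap.fit_predict(S)
print("clusters found:", len(ap.cluster_centers_indices_))
```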
This document provides instructions for an assignment on data and file structures for the MCS-021 course. It outlines four questions to answer for the assignment, which is worth 100 marks total and 25% of the course grade. Students must answer all four questions, with each question worth 20 marks. The assignment should be submitted by October 15th, 2013 for the July 2013 session or April 15th, 2014 for the January 2014 session. Question 1 asks students to write an algorithm for implementing doubly linked lists.
Optimizing Budget Constrained Spend in Search Advertising (Sunny Kr)
Search engine ad auctions typically have a significant fraction of advertisers who are budget constrained, i.e., if allowed to participate in every auction that they bid on, they would spend more than their budget. This yields an important problem: selecting the ad auctions in which these advertisers participate, in order to optimize different system
This document discusses the importance of reproducibility in human computation tasks. It argues that for the results of human computation to be meaningful and informative, they must be reproducible by different human contributors working independently. Reproducibility ensures the results are not due to chance and can reflect the underlying properties of the task. The document outlines sources of variability in human judgments and draws similarities between human computation and content analysis in behavioral sciences, where reproducibility is crucial. It suggests ensuring reproducibility through clear task design and measurement.
It is well-known that SRPT is optimal for minimizing flow time on machines that run one job at a time. However, running one job at a time is a big under-utilization for modern systems where sharing, simultaneous execution, and virtualization-enabled consolidation are a common trend to boost utilization. Such machines, used in modern large data centers and clouds, are powerful enough to run multiple jobs/VMs at a time subject to overall CPU, memory, network, and disk capacity constraints.
Motivated by this pr
Times after the concord stagecoach was on display (Sunny Kr)
The document summarizes recent events and developments at the San Diego Historical Society:
1) The Society opened a new exhibition called "Nikkei Youth Culture: Past, Present, Future" in their newly dedicated Youth Gallery, focusing on the experiences of Japanese American youth over time.
2) They also advanced the second phase of their core exhibition "Place of Promise" highlighting the diverse histories that formed the foundation of San Diego. Artifacts like a stagecoach and quilts will be featured.
3) The Balboa Art Conservation Center is partnering with the Society to evaluate collection needs and make recommendations for funding conservation, storage, and management efforts.
Algorithmic entropy can be seen as a special case of entropy as studied in statistical mechanics. This viewpoint allows us to apply many techniques developed for use in thermodynamics to the subject of algorithmic information theory. In particular, suppose we fix a universal prefix-free Turing
Approximation Algorithms for the Directed k-Tour and k-Stroll Problems (Sunny Kr)
In the Asymmetric Traveling Salesman Problem (ATSP), the input is a directed n-vertex graph G = (V, E) with nonnegative edge lengths, and the goal is to find a minimum-length tour, visiting each vertex at least once. ATSP, along with its undirected counterpart, the Traveling Salesman Problem, is a classical combinatorial optimization problem
Many latent (factorized) models have been proposed for recommendation tasks like collaborative filtering and for ranking tasks like document or image retrieval and annotation. Common to all those methods is that during inference the items are scored independently by their similarity to the query in the latent embedding space. The structure of the ranked list (i.e. considering the set of items returned as a whole) is not taken into account. This can be a problem because the set of top predictions can be either too diverse (contain results that contradict each other) or not diverse enough
This document summarizes a study that used logistic regression to predict the probability of a second date between speed dating participants. It used variables like age, attractiveness ratings, and shared interests to build a model. The best model used only shared interests rated by the male and female as predictors. A threshold of 48% probability maximized sensitivity of predicting positive matches at 89%, though overall accuracy was only 67%. While not perfect, the model provides a reasonable way to forecast speed dating success based on participant ratings.
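The modelling step can be sketched as follows with scikit-learn; the simulated ratings and coefficients are placeholders for the speed-dating data, and only the 0.48 decision threshold is taken from the summary above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(5)
n = 500
# Hypothetical predictors: shared interests as rated by the male and the female.
shared_m = rng.uniform(1, 10, n)
shared_f = rng.uniform(1, 10, n)
logit = -6 + 0.4 * shared_m + 0.5 * shared_f
second_date = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([shared_m, shared_f])
model = LogisticRegression().fit(X, second_date)

# Classify with a 0.48 probability threshold instead of the default 0.5,
# trading overall accuracy for higher sensitivity to positive matches.
prob = model.predict_proba(X)[:, 1]
pred = prob >= 0.48
print("sensitivity:", round(recall_score(second_date, pred), 2))
print("accuracy:   ", round(accuracy_score(second_date, pred), 2))
```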
This document provides guidance on analyzing data using SPSS. It covers topics such as different data types, structuring data for analysis in SPSS, descriptive statistics, graphs, inferential statistics, and specific tests like t-tests, ANOVA, and correlation. The document is intended as a practical guide for researchers who need to analyze their data using SPSS. It defines key terms and provides examples to illustrate different statistical concepts and analysis procedures.
This document provides guidance on analysing data using SPSS. It discusses key considerations for determining the appropriate analysis method, including the type of data (nominal, ordinal, interval, ratio), whether the data is paired, whether it is parametric, and what is being examined (differences, correlations, etc.). It covers descriptive statistics, inferential statistics, and specific tests like t-tests, ANOVA, correlation, and chi-square. Examples are provided to illustrate different analysis techniques for various research study designs.
Assigning Scores For Ordered Categorical Responses (Mary Montoya)
This document summarizes a research article that proposes a new method for assigning scores to ordered categorical response variables in statistical analysis. Specifically, it discusses the ordered stereotype model, which allows for uneven spacing between categories of an ordinal variable through estimated score parameters. The article presents simulation studies showing the disadvantages of assuming equal spacing, and applies the ordered stereotype model to a real dataset, demonstrating non-equal spacing. It also proposes a new median measure for ordinal data based on estimated score parameters from the ordered stereotype model.
This thesis aims to formulate a simple measurement to evaluate and compare the predictive distributions of out-of-sample forecasts between autoregressive (AR) and vector autoregressive (VAR) models. The author conducts simulation studies to estimate AR and VAR models using Bayesian inference. A measurement is developed that uses out-of-sample forecasts and predictive distributions to evaluate the full forecast error probability distribution at different horizons. The measurement is found to accurately evaluate single forecasts and calibrate forecast models.
This document discusses bias and variance in machine learning models. It begins by introducing bias as a stronger force that is always present and harder to eliminate than variance. Several examples of bias are provided. Through simulations of sampling from a normal distribution, it is shown that sample statistics like the mean and standard deviation are always biased compared to the population parameters. Sample size also impacts bias, with larger samples having lower bias. Variance refers to a model's ability to generalize, with higher variance indicating overfitting. The tradeoff between bias and variance is that reducing one increases the other. Several techniques for optimizing this tradeoff are discussed, including cross-validation, bagging, boosting, dimensionality reduction, and changing the model complexity.
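The sampling experiment described can be reproduced in a few lines; the sketch below draws repeated samples from a normal distribution and shows that the plain (uncorrected) sample standard deviation underestimates the population value, with the bias shrinking as the sample grows.

```python
import numpy as np

rng = np.random.default_rng(6)
true_sigma = 2.0

# Repeatedly draw samples from N(0, sigma) and compare the average of the
# uncorrected sample standard deviation with the population value.
for n in (5, 20, 100, 1000):
    draws = rng.normal(0, true_sigma, size=(10_000, n))
    sd_hat = draws.std(axis=1, ddof=0)          # plain sample SD, no correction
    bias = sd_hat.mean() - true_sigma
    print(f"n={n:5d}  mean sample SD={sd_hat.mean():.3f}  bias={bias:+.3f}")
# The bias is negative and shrinks toward zero as the sample size grows.
```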
Data science notes for ASDS calicut 2.pptx (swapnaraghav)
Data science involves both statistics and practical hacking skills. It is the engineering of data - applying tools and theoretical understanding to data in a practical way. Statistical modeling is the process of using mathematical models to analyze and understand data in order to make general predictions. There are several statistical modeling techniques including linear regression, classification, resampling, non-linear models, tree-based methods, and neural networks. Unsupervised learning identifies patterns in data without pre-existing categories by techniques like clustering. Time series forecasting predicts future values based on patterns in historical time series data.
Contains:
a. Statistics-1
b. SAS-1
c. Statistics-2
d. Market Research
e. MS Excel
f. SAS-2
g. Data Audit & Data Sanitization
h. SQL
i. Model Building
j. HR
High-dimensional data presents a challenge for the classification problem because of the difficulty in modeling the precise relationship between the class variable and the large number of feature variables. In such cases, it can be desirable to reduce the information to a small number of dimensions in order to improve the accuracy and effectiveness of the classification process. While data reduction has been a well-studied problem for the unsupervised domain, this technique has not been explored as extensively for the supervised case. For practical use in the high-dimensional case, the existing techniques that try to perform dimensionality reduction are too slow. These techniques find global discriminants in the data. However, data behavior often varies with data locality, and different subspaces may show better discrimination in different localities. This is a more challenging task than the global discrimination problem because of the data localization issue. In this paper, I propose the PCA (Principal Component Analysis) method to create a reduced representation of the data for classification applications in an efficient and effective way. With this method, the procedure is extremely fast and scales almost linearly with both data set size and dimensionality.
This document provides an overview of descriptive statistics, inferential statistics, and regression analysis using PASW Statistics software. It discusses topics such as frequency analysis, measures of central tendency, hypothesis testing, t-tests, ANOVA, chi-square tests, correlation, and linear regression. The document is divided into multiple parts that cover opening and manipulating data files, descriptive statistics, tests of significance, regression analysis, and chi-square/ANOVA. It also discusses importing/exporting data and using scripts in PASW Statistics.
IDENTIFICATION OF OUTLIERS IN OXAZOLINES AND OXAZOLES HIGH DIMENSION MOLECULA... (IJDKP)
This document summarizes an algorithm called Principal Component Outlier Detection (PrCmpOut) for identifying outliers in high-dimensional molecular descriptor datasets. PrCmpOut uses principal component analysis to transform the data into a lower-dimensional space, where it can more efficiently detect outliers using robust estimators of location and covariance. The properties of PrCmpOut are analyzed and compared to other robust outlier detection methods through simulation studies using a dataset of oxazoline and oxazole molecular descriptors. Numerical results show PrCmpOut performs well at outlier detection in high-dimensional data.
This document discusses using multiple regression analysis to predict real estate sale prices. Several independent variables are considered as predictors, including floor height, distance from elevator, ocean view, whether it is an end unit, and whether furniture is included. The analysis finds some variables like ocean view and floor height are statistically significant in predicting sale price, while others like the interaction between distance from elevator and ocean view are also important. The regression model provides insight into how real estate businesses can focus their resources based on which factors most influence prices.
This document provides a tutorial on principal components analysis (PCA). It begins with an introduction to PCA and its applications. It then covers the necessary background mathematical concepts, including standard deviation, covariance, and eigenvalues/eigenvectors. The tutorial includes examples throughout and recommends a textbook for further mathematical information.
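The tutorial's pipeline, i.e. centre, covariance, eigen-decomposition, projection, can be condensed into a short NumPy sketch; the two-dimensional synthetic data are an assumption used only to keep the example small.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical correlated 2-D data.
X = rng.multivariate_normal([0, 0], [[3.0, 1.5], [1.5, 1.0]], size=300)

# 1) centre the data, 2) form the covariance matrix, 3) take its eigenvectors,
# 4) project onto the top components -- the steps the tutorial walks through.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)          # ascending order
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order]
scores = Xc @ components[:, :1]                 # keep the first principal component
print("explained variance ratio:", eigvals[order][0] / eigvals.sum())
```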
This presentation is on a recommender system for question paper prediction using machine learning techniques. We did a literature survey and implemented the system using the same techniques.
Multinomial logistic regression basic relationships (Anirudha si)
This document provides an overview of multinomial logistic regression. It discusses how multinomial logistic regression compares multiple groups through binary logistic regressions. It describes how to interpret the results, including evaluating the overall relationship between predictors and the dependent variable and relationships between individual predictors and the dependent variable. Requirements and assumptions of the analysis are explained, such as the dependent variable being non-metric and cases-to-variable ratios. Methods for evaluating model accuracy and usefulness are also outlined.
INFLUENCE OF DATA GEOMETRY IN RANDOM SUBSET FEATURE SELECTION (IJDKP)
The geometry of data, also known as its probability distribution, is an important consideration for the accurate computation of data mining tasks such as pre-processing, classification and interpretation. The data geometry influences the outcome and accuracy of statistical analysis to a large extent. The current paper focuses on understanding the influence of data geometry on the feature subset selection process using the random forest algorithm. In practice, it is assumed that the data follow a normal distribution, and most of the time this may not be true. The dimensionality reduction varies due to changes in the distribution of the data. A comparison is made using three standard distributions: Triangular, Uniform and Normal. The results are discussed in this paper.
Simulation Study of Hurdle Model Performance on Zero Inflated Count Data (Ian Camacho)
The document summarizes a simulation study that evaluates the performance of hurdle models on zero-inflated count data under different scenarios. It finds that hurdle models can omit significant predictors but their performance decreases substantially with multicollinearity, with about 50% larger errors and biased parameter estimates. The study generates data with different sample sizes from 100 to 1 million cases and introduces multicollinearity and omission of predictors to evaluate hurdle model adequacy.
Selection of appropriate data analysis technique (RajaKrishnan M)
- The document discusses choosing the right statistical method for data analysis, which depends on factors like the number and measurement level of variables, the distribution of variables, the dependence/independence structure, the nature of the hypotheses, and sample size.
- It presents flowcharts for choosing a statistical method based on whether the hypothesis involves one variable (univariate), two variables (bivariate), or more than two variables (multivariate).
- For univariate data, descriptive statistics or a one-sample t-test can be used depending on whether description or inference is the goal; for bivariate data, the choice depends on the nature of the hypothesis (difference or association) and the level of measurement (parametric or nonparame
UNIVERSAL ACCOUNT NUMBER (UAN) Manual
how to find uan number for pf
uan number epf registration
epf contact number
epf contact number kl
epf helpline number
how to find uan number in epf
old pf number to new pf number
old pf account number
how to get old pf number
how to know my pf account number
how to find the pf number
pf account number search
pf no check
how to know my pf balance
pf account balance enquiry
epf department contact number
epf ambattur contact number
epf claim status contact number
Advanced Tactics and Engagement Strategies for Google+ (Sunny Kr)
Succeeding Liam Walsh is Dan Petrovic, the CEO of Dejan SEO and a well-known search engine specialist. A well-traveled and seasoned presenter, Dan brought a myriad of underutilised Google+ capabilities to our attention.
A scalable gibbs sampler for probabilistic entity linking (Sunny Kr)
This document summarizes a research paper that proposes a scalable Gibbs sampling approach for probabilistic entity linking. The approach formulates entity linking as probabilistic inference in a topic model where each topic corresponds to a Wikipedia article. It introduces an efficient Gibbs sampling scheme that exploits the sparsity in the Wikipedia-LDA model to allow inference over millions of topics. Experimental results show it achieves state-of-the-art performance on the Aida-CoNLL dataset.
Through a detailed analysis of logs of activity for all Google employees, this paper shows how the Google Docs suite (documents, spreadsheets and slides) enables and increases collaboration within Google. In particular, visualization and analysis of the evolution of Google’s collaboration network show that new employees, have started collaborating more quickly and with more people as usage of Docs has grown.
Think Hotels - Book Cheap Hotels Worldwide (Sunny Kr)
This document provides information on booking hotels through an online service. It allows users to choose from over 25,000 hotel locations worldwide and offers the best prices with a single click. Additional details are provided about available accommodations.
Wireframes are basic sketches of an app or website that communicate design ideas without visual elements like fonts or colors. They allow teams to align expectations early on and are used by business decision makers to visualize requirements, developers to understand technical needs, and QA testers to write test scripts. The process involves starting with requirements, creating visual mockups that are iterated based on feedback, building the product while updating designs, and keeping wireframes current throughout development and testing.
comScore Inc. - 2013 Mobile Future in Focus (Sunny Kr)
This document provides an overview and analysis of the mobile and connected device landscape in the United States and internationally. Some of the key points covered include:
- Smartphone and tablet adoption has surged, with over 120 million smartphone owners and nearly 50 million tablet owners in the US. This widespread adoption is ushering in a new "Brave New Digital World" of multi-platform media consumption.
- Mobile channels now account for over 1 in 3 minutes of digital media time spent, demonstrating the rise of multi-platform consumption as a new reality. Leading digital properties are extending their reach by 29% on average through mobile.
- The top mobile platforms are Android and iOS, which combined control nearly 90% of the
Search results clustering (SRC) is a challenging algorithmic problem that requires grouping together the results returned by one or more search engines in topically coherent clusters, and labeling the clusters with meaningful phrases describing the topics of the results included in them.
HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardin... (Sunny Kr)
Cardinality estimation has a wide range of applications and is of particular importance in database systems. Various algorithms have been proposed in the past, and the HyperLogLog algorithm is one of them
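For orientation, a toy HyperLogLog estimator is sketched below; it keeps only the basic register and leading-zero machinery and omits the bias corrections and sparse representation that the paper's engineering contribution is about.

```python
import hashlib

def hll_estimate(items, b=10):
    """Toy HyperLogLog: b index bits -> m = 2**b registers. This omits the
    bias corrections and sparse representation engineered in the paper."""
    m = 1 << b
    registers = [0] * m
    for item in items:
        x = int(hashlib.sha1(str(item).encode()).hexdigest(), 16) & ((1 << 64) - 1)
        idx = x & (m - 1)                        # low b bits pick the register
        w = x >> b                               # remaining 64 - b bits
        rank = (64 - b) - w.bit_length() + 1     # position of the leftmost 1-bit
        registers[idx] = max(registers[idx], rank)
    alpha = 0.7213 / (1 + 1.079 / m)
    return alpha * m * m / sum(2.0 ** -r for r in registers)

print(round(hll_estimate(range(100_000))))   # close to 100000 (a few % error)
```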
Auctions for perishable goods such as internet ad inventory need to make real-time allocation and pricing decisions as the supply of the good arrives in an online manner, without knowing the entire supply in advance. These allocation and pricing decisions get complicated when buyers
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and the licenses under the CCB and CCX model have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefit it brings you. Above all, you certainly want to stay within your budget and save costs wherever possible. We understand that, and we want to help you do it!
We explain how to solve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts in order to save money. There are also some approaches that can lead to unnecessary spending, for example when a person document is used instead of a mail-in for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to keep the overview. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future as well.
These topics are covered
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices you can apply immediately
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
CAKE: Sharing Slices of Confidential Data on BlockchainClaudio Di Ciccio
Presented at the CAiSE 2024 Forum, Intelligent Information Systems, June 6th, Limassol, Cyprus.
Synopsis: Cooperative information systems typically involve various entities in a collaborative process within a distributed environment. Blockchain technology offers a mechanism for automating such processes, even when only partial trust exists among participants. The data stored on the blockchain is replicated across all nodes in the network, ensuring accessibility to all participants. While this aspect facilitates traceability, integrity, and persistence, it poses challenges for adopting public blockchains in enterprise settings due to confidentiality issues. In this paper, we present a software tool named Control Access via Key Encryption (CAKE), designed to ensure data confidentiality in scenarios involving public blockchains. After outlining its core components and functionalities, we showcase the application of CAKE in the context of a real-world cyber-security project within the logistics domain.
Paper: https://doi.org/10.1007/978-3-031-61000-4_16
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Things to Consider When Choosing a Website Developer for your Website | FODUUFODUU
Choosing the right website developer is crucial for your business. This article covers essential factors to consider, including experience, portfolio, technical skills, communication, pricing, reputation & reviews, cost and budget considerations and post-launch support. Make an informed decision to ensure your website meets your business goals.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
20240609 QFM020 Irresponsible AI Reading List May 2024
Data enriched linear regression
Aiyou Chen (Google Inc.), Art B. Owen∗ (Stanford University), Minghui Shi (Google Inc.)
December 2012
Abstract
We present a linear regression method for predictions on a small data
set making use of a second possibly biased data set that may be much
larger. Our method fits linear regressions to the two data sets while
penalizing the difference between predictions made by those two models.
The resulting algorithm is a shrinkage method similar to those used in
small area estimation. Our main result is a Stein-type finding for Gaussian
responses: when the model has 5 or more coefficients and 10 or more error
degrees of freedom, it becomes inadmissible to use only the small data set,
no matter how large the bias is. We also present both plug-in and AICc-
based methods to tune the penalty parameter. Most of our results use an
L2 penalty, but we also obtain formulas for L1 penalized estimates when
the model is specialized to the location setting.
1 Introduction
The problem we consider here is how to combine linear regressions based on data
from two sources. There is a small data set of expensive high quality observa-
tions and a possibly much larger data set with less costly observations. The big
data set is thought to have similar but not identical statistical characteristics
to the small one. The conditional expectation might be different there or the
predictor variables might have been measured in somewhat different ways. The
motivating application comes from within Google. The small data set is a panel
of consumers, selected by a probability sample, who are paid to share their
internet viewing data along with other data on television viewing. There is a
second and potentially much larger panel, not selected by a probability sample, whose members have opted in to the data collection process.
The goal is to make predictions for the population from which the smaller
sample was drawn. If the data are identically distributed in both samples, we
should simply pool them. If the big data set is completely different from the
small one, then it makes sense to ignore it and fit only to the smaller data set.
∗ Art Owen was a paid consultant for this project; it was not part of his Stanford responsibilities.
Many settings are intermediate between these extremes: the big data set is
similar but not necessarily identical to the small one. We stand to benefit from
using the big data set at the risk of introducing some bias. Our goal is to glean
some information from the larger data set to increase accuracy for the smaller
one. The difficulty is that our best information about how the two populations
are similar is our samples from them.
The motivating problem at Google has some differences from the problem
we consider here. There were response variables observed in the small sample
that were not observed in the large one and the goal was to study the joint
distribution of those responses. That problem also had binary responses instead
of the continuous ones considered here. This paper studies linear regression
because it is more amenable to theoretical analysis and thus allows us to explain
the results we saw.
The linear regression method we use is a hybrid between simply pooling
the two data sets and fitting separate models to them. As explained in more
detail below, we apply shrinkage methods penalizing the difference between the
regression coefficients for the two data sets. Both the specific penalties we use,
and our tuning strategies, reflect our greater interest in the small data set. Our
goal is to enrich the analysis of the smaller data set using possibly biased data
from the larger one.
Section 2 presents our notation and introduces L1 and L2 penalties on the
parameter difference. Most of our results are for the L2 penalty. For the L2
penalty, the resulting estimate is a linear combination of the two within sample
estimates. Theorem 1 gives a formula for the degrees of freedom of that estimate.
Theorem 2 presents the mean squared error of the estimator and forms the basis
for plug-in estimation of an oracle’s value when an L2 penalty is used.
Section 3 considers in detail the case where the regression simplifies to esti-
mation of a population mean. In that setting, we can determine how plug-in,
bootstrap and cross-validation estimates of tuning parameters behave. We get
an expression for how much information the large sample can add. Theorem 3
gives a soft-thresholding expression for the estimate produced by L1 penaliza-
tion and Theorem 4 can be used to find the penalty parameter that an L1 oracle
would choose when the data are Gaussian.
Section 4 presents some simulated examples. We simulate the location prob-
lem and find that numerous L2 penalty methods are admissible, varying in how
aggressively they use the larger sample. The L1 oracle is outperformed by the
L2 oracle in this setting. When the bias is small, the data enrichment methods
improve upon the small sample, but when the bias is large then it is best to use
the small sample only. Things change when we simulate the regression model.
For dimension d ≥ 5, data enrichment outperforms the small sample method in our simulations at all bias levels. We did not see such an inadmissibility outcome when we simulated cases with d ≤ 4.
Section 5 presents our main theoretical result, Theorem 5. When there are 5
or more predictors and 10 or more degrees of freedom for error, then some of our
data enrichment estimators make simply using the small sample inadmissible.
The reduction in mean squared error is greatest when the bias is smallest, but
no matter how large the bias is, we gain an improvement. This result is similar
to Stein’s classic result on estimation of a Gaussian mean (Stein, 1956), but
the critical threshold here is dimension 5, not dimension 3. The estimator we
study employs a data-driven weighting of the two within-sample least squares
estimators. We believe that our plug-in estimator is even better than this one.
We have tested our method on some Google data. Privacy considerations
do not allow us to describe it in detail. We have seen data enrichment perform
better than pooling the two samples and better than ignoring the larger one. We
have also seen data enrichment do worse than pooling but better than ignoring
the larger sample. Our theory allows for pooling the data to be better than data
enrichment. That may just be a sign that the bias between the two populations
was very small.
There are many ideas in different literatures on combining non-identically
distributed data sets in order to share or borrow statistical strength. Of these,
the closest to our work is small area estimation (Rao, 2003) used in survey
sampling. In chemometrics there is a similar problem called transfer calibration
(Feudale et al., 2002). Medicine and epidemiology among other fields use meta-
analysis (Borenstein et al., 2009). Data fusion (D’Orazio et al., 2006) is widely
used in marketing. The problem has been studied for machine learning where
it is called transfer learning. An older machine learning term for the underly-
ing issue is concept drift. Bayesian statisticians use hierarchical models. Our
methods are more similar to empirical Bayes methods, drawing heavily on ideas
of Charles Stein. A Stein-like result also holds for multiple regression in the
context of just one sample. The result is intermediate between our two sample
regression setting and the one sample mean problem. In regression, shrinkage
makes the usual MLE inadmissible in dimension p ≥ 4 (with the intercept
counted as one dimension) and a sufficiently large n. See Copas (1983) for a
discussion of shrinkage in regression and Stein (1960) who also obtained this
result for regression, but under stronger assumptions.
A more detailed discussion of these different but overlapping literatures is
in Section 6. Some of our proofs are given in an Appendix.
There are also settings where one might want to use a small data set to enrich
a large one. For example the small data set may have a better design matrix or
smaller error variance. Such possibilities are artificial in the motivating context
so we don’t investigate them further here.
2 Data enrichment regression
Consider linear regression with a response Y ∈ R and predictors X ∈ R^d. The model for the small data set is

$$Y_i = X_i\beta + \varepsilon_i, \qquad i \in S,$$

for a parameter β ∈ R^d and independent errors ε_i with mean 0 and variance σ_S². Now suppose that the data in the big data set follow

$$Y_i = X_i(\beta + \gamma) + \varepsilon_i, \qquad i \in B,$$

where γ ∈ R^d is a bias parameter and ε_i are independent with mean 0 and variance σ_B². The sample sizes are n in the small sample and N in the big sample.
There are several kinds of departures of interest. It could be, for instance,
that the overall level of Y is different in S than in B but that the trends are
similar. That is, perhaps only the intercept component of γ is nonzero. More
generally, the effects of some but not all of the components in X may differ in
the two samples. One could apply hypothesis testing to each component of γ
but that is unattractive as the number of scenarios to test for grows as 2^d.
Let X_S ∈ R^{n×d} and X_B ∈ R^{N×d} have rows made of the vectors X_i for i ∈ S and i ∈ B respectively. Similarly, let Y_S ∈ R^n and Y_B ∈ R^N be the corresponding vectors of response values. We use V_S = X_S^T X_S and V_B = X_B^T X_B.
2.1 Partial pooling via shrinkage and weighting
Our primary approach is to pool the data but put a shrinkage penalty on γ. We
estimate β and γ by minimizing
$$\sum_{i\in S}(Y_i - X_i\beta)^2 + \sum_{i\in B}\bigl(Y_i - X_i(\beta+\gamma)\bigr)^2 + \lambda P(\gamma) \qquad (1)$$

where λ ∈ [0, ∞] and P(γ) ≥ 0 is a penalty function. There are several reasonable choices for the penalty function, including

$$\|\gamma\|_2^2, \quad \|X_S\gamma\|_2^2, \quad \|\gamma\|_1, \quad\text{and}\quad \|X_S\gamma\|_1.$$
For each of these penalties, setting λ = 0 leads to separate fits β̂ and β̂ + γ̂ in the two data sets. Similarly, taking λ = ∞ constrains γ̂ = 0 and amounts to pooling the samples. In many applications one will want to regularize β̂ as well, but in this paper we only penalize γ.
The L1 penalties have an advantage in interpretation because they identify
which parameters or which specific observations might be differentially affected.
The quadratic penalties are simpler, so we focus most of this paper on them.
Both quadratic penalties can be expressed as ‖X_T γ‖₂² for a matrix X_T. The rows of X_T represent a hypothetical target population of N_T items for prediction. Or more generally, the matrix Σ = Σ_T = X_T^T X_T is proportional to the matrix of mean squares and mean cross-products for predictors in the target population.
If we want to remove the pooling effect from one of the coefficients, such
as the intercept term, then the corresponding column of XT should contain all
zeros. We can also constrain γj = 0 (by dropping its corresponding predictor)
in order to enforce exact pooling on the j’th coefficient.
A second, closely related approach is to fit β̂_S by minimizing ∑_{i∈S}(Y_i − X_iβ)², fit β̂_B by minimizing ∑_{i∈B}(Y_i − X_iβ)², and then estimate β by

$$\hat\beta(\omega) = \omega\hat\beta_S + (1-\omega)\hat\beta_B$$

for some 0 ≤ ω ≤ 1. In some special cases the estimates indexed by the weighting
parameter ω ∈ [n/(n + N ), 1] are a relabeling of the penalty-based estimates
indexed by the parameter λ ∈ [0, ∞]. In other cases, the two families of estimates
differ. The weighting approach allows simpler tuning methods. Although we
think that the penalization method may be superior, we can prove stronger
results about the weighting approach.
Given two values of λ, we consider the larger one to be more 'aggressive' in that it makes more use of the big sample, bringing with it the risk of more bias in return for a variance reduction. Similarly, aggressive estimators correspond
to small weights ω on the small target sample.
2.2 Special cases
An important special case for our applications is the cell partition model. In
the cell partition model, Xi is a vector containing C − 1 zeros and one 1. The
model has C different cells in it. Cell c has Nc observations from the large data
set and nc observations from the small data set. In an advertising context a cell
may correspond to one specific demographic subset of consumers. The cells may
be chosen exogenously to the given data sets. When the cells are constructed
using the regression data then cross-validation or other methods should be used.
A second special case, useful in theoretical investigations, has X_S^T X_S ∝ X_B^T X_B. This is the proportional design matrix case.
The simplest case of all is the location model. It is the cell mean model
with C = 1 cell, and it has proportional design matrices. We can get formulas
for the optimal tuning parameter in the location model and it is also a good
workbench for comparing estimates of tuning parameters. Furthermore, we are
able to get some results for the L1 case in the location model setting.
2.3 Quadratic penalties and degrees of freedom
The quadratic penalty takes the form P(γ) = ‖X_T γ‖₂² = γ^T V_T γ for a matrix X_T ∈ R^{r×d} and V_T = X_T^T X_T ∈ R^{d×d}. The value r is d or n in the examples above and could take other values in different contexts. Our criterion becomes

$$\|Y_S - X_S\beta\|^2 + \|Y_B - X_B(\beta+\gamma)\|^2 + \lambda\|X_T\gamma\|^2. \qquad (2)$$

Here and below ‖x‖ means the Euclidean norm ‖x‖₂.

Given the penalty matrix X_T and a value for λ, the penalized sum of squares (2) is minimized by β̂_λ and γ̂_λ satisfying

$$X^T X\begin{pmatrix}\hat\beta_\lambda\\ \hat\gamma_\lambda\end{pmatrix} = X^T Y$$

where

$$X = \begin{pmatrix} X_S & 0\\ X_B & X_B\\ 0 & \lambda^{1/2}X_T\end{pmatrix} \in \mathbb{R}^{(n+N+r)\times 2d}, \quad\text{and}\quad Y = \begin{pmatrix} Y_S\\ Y_B\\ 0\end{pmatrix}. \qquad (3)$$
To avoid uninteresting complications we suppose that the matrix X^T X is invertible. The representation (3) also underlies a convenient computational approach to fitting β̂_λ and γ̂_λ using r rows of pseudo-data, just as one does in ridge regression.

The estimate β̂_λ can be written in terms of β̂_S = V_S^{-1}X_S^T Y_S and β̂_B = V_B^{-1}X_B^T Y_B as the next lemma shows.
Lemma 1. Let X_S, X_B, and X_T in (2) all have rank d. Then for any λ ≥ 0, the minimizers β̂ and γ̂ of (2) satisfy

$$\hat\beta = W_\lambda\hat\beta_S + (I - W_\lambda)\hat\beta_B$$

and γ̂ = (V_B + λV_T)^{-1}V_B(β̂_B − β̂) for the matrix

$$W_\lambda = (V_S + \lambda V_T V_B^{-1}V_S + \lambda V_T)^{-1}(V_S + \lambda V_T V_B^{-1}V_S). \qquad (4)$$

If V_T = V_S, then

$$W_\lambda = (V_B + \lambda V_S + \lambda V_B)^{-1}(V_B + \lambda V_S).$$
Proof. The normal equations of (2) are

$$(V_B + V_S)\hat\beta = V_S\hat\beta_S + V_B\hat\beta_B - V_B\hat\gamma \quad\text{and}\quad (V_B + \lambda V_T)\hat\gamma = V_B\hat\beta_B - V_B\hat\beta.$$

Solving the second equation for γ̂, plugging the result into the first and solving for β̂ yields the result with W_λ = (V_S + V_B − V_B(V_B + λV_T)^{-1}V_B)^{-1}V_S. This expression for W_λ simplifies as given and simplifies further when V_T = V_S.
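To make the computation concrete, here is a minimal numpy sketch (our own code and naming, not from the paper) that fits β̂_λ and γ̂_λ both through the stacked pseudo-data representation (3) and through the weighting formula of Lemma 1, then checks that the two agree on simulated data.

```python
import numpy as np

def enriched_fit(XS, YS, XB, YB, XT, lam):
    """Minimize criterion (2) by least squares on the stacked pseudo-data (3)."""
    n, d = XS.shape
    N, r = XB.shape[0], XT.shape[0]
    # Stacked design: rows for sample S, rows for sample B, and r ridge-like pseudo-data rows.
    X = np.zeros((n + N + r, 2 * d))
    X[:n, :d] = XS
    X[n:n + N, :d] = XB
    X[n:n + N, d:] = XB
    X[n + N:, d:] = np.sqrt(lam) * XT
    Y = np.concatenate([YS, YB, np.zeros(r)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef[:d], coef[d:]

def enriched_fit_lemma1(XS, YS, XB, YB, XT, lam):
    """Same estimate via Lemma 1: beta_hat = W_lam beta_S + (I - W_lam) beta_B."""
    VS, VB, VT = XS.T @ XS, XB.T @ XB, XT.T @ XT
    bS = np.linalg.solve(VS, XS.T @ YS)
    bB = np.linalg.solve(VB, XB.T @ YB)
    core = VT @ np.linalg.solve(VB, VS)           # V_T V_B^{-1} V_S
    W = np.linalg.solve(VS + lam * core + lam * VT, VS + lam * core)
    beta = W @ bS + (np.eye(len(bS)) - W) @ bB
    gamma = np.linalg.solve(VB + lam * VT, VB @ (bB - beta))
    return beta, gamma

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, N, d = 50, 500, 3
    XS, XB = rng.normal(size=(n, d)), rng.normal(size=(N, d))
    beta, gamma = np.ones(d), 0.3 * rng.normal(size=d)
    YS = XS @ beta + rng.normal(size=n)
    YB = XB @ (beta + gamma) + rng.normal(size=N)
    b1, g1 = enriched_fit(XS, YS, XB, YB, XS, lam=2.0)
    b2, g2 = enriched_fit_lemma1(XS, YS, XB, YB, XS, lam=2.0)
    print(np.allclose(b1, b2), np.allclose(g1, g2))   # both True
```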
The remaining challenge in model fitting is to choose a value of λ. Because
we are only interested in making predictions for the S data, not the B data,
the ideal value of λ is one that optimizes the prediction error on sample S. One
reasonable approach is to use cross-validation by holding out a portion of sample
S and predicting the held-out values from a model fit to the held-in ones as well
as the entire B sample. One may apply either leave-one-out cross-validation or
more general K-fold cross-validation. In the latter case, sample S is split into K
nearly equally sized parts and predictions based on sample B and K − 1 parts
of sample S are used for the K’th held-out fold of sample S.
In some of our applications we prefer to use criteria such as AIC, AICc,
or BIC in order to avoid the cost and complexity of cross-validation. These
alternatives are of most value when data enrichment is itself the inner loop of a
more complicated algorithm.
To compute AIC and alternatives, we need to measure the degrees of freedom
used in fitting the model. We follow Ye (1998) and Efron (2004) in defining the
degrees of freedom to be
$$\mathrm{df}(\lambda) = \frac{1}{\sigma_S^2}\sum_{i\in S}\mathrm{cov}(\hat Y_i, Y_i), \qquad (5)$$
where Ŷ_S = X_S β̂_λ. Because of our focus on the S data, only the S data appear
in the degrees of freedom formula. We will see later that the resulting AIC
type estimates based on the degrees of freedom perform similarly to our focused
cross-validation described above.
Theorem 1. For data enriched regression the degrees of freedom given at (5) satisfies df(λ) = tr(W_λ) where W_λ is given in Lemma 1. If V_T = V_S, then

$$\mathrm{df}(\lambda) = \sum_{j=1}^{d}\frac{1+\lambda\nu_j}{1+\lambda+\lambda\nu_j} \qquad (6)$$

where ν_1, …, ν_d are the eigenvalues of V_S^{1/2}V_B^{-1}V_S^{1/2}, in which V_S^{1/2} is a symmetric matrix square root of V_S.
Proof. Please see Section 8.1 in the Appendix.
With a notion of degrees of freedom customized to the data enrichment
context we can now define the corresponding criteria such as
$$\mathrm{AIC}(\lambda) = n\log(\hat\sigma_S^2(\lambda)) + n\Bigl(1 + \frac{2\,\mathrm{df}(\lambda)}{n}\Bigr), \quad\text{and}$$

$$\mathrm{AICc}(\lambda) = n\log(\hat\sigma_S^2(\lambda)) + n\Bigl(1 + \frac{\mathrm{df}(\lambda)}{n}\Bigr)\Big/\Bigl(1 - \frac{\mathrm{df}(\lambda)+2}{n}\Bigr), \qquad (7)$$

where σ̂_S²(λ) = (n−d)^{-1}∑_{i∈S}(Y_i − X_iβ̂(λ))². The AIC is more appropriate than
BIC here since our goal is prediction accuracy, not model selection. We prefer
the AICc criterion of Hurvich and Tsai (1989) because it is more conservative
as the degrees of freedom become large compared to the sample size.
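A small sketch (our notation, assuming V_T = V_S, with β̂(λ) supplied by a fit such as the one sketched earlier) of how the eigenvalues ν_j, the degrees of freedom (6), and the AICc criterion (7) can be evaluated:

```python
import numpy as np

def nu_eigenvalues(VS, VB):
    """Eigenvalues of VS^{1/2} VB^{-1} VS^{1/2}, using a symmetric square root of VS."""
    w, U = np.linalg.eigh(VS)
    VS_half = U @ np.diag(np.sqrt(w)) @ U.T
    return np.linalg.eigvalsh(VS_half @ np.linalg.solve(VB, VS_half))

def df(lam, nu):
    """Degrees of freedom (6) for the case V_T = V_S."""
    if np.isinf(lam):
        return np.sum(nu / (1.0 + nu))
    return np.sum((1.0 + lam * nu) / (1.0 + lam + lam * nu))

def aicc(lam, nu, XS, YS, beta_hat):
    """AICc criterion (7); beta_hat is the enriched estimate at this lambda."""
    n, d = XS.shape
    resid = YS - XS @ beta_hat
    sigma2 = resid @ resid / (n - d)
    k = df(lam, nu)
    return n * np.log(sigma2) + n * (1.0 + k / n) / (1.0 - (k + 2.0) / n)
```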
Next we illustrate some special cases of the degrees of freedom formula in
Theorem 1. First, suppose that λ = 0, so that there is no penalization on γ.
Then df(0) = tr(I) = d as is appropriate for regression on sample S only.
We can easily see that the degrees of freedom are monotone decreasing in λ. As λ → ∞ the degrees of freedom drop to df(∞) = ∑_{j=1}^{d} ν_j/(1 + ν_j). This can be much smaller than d. For instance in the proportional design case, V_S = nΣ and V_B = NΣ for a matrix Σ. Then all ν_j = n/N and so df(∞) = d/(1 + N/n), which is quite small when n ≪ N.

For the cell partition model, d becomes C, Σ_S = diag(n_c) and Σ_B = diag(N_c). In this case df(∞) = ∑_{c=1}^{C} n_c/(n_c + N_c), which will usually be much smaller than df(0) = C.
Monotonicity of the degrees of freedom makes it easy to search for the value
λ which delivers a desired degrees of freedom. We have found it useful to inves-
tigate λ over a numerical grid corresponding to degrees of freedom decreasing
from d by an amount ∆ (such as 0.25) to the smallest such value above df(∞).
It is easy to adjoin λ = ∞ (sample pooling) to this list as well.
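Since df(λ) is monotone decreasing in λ, any attainable target degrees of freedom can be matched by bisection. A minimal sketch (our naming; the eigenvalues ν_j in the example are hypothetical inputs):

```python
import numpy as np

def lambda_for_df(target_df, nu, lam_hi=1e12, tol=1e-8):
    """Find lambda with df(lambda) == target_df by bisection on the monotone df curve."""
    def df(lam):
        return np.sum((1.0 + lam * nu) / (1.0 + lam + lam * nu))
    lo, hi = 0.0, lam_hi
    if not (df(hi) <= target_df <= df(lo)):
        raise ValueError("target_df outside attainable range")
    while hi - lo > tol * (1.0 + hi):
        mid = 0.5 * (lo + hi)
        if df(mid) > target_df:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    nu = np.array([0.1, 0.2, 0.05, 0.3, 0.15])        # hypothetical eigenvalues nu_j
    d, df_inf = len(nu), np.sum(nu / (1.0 + nu))
    grid = np.arange(d, df_inf, -0.25)                  # d, d - 0.25, d - 0.5, ...
    lambdas = [lambda_for_df(t, nu) for t in grid[1:]]  # skip t = d, which is lambda = 0
    print(np.round(lambdas, 3))
```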
2.4 Predictive mean square errors
Here we develop an oracle’s choice for λ and a corresponding plug-in estimate.
We work in the case where VS = VT and we assume that VS has full rank. Given
λ, the predictive mean square error is E(‖X_S(β̂ − β)‖²).

We will use a symmetric square root V_S^{1/2} of V_S as well as the matrix M = V_S^{1/2}V_B^{-1}V_S^{1/2} with eigendecomposition M = UDU^T, where the j'th column of U is u_j and D = diag(ν_j).
Theorem 2. The predictive mean square error of the data enrichment estimator is

$$E\bigl(\|X_S(\hat\beta - \beta)\|^2\bigr) = \sigma_S^2\sum_{j=1}^{d}\frac{(1+\lambda\nu_j)^2}{(1+\lambda+\lambda\nu_j)^2} + \sum_{j=1}^{d}\frac{\lambda^2\kappa_j^2}{(1+\lambda+\lambda\nu_j)^2} \qquad (8)$$

where κ_j² = u_j^T V_S^{1/2}ΘV_S^{1/2}u_j for Θ = γγ^T + σ_B²V_B^{-1}.
Proof. Please see Section 8.2.
The first term in (8) is a variance term. It equals dσ_S² when λ = 0 but for
λ > 0 it is reduced due to the use of the big sample. The second term represents
the error, both bias squared and variance, introduced by the big sample.
2.5 A plug-in method
A natural choice of λ is to minimize the predictive mean square error, which
must be estimated. We propose a plug-in method that replaces the unknown
parameters σ_S² and κ_j² from Theorem 2 by sample estimates. For estimates σ̂_S² and κ̂_j² we choose

$$\hat\lambda = \arg\min_{\lambda\ge 0}\;\sum_{j=1}^{d}\frac{\hat\sigma_S^2(1+\lambda\nu_j)^2 + \lambda^2\hat\kappa_j^2}{(1+\lambda+\lambda\nu_j)^2}. \qquad (9)$$
From the sample data we take σ̂_S² = ‖Y_S − X_S β̂_S‖²/(n−d). A straightforward plug-in estimate of Θ is

$$\hat\Theta_{\mathrm{plug}} = \hat\gamma\hat\gamma^T + \hat\sigma_B^2 V_B^{-1},$$

where γ̂ = β̂_B − β̂_S. Now we take κ̂_j² = u_j^T V_S^{1/2}Θ̂V_S^{1/2}u_j, recalling that u_j and ν_j derive from the eigendecomposition of M = V_S^{1/2}V_B^{-1}V_S^{1/2}. The resulting optimization yields an estimate λ̂_plug.

The estimate Θ̂_plug is biased upwards because E(γ̂γ̂^T) = γγ^T + σ_B²V_B^{-1} + σ_S²V_S^{-1}. We have used the bias-adjusted plug-in estimate

$$\hat\Theta_{\mathrm{bapi}} = \hat\sigma_B^2 V_B^{-1} + \bigl(\hat\gamma\hat\gamma^T - \hat\sigma_B^2 V_B^{-1} - \hat\sigma_S^2 V_S^{-1}\bigr)_+ \qquad (10)$$
where the positive part operation on a symmetric matrix preserves its eigenvectors but replaces any negative eigenvalues by 0. Similar results can be obtained with Θ̂_bapi = (γ̂γ̂^T − σ̂_S²V_S^{-1})_+. This latter estimator is somewhat simpler but the former has the advantage of being at least as large as σ̂_B²V_B^{-1} while the latter can degenerate to 0.
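The plug-in recipe of this subsection can be written out directly. The sketch below (our code; scipy's bounded scalar minimizer is one of several reasonable choices, and the search is done on a log scale) builds Θ̂ from (10) or its unadjusted version, forms κ̂_j², and minimizes the estimated risk (9):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def positive_part(A):
    """Symmetric positive part: keep eigenvectors, zero out negative eigenvalues."""
    w, U = np.linalg.eigh(A)
    return U @ np.diag(np.maximum(w, 0.0)) @ U.T

def plug_in_lambda(XS, YS, XB, YB, bias_adjust=True):
    """Plug-in choice of lambda from (9), with V_T = V_S as in Section 2.4."""
    n, d = XS.shape
    N = XB.shape[0]
    VS, VB = XS.T @ XS, XB.T @ XB
    bS = np.linalg.solve(VS, XS.T @ YS)
    bB = np.linalg.solve(VB, XB.T @ YB)
    s2S = np.sum((YS - XS @ bS) ** 2) / (n - d)
    s2B = np.sum((YB - XB @ bB) ** 2) / (N - d)
    gam = bB - bS
    VBinv, VSinv = np.linalg.inv(VB), np.linalg.inv(VS)
    if bias_adjust:   # Theta hat from (10)
        Theta = s2B * VBinv + positive_part(np.outer(gam, gam) - s2B * VBinv - s2S * VSinv)
    else:             # simple plug-in Theta hat
        Theta = np.outer(gam, gam) + s2B * VBinv
    # Eigendecomposition of M = VS^{1/2} VB^{-1} VS^{1/2}.
    w, U = np.linalg.eigh(VS)
    VS_half = U @ np.diag(np.sqrt(w)) @ U.T
    nu, Umat = np.linalg.eigh(VS_half @ VBinv @ VS_half)
    kappa2 = np.array([u @ VS_half @ Theta @ VS_half @ u for u in Umat.T])
    def risk(lam):    # estimated predictive MSE, i.e. (8) with plug-in values
        denom = (1.0 + lam + lam * nu) ** 2
        return np.sum((s2S * (1.0 + lam * nu) ** 2 + lam ** 2 * kappa2) / denom)
    res = minimize_scalar(lambda t: risk(np.exp(t)), bounds=(-20, 20), method="bounded")
    lam = float(np.exp(res.x))
    return lam if risk(lam) < risk(0.0) else 0.0
```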
3 The location model
The simplest instance of our problem is the location model where XS is a column
of n ones and XB is a column of N ones. Then the vector β is simply a scalar
intercept that we call µ and the vector γ is a scalar mean difference that we call
δ. The response values in the small data set are Yi = µ + εi while those in the
big data set are Yi = (µ + δ) + εi . Every quadratic penalty defines the same
family of estimators as we get using the penalty λδ².
The quadratic criterion is ∑_{i∈S}(Y_i − µ)² + ∑_{i∈B}(Y_i − µ − δ)² + λδ². Taking V_S = n, V_B = N and V_T = 1 in Lemma 1 yields

$$\hat\mu = \omega\bar Y_S + (1-\omega)\bar Y_B \quad\text{with}\quad \omega = \frac{nN + n\lambda}{nN + n\lambda + N\lambda} = \frac{1+\lambda/N}{1+\lambda/N+\lambda/n}.$$

Choosing a value for ω corresponds to choosing

$$\lambda = \frac{nN(1-\omega)}{N\omega - n(1-\omega)}.$$
The degrees of freedom in this case reduce to df(λ) = ω, which ranges from
df(0) = 1 down to df(∞) = n/(n + N ).
3.1 Oracle estimator of ω
The mean square error of µ̂(ω) is

$$\mathrm{MSE}(\omega) = \omega^2\frac{\sigma_S^2}{n} + (1-\omega)^2\Bigl(\frac{\sigma_B^2}{N} + \delta^2\Bigr).$$

The mean square optimal value of ω (available to an oracle) is

$$\omega_{\mathrm{orcl}} = \frac{\delta^2 + \sigma_B^2/N}{\delta^2 + \sigma_B^2/N + \sigma_S^2/n}.$$

Pooling the data corresponds to ω_pool = n/(N+n) and makes µ̂ equal the pooled mean Ȳ_P ≡ (nȲ_S + NȲ_B)/(n+N). Ignoring the large data set corresponds to ω_S = 1. Here ω_pool ≤ ω_orcl ≤ ω_S. The oracle's choice of ω can be used to infer the oracle's choice of λ. It is

$$\lambda_{\mathrm{orcl}} = \frac{nN(1-\omega_{\mathrm{orcl}})}{N\omega_{\mathrm{orcl}} - n(1-\omega_{\mathrm{orcl}})} = \frac{N\sigma_S^2}{N\delta^2 + \sigma_B^2 - \sigma_S^2}. \qquad (11)$$
The mean squared error reduction for the oracle is

$$\frac{\mathrm{MSE}(\omega_{\mathrm{orcl}})}{\mathrm{MSE}(\omega_S)} = \omega_{\mathrm{orcl}}, \qquad (12)$$

after some algebra. If δ ≠ 0, then as min(n, N) → ∞ we find ω_orcl → 1 and the optimal λ corresponds to simply using the small sample and ignoring the large one. If we suppose that δ ≠ 0 and N → ∞ then the effective sample size for data enrichment may be defined using (12) as

$$\tilde n = \frac{n}{\omega_{\mathrm{orcl}}} = n\,\frac{\delta^2 + \sigma_B^2/N + \sigma_S^2/n}{\delta^2 + \sigma_B^2/N} \to n + \frac{\sigma_S^2}{\delta^2}. \qquad (13)$$

The mean squared error from data enrichment with n observations in the small sample, using the oracle's choice of λ, matches that of ñ IID observations from the small sample. We effectively gain up to σ_S²/δ² observations worth of information. This is an upper bound on the gain because we will have to estimate λ.
Equation (13) shows that the benefit from data enrichment is a small sample
phenomenon. The effect is additive not multiplicative on the small sample size
n. As a result, more valuable gains are expected in small samples. In some
of the motivating examples we have found the most meaningful improvements
from data enrichment on disaggregated data sets, such as specific groups of
consumers. Some large data sets resemble the union of a great many small
ones.
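A small numerical transcription of the oracle quantities (11)–(13) for the location model (our naming; it assumes Nδ² + σ_B² > σ_S² so that λ_orcl in (11) is finite and positive):

```python
import numpy as np

def location_oracle(delta, sigma2_S, sigma2_B, n, N):
    """Oracle weight, penalty (11), MSE reduction (12) and effective sample size (13)."""
    a = sigma2_S / n                       # variance of the small-sample mean
    b = delta ** 2 + sigma2_B / N          # squared bias plus big-sample variance
    omega = b / (a + b)                    # omega_orcl
    lam = N * sigma2_S / (N * delta ** 2 + sigma2_B - sigma2_S)   # lambda_orcl, (11)
    mse_ratio = omega                      # MSE(omega_orcl) / MSE(omega_S), (12)
    n_eff = n / omega                      # effective sample size, (13)
    return omega, lam, mse_ratio, n_eff

if __name__ == "__main__":
    omega, lam, ratio, n_eff = location_oracle(delta=0.2, sigma2_S=1.0, sigma2_B=1.0,
                                               n=100, N=1000)
    print(omega, lam, ratio, n_eff)        # n_eff tends to n + sigma2_S/delta^2 as N grows
```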
3.2 Plug-in and other estimators of ω
A natural approach to choosing ω is to plug in sample estimates

$$\hat\delta_0 = \bar Y_B - \bar Y_S, \quad \hat\sigma_S^2 = \frac{1}{n}\sum_{i\in S}(Y_i - \bar Y_S)^2, \quad\text{and}\quad \hat\sigma_B^2 = \frac{1}{N}\sum_{i\in B}(Y_i - \bar Y_B)^2.$$

We then use ω̂_plug = (δ̂_0² + σ̂_B²/N)/(δ̂_0² + σ̂_B²/N + σ̂_S²/n) or alternatively λ̂_plug = Nσ̂_S²/(σ̂_B² + Nδ̂_0²). Our bias-adjusted plug-in method reduces to

$$\hat\omega_{\mathrm{bapi}} = \frac{\hat\theta_{\mathrm{bapi}}}{\hat\theta_{\mathrm{bapi}} + \hat\sigma_S^2/n}, \quad\text{where}\quad \hat\theta_{\mathrm{bapi}} = \frac{\hat\sigma_B^2}{N} + \Bigl(\hat\delta_0^2 - \frac{\hat\sigma_S^2}{n} - \frac{\hat\sigma_B^2}{N}\Bigr)_+.$$

The simpler alternative ω̂_bapi = ((δ̂_0² − σ̂_S²/n)/δ̂_0²)_+ gave virtually identical values
in our numerical results reported below.
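A sketch (our naming) of the plug-in and bias-adjusted plug-in weights just defined:

```python
import numpy as np

def location_weights(yS, yB):
    """Plug-in and bias-adjusted plug-in weights omega for the location model."""
    yS, yB = np.asarray(yS, float), np.asarray(yB, float)
    n, N = len(yS), len(yB)
    delta0 = yB.mean() - yS.mean()
    s2S = np.mean((yS - yS.mean()) ** 2)           # sigma_S^2 hat (divisor n)
    s2B = np.mean((yB - yB.mean()) ** 2)           # sigma_B^2 hat (divisor N)
    a, b = s2S / n, delta0 ** 2 + s2B / N
    omega_plug = b / (a + b)
    # theta_bapi = sigma_B^2/N + (delta0^2 - sigma_S^2/n - sigma_B^2/N)_+
    theta = s2B / N + max(delta0 ** 2 - s2S / n - s2B / N, 0.0)
    omega_bapi = theta / (theta + s2S / n)
    return omega_plug, omega_bapi

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    yS = rng.normal(0.0, 1.0, size=100)
    yB = rng.normal(0.2, 1.0, size=1000)
    print(location_weights(yS, yB))
```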
If we bootstrap the S and B samples independently M times and choose ω
to minimize
$$\frac{1}{M}\sum_{m=1}^{M}\bigl(\bar Y_S - \omega\bar Y_S^{m*} - (1-\omega)\bar Y_B^{m*}\bigr)^2,$$

then the minimizing value tends to ω̂_plug as M → ∞. Thus bootstrap methods give an approach analogous to plug-in methods, when no simple plug-in formula exists. This is perhaps not surprising since the bootstrap is often described as an example of a plug-in principle.
We can also determine the effects of cross-validation in the location setting,
and arrive at an estimate of ω that we can use without actually cross-validating.
Consider splitting the small sample into K parts that are held out one by one
in turn. The K − 1 retained parts are used to estimate µ and then the squared
error is judged on the held-out part. That is
$$\omega_{\mathrm{cv}} = \arg\min_{\omega}\;\frac{1}{K}\sum_{k=1}^{K}\bigl(\bar Y_{S,k} - \omega\bar Y_{S,-k} - (1-\omega)\bar Y_B\bigr)^2,$$

where Ȳ_{S,k} is the average of Y_i over the k'th part of S and Ȳ_{S,−k} is the average of Y_i over all K − 1 parts excluding the k'th. We suppose for simplicity that n = rK for an integer r. In that case Ȳ_{S,−k} = (nȲ_S − rȲ_{S,k})/(n − r). Now

$$\omega_{\mathrm{cv}} = \frac{\sum_k(\bar Y_{S,-k} - \bar Y_B)(\bar Y_{S,k} - \bar Y_B)}{\sum_k(\bar Y_{S,-k} - \bar Y_B)^2}. \qquad (14)$$

After some algebra, the numerator of (14) is

$$K(\bar Y_S - \bar Y_B)^2 - \frac{r}{n-r}\sum_{k=1}^{K}(\bar Y_{S,k} - \bar Y_S)^2$$

and the denominator is

$$K(\bar Y_S - \bar Y_B)^2 + \Bigl(\frac{r}{n-r}\Bigr)^2\sum_{k=1}^{K}(\bar Y_{S,k} - \bar Y_S)^2.$$

Letting δ̂_0 = Ȳ_B − Ȳ_S and σ̂²_{S,K} = (1/K)∑_{k=1}^{K}(Ȳ_{S,k} − Ȳ_S)², we have

$$\omega_{\mathrm{cv}} = \frac{\hat\delta_0^2 - \hat\sigma_{S,K}^2/(K-1)}{\hat\delta_0^2 + \hat\sigma_{S,K}^2/(K-1)^2}.$$
The only quantity in ω_cv which depends on the specific K-way partition used is σ̂²_{S,K}. If the groupings are chosen by sampling without replacement, then under this sampling,

$$E(\hat\sigma_{S,K}^2) = E\bigl((\bar Y_{S,1} - \bar Y_S)^2\bigr) = \frac{s_S^2}{r}\Bigl(1 - \frac{1}{K}\Bigr)$$

using the finite population correction for simple random sampling, where s_S² = σ̂_S² n/(n−1). This simplifies to

$$E(\hat\sigma_{S,K}^2) = \hat\sigma_S^2\,\frac{n}{n-1}\,\frac{1}{r}\,\frac{K-1}{K} = \hat\sigma_S^2\,\frac{K-1}{n-1}.$$
Thus K-fold cross-validation chooses a weighting centered around

$$\omega_{\mathrm{cv},K} = \frac{\hat\delta_0^2 - \hat\sigma_S^2/(n-1)}{\hat\delta_0^2 + \hat\sigma_S^2/[(n-1)(K-1)]}. \qquad (15)$$
Cross-validation has the strange property that ω < 0 is possible. This can arise
when the bias is small and then sampling alone makes the held-out part of the
small sample appear negatively correlated with the held-in part. The effect can
appear with any K. We replace any ωcv,K < n/(n + N ) by n/(n + N ).
Leave-one-out cross-validation has K = n (and r = 1) so that
$$\omega_{\mathrm{cv},n} \approx \frac{\hat\delta_0^2 - \hat\sigma_S^2/n}{\hat\delta_0^2 + \hat\sigma_S^2/n^2}.$$
Smaller K, such as choosing K = 10 versus n, tend to make ω_{cv,K} smaller, resulting in less weight on Ȳ_S. In the extreme with δ̂_0 = 0 we find ω_{cv,K} ≈ −(K − 1), so 10-fold CV is then very different from leave-one-out CV.
Remark 1. The cross-validation estimates do not make use of σ̂_B² because the large sample is held fixed. They are in this sense conditional on the large sample. Our oracle takes account of the randomness in set B, so it is not conditional. One can define a conditional oracle without difficulty, but we omit the details. Neither the bootstrap nor the plug-in methods are conditional, as they approximate our oracle. Comparing cross-validation to the oracle, we expect this to be reasonable if σ_B²/N ≪ min(δ², σ_S²/n). Taking ω̂_bapi as a representor of unconditional methods and ω_{cv,n} as a representor of conditional ones, we see that the latter has a larger denominator while they both have the same numerator, at least when δ̂_0² > σ̂_S²/n. This suggests that conditional methods are more aggressive and we will see this in the simulation results.
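Equation (15) lets one compute the weight that K-fold cross-validation centers on without running any folds. A minimal sketch (our naming), including the truncation at n/(n + N) described above:

```python
import numpy as np

def omega_cv_K(yS, yB, K):
    """Weight (15) that K-fold CV centers on, truncated below at n/(n+N)."""
    yS, yB = np.asarray(yS, float), np.asarray(yB, float)
    n, N = len(yS), len(yB)
    delta0 = yB.mean() - yS.mean()
    s2S = np.mean((yS - yS.mean()) ** 2)
    num = delta0 ** 2 - s2S / (n - 1)
    den = delta0 ** 2 + s2S / ((n - 1) * (K - 1))
    return max(num / den, n / (n + N))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    yS = rng.normal(0.0, 1.0, size=100)
    yB = rng.normal(0.1, 1.0, size=1000)
    print(omega_cv_K(yS, yB, K=10), omega_cv_K(yS, yB, K=100))  # K = n behaves like leave-one-out
```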
3.3 L1 penalty
For the location model, it is convenient to write the L1 penalized criterion as
$$\sum_{i\in S}(Y_i - \mu)^2 + \sum_{i\in B}(Y_i - \mu - \delta)^2 + 2\lambda|\delta|. \qquad (16)$$
The minimizers µ̂ and δ̂ satisfy

$$\hat\mu = \frac{n\bar Y_S + N(\bar Y_B - \hat\delta)}{n+N}, \quad\text{and}\quad \hat\delta = \Theta\bigl(\bar Y_B - \hat\mu;\ \lambda/N\bigr) \qquad (17)$$

for the well-known soft thresholding operator Θ(z; τ) = sign(z)(|z| − τ)_+.
The estimate µ̂ ranges from Ȳ_S at λ = 0 to the pooled mean Ȳ_P at λ = ∞. In fact µ̂ reaches Ȳ_P at a finite value λ = λ* ≡ nN|Ȳ_B − Ȳ_S|/(N + n) and both µ̂ and δ̂ are linear in λ on the interval [0, λ*]:
Theorem 3. If 0 ≤ λ ≤ nN|Ȳ_B − Ȳ_S|/(n + N) then the minimizers of (16) are

$$\hat\mu = \bar Y_S + \frac{\lambda}{n}\,\mathrm{sign}(\bar Y_B - \bar Y_S), \quad\text{and}\quad \hat\delta = \bar Y_B - \bar Y_S - \lambda\,\frac{N+n}{Nn}\,\mathrm{sign}(\bar Y_B - \bar Y_S). \qquad (18)$$

If λ > nN|Ȳ_B − Ȳ_S|/(n + N) then they are δ̂ = 0 and µ̂ = Ȳ_P.
Proof. If λ > nN|Ȳ_B − Ȳ_S|/(n + N) then we may find directly that with any value of δ > 0 and corresponding µ̂ given by (17), the derivative of (16) with respect to δ is positive. Therefore δ̂ ≤ 0, and a similar argument gives δ̂ ≥ 0, so that δ̂ = 0 and then µ̂ = (nȲ_S + NȲ_B)/(n + N).

Now suppose that λ ≤ λ*. We verify that the quantities in (18) jointly satisfy equations (17). Writing η = sign(Ȳ_B − Ȳ_S) and substituting δ̂ from (18) into the first line of (17) yields

$$\frac{n\bar Y_S + N\bigl(\bar Y_S + \lambda(N+n)\eta/(Nn)\bigr)}{n+N} = \bar Y_S + \frac{\lambda}{n}\,\mathrm{sign}(\bar Y_B - \bar Y_S),$$

matching the value in (18). Conversely, substituting µ̂ from (18) into the second line of (17) yields

$$\Theta\Bigl(\bar Y_B - \hat\mu;\ \frac{\lambda}{N}\Bigr) = \Theta\Bigl(\bar Y_B - \bar Y_S - \frac{\lambda}{n}\,\mathrm{sign}(\bar Y_B - \bar Y_S);\ \frac{\lambda}{N}\Bigr). \qquad (19)$$

Because of the upper bound on λ, the result is Ȳ_B − Ȳ_S − λ(1/n + 1/N)sign(Ȳ_B − Ȳ_S), which matches the value in (18).
With an L1 penalty on δ we find from Theorem 3 that

$$\hat\mu = \bar Y_S + \min(\lambda, \lambda^*)\,\mathrm{sign}(\bar Y_B - \bar Y_S)/n.$$

That is, the estimator moves Ȳ_S towards Ȳ_B by an amount λ/n, except that it will not move past the pooled average Ȳ_P. The optimal choice of λ is not available in closed form.
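The closed-form L1 solution (17)–(18) is easy to transcribe; a sketch (our naming) using the soft thresholding operator:

```python
import numpy as np

def soft_threshold(z, tau):
    """Theta(z; tau) = sign(z) * (|z| - tau)_+."""
    return np.sign(z) * max(abs(z) - tau, 0.0)

def l1_location_fit(yS, yB, lam):
    """Minimize (16): mu moves from the S mean toward the pooled mean by lam/n."""
    yS, yB = np.asarray(yS, float), np.asarray(yB, float)
    n, N = len(yS), len(yB)
    ybarS, ybarB = yS.mean(), yB.mean()
    lam_star = n * N * abs(ybarB - ybarS) / (n + N)     # pooling threshold lambda*
    mu = ybarS + min(lam, lam_star) * np.sign(ybarB - ybarS) / n
    delta = soft_threshold(ybarB - mu, lam / N)
    return mu, delta

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    yS = rng.normal(0.0, 1.0, size=100)
    yB = rng.normal(0.3, 1.0, size=1000)
    for lam in (0.0, 5.0, 1e9):     # lam = 0: S mean only; very large lam: pooled mean
        print(lam, l1_location_fit(yS, yB, lam))
```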
3.4 An L1 oracle
Under a Gaussian data assumption, it is possible to derive a formula for the
mean squared error of the L1 penalized data enrichment estimator at any value
of λ. While it is unwieldy, the L1 mean square error formula is computable and
we can optimize it numerically to compute an oracle formula. As with the L2
setting we must plug in estimates of some unknowns first before optimizing. This
allows us to compare L1 to L2 penalization in the location setting simulations
of Section 4.
To obtain a solution we make a few changes of notation just for this subsection. We replace λ/n by λ and define a = N/(N + n) and use δ̂_0 = Ȳ_B − Ȳ_S. Then

$$\hat\mu(\lambda) = \bigl(\bar Y_S + \lambda\cdot\mathrm{sign}(\hat\delta_0)\bigr)I(|\hat\delta_0|a \ge \lambda) + \bigl(a\bar Y_B + (1-a)\bar Y_S\bigr)I(|\hat\delta_0|a < \lambda)$$
$$= \bigl(a\bar Y_B + (1-a)\bar Y_S\bigr) - \bigl(a\hat\delta_0 - \lambda\cdot\mathrm{sign}(\hat\delta_0)\bigr)I(|\hat\delta_0|a \ge \lambda). \qquad (20)$$
Without loss of generality we may center and scale the Gaussian distributions so that Ȳ_S ∼ N(0, 1) and Ȳ_B ∼ N(δ, σ²). The next Theorem defines the distributions of Y_i for i ∈ S and i ∈ B to obtain that scaling. We also introduce the constants b = σ²/(1+σ²), δ̃ = δ/√(1+σ²), x̃ = (λ/a)/√(1+σ²), and the function g(x) = Φ(x) − xϕ(x), where ϕ and Φ are the N(0, 1) probability density function and cumulative distribution function, respectively.
Theorem 4. Suppose that Y_i ∼ N(0, n) iid for i ∈ S, independently of Y_i ∼ N(δ, σ²N) iid for i ∈ B. Let µ̂ be the L1 estimate from (20), using parameter λ ≥ 0. Then the predictive mean squared error is

$$
\begin{aligned}
E(\hat\mu(\lambda)^2) ={}& a^2\delta^2 + (a+b-1)^2(1+\sigma^2) + b \\
&- a(a+2b-2)(1+\sigma^2)\bigl[1 - g(\tilde x - \tilde\delta) + g(-\tilde x - \tilde\delta)\bigr] \\
&- \bigl[2a\lambda + 2(a+b-1)(a\delta - \lambda)\bigr]\sqrt{1+\sigma^2}\,\varphi(\tilde x - \tilde\delta) \\
&- \bigl[2a\lambda - 2(a+b-1)(a\delta + \lambda)\bigr]\sqrt{1+\sigma^2}\,\varphi(-\tilde x - \tilde\delta) \\
&- (a\delta - \lambda)(a\delta + \lambda)\bigl[1 - \Phi(\tilde x - \tilde\delta) + \Phi(-\tilde x - \tilde\delta)\bigr]. \qquad (21)
\end{aligned}
$$
Proof. Please see Section 8.3 in the Appendix.
3.5 Cell means
The cell mean setting is simply C copies of the location problem. One could
estimate separate values of λ in each of them. Here we remark briefly on the
consequences of using a common λ or ω over all cells.
We do not simulate the various choices. We look instead at what assumptions
would make them match the oracle formula. In applications we can choose the
method whose matching assumptions are more plausible.
In the L2 setting, one could choose a common λ using either the penalty λ∑_{c=1}^{C} n_c δ_c² or λ∑_{c=1}^{C} δ_c². Call these cases L2,n and L2,1 respectively. Dropping the subscript c we find

$$\omega_{L2,n} = \frac{1+\lambda n/N}{1+\lambda n/N+\lambda}, \quad\text{and}\quad \omega_{L2,1} = \frac{1+\lambda/N}{1+\lambda/N+\lambda/n}$$

compared to ω_orcl = (nδ² + σ_B²n/N)/(nδ² + σ_B²n/N + σ_S²).

We can find conditions under which a single value of λ recovers the oracle's weighting. For ω_{L2,1} these are σ_{B,c}² = σ_{S,c}² in all cells as well as λ = σ_{S,c}²/δ_c² constant in c. For ω_{L2,n} these are σ_{B,c}² = σ_{S,c}² and λ = σ_{S,c}²/(n_c δ_c²) constant in c. The L2,1 criterion looks more reasonable here because we have no reason to expect the relative bias δ_c/σ_{S,c} to be inversely proportional to √n_c.

For a common ω to match the oracle, we need σ_{B,c}²/N_c = σ_{S,c}²/n_c to hold in all cells as well as σ_{S,c}²/(n_c δ_c²) to be constant in c. The first clause seems quite unreasonable and so we prefer common-λ approaches to common weights.
For a common L1 penalty, we cannot get good expressions for the weight variable ω. But we can see how the L1 approach shifts the mean. An L1,1 approach moves µ̂_c from Ȳ_{S,c} towards Ȳ_{B,c} by the amount λ/n_c in cell c, but not going past the pooled mean Ȳ_{P,c} = (nȲ_{S,c} + NȲ_{B,c})/(N + n) for that cell. The other approaches use different shifts. An L1,n approach moves µ̂_c from Ȳ_{S,c} towards Ȳ_{B,c} by the amount λ in cell c (but not past Ȳ_{P,c}). It does not seem reasonable to move µ̂_c by the same distance in all cells, or to move them by an amount proportional to 1/n_c, and stopping at Ȳ_{P,c} doesn't fix this. We could use a common moving distance proportional to 1/√n_c (which is the order of statistical uncertainty in Ȳ_{S,c}) by using the penalty ∑_{c=1}^{C} √n_c |γ_c|.
4 Numerical examples
We have simulated some special cases of the data enrichment problem. First
we simulate the pure location problem which has d = 1. Then we consider the
regression problem with varying d.
4.1 Location
We simulated Gaussian data for the location problem. The large sample had N = 1000 observations and the small sample had n = 100 observations: X_i ∼ N(µ, σ_S²) for i ∈ S and X_i ∼ N(µ + δ, σ_B²) for i ∈ B. Our data had µ = 0 and σ_S² = σ_B² = 1. We define the relative bias as

$$\delta^* = \frac{|\delta|}{\sigma_S/\sqrt n} = \sqrt n\,|\delta|.$$

We investigated a range of relative bias values. It is only a small simplification to take σ_S² = σ_B². Doubling σ_B² has a very similar effect to halving N. Equal variances might have given a slight relative advantage to the hypothesis testing method as described below.
The accuracy of our estimates is judged by the relative mean squared error E((µ̂ − µ)²)/(σ_S²/n). Simply taking µ̂ = Ȳ_S attains a relative mean squared error of 1.
Figure 1 plots relative mean squared error versus relative bias for a collection
of estimators, with the results averaged over 10,000 simulated data sets. We used
the small sample only method as a control variate.
The solid curve in Figure 1 shows the oracle’s value. It lies strictly below
the horizontal S-only line. None of the competing curves lie strictly below that
line. None can because Ȳ_S is an admissible estimator for d = 1 (Stein, 1956).
The second lowest curve in Figure 1 is for the oracle using the L1 version of
the penalty. The L1 penalized oracle is not as effective as the L2 oracle and
it is also more difficult to approximate. The highest observed predictive MSEs
come from a method of simply pooling the two samples. That method is very
successful when the relative bias is near zero but has an MSE that becomes
unbounded as the relative bias increases.
Now we discuss methods that use the data to decide whether to use the
small sample only, pool the samples or choose an amount of shrinkage. We may
[Figure 1 here: relative predictive MSE (vertical axis, roughly 0 to 2.5) versus relative bias (horizontal axis, 0 to 8) for the L2 oracle, L1 oracle, plug-in, leave-1-out, 10-fold, 5-fold, AICc, hypothesis testing, pooling, and S-only methods.]
Figure 1: Numerical results for the location problem. The horizontal line at 1
represents using the small sample only and ignoring the large one. The lowest
line shown is for an oracle choosing λ in the L2 penalization. The green curve
shows an oracle using the L1 penalization. The other curves are as described in
the text.
list them in order of their worst case performance. From top (worst) to bottom
(best) in Figure 1 they are: hypothesis testing, 5-fold cross-validation, 10-fold
cross-validation, AICc, leave-one-out cross-validation, and then the simple plug-
in method which is minimax among this set of choices. AICc and leave-one-out
are very close. Our cross-validation estimators used ω = max(ωcv,K , n/(n + N ))
where ωcv,K is given by (15).
The hypothesis testing method is based on a two-sample t-test of whether
δ = 0. If the test is rejected at α = 0.05, then only the small sample data is
used. If the test is not rejected, then the two samples are pooled. That test was
based on σ_B² = σ_S², which may give hypothesis testing a slight advantage in this
setting (but it still performed poorly).
The AICc method performs virtually identically to leave-one-out cross-validation
over the whole range of relative biases.
None of these methods makes any other one inadmissible: each pair of curves
crosses. The methods that do best at large relative biases tend to do worst
at relative bias near 0 and vice versa. The exception is hypothesis testing.
Compared to the others it does not benefit fully from low relative bias but it
recovers the quickest as the bias increases. Of these methods hypothesis testing
is best at the highest relative bias, K-fold cross-validation with small K is best
at the lowest relative bias, and the plug-in method is best in between.
Aggressive methods will do better at low bias but worse at high bias. What
we see in this simulation is that K-fold cross-validation is the most aggressive
followed by leave-one-out and AICc and that plug-in is least aggressive. These
findings confirm what we saw in the formulas from Section 3. Hypothesis testing
does not quite fit into this spectrum: its worst case performance is much worse
than the most aggressive methods yet it fails to fully benefit from pooling when
the bias is smallest. Unlike aggressive methods it does very well at high bias.
4.2 Regression
We simulated our data enrichment method for the following scenario. The small
sample had n = 1000 observations and the large sample had N = 10,000. The
true β was taken to be 0. This has no loss of generality because we are not
shrinking β towards 0. The value of γ was taken uniformly on the unit sphere
in d dimensions and then multiplied by a scale factor that we varied.
We considered d = 2, 4, 5 and 10. All of our examples included an intercept
column of 1s in both XS and XB . The other d−1 predictors were sampled from a
Gaussian distribution with covariance CS or CB , respectively. In one simulation
we took CS and CB to be independent Wishart(I, d − 1, d − 1) random matrices.
In the other they were sampled as C_S = I_{d−1} + ρuu^T and C_B = I_{d−1} + ρvv^T where u and v are independently and uniformly sampled from the unit sphere in R^{d−1} and ρ ≥ 0 is a parameter that measures the lack of proportionality
between covariances. We chose ρ = d so that the sample specific portion of the
variance has comparable magnitude to the common part.
We scaled the results so that regression using sample S only yields a mean
squared error of 1 at all values of the relative bias. We computed the risk of an
L2 oracle, as well as sampling errors when λ is estimated by the plug-in formula,
by our bias-adjusted plug-in formula and via AICc. In addition we considered
the simple weighted combination ω̂β̂_S + (1 − ω̂)β̂_B with ω̂ chosen by the plug-in
formula.
Figure 2 shows the results. For d = 2 and also d = 4 none of our methods
universally outperforms simply using the S sample. For d = 5 and d = 10, all
of our estimators have lower mean squared error than using the S sample alone,
though the difference becomes small at large relative bias.
We find in this setting that our bias-adjusted plug-in estimator closely matches
the AICc estimate. The relative performance of the other methods varies with
the problem. Plain plug-in always seemed worse than AICc and adjusted plug-
in at low relative bias and better than these at high biases. Plug-in’s gains
[Figure 2 here: panels of relative predictive MSE versus relative bias (0 to 6) for the L2 oracle, plug-in, adjusted plug-in, AICc, and weighting methods; Wishart and orthogonal covariance cases with d = 2, 4, 5, and 10.]
Figure 2: This figure shows relative predicted MSE versus relative bias for two
simulated regression problems described in the text.
at high biases appear to be less substantial than its losses at low biases. Of
the other methods, simple scalar weighting is worst for the high dimensional
Wishart case without being better in the other cases. The best overall choices
are bias-adjusted plug-in and AICc.
5 Proportional design and inadmissibility
The proportional design case has VB ∝ VS and VT ∝ VS . Suppose that VB =
N Σ, VS = nΣ and VT = Σ for a positive definite matrix Σ. Our data enrichment
estimator simplifies greatly in this case. The weighting matrix Wλ in Lemma 1
simplifies to Wλ = ωI where ω = (N + nλ)/(N + nλ + N λ). As a result
β̂ = ωβ̂_S + (1 − ω)β̂_B and we can find and estimate an oracle's value for ω. If
different constants of proportionality, say M and m are used, then the effect is
largely to reparameterize λ giving the same family of estimates under different
labels. There is one difference though. The interval of possible values for ω is
[n/N, 1] in our case versus [m/M, 1] for the different constants. To attain the
same sets of ω values could require use of negative λ.
The resulting estimator of β with estimated ω dominates β̂_S (making it inadmissible) under mild conditions. These conditions given below even allow violations of the proportionality condition V_B ∝ V_S but they still require V_T ∝ V_S. Among these conditions we will need the model degrees of freedom to be at least 5, and it will suffice to have the error degrees of freedom in the small sample regression be at least 10. The result also requires a Gaussian assumption in order to use a lemma of Stein's.

We write Y_S = X_S β + ε_S and Y_B = X_B(β + γ) + ε_B for ε_S with iid N(0, σ_S²) entries and ε_B with iid N(0, σ_B²) entries. The data enrichment estimators are β̂(λ) and γ̂(λ). The parameter of most interest is β. If we were to use only the small sample we would get β̂_S = (X_S^T X_S)^{-1}X_S^T Y_S = β̂(0).
In the proportional design setting, the mean squared prediction error is

$$f(\omega) = E\bigl(\|X_T(\hat\beta(\omega) - \beta)\|^2\bigr) = \mathrm{tr}\Bigl(\bigl(\omega^2\sigma_S^2\Sigma_S^{-1} + (1-\omega)^2(\gamma\gamma^T + \sigma_B^2\Sigma_B^{-1})\bigr)\Sigma\Bigr).$$

This error is minimized by the oracle's parameter value

$$\omega_{\mathrm{orcl}} = \frac{\mathrm{tr}\bigl((\gamma\gamma^T + \sigma_B^2\Sigma_B^{-1})\Sigma\bigr)}{\mathrm{tr}\bigl((\gamma\gamma^T + \sigma_B^2\Sigma_B^{-1})\Sigma\bigr) + \sigma_S^2\,\mathrm{tr}(\Sigma_S^{-1}\Sigma)}.$$

With Σ_S = nΣ and Σ_B = NΣ, we find

$$\omega_{\mathrm{orcl}} = \frac{\gamma^T\Sigma\gamma + d\sigma_B^2/N}{\gamma^T\Sigma\gamma + d\sigma_B^2/N + d\sigma_S^2/n}.$$
The plug-in estimator is

$$\hat\omega_{\mathrm{plug}} = \frac{\hat\gamma^T\Sigma\hat\gamma + d\hat\sigma_B^2/N}{\hat\gamma^T\Sigma\hat\gamma + d\hat\sigma_B^2/N + d\hat\sigma_S^2/n} \qquad (22)$$

where σ̂_S² = ‖Y_S − X_S β̂_S‖²/(n − d) and σ̂_B² = ‖Y_B − X_B β̂_B‖²/(N − d). We will have reason to generalize this plug-in estimator. Let h(σ̂_B²) be any nonnegative measurable function of σ̂_B² with E(h(σ̂_B²)) < ∞. The generalized plug-in estimator is

$$\hat\omega_{\mathrm{plug},h} = \frac{\hat\gamma^T\Sigma\hat\gamma + h(\hat\sigma_B^2)}{\hat\gamma^T\Sigma\hat\gamma + h(\hat\sigma_B^2) + d\hat\sigma_S^2/n}. \qquad (23)$$
Here are the conditions under which β̂_S is made inadmissible by the data enrichment estimator.

Theorem 5. Let X_S ∈ R^{n×d} and X_B ∈ R^{N×d} be fixed matrices with X_S^T X_S = nΣ and X_B^T X_B = NΣ_B where Σ and Σ_B both have rank d. Let Y_S ∼ N(X_S β, σ_S² I_n) independently of Y_B ∼ N(X_B(β + γ), σ_B² I_N). If d ≥ 5 and m ≡ n − d ≥ 10, then

$$E\bigl(\|X_T\hat\beta(\hat\omega) - X_T\beta\|^2\bigr) < E\bigl(\|X_T\hat\beta_S - X_T\beta\|^2\bigr) \qquad (24)$$

holds for any nonrandom matrix X_T with X_T^T X_T = Σ and any ω̂ = ω̂_{plug,h} given by (23).
Proof. Please see Section 8.5 in the Appendix.
The condition on m can be relaxed at the expense of a more complicated
statement. From the details in the proof, it suffices to have d ≥ 5 and m(1 − 4/d) ≥ 2.
The result in Theorem 5 is similar to the Stein estimator result. There, the
sample mean of a Gaussian population is an inadmissible estimator in d = 3
dimensions or higher but is admissible in 1 or 2 dimensions. Here there are two
samples to pool and the change takes place at d = 5.
Because E(γ̂^TΣγ̂) = γ^TΣγ + dσ_S²/n + dσ_B²/N it is biased high and so therefore is ω̂_plug, making it a little conservative. We can make a bias adjustment, replacing γ̂^TΣγ̂ by γ̂^TΣγ̂ − dσ̂_S²/n − dσ̂_B²/N. The result is

$$\hat\omega_{\mathrm{bapi}} = \frac{\hat\gamma^T\Sigma\hat\gamma - d\hat\sigma_S^2/n}{\hat\gamma^T\Sigma\hat\gamma} \vee \frac{n}{n+N}, \qquad (25)$$

where values below n/(n + N) get rounded up. This bias-adjusted estimate of ω is not covered by Theorem 5. Subtracting only σ̂_B²/N instead of σ̂_B²/N + σ̂_S²/n is covered, yielding

$$\hat\omega_{\mathrm{bapi}} = \frac{\hat\gamma^T\Sigma\hat\gamma}{\hat\gamma^T\Sigma\hat\gamma + d\hat\sigma_S^2/n}, \qquad (26)$$

which corresponds to taking h(σ̂_B²) ≡ 0 in equation (23).
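A sketch (our naming) of the weighted-combination estimator analyzed in this section, using either the plug-in weight (22) or the simpler bias-adjusted weight (26) covered by Theorem 5:

```python
import numpy as np

def weighted_enrichment(XS, YS, XB, YB, Sigma=None, weight="plug"):
    """beta_hat(omega_hat) = omega_hat * beta_S + (1 - omega_hat) * beta_B.

    weight="plug" uses (22); weight="bapi" uses the covered adjustment (26).
    Sigma defaults to X_S^T X_S / n, the proportional-design choice.
    """
    n, d = XS.shape
    N = XB.shape[0]
    VS, VB = XS.T @ XS, XB.T @ XB
    if Sigma is None:
        Sigma = VS / n
    bS = np.linalg.solve(VS, XS.T @ YS)
    bB = np.linalg.solve(VB, XB.T @ YB)
    s2S = np.sum((YS - XS @ bS) ** 2) / (n - d)
    s2B = np.sum((YB - XB @ bB) ** 2) / (N - d)
    gam = bB - bS
    q = gam @ Sigma @ gam                    # gamma_hat^T Sigma gamma_hat
    if weight == "plug":
        omega = (q + d * s2B / N) / (q + d * s2B / N + d * s2S / n)   # (22)
    else:
        omega = q / (q + d * s2S / n)                                  # (26)
    return omega * bS + (1.0 - omega) * bB, omega

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    n, N, d = 60, 2000, 6
    XS, XB = rng.normal(size=(n, d)), rng.normal(size=(N, d))
    beta, gamma = np.zeros(d), 0.1 * rng.normal(size=d)
    YS = XS @ beta + rng.normal(size=n)
    YB = XB @ (beta + gamma) + rng.normal(size=N)
    for w in ("plug", "bapi"):
        b, omega = weighted_enrichment(XS, YS, XB, YB, weight=w)
        print(w, round(omega, 3), np.round(b, 3))
```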
6 Related literatures
There are many disjoint literatures that study problems like the one we have
presented. They do not seem to have been compared before and the literatures
seem to be mostly unaware of each other. We give a summary of them here,
kept brief because of space limitations.
The key ingredient in this problem is that we care more about the small
sample than the large one. Were that not the case, we could simply pool all the
data and fit a model with indicator variables picking out one or indeed many
different small areas. Without some kind of regularization, that approach ends
up being similar to taking λ = 0 and hence does not borrow strength.
The closest match to our problem setting comes from small area estimation in
survey sampling. The monograph by Rao (2003) is a comprehensive treatment
of that work and Ghosh and Rao (1994) provide a compact summary. In that
context the large sample may be census data from the entire country and the
small sample (called the small area) may be a single county or a demographically
defined subset. Every county or demographic group may be taken to be the
small sample in its turn. The composite estimator (Rao, 2003, Chapter 4.3) is a
weighted sum of estimators from small and large samples. The estimates being
combined may be more complicated than regressions, involving for example
ratio estimates. The emphasis is usually on scalar quantities such as small
area means or totals, instead of the regression coefficients we consider. One
particularly useful model (Ghosh and Rao, 1994, Equation (4.2)) allows the
small areas to share regression coefficients apart from an area specific intercept.
Then BLUP estimation methods lead to shrinkage estimators similar to ours.
The methods of Copas (1983) can be applied to our problem and will result in another combination that makes $\hat\beta_S$ inadmissible. That combination requires only four dimensional regressions instead of the five used in Theorem 5 for pooling weights, and it yields less aggressive predictions.
In chemometrics a calibration transfer problem (Feudale et al., 2002) comes
up when one wants to adjust a model to new spectral hardware. There may be a
regression model linking near-infrared spectroscopy data to a property of some
sample material. The transfer problem comes up for data from a new machine.
Sometimes one can simply run a selection of samples through both machines
but in other cases that is not possible, perhaps because one machine is remote
(Woody et al., 2004). Their primary and secondary instruments correspond to
our small and big samples respectively. Their emphasis is on transferring either
principal components regression or partial least squares models, not the plain
regressions we consider here.
A common problem in marketing is data fusion, also known as statistical
matching. Variables (X, Y ) are measured in one sample while variables (X, Z)
are measured in another. There may or may not be a third sample with some
measured triples (X, Y, Z). The goal in data fusion is to use all of the data to
form a large synthetic data set of (X, Y, Z) values, perhaps by imputing missing
Z for the (X, Y ) sample and/or missing Y for the (X, Z) sample. When there is
no (X, Y, Z) sample some untestable assumptions must be made about the joint
distribution, because it cannot be recovered from its bivariate margins. The
text by D’Orazio et al. (2006) gives a comprehensive summary of what can and
cannot be done. Many of the approaches are based on methods for handling
missing data (Little and Rubin, 2009).
Our problem is an instance of what machine learning researchers call domain adaptation. They may have fit a model to a large data set (the 'source') and then wish to adapt that model to a smaller specialized data set (the 'target'). This is especially common in natural language processing. NIPS 2011
get’). This is especially common in natural language processing. NIPS 2011
included a special session on domain adaptation. In their motivating problems
there are typically a very large number of features (e.g., one per unique word
appearing in a set of documents). They also pay special attention to problems
where many of the data points do not have a measured response. Quite often
a computer can gather high dimensional X while a human rater is necessary
to produce Y. Daumé (2009) surveys various wrapper strategies, such as fitting a model to weighted combinations of the data sets, deriving features from
the reference data set to use in the target one and so on. Cortes and Mohri
(2011) consider domain adaptation for kernel-based regularization algorithms,
including kernel ridge regression, support vector machines (SVMs), or support
vector regression (SVR). They prove pointwise loss guarantees depending on
the discrepancy distance between the empirical source and target distributions,
and demonstrate the power of the approach on a number of experiments using
kernel ridge regression.
A related term in machine learning is concept drift (Widmer and Kubat,
1996). There a prediction method may become out of date as time goes on.
The term drift suggests that slow continual changes are anticipated, but they
also consider that there may be hidden contexts (latent variables in statistical terminology) affecting some of the data.
7 Conclusions
We have studied a middle ground between pooling a large data set into a smaller
target one and ignoring it completely. In dimension $d \geq 5$, only a small number of error degrees of freedom suffice to make ignoring the large data set inadmissible. Theorem 5 does not say that pooling is inadmissible; when there is no bias, pooling the data sets may be optimal. We prefer our hybrid because the risk from pooling grows
without bound as the bias increases.
Acknowledgments
We thank the following people for helpful discussions: Penny Chu, Corinna
Cortes, Tony Fagan, Yijia Feng, Jerome Friedman, Jim Koehler, Diane Lambert,
Elissa Lee and Nicolas Remy.
References
Borenstein, M., Hedges, L. V., Higgins, J. P. T., and Rothstein, H. R. (2009).
Introduction to Meta-Analysis. Wiley, Chichester, UK.
Copas, J. B. (1983). Regression, prediction and shrinkage. Journal of the Royal
Statistical Society, Series B, 45(3):311–354.
Cortes, C. and Mohri, M. (2011). Domain adaptation in regression. In Proceed-
ings of The 22nd International Conference on Algorithmic Learning Theory
(ALT 2011), pages 308–323, Heidelberg, Germany. Springer.
Daumé, H. (2009). Frustratingly easy domain adaptation. arXiv:0907.1815.
D’Orazio, M., Di Zio, M., and Scanu, M. (2006). Statistical Matching: Theory
and Practice. Wiley, Chichester, UK.
Efron, B. (2004). The estimation of prediction error. Journal of the American
Statistical Association, 99(467):619–632.
Feudale, R. N., Woody, N. A., Tan, H., Myles, A. J., Brown, S. D., and Ferré, J.
(2002). Transfer of multivariate calibration models: a review. Chemometrics
and Intelligent Laboratory Systems, 64:181–192.
Ghosh, M. and Rao, J. N. K. (1994). Small area estimation: an appraisal.
Statistical Science, 9(1):55–76.
Hurvich, C. and Tsai, C. (1989). Regression and time series model selection in
small samples. Biometrika, 76(2):297–307.
Little, R. J. A. and Rubin, D. B. (2009). Statistical Analysis with Missing Data.
John Wiley & Sons Inc., Hoboken, NJ, 2nd edition.
Rao, J. N. K. (2003). Small Area Estimation. Wiley, Hoboken, NJ.
Stein, C. M. (1956). Inadmissibility of the usual estimator for the mean of a mul-
tivariate normal distribution. In Proceedings of the Third Berkeley symposium
on mathematical statistics and probability, volume 1, pages 197–206.
Stein, C. M. (1960). Multiple regression. In Olkin, I., Ghurye, S. G., Hoeffding,
W., Madow, W. G., and Mann, H. B., editors, Contributions to probability
and statistics: essays in honor of Harald Hotelling. Stanford University Press,
Stanford, CA.
Stein, C. M. (1981). Estimation of the mean of a multivariate normal distribu-
tion. The Annals of Statistics, 9(6):1135–1151.
Widmer, G. and Kubat, M. (1996). Learning in the presence of concept drift
and hidden contexts. Machine Learning, 23:69–101.
Woody, N. A., Feudale, R. N., Myles, A. J., and Brown, S. D. (2004). Transfer
of multivariate calibrations between four near-infrared spectrometers using
orthogonal signal correction. Analytical Chemistry, 76(9):2596–2600.
Ye, J. (1998). On measuring and correcting the effects of data mining and model
selection. Journal of the American Statistical Association, 93:120–131.
8 Appendix: proofs
This appendix presents proofs of the results in this article. They are grouped
into sections by topic, with some technical supporting lemmas separated into
their own sections.
8.1 Proof of Theorem 1
First $\mathrm{df}(\lambda) = \sigma_S^{-2}\,\mathrm{tr}(\mathrm{cov}(X_S\hat\beta, Y_S)) = \sigma_S^{-2}\,\mathrm{tr}(X_SW_\lambda(X_S^TX_S)^{-1}X_S^T\sigma_S^2) = \mathrm{tr}(W_\lambda)$.
Next with $X_T = X_S$, and $M = V_S^{1/2}V_B^{-1}V_S^{1/2}$,
$$\mathrm{tr}(W_\lambda) = \mathrm{tr}\bigl((V_S + \lambda V_SV_B^{-1}V_S + \lambda V_S)^{-1}(V_S + \lambda V_SV_B^{-1}V_S)\bigr).$$
We place $V_S^{1/2}V_S^{-1/2}$ between these factors and absorb them left and right. Then we reverse the order of the factors and repeat the process, yielding
$$\mathrm{tr}(W_\lambda) = \mathrm{tr}\bigl((I + \lambda M + \lambda I)^{-1}(I + \lambda M)\bigr).$$
Writing $M = U\,\mathrm{diag}(\nu_1, \dots, \nu_d)U^T$ for an orthogonal matrix $U$ and simplifying yields the result.
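A quick numerical check of this trace identity can be done as follows (illustrative sketch; the random $V_S$ and $V_B$ are arbitrary positive definite matrices):

```python
import numpy as np

rng = np.random.default_rng(3)
d, lam = 6, 0.7
A = rng.normal(size=(d, d)); VS = A @ A.T + np.eye(d)
B = rng.normal(size=(d, d)); VB = B @ B.T + np.eye(d)

core = VS + lam * VS @ np.linalg.inv(VB) @ VS              # V_S + lambda V_S V_B^{-1} V_S
W = np.linalg.solve(core + lam * VS, core)                 # W_lambda

ew, U = np.linalg.eigh(VS)
VS_half = U @ np.diag(np.sqrt(ew)) @ U.T                   # V_S^{1/2}
nu = np.linalg.eigvalsh(VS_half @ np.linalg.inv(VB) @ VS_half)
print(np.trace(W), np.sum((1 + lam * nu) / (1 + lam + lam * nu)))   # should match
```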
8.2 Proof of Theorem 2
Proof. First $E(\|X_T\hat\beta - X_T\beta\|^2) = \mathrm{tr}\bigl(V_S\,E((\hat\beta - \beta)(\hat\beta - \beta)^T)\bigr)$. Next, using $W = W_\lambda$, we make a bias-variance decomposition,
$$\begin{aligned}
E\bigl((\hat\beta - \beta)(\hat\beta - \beta)^T\bigr) &= (I - W)\gamma\gamma^T(I - W)^T + \mathrm{cov}(W\hat\beta_S) + \mathrm{cov}((I - W)\hat\beta_B)\\
&= \sigma_S^2\,WV_S^{-1}W^T + (I - W)\Theta(I - W)^T,
\end{aligned}$$
for $\Theta = \gamma\gamma^T + \sigma_B^2 V_B^{-1}$. Therefore $E\|X_S(\hat\beta - \beta)\|^2 = \sigma_S^2\,\mathrm{tr}(V_SWV_S^{-1}W^T) + \mathrm{tr}(\Theta(I - W)^TV_S(I - W))$.
Now we introduce $\overline W = V_S^{1/2}WV_S^{-1/2}$, finding
$$\begin{aligned}
\overline W &= V_S^{1/2}(V_B + \lambda V_S + \lambda V_B)^{-1}(V_B + \lambda V_S)V_S^{-1/2}\\
&= (I + \lambda M + \lambda I)^{-1}(I + \lambda M)\\
&= UDU^T,
\end{aligned}$$
where $D = \mathrm{diag}\bigl((1 + \lambda\nu_j)/(1 + \lambda + \lambda\nu_j)\bigr)$. This allows us to write the first term of the mean squared error as
$$\sigma_S^2\,\mathrm{tr}(V_SWV_S^{-1}W^T) = \sigma_S^2\,\mathrm{tr}(\overline W\,\overline W^T) = \sigma_S^2\sum_{j=1}^d\frac{(1 + \lambda\nu_j)^2}{(1 + \lambda + \lambda\nu_j)^2}.$$
For the second term, let $\tilde\Theta = V_S^{1/2}\Theta V_S^{1/2}$. Then
$$\mathrm{tr}\bigl(\Theta(I - W)^TV_S(I - W)\bigr) = \mathrm{tr}\bigl(\tilde\Theta(I - \overline W)^T(I - \overline W)\bigr) = \mathrm{tr}\bigl(\tilde\Theta U(I - D)^2U^T\bigr) = \lambda^2\sum_{k=1}^d\frac{u_k^TV_S^{1/2}\Theta V_S^{1/2}u_k}{(1 + \lambda + \lambda\nu_k)^2}.$$
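The matrix and eigenvalue forms of the two error terms can be checked numerically with a short sketch such as the following (illustrative choices of $V_S$, $V_B$, $\gamma$, and the variances):

```python
import numpy as np

rng = np.random.default_rng(4)
d, lam, s2S, s2B = 5, 0.4, 1.3, 0.8
A = rng.normal(size=(d, d)); VS = A @ A.T + np.eye(d)
Bm = rng.normal(size=(d, d)); VB = Bm @ Bm.T + np.eye(d)
gamma = rng.normal(size=d)
Theta = np.outer(gamma, gamma) + s2B * np.linalg.inv(VB)

W = np.linalg.solve(VB + lam * VS + lam * VB, VB + lam * VS)   # W_lambda
Id = np.eye(d)
matrix_form = s2S * np.trace(VS @ W @ np.linalg.inv(VS) @ W.T) \
    + np.trace(Theta @ (Id - W).T @ VS @ (Id - W))

ew, U0 = np.linalg.eigh(VS)
VS_half = U0 @ np.diag(np.sqrt(ew)) @ U0.T
nu, U = np.linalg.eigh(VS_half @ np.linalg.inv(VB) @ VS_half)   # eigenpairs of M
term1 = s2S * np.sum((1 + lam * nu) ** 2 / (1 + lam + lam * nu) ** 2)
ThetaT = VS_half @ Theta @ VS_half
term2 = lam ** 2 * np.sum(np.diag(U.T @ ThetaT @ U) / (1 + lam + lam * nu) ** 2)
print(matrix_form, term1 + term2)                               # should agree
```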
8.3 Proof of Theorem 4
We will use this small lemma.
Lemma 2. If $X \sim N(0,1)$, then $E(X\,I(X \leq x)) = -\varphi(x)$, $E(X^2 I(X \leq x)) = g(x)$ and
$$E\bigl(X^2 I(|X + b| \geq x)\bigr) = 1 - g(x - b) + g(-x - b),$$
where $g(x) = \Phi(x) - x\varphi(x)$.
Proof. First $E(X\,I(X \leq x)) = \int_{-\infty}^x z\varphi(z)\,dz = -\int_{-\infty}^x \varphi'(z)\,dz = -\varphi(x)$. Next,
$$\int_{-\infty}^x z^2\varphi(z)\,dz = -\int_{-\infty}^x z\varphi'(z)\,dz = \int_{-\infty}^x \varphi(z)\,dz - \Bigl[z\varphi(z)\Bigr]_{-\infty}^x = g(x).$$
Then
$$\begin{aligned}
E\bigl(X^2 I(|X + b| \geq x)\bigr) &= E\bigl(X^2 I(X + b \geq x)\bigr) + E\bigl(X^2 I(X + b \leq -x)\bigr)\\
&= E\bigl(X^2(1 - I(X + b \leq x))\bigr) + g(-x - b)\\
&= E(X^2) - E\bigl(X^2 I(X + b \leq x)\bigr) + g(-x - b)\\
&= 1 - g(x - b) + g(-x - b).
\end{aligned}$$
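A Monte Carlo check of the third identity in Lemma 2 (illustrative values of $b$ and $x$):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
X = rng.normal(size=1_000_000)
b, x = 0.7, 1.2
g = lambda t: norm.cdf(t) - t * norm.pdf(t)
print(np.mean(X**2 * (np.abs(X + b) >= x)),      # Monte Carlo estimate
      1 - g(x - b) + g(-x - b))                  # closed form from Lemma 2
```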
Now we prove Theorem 4. We let $\varepsilon = \hat\delta_0 - \delta$ and $\eta = \bar Y_B + \sigma^2\bar Y_S - \delta$. Then
$$\mathrm{cov}(\varepsilon, \eta) = 0, \qquad \varepsilon \sim N(0, 1+\sigma^2), \qquad \eta \sim N(0, \sigma^2+\sigma^4), \qquad\text{and}\qquad \bar Y_S = \frac{\eta - \varepsilon}{1+\sigma^2}.$$
Recall that we defined $b = \sigma^2/(1+\sigma^2)$, and so
$$\bar Y_B = \delta + \eta - \sigma^2\,\frac{\eta - \varepsilon}{1+\sigma^2} = \delta + b\varepsilon + (1-b)\eta.$$
Also with $a = N/(N+n)$,
$$\begin{aligned}
a\bar Y_B + (1-a)\bar Y_S &= a\delta + a\bigl(b\varepsilon + (1-b)\eta\bigr) + (1-a)\,\frac{\eta - \varepsilon}{1+\sigma^2}\\
&= a\delta + \bigl(ab - (1-a)(1-b)\bigr)\varepsilon + \bigl(a(1-b) + (1-a)(1-b)\bigr)\eta\\
&= a\delta + (a+b-1)\varepsilon + (1-b)\eta.
\end{aligned}$$
Letting $S = \varepsilon + \delta$, we have
$$\hat\mu = a\delta + (a+b-1)\varepsilon + (1-b)\eta - \bigl(aS - \lambda\cdot\mathrm{sign}(S)\bigr)I(|S| \geq a^{-1}\lambda),$$
from which the MSE can be calculated:
$$\begin{aligned}
E(\hat\mu^2(\lambda)) &= E\bigl[(a\delta + (a+b-1)\varepsilon + (1-b)\eta)^2\bigr]\\
&\quad - 2E\bigl[(a\delta + (a+b-1)\varepsilon + (1-b)\eta)\bigl(aS - \lambda\cdot\mathrm{sign}(S)\bigr)I(|S| \geq a^{-1}\lambda)\bigr]\\
&\quad + E\bigl[\bigl(aS - \lambda\cdot\mathrm{sign}(S)\bigr)^2 I(|S| \geq a^{-1}\lambda)\bigr]\\
&\equiv [1] - 2\times[2] + [3].
\end{aligned}$$
First
$$[1] = a^2\delta^2 + (a+b-1)^2(1+\sigma^2) + (1-b)^2\sigma^2(1+\sigma^2) = a^2\delta^2 + (a+b-1)^2(1+\sigma^2) + b.$$
Next using $\bar\Phi(x) = 1 - \Phi(x)$, and noting that the $(1-b)\eta$ term contributes nothing to [2] because $\eta$ is independent of $\varepsilon$ (hence of $S$) and has mean zero,
$$\begin{aligned}
[2] &= E\bigl[(a\delta + (a+b-1)\varepsilon)\bigl(aS - \lambda\cdot\mathrm{sign}(S)\bigr)I(|S| \geq a^{-1}\lambda)\bigr]\\
&= E\bigl[\bigl\{a\delta(a\delta - \lambda\cdot\mathrm{sign}(S)) + \bigl[a^2\delta + (a+b-1)(a\delta - \lambda\cdot\mathrm{sign}(S))\bigr]\varepsilon + a(a+b-1)\varepsilon^2\bigr\}I(|S| \geq a^{-1}\lambda)\bigr]\\
&= E\bigl[a\delta(a\delta - \lambda\cdot\mathrm{sign}(S))\,I(|S| \geq a^{-1}\lambda)\bigr]\\
&\quad + E\bigl[\bigl(a^2\delta + (a+b-1)(a\delta - \lambda\cdot\mathrm{sign}(S))\bigr)\varepsilon\,I(|S| \geq a^{-1}\lambda)\bigr]\\
&\quad + E\bigl[a(a+b-1)\varepsilon^2 I(|S| \geq a^{-1}\lambda)\bigr]\\
&= a\delta(a\delta - \lambda)\,\bar\Phi\Bigl(\frac{a^{-1}\lambda - \delta}{\sqrt{1+\sigma^2}}\Bigr) + a\delta(a\delta + \lambda)\,\Phi\Bigl(\frac{-a^{-1}\lambda - \delta}{\sqrt{1+\sigma^2}}\Bigr)\\
&\quad + \bigl[a^2\delta + (a+b-1)(a\delta - \lambda)\bigr]E\bigl[\varepsilon\,I(S \geq a^{-1}\lambda)\bigr] + \bigl[a^2\delta + (a+b-1)(a\delta + \lambda)\bigr]E\bigl[\varepsilon\,I(S < -a^{-1}\lambda)\bigr]\\
&\quad + a(a+b-1)E\bigl[\varepsilon^2 I(|S| \geq a^{-1}\lambda)\bigr].
\end{aligned}$$
Recall that we defined $\tilde x = a^{-1}\lambda/\sqrt{1+\sigma^2}$ and $\tilde\delta = \delta/\sqrt{1+\sigma^2}$. Now using Lemma 2,
$$E\bigl[\varepsilon^2 I(|S| \geq a^{-1}\lambda)\bigr] = (1+\sigma^2)\,E\Bigl[X^2\,I\Bigl(\Bigl|X + \frac{\delta}{\sqrt{1+\sigma^2}}\Bigr| \geq \frac{a^{-1}\lambda}{\sqrt{1+\sigma^2}}\Bigr)\Bigr] = (1+\sigma^2)\bigl[1 - g(\tilde x - \tilde\delta) + g(-\tilde x - \tilde\delta)\bigr].$$
Next
$$E\bigl[\varepsilon\,I(|S| \geq a^{-1}\lambda)\bigr] = E\bigl[\varepsilon\,I(S \geq a^{-1}\lambda)\bigr] + E\bigl[\varepsilon\,I(S \leq -a^{-1}\lambda)\bigr] = -E\bigl[\varepsilon\,I(S \leq a^{-1}\lambda)\bigr] + E\bigl[\varepsilon\,I(S \leq -a^{-1}\lambda)\bigr] = \sqrt{1+\sigma^2}\,\varphi(\tilde x - \tilde\delta) - \sqrt{1+\sigma^2}\,\varphi(-\tilde x - \tilde\delta).$$
So,
$$\begin{aligned}
[2] &= a\delta(a\delta - \lambda)\,\bar\Phi(\tilde x - \tilde\delta) + a\delta(a\delta + \lambda)\,\Phi(-\tilde x - \tilde\delta)\\
&\quad + \bigl[a^2\delta + (a+b-1)(a\delta - \lambda)\bigr]\sqrt{1+\sigma^2}\,\varphi(\tilde x - \tilde\delta)\\
&\quad - \bigl[a^2\delta + (a+b-1)(a\delta + \lambda)\bigr]\sqrt{1+\sigma^2}\,\varphi(-\tilde x - \tilde\delta)\\
&\quad + a(a+b-1)(1+\sigma^2)\bigl[1 - g(\tilde x - \tilde\delta) + g(-\tilde x - \tilde\delta)\bigr].
\end{aligned}$$
Finally,
$$\begin{aligned}
[3] &= E\bigl[\bigl(aS - \lambda\cdot\mathrm{sign}(S)\bigr)^2 I(|S| \geq a^{-1}\lambda)\bigr]\\
&= E\bigl[\bigl(a^2\varepsilon^2 + 2a\varepsilon(a\delta - \lambda\cdot\mathrm{sign}(S)) + (a\delta - \lambda\cdot\mathrm{sign}(S))^2\bigr)I(|S| \geq a^{-1}\lambda)\bigr]\\
&= E\bigl[a^2\varepsilon^2 I(|S| \geq a^{-1}\lambda)\bigr] + 2E\bigl[a\varepsilon(a\delta - \lambda\cdot\mathrm{sign}(S))\,I(|S| \geq a^{-1}\lambda)\bigr] + E\bigl[(a\delta - \lambda\cdot\mathrm{sign}(S))^2 I(|S| \geq a^{-1}\lambda)\bigr]\\
&= a^2(1+\sigma^2)\bigl[1 - g(\tilde x - \tilde\delta) + g(-\tilde x - \tilde\delta)\bigr]\\
&\quad + 2a(a\delta - \lambda)\sqrt{1+\sigma^2}\,\varphi(\tilde x - \tilde\delta) - 2a(a\delta + \lambda)\sqrt{1+\sigma^2}\,\varphi(-\tilde x - \tilde\delta)\\
&\quad + (a\delta - \lambda)^2\,\bar\Phi(\tilde x - \tilde\delta) + (a\delta + \lambda)^2\,\Phi(-\tilde x - \tilde\delta).
\end{aligned}$$
Hence, the MSE is
$$\begin{aligned}
E(\hat\mu^2) &= [1] - 2\times[2] + [3]\\
&= a^2\delta^2 + (a+b-1)^2(1+\sigma^2) + b\\
&\quad - a(a + 2b - 2)(1+\sigma^2)\bigl[1 - g(\tilde x - \tilde\delta) + g(-\tilde x - \tilde\delta)\bigr]\\
&\quad - \bigl[2a\lambda + 2(a+b-1)(a\delta - \lambda)\bigr]\sqrt{1+\sigma^2}\,\varphi(\tilde x - \tilde\delta)\\
&\quad - \bigl[2a\lambda - 2(a+b-1)(a\delta + \lambda)\bigr]\sqrt{1+\sigma^2}\,\varphi(-\tilde x - \tilde\delta)\\
&\quad - (a\delta - \lambda)(a\delta + \lambda)\bigl[1 - \Phi(\tilde x - \tilde\delta) + \Phi(-\tilde x - \tilde\delta)\bigr].
\end{aligned}$$
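The closed-form MSE can be checked against simulation. The sketch below draws $(\varepsilon, \eta)$ from their stated distributions and compares the empirical mean of $\hat\mu^2$ with the formula; all numerical settings are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
sigma2, delta, lam, n, N, reps = 2.0, 0.8, 0.5, 30, 300, 1_000_000
a, b = N / (N + n), sigma2 / (1 + sigma2)

eps = rng.normal(scale=np.sqrt(1 + sigma2), size=reps)
eta = rng.normal(scale=np.sqrt(sigma2 + sigma2**2), size=reps)
S = eps + delta
mu_hat = (a * delta + (a + b - 1) * eps + (1 - b) * eta
          - (a * S - lam * np.sign(S)) * (np.abs(S) >= lam / a))
mse_mc = np.mean(mu_hat**2)

g = lambda t: norm.cdf(t) - t * norm.pdf(t)
xt, dt = lam / (a * np.sqrt(1 + sigma2)), delta / np.sqrt(1 + sigma2)
root = np.sqrt(1 + sigma2)
mse_cf = (a**2 * delta**2 + (a + b - 1)**2 * (1 + sigma2) + b
          - a * (a + 2*b - 2) * (1 + sigma2) * (1 - g(xt - dt) + g(-xt - dt))
          - (2*a*lam + 2*(a + b - 1)*(a*delta - lam)) * root * norm.pdf(xt - dt)
          - (2*a*lam - 2*(a + b - 1)*(a*delta + lam)) * root * norm.pdf(-xt - dt)
          - (a*delta - lam)*(a*delta + lam) * (1 - norm.cdf(xt - dt) + norm.cdf(-xt - dt)))
print(mse_mc, mse_cf)    # should agree to Monte Carlo accuracy
```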
8.4 Supporting lemmas for inadmissibility
In this section we first recall Stein’s Lemma. Then we prove two technical
lemmas used in the proof of Theorem 5.
Lemma 3. Let $Z \sim N(0,1)$ and let $g : \mathbb{R} \to \mathbb{R}$ be an indefinite integral of the Lebesgue measurable function $g'$, essentially the derivative of $g$. If $E(|g'(Z)|) < \infty$ then
$$E(g'(Z)) = E(Zg(Z)).$$
Proof. Stein (1981).
Lemma 4. Let $\eta \sim N(0, I_d)$, $b \in \mathbb{R}^d$, and let $A > 0$ and $B > 0$ be constants. Let
$$Z = \eta + \frac{A(b - \eta)}{\|b - \eta\|^2 + B}.$$
Then
$$E(\|Z\|^2) = d + E\Bigl(\frac{A(A + 4 - 2d)}{\|b - \eta\|^2 + B}\Bigr) - E\Bigl(\frac{AB(A + 4)}{(\|b - \eta\|^2 + B)^2}\Bigr) < d + E\Bigl(\frac{A(A + 4 - 2d)}{\|b - \eta\|^2 + B}\Bigr).$$
Proof. First,
$$E(\|Z\|^2) = d + E\Bigl(\frac{A^2\|b - \eta\|^2}{(\|b - \eta\|^2 + B)^2}\Bigr) + 2A\sum_{k=1}^d E\Bigl(\frac{\eta_k(b_k - \eta_k)}{\|b - \eta\|^2 + B}\Bigr).$$
Now define
$$g(\eta_k) = \frac{b_k - \eta_k}{\|b - \eta\|^2 + B} = \frac{b_k - \eta_k}{(b_k - \eta_k)^2 + \|b_{-k} - \eta_{-k}\|^2 + B}.$$
By Stein's lemma (Lemma 3), we have
$$E\Bigl(\frac{\eta_k(b_k - \eta_k)}{\|b - \eta\|^2 + B}\Bigr) = E(g'(\eta_k)) = E\Bigl(\frac{2(b_k - \eta_k)^2}{(\|b - \eta\|^2 + B)^2} - \frac{1}{\|b - \eta\|^2 + B}\Bigr)$$
and thus
$$\begin{aligned}
E(\|Z\|^2) &= d + E\Bigl(\frac{(4A + A^2)\|b - \eta\|^2}{(\|b - \eta\|^2 + B)^2} - \frac{2Ad}{\|b - \eta\|^2 + B}\Bigr)\\
&= d + E\Bigl(\frac{(4A + A^2)\|b - \eta\|^2}{(\|b - \eta\|^2 + B)^2} - \frac{2Ad(\|b - \eta\|^2 + B)}{(\|b - \eta\|^2 + B)^2}\Bigr)\\
&= d + E\Bigl(\frac{4A + A^2 - 2Ad}{\|b - \eta\|^2 + B} - \frac{(4A + A^2)B}{(\|b - \eta\|^2 + B)^2}\Bigr),
\end{aligned}$$
after collecting terms.
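A Monte Carlo check of the exact expression for $E(\|Z\|^2)$ in Lemma 4 (illustrative values of $d$, $A$, $B$, and $b$):

```python
import numpy as np

rng = np.random.default_rng(7)
d, A, B, reps = 6, 1.5, 2.0, 500_000
b = rng.normal(size=d)
eta = rng.normal(size=(reps, d))
r2 = np.sum((b - eta) ** 2, axis=1)                    # ||b - eta||^2
Z = eta + A * (b - eta) / (r2 + B)[:, None]
lhs = np.mean(np.sum(Z**2, axis=1))
rhs = d + np.mean(A * (A + 4 - 2*d) / (r2 + B)) - np.mean(A * B * (A + 4) / (r2 + B)**2)
print(lhs, rhs)                                        # should agree to Monte Carlo accuracy
```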
Lemma 5. For integer $m \geq 1$, let $Q \sim \chi^2_{(m)}$, $C > 1$, $D > 0$ and put
$$Z = \frac{Q(C - m^{-1}Q)}{Q + D}.$$
Then
$$E(Z) \geq \frac{(C - 1)m - 2}{m + 2 + D},$$
and so $E(Z) > 0$ whenever $C > 1 + 2/m$.
Proof. The $\chi^2_{(m)}$ density function is $p_m(x) = \bigl(2^{m/2}\Gamma(\tfrac m2)\bigr)^{-1}x^{m/2-1}e^{-x/2}$. Thus
$$\begin{aligned}
E(Z) &= \frac{1}{2^{m/2}\Gamma(\tfrac m2)}\int_0^\infty \frac{x(C - m^{-1}x)}{x + D}\,x^{m/2-1}e^{-x/2}\,dx\\
&= \frac{1}{2^{m/2}\Gamma(\tfrac m2)}\int_0^\infty \frac{C - m^{-1}x}{x + D}\,x^{(m+2)/2-1}e^{-x/2}\,dx\\
&= \frac{2^{m/2+1}\Gamma(\tfrac{m+2}{2})}{2^{m/2}\Gamma(\tfrac m2)}\int_0^\infty \frac{C - m^{-1}x}{x + D}\,p_{m+2}(x)\,dx\\
&= m\int_0^\infty \frac{C - m^{-1}x}{x + D}\,p_{m+2}(x)\,dx\\
&\geq m\,\frac{C - (m+2)/m}{m + 2 + D}
\end{aligned}$$
by Jensen's inequality.
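A Monte Carlo check of the bound in Lemma 5 (illustrative values of $m$, $C$, and $D$):

```python
import numpy as np

rng = np.random.default_rng(8)
m, C, D, reps = 12, 1.6, 3.0, 1_000_000
Q = rng.chisquare(m, size=reps)
EZ = np.mean(Q * (C - Q / m) / (Q + D))
print(EZ, ((C - 1) * m - 2) / (m + 2 + D))   # E(Z) should be at least the bound
```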
8.5 Proof of Theorem 5
We prove this first for $\hat\omega_{\mathrm{plug},h} = \hat\omega_{\mathrm{plug}}$, that is, taking $h(\hat\sigma_B^2) = d\hat\sigma_B^2/N$. We also assume at first that $\Sigma_B = \Sigma$.
Note that $\hat\beta_S = \beta + (X_S^TX_S)^{-1}X_S^T\varepsilon_S$ and $\hat\beta_B = \beta + \gamma + (X_B^TX_B)^{-1}X_B^T\varepsilon_B$. It is convenient to define
$$\eta_S = \Sigma^{1/2}(X_S^TX_S)^{-1}X_S^T\varepsilon_S \qquad\text{and}\qquad \eta_B = \Sigma^{1/2}(X_B^TX_B)^{-1}X_B^T\varepsilon_B.$$
Then we can rewrite $\hat\beta_S = \beta + \Sigma^{-1/2}\eta_S$ and $\hat\beta_B = \beta + \gamma + \Sigma^{-1/2}\eta_B$. Similarly, we let
$$\hat\sigma_S^2 = \frac{\|Y_S - X_S\hat\beta_S\|^2}{n - d} \qquad\text{and}\qquad \hat\sigma_B^2 = \frac{\|Y_B - X_B\hat\beta_B\|^2}{N - d}.$$
Now $(\eta_S, \eta_B, \hat\sigma_S^2, \hat\sigma_B^2)$ are mutually independent, with
$$\eta_S \sim N\Bigl(0, \frac{\sigma_S^2}{n}I_d\Bigr), \qquad \eta_B \sim N\Bigl(0, \frac{\sigma_B^2}{N}I_d\Bigr), \qquad \hat\sigma_S^2 \sim \frac{\sigma_S^2}{n - d}\chi^2_{(n-d)}, \qquad\text{and}\qquad \hat\sigma_B^2 \sim \frac{\sigma_B^2}{N - d}\chi^2_{(N-d)}.$$
We easily find that $E(\|X\hat\beta_S - X\beta\|^2) = d\sigma_S^2/n$. Next we find $\hat\omega$ and a bound on $E(\|X\hat\beta(\hat\omega) - X\beta\|^2)$.
Let $\gamma^* = \Sigma^{1/2}\gamma$ so that $\hat\gamma = \hat\beta_B - \hat\beta_S = \Sigma^{-1/2}(\gamma^* + \eta_B - \eta_S)$. Then
$$\hat\omega = \hat\omega_{\mathrm{plug}} = \frac{\hat\gamma^T\hat\Sigma\hat\gamma + d\hat\sigma_B^2/N}{\hat\gamma^T\hat\Sigma\hat\gamma + d\hat\sigma_B^2/N + d\hat\sigma_S^2/n} = \frac{\|\gamma^* + \eta_B - \eta_S\|^2 + d\hat\sigma_B^2/N}{\|\gamma^* + \eta_B - \eta_S\|^2 + d(\hat\sigma_B^2/N + \hat\sigma_S^2/n)}.$$
Now we can express the mean squared error as
$$\begin{aligned}
E(\|X\hat\beta(\hat\omega) - X\beta\|^2) &= E\bigl(\|X\Sigma^{-1/2}(\hat\omega\eta_S + (1 - \hat\omega)(\gamma^* + \eta_B))\|^2\bigr)\\
&= E\bigl(\|\hat\omega\eta_S + (1 - \hat\omega)(\gamma^* + \eta_B)\|^2\bigr)\\
&= E\bigl(\|\eta_S + (1 - \hat\omega)(\gamma^* + \eta_B - \eta_S)\|^2\bigr)\\
&= E\Bigl(\Bigl\|\eta_S + \frac{(\gamma^* + \eta_B - \eta_S)\,d\hat\sigma_S^2/n}{\|\gamma^* + \eta_B - \eta_S\|^2 + d(\hat\sigma_B^2/N + \hat\sigma_S^2/n)}\Bigr\|^2\Bigr).
\end{aligned}$$
To simplify the expression for mean squared error we introduce
$$\begin{aligned}
Q &= m\hat\sigma_S^2/\sigma_S^2 \sim \chi^2_{(m)},\\
\eta_S^* &= \sqrt{n}\,\eta_S/\sigma_S \sim N(0, I_d),\\
b &= \sqrt{n}\,(\gamma^* + \eta_B)/\sigma_S,\\
A &= d\hat\sigma_S^2/\sigma_S^2 = dQ/m, \quad\text{and}\\
B &= nd(\hat\sigma_B^2/N + \hat\sigma_S^2/n)/\sigma_S^2 = d\bigl((n/N)\hat\sigma_B^2/\sigma_S^2 + Q/m\bigr).
\end{aligned}$$
The quantities A and B are, after conditioning, the constants that appear in
technical Lemma 4. Similarly C and D introduced below match the constants
used in Lemma 5.
With these substitutions and some algebra,
$$E(\|X\hat\beta(\hat\omega) - X\beta\|^2) = \frac{\sigma_S^2}{n}\,E\Bigl(\Bigl\|\eta_S^* + \frac{A(b - \eta_S^*)}{\|b - \eta_S^*\|^2 + B}\Bigr\|^2\Bigr) = \frac{\sigma_S^2}{n}\,E\biggl(E\Bigl(\Bigl\|\eta_S^* + \frac{A(b - \eta_S^*)}{\|b - \eta_S^*\|^2 + B}\Bigr\|^2 \;\Bigm|\; \eta_B, \hat\sigma_S^2, \hat\sigma_B^2\Bigr)\biggr).$$
We now apply two technical lemmas from Section 8.4. Since $\eta_S^*$ is independent of $(b, A, B)$ and $Q \sim \chi^2_{(m)}$, by Lemma 4, we have
$$E\Bigl(\Bigl\|\eta_S^* + \frac{A(b - \eta_S^*)}{\|b - \eta_S^*\|^2 + B}\Bigr\|^2 \;\Bigm|\; \eta_B, \hat\sigma_S^2, \hat\sigma_B^2\Bigr) < d + E\Bigl(\frac{A(A + 4 - 2d)}{\|b - \eta_S^*\|^2 + B} \;\Bigm|\; \eta_B, \hat\sigma_S^2, \hat\sigma_B^2\Bigr).$$
Hence
$$\begin{aligned}
\Delta &\equiv E(\|X\hat\beta_S - X\beta\|^2) - E(\|X\hat\beta(\hat\omega) - X\beta\|^2)\\
&> \frac{\sigma_S^2}{n}\,E\Bigl(\frac{A(2d - A - 4)}{\|b - \eta_S^*\|^2 + B}\Bigr)\\
&= \frac{\sigma_S^2}{n}\,E\Bigl(\frac{(dQ/m)(2d - dQ/m - 4)}{\|b - \eta_S^*\|^2 + (B - A) + dQ/m}\Bigr)\\
&= \frac{d\sigma_S^2}{n}\,E\Bigl(\frac{Q(2 - Q/m - 4/d)}{\|b - \eta_S^*\|^2 m/d + (B - A)m/d + Q}\Bigr)\\
&= \frac{d\sigma_S^2}{n}\,E\Bigl(\frac{Q(C - Q/m)}{Q + D}\Bigr)
\end{aligned}$$
where $C = 2 - 4/d$ and $D = (m/d)\bigl(\|b - \eta_S^*\|^2 + dnN^{-1}\hat\sigma_B^2/\sigma_S^2\bigr)$.
Now suppose that $d \geq 5$. Then $C \geq 2 - 4/5 > 1$ and so, conditionally on $\eta_S^*$, $\eta_B$, and $\hat\sigma_B^2$, the requirements of Lemma 5 are satisfied by $C$, $D$ and $Q$. Therefore
$$\Delta \geq \frac{d\sigma_S^2}{n}\,E\Bigl(\frac{m(1 - 4/d) - 2}{m + 2 + D}\Bigr) \qquad (27)$$
where the randomness in (27) is only through $D$, which depends on $\eta_S^*$, $\eta_B$ (through $b$) and $\hat\sigma_B^2$. By Jensen's inequality
$$\Delta > \frac{d\sigma_S^2}{n}\,\frac{m(1 - 4/d) - 2}{m + 2 + E(D)} \geq 0 \qquad (28)$$
whenever $m(1 - 4/d) \geq 2$. The first inequality in (28) is strict because $\mathrm{var}(D) > 0$. Therefore $\Delta > 0$. The condition on $m$ and $d$ holds for any $m \geq 10$ when $d \geq 5$.