This document summarizes linear regression methods for modeling relationships between variables, including least squares regression, QR decomposition, subset selection, and coefficient shrinkage techniques. It introduces the linear regression model and describes how to estimate regression coefficients by minimizing the residual sum of squares. Methods for selecting significant variables like stepwise selection and for shrinking coefficients like ridge regression and the lasso are also overviewed. An example using prostate cancer data is presented to illustrate error comparison between models.
Stochastic reaction networks (SRNs) are a particular class of continuous-time Markov chains used to model a wide range of phenomena, including biological/chemical reactions, epidemics, risk theory, queuing, and supply chain/social/multi-agent networks. In this context, we explore the efficient estimation of statistical quantities, particularly rare event probabilities, and propose two alternative importance sampling (IS) approaches [1,2] to improve the efficiency of the Monte Carlo (MC) estimator. The key challenge in the IS framework is choosing an appropriate change of probability measure that achieves substantial variance reduction, which often requires insight into the underlying problem. We therefore propose an automated approach to obtain a highly efficient path-dependent measure change, based on an original connection between finding optimal IS parameters and solving a variance minimization problem via a stochastic optimal control formulation. We pursue two alternative approaches to mitigate the curse of dimensionality when solving the resulting dynamic programming problem. In the first approach [1], we propose a learning-based method that approximates the value function using a neural network whose parameters are determined via a stochastic optimization algorithm. As an alternative, we present in [2] a dimension reduction method based on mapping the problem to a significantly lower dimensional space via the Markovian projection (MP) idea. The output of this model reduction technique is a low-dimensional SRN (potentially one-dimensional) that preserves the marginal distribution of the original high-dimensional SRN system. The dynamics of the projected process are obtained via a discrete $L^2$ regression. By solving the resulting projected Hamilton-Jacobi-Bellman (HJB) equation for the reduced-dimensional SRN, we obtain projected IS parameters, which are then mapped back to the original full-dimensional SRN system, yielding an efficient IS-MC estimator for the full-dimensional SRN. Our analysis and numerical experiments verify that both proposed IS approaches (learning-based and MP-HJB-IS) substantially reduce the MC estimator's variance, resulting in lower computational complexity in the rare event regime than standard MC estimators.
[1] Ben Hammouda, C., Ben Rached, N., Tempone, R., and Wiechert, S. Learning-based importance sampling via stochastic optimal control for stochastic reaction networks. Statistics and Computing 33, no. 3 (2023): 58.
[2] Ben Hammouda, C., Ben Rached, N., Tempone, R., and Wiechert, S. (2023). Automated Importance Sampling via Optimal Control for Stochastic Reaction Networks: A Markovian Projection-based Approach. To appear soon.
These slides describe the line, circle, and ellipse drawing algorithms in computer graphics, and also cover filled-area primitives.
Random Matrix Theory and Machine Learning - Part 3, by Fabian Pedregosa
ICML 2021 tutorial on random matrix theory and machine learning.
Part 3 covers: 1. Motivation: Average-case versus worst-case in high dimensions 2. Algorithm halting times (runtimes) 3. Outlook
MVPA with SpaceNet: sparse structured priors, by Elvis DOHMATOB
The GraphNet (aka S-Lasso), as well as other "sparsity + structure" priors like TV (Total Variation), TV-L1, etc., are not easily applicable to brain data because of technical problems relating to the selection of the regularization parameters. Also, in their own right, such models lead to challenging high-dimensional optimization problems. In this manuscript, we present some heuristics for speeding up the overall optimization process: (a) early stopping, whereby one halts the optimization process when the test score (performance on left-out data) for the internal cross-validation for model selection stops improving, and (b) univariate feature screening, whereby irrelevant (non-predictive) voxels are detected and eliminated before the optimization problem is entered, thus reducing the size of the problem. Empirical results with GraphNet on real MRI (Magnetic Resonance Imaging) datasets indicate that these heuristics are a win-win strategy, as they add speed without sacrificing the quality of the predictions. We expect the proposed heuristics to work on other models like TV-L1, etc.
Convex Optimization Modelling with CVXOPT, by andrewmart11
An introduction to convex optimization modelling using cvxopt in an IPython environment. The facility location problem is used as an example to demonstrate modelling in cvxopt.
Opening of our Deep Learning Lunch & Learn series. First session: introduction to Neural Networks, Gradient descent and backpropagation, by Pablo J. Villacorta, with a prologue by Fernando Velasco
A lambda calculus for density matrices with classical and probabilistic controls, by Alejandro Díaz-Caro
Slides of my presentation at APLAS'17 (Suzhou, China, December 2017).
Publication: LNCS 10695:448-467, 2017 (http://dx.doi.org/10.1007/978-3-319-71237-6_22)
ArXiv'd at https://arxiv.org/abs/1705.00097
As Europe's leading economic powerhouse and the fourth-largest economy globally, Germany stands at the forefront of innovation and industrial might. Renowned for its precision engineering and high-tech sectors, Germany's economic structure is heavily supported by a robust service industry, accounting for approximately 68% of its GDP. This economic clout and strategic geopolitical stance position Germany as a focal point in the global cyber threat landscape.
In the face of escalating global tensions, particularly those emanating from geopolitical disputes with nations like Russia and China, Germany has witnessed a significant uptick in targeted cyber operations. Our analysis indicates a marked increase in the sophistication of cyberattacks aimed at critical infrastructure and key industrial sectors. These attacks range from ransomware campaigns to Advanced Persistent Threats (APTs), threatening national security and business integrity.
🔑 Key findings include:
🔍 Increased frequency and complexity of cyber threats.
🔍 Escalation of state-sponsored and criminally motivated cyber operations.
🔍 Active dark web exchanges of malicious tools and tactics.
Our comprehensive report delves into these challenges, using a blend of open-source and proprietary data collection techniques. By monitoring activity on critical networks and analyzing attack patterns, our team provides a detailed overview of the threats facing German entities.
This report aims to equip stakeholders across public and private sectors with the knowledge to enhance their defensive strategies, reduce exposure to cyber risks, and reinforce Germany's resilience against cyber threats.
Explore our comprehensive data analysis project presentation on predicting product ad campaign performance. Learn how data-driven insights can optimize your marketing strategies and enhance campaign effectiveness. Perfect for professionals and students looking to understand the power of data analysis in advertising. For more details visit: https://bostoninstituteofanalytics.org/data-science-and-artificial-intelligence/
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2..., by pchutichetpong
M Capital Group ("MCG") expects demand to keep growing and supply to evolve, facilitated by institutional investment rotating out of offices and into work-from-home ("WFH") arrangements, while the need for data storage keeps expanding with global internet usage, with experts predicting 5.3 billion users by 2023. These market factors will be underpinned by technological changes, such as progressing cloud services and edge sites, with the industry expected to see strong annual growth of 13% over the next 4 years.
Whilst competitive headwinds remain, illustrated by the recent second bankruptcy filing of Sungard, which blames "COVID-19 and other macroeconomic trends including delayed customer spending decisions, insourcing and reductions in IT spending, energy inflation and reduction in demand for certain services", the industry has seen key adjustments, and MCG believes that engineering cost management and technological innovation will be paramount to success.
MCG reports that the more favorable market conditions expected over the next few years, helped by the winding down of pandemic restrictions and a hybrid working environment, will drive market momentum forward. The continuous injection of capital by alternative investment firms, as well as growing infrastructure investment from cloud service providers and social media companies, whose revenues are expected to grow over 3.6x by value by 2026, will likely help propel data center provision and innovation. These factors paint a promising picture for the industry players that offset rising input costs and adapt to new technologies.
According to M Capital Group: “Specifically, the long-term cost-saving opportunities available from the rise of remote managing will likely aid value growth for the industry. Through margin optimization and further availability of capital for reinvestment, strong players will maintain their competitive foothold, while weaker players exit the market to balance supply and demand.”
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
5. Linear Assumption
A linear model assumes the regression function E(Y | X) is reasonably approximated as linear, i.e.

$$f(X) = \beta_0 + \sum_{j=1}^{p} X_j \beta_j, \qquad X = (X_1, X_2, \ldots, X_p)$$

• The regression function f(x) = E(Y | X = x) was the result of minimizing squared expected prediction error
• Making the above assumption has high bias, but low variance
6. Least Squares Regression
Estimate the parameters based on a set of training data: (x1, y1), …, (xN, yN).

Minimize the residual sum of squares:

$$\mathrm{RSS}(\beta) = \sum_{i=1}^{N} \Big( y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\,\beta_j \Big)^2$$

This is a reasonable criterion when…
• Training samples are random, independent draws
• OR, the yi's are conditionally independent given xi
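As a quick illustration of the criterion above, here is a minimal numpy sketch of the RSS computation; the function and argument names (rss, beta0, beta) are illustrative, not from the slides.

```python
import numpy as np

def rss(beta0, beta, X, y):
    """Residual sum of squares for the linear model y_i ~ beta0 + sum_j x_ij * beta_j."""
    residuals = y - (beta0 + X @ beta)   # X: (N, p), beta: (p,), y: (N,)
    return float(residuals @ residuals)
```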
7. Matrix Notation
X is the N × (p+1) matrix of input vectors, y is the N-vector of outputs (labels), and β is the (p+1)-vector of parameters:

$$\mathbf{X} = \begin{pmatrix} 1 & x_{11} & x_{12} & \cdots & x_{1p} \\ 1 & x_{21} & x_{22} & \cdots & x_{2p} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_{N1} & x_{N2} & \cdots & x_{Np} \end{pmatrix} = \begin{pmatrix} 1 & \mathbf{x}_1^T \\ 1 & \mathbf{x}_2^T \\ \vdots & \vdots \\ 1 & \mathbf{x}_N^T \end{pmatrix}, \qquad \mathbf{y} = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{pmatrix}, \qquad \beta = \begin{pmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_p \end{pmatrix}$$
8. Perfectly Linear Data
When the data is exactly linear, there exists β s.t.

$$\mathbf{y} = \mathbf{X}\beta$$

(linear regression model in matrix form)

Usually the data is not an exact fit, so…
9. Finding the Best Fit?
[Figure: scatter plot of data generated from Y = 1.5X + 0.35 + N(0, 1.2), with X on 0–10 and Y on roughly −4 to 20, and a fitted line.]
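A figure like the one above can be reproduced with a short numpy sketch; the sample size and seed are arbitrary, and N(0, 1.2) is read here as noise with standard deviation 1.2 (an assumption, since the slide does not say variance or standard deviation).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 1.5 * x + 0.35 + rng.normal(0.0, 1.2, size=50)   # Y = 1.5X + 0.35 + noise

# Least squares line fit; np.polyfit returns coefficients highest degree first.
slope, intercept = np.polyfit(x, y, 1)
```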
10. Minimize the RSS
We can rewrite the RSS in matrix form:

$$\mathrm{RSS}(\beta) = (\mathbf{y} - \mathbf{X}\beta)^T (\mathbf{y} - \mathbf{X}\beta)$$

Getting a least squares fit involves minimizing the RSS: solve for the parameters at which the first derivative of the RSS is zero.
11. Solving Least Squares
Derivative of a quadratic product:

$$\frac{d}{dx}\,(\mathbf{A}x + b)^T \mathbf{C}\, (\mathbf{D}x + e) = \mathbf{A}^T \mathbf{C} (\mathbf{D}x + e) + \mathbf{D}^T \mathbf{C}^T (\mathbf{A}x + b)$$

Applying this to $\mathrm{RSS}(\beta) = (\mathbf{X}\beta - \mathbf{y})^T \mathbf{I}_N (\mathbf{X}\beta - \mathbf{y})$ with A = D = X, C = I_N, and b = e = −y:

$$\frac{d\,\mathrm{RSS}}{d\beta} = 2\,\mathbf{X}^T \mathbf{I}_N (\mathbf{X}\beta - \mathbf{y}) = 2\,\mathbf{X}^T \mathbf{X}\beta - 2\,\mathbf{X}^T \mathbf{y}$$

Then, setting the first derivative to zero:

$$\mathbf{X}^T \mathbf{X}\beta = \mathbf{X}^T \mathbf{y}$$
12. Least Squares Solution
• Least squares coefficients:

$$\hat{\beta} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T \mathbf{y}$$

• Least squares predictions:

$$\hat{\mathbf{y}} = \mathbf{X}\hat{\beta} = \mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T \mathbf{y}$$

• Estimated variance:

$$\hat{\sigma}^2 = \frac{\mathrm{RSS}(\hat{\beta})}{N - p - 1} = \frac{1}{N - p - 1}\sum_{i=1}^{N}\big(y_i - \hat{y}_i\big)^2$$
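A minimal numpy sketch of these three formulas, assuming X already contains the leading column of 1s; the function name is illustrative.

```python
import numpy as np

def least_squares(X, y):
    """Least squares coefficients, predictions, and estimated variance."""
    N, p_plus_1 = X.shape
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)     # solves X^T X beta = X^T y
    y_hat = X @ beta_hat
    sigma2_hat = np.sum((y - y_hat) ** 2) / (N - p_plus_1)   # N - p - 1 in the slides
    return beta_hat, y_hat, sigma2_hat
```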
14. Statistics of Least Squares
We can draw inferences about the parameters β by assuming the true model is linear with noise, i.e.

$$Y = \beta_0 + \sum_{j=1}^{p} X_j \beta_j + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^2)$$

Then,

$$\hat{\beta} \sim N\big(\beta,\; (\mathbf{X}^T\mathbf{X})^{-1}\sigma^2\big), \qquad (N - p - 1)\,\hat{\sigma}^2 \sim \sigma^2\,\chi^2_{N-p-1}$$
15. Significance of One Parameter
Can we eliminate one parameter, Xj (j ≠ 0)? Look at the standardized coefficient

$$z_j = \frac{\hat{\beta}_j}{\hat{\sigma}\sqrt{v_j}} \sim t_{N-p-1}$$

where vj is the jth diagonal element of $(\mathbf{X}^T\mathbf{X})^{-1}$.
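A direct numpy translation of the z-score above (a sketch; assumes X carries the intercept column, and the function name is illustrative):

```python
import numpy as np

def z_scores(X, y):
    """Standardized coefficients z_j = beta_hat_j / (sigma_hat * sqrt(v_j))."""
    N, p_plus_1 = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    sigma_hat = np.sqrt(np.sum((y - X @ beta_hat) ** 2) / (N - p_plus_1))
    v = np.diag(XtX_inv)            # v_j: j-th diagonal element of (X^T X)^{-1}
    return beta_hat / (sigma_hat * np.sqrt(v))
```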
16. Significance of Many Parameters
We may want to test many features at once, comparing model M1 with p1+1 parameters to model M0 with p0+1 parameters taken from M1 (p0 < p1). Use the F statistic:

$$F = \frac{(\mathrm{RSS}_0 - \mathrm{RSS}_1)/(p_1 - p_0)}{\mathrm{RSS}_1/(N - p_1 - 1)} \sim F_{p_1 - p_0,\; N - p_1 - 1}$$
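The F statistic translates directly to code; this sketch also reports a tail probability via scipy (the function name and argument order are illustrative):

```python
from scipy import stats

def f_statistic(rss0, rss1, p0, p1, N):
    """F statistic for nested models: M0 (p0+1 params) inside M1 (p1+1 params)."""
    F = ((rss0 - rss1) / (p1 - p0)) / (rss1 / (N - p1 - 1))
    p_value = stats.f.sf(F, p1 - p0, N - p1 - 1)   # upper-tail probability
    return F, p_value
```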
17. Confidence Interval for Beta
We can find a confidence interval for βj.

Confidence interval for a single parameter (a 1 − 2α confidence interval for βj):

$$\Big(\hat{\beta}_j - z^{(1-\alpha)}\sqrt{v_j}\,\hat{\sigma},\;\; \hat{\beta}_j + z^{(1-\alpha)}\sqrt{v_j}\,\hat{\sigma}\Big)$$

Confidence set for the entire parameter vector β (bounds on β):

$$C_\beta = \Big\{\beta : (\hat{\beta} - \beta)^T\, \mathbf{X}^T\mathbf{X}\, (\hat{\beta} - \beta) \le \hat{\sigma}^2\, \chi^{2\,(1-2\alpha)}_{p+1} \Big\}$$
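A sketch of the single-coefficient interval. Note an assumption: the slide writes a z quantile, but since the standardized coefficient on slide 15 is t-distributed with N − p − 1 degrees of freedom, the t quantile is used here; all names are illustrative.

```python
import numpy as np
from scipy import stats

def coef_interval(beta_hat_j, v_j, sigma_hat, N, p, alpha=0.025):
    """(1 - 2*alpha) confidence interval for a single coefficient beta_j."""
    q = stats.t.ppf(1 - alpha, df=N - p - 1)   # quantile matching t_{N-p-1}
    half_width = q * np.sqrt(v_j) * sigma_hat
    return beta_hat_j - half_width, beta_hat_j + half_width
```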
18. 2.1: Prostate Cancer (Example)
Data
• lcavol: log cancer volume
• lweight: log prostate weight
• age: age
• lbph: log of benign prostatic hyperplasia amount
• svi: seminal vesicle invasion
• lcp: log of capsular penetration
• gleason: Gleason score
• pgg45: percent of Gleason scores 4 or 5
19. Technique for Multiple Regression
Computing $\hat{\beta} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$ directly has poor numeric properties.

QR decomposition of X: decompose X = QR, where
• Q is N × (p+1) with orthonormal columns ($\mathbf{Q}^T\mathbf{Q} = \mathbf{I}_{p+1}$)
• R is a (p+1) × (p+1) upper triangular matrix

Then

$$\hat{\beta} = (\mathbf{R}^T\mathbf{Q}^T\mathbf{Q}\mathbf{R})^{-1}\mathbf{R}^T\mathbf{Q}^T\mathbf{y} = (\mathbf{R}^T\mathbf{R})^{-1}\mathbf{R}^T\mathbf{Q}^T\mathbf{y} = \mathbf{R}^{-1}\mathbf{Q}^T\mathbf{y}$$

$$\hat{\mathbf{y}} = \mathbf{X}\hat{\beta} = \mathbf{Q}\mathbf{Q}^T\mathbf{y}$$

Column by column, the decomposition reads

$$\mathbf{x}_1 = r_{11}\mathbf{q}_1, \qquad \mathbf{x}_2 = r_{12}\mathbf{q}_1 + r_{22}\mathbf{q}_2, \qquad \mathbf{x}_3 = r_{13}\mathbf{q}_1 + r_{23}\mathbf{q}_2 + r_{33}\mathbf{q}_3, \;\ldots$$
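A minimal sketch of the QR route in numpy/scipy, avoiding the explicit inverse; the function name is illustrative.

```python
import numpy as np
from scipy.linalg import solve_triangular

def least_squares_qr(X, y):
    """Least squares via the thin QR decomposition: solve R beta = Q^T y."""
    Q, R = np.linalg.qr(X)          # Q: N x (p+1) orthonormal columns; R: upper triangular
    beta_hat = solve_triangular(R, Q.T @ y)   # back-substitution, no explicit inverse
    y_hat = Q @ (Q.T @ y)                     # predictions: Q Q^T y
    return beta_hat, y_hat
```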
20. Gram-Schmidt Procedure
1) Initialize z0 = x0 = 1
2) For j = 1 to p:
   For k = 0 to j−1, regress xj on the zk's so that
   $$\hat{\gamma}_{kj} = \frac{\langle \mathbf{z}_k, \mathbf{x}_j\rangle}{\langle \mathbf{z}_k, \mathbf{z}_k\rangle}$$
   (univariate least squares estimates), then compute the next residual
   $$\mathbf{z}_j = \mathbf{x}_j - \sum_{k=0}^{j-1} \hat{\gamma}_{kj}\,\mathbf{z}_k$$
3) Let Z = [z0 z1 … zp] and let Γ be upper triangular with entries $\hat{\gamma}_{kj}$. Then
   $$\mathbf{X} = \mathbf{Z}\boldsymbol{\Gamma} = \mathbf{Z}\mathbf{D}^{-1}\mathbf{D}\boldsymbol{\Gamma} = \mathbf{Q}\mathbf{R}$$
   where D is diagonal with $D_{jj} = \lVert \mathbf{z}_j \rVert$, so that Q = ZD⁻¹ and R = DΓ.
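A direct sketch of the procedure in numpy, assuming the first column of X is the all-ones vector as in step 1; names are illustrative.

```python
import numpy as np

def gram_schmidt(X):
    """Successive orthogonalization of the columns of X (first column all 1s)."""
    N, p_plus_1 = X.shape
    Z = np.zeros((N, p_plus_1))
    Z[:, 0] = X[:, 0]                     # z_0 = x_0 = 1
    for j in range(1, p_plus_1):
        z = X[:, j].astype(float).copy()
        for k in range(j):                # regress x_j on z_0, ..., z_{j-1}
            gamma = (Z[:, k] @ X[:, j]) / (Z[:, k] @ Z[:, k])
            z -= gamma * Z[:, k]
        Z[:, j] = z                       # residual: next orthogonal direction
    return Z
```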
21. Subset Selection
We want to eliminate unnecessary features.

Best subset regression
• Choose the subset of size k with the lowest RSS
• The Leaps and Bounds procedure works with p up to 40

Forward stepwise selection
• Continually add the feature with the largest F-ratio

Backward stepwise selection
• Remove the feature with the smallest F-ratio

These are greedy techniques, not guaranteed to find the best model. For adding or dropping a single feature, the F-ratio is

$$F = \frac{\mathrm{RSS}_0 - \mathrm{RSS}_1}{\mathrm{RSS}_1/(N - p_1 - 1)} \sim F_{1,\; N - p_1 - 1}$$
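A sketch of forward stepwise selection in numpy; since adding one feature with the largest F-ratio is equivalent to adding the one with the smallest RSS, the code selects on RSS. The function name and the choice to refit from scratch each step are illustrative, not an efficient implementation.

```python
import numpy as np

def forward_stepwise(X, y, k):
    """Greedily add the feature that most reduces the RSS (largest F-ratio)."""
    N, p = X.shape
    active, remaining = [], list(range(p))
    ones = np.ones((N, 1))
    for _ in range(k):
        def rss_with(j):
            Xj = np.hstack([ones, X[:, active + [j]]])
            beta, *_ = np.linalg.lstsq(Xj, y, rcond=None)
            r = y - Xj @ beta
            return r @ r
        best = min(remaining, key=rss_with)   # smallest RSS <=> largest F-ratio
        active.append(best)
        remaining.remove(best)
    return active
```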
22. Coefficient Shrinkage
Use additional penalties to reduce coefficients.

Ridge regression
• Minimize least squares s.t. $\sum_{j=1}^{p} \beta_j^2 \le s$

The lasso
• Minimize least squares s.t. $\sum_{j=1}^{p} |\beta_j| \le s$

Principal components regression
• Regress on M < p principal components of X

Partial least squares
• Regress on M < p directions of X weighted by y
25. Shrinkage Methods (Ridge Regression)
Minimize $\mathrm{RSS}(\beta) + \lambda\,\beta^T\beta$
• Use centered data, so β0 is not penalized:
$$\hat{\beta}_0 = \bar{y} = \sum_{i=1}^{N} y_i / N, \qquad x_{ij} \leftarrow x_{ij} - \bar{x}_j, \quad \bar{x}_j = \sum_{i=1}^{N} x_{ij} / N$$
• The xi are of length p, no longer including the initial 1

In matrix form,

$$\mathrm{RSS}(\lambda) = (\mathbf{y} - \mathbf{X}\beta)^T(\mathbf{y} - \mathbf{X}\beta) + \lambda\,\beta^T\beta$$

The ridge estimates are:

$$\hat{\beta}^{\mathrm{ridge}} = (\mathbf{X}^T\mathbf{X} + \lambda\,\mathbf{I}_p)^{-1}\mathbf{X}^T\mathbf{y}$$
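The closed form above translates directly to numpy; this sketch performs the centering described on the slide (the function name is illustrative).

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimates on centered data: beta = (X^T X + lam I_p)^{-1} X^T y."""
    Xc = X - X.mean(axis=0)     # center each column, so the intercept is unpenalized
    beta0 = y.mean()            # intercept estimate: mean of y
    p = Xc.shape[1]
    beta = np.linalg.solve(Xc.T @ Xc + lam * np.eye(p), Xc.T @ (y - beta0))
    return beta0, beta
```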
27. The Lasso
Use centered data, as before. The L1 penalty makes the solutions nonlinear in the yi.
• Quadratic programming is used to compute them

$$\mathrm{RSS}(\beta) = \sum_{i=1}^{N}\Big(y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j\Big)^2 \quad \text{subject to} \quad \sum_{j=1}^{p} |\beta_j| \le s$$
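In practice one rarely writes the QP by hand; a common shortcut, sketched here, is scikit-learn's Lasso, which solves the equivalent penalized (Lagrangian) form by coordinate descent rather than the constrained form on the slide. The alpha value is purely illustrative, and X, y are assumed already defined.

```python
from sklearn.linear_model import Lasso

# alpha is sklearn's penalty weight; it plays the role of the multiplier
# paired with the bound s above (larger alpha corresponds to a smaller s).
model = Lasso(alpha=0.1)
model.fit(X, y)                        # X: (N, p) array, y: (N,) array
print(model.intercept_, model.coef_)   # several coefficients come out exactly zero
```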
29. Principal Components Regression
Singular value decomposition (SVD) of X:

$$\mathbf{X} = \mathbf{U}\mathbf{D}\mathbf{V}^T$$

• U is N × p and V is p × p; both are orthogonal
• D is a p × p diagonal matrix

Use linear combinations (Xv) of X as new features:

$$\mathbf{z}_j = \mathbf{X}\mathbf{v}_j, \qquad j = 1, \ldots, M$$

• vj is the principal component (column of V) corresponding to the jth largest element of D
• The vj are the directions of maximal sample variance
• Use only M < p features; [z1 … zM] replaces X

$$\hat{\mathbf{y}}^{\mathrm{pcr}} = \bar{y}\,\mathbf{1} + \sum_{m=1}^{M} \hat{\theta}_m \mathbf{z}_m, \qquad \hat{\theta}_m = \frac{\langle \mathbf{z}_m, \mathbf{y}\rangle}{\langle \mathbf{z}_m, \mathbf{z}_m\rangle}$$
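A compact numpy sketch of PCR via the SVD, assuming centered inputs as in the ridge/lasso slides; names are illustrative.

```python
import numpy as np

def pcr(X, y, M):
    """Principal components regression with M < p derived inputs."""
    Xc = X - X.mean(axis=0)                   # work with centered inputs
    U, d, Vt = np.linalg.svd(Xc, full_matrices=False)   # X = U D V^T
    Z = Xc @ Vt[:M].T                         # z_m = X v_m, m = 1..M
    theta = (Z.T @ (y - y.mean())) / np.sum(Z * Z, axis=0)  # <z_m,y>/<z_m,z_m>
    y_hat = y.mean() + Z @ theta
    return y_hat, theta
```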
30. Partial Least Squares
Construct linear combinations of the inputs that incorporate y. Partial least squares finds directions with both maximum variance and maximum correlation with the output. In practice the variance aspect seems to dominate, so partial least squares tends to operate much like principal components regression. A code sketch follows the next slide.
31. 4.4 Methods Using Derived Input Directions (PLS)
• Partial Least Squares
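As a sketch of PLS in practice, scikit-learn provides an implementation; the number of components and the assumption that X, y are already defined are illustrative.

```python
from sklearn.cross_decomposition import PLSRegression

pls = PLSRegression(n_components=2)   # M = 2 derived directions; value is illustrative
pls.fit(X, y)                         # X: (N, p) array, y: (N,) array
y_hat = pls.predict(X)
```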
34. A Unifying View
We can view all the linear regression techniques under a common framework:

$$\hat{\beta} = \arg\min_{\beta}\;\Bigg\{ \sum_{i=1}^{N}\Big(y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j\Big)^2 + \lambda \sum_{j=1}^{p} |\beta_j|^q \Bigg\}$$

λ includes bias, and q indicates a prior distribution on β:
• λ = 0: least squares
• λ > 0, q = 0: subset selection (the penalty counts the number of nonzero parameters)
• λ > 0, q = 1: the lasso
• λ > 0, q = 2: ridge regression
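The unifying criterion is easy to state as code; this sketch just evaluates the penalized objective for a candidate β (it does not minimize it), and the function name is illustrative.

```python
import numpy as np

def penalized_rss(beta0, beta, X, y, lam, q):
    """Unified criterion: RSS plus lam * sum_j |beta_j|^q."""
    r = y - beta0 - X @ beta
    if q == 0:
        penalty = np.count_nonzero(beta)      # q = 0: counts nonzero parameters
    else:
        penalty = np.sum(np.abs(beta) ** q)   # q = 1: lasso; q = 2: ridge
    return float(r @ r + lam * penalty)
```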