Adjusting PageRank parameters and comparing results
Unaltered web graphs are reducible, and thus the rate of convergence of the power-iteration
method is the rate at which αᵏ → 0, where α is the damping factor and k is the iteration
count. An estimate of the number of iterations needed to converge to a tolerance τ is log_α τ.
For τ = 10⁻⁶ and α = 0.85, it takes roughly 85 iterations to converge. With the same tolerance
τ = 10⁻⁶, α = 0.95 and α = 0.75 take roughly 269 and 48 iterations respectively. For τ = 10⁻⁹
and τ = 10⁻³, with the same damping factor α = 0.85, it takes roughly 128 and 43 iterations
respectively. Thus, adjusting the damping factor or the tolerance parameter of the PageRank
algorithm can have a significant effect on the convergence rate, both in terms of time and
iterations. However, especially with the damping factor α, adjusting the parameter value is a
delicate balancing act. For smaller values of α, convergence is fast, but the link structure of
the graph is given less weight in determining the ranks. Slightly different values of α can
produce very different rank vectors. Moreover, as α → 1, convergence slows down drastically,
and sensitivity issues begin to surface [langville04].
For the first experiment, the damping factor α (usually 0.85) is varied from 0.50 to 1.00 in
steps of 0.05, in order to compare the performance variation with each damping factor. The
calculated error is the L1 norm with respect to default PageRank (α = 0.85). The PageRank
algorithm used here is the standard power-iteration (pull) based PageRank. The rank of a
vertex in an iteration is calculated as c0 + α·Σ(rn/dn), where c0 is the common teleport
contribution, α is the damping factor, rn is the previous rank of a vertex with an edge
incoming to this vertex, dn is the out-degree of that in-neighbour, and N is the total number
of vertices in the graph. The common teleport contribution c0, calculated as (1-α)/N +
α·Σ(rd/N), includes the contribution due to a teleport from any vertex in the graph owing to
the damping factor, (1-α)/N, and the contribution from teleports out of dangling vertices
(those with no outgoing edges), α·Σ(rd/N), where rd is the previous rank of a dangling
vertex. This models a random surfer jumping to a random page upon visiting a page with no
links, which avoids the rank-sink effect.
All seventeen graphs used in this experiment are stored in the MatrixMarket (.mtx) file
format, and obtained from the SuiteSparse Matrix Collection. These include: web-Stanford,
web-BerkStan, web-Google, web-NotreDame, soc-Slashdot0811, soc-Slashdot0902,
soc-Epinions1, coAuthorsDBLP, coAuthorsCiteseer, soc-LiveJournal1, coPapersCiteseer,
coPapersDBLP, indochina-2004, italy_osm, great-britain_osm, germany_osm, asia_osm.
The experiment is implemented in C++, and compiled using GCC 9 with optimization level 3
(-O3). The system used is a Dell PowerEdge R740 Rack server with two Intel Xeon Silver
4116 CPUs @ 2.10GHz, 128GB DIMM DDR4 Synchronous Registered (Buffered) 2666 MHz
(8x16GB) DRAM, and running CentOS Linux release 7.9.2009 (Core). The iterations taken
with each test case are measured, with a maximum of 500 iterations allowed. Statistics of
each test case are printed to standard output (stdout) and redirected to a log file, which is
then processed with a script to generate a CSV file, with each row representing the details
of a single test case. This CSV file is imported into Google Sheets, and the necessary tables
are set up with the help of the FILTER function to create the charts.
When comparing the relative performance of different approaches with multiple test graphs,
there are two ways to obtain an average comparison: relative-average, and average-relative.
A relative-average comparison first finds relative performance (ratio) of each approach with
respect to a baseline approach (one of them), and then averages them. Consider, for
example, three approaches a, b, and c, with 3 test runs for each of the three approaches,
labeled a1, a2, a3, b1, b2, b3, c1, c2, c3. The relative performance of each approach with
respect to c would be a1/c1, b1/c1, c1/c1, a2/c2, b2/c2, and so on. The relative-average
comparison is now the average of these ratios, i.e., (a1/c1+a2/c2+a3/c3)/3 for a,
(b1/c1+b2/c2+b3/c3)/3 for b, and 1 for c. In contrast, an average-relative comparison first finds the
average time/iterations taken for each approach with respect to a baseline approach, and
then finds the relative performance. Again, considering three approaches, with 3 test runs as
above, the average values of each approach would be (a1+a2+a3)/3 for a, (b1+b2+b3)/3 for b,
and (c1+c2+c3)/3 for c. The average-relative comparison of each approach with respect to c
would then be (a1+a2+a3)/(c1+c2+c3) for a, (b1+b2+b3)/(c1+c2+c3) for b, and 1 for c.
Semantically, a relative-average comparison gives equal importance to the relative
performance of each test run (graph), while an average-relative comparison gives equal
importance to magnitude (time/iterations) of all test runs (or simply, it gives higher
importance to test runs with larger graphs). For these experiments, both comparisons are
made, but only one of them is presented here if they are quite similar.
Figure 1: Average iterations for PageRank computation with damping factor α adjusted from 0.50 to
1.00 in steps of 0.05. Charts for relative-average and average-relative iterations (with respect to
damping factor α = 0.85) follow the same curve, and the two sets of values are quite similar.
Results (figure 1) indicate that increasing the damping factor α beyond 0.85 significantly
increases convergence time, and lowering it below 0.85 decreases convergence time. On
average, using a damping factor α = 0.95 increases both convergence time and iterations by
192%, and using a damping factor α = 0.75 decreases both by 41% (compared to damping
factor α = 0.85). Note that a higher damping factor implies that a random surfer follows links
with higher probability (and jumps to a random page with lower probability).
Observing that adjusting the damping factor has a significant effect, another experiment was
performed. The idea behind this experiment was to adjust the damping factor α in steps, to
see if doing so might help reduce PageRank computation time. The computation first starts
with a small α, and increases it each time the ranks converge, until the final desired value
of α is reached. For example, the computation starts with α = 0.5, lets the ranks converge
quickly, then switches to α = 0.85 and continues until it converges again. This single-step
change is attempted with the initial (fast-converging) damping factor damping_start ranging
from 0.1 to 0.84. Similarly, two-step, three-step, and four-step changes are also attempted.
With the two-step approach, a midpoint between the damping_start value and 0.85 is also
selected for the second set of iterations. Likewise, the three-step and four-step approaches
use two and three evenly spaced midpoints respectively.
A small sample graph, stored in the MatrixMarket (.mtx) file format, is used in this
experiment. The experiment is implemented in Node.js, and executed on a personal laptop.
Only the iteration count of each test case is measured. A tolerance of τ = 10⁻⁵ is used for
all test cases. Statistics of each test case are printed to standard output (stdout) and
redirected to a log file, which is then processed with a script to generate a CSV file, with
each row representing the details of a single test case. This CSV file is imported into
Google Sheets, and the necessary tables are set up with the help of the FILTER function to
create the charts.
Figure 2: Iterations required for PageRank computation, when damping factor α is adjusted in 1-4
steps, starting with damping_start. 0-step is the fixed damping factor PageRank, with α = 0.85.
From the results (figure 2), it is clear that modifying the damping factor α in steps is not a
good idea. The standard fixed-damping-factor PageRank, with α = 0.85, converges in 35
iterations. Using the single-step approach increases the number of iterations required, and
this increases further as the initial damping factor damping_start is raised. Switching to a
multi-step approach also increases the number of iterations needed for convergence. A
possible explanation for this effect is that the ranks for different values of the damping
factor α differ significantly, so switching to a different damping factor α after each step
mostly leads to recomputation.
Similar to the damping factor α, adjusting the value of tolerance τ can have a significant
effect as well. Apart from the value of tolerance τ, it is observed that different
implementations make use of different error functions for measuring convergence. Although
the L1 norm is commonly used for the convergence check, nvGraph appears to use the L2
norm instead [nvgraph], and an answer on Stack Overflow suggests a per-vertex tolerance
comparison, which is essentially the L∞ norm. The L1 norm ||E||1 between two (rank)
vectors r and s is calculated as ||E||1 = Σ|rn - sn|, i.e., the sum of absolute errors. The L2
norm ||E||2 is calculated as ||E||2 = √(Σ|rn - sn|²), i.e., the square root of the sum of
squared errors (the euclidean distance between the two vectors). The L∞ norm ||E||∞ is
calculated as ||E||∞ = max(|rn - sn|), i.e., the maximum of the absolute errors.
This experiment compares the performance of PageRank computation with the L1, L2, and
L∞ norms as the convergence check, for tolerance τ values ranging from 10⁰ down to 10⁻¹⁰
in steps of roughly half an order of magnitude (10⁰, 5×10⁻¹, 10⁻¹, 5×10⁻², ...). The input
graphs, system used, and the rest of the experimental process are similar to those of the
first experiment.
tolerance   L1 norm   L2 norm   L∞ norm
1.00E-05    49        65        27
5.00E-06    53        65        31
1.00E-06    63        500       41
5.00E-07    67        500       45
1.00E-07    77        500       55
5.00E-08    84        500       59
1.00E-08    500       500       70
5.00E-09    500       500       73
1.00E-09    500       500       500
5.00E-10    500       500       500
1.00E-10    500       500       500
Table 1: Iterations taken for PageRank computation of the web-Stanford graph, with the L1, L2, and
L∞ norms used as the convergence check. At tolerance τ = 10⁻⁶, the L2 norm suffers from sensitivity
issues, followed by the L1 and L∞ norms at 10⁻⁸ and 10⁻⁹ respectively. Only the relevant tolerances
are shown here.
Figure 3: Iterations taken for PageRank computation of the asia_osm graph, with the L1, L2, and L∞
norms used as the convergence check. Until tolerance τ = 10⁻⁷, the L∞ norm converges in just one
iteration.
Figure 4: Average iterations taken for PageRank computation with the L1, L2, and L∞ norms as the
convergence check, and tolerance τ adjusted from 10⁰ to 10⁻¹⁰ (10⁰, 5×10⁻¹, 10⁻¹, 5×10⁻², ...). The
L∞ norm convergence check appears to be the fastest, followed by the L1 norm (on average).
Figure 5: Average-relative iterations taken for PageRank computation with the L1, L2, and L∞ norms
as the convergence check, and tolerance τ adjusted from 10⁰ to 10⁻¹⁰. The L∞ norm convergence check
appears to be the fastest; however, it is difficult to tell whether the L1 or the L2 norm comes in
second place (on average).
Figure 6: Relative-average iterations taken for PageRank computation with the L1, L2, and L∞ norms
as the convergence check, and tolerance τ adjusted from 10⁰ to 10⁻¹⁰. The L∞ norm convergence check
appears to be the fastest, followed by the L2 norm (on average).
For various graphs, it is observed that PageRank computation with the L1, L2, or L∞ norm as
the convergence check suffers from sensitivity issues beyond certain (small) tolerance τ
values. As tolerance τ is decreased from 10⁰ to 10⁻¹⁰, the L2 norm is usually (except on
road networks) the first to suffer from this issue, followed by the L1 norm, and eventually
the L∞ norm (if at all). This sensitivity issue is recognized by a given approach abruptly
taking 500 iterations (the maximum allowed) at the next lower tolerance τ value, as shown
in table 1.
It is also observed that PageRank computation with the L∞ norm as the convergence check
completes in just one iteration (even for tolerance τ ≥ 10⁻⁶) on large graphs (road
networks). This is because the norm is calculated as ||E||∞ = max(|rn - sn|), and depending
upon the order (number of vertices) N of the graph, 1/N can already be less than the
tolerance τ required to converge.
Based on the average-relative comparison, the ratio of iterations between PageRank
computation with the L1, L2, and L∞ norms as the convergence check is 4.73 : 4.08 : 1.00.
Hence, the L2 norm is on average 16% faster than the L1 norm, and the L∞ norm is 308%
faster (~4x) than the L2 norm. The variation of average-relative iterations for various
tolerance τ values is shown in figure 5. A similar effect is seen in figure 4, where the
average iterations for various tolerance τ values are shown. On the other hand, based on the
relative-average comparison, the ratio of iterations is 10.42 : 6.18 : 1. Hence, the L2 norm
is on average 69% faster than the L1 norm, and the L∞ norm is 518% faster (~6x) than the
L2 norm. The variation of relative-average iterations for various tolerance τ values is shown
in figure 6. This indicates that while the L1 norm is on average slower than the L2 norm, the
difference between the two diminishes for large graphs (the average-relative comparison
gives higher importance to results from larger graphs, unlike the relative-average). It should
also be noted that the L2 norm is not always faster than the L1 norm; in several cases
(usually at smaller tolerance τ values) the opposite holds, as can be seen in table 1.
As seen in these experiments, parameter values can have a significant effect on
performance. Different convergence-check functions converge at different rates, and which
of them converges fastest depends upon the tolerance τ value. Iteration counts need to be
checked to ensure that no approach suffers from sensitivity issues, or converges in a single
iteration. Finally, the method of relative performance comparison affects which results get
more importance in the final average. Taking note of each of these points when comparing
iterative algorithms will thus ensure that the performance results are accurate and useful.
Table 2: List of parameter adjustment strategies, and links to source code.
Damping Factor adjust dynamic-adjust
Tolerance L1 norm L2 norm L∞ norm
1. Comparing the effect of using different values of damping factor, with PageRank (pull, CSR).
2. Experimenting with PageRank improvement by adjusting the damping factor (α) between iterations.
3. Comparing the effect of using different functions for convergence check, with PageRank (...).
4. Comparing the effect of using different values of tolerance, with PageRank (pull, CSR).

Adjusting Bitset for graph : SHORT REPORT / NOTESAdjusting Bitset for graph : SHORT REPORT / NOTES
Adjusting Bitset for graph : SHORT REPORT / NOTES
 
Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...
Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...
Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...
 
Adjusting primitives for graph : SHORT REPORT / NOTES
Adjusting primitives for graph : SHORT REPORT / NOTESAdjusting primitives for graph : SHORT REPORT / NOTES
Adjusting primitives for graph : SHORT REPORT / NOTES
 
Experiments with Primitive operations : SHORT REPORT / NOTES
Experiments with Primitive operations : SHORT REPORT / NOTESExperiments with Primitive operations : SHORT REPORT / NOTES
Experiments with Primitive operations : SHORT REPORT / NOTES
 
PageRank Experiments : SHORT REPORT / NOTES
PageRank Experiments : SHORT REPORT / NOTESPageRank Experiments : SHORT REPORT / NOTES
PageRank Experiments : SHORT REPORT / NOTES
 
Algorithmic optimizations for Dynamic Monolithic PageRank (from STICD) : SHOR...
Algorithmic optimizations for Dynamic Monolithic PageRank (from STICD) : SHOR...Algorithmic optimizations for Dynamic Monolithic PageRank (from STICD) : SHOR...
Algorithmic optimizations for Dynamic Monolithic PageRank (from STICD) : SHOR...
 
Adjusting OpenMP PageRank : SHORT REPORT / NOTES
Adjusting OpenMP PageRank : SHORT REPORT / NOTESAdjusting OpenMP PageRank : SHORT REPORT / NOTES
Adjusting OpenMP PageRank : SHORT REPORT / NOTES
 
word2vec, node2vec, graph2vec, X2vec: Towards a Theory of Vector Embeddings o...
word2vec, node2vec, graph2vec, X2vec: Towards a Theory of Vector Embeddings o...word2vec, node2vec, graph2vec, X2vec: Towards a Theory of Vector Embeddings o...
word2vec, node2vec, graph2vec, X2vec: Towards a Theory of Vector Embeddings o...
 
DyGraph: A Dynamic Graph Generator and Benchmark Suite : NOTES
DyGraph: A Dynamic Graph Generator and Benchmark Suite : NOTESDyGraph: A Dynamic Graph Generator and Benchmark Suite : NOTES
DyGraph: A Dynamic Graph Generator and Benchmark Suite : NOTES
 
Shared memory Parallelism (NOTES)
Shared memory Parallelism (NOTES)Shared memory Parallelism (NOTES)
Shared memory Parallelism (NOTES)
 
A Dynamic Algorithm for Local Community Detection in Graphs : NOTES
A Dynamic Algorithm for Local Community Detection in Graphs : NOTESA Dynamic Algorithm for Local Community Detection in Graphs : NOTES
A Dynamic Algorithm for Local Community Detection in Graphs : NOTES
 
Scalable Static and Dynamic Community Detection Using Grappolo : NOTES
Scalable Static and Dynamic Community Detection Using Grappolo : NOTESScalable Static and Dynamic Community Detection Using Grappolo : NOTES
Scalable Static and Dynamic Community Detection Using Grappolo : NOTES
 
Application Areas of Community Detection: A Review : NOTES
Application Areas of Community Detection: A Review : NOTESApplication Areas of Community Detection: A Review : NOTES
Application Areas of Community Detection: A Review : NOTES
 
Community Detection on the GPU : NOTES
Community Detection on the GPU : NOTESCommunity Detection on the GPU : NOTES
Community Detection on the GPU : NOTES
 
Survey for extra-child-process package : NOTES
Survey for extra-child-process package : NOTESSurvey for extra-child-process package : NOTES
Survey for extra-child-process package : NOTES
 
Dynamic Batch Parallel Algorithms for Updating PageRank : POSTER
Dynamic Batch Parallel Algorithms for Updating PageRank : POSTERDynamic Batch Parallel Algorithms for Updating PageRank : POSTER
Dynamic Batch Parallel Algorithms for Updating PageRank : POSTER
 
Abstract for IPDPS 2022 PhD Forum on Dynamic Batch Parallel Algorithms for Up...
Abstract for IPDPS 2022 PhD Forum on Dynamic Batch Parallel Algorithms for Up...Abstract for IPDPS 2022 PhD Forum on Dynamic Batch Parallel Algorithms for Up...
Abstract for IPDPS 2022 PhD Forum on Dynamic Batch Parallel Algorithms for Up...
 
Fast Incremental Community Detection on Dynamic Graphs : NOTES
Fast Incremental Community Detection on Dynamic Graphs : NOTESFast Incremental Community Detection on Dynamic Graphs : NOTES
Fast Incremental Community Detection on Dynamic Graphs : NOTES
 

Recently uploaded

A Comprehensive Appium Guide for Hybrid App Automation Testing.pdf
A Comprehensive Appium Guide for Hybrid App Automation Testing.pdfA Comprehensive Appium Guide for Hybrid App Automation Testing.pdf
A Comprehensive Appium Guide for Hybrid App Automation Testing.pdfkalichargn70th171
 
AI/ML Infra Meetup | Reducing Prefill for LLM Serving in RAG
AI/ML Infra Meetup | Reducing Prefill for LLM Serving in RAGAI/ML Infra Meetup | Reducing Prefill for LLM Serving in RAG
AI/ML Infra Meetup | Reducing Prefill for LLM Serving in RAGAlluxio, Inc.
 
BoxLang: Review our Visionary Licenses of 2024
BoxLang: Review our Visionary Licenses of 2024BoxLang: Review our Visionary Licenses of 2024
BoxLang: Review our Visionary Licenses of 2024Ortus Solutions, Corp
 
Using IESVE for Room Loads Analysis - Australia & New Zealand
Using IESVE for Room Loads Analysis - Australia & New ZealandUsing IESVE for Room Loads Analysis - Australia & New Zealand
Using IESVE for Room Loads Analysis - Australia & New ZealandIES VE
 
Dominate Social Media with TubeTrivia AI’s Addictive Quiz Videos.pdf
Dominate Social Media with TubeTrivia AI’s Addictive Quiz Videos.pdfDominate Social Media with TubeTrivia AI’s Addictive Quiz Videos.pdf
Dominate Social Media with TubeTrivia AI’s Addictive Quiz Videos.pdfAMB-Review
 
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoam
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoamOpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoam
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoamtakuyayamamoto1800
 
GraphAware - Transforming policing with graph-based intelligence analysis
GraphAware - Transforming policing with graph-based intelligence analysisGraphAware - Transforming policing with graph-based intelligence analysis
GraphAware - Transforming policing with graph-based intelligence analysisNeo4j
 
Prosigns: Transforming Business with Tailored Technology Solutions
Prosigns: Transforming Business with Tailored Technology SolutionsProsigns: Transforming Business with Tailored Technology Solutions
Prosigns: Transforming Business with Tailored Technology SolutionsProsigns
 
Designing for Privacy in Amazon Web Services
Designing for Privacy in Amazon Web ServicesDesigning for Privacy in Amazon Web Services
Designing for Privacy in Amazon Web ServicesKrzysztofKkol1
 
How to Position Your Globus Data Portal for Success Ten Good Practices
How to Position Your Globus Data Portal for Success Ten Good PracticesHow to Position Your Globus Data Portal for Success Ten Good Practices
How to Position Your Globus Data Portal for Success Ten Good PracticesGlobus
 
GlobusWorld 2024 Opening Keynote session
GlobusWorld 2024 Opening Keynote sessionGlobusWorld 2024 Opening Keynote session
GlobusWorld 2024 Opening Keynote sessionGlobus
 
Accelerate Enterprise Software Engineering with Platformless
Accelerate Enterprise Software Engineering with PlatformlessAccelerate Enterprise Software Engineering with Platformless
Accelerate Enterprise Software Engineering with PlatformlessWSO2
 
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...Shahin Sheidaei
 
Studiovity film pre-production and screenwriting software
Studiovity film pre-production and screenwriting softwareStudiovity film pre-production and screenwriting software
Studiovity film pre-production and screenwriting softwareinfo611746
 
De mooiste recreatieve routes ontdekken met RouteYou en FME
De mooiste recreatieve routes ontdekken met RouteYou en FMEDe mooiste recreatieve routes ontdekken met RouteYou en FME
De mooiste recreatieve routes ontdekken met RouteYou en FMEJelle | Nordend
 
Abortion ^Clinic ^%[+971588192166''] Abortion Pill Al Ain (?@?) Abortion Pill...
Abortion ^Clinic ^%[+971588192166''] Abortion Pill Al Ain (?@?) Abortion Pill...Abortion ^Clinic ^%[+971588192166''] Abortion Pill Al Ain (?@?) Abortion Pill...
Abortion ^Clinic ^%[+971588192166''] Abortion Pill Al Ain (?@?) Abortion Pill...Abortion Clinic
 
Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...
Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...
Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...Anthony Dahanne
 
Developing Distributed High-performance Computing Capabilities of an Open Sci...
Developing Distributed High-performance Computing Capabilities of an Open Sci...Developing Distributed High-performance Computing Capabilities of an Open Sci...
Developing Distributed High-performance Computing Capabilities of an Open Sci...Globus
 
Globus Connect Server Deep Dive - GlobusWorld 2024
Globus Connect Server Deep Dive - GlobusWorld 2024Globus Connect Server Deep Dive - GlobusWorld 2024
Globus Connect Server Deep Dive - GlobusWorld 2024Globus
 
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...Globus
 

Recently uploaded (20)

A Comprehensive Appium Guide for Hybrid App Automation Testing.pdf
A Comprehensive Appium Guide for Hybrid App Automation Testing.pdfA Comprehensive Appium Guide for Hybrid App Automation Testing.pdf
A Comprehensive Appium Guide for Hybrid App Automation Testing.pdf
 
AI/ML Infra Meetup | Reducing Prefill for LLM Serving in RAG
AI/ML Infra Meetup | Reducing Prefill for LLM Serving in RAGAI/ML Infra Meetup | Reducing Prefill for LLM Serving in RAG
AI/ML Infra Meetup | Reducing Prefill for LLM Serving in RAG
 
BoxLang: Review our Visionary Licenses of 2024
BoxLang: Review our Visionary Licenses of 2024BoxLang: Review our Visionary Licenses of 2024
BoxLang: Review our Visionary Licenses of 2024
 
Using IESVE for Room Loads Analysis - Australia & New Zealand
Using IESVE for Room Loads Analysis - Australia & New ZealandUsing IESVE for Room Loads Analysis - Australia & New Zealand
Using IESVE for Room Loads Analysis - Australia & New Zealand
 
Dominate Social Media with TubeTrivia AI’s Addictive Quiz Videos.pdf
Dominate Social Media with TubeTrivia AI’s Addictive Quiz Videos.pdfDominate Social Media with TubeTrivia AI’s Addictive Quiz Videos.pdf
Dominate Social Media with TubeTrivia AI’s Addictive Quiz Videos.pdf
 
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoam
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoamOpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoam
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoam
 
GraphAware - Transforming policing with graph-based intelligence analysis
GraphAware - Transforming policing with graph-based intelligence analysisGraphAware - Transforming policing with graph-based intelligence analysis
GraphAware - Transforming policing with graph-based intelligence analysis
 
Prosigns: Transforming Business with Tailored Technology Solutions
Prosigns: Transforming Business with Tailored Technology SolutionsProsigns: Transforming Business with Tailored Technology Solutions
Prosigns: Transforming Business with Tailored Technology Solutions
 
Designing for Privacy in Amazon Web Services
Designing for Privacy in Amazon Web ServicesDesigning for Privacy in Amazon Web Services
Designing for Privacy in Amazon Web Services
 
How to Position Your Globus Data Portal for Success Ten Good Practices
How to Position Your Globus Data Portal for Success Ten Good PracticesHow to Position Your Globus Data Portal for Success Ten Good Practices
How to Position Your Globus Data Portal for Success Ten Good Practices
 
GlobusWorld 2024 Opening Keynote session
GlobusWorld 2024 Opening Keynote sessionGlobusWorld 2024 Opening Keynote session
GlobusWorld 2024 Opening Keynote session
 
Accelerate Enterprise Software Engineering with Platformless
Accelerate Enterprise Software Engineering with PlatformlessAccelerate Enterprise Software Engineering with Platformless
Accelerate Enterprise Software Engineering with Platformless
 
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...
 
Studiovity film pre-production and screenwriting software
Studiovity film pre-production and screenwriting softwareStudiovity film pre-production and screenwriting software
Studiovity film pre-production and screenwriting software
 
De mooiste recreatieve routes ontdekken met RouteYou en FME
De mooiste recreatieve routes ontdekken met RouteYou en FMEDe mooiste recreatieve routes ontdekken met RouteYou en FME
De mooiste recreatieve routes ontdekken met RouteYou en FME
 
Abortion ^Clinic ^%[+971588192166''] Abortion Pill Al Ain (?@?) Abortion Pill...
Abortion ^Clinic ^%[+971588192166''] Abortion Pill Al Ain (?@?) Abortion Pill...Abortion ^Clinic ^%[+971588192166''] Abortion Pill Al Ain (?@?) Abortion Pill...
Abortion ^Clinic ^%[+971588192166''] Abortion Pill Al Ain (?@?) Abortion Pill...
 
Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...
Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...
Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...
 
Developing Distributed High-performance Computing Capabilities of an Open Sci...
Developing Distributed High-performance Computing Capabilities of an Open Sci...Developing Distributed High-performance Computing Capabilities of an Open Sci...
Developing Distributed High-performance Computing Capabilities of an Open Sci...
 
Globus Connect Server Deep Dive - GlobusWorld 2024
Globus Connect Server Deep Dive - GlobusWorld 2024Globus Connect Server Deep Dive - GlobusWorld 2024
Globus Connect Server Deep Dive - GlobusWorld 2024
 
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...
 

Adjusting PageRank parameters and comparing results : REPORT

Web graphs, unaltered, are reducible, and thus the rate of convergence of the power-iteration method is the rate at which α^k → 0, where α is the damping factor and k is the iteration count. An estimate of the number of iterations needed to converge to a tolerance τ is log_α τ. For τ = 10^-6 and α = 0.85, it can take roughly 85 iterations to converge. For α = 0.95 and α = 0.75, with the same tolerance τ = 10^-6, it takes roughly 269 and 48 iterations respectively. For τ = 10^-9 and τ = 10^-3, with the same damping factor α = 0.85, it takes roughly 128 and 43 iterations respectively. Thus, adjusting the damping factor or the tolerance parameter of the PageRank algorithm can have a significant effect on the convergence rate, both in terms of time and iterations. However, especially with the damping factor α, adjusting the parameter value is a delicate balancing act. For smaller values of α, convergence is fast, but the ranks reflect the link structure of the graph less faithfully. Slightly different values of α can produce very different rank vectors. Moreover, as α → 1, convergence slows down drastically, and sensitivity issues begin to surface [langville04].

For the first experiment, the damping factor α (which is usually 0.85) is varied from 0.50 to 1.00 in steps of 0.05, in order to compare the performance variation with each damping factor. The calculated error is the L1 norm with respect to default PageRank (α = 0.85). The PageRank algorithm used here is the standard power-iteration (pull) based PageRank. The rank of a vertex in an iteration is calculated as c0 + αΣ(rn/dn), where c0 is the common teleport contribution, α is the damping factor, rn is the previous rank of a vertex with an incoming edge, dn is the out-degree of that incoming-edge vertex, and N is the total number of vertices in the graph.
The common teleport contribution c0, calculated as (1-α)/N + αΣ(rn/N), includes the contribution due to a teleport from any vertex in the graph due to the damping factor, (1-α)/N, and the teleport from dangling vertices (with no outgoing edges) in the graph, αΣ(rn/N). This is because a random surfer jumps to a random page upon visiting a page with no links, in order to avoid the rank-sink effect.

All seventeen graphs used in this experiment are stored in the MatrixMarket (.mtx) file format, and obtained from the SuiteSparse Matrix Collection. These include: web-Stanford, web-BerkStan, web-Google, web-NotreDame, soc-Slashdot0811, soc-Slashdot0902, soc-Epinions1, coAuthorsDBLP, coAuthorsCiteseer, soc-LiveJournal1, coPapersCiteseer, coPapersDBLP, indochina-2004, italy_osm, great-britain_osm, germany_osm, asia_osm. The experiment is implemented in C++, and compiled using GCC 9 with optimization level 3 (-O3). The system used is a Dell PowerEdge R740 Rack server with two Intel Xeon Silver 4116 CPUs @ 2.10GHz, 128GB DIMM DDR4 Synchronous Registered (Buffered) 2666 MHz (8x16GB) DRAM, and running CentOS Linux release 7.9.2009 (Core). The iterations taken with each test case are measured, with 500 as the maximum number of iterations allowed. Statistics of each test case are printed to standard output (stdout), and redirected to a log file, which is then processed with a script to generate a CSV file, with each row representing the details of a single test case. This CSV file is imported into Google Sheets, and the necessary tables are set up with the help of the FILTER function to create the charts.

When comparing the relative performance of different approaches on multiple test graphs, there are two ways to obtain an average comparison: relative-average, and average-relative.
A relative-average comparison first finds the relative performance (ratio) of each approach with respect to a baseline approach (one of them), and then averages those ratios. Consider, for example, three approaches a, b, and c, with 3 test runs for each, labeled a1, a2, a3, b1, b2, b3, c1, c2, c3. The relative performance of each run with respect to c would be a1/c1, b1/c1, c1/c1, a2/c2, b2/c2, and so on. The relative-average comparison is then the average of these ratios, i.e., (a1/c1 + a2/c2 + a3/c3)/3 for a, (b1/c1 + b2/c2 + b3/c3)/3 for b, and 1 for c. In contrast, an average-relative comparison first finds the average time/iterations taken by each approach, and then finds the relative performance with respect to the baseline. Again considering three approaches with 3 test runs each, the average values would be (a1+a2+a3)/3 for a, (b1+b2+b3)/3 for b, and (c1+c2+c3)/3 for c. The average-relative comparison with respect to c would then be (a1+a2+a3)/(c1+c2+c3) for a, (b1+b2+b3)/(c1+c2+c3) for b, and 1 for c. Semantically, a relative-average comparison gives equal importance to the relative performance on each test run (graph), while an average-relative comparison gives equal importance to the magnitude (time/iterations) of all test runs (or simply, it gives higher importance to test runs with larger graphs). For these experiments, both comparisons are made, but only one of them is presented here if the two are quite similar.

Figure 1: Average iterations for PageRank computation with damping factor α adjusted from 0.50 to 1.00 in steps of 0.05. Charts for relative-average and average-relative iterations (with respect to damping factor α = 0.85) follow the same curve, with slightly different (but quite similar) values.
Results (figure 1) indicate that increasing the damping factor α beyond 0.85 significantly increases convergence time, and lowering it below 0.85 decreases convergence time. On average, using a damping factor α = 0.95 increases both convergence time and iterations by 192%, and using a damping factor α = 0.75 decreases both by 41% (compared to damping factor α = 0.85). Note that a higher damping factor implies that a random surfer follows links with higher probability (and jumps to a random page with lower probability).

Observing that adjusting the damping factor has a significant effect, another experiment was performed. The idea behind this experiment was to adjust the damping factor α in steps, to see if it might help reduce PageRank computation time. The computation first starts with a small α and changes it once the ranks have converged, until the final desired value of α is reached. For example, the computation starts initially with α = 0.5, lets the ranks converge quickly, then switches to α = 0.85 and continues until convergence. This single-step change is attempted with the initial (fast-converging) damping factor α ranging from 0.1 to 0.84. Similarly, two-step, three-step, and four-step changes are also attempted. With a two-step approach, a midpoint between the damping_start value and 0.85 is selected for the second set of iterations; three-step and four-step approaches use two and three midpoints respectively.

A small sample graph, stored in the MatrixMarket (.mtx) file format, is used in this experiment. The experiment is implemented in Node.js, and executed on a personal laptop. Only the iteration count of each test case is measured, and a tolerance of τ = 10^-5 is used for all test cases. Statistics of each test case are printed to standard output (stdout), and redirected to a log file, which is then processed with a script to generate a CSV file, with each row representing the details of a single test case. This CSV file is imported into Google Sheets, and the necessary tables are set up with the help of the FILTER function to create the charts.

Figure 2: Iterations required for PageRank computation when damping factor α is adjusted in 1-4 steps, starting with damping_start. 0-step is the fixed damping factor PageRank, with α = 0.85.
From the results (figure 2), it is clear that modifying the damping factor α in steps is not a good idea. The standard fixed damping factor PageRank, with α = 0.85, converges in 35 iterations. Using a single-step approach increases the number of iterations required, which increases further as the initial damping factor damping_start is increased. Switching to a multi-step approach also increases the number of iterations needed for convergence. A possible explanation for this effect is that the ranks for different values of the damping factor α are significantly different, so switching to a different damping factor α after each step mostly leads to recomputation.

Similar to the damping factor α, adjusting the value of the tolerance τ can have a significant effect as well. Apart from the value of the tolerance τ, it is observed that different people make use of different error functions for measuring convergence. Although the L1 norm is commonly used for the convergence check, it appears nvGraph uses the L2 norm instead [nvgraph]. Another person on Stack Overflow seems to suggest the use of a per-vertex tolerance comparison, which is essentially the L∞ norm. The L1 norm ||E||1 between two (rank) vectors r and s is calculated as ||E||1 = Σ|rn - sn|, i.e., the sum of absolute errors. The L2 norm ||E||2 is calculated as ||E||2 = √(Σ|rn - sn|^2), i.e., the square root of the sum of squared errors (the Euclidean distance between the two vectors). The L∞ norm ||E||∞ is calculated as ||E||∞ = max(|rn - sn|), i.e., the maximum of absolute errors. This experiment compares the performance of PageRank computation with the L1, L2, and L∞ norms as the convergence check, for various tolerance τ values ranging from 10^0 to 10^-10 (10^0, 5×10^-1, 10^-1, 5×10^-2, ...). The input graphs, system used, and the rest of the experimental process are similar to those of the first experiment.
tolerance   L1 norm   L2 norm   L∞ norm
1.00E-05         49        65        27
5.00E-06         53        65        31
1.00E-06         63       500        41
5.00E-07         67       500        45
1.00E-07         77       500        55
5.00E-08         84       500        59
1.00E-08        500       500        70
5.00E-09        500       500        73
1.00E-09        500       500       500
5.00E-10        500       500       500
1.00E-10        500       500       500

Table 1: Iterations taken for PageRank computation of the web-Stanford graph, with the L1, L2, and L∞ norms used as the convergence check. At tolerance τ = 10^-6, the L2 norm suffers from sensitivity issues, followed by the L1 and L∞ norms at 10^-8 and 10^-9 respectively. Only relevant tolerances are shown here.
Figure 3: Iterations taken for PageRank computation of the asia_osm graph, with the L1, L2, and L∞ norms used as the convergence check. Until tolerance τ = 10^-7, the L∞ norm converges in just one iteration.

Figure 4: Average iterations taken for PageRank computation with the L1, L2, and L∞ norms as the convergence check, and tolerance τ adjusted from 10^0 to 10^-10 (10^0, 5×10^-1, 10^-1, 5×10^-2, ...). The L∞ norm convergence check seems to be the fastest, followed by the L1 norm (on average).
Figure 5: Average-relative iterations taken for PageRank computation with the L1, L2, and L∞ norms as the convergence check, and tolerance τ adjusted from 10^0 to 10^-10. The L∞ norm convergence check seems to be the fastest; however, it is difficult to tell whether the L1 or the L2 norm comes in second place (on average).

Figure 6: Relative-average iterations taken for PageRank computation with the L1, L2, and L∞ norms as the convergence check, and tolerance τ adjusted from 10^0 to 10^-10. The L∞ norm convergence check seems to be the fastest, followed by the L2 norm (on average).
For various graphs, it is observed that PageRank computation with the L1, L2, or L∞ norm as the convergence check suffers from sensitivity issues beyond certain (smaller) tolerance τ values. As the tolerance τ is decreased from 10^0 to 10^-10, the L2 norm is usually (except on road networks) the first to suffer from this issue, followed by the L1 norm, and eventually the L∞ norm (if ever). This sensitivity issue was recognized by the fact that a given approach abruptly takes 500 iterations (the maximum allowed) at the next lower tolerance τ value, as shown in table 1. It is also observed that PageRank computation with the L∞ norm as the convergence check completes in just one iteration (even for tolerance τ ≥ 10^-6) for large graphs (road networks). This is because the L∞ norm is calculated as ||E||∞ = max(|rn - sn|), and depending upon the order (number of vertices) N of the graph, 1/N can already be less than the tolerance τ required to converge.

Based on the average-relative comparison, the relative iterations between PageRank computation with the L1, L2, and L∞ norms as the convergence check are 4.73 : 4.08 : 1.00. Hence the L2 norm is on average 16% faster than the L1 norm, and the L∞ norm is 308% faster (~4x) than the L2 norm. The variation of average-relative iterations for various tolerance τ values is shown in figure 5; a similar effect is also seen in figure 4, which shows average iterations for various tolerance τ values. On the other hand, based on the relative-average comparison, the relative iterations are 10.42 : 6.18 : 1. Hence, the L2 norm is on average 69% faster than the L1 norm, and the L∞ norm is 518% faster (~6x) than the L2 norm. The variation of relative-average iterations for various tolerance τ values is shown in figure 6. This shows that while the L1 norm is on average slower than the L2 norm, the difference between the two diminishes for large graphs (the average-relative comparison gives higher importance to results from larger graphs, unlike relative-average).
It should also be noted that the L2 norm is not always faster than the L1 norm, as can be seen in several cases (usually for smaller tolerance τ values) in table 1.

Parameter values can have a significant effect on performance, as seen in these experiments. Different convergence functions converge at different rates, and which of them converges faster depends upon the tolerance τ value. The iteration count needs to be checked in order to ensure that no approach is suffering from sensitivity issues, or is converging in a single iteration. Finally, the relative performance comparison method affects which results get more importance, and which get less, in the final average. Taking note of each of these points when comparing iterative algorithms will thus ensure that the performance results are accurate and useful.

Table 2: List of parameter adjustment strategies, and links to source code.
Damping Factor: adjust, dynamic-adjust
Tolerance: L1 norm, L2 norm, L∞ norm
1. Comparing the effect of using different values of damping factor, with PageRank (pull, CSR).
2. Experimenting PageRank improvement by adjusting damping factor (α) between iterations.
3. Comparing the effect of using different functions for convergence check, with PageRank (...).
4. Comparing the effect of using different values of tolerance, with PageRank (pull, CSR).