Adjusting PageRank parameters and comparing results
Unaltered web graphs are reducible, and thus the rate of convergence of the power-iteration
method is the rate at which αᵏ → 0, where α is the damping factor and k is the iteration
count. An estimate of the number of iterations needed to converge to a tolerance τ is log_α τ.
For τ = 10⁻⁶ and α = 0.85, it can take roughly 85 iterations to converge. For α = 0.95 and α =
0.75, with the same tolerance τ = 10⁻⁶, it takes roughly 269 and 48 iterations respectively. For
τ = 10⁻⁹ and τ = 10⁻³, with the same damping factor α = 0.85, it takes roughly 128 and 43
iterations respectively. Thus, adjusting the damping factor or the tolerance parameter of
the PageRank algorithm can have a significant effect on the convergence rate, both in terms
of time and iterations. However, especially with the damping factor α, adjusting the
parameter value is a delicate balancing act. For smaller values of α, convergence is fast,
but the link structure of the graph counts for less in determining the ranks. Slightly different
values of α can produce very different rank vectors. Moreover, as α → 1, convergence
slows down drastically, and sensitivity issues begin to surface [langville04].
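As a quick check, the estimates above can be reproduced by evaluating log_α τ directly. The following is a minimal sketch (not part of the experiment code), written in C++ for consistency with the rest of this report:

```cpp
#include <cmath>
#include <cstdio>

// Rough estimate of iterations needed for alpha^k to drop below tolerance tau,
// i.e., k = log(tau) / log(alpha) = log_alpha(tau).
double estimateIterations(double alpha, double tau) {
  return std::log(tau) / std::log(alpha);
}

int main() {
  std::printf("%.0f\n", estimateIterations(0.85, 1e-6));  // ~85
  std::printf("%.0f\n", estimateIterations(0.95, 1e-6));  // ~269
  std::printf("%.0f\n", estimateIterations(0.75, 1e-6));  // ~48
  std::printf("%.0f\n", estimateIterations(0.85, 1e-9));  // ~128
  std::printf("%.0f\n", estimateIterations(0.85, 1e-3));  // ~43
  return 0;
}
```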
For the first experiment, the damping factor α (which is usually 0.85) is varied from 0.50 to
1.00 in steps of 0.05, in order to compare the performance variation with each damping
factor. The calculated error is the L1 norm with respect to default PageRank (α = 0.85). The
PageRank algorithm used here is the standard power-iteration (pull) based PageRank. The
rank of a vertex in an iteration is calculated as c₀ + αΣ rₙ/dₙ, where c₀ is the common teleport
contribution, α is the damping factor, rₙ is the previous rank of an in-neighbor (a vertex with
an edge pointing to the current vertex), dₙ is the out-degree of that in-neighbor, and N is the
total number of vertices in the graph. The common teleport contribution c₀, calculated as
(1-α)/N + αΣ rₙ/N (where this second sum is over dangling vertices), includes the contribution
(1-α)/N due to a teleport from any vertex in the graph because of the damping factor, and the
contribution αΣ rₙ/N due to teleports from dangling vertices (those with no outgoing edges).
This is because a random surfer jumps to a random page upon visiting a page with no links,
in order to avoid the rank-sink effect.
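To make the update concrete, here is a minimal sketch of one pull-based iteration, assuming the graph is given as per-vertex in-neighbor lists (inEdges) with precomputed out-degrees (outDeg); this is a simplification for illustration, not the exact CSR-based code used in the experiments:

```cpp
#include <vector>
using std::vector;

// One pull-based PageRank iteration. r: previous ranks, a: updated ranks.
// inEdges[v] lists the in-neighbors of v; outDeg[u] is the out-degree of u.
void pagerankIteration(vector<double>& a, const vector<double>& r,
                       const vector<vector<int>>& inEdges,
                       const vector<int>& outDeg, double alpha) {
  int N = (int) r.size();
  // Common teleport contribution c0: random jump (1-alpha)/N, plus the rank of
  // dangling vertices (out-degree 0) spread uniformly over all vertices.
  double c0 = (1 - alpha) / N;
  for (int u = 0; u < N; ++u)
    if (outDeg[u] == 0) c0 += alpha * r[u] / N;
  // Pull contributions along incoming edges: a[v] = c0 + alpha * sum(r[u]/d[u]).
  for (int v = 0; v < N; ++v) {
    double s = 0;
    for (int u : inEdges[v]) s += r[u] / outDeg[u];
    a[v] = c0 + alpha * s;
  }
}
```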
All seventeen graphs used in this experiment are stored in the MatrixMarket (.mtx) file
format, and obtained from the SuiteSparse Matrix Collection. These include: web-Stanford,
web-BerkStan, web-Google, web-NotreDame, soc-Slashdot0811, soc-Slashdot0902,
soc-Epinions1, coAuthorsDBLP, coAuthorsCiteseer, soc-LiveJournal1, coPapersCiteseer,
coPapersDBLP, indochina-2004, italy_osm, great-britain_osm, germany_osm, asia_osm.
The experiment is implemented in C++, and compiled using GCC 9 with optimization level 3
(-O3). The system used is a Dell PowerEdge R740 Rack server with two Intel Xeon Silver
4116 CPUs @ 2.10GHz, 128GB DIMM DDR4 Synchronous Registered (Buffered) 2666 MHz
(8x16GB) DRAM, and running CentOS Linux release 7.9.2009 (Core). The number of iterations
taken by each test case is measured, with a maximum of 500 iterations allowed. Statistics of
each test case are printed to standard output (stdout) and redirected to a log file, which is
then processed with a script to generate a CSV file, with each row representing the details of
a single test case. This CSV file is imported into Google Sheets, and the necessary tables are
set up with the help of the FILTER function to create the charts.
When comparing the relative performance of different approaches with multiple test graphs,
there are two ways to obtain an average comparison: relative-average, and average-relative.
A relative-average comparison first finds the relative performance (ratio) of each approach with
respect to a baseline approach (one of them), and then averages them. Consider, for
example, three approaches a, b, and c, with 3 test runs for each of the three approaches,
labeled a1, a2, a3, b1, b2, b3, c1, c2, c3. The relative performance of each approach with
respect to c would be a1/c1, b1/c1, c1/c1, a2/c2, b2/c2, and so on. The relative-average
comparison is now the average of these ratios, i.e., (a1/c1+a2/c2+a3/c3)/3 for a,
(b1/c1+b2/c2+b3/c3)/3 for b, and 1 for c. In contrast, an average-relative comparison first finds
the average time/iterations taken for each approach, and then finds the relative performance
with respect to a baseline approach. Again, considering three approaches, with 3 test runs as
above, the average values of each approach would be (a1+a2+a3)/3 for a, (b1+b2+b3)/3 for b,
and (c1+c2+c3)/3 for c. The average-relative comparison of each approach with respect to c
would then be (a1+a2+a3)/(c1+c2+c3) for a, (b1+b2+b3)/(c1+c2+c3) for b, and 1 for c.
Semantically, a relative-average comparison gives equal importance to the relative
performance on each test run (graph), while an average-relative comparison gives equal
importance to the magnitude (time/iterations) of every test run (or simply, it gives higher
importance to test runs with larger graphs). For these experiments, both comparisons are
made, but only one of them is presented here if the two are quite similar.
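For concreteness, a minimal sketch of the two averaging methods, assuming each approach's iteration counts (or times) are stored in a vector indexed by test graph, and base holds the baseline approach's values (names here are illustrative):

```cpp
#include <vector>
using std::vector;

// Relative-average: average of the per-graph ratios x[i] / base[i].
double relativeAverage(const vector<double>& x, const vector<double>& base) {
  double s = 0;
  for (size_t i = 0; i < x.size(); ++i) s += x[i] / base[i];
  return s / x.size();
}

// Average-relative: ratio of totals (equivalently, ratio of averages),
// which weights larger graphs more heavily.
double averageRelative(const vector<double>& x, const vector<double>& base) {
  double sx = 0, sb = 0;
  for (size_t i = 0; i < x.size(); ++i) { sx += x[i]; sb += base[i]; }
  return sx / sb;
}
```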
Figure 1: Average iterations for PageRank computation with damping factor α adjusted from 0.50 to
1.00 in steps of 0.05. Charts for relative-average and average-relative iterations (with respect to
damping factor α = 0.85) follow the same curve, with slightly differing values (the relative-average
and average-relative iterations are quite similar).
Results (figure 1) indicate that increasing the damping factor α beyond 0.85 significantly
increases convergence time, and lowering it below 0.85 decreases convergence time. On
average, using a damping factor α = 0.95 increases both convergence time and iterations by
192%, and using a damping factor α = 0.75 decreases both by 41% (compared to damping
factor α = 0.85). Note that a higher damping factor implies that a random surfer follows links
with higher probability (and jumps to a random page with lower probability).
Observing that adjusting the damping factor has a significant effect, another experiment was
performed. The idea behind this experiment was to adjust the damping factor α in steps,
to see if it might help reduce PageRank computation time. The computation starts with a
small α and, each time the ranks converge, increases it until the final desired value of α is
reached. For example, the computation starts with α = 0.5, lets the ranks converge quickly,
then switches to α = 0.85 and continues until convergence. This single-step change is
attempted with the initial (fast-converging) damping factor α ranging from 0.1 to 0.84.
Similar to this, two-step, three-step, and four-step changes are also attempted. With
a two-step approach, a midpoint between the damping_start value and 0.85 is selected as
well for the second set of iterations. Similarly, three-step and four-step approaches use two
and three midpoints respectively.
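A minimal sketch of this stepped scheme follows (written in C++ for consistency with this report, though the actual experiment was run in Node.js); pagerank() is a hypothetical helper that continues iterating from the given ranks and returns the number of iterations it took:

```cpp
#include <vector>
using std::vector;

// Hypothetical helper: iterates PageRank in place from the given ranks with
// damping factor alpha until the chosen tolerance is met; returns iterations.
int pagerank(vector<double>& ranks, double alpha, double tolerance);

// An s-step scheme (steps >= 1) runs s+1 convergence phases: first at
// dampingStart, then at evenly spaced intermediate values, finally at
// dampingFinal (0.85). Ranks from each phase seed the next one.
int pagerankStepped(vector<double>& ranks, double dampingStart,
                    double dampingFinal, int steps, double tolerance) {
  int totalIterations = 0;
  for (int i = 0; i <= steps; ++i) {
    double alpha = dampingStart + (dampingFinal - dampingStart) * i / steps;
    totalIterations += pagerank(ranks, alpha, tolerance);
  }
  return totalIterations;
}
```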
A small sample graph is used in this experiment, which is stored in the MatrixMarket (.mtx)
file format. The experiment is implemented in Node.js, and executed on a personal laptop.
Only the iteration count of each test case is measured. A tolerance of τ = 10⁻⁵ is used for all
test cases. Statistics of each test case are printed to standard output (stdout) and redirected
to a log file, which is then processed with a script to generate a CSV file, with each row
representing the details of a single test case. This CSV file is imported into Google Sheets,
and necessary tables are set up with the help of the FILTER function to create the charts.
Figure 2: Iterations required for PageRank computation, when damping factor α is adjusted in 1-4
steps, starting with damping_start. 0-step is the fixed damping factor PageRank, with α = 0.85.
From the results (figure 2), it is clear that modifying the damping factor α in steps is not a
good idea. The standard fixed damping factor PageRank, with α = 0.85, converges in 35
iterations. Using a single-step approach increases the number of iterations required, and the
count grows further as the initial damping factor damping_start is increased. Switching to a
multi-step approach also increases the number of iterations needed for convergence. A
possible explanation for this effect is that the ranks for different values of the damping factor
α are significantly different, and switching to a different damping factor α after each step
mostly leads to recomputation.
Similar to the damping factor α, adjusting the value of the tolerance τ can have a significant
effect as well. Apart from the value of tolerance τ, it is observed that different implementations
use different error functions for the convergence check. Although the L1 norm is commonly
used, it appears nvGraph uses the L2 norm instead [nvgraph]. An answer on Stack Overflow
suggests a per-vertex tolerance comparison, which is essentially the L∞ norm. The L1 norm
||E||₁ between two (rank) vectors r and s is calculated as ||E||₁ = Σ|rₙ - sₙ|, i.e., the sum of
absolute errors. The L2 norm ||E||₂ is calculated as ||E||₂ = √(Σ|rₙ - sₙ|²), i.e., the square root
of the sum of squared errors (the Euclidean distance between the two vectors). The L∞ norm
||E||∞ is calculated as ||E||∞ = max|rₙ - sₙ|, i.e., the maximum of the absolute errors.
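A minimal sketch of the three error functions between two rank vectors r and s (for illustration; not the exact code used in the experiments):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>
using std::vector;

// L1 norm: sum of absolute errors.
double errorL1(const vector<double>& r, const vector<double>& s) {
  double e = 0;
  for (size_t i = 0; i < r.size(); ++i) e += std::fabs(r[i] - s[i]);
  return e;
}

// L2 norm: square root of the sum of squared errors (Euclidean distance).
double errorL2(const vector<double>& r, const vector<double>& s) {
  double e = 0;
  for (size_t i = 0; i < r.size(); ++i) e += (r[i] - s[i]) * (r[i] - s[i]);
  return std::sqrt(e);
}

// L-infinity norm: maximum absolute error (per-vertex comparison).
double errorLinf(const vector<double>& r, const vector<double>& s) {
  double e = 0;
  for (size_t i = 0; i < r.size(); ++i) e = std::max(e, std::fabs(r[i] - s[i]));
  return e;
}
```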
This experiment compares the performance of PageRank computation with L1, L2, and L∞
norms as the convergence check, for various tolerance τ values ranging from 10⁻⁰ to 10⁻¹⁰
(10⁻⁰, 5×10⁻⁰, 10⁻¹, 5×10⁻¹, …). The input graphs, system used, and the rest of the
experimental process are similar to those of the first experiment.
tolerance    L1 norm    L2 norm    L∞ norm
1.00E-05     49         65         27
5.00E-06     53         65         31
1.00E-06     63         500        41
5.00E-07     67         500        45
1.00E-07     77         500        55
5.00E-08     84         500        59
1.00E-08     500        500        70
5.00E-09     500        500        73
1.00E-09     500        500        500
5.00E-10     500        500        500
1.00E-10     500        500        500
Table 1: Iterations taken for PageRank computation of the web-Stanford graph, with L1, L2, and L∞
norms used as the convergence check. At tolerance τ = 10⁻⁶, the L2 norm suffers from sensitivity
issues, followed by the L1 and L∞ norms at 10⁻⁸ and 10⁻⁹ respectively. Only relevant tolerances are
shown here.
Figure 3: Iterations taken for PageRank computation of the asia_osm graph, with L1, L2, and L∞
norms used as the convergence check. Until tolerance τ = 10⁻⁷, the L∞ norm converges in just one
iteration.
Figure 4: Average iterations taken for PageRank computation with L1, L2, and L∞ norms as the
convergence check, and tolerance τ adjusted from 10⁻⁰ to 10⁻¹⁰ (10⁻⁰, 5×10⁻⁰, 10⁻¹, 5×10⁻¹, …).
The L∞ norm convergence check seems to be the fastest, followed by the L1 norm (on average).
Figure 5: Average-relative iterations taken for PageRank computation with L1, L2, and L∞ norms as
the convergence check, and tolerance τ adjusted from 10⁻⁰ to 10⁻¹⁰. The L∞ norm convergence check
seems to be the fastest; however, it is difficult to tell whether the L1 or the L2 norm comes in second
place (on average).
Figure 6: Relative-average iterations taken for PageRank computation with L1, L2, and L∞ norms as
the convergence check, and tolerance τ adjusted from 10⁻⁰ to 10⁻¹⁰. The L∞ norm convergence check
seems to be the fastest, followed by the L2 norm (on average).
For various graphs, it is observed that PageRank computation with L1, L2, or L∞ norm as
convergence check suffers from sensitivity issues beyond certain (smaller) tolerance τ
values. As tolerance τ is decreased from 10⁻⁰ to 10⁻¹⁰, the L2 norm is usually (except for road
networks) the first to suffer from this issue, followed by the L1 norm (or L2), and eventually the
L∞ norm (if ever). This sensitivity issue is recognized when a given approach abruptly takes
500 iterations (the maximum allowed) at the next lower tolerance τ value, as shown in table 1.
It is also observed that PageRank computation with the L∞ norm as the convergence check
completes in just one iteration (even for tolerances τ ≥ 10⁻⁶) for large graphs (road networks).
This is because the L∞ norm only considers the maximum per-vertex error, ||E||∞ = max|rₙ - sₙ|,
and for a graph of large order (number of vertices) N, individual ranks are on the order of 1/N;
when 1/N is smaller than the required tolerance τ, the check is satisfied after the first iteration.
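As a rough worked example (the vertex count here is hypothetical, chosen only for illustration): for a road network with N = 10⁷ vertices, individual ranks are on the order of 1/N, so after the first iteration

$$\|E\|_\infty = \max_n |r_n - s_n| \;\lesssim\; \frac{1}{N} = 10^{-7} \;<\; \tau = 10^{-6},$$

and the convergence check is already satisfied.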
Based on the average-relative comparison, the ratio of iterations between PageRank
computation with L1, L2, and L∞ norms as the convergence check is 4.73 : 4.08 : 1.00. Hence
the L2 norm is on average 16% faster than the L1 norm, and the L∞ norm is 308% faster (~4x)
than the L2 norm. The variation of average-relative iterations for various tolerance τ values is
shown in figure 5. A similar effect is also seen in figure 4, where average iterations for various
tolerance τ values are shown. On the other hand, based on the relative-average comparison,
the ratio of iterations between PageRank computation with L1, L2, and L∞ norms as the
convergence check is 10.42 : 6.18 : 1.00. Hence, the L2 norm is on average 69% faster than
the L1 norm, and the L∞ norm is 518% faster (~6x) than the L2 norm. The variation of
relative-average iterations for various tolerance τ values is shown in figure 6. This shows that
while the L1 norm is on average slower than the L2 norm, the difference between the two
diminishes for large graphs (the average-relative comparison gives higher importance to
results from larger graphs, unlike relative-average). It should also be noted that the L2 norm
is not always faster than the L1 norm; in several cases (usually at smaller tolerance τ values)
it is slower, as can be seen in table 1.
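The percentages above follow directly from these ratios. For the average-relative comparison, for example,

$$\frac{4.73}{4.08} \approx 1.16 \qquad\text{and}\qquad \frac{4.08}{1.00} \approx 4.08,$$

i.e., L1 takes about 16% more iterations than L2, and L2 takes about 4.08× (308% more) the iterations of L∞. Similarly, the relative-average ratios give 10.42/6.18 ≈ 1.69 (69%) and 6.18/1.00 = 6.18 (518%).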
Parameter values can have a significant effect on performance, as seen in these
experiments. Different convergence-check functions converge at different rates, and which of
them converges faster depends upon the tolerance τ value. The iteration count needs to be
checked in order to ensure that no approach is suffering from sensitivity issues, or converging
in a single iteration. Finally, the relative performance comparison method affects which results
get more importance, and which do not, in the final average. Taking note of each of these
points when comparing iterative algorithms will thus ensure that the performance results are
accurate and useful.
Table 2: List of parameter adjustment strategies, and links to source code.
Damping Factor adjust dynamic-adjust
Tolerance L1 norm L2 norm L∞ norm
1. Comparing the effect of using different values of damping factor, with PageRank (pull, CSR).
2. Experimenting PageRank improvement by adjusting damping factor (α) between iterations.
3. Comparing the effect of using different functions for convergence check, with PageRank (...).
4. Comparing the effect of using different values of tolerance, with PageRank (pull, CSR).
