Robert Merton adopted the Biblical adage that "the rich get richer and the poor get poorer" (Matthew 13:12) to explain the disproportionate credit given to eminent scientists relative to similar contributions from unknown scientists. In doing so, he established a basic sociological effect spanning "in varying degrees every social institution." This deck traces a brief history of scientific citation, establishes its relationship to models of proportionate random growth, and extends the discussion to nonscalable randomness and extreme value theory. Along the way, "hot hands" in streaks of success are also considered.
The Matthew Effect:
History, Illustrations, Implications and Generalizations
Thomas Ball
Market Modeler’s Group
September 14, 2017
I. A brief history of the ‘Matthew Effect’
II. Proportionate Random Growth
III. Scale, Scaling and Scalability
IV. Nonscalable Randomness
V. Summary
VI. Appendix
Table of Contents
“Philosophers are right when they tell us that nothing is great
or little otherwise than by comparison.”
-Jonathan Swift, Gulliver’s Travels
Robert Merton’s (1910-2003) landmark 1968 paper is qualitative and theoretical, mostly
concerned with elucidating the rampant inequities in scientific citation and publication
- The Gospel of Matthew (13:12), “For whosoever hath, to him shall be given, and he shall have
more abundance: but whosoever hath not, from him shall be taken away even that he hath.”
- “Eminent scientists get disproportionately greater credit for their contributions to science while
relatively unknown scientists tend to get disproportionately less credit for comparable
contributions…”
- ”…(This effect or principle) holds in varying degrees for every social institution...”
Robert Merton, The Matthew Effect in Science
I. A brief history of the Matthew Effect
*Robert Merton, The Matthew Effect in Science, Science, 159(3810), 56-63, Jan 5, 1968
- Quetelet's 1835 Treatise on Man was the first in which organic variation in humans and social
statistics were dealt with from the point of view of the mathematical theory of probabilities
- “If an individual in any given epoch of society possessed all the qualities of the l’homme
moyen, he would represent all that is great, good, or beautiful.”*
- The average man of average appetites is characterized by the arithmetic mean of
measured variables that follow a normal distribution
- His notions live on in a multitude of ways
Quetelet, Social Physics and L’Homme Moyen
Adolphe Quetelet (1796-1874) was a Belgian astronomer, statistician and sociologist who was
key in introducing quantitative methods to the social sciences, e.g., the BMI
*Quetelet, Adolphe, Treatise on Man and the Development of His Faculties, 1835, vol. 1, p. 12
I. A brief history of the Matthew Effect
L’Homme Moyen
[Chart: actual vs log # of men cited by edition year, 1903-1960; fitted trends: Log Rate = 1.06 Ln(x) + 3.5, Log # Cited = 1.41 Ln(x) + 7.8]
Galton Revisited
Francis Galton (1822-1911) was a British polymath who introduced fingerprinting to Scotland
Yard and sought a solid foundation for the social sciences based on quantified measures,
mathematical theory and Darwin’s theory of evolution
- Pioneered studies in the rate of creativity, eminence and quality among eminent men
- In replicating Galton's analysis on American men of science in the 20th c., Derek de Solla Price
found "no material changes in the estimated incidence of scientific eminence" since Galton*
I. A brief history of the Matthew Effect
*Derek de Solla Price, Little Science, Big Science: Prologue to a Science of Science,
1963, Columbia, p. 38
Price's 1963 Replication of Galton's Work
[Chart: # of Men Cited in Editions of American Men of Science, comparison of frequency vs rate, actual vs log]

Edition (Year) | # Men Cited | Rate per Capita (Number/Million US Population)
1903 | 4,000 | 50
1910 | 5,500 | 60
1921 | 9,500 | 90
1928 | 13,500 | 110
1933 | 22,000 | 175
1938 | 28,000 | 220
1944 | 34,000 | 240
1948 | 50,000 | 340
1955 | 74,000 | 440
1960 | 96,000 | 480
The Halo Effect
Rosenzweig’s 2007 management strategy book, The Halo Effect, explains how the Matthew
Effect works wrt evaluations of corporate performance and brand equity – without ever
mentioning Merton or the Matthew Effect
- Describes the halo effect as a “heuristic, a sort of rule of thumb that people use to make
guesses about things that are hard to assess directly...”*
- Cites Edward Thorndike (1874-1949) as the psychologist who first identified the halo effect
in an analysis of performance ratings of soldiers by their superiors in WWI
- Replicated many times, particularly in marketing and education
I. A brief history of the Matthew Effect
*Phil Rosenzweig, The Halo Effect, 2007, Free Press, p. 52
Derek de Solla Price (1922-1983) extended univariate citation frequency distributions to
social networks of scientists, citations and publications
Price’s Model: Social Networks and Cumulative Advantage
I. A brief history of the Matthew Effect
- Price’s theory of ‘cumulative advantage‘ was built on a statistical model of the situation in which
‘success breeds success’*
- ‘Cumulative Advantage’ was later picked up and renamed by Barabasi-Albert as ‘preferential
attachment’ in networks, a key concept in network theory
- “Many large networks follow a scale-free power-law distribution where 1), networks
expand continuously by the addition of new vertices and 2), new vertices attach
preferentially to sites that are already well connected” **
Cumulative Advantage vs Preferential Attachment
*Derek de Solla Price, Networks of Scientific Papers, Science, 149(3683): 510-515, July 30, 1965
**Barabási and Albert, Emergence of Scaling in Random Networks, Science, vol 286, Oct 15, 1999,
pps. 509-512.
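A minimal simulation sketch of preferential attachment (illustrative parameters and seed; not Price's or Barabási-Albert's own code) shows how nodes that arrive early, and so accumulate degree early, end up far better connected than later arrivals:

```python
import random

def preferential_attachment(n_nodes, m=1, seed=0):
    """Grow a network where each new node attaches to m existing nodes
    chosen with probability proportional to degree (Price's 'cumulative
    advantage', Barabasi-Albert's 'preferential attachment')."""
    rng = random.Random(seed)
    degree = {i: 0 for i in range(m + 1)}
    targets = []  # multiset of edge endpoints; node i appears degree(i) times
    # Seed network: complete graph on m+1 nodes
    for i in range(m + 1):
        for j in range(i + 1, m + 1):
            targets += [i, j]
            degree[i] += 1
            degree[j] += 1
    for new in range(m + 1, n_nodes):
        chosen = set()
        while len(chosen) < m:
            # Sampling from the endpoint multiset is degree-proportional
            chosen.add(rng.choice(targets))
        degree[new] = 0
        for t in chosen:
            targets += [new, t]
            degree[new] += 1
            degree[t] += 1
    return degree

deg = preferential_attachment(5000)
# The maximum degree dwarfs the median degree: the 'rich' nodes got richer
print(max(deg.values()), sorted(deg.values())[len(deg) // 2])
```

The resulting degree distribution is the scale-free power law Barabási and Albert describe.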
Among the mechanisms underlying the notion that “success breeds success” are the "luck
vs skill" debates in sports, e.g., the ”hot hands” phenomenon
Steph Curry, Aaron Judge and the Matthew Effect
- “Hot hands” is a term describing winning streaks, e.g., in basketball, where players are
perceived to make more three point baskets than expected in a run or sequence of shots
- Historically debunked based on unconditional expectations
- A recent paper* showed that expectations are more appropriately modeled conditionally: given a
made basket, the likelihood that the next shot is also a basket goes up
Hot Hands in Basketball
*Miller and Sanjurjo, Is It a Fallacy to Believe in the Hot Hand in the NBA Three-Point
Contest? http://www.igier.unibocconi.it/files/548.pdf
I. A brief history of the Matthew Effect
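The Miller-Sanjurjo selection bias can be illustrated with a small simulation (a sketch with an illustrative sequence length and seed, not their actual analysis): averaging the per-sequence proportion of heads-after-heads across many short fair-coin sequences comes out well below 0.5, which is why the naive "debunking" estimator masked any real hot hand:

```python
import random

def prop_h_after_h(seq):
    """Proportion of flips immediately following a heads (1) that are
    also heads; None if no flip in the sequence follows a heads."""
    follows = [seq[i + 1] for i in range(len(seq) - 1) if seq[i] == 1]
    return sum(follows) / len(follows) if follows else None

rng = random.Random(42)
vals = []
for _ in range(100_000):
    seq = [rng.randint(0, 1) for _ in range(4)]  # fair coin, length 4
    p = prop_h_after_h(seq)
    if p is not None:
        vals.append(p)

# Averages ~0.40 for length-4 sequences, not 0.5, even though every
# flip is fair: the streak-selection bias Miller & Sanjurjo identify
print(sum(vals) / len(vals))
```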
- Generates a deterministic, sigmoidal or S-shaped curve that reaches a theoretical,
asymptotic limit, carrying capacity or level of saturation
- Predates Verhulst’s definition of logistic growth by nearly 20 years
Benjamin Gompertz: First Nonlinear Model
II. Proportionate Random Growth
Benjamin Gompertz (1779-1865) was a British mathematician and actuary who, in 1825,
developed a model of population mortality and growth that was nonlinear in the parameters
Gompertz Curve
[Chart: S-shaped Gompertz curve, acceleration followed by deceleration toward the asymptote]
Source: Benjamin Gompertz, https://en.wikipedia.org/wiki/Benjamin_Gompertz
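A sketch of the Gompertz curve with illustrative parameters (K = carrying capacity, b = displacement, c = growth rate; the values below are arbitrary for demonstration):

```python
import math

def gompertz(t, K=100.0, b=5.0, c=0.5):
    """Gompertz curve N(t) = K * exp(-b * exp(-c*t)): S-shaped growth
    asymptoting at carrying capacity K."""
    return K * math.exp(-b * math.exp(-c * t))

# Accelerates early, decelerates late, approaching K = 100
for t in (0, 5, 10, 20):
    print(t, round(gompertz(t), 2))
```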
- A common rule of thumb for many events, e.g., "80% of sales come from 20% of clients,” a
discretization of a power law for a particular set of parameters
Vilfredo Pareto, Distribution of Income and the 80/20 Rule
Vilfredo Pareto (1848-1923) was an Italian economist who developed power laws for income and
wealth distributions as well as the famous "80/20 rule" – the Pareto Principle
Fat Tail
II. Proportionate Random Growth
Source: Vilfredo Pareto, https://en.wikipedia.org/wiki/Vilfredo_Pareto
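A minimal sketch of the 80/20 split (sample size, seed and tail index are illustrative): a Pareto distribution with tail index alpha ~ 1.16 implies the top 20% of draws hold about 80% of the total:

```python
import random

# Inverse-transform sampling from a Pareto distribution with minimum 1:
# X = U**(-1/alpha). alpha ~ 1.16 corresponds to the classic 80/20 rule.
rng = random.Random(0)
alpha, n = 1.16, 200_000
draws = sorted(rng.random() ** (-1.0 / alpha) for _ in range(n))

# Share of the total held by the top 20% of draws
share = sum(draws[int(0.8 * n):]) / sum(draws)
print(round(share, 2))  # typically near 0.8, though heavy-tail sampling noise is large
```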
- The frequency of any word is inversely proportional to its rank
- Zipf later applied this law to the distribution of languages, city populations and more
- In the case of Zipf’s Law wrt cities, the exponent or tail index usually scales around 1
Zipf’s Law of Relative Frequency
Zipf’s Law and City Size
George Kingsley Zipf (1902-1950), in his 1932 book Selected Studies of the Principle of
Relative Frequency in Language, described a “statistical law of relative word frequency”*
Lack of Fit
II. Proportionate Random Growth
Tail Index ~ 1
Note that both the x- and y-axes are expressed in logarithmic or relative terms, in powers of 10
*George Zipf, Selected Studies of the Principle of Relative Frequency in
Language, 1932, Harvard
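A sketch of estimating the Zipf exponent (tail index) by least squares on log(rank) vs log(frequency); the synthetic rank-frequency data below, constructed with exponent s = 1, stands in for word counts or city sizes:

```python
import math

def zipf_exponent(freqs):
    """Estimate the Zipf exponent s in freq ~ C / rank**s by least
    squares on the log-log rank-frequency plot; freqs sorted descending."""
    pts = [(math.log(r + 1), math.log(f)) for r, f in enumerate(freqs)]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    slope = (sum((x - mx) * (y - my) for x, y in pts)
             / sum((x - mx) ** 2 for x, _ in pts))
    return -slope  # Zipf exponent is minus the log-log slope

# Exactly Zipfian synthetic frequencies with s = 1
freqs = [1_000_000 / r for r in range(1, 1001)]
print(round(zipf_exponent(freqs), 2))  # -> 1.0
```

On real corpora or city-size data the fit is only approximate, with the lack of fit typically showing up in the tails, as the chart notes.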
- A rule of proportionate growth or proportionate effect which states that the proportional or
relative rate of growth of a corporation is independent of its absolute size
- Laws of proportionate growth give rise to a distribution that is log-normal
Gibrat’s Law of Firm Size
About the same time as Zipf but in France, Robert Gibrat (1904-1980) developed Gibrat's Law
II. Proportionate Random Growth
Source: Gibrat's Law, https://en.wikipedia.org/wiki/Gibrat%27s_law
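A minimal simulation of Gibrat's law (shock size, horizon and firm count are illustrative): when growth is multiplicative and independent of size, log-sizes are sums of i.i.d. shocks, so by the central limit theorem sizes converge to a log-normal:

```python
import math
import random

rng = random.Random(1)
n_firms, n_periods, sigma = 20_000, 100, 0.1
sizes = [1.0] * n_firms
for _ in range(n_periods):
    # Size-independent multiplicative shock: Gibrat's law of
    # proportionate effect
    sizes = [s * math.exp(rng.gauss(0, sigma)) for s in sizes]

logs = [math.log(s) for s in sizes]
mean = sum(logs) / len(logs)
var = sum((x - mean) ** 2 for x in logs) / len(logs)
# log-sizes ~ Normal(0, n_periods * sigma**2) = Normal(0, 1)
print(round(mean, 2), round(var, 2))
```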
A.S.C. Ehrenberg
Andrew Ehrenberg (1926-2010) was a British statistician and marketing scientist
[Diagrams: ATR Model | Marketing Funnel | Customer Journey]
- Among the first to identify lawlike relationships and build models applied to consumer behavior,
brand choice, purchase, repeat purchase and loyalty, even for low involvement products
- “Consumer choice shows patterns that are regular and predictable as a function of the
market share of individual brands. Share plays a key role that supersedes external factors
such as advertising, pricing and distribution” (Repeat Buying, 1988, p. 18)
- Ehrenberg also developed the ‘ATR model,’ precursor of the classic ‘marketing funnel,’
renamed today as the ‘customer journey’
II. Proportionate Random Growth
Decompositional Models of Brand Equity and the Halo Effect
After Ehrenberg, marketing scientists attempted to build models decomposing the bias in
the halo effect with respect to brand equity attribute ratings
- Dillon, et al., identified common sources of rating bias including halo error and response style
- Halo error refers to an overall liking or disliking where the bias varies in magnitude (e.g.,
sales, share, size, reputation) with the brand being rated
- Response style refers to individual tendencies to systematically prefer response categories
- Ratings decomposed into two components:
- Global impressions (non-attribute, halo error)
- Brand-specific impressions (truer brand performance as a function of attributes and benefits)
Source: Dillon, Madden, Kirmani and Mukherjee, Understanding What's in a Brand
Rating, Journal of Marketing Research, XXXVIII, Nov 2001, pp. 415-429
Elements of Brand Equity
II. Proportionate Random Growth
Rogers-Bass Model of Innovation and Diffusion in Marketing
- Based on Everett Rogers 1962 book, Diffusion of Innovations, which described the different
stages of consumer product adoption from Innovators to the Imitators following their lead
- Describes an S-shaped curve that is mathematically similar to Gompertz’ curve
II. Proportionate Random Growth
Frank Bass’ model dates from the late 1960s and states that the probability that an
individual will adopt an innovation — given that the individual has not yet adopted it—is
linear with respect to the number of previous adopters
Sources: Everett Rogers, https://en.wikipedia.org/wiki/Everett_Rogers;
Bass Diffusion Model, https://en.wikipedia.org/wiki/Bass_diffusion_model
[Charts: Rogers' Diffusion of Innovations | Bass Diffusion Model]
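The Bass hazard described above can be sketched as a discrete-time recursion. The coefficients below (p = 0.03 for innovation, q = 0.38 for imitation) are commonly quoted textbook-typical values, used here purely for illustration:

```python
def bass_adopters(p=0.03, q=0.38, M=1.0, periods=25):
    """Cumulative adoption N(t) via the discrete Bass recursion:
    new adopters per period = (p + q*N/M) * (M - N), i.e., the
    adoption hazard is linear in the installed base N."""
    N, path = 0.0, []
    for _ in range(periods):
        N += (p + q * N / M) * (M - N)
        path.append(N)
    return path

path = bass_adopters()
# Cumulative adoption traces an S-curve approaching market size M
print([round(x, 2) for x in path[::5]])
```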
Constructal Law
The Constructal Law is a “law” of physics that accounts for the natural tendency of all flow
systems (animate and inanimate) to change into configurations that offer progressively
greater flow access over time
- S-shaped curves underlie the flows that bathe and connect landscapes, united not only by the
tapestry of tree-shaped flows but also by the non-monotonic manner in which these flow
architectures spread
- Views the position of design in nature as a universal, physical phenomenon
- The ‘secret’ of global design is dependence on a few large objects, many small ones
II. Proportionate Random Growth
Source: Bejan and Lorente, Constructal law of design and evolution: Physics,
biology, technology, and society, J. Appl. Phys. 113, 151301, 2013
Scale, Scaling, Scalability
“Everywhere Nature works true to scale, everything has its proper size accordingly. Men
and trees, birds and fishes, stars and star-systems, have their appropriate dimensions, and
their more or less narrow range of absolute magnitude.”*
Examples of Scaling
[Charts: Kleiber's Law of Mammalian Mass vs Metabolic Rate** (log body mass vs log metabolic rate); City Size vs Macro Indexes***; Growth in Wealth and Population, 1-2010 CE****]
Sublinear or Subexponential (slope < 1.0) | Linear or Exponential (slope ~ 1.0) | Superexponential (slopes >> 1.0)
*D'Arcy Thompson, On Growth and Form, 1917
**https://en.wikipedia.org/wiki/Kleiber%27s_law
***http://financingcities.ifmr.co.in/blog/2013/08/02/the-urban-organism-cities-as-living-beings/
****http://www.theworldeconomy.org/MaddisonTables/MaddisontableB-10.pdf
III. Scale, Scaling and Scalability
- Scalable has supplanted sustainable as a key strategic buzzword
World views as expressed in religion, philosophy and systems theory are scalable and
simplifying approximations binding and uniting individual entities with the whole
The Great Chain of Being vs Complex Systems Theory
Ancient vs Modern World Views
Summas of Scale
III. Scale, Scaling and Scalability
- Visualizations trace an ideal of existence as functioning in a single, harmonious hierarchy
Simulating Income Inequality
Given a room full of 100 people with 100 dollars each
- With every tick of the clock, every person with money gives a dollar to one randomly
chosen other person
III. Scale, Scaling and Scalability
Source: Dan Goldstein, DecisionScienceNews.com, June 19, 2017,
http://www.decisionsciencenews.com/2017/06/19/counterintuitive-problem-everyone-room-keeps-giving-dollars-random-others-youll-never-guess-happens-next/
- What’s your assumption about the distribution of the results?
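A minimal sketch of the game (tick count and seed are illustrative): total wealth is conserved, yet the distribution spreads out dramatically rather than staying flat, which is the counterintuitive part:

```python
import random

rng = random.Random(7)
n = 100
wealth = [100] * n  # 100 people, $100 each
for _ in range(5000):
    givers = [i for i in range(n) if wealth[i] > 0]
    for i in givers:
        # Pick a random recipient other than the giver
        j = rng.randrange(n - 1)
        j = j if j < i else j + 1
        wealth[i] -= 1
        wealth[j] += 1

wealth.sort()
# Total is still $10,000, but min and max have drifted far apart
print(wealth[0], wealth[-1], sum(wealth))
```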
Trends in Income Inequality
Income (and wealth) inequality shows all the signs of exponential growth as well as the Matthew
Effect: the rich are truly getting richer
Source: Thomas Piketty, Capital in the Twenty-First Century, 2014
- Inequality is not inevitable
- Important drivers concern society’s definition of power - almost philosophically - and how
these views are enacted wrt regimes of budgetary, legislative, policy and regulatory
decisions, e.g., tax structures
III. Scale, Scaling and Scalability
In probability and statistics, the exponential family is a set of probability distributions
historically chosen for mathematical convenience, tractability, generality and scalability
The Exponential Family of Distributions
Commonly Used Continuous Exponential Distributions
- Exponential, S-shaped assumptions underpin most of the "nonlinear" Markov process models
used in, e.g., NLP, text mining, fMRI analysis, deep learning neural nets and artificial intelligence
- The simplest exponential distributions have a single natural parameter in the exponent of the
density, possess defined, finite moments (e.g., mean, std dev, skewness, kurtosis), and
frequently require fixed or known inputs
Source: Exponential Family of Distributions, https://en.wikipedia.org/wiki/Exponential_family
III. Scale, Scaling and Scalability
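For reference, the canonical single-parameter exponential family form (the standard textbook definition, not stated explicitly in the deck):

```latex
f(x \mid \theta) = h(x)\, \exp\!\big( \eta(\theta)\, T(x) - A(\theta) \big)
```

The Gaussian, exponential, Poisson, gamma and binomial (with known number of trials) distributions all take this form, which is what yields finite sufficient statistics, tractable moments and the scalability the slide describes.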
Problems with Scalability
Growth is not always smooth and S-shaped
- Convergence in the limit, if it happens at all, can be fast, slow, inconsistent, oscillating,
punctuated with bursts, sudden and abrupt jumps and shifts, collapse as well as having
unpredictable feedback effects (cybernetic forces), and so on
- Fixed assumptions that nonlinearity is scalable and S-shaped can be nontrivially wrong
- Extreme magnitude events do not scale
Possible Shapes for Growth as It Approaches the Asymptote
[Charts: Alternative Exponential Growth Curves, v1* and v2, fitted asymptote vs observed]
*Derek de Solla Price, Little Science, Big Science: Prologue to a Science of Science,
1963, Columbia, p. 24
III. Scale, Scaling and Scalability
Extreme Valued Phenomena Are as Ubiquitous as Scalable Ones
Weather, terrorism, traditional and social media, entertainment, engineering, financial markets,
insurance claims, risk management, unique website visitors, and more, all evidence extreme
valued behavior and events
IV. Nonscalable Randomness
- A key diagnostic to the presence of extreme events is lack of fit in the tails with respect to
scalable assumptions
Lack of Fit to Scalability*
[Charts: actual vs predicted, showing lack of fit in the tails]
*Lin and Tegmark, Critical Behavior from Deep Dynamics: A Hidden Dimension in Natural Language,
July 2016, arXiv: 1606.06737
**Richard Chirgwin, fMRI bugs could upend years of research, July 2016,
http://www.theregister.co.uk/2016/07/03/mri_software_bugs_could_upend_years_of_research
What’s Different About Extreme Valued Data, Models and Theory?
Traditional, scalable, Gaussian ways of looking at the world begin by focusing on the average,
ordinary and typical
IV. Nonscalable Randomness
- Extreme value theory takes the exceptional as the starting point and deals with the ordinary
as subordinate
- The ordinary is less consequential
Comparing Assumptions: Scalable vs Extreme Value

Assumption    | Scalable                                 | Extreme Value
Logic         | Rational, common sense                   | Irrational? Unconventional
Randomness    | Scalable                                 | Nonscalable
Dominated by  | Many small events, tyranny of majority   | A few large events, tyranny of minority
Moments       | Defined and finite                       | May be undefined and infinite
Potential     | Average                                  | Greatest
Virality      | Virtually impossible                     | High likelihood
Tails         | Thin-tailed, exponents ~1.0 or less      | Fat-tailed, exponents greater than 1.0
Methodologies | Rich and well-developed                  | Not as well-developed
Use of data   | Uses all of the data                     | May or may not use all of the data
Tail Index    | Not important                            | Key metric
Models        | GLMs                                     | Block Maxima, Peaks Over Threshold
Risks         | Known, everyday                          | Unknown, rare
Time Horizon  | Typically short                          | Frequently long
Rare Events   | Outliers to be deleted                   | Explicitly incorporated
Distributions | Exponential                              | GPD, GEV
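The tail index named above is the key extreme-value metric. A minimal sketch of estimating it with the Hill estimator, a standard EVT diagnostic (the synthetic Pareto data, sample size and choice of k are illustrative):

```python
import math
import random

def hill_estimator(data, k):
    """Hill estimator of the tail index alpha from the k largest
    order statistics of the sample."""
    xs = sorted(data, reverse=True)[: k + 1]
    logs = [math.log(x) for x in xs]
    return k / sum(logs[i] - logs[k] for i in range(k))

# Sanity check on synthetic Pareto draws with known tail index alpha = 2
rng = random.Random(3)
draws = [rng.random() ** -0.5 for _ in range(100_000)]
est = hill_estimator(draws, 2000)
print(round(est, 2))  # close to 2
```

In practice the estimate is plotted against many values of k (a "Hill plot") because the choice of threshold matters.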
Extreme Value Strategies in Marketing
Anita Elberse’s 2013 book*, Blockbusters, summarizes the relevance of extreme valued
phenomena for marketing strategy
- Blockbuster or tentpole strategies assume that a single, enormous, viral hit can cross-subsidize
modest returns or, worse, large losses across the rest of the portfolio
- Ever since Jaws and Star Wars, many film studios have relied on this strategy
- Used today by, e.g., HBO and Netflix to drive growth and membership
- The Long Tail
*Anita Elberse, Blockbusters: Hit-making, Risk-taking, and the
Big Business of Entertainment, Henry Holt, 2013
**Chris Anderson, The Long Tail, Wired.com, Oct, 2004,
https://www.wired.com/2004/10/tail/
[Chart: Example of Cross-Subsidization; a single tentpole hit offsets losses elsewhere, yielding
net positive revenue across the portfolio; axis range -$1,000,000 to $1,500,000]
IV. Nonscalable Randomness
The Long Tail**
Dutch Engineering Genius in Risk Management
Much of Holland is below the level of the North Sea
IV. Nonscalable Randomness
Flood Control in Holland
- For centuries, Holland has survived by virtue of an extensive dyke and levee system
- Entire system is currently engineered to withstand a 1 in 10,000 year storm or event, as
predicted using extreme value models
- Due to global warming, Holland is in the process of rebuilding the levees protecting
Amsterdam and Rotterdam to a 1 in 100,000 year event
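What a "1 in 10,000 year" design standard means can be made concrete: it is an annual exceedance probability of 1/10,000, which still implies roughly a 1% chance of at least one such event over a century (a sketch assuming independent years):

```python
def prob_exceed(return_period_years, horizon_years):
    """Probability of at least one exceedance of a 1-in-N-year event
    over a planning horizon, assuming independent years."""
    p_annual = 1.0 / return_period_years
    return 1.0 - (1.0 - p_annual) ** horizon_years

# A 1-in-10,000-year standard carries ~1% risk over a century;
# 1-in-100,000 cuts that to ~0.1%
print(round(prob_exceed(10_000, 100), 4))
print(round(prob_exceed(100_000, 100), 5))
```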
Market Returns Don’t Scale
Under Gaussian assumptions, daily market moves larger than +/-5 sigma (standard deviations)
should almost never be observed, and there isn't enough time in the history of the
universe to expect moves of 20 sigma or more
IV. Nonscalable Randomness
S&P Daily Returns: Actual vs Expected
Under Gaussian Assumptions
1950-2016
Event Probabilities: Gaussian vs Cauchy Assumptions*

Event or Move | Gaussian         | Cauchy**
5-sigma       | 1 in 3.5 million | 1 in 16
10-sigma      | 1 in 1.3x10^23   | 1 in 32
20-sigma      | 1 in 3.6x10^88   | 1 in 63
30-sigma      | 1 in 2.0x10^197  | 1 in 94
- The S&P 500 recorded at least 18 occurrences of daily returns larger than +/-5 sigma
between 1950 and 2016
- Changing assumptions from thin-tailed, finite Gaussian to fat-tailed, extreme value
probability distribution dramatically reshapes expectations*
*David Hand, The Improbability Principle: Why Coincidences, Miracles,
and Rare Events Happen Every Day, 2013, p. 158
**The Cauchy distribution is symmetric, with location and scale parameters; its mean and variance are undefined
Cauchy has much
thicker tails
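The table's entries can be reproduced directly from the one-sided tail probabilities of the standard normal and standard Cauchy distributions (a standard-library sketch):

```python
import math

def gaussian_tail(k):
    """P(X > k) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(k / math.sqrt(2.0))

def cauchy_tail(k):
    """P(X > k) for a standard Cauchy (CDF = 1/2 + atan(x)/pi)."""
    return 0.5 - math.atan(k) / math.pi

# 1-in-N odds for each move size under each assumption
for k in (5, 10, 20, 30):
    print(k, gaussian_tail(k), round(1 / cauchy_tail(k)))
```

Under the Gaussian a 5-sigma move is roughly a 1-in-3.5-million event; under the Cauchy it is about 1 in 16.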
Financial Crises and Bubbles
There are many types of financial crises including inflation, currency, banking, debt, etc.
IV. Nonscalable Randomness
- Financial bubbles are the most painfully familiar, e.g., 17th c Dutch Tulip mania or the 2008
Downturn
- Can occur whenever asset prices increase exponentially over and above their
fundamental, “real” value
- Timing them is a nontrivial challenge where the best you can do is assign likelihoods
- On the upside, can release large amounts of liquidity and investment capital,
stimulating growth
- On the downside, can be economically and socially catastrophic
Dutch Tulip Mania
1636-37
Source: Ben Thompson, Tulips, Myths and Cryptocurrencies, May 2017,
https://stratechery.com/2017/tulips-myths-and-cryptocurrencies/
The price of the cryptocurrency Bitcoin has skyrocketed this year
Bitcoin May Be Another Bubble
Bitcoin Price Index
2010-Present
IV. Nonscalable Randomness
- Most recently, Bitcoin’s price has pulled back
- Is this a signal of impending collapse?
- Or profit-taking?
Source: Ben Thompson, Tulips, Myths and Cryptocurrencies, May 2017,
https://stratechery.com/2017/tulips-myths-and-cryptocurrencies/
Summary
The Matthew Effect has many analogues all with widespread impact
V. Summary
- As Merton noted, it is a truly ubiquitous social phenomenon and a source of human bias
- Ubiquity does not imply universality much less inevitability
- Post-hoc, statistical controls and models attempting to mitigate the bias don’t work well
- With support, prospective social controls as expressed in budgetary and legislative
policy may mitigate and/or blunt some types of bias, e.g., income and wealth inequality
- Scalability assumes that man and biological life are the measure of all things
- Focus on average, ordinary and typical events
- Exponential distributions and laws of proportionate random growth provide best fit
- Nonscalable randomness and extreme value events are just as ubiquitous
- Rare, large magnitude events do not scale
- Destabilizing, disruptive fundamental drivers of major regime changes
- Lack of fit to scale, overdispersion and tail indexes are key diagnostics
- There is so much we don’t know
Summary
Strategically focused analyses and models face greater challenges than ever
V. Summary
- Strategic thinking and decision-making is more demanding, cross-disciplinary,
computationally complex and global than ever
- Information is moving faster, increasing pace of disruptions and extreme events
- Difficulties in finding accessible and comparable cross-cultural information
- Strategy is integrating new and emerging disciplines including behavioral economics,
experimental economics, network theory, information theory, complex systems theory,
decision, choice and design architecture
- It’s a “post-truth” world
- Deming, “Without data, you’re just another guy with an opinion”
- Regrettably, even with data, you're still just another guy with an opinion
- At best, statisticians and marketing scientists are responding slowly
- Unrealistic expectations of a statistical “magic bullet” in predictive modeling
- Lack of skepticism regarding overhyped claims
- Statistical analysis has never promised certainty
- Hindrances include inertia of outdated theoretical assumptions and models
- Increasing irrelevance of statistical tests of significance wrt Neyman-Pearson
- Difficulties in meeting challenges of massive, unstructured, computationally complex
and “soft” data with current methods and models
Questions
1) Much of EVT is about fitting distributions
- Is this necessary?
- Advent of momentless, information-theoretic models rooted in complexity science and
criteria of mutual information and KL divergence
- Focus on structural comparisons, finding homologues, even for brief time series,
like a longitudinal, information-theoretic PCA
2) Comparing performance and efficiency of algorithms for computationally complex,
massive data using hundreds of millions of features
- Iterative, approximating panel data models and deep learning neural nets are the two most
likely approaches
- Are these approaches comparable?
- Can NNs be made to deliver results comparable to panel data models, e.g., slopes
and rates, elasticities, cross-elasticities, etc.
3) Why are people so gullible in not asking reasonably skeptical questions regarding blatantly
overhyped claims?
- For instance, claims of machine learning classification accuracy rates in excess of 70% or
80%, much less 90%, are commonly made
MIT Pantheon Project*: Mapping Historic Cultural Production
MIT's Pantheon Project is an ambitious, quantitative, carefully curated and manually verified
summary of historic, cultural 'eminence' and productivity
- Information theoretic approach with no statistical or distributional assumptions
- Rigorous criteria based on cutoffs for inclusion as control for bias
- Focus on the subset of cultural production identified as global, meaning that it has
broken the barriers of space, time and language
- E.g., biographies that have presence in more than 25 languages in Wikipedia
Total # People
Ranked=10,967
*Source: Yu, A. Z., et al. (2016). Pantheon 1.0, a manually verified dataset of globally
famous biographies. Scientific Data 2:150075. doi: 10.1038/sdata.2015.75
MIT Pantheon Project*: Trends Over Time
[Charts: % Trends in Membership by Epoch and Trends in Membership by Century,
-4,000 BCE-2010 CE, raw vs log frequency; 20th c share = 55.5%]
*Source: Yu, A. Z., et al. (2016). Pantheon 1.0, a manually verified dataset of globally
famous biographies. Scientific Data 2:150075. doi: 10.1038/sdata.2015.75
Comparisons of two temporal breakouts: broad epochs vs more granular centuries
- The left chart shows that the earliest epoch comprises less than 5% of the total while more than
50% of ranked personages are from the 20th c
- The right chart compares raw vs log frequencies by century
- The logged values demonstrate the choppy nature of the growth rate in inclusion or
membership, perhaps reflecting uneven and asymmetric growth in knowledge or bias
N=10,967 People
MIT Pantheon Project*: The Top 25 Occupations
Top 25 Occupations
88% of Total
Top 25 Occupations by Group
The top occupation is Politician with nearly 1/4th of the total
- Two classes among the top 25 occupations have more women than men
- Singers and Companions
- Arts is the largest occupational group
*Source: Yu, A. Z., et al. (2016). Pantheon 1.0, a manually verified dataset of globally
famous biographies. Scientific Data 2:150075. doi: 10.1038/sdata.2015.75
N=10,967 People
Agencies Have Huge Street Cred Issues
The market caps for the Big Four Agency holding entities are dwarfed by the Big Four Tech* entities
- Tech sector growth outpaced all
- Entertainment performance was mixed
- Agency, Retail and Telecom declined
*Source: Scott Galloway, The Four Horsemen of Tech, https://www.youtube.com/watch?v=XCvwCcEP74Q
Winners and Losers: Dot Com to the Present
Telecoms ruled at the end of the Dot Com bubble while Agencies were slightly larger than Tech
- By 2017, Tech was the overwhelmingly dominant sector and the only sector whose growth
outpaced the indexes
There are Scales Then There are Scales
Physicists estimate that about 4% of the universe consists of “known” matter
- The enormous remainder consists of the unknown: dark matter and dark energy
- “Life” can be partitioned into 3 groups: man, insects and microbes