We model the evolution of age-dependent personal income distribution and inequality
as expressed by the Gini ratio. In our framework, inequality is an emergent property of a
theoretical model we develop for the dynamics of individual incomes. The model relates
the evolution of personal income to the individual’s capability to earn money, the size of
her work instrument, her work experience and aggregate output growth. Our model is
calibrated to single-year population cohorts as well as personal income data in 10-
and 5-year age bins available from the March Current Population Survey (CPS). We predict
the dynamics of personal incomes for every single person in the working-age population
in the USA between 1930 and 2011. The model output is then aggregated to construct
annual age-dependent and overall personal income distributions (PID) and to compute the
Gini ratios. The latter are predicted very accurately, to within three decimal places. We show
that the Gini ratio for people with income has been approximately constant since 1930, which is confirmed
empirically. Because of the increasing proportion of people with income between 1947 and
1999, the overall Gini ratio tends to decline slightly over time. The age-dependent
Gini ratios have different trends. For example, the group between 55 and 64 years of age
does not demonstrate any decline in the Gini ratio since 2000. In the youngest age group
(from 15 to 24 years), however, the level of income inequality increases with time. We also
find that in this cohort the average income decreases relative to the age group with the
highest mean income. Consequently, each year it becomes progressively harder for
young people to earn a proportional share of the overall income.
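The Gini ratio at the core of this abstract is a simple functional of the income distribution. A minimal sketch of how one might compute it from an array of simulated personal incomes (illustrative only; this is not the authors' calibration code):

```python
import numpy as np

def gini(incomes):
    """Gini ratio of a non-negative income array.

    Uses the sorted-rank identity
    G = 2 * sum_i i * x_(i) / (n * sum_i x_(i)) - (n + 1) / n,
    which is equivalent to the mean-absolute-difference definition.
    """
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1.0) / n

# Perfect equality gives G = 0; one person holding all income pushes G toward 1.
print(round(gini([20e3, 20e3, 20e3]), 3))   # 0.0
print(round(gini([0, 0, 0, 100e3]), 3))     # 0.75
```

Aggregating such a statistic over model-generated incomes, year by year and age group by age group, is all that is needed to reproduce Gini trajectories like the ones discussed above.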
Differences in earnings between men and women vary across time and countries. Yet explanations for these differences, after taking worker characteristics into account, are lacking. In this research we study the relation between earnings differentials and labor market movements in transition countries. We find that periods of high churning are associated with larger gender wage gaps.
Productivity and inequality effects of rapid labor reallocation (GRAPE)
What happens when an economy faces a shock so severe that it shakes the employment structure? Sure, workers move across sectors and overall productivity might increase, but at what cost? Here we exploit the quasi-natural experiment of the transition to understand the short- and long-run effects of labor market reallocation. Our results suggest that inequality increases in the short run, without a significant increase in productivity. In the long run, the effects appear to be negligible.
Can we really explain worker flows in transition economies? (GRAPE)
We test the validity of Aghion & Blanchard (1994) as well as Caballero & Hammour (1996, 2001) in the context of 26 transition economies over the period 1989-2006. We find that demographics and education can account for a fair share of the shift from public to private employment and from manufacturing to services, as opposed to the actual worker flows between jobs. Whether or not this results in reduced employment at the end of the transition process stems not from the wage-setting mechanism (such as collective bargaining, indexation, etc.) but rather from the policies able to keep older cohorts in employment.
Can we really explain worker flows in transition? (GRAPE)
Labor reallocation in transition economies has been described using relatively simple models in which workers migrate from the less productive public sector to the private sector. While this might be true, it is only part of the story. Other changes happened at the same time, in particular a global shift towards services and generational change. Our presentation explores the relative importance of these changes.
Productivity and inequality effects of rapid labor market reallocation (GRAPE)
Periods of economic distress are usually followed by changes in the productive structure. In our paper, we explore a particularly strong shock, the transition to a market economy, and analyze the short- and long-run effects on inequality.
The selling environment in which a firm produces and sells its product is called a market structure. It is defined by three characteristics:
The number of firms in the market
The ease of entry and exit of firms
The degree of product differentiation
How has Income Inequality and Wage Disparity Between Native and Foreign Sub-p... (Dinal Shah)
This study uses repeated cross-sectional data from 2007 to 2015 to examine in
detail what impact the 2008 recession had on income inequality and wage disparity
in the UK and on its sub-populations. The findings suggest inequality in the UK has
been decreasing since 2007, but that the foreign sub-population experiences a
greater degree of inequality than the native sub-population. Additionally, foreign and
native labour are imperfect substitutes, and the wages of foreign workers are
far more susceptible to economic shocks affecting the UK's labour market. To
the best of my knowledge this is the first paper to conduct research into income
inequality between native and foreign workers.
The automatic adjustment of Spanish Pensions. End-of-project talk delivered in September 2013 at the research seminar of the Bank of Spain. It summarizes a general equilibrium exploration of the effects of the 2013 reform of the Spanish pension system, including a new automatic adjustment mechanism. It also compares the general equilibrium methodology with other methodological approaches.
GDP includes how much we’re spending on things like education, the police and health… But it isn’t designed to measure the outcomes of all that spending – are our kids getting smarter, our streets safer, our hospitals more effective? OECD produces many comparable indicators that can help assess progress. Visit: www.oecd.org/statistics
The European Unemployment Puzzle: implications from population aging (GRAPE)
We study the link between the evolving age structure of the working population and unemployment. We build a large New Keynesian OLG model with a realistic age structure, labor market frictions, sticky prices, and aggregate shocks. Once calibrated to the European economies, we use this model to provide comparative statics across past and contemporaneous age structures of the working population. Thus, we quantify the extent to which the response of labor markets to adverse TFP shocks and monetary policy shocks becomes muted with the aging of the working population. Our findings have important policy implications for European labor markets and beyond. For example, the working population is expected to age further in Europe, whereas the share of young workers will remain robust in the US. Our results suggest a partial reversal of the European-US unemployment puzzle. Furthermore, with an aging population, lowering inflation volatility is less costly in terms of higher unemployment volatility, suggesting that optimal monetary policy should be more hawkish in an older society.
Inflation, Unemployment, and Labor Force: The Phillips Curve and Long-term Pr... (Ivan Kitov)
Inflation, Unemployment, and Labor Force: The Phillips Curve and Long-term Projections for Japan
presented at
Euro Area Business Cycle Network (EABCN)
Inflation Developments after the Great Recession
Eltville (Frankfurt), 6-7 December 2013
Hosted by the Deutsche Bundesbank;
Sponsored by the EABCN
The European Unemployment Puzzle: implications from population aging (GRAPE)
We study the link between the evolving age structure of the working population and unemployment. We build a large New Keynesian OLG model with a realistic age structure, labor market frictions, sticky prices, and aggregate shocks. Once calibrated to the European economy, we quantify the considerable extent to which demographic changes over the last 30 years have contributed to the decline in the unemployment rate. Our findings have important policy implications given the expected aging of the working population in Europe. Furthermore, lowering inflation volatility is less costly in terms of higher unemployment volatility, suggesting that optimal monetary policy is more hawkish in an older society. Our results also hint at a partial reversal of the European-US unemployment puzzle, since the share of young workers in the US is expected to remain robust.
Population aging in Poland is a fact, and the social insurance system therefore had to be reformed. In 1999 the defined-benefit pension system was replaced with a defined-contribution one. Given this, is raising the retirement age still necessary?
Poverty and Discrimination, EC 325 Problem Set 3, April 6, 2017.docx (harrisonhoward80223)
Poverty and Discrimination
EC 325 Problem Set 3
April 6, 2017
1. Briefly define and explain [3 points each]
Medicare
It is government health insurance for the elderly (aged 65+). It covers costs related to sickness (doctors' office visits, drug costs, etc.) and is financed by a payroll tax.
Medicaid
It is government health insurance for the poor, which also covers drug costs and other medical expenses.
Food Stamps (SNAP)
These are benefits issued by the government to the poor, which can be used only to buy food.
Total fertility rate
It is the average number of children a woman would give birth to, assuming the current age-specific birth rates stay constant.
2. The birth rate among unmarried teenagers in the US rose from 22 per 1000 to 31 per 1000 from 1970 to 2010, and the birth rate among all unmarried women in their childbearing years (15-45) rose from 26 to 48 per 1000 during this period. Yet the teen birth rate, the birth rate for all women 15-45 and the total fertility rate have all fallen sharply since 1970.
a. How can both the increases and the decreases be true? [10 points]
b. What changes in behavior explain these patterns?
3. Explain how multiple programs designed to help the poor (Food Stamps/SNAP, TANF, Earned Income Tax Credit, Housing Assistance) can result in very high marginal tax rates on poor recipients of these benefits. [3 points]
4. Means tested transfer programs face conflicting goals of providing enough money for people with no other source of income, not discouraging work, and minimizing total costs. Design 2 programs (benefit for families with no other income, implicit tax rate, maximum income at which individuals are no longer eligible) and explain how well they do in achieving the 3 goals. [5 points]
5. You estimate 2 regressions, where ln L is the natural log of the number of workers in millions and ln W is the natural log of the hourly wage: [25 points]
Labor demand ln L = 2.98 – 0.11 ln W R2 = 0.65
(1.41) (0.03)
Labor supply ln L = 2.63 + 0.10 ln W R2 = 0.58
(1.28) (0.04)
a. Which coefficients are statistically significant and which are not?
b. If the minimum wage is $7.25, how many workers will firms want to hire, how many people will want to work, and what is the unemployment rate in this labor market?
c. If the minimum wage is $10.00, how many workers will firms want to hire, how many people will want to work, and what is the unemployment rate in this labor market?
d. If the average worker in this labor market works 30 hours per week and 50 weeks per year, what are the total earnings of all minimum wage workers at $7.25 and $10.00? Give your answers in billions of dollars.
e. Compare the average annual earnings for all people in this labor market and the unemployment rate when the minimum wage is $7.25 and $10. Given these numbers, should the minimum wage be raised to $10?
f. What percent of the change .
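For parts (b) and (c), quantities demanded and supplied follow directly from exponentiating the two fitted lines. A quick sketch of the arithmetic (the coefficients come from the problem statement; this is not part of the assigned answer):

```python
import math

def labor_demand(w):
    """Workers firms want to hire (millions), from ln L = 2.98 - 0.11 ln W."""
    return math.exp(2.98 - 0.11 * math.log(w))

def labor_supply(w):
    """Workers wanting to work (millions), from ln L = 2.63 + 0.10 ln W."""
    return math.exp(2.63 + 0.10 * math.log(w))

for w in (7.25, 10.00):
    ld, ls = labor_demand(w), labor_supply(w)
    u = (ls - ld) / ls  # unemployed share of those wanting to work
    print(f"W = ${w:5.2f}: demand {ld:5.2f}M, supply {ls:5.2f}M, u = {u:.1%}")
```

At $7.25 demand is roughly 15.8 million against a supply of about 16.9 million (unemployment near 6%); raising the minimum wage to $10.00 lowers demand, raises supply, and roughly doubles the unemployment rate in this market.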
Assessing the consistency, quality, and completeness of the Reviewed Event Bu... (Ivan Kitov)
The Reviewed Event Bulletin (REB) of the IDC includes more than 500,000 events with associated seismic phases. The quality and completeness of these events depend on multistage automatic processing followed by interactive analysis. The IDC raw data archive makes it possible to apply the method of waveform cross correlation (WCC) to assess the similarity between seismic signals associated with REB events, and thus the overall consistency of the bulletin. For cross correlation, we create a global set of master events (ME) in the areas where reliable seismic events are available in the REB. Using only events within 3 degrees of a given ME, we apply Principal Component Analysis to the signals at each associated station. The major components are used to build synthetic MEs. Using real and synthetic MEs, we process continuous data in a specified region with the aim of finding new REB-compatible events that are missing from the REB. The developed method therefore allows testing of REB consistency, quality, and completeness in any specified region or globally. It can also be thought of as an alternative to the manual spot check during an independent review of the REB in routine IDC event analysis, or as an additional tool for the independent reviewer.
Detection and location of small aftershocks using waveform cross correlation (Ivan Kitov)
Aftershock sequences of earthquakes with magnitudes 5.0 and lower are difficult to detect and locate with sparse regional networks. Signals from aftershocks with magnitudes 2 to 3 are usually below the detection thresholds of standard 3-C seismic stations at near-regional distances. For seismic events close in space, the waveform cross correlation (WCC) method reduces the detection threshold by at least one magnitude unit and improves location precision to a few kilometres. Therefore, the WCC method is directly applicable to weak aftershock sequences. Here, we recover seismic activity after the earthquake near the town of Mariupol (Ukraine) that occurred on August 7, 2016. The main shock was detected by many stations of the International Monitoring System (IMS), including the closest primary IMS array stations AKASG (6.62 deg.) and BRTR (7.81), as well as 3-C station KBZ (5.00). The International Data Centre located this event (47.0013N, 37.5427E) and estimated its origin time (08:15:4.1 UTC), magnitude (mb=4.5), and depth (6.8 km). This event was also detected by two array stations of the Institute for Dynamics of Geospheres (IDG) of the Russian Academy of Sciences (RAS): the portable 3-C array RDON (3.28), which is the closest station, and MHVAR (7.96). Using the signals from the main shock at five stations as waveform templates, we calculated continuous traces of the cross correlation coefficient (CC) from the 7th to the 11th of August. We found that the best templates should include all regional phases and thus have lengths from 80 s to 180 s. For detection, we used the standard STA/LTA method with a station-dependent threshold. The accuracy of onset time estimation by the STA/LTA detector based on CC traces is close to one sample, which varies from 0.05 s at BRTR to 0.005 s at RDON and MHVAR. Arrival times of all detected signals were reduced to origin times using the observed travel times from the main shock.
Clusters of origin times are considered as event hypotheses in the phase association procedure. As a result, we found 12 aftershocks with magnitudes between 1.5 and 3.5. These small events were detected neither by the IDC nor by the near-regional network of the Geophysical Survey of the RAS, which has its three closest 3-C stations at distances of 2.2 to 3.5 degrees from the studied earthquake. We also applied a relative location procedure, and all aftershocks were found within a few km of the main shock.
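The detection step used above (CC traces scanned by an STA/LTA detector) can be sketched generically. This toy version (my own illustration; the station parameters and template lengths from the abstract are not reproduced) slides a master template over continuous data, computes the normalized cross correlation coefficient at each lag, and runs a simple centered STA/LTA ratio over the resulting CC trace:

```python
import numpy as np

def cc_trace(data, template):
    """Normalized cross correlation of a master template against continuous data.
    Returns one CC value per candidate alignment (lag)."""
    m = len(template)
    t = template - template.mean()
    tn = np.linalg.norm(t)
    out = np.empty(len(data) - m + 1)
    for i in range(len(out)):
        w = data[i:i + m]
        w = w - w.mean()
        denom = np.linalg.norm(w) * tn
        out[i] = np.dot(w, t) / denom if denom > 0 else 0.0
    return out

def sta_lta(x, nsta, nlta):
    """Centered short-term/long-term average ratio on |x|; peaks flag detections."""
    ax = np.abs(x)
    sta = np.convolve(ax, np.ones(nsta) / nsta, mode="same")
    lta = np.convolve(ax, np.ones(nlta) / nlta, mode="same")
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(lta > 0, sta / lta, 0.0)
```

Embedding a noisy copy of the template in synthetic data and taking the argmax of the CC trace recovers the insertion point to within a sample in this synthetic setting, mirroring the one-sample onset accuracy discussed above.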
Waveform cross correlation: coherency of seismic signals estimated from repea... (Ivan Kitov)
Waveform cross correlation (WCC) is an optimal detection technique for signals from spatially close seismic sources. Observations at various distances from a multitude of sources, in a variety of seismotectonic and geological conditions, demonstrate that signals from close events recorded at common stations are characterized by a high level of similarity. Signals from remote sources are less similar, mainly because of variations in propagation paths. Different parts of a complete seismic wavetrain have different sensitivity to the propagation path: the initial part retains the general characteristics of the source time function, while the shape of later seismic phases is chiefly defined by the propagation path. Here, we investigate the level of similarity between hundreds of signals generated by chemical blasts within a phosphate mine in Jordan and measured by 5 seismic stations at near-regional distances. We have revealed a much higher similarity of the first 3 s to 5 s of signals from different blasts, even at distances of about 20 km, at the same station as well as at different stations. This observation is evidence of high coherency in the initial part of the signals at all stations. We also demonstrate that the observed coherency allows the use of very short (say, 3 s) waveform templates for detection and further phase association of signals based on cross correlation. Longer templates are characterized by larger overall signal specificity, which may reduce the detection threshold and the spatial resolution of the WCC method. However, different propagation paths within the same geological province may have similar transfer functions, producing regular seismic phases with similar shapes independent of source position. This may increase the number of false detections from remote sources. We compare the performance of short and long waveform templates using detection statistics and the results of event hypothesis creation and subsequent event location.
Remote detection of weak aftershocks of the DPRK underground explosions using... (Ivan Kitov)
We have estimated the performance of a discrimination criterion based on the P/S spectral amplitude ratios obtained from six underground tests conducted by the DPRK since October 2006 and six aftershocks induced by the last two explosions. Two aftershocks were detected in routine processing at the International Data Centre of the Comprehensive Nuclear-Test-Ban Treaty Organization. Three aftershocks were detected by a prototype waveform cross correlation tool with the explosions as master events, and one aftershock was found with the aftershocks as master events. Two seismic arrays of the International Monitoring System (IMS), USRK and KSRS, and two non-IMS 3-component stations, SEHB (South Korea) and MDJ (China), were used. With increasing frequency, all stations demonstrate approximately the same level of deviation between the Pg/Lg spectral amplitude ratios of the DPRK explosions and those of their aftershocks. For a single station, simple statistical estimates show that the probability that any of the six aftershocks is not a sample from the explosion population is larger than 99.996% at KSRS, and even larger at USRK. The probability that any of the DPRK explosions is a representative of the aftershock population is extremely small, as defined by a distance of 20 and more standard deviations from the mean explosion Pg/Lg value. For network discrimination, we use the Mahalanobis distance combining the Pg/Lg estimates at three stations: USRK, KSRS and MDJ. At frequencies above 4 Hz, the squared Mahalanobis distance, D2, between the populations of explosions and aftershocks is larger than 100. In the frequency band between 6 and 12 Hz at USRK, the aftershocks' distance from the average explosion exceeds D2 = 21,000. Statistically, the probability of mixing up explosions and aftershocks is negligible. These discrimination results relate only to the aftershocks of the DPRK tests and cannot be directly extrapolated to the population of tectonic earthquakes in the same area.
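The network discrimination statistic described above is the squared Mahalanobis distance of a candidate event's per-station Pg/Lg vector from the explosion population. A generic sketch (the numbers below are synthetic placeholders, not the DPRK measurements):

```python
import numpy as np

def mahalanobis_sq(x, population):
    """Squared Mahalanobis distance D2 of feature vector x from a sample
    population (rows = events, columns = stations).  The pseudo-inverse
    guards against a singular sample covariance."""
    mu = population.mean(axis=0)
    cov = np.cov(population, rowvar=False)
    d = np.asarray(x, dtype=float) - mu
    return float(d @ np.linalg.pinv(cov) @ d)

# Synthetic high-frequency Pg/Lg ratios (log scale) at two stations.
# Explosions cluster at high P/S values; an earthquake-like event sits far below.
explosions = np.array([[2.0, 2.1], [2.1, 2.0], [1.9, 1.95],
                       [2.05, 2.0], [1.95, 2.05]])
aftershock = np.array([0.4, 0.3])
print(mahalanobis_sq(aftershock, explosions))  # large D2 -> not an explosion
```

Because D2 scales with the inverse of the within-population scatter, tightly clustered explosion measurements make even modest Pg/Lg offsets highly significant, which is why values like D2 > 100 translate into negligible misclassification probabilities.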
Investigation of repeated events at Jordan phosphate mine with waveform cross... (Ivan Kitov)
More than 1500 events were measured at 3 seismic stations. Their signals are processed using waveform cross correlation and Principal Component Analysis. The best waveforms and eigenvectors are used for detection.
Investigation of repeated blasts at Aitik mine using waveform cross correlation (Ivan Kitov)
We present results of signal detection from repeated events at the Aitik and Kiruna mines in Sweden, based on waveform cross correlation. Several advanced methods based on tensor Singular Value Decomposition are applied to waveforms measured at the seismic array ARCES, which consists of three-component sensors.
Recovery of aftershock sequences using waveform cross correlation: from catas... (Ivan Kitov)
Description of a software package for signal detection and association using waveform cross correlation. Recovery of aftershock sequences of the largest events: Sumatra 2004 and Tohoku 2011. Finding of a small aftershock of the September 9, 2016 DPRK test.
Testing the global grid of master events for waveform cross correlation with ... (Ivan Kitov)
Abstract
The Comprehensive Nuclear-Test-Ban Treaty’s verification regime requires uniform distribution of monitoring capabilities over the globe. The use of waveform cross correlation as a monitoring technique demands waveform templates from master events outside regions of natural seismicity and test sites. We populated aseismic areas with masters having synthetic templates for predefined sets (from 3 to 10) of primary array stations of the International Monitoring System. Previously, we tested the global set of master events and synthetic templates using IMS seismic data for February 12, 2013, and demonstrated excellent detection and location capability of the matched filter technique. In this study, we test the global grid of synthetic master events using seismic events from the Reviewed Event Bulletin. For detection, we use the standard STA/LTA (SNR) procedure applied to the time series of the cross correlation coefficient (CC). Phase association is based on SNR, CC, and arrival times. Azimuth and slowness estimates based on f-k analysis of cross correlation traces are used to reject false arrivals.
The use of waveform cross correlation for creation of an accurate catalogue o... (Ivan Kitov)
In the current study of mining activity within the Russian platform, we take advantage of the location of, and the historical bulletins/catalogues of mining explosions recorded by, the small-aperture seismic array Mikhnevo (MHVAR). The Institute of Geospheres Dynamics (IDG) of the Russian Academy of Sciences has operated MHVAR (54.950N, 37.767E) since 2004.
The small-aperture seismic array Mikhnevo includes ten vertical stations (solid triangles), with one station in the geometrical centre of the array (C00) and the other nine stations distributed over three circles with radii of 130 m, 320 m, and 600 m. The array aperture is approximately 1.1 km. Two 3-C stations (solid triangles in circles) were added to the outer circle in order to improve the overall station sensitivity (detection threshold) and resolution. All stations are equipped with short-period seismometers SM3-KV, which are characterized by a flat response between 0.8 Hz and 30 Hz and a gain of 180,000 [Vs/m]. Later, a 3-C broadband (BB) station was installed in the centre of the array for surface wave measurements. The array response function (for the 12 vertical channels only) is similar to that of many small-aperture arrays. Such arrays are designed to measure high-frequency signals from regional and near-regional sources with magnitudes above 1.5-2.0.
MHVAR detects regional seismic phases (Pn, Sn, Lg, Rg) from various sources. The figure shows selected waveforms, with source-station distance decreasing from top to bottom. Correspondingly, the length of the records decreases; for the closest mines it is harder to distinguish between the P and S phases.
More than 50 areas at regional and near regional distances with different levels of mining activity have been identified by MHVAR. Since 2004, thousands of events have been reported in the IDG seismic catalogue as mining explosions. The IDG publishes this mining event catalogue as a part of the annual issues of “Earthquakes in Russia”, which is available for the broader geophysical community. The map shows several selected mines at near-regional distances where MHVAR successfully detects events with magnitudes 1.0 and lower. We also show a few selected mines at regional distances with the largest events of magnitude (ML) 2.0 and above. Such events should be also detected by IMS arrays. Joint interpretation of signals detected by MHVAR and IMS arrays allows significant improvements in signal detection, location, characterization and identification of events in the IDG catalogue when the historical data are revisited. The work on joint analysis of the IDG and IMS data is possible under the “Contract for limited access to IMS data and IDC products” between the CTBTO and IDG, which allows obtaining data through 2011.
To begin with, we have chosen blasts with larger magnitudes from the well-known ironstone mine Mikhailovskiy (red circle), which is situated at regional distances somewhere between MHVAR (~330 km) and IMS array AKASG
Big Data solution for CTBT monitoring: CEA-IDC joint global cross correlation ... (Ivan Kitov)
Cross correlation of seismic waveforms from the IDC 15-year archive needs a Big Data solution. Joint efforts are aimed at solving the scientific and technical problems.
Joint interpretation of infrasound, acoustic, and seismic waves from meteorit... (Ivan Kitov)
Sources of signals
Peak energy release. Acoustic (low-amplitude shock) wave
Infrasound source vs. seismic source
Seismic waves: Pn, Lg
Acousto-seismic waves: LR, LQ
Comparison with atmospheric nuclear tests: Love and Rayleigh waves
Comparison with the 1987 Chulym meteorite
Synthetics vs. real waveforms from underground nuclear explosions as master t... (Ivan Kitov)
The cross correlation (CC) and master event technique is efficient in Comprehensive Nuclear-Test-Ban Treaty (CTBT) monitoring. Two primary goals of CTBT monitoring are the detection and location of nuclear explosions; therefore, CC monitoring should be focused on finding such events. The use of physically adequate masters may increase the number of valid events in the Reviewed Event Bulletin (REB) of the International Data Centre by a factor of 2. Inadequate master events may increase the number of irrelevant events in the REB and reduce the sensitivity of the CC technique to valid events. In order to cover the entire earth, including vast aseismic territories, with CC-based nuclear test monitoring, we conducted thorough research and defined the most appropriate real and synthetic master events representing underground explosion sources. A procedure was developed for optimizing the master event simulation based on principal component analysis with bootstrap aggregation as a dimension reduction technique, narrowing the classes of CC templates used in the detection and location process. Actual waveforms and metadata from the DTRA Verification Database were used to validate our approach. The detection and location results based on real and synthetic master events were compared.
Performance of waveform cross correlation using a global and regular grid of ... (Ivan Kitov)
Outline
1. Motivation
2. Global seismic monitoring: IMS
3. Global seismicity: IDC view
4. Global cross correlation grid: a design
5. Cross correlation at teleseismic distances
6. Underground nuclear explosions as master events
7. Synthetic master events
8. Principal and Independent Component Analysis
9. Testing with world seismicity of February 12, 2013
10. The DPRK test of February 12, 2013
Review regional Source Specific Station Corrections (SSSCs) developed for no... (Ivan Kitov)
1. To evaluate the current version of SSSCs installed at the IDC (Russian Federation-United States Joint Calibration Program)
2. To test and evaluate various sets of SSSCs, with the highest emphasis on GT0-2 explosions
3. To develop procedures for SSSC validation
4. To study the location problem in the presence of correlated errors and deviations from Gaussian statistics
5. To select sets or subsets of the proposed SSSCs for installation into routine IDC processing
Global grid of master events for waveform cross correlation: design and testingIvan Kitov
Seismic monitoring of the Comprehensive Nuclear-Test-Ban Treaty requires a uniform coverage of the earth. The global use of waveform cross correlation for monitoring purposes is hindered by the absence of master events outside the zones of seismic activity. To populate the aseismic areas we have studied two principal approaches. Around the seismically active areas, we replicate real events best representing seismicity in a given region and distribute them over a regular grid to distances ~1000 km. These replicated events are called “grand masters”. For remote aseismic areas, we calculate synthetic seismograms for a regular grid of master events and a predefined set of array stations of the International Monitoring System. Both approaches were tested and showed a resolution similar to the use of real events. Considering three types of master events, we have created a regular and uniform grid with approximately 100 km spacing between nodes as obtained from the equilibrium distribution of charged particles over the earth’s surface. We have created three versions of the grid: v0.1 with only synthetic templates, v0.2 with real masters added where possible, and v0.3 with grand masters added. The performance of v0.1 has been assessed by full processing of a few data days.
Key words: waveform cross correlation, master events, seismic monitoring, array seismology, IDC, CTBT
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
The Art of the Pitch: WordPress Relationships and Sales
1. The Dynamics of Personal Income Distribution and Inequality in the United States
Oleg I. Kitov¹ and Ivan O. Kitov²
5th ECINEQ Meeting
Bari, 22 July 2013
¹ Department of Economics and Institute for New Economic Thinking at the Oxford Martin School, University of Oxford
² Russian Academy of Sciences
O.I. Kitov and I.O. Kitov Personal Income Distribution in the U.S. ECINEQ, Bari 2013 1 / 24
2. Overview
We model personal income dynamics, personal income distribution and
inequality in the United States. Note that the measurement unit is an
individual, as opposed to a tax unit or a household. The structure of the
presentation is as follows:
1 Motivation: Inequality through personal income dynamics.
2 Model: An equation for person incomes dynamics.
3 Data: Age-dependent incomes and inequality from tabulated CPS.
4 Calibration: Model parameters implied by CPS data.
5 Results: Age-dependent Gini indices predicted by the model.
3. Motivation: Micro Level
Differences in Personal Incomes
Growing income variance within a cohort as it ages.
Disproportionately high top income shares.
Long tails and skewness to the right.
Lower median earnings relative to the mean.
Varying peak income age for different education levels.
Growing age of peak mean income.
Falling share of income attributed to the youngest cohorts.
4. Motivation: Macro Level
Measurements of Income Distribution and Inequality
Parametrized distributions: logarithmic, exponential, gamma etc.
Top incomes satisfy a power-law distribution.
Increasing top income shares using IRS data (Piketty and Saez, 2003).
No increase using CPS data (Burkhauser et al., 2012).
Age-dependent distributions not modeled.
Macro measures not reconciled with micro facts and theory.
5. Model: Overview
Two-factor model for the evolution of individual money income with work
experience. Two income regions governed by distinct laws:
1 Sub-critical region: a two factor model for personal income growth.
Incomes grow with work experience, reach a peak at a certain point
and then start declining.
2 Super-critical region: if personal income reaches a certain (Pareto)
threshold, it does not follow the two-factor model dynamics, but
rather follows a power law, i.e. a person can obtain any income with
rapidly decreasing probability.
6. Model: Personal Income Growth
The rate of change of personal money income, M(t), for an individual
with work experience t is modeled as a dissipation process that depends on
two independent parameters (latent factors):
1 Ability to earn money (human capital): σ(t)
2 Instrument for earning money (job type): Λ(t)
The differential equation for the evolution of personal income is given by:

dM(t)/dt = σ(t) − (α/Λ(t)) M(t)   (1)

where α is the dissipation coefficient. We also introduce a time dimension,
τ, which represents calendar years. Finally, let Σ(t) = σ(t)/α be the
modified ability to earn money.
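As a sketch, equation (1) can be integrated numerically and checked against the constant-parameter solution given later in equation (7). The parameter values below (σ, Λ, α) are purely illustrative, not the calibrated ones:

```python
import math

# Illustrative (uncalibrated) parameter values:
sigma = 0.6    # σ: ability to earn money
lam = 10.0     # Λ: instrument for earning money
alpha = 0.08   # α: dissipation coefficient

def closed_form(t):
    """Constant-parameter solution M(t) = ΣΛ(1 - exp(-(α/Λ)t)), with Σ = σ/α (eq. 7)."""
    Sigma = sigma / alpha
    return Sigma * lam * (1.0 - math.exp(-(alpha / lam) * t))

# Euler integration of dM/dt = σ - (α/Λ) M  (eq. 1), starting from M(0) = 0:
M, dt, T = 0.0, 0.001, 40.0
for _ in range(int(T / dt)):
    M += dt * (sigma - (alpha / lam) * M)

print(abs(M - closed_form(T)) < 1e-2)  # True: the numerical path tracks eq. (7)
```

With constant parameters the Euler trajectory converges to the analytic curve, and income saturates at ΣΛ = (σ/α)Λ for large t.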
7. Model: Time Dependence of Parameters
Ability to earn money and the instrument are allowed to vary with
experience, t, and calendar years, τ. For simplicity we assume that both
evolve as a square root of aggregate output per capita, Y(τ):

Σ(τ0, t) = Σ(τ0, t0) √(Y(τ)/Y(τ0)) = Σ(τ0, t0) √(Y(τ0 + t)/Y(τ0))   (2)

Λ(τ0, t) = Λ(τ0, t0) √(Y(τ)/Y(τ0)) = Λ(τ0, t0) √(Y(τ0 + t)/Y(τ0))   (3)

Note that the product Σ(τ0, t0)Λ(τ0, t0) evolves with time in line with the
growth of real GDP per capita. We call this product a capacity to earn
money.
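A one-line check of this scaling, using hypothetical growth and initial values: since each factor picks up a factor √(Y(τ)/Y(τ0)), their product scales linearly with output per capita.

```python
import math

# Hypothetical one-year output-per-capita growth: Y(τ)/Y(τ0) = 1.03.
growth = 1.03
Sigma0, Lambda0 = 5.0, 12.0  # illustrative initial Σ and Λ, not calibrated

# Each factor scales as the square root of output per capita (eqs. 2-3):
Sigma1 = Sigma0 * math.sqrt(growth)
Lambda1 = Lambda0 * math.sqrt(growth)

# Their product — the capacity to earn money — therefore scales linearly with Y:
print(abs(Sigma1 * Lambda1 - Sigma0 * Lambda0 * growth) < 1e-9)  # True
```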
8. Model: Relative Parameters
We assume that Σ(τ0, t) and Λ(τ0, t) are bounded above and below and
introduce the corresponding dimensionless variables, which are measured
relative to a person with the minimum values:

S(τ0, t) = Σ(τ0, t) / Σmin(τ0, t)   (4)

L(τ0, t) = Λ(τ0, t) / Λmin(τ0, t)   (5)
9. Model: Distribution of Parameters
We allow the relative initial values of S(τ0, t0) and L(τ0, t0), for any τ0
and t0, to take discrete values from a sequence of integer numbers ranging
from 2 to 30, with a uniform probability distribution over realizations. The
relative capacity for a person to earn money is distributed over the working
age population as the product of independently distributed Si and Lj:

Si(τ0, t) Lj(τ0, t) ∈ { 2×2/900, ..., 2×30/900, 3×2/900, ..., 30×30/900 }   (6)

Each of the 841 combinations of SiLj defines a unique time history of
income rate dynamics. No individual future income trajectory is predefined;
it can only be chosen from the set of 841 predefined individual paths
for each single year of birth, or equivalently initial work year τ0.
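The discrete grid in equation (6) can be enumerated directly; the sketch below only reproduces the combinatorics (29 integer values from 2 to 30 for each factor, normalised by 900):

```python
from itertools import product

# Relative initial values S_i and L_j each take integer values 2..30 (eq. 6).
values = range(2, 31)

# Every (S_i, L_j) pair defines one predefined income path; capacities are
# normalised by the maximum product 30 x 30 = 900.
paths = list(product(values, values))
capacities = [(s * l) / 900.0 for s, l in paths]

print(len(paths))       # 841: the 29 * 29 predefined trajectories
print(max(capacities))  # 1.0: the highest relative capacity, 30x30/900
```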
10. Model: Solution with Constant Parameters
For simplicity, assume that ability and instrument parameters do not
change over time and solve the model analytically. The solution is:
M(t) = ΣΛ [1 − exp(−(α/Λ) t)]   (7)
Personal income rate is an exponential function of:
1 Work Experience, t
2 Capability to earn money, Σ
3 Instrument to earn money, Λ
4 Output per capita growth, Y
5 Dissipation rate, α
11. Model: Solution Continued
Substitute in the product of the relative values S̃i and L̃j, the time-dependent
minimum values Σmin and Λmin, and normalize by the maximum values
Σmax and Λmax, to get M̃ij(t):

M̃ij(t) = ΣminΛmin S̃i L̃j [1 − exp(−(α̃/(Λmin L̃j)) t)]   (8)

where

M̃ij(t) = Mij(t)/(Smax Lmax),   S̃i = Si/Smax,   L̃j = Lj/Lmax,   α̃ = α/Lmax
12. Model: Simulated Income Growth Paths
[Figure: two panels of normalized income vs. work experience, comparing model and real income paths for the 2×2 and 30×30 capacities in 1930 (left) and 2011 (right).]
The evolution of personal income for different capacities for a 75-year-old
person (work experience 60 years) in 1930 and 2011.
13. Model: Income Decay
Average income among the population reaches its peak at some age and
then starts declining. In our model, we set the money earning capability to
zero, Σ(t) = 0, at some critical work experience, t = Tc.
The solution for t > Tc is:

M̃ij(t) = ΣminΛmin S̃i L̃j [1 − exp(−(α̃/(Λmin L̃j)) Tc)] × exp(−(γ̃/(Λmin L̃j))(t − Tc))   (9)

1 The first term is the level of income rate attained at time Tc.
2 The second term represents an exponential decay of the income rate
for work experience above Tc.
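Equations (8) and (9) together define a piecewise income path: growth towards a plateau up to Tc, exponential decay afterwards. The sketch below uses arbitrary, uncalibrated parameter values:

```python
import math

# Arbitrary, uncalibrated parameters for one (S_i, L_j) trajectory:
Sigma_min, Lambda_min = 0.1, 0.2
S_t, L_t = 0.5, 0.5           # S̃i and L̃j: relative values normalised to the maxima
alpha_t, gamma_t = 0.1, 0.05  # α̃ and γ̃
Tc = 40.0                     # critical work experience where Σ(t) is set to zero

def income(t):
    """Growth branch (eq. 8) for t <= Tc, exponential decay branch (eq. 9) after."""
    grow = alpha_t / (Lambda_min * L_t)
    level = Sigma_min * Lambda_min * S_t * L_t
    if t <= Tc:
        return level * (1.0 - math.exp(-grow * t))
    peak = level * (1.0 - math.exp(-grow * Tc))
    decay = gamma_t / (Lambda_min * L_t)
    return peak * math.exp(-decay * (t - Tc))

print(income(20) < income(Tc))  # True: income still rising before Tc
print(income(50) < income(Tc))  # True: income decaying after Tc
```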
14. Model: Simulated Income Paths with Decay
[Figure: normalized income vs. work experience for the 2×2 and 30×30 capacities in 1930 and 2011, including the post-Tc decay.]
The change in the personal income distribution between 1930 and 2011
associated with growing Tc and a larger earning tool, L. The exponential fall
after Tc is taken into account.
15. Pareto Distribution
In order to account for top incomes, which evolve according to a power
law, we need to assume that there exists some critical level of income rate
that separates the two income classes: exponential and Pareto. We refer
to this level as the Pareto threshold income, Mp(t). Any person reaching
the Pareto threshold can obtain any income in the distribution with a
rapidly decreasing probability governed by a power law. The Pareto threshold
evolves in time according to:

Mp(τ) = Mp(τ0) Y(τ)/Y(τ0)   (10)

People with high enough Si and Lj can eventually reach the threshold and
obtain an opportunity to get rich.
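Incomes above the threshold can be sketched with inverse-transform sampling from a Pareto law; the threshold Mp and exponent k below are hypothetical, since the slide gives no numeric values:

```python
import random

random.seed(0)  # reproducibility

# Hypothetical Pareto threshold and tail exponent (the slide gives no numbers):
Mp = 60_000.0   # threshold income separating the exponential and Pareto regimes
k = 2.0         # power-law exponent

def top_income():
    """Inverse-transform draw from P(M > m) = (Mp / m)**k for m >= Mp."""
    u = random.random()                 # u in [0, 1)
    return Mp / (1.0 - u) ** (1.0 / k)  # 1 - u in (0, 1], so no division by zero

draws = [top_income() for _ in range(100_000)]

print(min(draws) >= Mp)  # True: every draw sits above the threshold
# Empirical survival at 2*Mp approximates (1/2)**k = 0.25:
print(abs(sum(d > 2 * Mp for d in draws) / len(draws) - 0.25) < 0.01)
```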
16. Data: CPS Age-dependent Incomes
We use tabulated CPS data from 1947 to 2011.
Age-dependent data is available in 5- and 10-year age cohorts in
income bins.
The number and width of income bins have been revised several times.
Data distorted by top-coding of incomes.
Annual output per capita growth rates are taken from BEA and
Maddison.
Data on population age distribution is taken from the Census Bureau.
17. Data: CPS Tabulated Age-dependent Incomes
[Figure: two panels, 1950-2010, for the working-age population and the 15-24, 25-34, 35-44, 45-54, and 55-64 age groups. Left: proportion of population with income; right: proportion of population in the open-ended income bin.]
1 Left panel: Portion of the population with income in various age groups.
In the group between 15 and 24 years of age, the portion has been
falling since 1979. Notice the break in the distributions between 1977
and 1979 induced by large revisions implemented in 1980.
2 Right panel: The portion of the population in the open-ended high-income
bin.
18. Data: Normalized Age-dependent Mean Income
[Figure: left panel, mean income (2011 dollars) by age for 1948, 1968, 1988, and 2011, with observed peak ages marked; right panel, normalized mean income by age group, 1950-2010.]
1 Left panel: mean income by age in selected years, with the peak in each
year marked.
2 Right panel: Mean income in various age groups normalized to the peak
value in a given year. The age of peak mean income changes with
time.
19. Calibration and Model Predictions
1 Assume every individual in the population starts earning income at
the age of 15.
2 For each year, τ, and work experience, t, calibrate model parameters
to the tabulated age-dependent CPS data, GDP per capita growth
and age-distribution of the population:
Initial values of Σ and Λ
Critical age Tc
Exponents α and γ
Pareto threshold Mp
3 Predict incomes for each individual in a given year, depending on
his/her work experience, based on the assumption of uniform
distribution of ability and instrument over every year of age.
4 Aggregate individual incomes over work experience cohorts to match
the format of 5- and 10-year age cohorts in the tabulated CPS data.
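Step 4 produces simulated individual incomes from which Gini ratios are computed; a minimal Gini routine (mean-difference formula over a sorted sample) might look like:

```python
def gini(incomes):
    """Gini ratio via G = 2·Σ i·x_i / (n·Σ x_i) − (n + 1)/n over sorted incomes."""
    xs = sorted(incomes)
    n = len(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * sum(xs)) - (n + 1.0) / n

# Sanity checks on known cases:
print(gini([1, 1, 1, 1]))  # 0.0: perfect equality
print(gini([0, 0, 0, 4]))  # 0.75: one person holds all income (n = 4)
```

Running the same routine over the population with and without zero-income individuals reproduces the distinction between the "CPS with income" and "CPS all" curves in the following slides.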
20. Results: Predicting Mean Income
[Figure: normalized mean income vs. work experience, model vs. actual income.]
When we aggregate the model and estimate the (normalized) mean
incomes in the 5-year age bins, we can predict the actual data available
from CPS quite accurately. The figure compares model predictions with actual
mean incomes in 1998.
21. Results: Predicting Overall Gini
[Figure: Gini ratio for the working-age population, 1940-2010: model prediction vs. CPS estimates for people with income and for the whole population.]
22. Results: Predicting Age-dependent Gini
[Figure: four panels of age-dependent Gini ratios, 1940-2020, for the 25-34, 35-44, 45-54, and 55-64 age groups; each panel shows the model prediction and CPS estimates for people with income and for the whole population.]
23. Results: Predicting Age-dependent Gini
1 For the age groups with high ratios of individuals with income, 45-54
and 55-64, the three curves are much closer.
2 The two observed curves should converge as the proportion of people
with income approaches unity, which should also result in our model
predictions matching the observations more closely.
24. THANK YOU!