Presentation at 10th International Workshop on Simulation and Statistics. A novel method for the estimation of the underlying signal of multiple ranked lists.
1. Estimation of the Latent Signals for Consensus Across Multiple Ranked Lists using Convex Optimisation
Luca Vitale, Michael G. Schimek
Department of Economics and Statistics, Università degli Studi di Salerno, Italy
and
Institute for Medical Informatics, Statistics and Documentation, Medical University of Graz, Austria
SimStat, Salzburg, 2-6 September 2019
L. Vitale, M. G. Schimek Consensus across ranked lists using convex optimisation
2. The data-analytic problem
We have access only to the rankings of objects, not to the data that informed the assessors' decisions leading to those rankings.
Example: p = 5 (# of objects), n = 4 (# of assessors)
3. What do we aim at?
Let us have a set of p distinct objects.
Let us assume n independent assessments assigning rank positions to the same set of objects.
These objects are ranked between 1 and p, without ties.
Our aim is the estimation of the signal parameters which determine the realised rank assignments.
Practical problem: probabilistic models, especially of Bayesian type, require computationally highly demanding stochastic optimisation techniques.
Consequence: only rather small sets of objects can be handled, and the majority of models cannot solve p ≫ n problems.
Please note: rank aggregation is a different task, carried out under the assumption that the observed rankings are 'correct'.
4. Necessary assumptions for the proposed method overcoming current limitations
We assume

    X_{ij} = θ_i^{true} + Z_{ij},

where the real-valued parameters θ_i^{true} are to be estimated, and the Z_{ij} are arbitrary random variables (the object-specific noise of each assessor).
The parameters θ_i^{true} represent the (normalised) 'true' consensus signals underlying the assessments.
Random variables X_{1j}, ..., X_{pj} are observed by the j-th assessor.
These random variables are ordered X_{π(1),j} > ... > X_{π(p),j} and define the ranked list produced by the j-th assessor.
5. Proposed method for signal estimation
The goal of consensus across the assessors (rankers) is achieved by the reduction of the global rank-order-induced noise Z.
The minimum-noise result is used for indirect inference to estimate the unobserved signals θ_i informing the consensus ranks.
The solution is obtained by convex optimisation techniques.
Two types of penalisation are applied, linear and quadratic, where b denotes the penalty parameter.
We consider two different approaches for handling the necessary constraints: a full method and a reduced method, the latter for higher computational efficiency.
6. Convex optimisation

    min_x   c^T x
    s.t.    A x ≤ b
            x ≥ 0        (1)

where:
c is a real t-dimensional vector, where t equals the number of variables
A is an m × t real matrix, where m is the number of constraints
b is an m-dimensional real vector and represents the penalisation for each constraint
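The generic program (1) can be reproduced with any LP solver. Below is a minimal sketch using SciPy's `linprog` as a freely available stand-in for Gurobi; the toy values of c, A and b are illustrative assumptions, not data from the talk.

```python
# Toy instance of problem (1): min c^T x  s.t.  A x <= b,  x >= 0,
# solved with SciPy's linprog (HiGHS backend) instead of Gurobi.
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -2.0])     # t = 2 variables (equivalent to maximising x1 + 2*x2)
A = np.array([[1.0, 1.0],      # m = 2 constraints
              [-1.0, 2.0]])
b = np.array([4.0, 2.0])

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")
print(res.x)  # optimal vertex of the feasible polytope, here (2, 2)
```

The HiGHS backend returns a vertex (basic) solution, in the spirit of the simplex approach the authors mention for Gurobi.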
7. Software used for convex optimisation
Gurobi1 is powerful mathematical programming solver available
for Linear Programming (LP) and Quadratic Programming (QP)
problems
In order to solve the optimisation problem, the simplex method
is used.
1
Gurobi Optimizer Reference Manual
8. Full method - example for construction of constraints
Constraints for one item swap:
assessor 1
θ3 + z(3,1) − θ2 − z(2,1) ≥ b
θ3 + z(3,1) − θ1 − z(1,1) ≥ b
θ2 + z(2,1) − θ1 − z(1,1) ≥ b
assessor 2
θ3 + z(3,2) − θ1 − z(1,2) ≥ b
θ3 + z(3,2) − θ2 − z(2,2) ≥ b
θ1 + z(1,2) − θ2 − z(2,2) ≥ b
# of variables: n × p + p    # of constraints: n × p(p−1)/2
19. The objective function
We consider two kinds of minimisation:
Linear optimisation (LP), based on the sum of the support variables z(i,j):

    Σ_{i=1..p} Σ_{j=1..n} z(i,j)

Quadratic optimisation (QP), based on the sum of the squared support variables z(i,j):

    Σ_{i=1..p} Σ_{j=1..n} z(i,j)²

The minimisation of the objective function permits an automatic adaptation of the individual signals θ towards their consensus signal θ̂, which represents the observed individual rankings in an optimal way.
20. Full method - general formulation (linear case)

    minimize_x   Σ_{i=1..p} Σ_{j=1..n} z(i,j)

    subject to   θπ(1,j) + z(π(1,j),j) − θπ(2,j) − z(π(2,j),j) ≥ b
                 θπ(1,j) + z(π(1,j),j) − θπ(3,j) − z(π(3,j),j) ≥ b
                 ...
                 θπ(1,j) + z(π(1,j),j) − θπ(p,j) − z(π(p,j),j) ≥ b
                 θπ(2,j) + z(π(2,j),j) − θπ(3,j) − z(π(3,j),j) ≥ b
                 θπ(2,j) + z(π(2,j),j) − θπ(4,j) − z(π(4,j),j) ≥ b
                 ...
                 θπ(2,j) + z(π(2,j),j) − θπ(p,j) − z(π(p,j),j) ≥ b
                 ...
                 θi ≥ 0,      i = 1, ..., p
                 z(i,j) ≥ 0,  i = 1, ..., p,  j = 1, ..., n
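As a hedged illustration of the formulation above, the constraint rows of the full method could be assembled as follows; the variable layout, the `full_constraints` name, and the toy orderings are assumptions made for this sketch, not the authors' code.

```python
# Sketch: assemble the full-method constraint rows for given rankings.
# Variable layout: x = (theta_1..theta_p, z_(1,1)..z_(p,1), ..., z_(1,n)..z_(p,n));
# each row encodes theta_pi(k,j) + z_(pi(k,j),j) - theta_pi(l,j) - z_(pi(l,j),j) >= b.
from itertools import combinations

def full_constraints(orderings, p):
    n = len(orderings)
    rows = []
    for j, pi in enumerate(orderings):          # pi lists objects best-first (1-based)
        for k, l in combinations(range(p), 2):  # every pair k < l of rank positions
            row = [0.0] * (p + n * p)
            hi, lo = pi[k] - 1, pi[l] - 1
            row[hi] += 1.0                      # + theta of the higher-ranked object
            row[p + j * p + hi] += 1.0          # + its support variable z
            row[lo] -= 1.0                      # - theta of the lower-ranked object
            row[p + j * p + lo] -= 1.0          # - its support variable z
            rows.append(row)
    return rows

# Toy example in the spirit of slide 8: two assessors, three objects
rows = full_constraints([[3, 2, 1], [3, 1, 2]], p=3)
print(len(rows))     # n * p(p-1)/2 = 6 constraints
print(len(rows[0]))  # n*p + p = 9 variables
```

Each row has coefficients +1, +1, −1, −1, matching the slide's pairwise constraints; stacking the rows gives the matrix A of program (1).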
21. Reduced method - example for construction of constraints
Constraints for one item swap:
assessor 1
θ3 + z(1,1) − θ2 ≥ b
θ2 + z(2,1) − θ1 ≥ b
assessor 2
θ3 + z(1,2) − θ1 ≥ b
θ1 + z(2,2) − θ2 ≥ b
# of variables: n × (p − 1) + p    # of constraints: n × (p − 1)
30. Reduced method - general formulation (linear case)

    minimize_x   Σ_{i=1..p−1} Σ_{j=1..n} z(i,j)

    subject to   θπ(1,j) + z(1,j) − θπ(2,j) ≥ b
                 θπ(2,j) + z(2,j) − θπ(3,j) ≥ b
                 ...
                 θπ(p−1,j) + z(p−1,j) − θπ(p,j) ≥ b
                 θi ≥ 0,      i = 1, ..., p
                 z(i,j) ≥ 0,  i = 1, ..., p − 1,  j = 1, ..., n
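For comparison with the full method, the reduced method constrains only adjacent rank positions and uses one support variable per position. A sketch under the same illustrative assumptions as before (layout and function name are mine, not the authors'):

```python
# Sketch: reduced-method constraint rows; only adjacent pairs in each ordering.
# Variable layout: x = (theta_1..theta_p, z_(1,1)..z_(p-1,1), ..., z_(1,n)..z_(p-1,n));
# each row encodes theta_pi(k,j) + z_(k,j) - theta_pi(k+1,j) >= b.
def reduced_constraints(orderings, p):
    n = len(orderings)
    rows = []
    for j, pi in enumerate(orderings):   # pi lists objects best-first (1-based)
        for k in range(p - 1):           # adjacent rank positions only
            row = [0.0] * (p + n * (p - 1))
            row[pi[k] - 1] += 1.0              # + theta of the higher-ranked object
            row[p + j * (p - 1) + k] += 1.0    # + position-wise support variable z_(k,j)
            row[pi[k + 1] - 1] -= 1.0          # - theta of the next object
            rows.append(row)
    return rows

# Same toy data as the full-method example: two assessors, three objects
rows = reduced_constraints([[3, 2, 1], [3, 1, 2]], p=3)
print(len(rows))     # n * (p - 1) = 4 constraints
print(len(rows[0]))  # p + n * (p - 1) = 7 variables
```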
31. Full versus reduced method - linear versus quadratic optimisation
Number of variables: n × p + p for the full method, compared to n × (p − 1) + p for the reduced method.
Number of constraints: n × p(p−1)/2 for the full method, compared to n × (p − 1) for the reduced method.
As a consequence, the reduced method offers a substantial gain in numerical efficiency, especially for a large or huge number of objects p.
Linear optimisation estimates only a discrete approximation to the real-valued signals ⇒ an additional bootstrap step is needed.
Quadratic optimisation estimates the real-valued signals directly ⇒ no additional computational step is needed (unless standard errors are required).
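The efficiency gain can be made concrete by plugging a large p into the two constraint counts (the particular p and n below are illustrative, not from the talk):

```python
# Constraint counts from the slide above, for p objects and n assessors.
def full_m(p, n):
    return n * p * (p - 1) // 2   # full method: n * p(p-1)/2

def reduced_m(p, n):
    return n * (p - 1)            # reduced method: n * (p-1)

p, n = 1000, 20
print(full_m(p, n))     # 9,990,000 constraints for the full method
print(reduced_m(p, n))  # 19,980 constraints for the reduced method
```

The full method's constraint count grows quadratically in p, the reduced method's only linearly, which is why the latter scales to huge object sets.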
32. Estimation of θ̂ and its standard error se(θ̂) via bootstrap
Let us select B independent bootstrap samples from the columns of the input ranking matrix, with replacement.
For these bootstrap replicates we estimate the corresponding parameters θ̂*(b).
Then we can estimate the standard error se(θ̂) by the standard deviation of the B replications:

    se_B = { Σ_{b=1..B} [θ̂*(b) − θ̄*]² / (B − 1) }^{1/2},

where

    θ̄* = Σ_{b=1..B} θ̂*(b) / B
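The column (assessor) bootstrap can be sketched as follows. Note that `estimate` is a deliberately simple placeholder (mean rank per object), standing in for the LP/QP estimator, and the ranking matrix is simulated; both are assumptions for the sketch.

```python
# Sketch of the column bootstrap for se(theta_hat).
import numpy as np

rng = np.random.default_rng(0)

def estimate(R):
    # Placeholder estimator (mean rank per object); NOT the authors' LP/QP estimator.
    return R.mean(axis=1)

# Toy p=5, n=4 ranking matrix: each column is an independent permutation of 1..5
R = rng.permuted(np.tile(np.arange(1, 6)[:, None], (1, 4)), axis=0)

B = 500
boot = np.array([
    estimate(R[:, rng.integers(0, R.shape[1], R.shape[1])])  # resample columns
    for _ in range(B)
])
se = boot.std(axis=0, ddof=1)  # sd over the B replications
print(se.shape)  # one standard error per object: (5,)
```

The `ddof=1` divisor matches the (B − 1) in the slide's formula.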
33. The algorithmic workflow
34. Simulation study
simulate for p object values from θ ∼ N(0, 1);
simulate n different sigma values for each assessor from
σ ∼| N(0, 0.42) |;
For each assessor j:
simulate p noises using Zj ∼ N(0, σ2
j )
Xi,j = θi + Zi,j
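The simulation design above can be sketched directly. Two assumptions are made here: "0.4²" is read as a variance (standard deviation 0.4), and rank 1 is assigned to the largest observed value.

```python
# Sketch of the simulation design: true signals, assessor noise, ranking matrix.
import numpy as np

rng = np.random.default_rng(1)
p, n = 20, 20

theta = rng.normal(0.0, 1.0, size=p)           # theta_i ~ N(0, 1)
sigma = np.abs(rng.normal(0.0, 0.4, size=n))   # sigma_j ~ |N(0, 0.4^2)| (sd 0.4 assumed)
Z = rng.normal(0.0, 1.0, size=(p, n)) * sigma  # Z_ij ~ N(0, sigma_j^2)
X = theta[:, None] + Z                         # X_ij = theta_i + Z_ij

# Each assessor ranks the objects by observed X (rank 1 = largest value, assumed)
ranks = (-X).argsort(axis=0).argsort(axis=0) + 1
print(ranks.shape)  # (20, 20) ranking matrix, one column per assessor
```

Each column of `ranks` is a full permutation of 1..p without ties, matching the data-generating setup of the earlier slides.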
36. Computer used
4-core Windows 7 (64 bit), Intel Core i5-3470 CPU @ 3.2 GHz
16 GB RAM, 2133 MHz
38. Execution time of all methods
[Bar chart: mean execution time with 500 bootstrap replicates (200 MC runs) for the (P, N) combinations 20.20, 100.20 and 20.100, comparing the FullLinear, FullQuadratic, ReducedLinear and ReducedQuadratic methods; time axis from 00:00:00 to 03:00:00.]
41. Signal estimation of reduced linear and reduced quadratic method for p=20 and n=20 data
[Two panels, Reduced Quadratic and Reduced Linear: signal value (−2 to 2) plotted against objects 1-20, overlaying theta.true and signal.estimate.]
42. Signal estimation of full linear and full quadratic method for p=20 and n=20 data
[Two panels, Full Quadratic and Full Linear: signal value (−2 to 2) plotted against objects 1-20, overlaying theta.true and signal.estimate.]
43. Execution time for various combinations of n and p
[Bar chart: mean execution time with 500 bootstrap replicates (1000 MC runs) for the (N, P) combinations 20.20, 100.20, 500.20, 1000.20, 20.100, 20.500 and 20.1000, comparing the ReducedLinear and ReducedQuadratic methods; time axis from 00:00:00 to 00:50:00.]
44. Execution time increasing p and n = 20
[Line chart: execution time for one estimation with n = 20 and p increasing from 100 up to roughly 500,000, comparing the reducedQuadratic and reducedLinear methods; time axis from 00:00:00 to 06:00:00.]
45. Summary and conclusions
A convex optimisation approach for the estimation of the consensus signals underlying ranked lists was proposed.
It is computationally more efficient than any stochastic optimisation approach, e.g. MCMC.
The quality of estimation is very promising, even for n ≪ p.
The reduced method is substantially faster and can handle large or even huge data.
Linear
Pros: faster execution
Cons: discrete approximate estimates; a bootstrap step is needed
Quadratic
Pros: precise real-valued estimates; no bootstrap step is needed
Cons: slower execution
46. Further work
Development of an R package for CRAN
Implementation of other bootstrap concepts and comparison with the current one
Generalisation of the method for missing ranks and tied ranks
Detection of irregular assessments