This document discusses query modeling using non-relevance information for relevance feedback. It presents a model that estimates a relevant model from relevant documents and a non-relevant model from non-relevant documents. When expanding the initial query, terms are weighted by their probability under the relevant model normalized by their probability under the non-relevant model. For the TREC 2008 Relevance Feedback Track, parameters are optimized on held-out topics and runs are submitted using the top 10 terms from the expanded query model.
Query Modeling Using Non-relevance Information - TREC 2008 Relevance Feedback Track Talk - Edgar Meij (Univ. of Amsterdam)
1. The University of Amsterdam at the TREC 2008 Relevance Feedback Track
Query Modeling Using Non-relevance Information
Edgar Meij, W. Weerkamp, J. He, and M. de Rijke
ISLA, University of Amsterdam
http://ilps.science.uva.nl
TREC 2008
2. Outline
• Introduction
• Model
• Experiments
• Conclusion
3. Motivation
• Pseudo-relevance feedback approaches generally assume a term's non-relevance status is implicitly indicated by its absence
• How should we interpret explicit non-relevance information in a generative language modeling setting?
4. Retrieval Model
• Documents are ranked according to the KL-divergence between a query model and each document model:

  Score(D,Q) = − Σ_{t∈V} P(t|θQ) log [ P(t|θQ) / P(t|θD) ]

  which, since the query-entropy term is constant across documents, is rank-equivalent to

  Σ_{t∈V} P(t|θQ) log P(t|θD)

• Document models are smoothed using a reference corpus
• We use Jelinek-Mercer smoothing:

  P(t|θD) = (1 − λD) P(t|D) + λD P(t)
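As a concrete sketch of the retrieval model above (illustrative Python, not the authors' code; names like `jm_doc_model` are my own), the Jelinek-Mercer smoothed document model and the rank-equivalent KL score could be implemented as:

```python
import math
from collections import Counter

def jm_doc_model(doc_tokens, collection_prob, lam=0.5):
    """Jelinek-Mercer smoothed document model:
    P(t|theta_D) = (1 - lam) * P(t|D) + lam * P(t)."""
    counts = Counter(doc_tokens)
    total = len(doc_tokens)
    def p(t):
        # Back off to a tiny floor for terms unseen in the collection.
        return (1 - lam) * counts[t] / total + lam * collection_prob.get(t, 1e-9)
    return p

def score(query_model, doc_model):
    """Rank-equivalent KL score: sum over t of P(t|theta_Q) * log P(t|theta_D)."""
    return sum(p_q * math.log(doc_model(t)) for t, p_q in query_model.items())
```

A document that matches the query terms then receives a higher score than one that does not, since its smoothed model places more mass on those terms.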
6. Query Modeling
• Assumption: the better the query model reflects the information need, the better the results
• Baseline: each query term is equally important and receives an equal probability mass (set A)

  P(t|θQ) = P(t|Q) = c(t,Q) / |Q|

• Cast pseudo-relevance feedback as query model updating:

  P(t|θQ) = (1 − λQ) P(t|Q) + λQ P(t|θ̂Q)

• Smooth the initial query by adding and (re)weighting terms
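The baseline query model and the updating step above can be sketched as follows (illustrative Python under my own naming; `expanded` stands for the P(t|θ̂Q) model estimated later in the talk):

```python
from collections import Counter

def mle_query(query_tokens):
    """Baseline query model (set A): P(t|Q) = c(t,Q) / |Q|."""
    counts = Counter(query_tokens)
    return {t: c / len(query_tokens) for t, c in counts.items()}

def interpolate_query(initial, expanded, lam_q=0.5):
    """Query model updating:
    P(t|theta_Q) = (1 - lam_q) * P(t|Q) + lam_q * P(t|theta_Q_hat)."""
    terms = set(initial) | set(expanded)
    return {t: (1 - lam_q) * initial.get(t, 0.0) + lam_q * expanded.get(t, 0.0)
            for t in terms}
```

If both input models are proper distributions, the interpolated query model again sums to one.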
8. Outline
• Introduction
• Model
• Experiments
• Conclusion
9. (Non-)Relevant Models
• Relevant model estimated using interpolated MLE on the set of relevant documents:

  P(t|θR) = δ1 P(t) + (1 − δ1) P(t|R)
          = δ1 P(t) + (1 − δ1) [ Σ_{D∈R} P(t|D) ] / |R|

• Non-relevant model likewise:

  P(t|θ¬R) = δ2 P(t) + (1 − δ2) P(t|¬R)
           = δ2 P(t) + (1 − δ2) [ Σ_{D∈¬R} P(t|D) ] / |¬R|
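Since the relevant and non-relevant estimates share the same form, one interpolated-MLE routine can serve for both θR and θ¬R. A minimal sketch, assuming documents are given as token lists and `collection_prob` is the background model P(t) (names are my own):

```python
from collections import Counter

def set_model(docs, collection_prob, delta):
    """Interpolated MLE over a document set X (theta_R or theta_notR):
    P(t|theta_X) = delta * P(t) + (1 - delta) * (1/|X|) * sum_{D in X} P(t|D)."""
    avg = Counter()
    for doc in docs:
        counts = Counter(doc)
        for t, c in counts.items():
            # Average the per-document MLE P(t|D) over the set.
            avg[t] += (c / len(doc)) / len(docs)
    vocab = set(avg) | set(collection_prob)
    return {t: delta * collection_prob.get(t, 0.0) + (1 - delta) * avg.get(t, 0.0)
            for t in vocab}
```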
10. Our Model
In order to arrive at an expanded query model θ̂Q, we sample terms proportional to the following:
• Each term is sampled according to the probability of observing that term in each relevant document
• For each relevant document, adjust the probability mass of each term by
  • the probability of occurring given the relevant model
  • normalized by its probability given the non-relevant model
12. Normalized Log-Likelihood Ratio

  NLLR(D|R) = H(θD, θ¬R) − H(θD, θR)
            = Σ_{t∈V} P(t|θD) log [ P(t|θR) / P(t|θ¬R) ]
            = Σ_{t∈V} P(t|θD) log [ ((1 − δ1) P(t|R) + δ1 P(t)) / ((1 − δ2) P(t|¬R) + δ2 P(t)) ]

• Measures how much better the relevant model can encode events from the document model than the non-relevant model
• If a term has a high probability of occurring in θR / θ¬R it is rewarded / penalized
14. Query Model
• Expanded query part:

  P(t|θ̂Q) ∝ Σ_{D∈R} P(t|θD) P(θD|θR)

  where

  P(θD|θR) = NLLR(D|R) / Σ_{D′} NLLR(D′|R)
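Putting the NLLR and the expanded query model together, a self-contained sketch might look like this (illustrative Python, my own naming; for simplicity it assumes every relevant document gets a positive NLLR, and it keeps the k highest-weight terms, mirroring the 10-term submitted runs):

```python
import math

def nllr(doc_model, rel_model, nonrel_model):
    """NLLR(D|R) = sum over t of P(t|theta_D) * log(P(t|theta_R) / P(t|theta_notR))."""
    return sum(p_d * math.log(rel_model[t] / nonrel_model[t])
               for t, p_d in doc_model.items() if p_d > 0)

def expanded_query_model(rel_doc_models, rel_model, nonrel_model, k=10):
    """P(t|theta_Q_hat) proportional to sum_{D in R} P(t|theta_D) * P(theta_D|theta_R),
    with P(theta_D|theta_R) = NLLR(D|R) / sum_{D'} NLLR(D'|R).
    Keeps the k highest-weight terms, renormalized (assumes positive NLLR values)."""
    weights = [nllr(m, rel_model, nonrel_model) for m in rel_doc_models]
    z = sum(weights)
    scores = {}
    for m, w in zip(rel_doc_models, weights):
        for t, p_d in m.items():
            scores[t] = scores.get(t, 0.0) + p_d * (w / z)
    top = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]
    norm = sum(v for _, v in top)
    return {t: v / norm for t, v in top}
```

Terms that are frequent in relevant documents and better explained by θR than by θ¬R end up with the most probability mass in the expanded query.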
15. Outline
• Introduction
• Model
• Experiments
• Conclusion
16. Experimental Setup
• Preprocessing
  • Porter stemming
  • Stopwords removed
• Training
  • Optimize MAP on held-out set (odd-numbered topics)
  • Sweep over free parameters: λD, λQ, δ1 for P(t|θR), δ2 for P(t|θ¬R)
• Submitted runs
  • Used the 10 terms with the highest P(t|θ̂Q)
  • met6: uses the non-relevant documents
  • met9: substitutes the non-relevant model with the collection model
17. statMAP

          A        B        C        D        E
  met6    0.2289   0.2595   0.2750   0.2758   0.2822
  met9    0.2289   0.2608   0.2787   0.2777   0.2810

Markers indicate a statistically significant difference with the previous set at the 0.01 level, tested using a Wilcoxon test.
18. 31 TREC Terabyte topics

             MAP      P5       P10
  A          0.1364   0.2516   0.2452
  met6  B    0.1726   0.3161   0.3194
  met6  C    0.1682   0.3032   0.2968
  met6  D    0.1746   0.3097   0.3065
  met6  E    0.1910   0.3935   0.3645
  met9  B    0.1769   0.3161   0.3194
  met9  C    0.1699   0.3161   0.3032
  met9  D    0.1738   0.4000   0.3710
  met9  E    0.1959   0.2903   0.2871

Markers indicate a statistically significant difference with the baseline (set A) at the 0.05 / 0.01 level, respectively.
19. 31 TREC Terabyte topics, set E
<num>814</num>
<title>Johnstown flood</title>
<desc>Provide information about the Johnstown Flood in Johnstown, Pennsylvania</desc>

[Bar chart of the top expanded-query terms: flood, johnstown, dam, club, water, noaa, gov, sir, www, time]

            AP       P10
  baseline  0.3366   0.3000
  met6      0.7853   1.0000
20. 31 TREC Terabyte topics, set E
<num>808</num>
<title>North Korean Counterfeiting</title>
<desc>What information is available on the involvement of the North Korean Government in counterfeiting of US currency</desc>

[Bar chart of the top expanded-query terms: north, korean, counterfeit, korea, state, drug, weapon, countri, nuclear, traffick]

            AP       P10
  baseline  0.2497   0.6000
  met6      0.0096   0.0000
23. 31 TREC Terabyte topics, set E

[Line chart: MAP (y-axis, ≈0.1900–0.2040) as a function of the number of expansion terms (x-axis, 5–105)]
24. Conclusion and Future Work
• Conclusion
  • Modeled (non-)relevant documents as separate models and created a query model by sampling proportional to the NLLR of these models
  • Results improve over the baseline
  • Non-relevance information does not help significantly
• Future work
  • Further analysis
  • Compare with other, established RF methods
  • Set/estimate λQ based on relevance information (amount, confidence)
25. Questions?
Edgar.Meij@uva.nl
http://www.science.uva.nl/~emeij