Information Retrieval 2: Search Behaviour and Search Process
1. Information Retrieval: 2
Search Behaviour and Search Process
Prof Neeraj Bhargava
Vaibhav Khanna
Department of Computer Science
School of Engineering and Systems Sciences
Maharshi Dayanand Saraswati University, Ajmer
2. How People Search
• User interaction with search interfaces differs depending on
  – the type of task
  – the domain expertise of the information seeker
  – the amount of time and effort available to invest in the process
• Marchionini makes a distinction between information lookup and exploratory search
• Information lookup tasks
  – are akin to fact retrieval or question answering
  – can be satisfied by discrete pieces of information: numbers, dates, names, or Web sites
  – can work well for standard Web search interactions
3. How People Search
• Exploratory search is divided into learning and investigating tasks
• Learning search
  – requires more than single query-response pairs
  – requires the searcher to spend time scanning and reading multiple information items
  – involves synthesizing content to form new understanding
• Investigating refers to a longer-term process which
  – involves multiple iterations that take place over perhaps very long periods of time
  – may return results that are critically assessed before being integrated into personal and professional knowledge bases
  – may be concerned with finding a large proportion of the relevant information available
4. How People Search
• Information seeking can be seen as part of a larger process referred to as sensemaking
• Sensemaking is an iterative process of formulating a conceptual representation from a large collection
• Russell et al. observe that most of the effort in sensemaking goes towards the synthesis of a good representation
• Some sensemaking activities interweave search throughout, while others consist of doing a batch of search followed by a batch of analysis and synthesis
5. How People Search
Examples of deep analysis tasks that require sensemaking (in addition to search):
• the legal discovery process
• epidemiology (disease tracking)
• studying customer complaints to improve service
• obtaining business intelligence
6. Classic vs. Dynamic Model
• Classic notion of the information seeking process:
  1. problem identification
  2. articulation of information need(s)
  3. query formulation
  4. results evaluation
• More recent models emphasize the dynamic nature of the search process
  – Users learn as they search
  – Their information needs adjust as they see retrieval results and other document surrogates
• This dynamic process is sometimes referred to as the berry-picking model of search
7. Classic vs. Dynamic Model
• The rapid response times of today's Web search engines allow searchers
  – to look at the results that come back
  – to reformulate their query based on these results
• This kind of behavior is a commonly observed strategy within the berry-picking approach
  – Sometimes it is referred to as orienteering
• Jansen et al. analysed search logs and found that the proportion of users who modified their queries is 52% (a minimal log-analysis sketch follows below)
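The kind of log analysis reported by Jansen et al. can be sketched in a few lines of code. The example below is only illustrative: it assumes a hypothetical log of (user, timestamp, query) records and a 30-minute session cutoff, neither of which comes from the original study, and it simply reports the fraction of users who changed their query within a session.

```python
from collections import defaultdict

# Hypothetical search log: (user_id, timestamp_in_seconds, query) tuples.
log = [
    ("u1", 0,    "laser printers"),
    ("u1", 40,   "hp laser printers 9750"),   # reformulation within a session
    ("u2", 10,   "python tutorial"),
    ("u3", 5,    "berry picking model"),
    ("u3", 9000, "weather today"),            # long gap: treated as a new session
]

SESSION_GAP = 30 * 60  # assumed session cutoff: a 30-minute pause starts a new session


def reformulation_rate(log):
    """Proportion of users who modified a query within a single session."""
    by_user = defaultdict(list)
    for user, ts, query in sorted(log, key=lambda r: (r[0], r[1])):
        by_user[user].append((ts, query.strip().lower()))

    reformulating_users = 0
    for user, events in by_user.items():
        modified = False
        for (prev_ts, prev_q), (ts, q) in zip(events, events[1:]):
            same_session = ts - prev_ts <= SESSION_GAP
            if same_session and q != prev_q:
                modified = True
                break
        reformulating_users += modified
    return reformulating_users / len(by_user)


print(f"{reformulation_rate(log):.0%} of users modified their queries")
# -> 33% on this toy log (Jansen et al. report 52% on real logs)
```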
8. Classic vs. Dynamic Model
• Some seeking models cast the process in terms of strategies and how choices for next steps are made
  – In some cases, these models are meant to reflect conscious planning behavior by expert searchers
  – In others, the models are meant to capture the less planned, potentially more reactive behavior of a typical information seeker
9. Navigation vs. Search
• Navigation: the searcher looks at an information structure and browses among the available information
• This browsing strategy is preferable when the information structure is well matched to the user's information need
  – it is mentally less taxing to recognize a piece of information than it is to recall it
  – it works well only so long as appropriate links are available
• If the links are not available, then the browsing experience might be frustrating
10. Navigation vs. Search
• Spool discusses an example of a user drilling down to a specific item
• Say the user first clicks on printers, then laser printers, then the following sequence of links:
  – HP laser printers
  – HP laser printers model 9750
  – software for HP laser printers model 9750
  – software drivers for HP laser printers model 9750
  – software drivers for HP laser printers model 9750 for the Win98 operating system
• This kind of interaction is acceptable when each refinement makes sense for the task at hand (see the browsing sketch below)
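Drill-down browsing of this kind can be pictured as a walk down a category tree. The sketch below is a toy illustration, not any real site's catalogue: the nested-dictionary taxonomy and category names are assumptions mirroring Spool's printer example, and the small helper shows both the smooth case and the frustrating missing-link case.

```python
# Hypothetical product taxonomy mirroring Spool's drill-down example.
taxonomy = {
    "printers": {
        "laser printers": {
            "HP laser printers": {
                "HP laser printers model 9750": {
                    "software for model 9750": {
                        "drivers for model 9750 (Win98)": {},
                    }
                }
            }
        }
    }
}


def browse(tree, clicks):
    """Follow a sequence of link clicks; return the choices offered next,
    or None if a link in the path does not exist (the frustrating case)."""
    node = tree
    for link in clicks:
        if link not in node:
            return None          # no appropriate link available
        node = node[link]
    return list(node.keys())     # links available at this point in the structure


print(browse(taxonomy, ["printers", "laser printers", "HP laser printers"]))
# -> ['HP laser printers model 9750']   (each refinement matches the task)
print(browse(taxonomy, ["printers", "inkjet printers"]))
# -> None                               (missing link: browsing breaks down)
```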
11. Search Process
• Numerous studies have been made of people engaged in the search process
• The results of these studies can help guide the design of search interfaces
• One common observation is that users often reformulate their queries with slight modifications
• Another is that searchers often search for information that they have previously accessed
  – Users' search strategies differ when searching over previously seen materials
• Researchers have developed search interfaces that support both query history and revisitation (a minimal sketch follows below)
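Query history and revisitation support come down to remembering what the searcher has already asked and already opened. The sketch below is a minimal assumed design rather than the interface of any particular system: the SearchHistory class, its method names, and the document ids are all hypothetical, and it only shows how past queries can be replayed and previously seen results flagged.

```python
from datetime import datetime


class SearchHistory:
    """Minimal query-history / revisitation support (illustrative design only)."""

    def __init__(self):
        self.queries = []        # (timestamp, query) in the order issued
        self.seen_docs = set()   # ids of documents the user has opened

    def record_query(self, query):
        self.queries.append((datetime.now(), query))

    def record_click(self, doc_id):
        self.seen_docs.add(doc_id)

    def recent_queries(self, n=5):
        """Last n queries, most recent first, e.g. for a history dropdown."""
        return [q for _, q in reversed(self.queries[-n:])]

    def annotate_results(self, results):
        """Mark each result id as previously visited or new."""
        return [(doc_id, doc_id in self.seen_docs) for doc_id in results]


history = SearchHistory()
history.record_query("berry picking model")
history.record_click("doc42")
history.record_query("orienteering search behaviour")

print(history.recent_queries())
print(history.annotate_results(["doc42", "doc7"]))
# -> [('doc42', True), ('doc7', False)]  (doc42 is flagged as already seen)
```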
12. Search Process
• Studies also show that it is difficult for people to determine whether or not a document is relevant to a topic
  – The less users know about a topic, the poorer judges they are of whether a search result is relevant to that topic
• Other studies found that searchers tend to look at only the top-ranked retrieved results
  – Further, they are biased towards thinking the top one or two results are better than those beneath them
13. Search Process
(Source: Chap. 02, User Interfaces for Search, Baeza-Yates & Ribeiro-Neto, Modern Information Retrieval, 2nd Edition)
• Studies also show that people are poor at estimating how much of the relevant material they have found
• Other studies have assessed the effects of knowledge of the search process itself
  – These studies have observed that experts use different strategies than novice searchers
• For instance, Tabatabai et al. found that
  – expert searchers were more patient than novices
  – this positive attitude led to better search outcomes
14. Assignment
• How do we use search behaviour to improve IR?
• Explain the search process and its role in IR