The document summarizes a study comparing reading from paper versus online. It finds that paper offers key advantages for annotation while reading, quick navigation between sections, and flexibility of spatial layout. These features allow readers to better understand text, identify its structure, create an outline, cross-reference other documents, and interleave reading and writing. The online condition made annotation difficult and disrupted the reading flow. Design implications include supporting annotation that is distinct from the base text and allowing fluid movement between documents during reading.
Document Retrieval System, a Case StudyIJERA Editor
In this work we propose a method for automatic indexing and retrieval that returns the documents most likely to be relevant to an input query. The technique used is singular-value decomposition: a large term-by-document matrix is analyzed and decomposed into 100 factors, so that each document is represented by a 100-item vector of factor weights. Queries, in turn, are represented as pseudo-document vectors formed from weighted combinations of terms.
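The abstract's pipeline can be sketched in miniature: decompose a term-by-document matrix, keep the top factors, fold the query in as a pseudo-document, and rank documents by cosine similarity. The tiny corpus, the choice of k=2 factors, and all variable names below are invented for illustration (the paper uses 100 factors).

```python
import numpy as np

# Rows = terms, columns = documents (raw term counts); toy data for the sketch.
A = np.array([
    [2, 0, 1],   # "retrieval"
    [1, 1, 0],   # "index"
    [0, 2, 1],   # "cluster"
], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                        # number of retained factors
Uk, sk = U[:, :k], s[:k]
Dk = Vt[:k, :].T                             # rows = document factor vectors

# Fold a query (term-count vector) into the factor space as a pseudo-document:
# q_hat = q^T * Uk * inv(diag(sk))
q = np.array([1, 1, 0], dtype=float)         # query mentions "retrieval", "index"
q_hat = q @ Uk @ np.diag(1.0 / sk)

# Rank documents by cosine similarity to the pseudo-document vector.
def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine(q_hat, d) for d in Dk]
best = int(np.argmax(scores))                # index of the best-matching document
```

Document 0 contains both query terms, so it should score highest; cosine in the factor space is unaffected by the sign ambiguity of the singular vectors because any flip applies to both the query and document coordinates.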
Regression model focused on query for multi documents summarization based on ...TELKOMNIKA JOURNAL
Document summarization is needed to obtain information effectively and efficiently. One way to
produce document summaries is to apply machine learning techniques. This paper
proposes the application of regression models to query-focused multi-document summarization based on
the significance of sentence position. The method used is Support Vector Regression (SVR), which
estimates the weight of each sentence in a set of documents, based on previously defined sentence
features, to decide which sentences form the summary. A series of evaluations was performed on
the DUC 2005 data set. The resulting summaries achieved average precision and recall of 0.0580 and
0.0590 measured with ROUGE-2, and 0.0997 and 0.1019 measured with ROUGE-SU4. The proposed
regression model measures the significance of sentence position in the document well.
This research proposal outlines an experimental study that will investigate the effects of reading electronic text on screen versus printed text, as well as the impact of different types of electronic text formatting, on undergraduate students' reading comprehension. Students will read an academic text either on screen or in print and complete a comprehension test. The on-screen text will be presented with variations in layout, font, section division, and consistency to determine if these factors influence comprehension. The results could help inform practices in online and traditional classes and support further research on digital literacy.
An Improved Similarity Matching based Clustering Framework for Short and Sent...IJECEIAES
Text clustering plays a key role in navigation and browsing. For efficient text clustering, a large amount of information is grouped into meaningful clusters. Many text clustering techniques do not address issues such as high time and space complexity, inability to capture the relational and contextual attributes of words, limited robustness, and risks of privacy exposure. To address these issues, an efficient text-based clustering framework is proposed. The Reuters dataset is chosen as the input. Once the input dataset is preprocessed, the similarity between words is computed using cosine similarity. The similarities between components are compared and the vector data is created. From the vector data the clustering particles are computed, and mutation is applied to the vector data to optimize the clustering results. The performance of the proposed framework is analyzed using metrics such as Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR) and processing time. Experimental results show that the proposed framework produces better MSE, PSNR and processing time than the existing Fuzzy C-Means (FCM) and Pairwise Random Swap (PRS) methods.
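The similarity step the abstract names is plain cosine similarity between term vectors. A minimal sketch, with toy co-occurrence vectors and a made-up vocabulary:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two term vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy term-context vectors (e.g. co-occurrence counts); values are invented.
stock  = np.array([3.0, 1.0, 0.0])
share  = np.array([2.0, 1.0, 0.0])
banana = np.array([0.0, 0.0, 4.0])

sim_related   = cosine_similarity(stock, share)    # near 1: similar contexts
sim_unrelated = cosine_similarity(stock, banana)   # 0: no shared context
```

Terms appearing in similar contexts score near 1 and end up in the same cluster; orthogonal terms score 0.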
Advantages of Query Biased Summaries in Information RetrievalOnur Yılmaz
Presentation of the paper:
Advantages of query biased summaries in information retrieval (1998)
Anastasios Tombros and Mark Sanderson.
In Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval (SIGIR '98).
ACM, New York, NY, USA, 2-10.
DOI=10.1145/290941.290947
Text mining is a technique that helps users find useful information in large collections of text documents on the web or in databases. Most popular text mining and classification methods adopt term-based approaches, while pattern-based methods describe user preferences. This review paper analyses how text mining works at three levels: the sentence level, the document level, and the feature level. We review the related work done previously and demonstrate the problems that arise when text mining is performed at the feature level. The paper also presents a technique for text mining on compound sentences.
This document discusses digital text, which refers to electronic versions of written texts that can be found online, on computers, or handheld devices. Digital text is more flexible than printed text as it can be searched, rearranged, annotated, and read aloud. The document outlines several types of digital texts, including pre-configured narratives, lists/inventories, database matrices, and self-generated narratives. It also discusses the structure of digital texts as being multi-linear and polyhedral. Applications of digital texts include use by those who have trouble reading standard print or who need additional supports. Designing digital texts involves creating original works, reinventing lesson plans, and allowing for growth and change.
A Domain Based Approach to Information Retrieval in Digital Libraries - Rotel...University of Bari (Italy)
The current abundance of electronic documents requires automatic techniques that support users in understanding document content and extracting useful information. To this aim, improving retrieval performance must go beyond simple lexical interpretation of user queries to an understanding of their semantic content and aims. Any digital library would benefit enormously from effective Information Retrieval techniques to offer its users. This paper proposes an approach to Information Retrieval based on matching the domain of discourse of the query against the documents in the repository. The association is based on standard general-purpose linguistic resources (WordNet and WordNet Domains) and on a novel similarity assessment technique. Although the work is at a preliminary stage, interesting initial results suggest continuing to extend and improve the approach.
A Comparison Of Paper-Based And Online Annotations In The WorkplaceGina Rizzo
This document presents a comparison of paper-based annotations versus online annotations in a workplace setting. It begins with an introduction and overview of annotations and their role in learning. It then discusses different types of annotations based on their form and function. Challenges of online annotations are outlined, such as a lack of physical tangibility compared to paper. The document describes a study that was conducted, including an online annotation study using a tool called SpreadCrumbs and a field study of common paper-based annotation habits. The results provide insights into how to better support active reading and annotations in digital contexts.
A Centroid And Relationship Based Clustering For Organizing Research PapersDaniel Wachtel
This document summarizes a research paper that proposes a new method for organizing research papers using centroid and relationship-based clustering. The method aims to group similar research papers together based on common terms in their titles, keywords, frequent sentences, and referenced titles. Only the most important information from each paper is considered, such as the title, keywords, top frequent sentences and most similar referenced papers, in order to reduce processing time for large collections of papers. The clustering algorithm uses these relationships between papers to assign them to topic clusters in an efficient and effective manner.
IRJET- A Survey Paper on Text Summarization MethodsIRJET Journal
This document summarizes various approaches for automatic text summarization, including extractive and abstractive methods. It discusses early surface-level approaches from the 1950s that identified important sentences based on word frequency. It also reviews corpus-based, cohesion-based, rhetoric-based, and graph-based approaches. The document then examines single document summarization techniques like naive Bayes methods, log-linear models, and deep natural language analysis. It concludes with a discussion of multi-document summarization, including abstraction and information fusion as well as graph spreading activation approaches. The goal of the survey is to provide an overview of the major existing methods that have been used for automatic text summarization.
This document provides an agenda for research on knowledge discovery from web search. It begins with an introduction on knowledge discovery and how search engines can help extract information. It then outlines the goals and objectives, provides a literature review on related work, and discusses some common limitations observed, such as models achieving low accuracy and WSD approaches not being efficient enough. The document serves to provide background and planning for a research study on improving knowledge discovery through web search.
A Comparative Study Of Citations And Links In Document ClassificationWhitney Anderson
This document compares the use of citations and links for document classification in a digital library and web directory. It finds that measures based on co-citation perform better for classifying pages in the web directory, while measures based on bibliographic coupling perform better in the digital library. It also finds that combining text-based and citation/link-based classifiers provides gains for the web directory, but not for the digital library. User studies found that misclassifications by citation/link classifiers are often difficult cases even for humans.
A proactive recommendation system for writing Helping without disrupting.pdfCharlie Congdon
1) The document describes research into developing a Proactive Recommendation System (PRS) to help writers find relevant information while writing, without disrupting the writing process.
2) Two experiments were conducted using IntelliGent, a PRS, to study the effects of presenting information during the writing stages of planning and reviewing. The experiments found that presenting relevant information during these stages can improve writing quality without impairing the writing tasks.
3) Key challenges for PRS identified are: helping writers adopt planning practices to provide context for recommendations, interrupting writers only when relevant information can improve their work, and designing interfaces that integrate external information sources into the writing process without disruption.
Semantic tagging for documents using 'short text' informationcsandit
Tagging documents with relevant and comprehensive keywords offers invaluable assistance to readers in quickly overviewing any document. With the ever increasing volume and variety of documents published on the internet, interest in developing newer and more successful techniques for annotating (tagging) documents is also increasing. However, an interesting challenge in document tagging occurs when the full content of the document is not readily accessible. In such a scenario, techniques which use "short text", e.g., a document title or a news article headline, to annotate the entire article are particularly useful. In this paper, we propose a novel approach to automatically tag documents with relevant tags or key-phrases using only "short text" information from the documents. We employ crowd-sourced knowledge from Wikipedia, Dbpedia, Freebase, Yago and similar open source knowledge bases to generate semantically relevant tags for the document. Using the intelligence from the open web, we prune out tags that create ambiguity in, or "topic drift" from, the main topic of our query document. We have used a real-world dataset from a corpus of research articles to annotate 50 research articles. As a baseline, we used the full text information from the document to generate tags. The proposed and baseline approaches were compared using the author-assigned keywords for the documents as the ground truth. We found that the tags generated using the proposed approach are better than the baseline in terms of overlap with the ground truth tags, measured via the Jaccard index (0.058 vs. 0.044). In terms of computational efficiency, the proposed approach is at least 3 times faster than the baseline approach. Finally, we qualitatively analyse the quality of the predicted tags for a few samples in the test corpus. The evaluation shows the effectiveness of the proposed approach both in terms of quality of tags generated and computational time.
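The evaluation metric named above, the Jaccard index, compares predicted tags against ground-truth tags as sets. A minimal sketch; the tag sets here are invented, not taken from the paper's corpus:

```python
def jaccard(a, b):
    """|A intersect B| / |A union B| for two sets; 0 when both are empty."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

predicted = {"text mining", "tagging", "wikipedia"}
gold      = {"tagging", "semantic annotation"}

score = jaccard(predicted, gold)   # 1 shared tag out of 4 distinct tags
```

With one shared tag out of four distinct tags, the score is 0.25; the paper's reported 0.058 vs. 0.044 reflects how small such overlaps are for free-form author keywords.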
Semantics-based clustering approach for similar research area detectionTELKOMNIKA JOURNAL
The manual process of searching out individuals in an already existing
research field is cumbersome and time-consuming. Prominent and rookie
researchers alike are predisposed to seek existing research publications in
a research field of interest before coming up with a thesis. From
extant literature, automated similar research area detection systems have
been developed to solve this problem. However, most of them use
keyword-matching techniques, which do not sufficiently capture the implicit
semantics of keywords thereby leaving out some research articles. In this
study, we propose the use of ontology-based pre-processing, Latent Semantic
Indexing and K-Means Clustering to develop a prototype similar research area
detection system, that can be used to determine similar research domain
publications. Our proposed system solves the challenge of high dimensionality
and data sparsity faced by the traditional document clustering technique. Our
system is evaluated with randomly selected publications from faculties
in Nigerian universities and results show that the integration of ontologies
in preprocessing provides more accurate clustering results.
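The pipeline the abstract names (minus the ontology-based preprocessing) can be sketched with scikit-learn: TF-IDF vectors reduced by Latent Semantic Indexing (truncated SVD), then grouped with K-Means. The four-document toy corpus and the cluster count are invented for this sketch.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

# Two invented research areas: computer vision vs. agriculture.
docs = [
    "neural networks for image recognition",
    "image classification with neural networks",
    "crop yield prediction for precision agriculture",
    "soil and crop monitoring in agriculture",
]

tfidf = TfidfVectorizer().fit_transform(docs)                    # sparse TF-IDF
lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)  # LSI
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(lsi)
```

Publications from the same research area should end up with the same cluster label; LSI keeps the vectors dense and low-dimensional, which is the abstract's answer to sparsity and high dimensionality.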
This doctoral research explores how users make sense of and use information when seeking information from web-based resources. The research examines user behaviors and strategies as they interact with information sources to identify examples of how users collect, evaluate, understand, interpret, and integrate new information. Three empirical studies were conducted involving participants completing information seeking tasks. The results provide insights into users' information interaction strategies and sensemaking activities. Implications for the design of technologies to better support sensemaking are discussed.
Slidecast - What's wrong with online reading (short)Randy Connolly
Slidecast of the short (50 min) version of my "What's Wrong with Online Reading" presentation. Audio was recorded during a November 2011 guest lecture; unfortunately, some of the vocal commentary for the last dozen slides was lost.
This talk discusses how a wide range of research in web usability, psychology, education, and communication theory provides corroborating evidence that on-line reading is transforming cognition, learning, and the very nature of knowledge in some disturbing ways.
The document presents research on the optimal method for presenting short text passages on wearable devices like smartwatches. It describes an experiment that tested three text presentation methods (scrolling, leading, and rapid serial visual presentation) on an Apple Watch. The experiment measured the effects of the methods on participants' reading comprehension, information retention, and perceived task load. The results showed the presentation methods did not significantly influence comprehension or retention. However, one dimension of task load, mental demand, was significantly affected, with scrolling requiring the least mental demand and leading the most. The findings suggest scrolling may be the optimal method for short texts on wearables.
This summarizes an academic paper that proposes an automatic ontology creation method for classifying research papers. It uses text mining techniques like classification and clustering algorithms. It first builds a research ontology by extracting keywords and patterns from previous papers. It then uses a decision tree algorithm to classify new papers into disciplines defined in the ontology. The classified papers are then clustered based on similarities to group them. The method was tested on a dataset of 100 papers and achieved average precision of 85.7% for term-based and 89.3% for pattern-based keyword extraction.
An in-depth review on News Classification through NLPIRJET Journal
This document provides an in-depth literature review of news classification through natural language processing (NLP). It discusses several existing approaches to news classification, including models that use convolutional neural networks (CNNs), graph-based approaches, and attention mechanisms. The document also notes that current search engines often return too many irrelevant results, so classification could help layer search results. It concludes that while many techniques have been developed, inconsistencies remain in effectively classifying news, so further research on combining NLP, feature extraction, and fuzzy logic is needed.
A Review Of Text Mining Techniques And ApplicationsLisa Graves
This document provides a review of various text mining techniques and applications. It discusses techniques used for text classification and summarization, including Naive Bayes classification, backpropagation neural networks, keyword matching, and information extraction. It also covers applications of text mining in areas like sentiment analysis of social media posts and hotel reviews. Finally, it discusses the need for organizational text mining to extract useful information and insights from large amounts of unstructured text data.
Great model a model for the automatic generation of semantic relations betwee...ijcsity
The large available amount of non-structured texts that belong to different domains such as healthcare (e.g. medical records), justice (e.g. laws, declarations), insurance (e.g. declarations), etc. increases the effort required for the analysis of information in a decision-making process. Different projects and tools have proposed strategies to reduce this complexity by classifying, summarizing or annotating the texts. Particularly, text summary strategies have proven to be very useful to provide a compact view of an original text. However, the available strategies for generating these summaries do not fit very well within domains that require taking into consideration the temporal dimension of the text (e.g. a recent piece of text in a medical record is more important than a previous one) and the profile of the person who requires the summary (e.g. the medical specialization). To cope with these limitations, this paper presents "GReAT", a model for automatic summary generation that relies on natural language processing and text mining techniques to extract the most relevant information from narrative texts and discover new information from the detection of related information. The GReAT model was implemented in software and validated in a health institution, where it has shown to be very useful for displaying a preview of the information in medical health records and for discovering new facts and hypotheses within the information. Several tests were executed on the implemented software, covering Functionality, Usability and Performance. In addition, precision and recall measures were applied to the results obtained through the implemented tool, as well as to the loss of information incurred by providing a text shorter than the original.
THE USE OF CLOUD COMPUTING SYSTEMS IN HIGHER EDUCATION; The Lived Experiences of Faculty
Dr. Joseph K. Adjei
School of Technology (SOT)
Ghana Institute of Management and Public Administration (GIMPA)
2nd International Conference of the African Virtual University
A Newly Proposed Technique for Summarizing the Abstractive Newspapers’ Articl...mlaij
In this new era, where tremendous information is available on the internet, it is most important to
provide improved mechanisms to extract information quickly and efficiently. It is very difficult
for human beings to manually produce summaries of large text documents. There is therefore a twofold
problem: searching for relevant documents among the many available, and absorbing the
relevant information from them. To solve these two problems, automatic text summarization is
necessary. Text summarization is the process of identifying the most important, meaningful
information in a document or set of related documents and compressing it into a shorter version
that preserves its overall meaning. More specifically, Abstractive Text Summarization (ATS) is the task of
constructing summary sentences by merging facts from different source sentences and condensing them
into a shorter representation while preserving information content and overall meaning. This paper
introduces a newly proposed technique for summarizing abstractive newspapers' articles based on
deep learning.
This document provides information on research methods for students. It discusses why research skills are important, literature reviews, structured investigations, and questionnaire design. Key points covered include defining an appropriate scope for literature reviews, critically analyzing previous research, planning and conducting structured studies, and designing questionnaires to avoid biases. Online resources for further reading on various research topics are also provided.
This document proposes a method for automatically assigning conference papers to reviewers based on the matching degree between reviewers and papers. It combines a preference-based approach using reviewers' expertise with a topic-based approach using the relevance of references between papers and reviewers' previous work. The method models reviewers based on expertise degree calculated from publications and relevance degree calculated from common references between papers and reviewers. It then calculates an overall matching degree and uses this in an assignment algorithm to assign papers to reviewers while balancing loads. The approach aims to make the paper assignment task more effective and reduce loads on program committees and reviewers compared to existing methods.
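The combination-and-assignment idea described above can be sketched as a weighted sum of the two signals plus a greedy, capacity-limited assignment. The scores, the weighting parameter alpha, and the reviewer/paper names are all invented for illustration; the paper's actual formulas are not reproduced here.

```python
# Hypothetical expertise and reference-relevance scores per (reviewer, paper) pair.
expertise = {("r1", "p1"): 0.9, ("r1", "p2"): 0.4,
             ("r2", "p1"): 0.5, ("r2", "p2"): 0.8}
relevance = {("r1", "p1"): 0.7, ("r1", "p2"): 0.2,
             ("r2", "p1"): 0.3, ("r2", "p2"): 0.9}

def matching_degree(pair, alpha=0.5):
    """Combine the two signals; alpha weights expertise against relevance."""
    return alpha * expertise[pair] + (1 - alpha) * relevance[pair]

capacity = {"r1": 1, "r2": 1}      # load balancing: max papers per reviewer
assignment = {}
for paper in ("p1", "p2"):
    # Greedily pick the best-matching reviewer who still has capacity.
    candidates = [r for r in capacity if capacity[r] > 0]
    best = max(candidates, key=lambda r: matching_degree((r, paper)))
    assignment[paper] = best
    capacity[best] -= 1
```

Here r1 matches p1 best (0.8 vs. 0.4) and, once r1 is at capacity, p2 falls to r2, so both load balance and matching degree are respected.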
Level 3 NCEA - NZ: A Nation In the Making 1872 - 1900 SML.pptHenry Hollis
The History of NZ 1870-1900.
Making of a Nation.
From the NZ Wars to Liberals,
Richard Seddon, George Grey,
Social Laboratory, New Zealand,
Confiscations, Kotahitanga, Kingitanga, Parliament, Suffrage, Repudiation, Economic Change, Agriculture, Gold Mining, Timber, Flax, Sheep, Dairying,
A Comparison Of Paper-Based And Online Annotations In The WorkplaceGina Rizzo
This document presents a comparison of paper-based annotations versus online annotations in a workplace setting. It begins with an introduction and overview of annotations and their role in learning. It then discusses different types of annotations based on their form and function. Challenges of online annotations are outlined, such as a lack of physical tangibility compared to paper. The document describes a study that was conducted, including an online annotation study using a tool called SpreadCrumbs and a field study of common paper-based annotation habits. The results provide insights into how to better support active reading and annotations in digital contexts.
A Centroid And Relationship Based Clustering For Organizing Research PapersDaniel Wachtel
This document summarizes a research paper that proposes a new method for organizing research papers using centroid and relationship-based clustering. The method aims to group similar research papers together based on common terms in their titles, keywords, frequent sentences, and referenced titles. Only the most important information from each paper is considered, such as the title, keywords, top frequent sentences and most similar referenced papers, in order to reduce processing time for large collections of papers. The clustering algorithm uses these relationships between papers to assign them to topic clusters in an efficient and effective manner.
IRJET- A Survey Paper on Text Summarization MethodsIRJET Journal
This document summarizes various approaches for automatic text summarization, including extractive and abstractive methods. It discusses early surface-level approaches from the 1950s that identified important sentences based on word frequency. It also reviews corpus-based, cohesion-based, rhetoric-based, and graph-based approaches. The document then examines single document summarization techniques like naive Bayes methods, log-linear models, and deep natural language analysis. It concludes with a discussion of multi-document summarization, including abstraction and information fusion as well as graph spreading activation approaches. The goal of the survey is to provide an overview of the major existing methods that have been used for automatic text summarization.
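The 1950s-era surface-level approach the survey mentions — scoring sentences by the frequency of their words — can be illustrated with a minimal sketch. The stopword list and scoring here are simplifications, not any surveyed system's exact method.

```python
import re
from collections import Counter

# Tiny illustrative stopword list (a real system would use a fuller one).
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it"}

def summarize(text, n_sentences=1):
    """Extractive summary: keep the sentences whose words are most frequent."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens if t not in STOPWORDS)

    ranked = sorted(sentences, key=score, reverse=True)
    chosen = set(ranked[:n_sentences])
    # Return the chosen sentences in their original order.
    return [s for s in sentences if s in chosen]

print(summarize("Cats purr. Cats and dogs play. Dogs bark.", 1))
# → ['Cats and dogs play.']
```

The middle sentence wins because it contains both of the corpus's most frequent content words; this is the essence of the word-frequency heuristic the survey traces back to early extractive systems.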
This document provides an agenda for research on knowledge discovery from web search. It begins with an introduction on knowledge discovery and how search engines can help extract information. It then outlines the goals and objectives, provides a literature review on related work, and discusses some common limitations observed, such as models achieving low accuracy and WSD approaches not being efficient enough. The document serves to provide background and planning for a research study on improving knowledge discovery through web search.
A Comparative Study Of Citations And Links In Document ClassificationWhitney Anderson
This document compares the use of citations and links for document classification in a digital library and web directory. It finds that measures based on co-citation perform better for classifying pages in the web directory, while measures based on bibliographic coupling perform better in the digital library. It also finds that combining text-based and citation/link-based classifiers provides gains for the web directory, but not for the digital library. User studies found that misclassifications by citation/link classifiers are often difficult cases even for humans.
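The two link measures compared here are easy to state concretely: bibliographic coupling counts the references two documents share, while co-citation counts the documents that cite both. A minimal sketch, with an invented toy citation graph:

```python
def bibliographic_coupling(a, b, cites):
    """Number of references shared by documents a and b."""
    return len(cites.get(a, set()) & cites.get(b, set()))

def co_citation(a, b, cites):
    """Number of documents that cite both a and b."""
    return sum(1 for refs in cites.values() if a in refs and b in refs)

# `cites` maps each document to the set of documents it cites (made-up data).
cites = {
    "d1": {"x", "y"},    # d1 and d2 share reference "x"
    "d2": {"x", "z"},
    "d3": {"d1", "d2"},  # d3 and d4 both cite d1 and d2
    "d4": {"d1", "d2"},
}
print(bibliographic_coupling("d1", "d2", cites))  # → 1
print(co_citation("d1", "d2", cites))             # → 2
```

Note the asymmetry in what each measure needs: coupling only looks at the two documents' own reference lists, while co-citation requires the surrounding corpus — one reason the two can behave differently in a web directory versus a digital library.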
A proactive recommendation system for writing Helping without disrupting.pdfCharlie Congdon
1) The document describes research into developing a Proactive Recommendation System (PRS) to help writers find relevant information while writing, without disrupting the writing process.
2) Two experiments were conducted using IntelliGent, a PRS, to study the effects of presenting information during the writing stages of planning and reviewing. The experiments found that presenting relevant information during these stages can improve writing quality without impairing the writing tasks.
3) Key challenges for PRS identified are: helping writers adopt planning practices to provide context for recommendations, interrupting writers only when relevant information can improve their work, and designing interfaces that integrate external information sources into the writing process without disruption.
Semantic tagging for documents using 'short text' informationcsandit
Tagging documents with relevant and comprehensive keywords offers invaluable
assistance to readers to quickly overview any document. With the ever
increasing volume and variety of documents published on the internet, the
interest in developing newer and successful techniques for annotating
(tagging) documents is also increasing. However, an interesting challenge in
document tagging occurs when the full content of the document is not readily
accessible. In such a scenario, techniques which use “short text”, e.g., a
document title or a news article headline, to annotate the entire article are
particularly useful. In this paper, we propose a novel approach to
automatically tag documents with relevant tags or key-phrases using only
“short text” information from the documents. We employ crowd-sourced
knowledge from Wikipedia, Dbpedia, Freebase, Yago and similar open source
knowledge bases to generate semantically relevant tags for the document.
Using the intelligence from the open web, we prune out tags that create
ambiguity in, or “topic drift” from, the main topic of our query document. We
have used a real-world dataset from a corpus of research articles to annotate
50 research articles. As a baseline, we used the full-text information from
the document to generate tags. The proposed and the baseline approaches were
compared using the author-assigned keywords for the documents as the ground
truth. We found that the tags generated using the proposed approach are
better than those using the baseline in terms of overlap with the
ground-truth tags, measured via the Jaccard index (0.058 vs. 0.044). In terms
of computational efficiency, the proposed approach is at least 3 times faster
than the baseline approach. Finally, we qualitatively analyse the quality of
the predicted tags for a few samples in the test corpus. The evaluation shows
the effectiveness of the proposed approach both in terms of the quality of
the tags generated and the computational time.
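The overlap metric quoted in figures like 0.058 vs. 0.044 is the Jaccard index: intersection size over union size of the predicted and ground-truth tag sets. A minimal sketch with made-up tag sets:

```python
def jaccard(a, b):
    """Jaccard index of two collections: |a ∩ b| / |a ∪ b|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0  # convention for two empty sets
    return len(a & b) / len(a | b)

predicted = {"a", "b", "c"}
ground_truth = {"b", "d"}
print(jaccard(predicted, ground_truth))  # 1 shared item of 4 distinct → 0.25
```

Because the union grows with every wrong or missing tag, even modest-looking scores can separate methods meaningfully, as in the 0.058 vs. 0.044 comparison above.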
Semantics-based clustering approach for similar research area detectionTELKOMNIKA JOURNAL
The manual process of searching out individuals in an already existing
research field is cumbersome and time-consuming. Prominent and rookie
researchers alike are predisposed to seek existing research publications in
a research field of interest before coming up with a thesis. From
extant literature, automated similar research area detection systems have
been developed to solve this problem. However, most of them use
keyword-matching techniques, which do not sufficiently capture the implicit
semantics of keywords thereby leaving out some research articles. In this
study, we propose the use of ontology-based pre-processing, Latent Semantic
Indexing and K-Means Clustering to develop a prototype similar research area
detection system, that can be used to determine similar research domain
publications. Our proposed system solves the challenge of high dimensionality
and data sparsity faced by the traditional document clustering technique. Our
system is evaluated with randomly selected publications from faculties
in Nigerian universities and results show that the integration of ontologies
in preprocessing provides more accurate clustering results.
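The Latent Semantic Indexing step can be sketched with plain NumPy: build a term-document matrix, take a truncated SVD, and compare documents in the reduced space. The toy corpus and the choice of two latent dimensions are illustrative assumptions, and the ontology preprocessing and K-Means stage of the proposed system are omitted.

```python
import numpy as np

docs = [
    "ontology semantic clustering research",
    "semantic ontology research clustering",
    "football match goal score",
]
vocab = sorted({w for d in docs for w in d.split()})
# Term-document count matrix: rows = terms, columns = documents.
A = np.array([[d.split().count(t) for d in docs] for t in vocab], dtype=float)

k = 2  # number of latent dimensions to keep
U, s, Vt = np.linalg.svd(A, full_matrices=False)
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # each row: one document in LSI space

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The two research documents land close together in the latent space,
# far from the football document.
print(cos(doc_vecs[0], doc_vecs[1]) > cos(doc_vecs[0], doc_vecs[2]))  # → True
```

The dimensionality reduction is what addresses the sparsity problem the abstract mentions: documents are compared in a dense k-dimensional space rather than over the full, mostly-zero vocabulary matrix.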
This doctoral research explores how users make sense of and use information when seeking information from web-based resources. The research examines user behaviors and strategies as they interact with information sources to identify examples of how users collect, evaluate, understand, interpret, and integrate new information. Three empirical studies were conducted involving participants completing information seeking tasks. The results provide insights into users' information interaction strategies and sensemaking activities. Implications for the design of technologies to better support sensemaking are discussed.
Slidecast - What's wrong with online reading (short)Randy Connolly
Slidecast version of the short (50 min) version my "What's Wrong with Online Reading" presentation. Audio recorded during a Nov 2011 guest lecture. Unfortunately, some of the vocal commentary for the last dozen slides was lost.
This talk discusses how a wide range of research in web usability, psychology, education, and communication theory provides corroborating evidence that on-line reading is transforming cognition, learning, and the very nature of knowledge in some disturbing ways.
The document presents research on the optimal method for presenting short text passages on wearable devices like smartwatches. It describes an experiment that tested three text presentation methods (scrolling, leading, and rapid serial visual presentation) on an Apple Watch. The experiment measured the effects of the methods on participants' reading comprehension, information retention, and perceived task load. The results showed the presentation methods did not significantly influence comprehension or retention. However, one dimension of task load, mental demand, was significantly affected, with scrolling requiring the least mental demand and leading the most. The findings suggest scrolling may be the optimal method for short texts on wearables.
This summarizes an academic paper that proposes an automatic ontology creation method for classifying research papers. It uses text mining techniques like classification and clustering algorithms. It first builds a research ontology by extracting keywords and patterns from previous papers. It then uses a decision tree algorithm to classify new papers into disciplines defined in the ontology. The classified papers are then clustered based on similarities to group them. The method was tested on a dataset of 100 papers and achieved average precision of 85.7% for term-based and 89.3% for pattern-based keyword extraction.
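Precision figures like the 85.7% and 89.3% quoted above are the fraction of predicted items that are correct. A minimal sketch with invented predictions and ground truth:

```python
def precision(predicted, relevant):
    """|predicted ∩ relevant| / |predicted|; 0.0 for an empty prediction."""
    predicted, relevant = set(predicted), set(relevant)
    if not predicted:
        return 0.0
    return len(predicted & relevant) / len(predicted)

# Four keywords predicted, three of them actually correct.
print(precision({"k1", "k2", "k3", "k4"}, {"k1", "k2", "k3"}))  # → 0.75
```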
An in-depth review on News Classification through NLPIRJET Journal
This document provides an in-depth literature review of news classification through natural language processing (NLP). It discusses several existing approaches to news classification, including models that use convolutional neural networks (CNNs), graph-based approaches, and attention mechanisms. The document also notes that current search engines often return too many irrelevant results, so classification could help layer search results. It concludes that while many techniques have been developed, inconsistencies remain in effectively classifying news, so further research on combining NLP, feature extraction, and fuzzy logic is needed.
A Review Of Text Mining Techniques And ApplicationsLisa Graves
This document provides a review of various text mining techniques and applications. It discusses techniques used for text classification and summarization, including Naive Bayes classification, backpropagation neural networks, keyword matching, and information extraction. It also covers applications of text mining in areas like sentiment analysis of social media posts and hotel reviews. Finally, it discusses the need for organizational text mining to extract useful information and insights from large amounts of unstructured text data.
Great model a model for the automatic generation of semantic relations betwee...ijcsity
The large available amount of non-structured texts that belong to different
domains such as healthcare (e.g. medical records), justice (e.g. laws,
declarations), insurance (e.g. declarations), etc. increases the effort
required for the analysis of information in a decision-making process.
Different projects and tools have proposed strategies to reduce this
complexity by classifying, summarizing or annotating the texts. Particularly,
text summary strategies have proven to be very useful to provide a compact
view of an original text. However, the available strategies to generate these
summaries do not fit very well within domains that require taking into
consideration the temporal dimension of the text (e.g. a recent piece of text
in a medical record is more important than a previous one) and the profile of
the person who requires the summary (e.g. the medical specialization). To
cope with these limitations, this paper presents “GReAT”, a model for
automatic summary generation that relies on natural language processing and
text mining techniques to extract the most relevant information from
narrative texts and discover new information from the detection of related
information. The GReAT model was implemented in software to be validated in a
health institution, where it has shown to be very useful to display a preview
of the information in medical health records and to discover new facts and
hypotheses within the information. Several tests were executed, such as
Functionality, Usability and Performance tests of the implemented software.
In addition, precision and recall measures were applied to the results
obtained through the implemented tool, as well as to the loss of information
incurred by providing a text shorter than the original.
THE USE OF CLOUD COMPUTING SYSTEMS IN HIGHER EDUCATION; The Lived Experiences of Faculty
Dr. Joseph K. Adjei
School of Technology (SOT)
Ghana Institute of Management and Public Administration (GIMPA)
2nd International Conference of the African Virtual University
A Newly Proposed Technique for Summarizing the Abstractive Newspapers’ Articl...mlaij
In this new era, where tremendous information is available on the internet, it is of utmost importance to
provide improved mechanisms to extract information quickly and efficiently. It is very difficult
for human beings to manually extract the summary of large documents of text. Therefore, there is a
problem of searching for relevant documents from the number of documents available, and absorbing
relevant information from it. In order to solve the above two problems, the automatic text summarization is
very much necessary. Text summarization is the process of identifying the most important meaningful
information in a document or set of related documents and compressing them into a shorter version
preserving its overall meanings. More specific, Abstractive Text Summarization (ATS), is the task of
constructing summary sentences by merging facts from different source sentences and condensing them
into a shorter representation while preserving information content and overall meaning. This Paper
introduces a newly proposed technique for Summarizing the abstractive newspapers’ articles based on
deep learning.
This document provides information on research methods for students. It discusses why research skills are important, literature reviews, structured investigations, and questionnaire design. Key points covered include defining an appropriate scope for literature reviews, critically analyzing previous research, planning and conducting structured studies, and designing questionnaires to avoid biases. Online resources for further reading on various research topics are also provided.
A Comparison of Reading Paper and On-Line Documents
Kenton O’Hara & Abigail Sellen
Rank Xerox Research Centre (EuroPARC)
61 Regent St.
Cambridge, CB2 1AB, U.K.
<surname>@cambridge.rxrc.xerox.com
ABSTRACT
We report on a laboratory study which compares reading
from paper to reading on-line. Critical differences have to do
with the major advantages that paper offers in supporting
annotation while reading, quick navigation, and flexibility of
spatial layout. These, in turn, allow readers to deepen their
understanding of the text, extract a sense of its structure,
create a plan for writing, cross-refer to other documents, and
interleave reading and writing. We discuss the design
implications of these findings for the development of better
reading technologies.
KEYWORDS
reading, paper, documents, digital document readers,
hypertext, digital libraries, design, Web
INTRODUCTION
Jay Bolter, in his book “Writing Space” [2], heralded the
computer as the fourth great document medium, next to the
papyrus, the medieval codex, and the printed book. Implied
in this is that the demise of the printed page is merely a
matter of time. Certainly many of today’s hot topics in
human-computer interaction point to digital alternatives to
paper documents: the Web, new hypertext applications,
digital libraries, and digital document reading devices. Some
have predicted that such advances will make books as we
know them obsolete, will radically alter the relationship
between authors and readers, and will change forever our
concept of libraries as repositories of physical volumes of
text, and of publishers as producers and sellers of paper
books.
But the reality of day to day life shows that paper still
continues to be the preferred medium for much of our
reading activity. This is despite the fact that screen
technologies have vastly improved in recent years, that
wireless, mobile computing technology is now widely
available, and that new navigational and input techniques
significantly improve the flexibility of our interaction with
digital documents.
We describe a laboratory study aimed specifically at
discovering how reading from paper compares to reading on-line
documents so that we can design better on-line reading tools.
(This paper was prepared for submission to CHI ‘97, Atlanta, GA.)
This focus on design results in a study which differs from
typical laboratory comparisons of reading paper and on-line
documents in three fundamental ways:
• We take a broadly descriptive approach rather than
focusing on the measurement of one or two narrowly
defined aspects of reading behaviour;
• We use an experimental task which we believe, based on
our field studies of reading in organisations [21], is both
naturalistic and representative of reading in real work
settings;
• We ask subjects to comment on videotapes of their own
behaviour to enrich our understanding of reading through
the readers’ perspective.
The Literature
A fairly substantial body of literature comparing the reading
of paper versus on-line documents can be found in the
psychological, human factors, and ergonomics literature (see
[4] for a comprehensive review).
The majority of studies focus on “outcome” measures of
reading, such as speed [15, 23], proof-reading accuracy [8,
23, 3], and comprehension [15, 6]. A lesser effort has been
devoted to looking at “process” differences between reading
on paper and reading on screen such as how readers look at
text in terms of eye movements [9], how they manipulate it
[19], and how they navigate through it [5].
Such measures do not generally find remarkable differences
between paper and screen, although during the early ‘80’s, a
fairly reliable finding was that reading from a screen was
significantly slower than reading from paper [15]. However,
since that time, a series of experiments [9] have shown that
improvements in screen technology lessen these differences,
and may even eliminate them [16].
One reason sometimes cited for the failure to uncover
significant differences between paper and on-line reading is
the insensitivity of the measures used. For example, Dillon
[4] has argued that this may account for the fact that the
literature has generally suggested no negative effects of
electronic text on comprehension. An important question this
raises is: If the differences in performance between paper-
based reading and on-line reading are that subtle, is it the
case that even the reader fails to detect them, let alone the
experimental method? Such a question is not so important for
theoretical purposes, but is critical from a design perspective.
Perhaps a more fundamental problem with many of the
studies has to do with the approach itself. After reviewing
this literature, Dillon [4, p. 1322] remarked on the distorted
view many researchers appeared to have of reading: “Most
[ergonomists] seem to concern themselves with the control of
so many variables that the resulting experimental task bears
little resemblance to the activities most of us routinely
perform as ‘reading’.”
Taken together, a review of the experimental literature
suggests that, if we wish to extrapolate our findings to real
reading situations, we must be quite careful about designing
the experimental method. It must involve a task which is
representative of real world tasks and it must employ analytic
methods which capture issues of relevance to design. One
might argue that laboratory methods are simply inappropriate
for these reasons. While there are many issues laboratory
investigations of reading cannot address, we will demonstrate
that there are nonetheless many lessons for design which can
be produced systematically within such an approach.
METHOD
Choice of Experimental Task
Reading is a highly practised activity which forms a
component of a wide range of different activities, and which
serves many different purposes [12, 18]. For example, text
may be read by skimming it rapidly, scanning for a specific
piece of information, reading it for comprehension, or
reading it reflectively. Why a text is read, and the broader
context within which the reading activity is embedded, shape
these reading processes.
In choosing a task for the study, we wanted to select both an
ecologically valid task and one which would be rich in the
demands made on the document presentation medium. We
chose the task of text summarisation as fulfilling these
criteria. Reading in order to write a summary is a task which
requires a deep understanding of the text. Indeed, Winograd
[22] has pointed out that some of the strategies used in
summarisation may also relate to the more general process of
text comprehension.
Also, because text summarisation involves reading in order to
write, it has a strong connection with many of the kinds of
reading tasks we have observed “knowledge workers”
carrying out. For example, we found at the IMF that
economists spend a great deal of their time reading and
condensing information, whether it be from their own notes,
or the notes and documents of other people. In more general
terms, we found that, while editing documents on-line, 89%
of the time they read and refer to other documents. Thus it
seems that reading in conjunction with writing is a
fundamental part of this kind of professional work. As such,
this task seemed a good candidate for our purposes.
Choice of On-Line Condition
In choosing both the software and hardware for the on-line
condition, we had at least two alternative approaches. We
could attempt to optimise the configuration used in the on-
line condition by choosing the “best” available software and
hardware, or we could opt for a set-up which aimed to
emulate a more conventional situation — a typical
workstation running a commonly used word processing
application. The former approach would be, in a sense,
second-guessing what aspects of hardware and software best
support reading, which is antithetical to our approach. We
therefore chose the latter approach recognising that better
systems and better interfaces may significantly alter the
reading process. Nonetheless such an approach gives us
insights into the benefits and drawbacks in a typical on-line
reading situation.
Procedure
Subjects were 10 members of staff from our laboratory who
participated voluntarily. They were asked to summarise a 4
page article from a general science magazine. Five subjects
were randomly assigned to the “Paper” condition, and 5 to
the “On-line” condition. All subjects in the On-line condition
were experienced users of both application and system used
in the study. In the Paper condition, subjects were to read the
document on paper and summarise it on paper; in the On-line
condition, subjects were to read the document on-line and
write the summary using on-line tools.
The paper article was presented in three-column format and
was black on a white background with some colour pictures.
In the Paper condition, subjects were presented with 3
documents in a pile. On the bottom was the article to be
summarised, on top of this was some paper for the purposes
of making notes, and on top of this was a blank sheet of
paper on which the summary was to be written.
The On-line condition used a similar format for the article,
and was run on an Apple Macintosh Quadra 950 with a
ProNitron 80.19 colour screen 1152 pixels wide by 870
pixels high, and with a refresh rate of 75 Hz. Three open
documents were displayed in Microsoft Word (version 6.0).
The first was the article itself which was presented in page
layout mode so that, although each page was scrollable, it
visually resembled a single sheet of paper. On top of this was
a blank word document (in Normal view mode) with the title
“Notes” on the title bar. On top of this was another blank
word document (in Normal view mode) with the title
“Summary” on the title bar.
Subjects in both conditions were asked to create a brief
summary of the source article, and were told it should be
approximately 200-300 words long and provide a clear
indication of the main points and ideas of the document. It
was made clear that they could mark the article in any way
they saw fit and that they could make as many notes as they
wished. Subjects were told that they could refer back to the
source article at any point during the task.
Each session was video recorded. On completion, the
experimenter asked each subject a series of questions about
their activities during the task. In addition to these questions,
each subject was shown, and asked to comment on, various
clips of their behaviour which the experimenter had noted as
being interesting during the course of the task. The clips were
used to cue subjects’ recall as to what they were doing at
these points and why.
SELECTED FINDINGS
The data were analysed by watching the tapes and describing
subjects’ strategies during the course of each session, with
the aim of uncovering similarities and differences among
subjects and across conditions. Transcriptions of subjects’
comments and notes taken during the interviews were used to
supplement these descriptions. Only a selection of the
findings are presented — those which we feel have the most
important implications for design.
Annotation While Reading
Subjects’ comments in both conditions indicated that
annotating and note-taking while reading was important in
deepening their comprehension of the text, and in helping
them to form a plan for writing the summary. Essentially, it
did this by drawing attention to important points and making
explicit the structure between them.
Paper. The majority of subjects (4 of 5), during the first pass
through the document, undertook some form of note-taking
activity while reading. In general such markings were made
quickly and interwoven with the ongoing reading.
Two subjects relied heavily on annotation of the source
document itself. Their markings included underlining, the use
of asterisks, and making notes in the margin. Marking was
idiosyncratic both in the choice of marks and in the ways in
which subtle style changes were used to different effect. For
example, the thickness of a line was used to indicate the
degree of importance of some piece of text for later
reference, while asterisks were used to link disparate sections
of text.
One reason for this kind of annotation was to provide a set of
markings for later reference. In effect, such markings helped
the subjects extract structure from the text on re-reading the
article. One feature of such marks was that they relied
heavily on the context of the original document, or as one
subject put it:
“The first reading was quite slow and the second
reading was skimming. The annotations helped me skim
when I was skimming in the second reading...Whenever I
finished writing about some point I skimmed forward - I
looked for the next annotated bit and then I just read
around it a little bit if I needed to remind myself of what
they were trying to say.”
Quite apart from the way these marks supported re-reading,
at least one subject who marked up the source document
suggested that the very act of making such marks is a process
which aids understanding and facilitates the building of an
internal representation of the text. This subject believed that
“as you underline something you re-read the words, and this
enforces it more.” Previous research [1] offers confirmation
of this finding.
Notes written separately were also used as a resource for
later reference, and three of the subjects relied on this
strategy. However, this was different from annotating the
source document, in that it helped in re-structuring and
collating information from diverse locations. As one subject
put it, it provided “a pool of text and ideas that I can dip into
to write real sentences”. But the notes were more than this,
because they were also in the form of plans or outlines which
were enriched and modified iteratively during the course of
reading. A key feature of this kind of note-taking mentioned
was that it needed to be done without disrupting the main
reading task:
"And the fact that the notes aren’t in full sentences or
anything, they are deliberately shortened so as not to
interfere with reading and thinking and things, they are
just jotting."
Indeed, a more general characteristic of this kind of note-
taking on a separate piece of paper was its smooth integration
with reading. Note-taking was both frequent and interleaved
with reading the source document. Reasons why this was
possible are elaborated when we discuss navigation and
spatial layout issues.
On-line. In the On-line condition, 4 of the subjects expressed
that, had the document been on paper, their natural tendency
would have been to annotate the document in some way or
another. However only one of the subjects attempted to do
this on-line, using a complex customisation of the tool bar to
allow him to draw boxes around relevant sections of text or
to draw a line down the side of the text. In doing so, this
subject experienced a number of difficulties which interfered
with the smooth flow of reading:
“So the annotation was not as easy as all that... I think
the whole process would have been a lot quicker on
paper. Annotations are that much more flexible because
you can write in the margins which you can't very easily
do here. You have to establish a new text block and then
have to write.”
This comment emphasises the difficulty of making marks
within the context of the article due to the inflexibility of
interaction techniques via mouse and keyboard.
Another reason for the general reluctance to annotate the
document was that annotating on-line results in making
changes to the original document: emboldening, italicising or
underlining all alter the original document. Subjects indicated
that they wanted to regard annotations as a separate layer of
the document, and felt uncomfortable not maintaining this
distinction. But it was not just the fact that the methods
interfered with the base document. They also expressed
dissatisfaction at the fact that they could not easily make
annotations that were perceptually distinct from the
underlying text. This distinctiveness is, in part, what supports
quick re-reading by drawing attention to points of interest.
It seems, then, that note-taking separate from the source
document was more appropriate with these on-line tools.
Four subjects did do this. Two of them took notes using the
copy and paste function to transfer information from the
source text directly into a separate document. For one of
these subjects this was fundamentally different from the note-
taking activity observed in the Paper condition in that the
transfer of verbatim information was followed by extensive
editing on the copied text. This text then became the basis of
the final written summary. The other two subjects took most
of their notes only after reading the whole source document,
producing a plan almost entirely from memory with very
little reference back to the source document. In all cases, the
frequent back-and-forth movement between notes and reading
observed in the Paper condition was absent.
Summary. To summarise, we found that the ability to
annotate while reading was important in enforcing an
understanding of the source document, and helped in
planning for writing. Three major differences between
conditions were noted:
(1) Annotation on paper was relatively effortless and
smoothly integrated with reading compared to on-line
annotation which was cumbersome and detracted from
the reading task.
(2) Paper supported annotation of the source document itself
which many subjects felt was important. The On-line
condition did not provide enough flexibility to do this,
nor did it support the richness and variation of
annotations on paper. In addition, subjects were reluctant
to tamper with the original text and wanted their marks
perceptually distinct from the original document.
(3) Subjects in both conditions also took notes on separate
documents. Note-taking in the Paper condition was
frequent and was interleaved with reading. In the On-line
condition, reading was interspersed with long periods of
editing, or note-taking was done after reading with little
reference back to the source document.
Movement Within and Between Documents
Subjects spent a good deal of time moving through their
documents in both conditions. It emerged from the interviews
that this served at least three important functions:
Planning. Subjects talked about the need to make
connections between different parts of a document to create a
plan and develop an overall sense of its structure — a process
supported by skim-reading through the text.
"This bit is where you seem to flick through some
pages." (Exp.)
"I get the 1st and 2nd pages and then look at the
third...[I’m] trying to connect bits of information to write
on my plan - areas that I want included in the same
paragraph - or topics I want included in my paragraph."
(Subject, Paper condition)
For Reference. Subjects were also seen scanning the source
article to check on facts, particular expressions, and
spellings:
“I was looking for the “dragon’s blood tree”. I was
trying to find that name...I couldn't remember exactly
what they called it.” (Subject, On-line condition)
For Checking Understanding. There were several times when
subjects needed to re-read sections of the document to
confirm or clarify their understanding:
"You refer back to it after you have read something up
here." (Exp.)
"Yes it was because of the name of the tree - Cinnabar.
And I needed to know if it was the same tree as they
talked about over here." (Subject, Paper condition)
Paper. Movement through paper documents was
characterised by its speed and automaticity. For example,
page turning in the Paper condition was often anticipatory,
with one hand often lifting a page even before it was read,
minimising disruption between reading the text at the end of
one page and the beginning of another. That it was so
automatic is highlighted by the fact that sometimes pages
were turned prematurely.
Partly what speeded this movement through paper was the
use of two hands in combination with the tangibility afforded
by the paper, enabling the effective interleaving and
overlapping of activities. For example, subjects used one
hand to keep hold of a page while searching through the
document, or referring to another page, with the other. By
marking one’s place with one hand, subjects could quickly
return to their prior activity. Two handed manipulation also
offered opportunities for other types of interactions such as
writing while moving a page closer to read, or turning over a
page with one hand while the other was used to feel how
many pages were left. The quick flicking through pages when
searching and skimming was also quite clearly a two-handed
action using one hand as an anchor for the actions of the
other. Aside from issues of speed, physical cues such as
thickness provided important implicit information about
where in the document a subject was, as well as its overall
length.
A final feature of paper that showed itself to be important in
navigation was the fixity of information with respect to the
physical page. Consistent with previous research [20, 13],
subjects in the Paper condition showed that they had acquired
incidental knowledge of the location of information by
reference to its physical place on the page:
"I knew it was on the 3rd page ‘cause I could remember
that it was in the middle [column] ‘cause it was this
botanic bit so I knew it was on this page.”
This incidental memory meant that subjects could flick
through the document quickly using only surface visual
features.
On-line. In the On-line condition, whether scrolling or
paging through the document, navigation was found to be
irritatingly slow and distracting — the rendering of the
pictures exacerbating the problem:
“I was getting very annoyed and clicking on those things
and shouting at it... I just found that it took ages and
ages. I was losing interest - it was distracting me from
the point.”
However, the sheer length of response time was only one
drawback. Another feature which limited quick movement
was the fact that one handed input meant that navigation
activities had to be performed serially with other activities.
Combining a single source of input with a significant system
response time meant that interleaving any two activities was
cumbersome, as well as making it impossible to perform
anything concurrently.
All of this was made worse by the lack of feedback in
response to many actions which meant that subjects were not
supported in temporarily committing to various actions. For
example, one subject tried to get to a particular location in
the document using the drag function on the scroll bar, but
the document contents did not move concurrently in response
to the dragging. As a result, the subject had to release the
page icon and then wait to find out if she was in the right
place. This kind of scrolling can make finding exact locations
extremely time-consuming.
Spatial constraints on the interface also interfered with quick,
flexible movement. For example, in order to move an
electronic window, subjects were required to access the title
bar which was often obscured by another window. Similar
problems were encountered by subjects attempting to resize
windows or scroll — actions which were also restricted to
limited active areas. The overall result of this was that
subjects’ attention was drawn away from reading, as they
attended to the often complicated task of dealing with the
mechanics of moving in these spatially constrained ways.
Unlike with the paper documents, assessing document length
was difficult to do in any incidental or implicit way. For
many of the reasons just discussed, subjects complained
about having to scroll through to assess a document’s length.
Further, while information about document length and page
number was available at the bottom of the active document,
none of the subjects remembered using this information when
asked.
Finally, whilst subjects in the Paper condition used the fixity
of information with respect to the physical page to support
search, it was an interesting question whether Page Layout
mode was used in the On-line condition for similar
purposes. This mode fixes the contents of the document with
respect to its electronic page. When asked, subjects said that
the fact that it was difficult to view a whole page meant that
they had little sense of the location of information with
respect to its page. However, comments by subjects in both
conditions suggested that some did use the fixed relationship
between pictures and text to support navigation.
Summary. Movement through documents in both conditions
was found to be important for information organisation, for
reference, and for checking understanding. However, there
were four ways in which this movement differed:
(1) Navigation through paper was quick, automatic, and
interwoven with reading. In the On-line condition it was
slow, laborious, and detracted from reading.
(2) Two handed movement through paper allowed readers to
interleave and overlap navigation with other activities,
and allowed temporary commitment to interim activities.
Movement through the on-line documents required
breaking away from ongoing activity and committing to
navigation activities because it was: one-handed, not
always accompanied by immediate feedback, and
spatially constrained to active areas on the screen.
(3) Subjects reading from paper used its tactile qualities to
support navigation and to implicitly assess document
length. Subjects in the On-line condition failed to make
use of explicit cues such as page length to assess
document length.
(4) The fixity of information with respect to physical paper
pages supported incidental memory for where things
were, which in turn supported search and re-reading
activities. The inability to see a complete page may
undermine use of this feature on-line, but it appears
pictures were used as anchor points.
Spatial Layout
Finally, we consider differences in the way subjects in both
conditions laid out their documents in space. The reasons
why the spatial arrangement was important can be
summarised as follows:
To Gain a Sense of Overall Structure. Subjects talked of the
need to lay pages out in order to form a mental picture or
overview of the document:
“I did put it into two page mode at one point just to get a
sense of how the article was structured and that was
actually very helpful.” (Subject, On-line cond.)
To Cross Reference. They also commented on the need to lay
pages close to each other in order to check on or to relate
specific pieces of information across pages.
"When I was referring back to it I didn’t like the paper
clip being on it...I could lay out the pages and have two
pages at once or even so that I could lay them out like
that...so that I could find whatever bit I was looking for
and also so that I could join things together." (Subject,
Paper cond.)
To Interleave Reading and Writing. Juxtaposing the article to
be read with the document being written also appeared to be
important in the writing process:
“I always [overlap the source document with the
plan/summary page] so that you can see it and write it at
the same time so you are not looking here and then
writing over there. You have them together and you just
write from one to the other don’t you...” (Subject, Paper
cond.)
Paper. The ability to unclip the documents in this experiment
meant it was possible to lay out individual pages in space. All
5 subjects exploited this fact. Visualising more than one page
in order to see its structure was one reason for this. In
addition, pages were often arranged such that they were
within easy reach for quick reference:
"You have got p1 and next to it p2 and then you have got
your notes. Any reason for that?" (Exp.)
"It will be there in case I need to refer back to it but also
to the side so that I am concentrating on p2." (Subject)
Spatial arrangement was also dynamic. Pages would be
moved in and out of the centre of attention as different parts
of the document became more or less important. Hidden
pages would be brought to the front, while others would be
covered up or if needed for quick reference later, moved to a
more peripheral region of the workspace.
The ability to quickly refer to other documents while writing
was also important. Most subjects made constant shifts back
and forth between reading the source document and writing
during planning. The source document and the plan/summary
document would be positioned close together, often
overlapping, during the planning phase of writing so that
sections of the source text could be close to what was
being written.
[Figure: overlapping arrangement of notes, source pages 1-4, and summary.]
Figure 1. An example of document placement layout during the
planning phase of writing (pages 1-4 being the source article).
One example of how documents were arranged during this
planning stage is shown in Figure 1. The subject, when
writing the summary, needed to refer to three documents at
once — the source document, the notes, and the summary
itself. By positioning and overlapping the documents this
way, it was possible to make the desired information
available in a compact space. At one point the notes were
moved several times within a few seconds, as the creation of
a single sentence demanded repeated quick reference to the
source document. This points to the need to be able to access other
documents quite quickly so that reading can be effectively
integrated with the process of writing.
Another important characteristic of the interplay of reading
and writing was the way in which the paper could be quickly
and even simultaneously accessed while reading and
referring to other documents. Aside from its concurrent
accessibility, another feature of separate reading and writing
“spaces” is that they allow independent manipulation. This is
important when one considers that the optimal angles for
reading and writing may be quite different from one another.
Subjects placed the paper at a greater angle away from the
perpendicular (approx. 30 - 45 degrees) while writing than
while reading (0-20 degrees). Additionally, the ergonomics
for writing with a pen required continuous minor adjustments
to the paper over time, using one hand as an anchor.
On-line. The most obvious constraint on spatial layout is the
restricted field of view offered by the screen. Subjects did
make use of multiple page view mode; however, even when
only two pages were displayed, the text at that resolution was
essentially unreadable. At readable resolutions, subjects could never see
even one whole page at a time. Subjects expressed a great
deal of concern over this trade-off, and talked of feeling lost
in that much of the necessary contextual information for
developing a sense of text and location lay beyond the
window boundaries.
By resizing and overlapping windows, however, subjects
were able to do some cross-referencing of documents. But
again, limitations on the space available and the lack of
flexibility in the ways they could be laid out caused problems
for quick access across documents. For example, there were
occasions when subjects wanted to use all 3 documents for
various parts of the task but this called for very small window
sizes or obscured windows. While Page view allowed the
display of multiple pages, it displayed them in a fixed order.
The lack of any real periphery also meant that subjects had to
give some thought to selecting and displaying documents.
Generally speaking the task was divided into discrete phases:
first pass, note-taking, and summary writing. Window
positioning and sizing tended to occur at the boundary
between these different phases and not during the phases.
Thus subjects had to anticipate what they would
need rather than reacting to the requirements of the task as
they arose.
A commonly used arrangement, especially in the note-taking
or writing phase, was placing two documents side by side
with minimal overlap. This avoided problems with the
currently active window being sent to the foreground which
was not only time-consuming, but was sometimes undesired,
such as in the case of this subject trying to alternate between
taking notes in one document and navigating in another:
“It was annoying that its got to be the current window in
order to be able to move, and the current window is
always the one in front. So I can't hide the one I was
typing into behind there and simply move the pointer
into it and start typing or pasting or whatever.”
This foreground/background distinction constrained the
extent to which a page was accessible for writing
concurrently with reading — something which was clearly
important for these subjects.
Summary. Laying out pages in space was found to be important
for gaining an overall sense of the structure of a document,
for referring to other documents, and for integrating reading
with writing. There were three main differences in the two
conditions:
(1) Laying out paper in space allowed the visualisation of a
great deal of information, and provided a holding space
for quick reference to other documents. The restrictions
on field of view for on-line documents meant that
subjects either lost resolution through shrinking the
documents, or had to use overlapping windows.
(2) The layout of paper documents was flexible and
dynamic, providing quick access for cross-referencing,
and supporting the juxtaposition of documents for
reading and writing. In the On-line condition, subjects
had to plan in advance how to position and size the
windows in anticipation of their future requirements.
(3) Paper supported the use of independent reading and
writing spaces which could be accessed concurrently and
manipulated independently. Because only one window
could accept input at a time, subjects in the on-line
condition experienced difficulties integrating reading and
writing.
DISCUSSION AND DESIGN IMPLICATIONS
On-line tools clearly offer valuable benefits for the writing
process: They support fast keyboard entry, information re-
use, easy modifiability of text, and provide specialised tools
such as spell-checkers and word count facilities. But in the
support of reading, and more specifically in support of
reading for the purpose of writing, this study has shown that
the benefits of paper far outweigh those of on-line tools.
Further, unlike much of the existing literature comparing
paper to screen, none of these benefits has to do with issues of
screen resolution, contrast or viewing angle. Rather, the
critical differences have to do with the major advantages that
paper offers in supporting annotation while reading, quick
navigation, and flexibility of spatial layout. These, in turn,
allow readers to deepen their understanding of the text,
extract a sense of its structure, plan for writing, cross-refer to
other documents, and interleave reading and writing.
In terms of design, one could make the decision not to
attempt to supplant paper for reading but rather to develop
scanning technologies to ease the transition of paper-based
information into the digital realm for the purpose of writing.
Many such technologies developed in our own laboratory
[17] as well as others [e.g., 11] could well be applied in this
way. This approach accepts that paper is, and is likely to
remain, the best medium in support of reading.
But one could also choose to look to paper for improving the
design of digital reading technologies, both in terms of the
development of better hardware and software. This is not to
say that such technologies must become more “paper-like” or
support reading in the same way as paper. Rather, the use of
paper can be used to help draw attention to the kinds of
processes that are important to support within reading, and to
suggest alternative ways in which that support could be
offered. On the basis of the findings of this experiment, some
examples are as follows:
Recognise that annotation can be an integral part of
reading and build support for these processes. For many
reasons which this experiment has pointed out, on-line tools
simply did not support the seamless integration of note-taking
while reading. Whether annotating the source document
itself, or annotating within a separate document, the process
was effortful and distracting.
Part of the problem is the lack of support for free text
annotation. Not only does stylus input support variegated,
idiosyncratic marks, but it also supports the making of marks
in context, and as a distinct layer on top of typed text. It is
clear that, for this purpose, OCR is unnecessary, since it is
not so important that these notes be used verbatim for writing
as it is that they support the process of helping to extract
structure from a document and to speed re-reading. For this
purpose, it is more important to concentrate on input
techniques which maximise richness and variation in
marking, such as the use of pressure-sensitivity to vary line
thickness, and the use of texture and colour. Careful
consideration will then need to be given to how the input
technology and display device interact. For example, if
emulating direct pen input onto the document through a
touch-sensitive screen, then the screen will need to be quite
mobile to be placed at the correct ergonomic angle for
writing.
This experiment has also found that there is a clear need for
markings to be functionally distinct from the “base”
document so that changes to the underlying text are not
actually implemented unless the reader explicitly allows it.
The use of hand-written markings is a way of enforcing the
idea of markings as a layer on top of a document. However,
digital technologies can offer interesting advantages over
paper by allowing readers to choose whether the annotations
are to be permanent or temporary, by offering selective
viewing capabilities according to who wrote the marks or
when, or by allowing marks to actually effect changes in the
underlying text if desired. Many such features can be found
in a prototype collaborative writing system called “MATE”
[10]. While there are some systems such as TkMan which
offer the ability to highlight text, we know of no commercial
systems that adequately support these functions.
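The idea of annotations as a functionally distinct layer — base text untouched, marks selectively viewable by author, changes applied only on explicit request — can be sketched as a simple data model. This is an illustrative sketch with invented names, not the design of MATE or any existing system:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass(frozen=True)
class Mark:
    start: int      # character offset into the base text
    end: int
    kind: str       # e.g. "underline", "asterisk", "margin-note"
    author: str
    note: str = ""

@dataclass
class AnnotatedDocument:
    base: str                        # the source text; marking never alters it
    marks: List[Mark] = field(default_factory=list)

    def annotate(self, start, end, kind, author, note=""):
        # Marks accumulate in a separate layer on top of the base text.
        self.marks.append(Mark(start, end, kind, author, note))

    def visible_marks(self, author: Optional[str] = None):
        # Selective viewing: filter the layer by author without altering it.
        return [m for m in self.marks if author is None or m.author == author]

    def apply_mark(self, mark: Mark, replacement: str) -> str:
        # A mark changes the underlying text only when explicitly committed.
        return self.base[:mark.start] + replacement + self.base[mark.end:]
```

The key property is that reading, marking, and selective viewing all leave `base` untouched; only an explicit `apply_mark` call produces an edited text.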
The need to support quicker, more effortless navigation
techniques. Improvements in system response times clearly
will benefit on-line navigation, but a range of other issues
need to be addressed if it is to be quick and non-disruptive to
ongoing reading activities. Improved input techniques could
have significant impact here. Multiple input, or at least two-
handed input could support a whole range of new navigation
techniques, such as using one hand to mark one’s place in a
document while scrolling or page turning with the other. This
would also allow for the support of concurrent activities like
writing on one document while navigating in another.
Multiple input raises another design possibility, which is that
the kind of input can depend on the device used. For example,
drawing directly from the paper case, direct input through
touch could correspond to navigation activities whereas
stylus input could produce markings.
Another major design issue that the findings help make clear
is the need for better feedback to aid navigation. Not only
does it need to be immediate, but, if it is to support quick
interim actions, it needs to be continuous during the course of
an action, and not simply delivered on its completion (e.g.,
releasing the mouse button after dragging). Further, other
modes of feedback such as audio or even tactile feedback
could provide more implicit cues as to where in a document
one is, and how much information there is. For example, one
could imagine using non-speech audio cues [7] to indicate the
thickness of a document, or number of pages left. This
feedback could be provided in the natural course of
navigation so that the cues are picked up incidentally. This is
not to say that visual cues have already been exploited as
much as they could be. The use of perspective and
stereoscopic displays may offer a "third" dimension to better
represent features such as document thickness, for example.
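The contrast between feedback delivered continuously during an action and feedback delivered only on its completion can be made concrete with a toy scroll model. This is purely illustrative; the names and structure are invented for the sketch:

```python
class ScrollModel:
    """Toy model contrasting continuous vs. release-only scroll feedback."""

    def __init__(self, num_lines, viewport_lines, continuous=True):
        self.num_lines = num_lines
        self.viewport_lines = viewport_lines
        self.continuous = continuous
        self.top_line = 0     # first visible line of the viewport
        self.redraws = []     # record of viewport positions shown to the reader

    def _show(self):
        self.redraws.append(self.top_line)

    def drag_to(self, fraction):
        # Called repeatedly while the scrollbar thumb is being dragged.
        self.top_line = round(fraction * (self.num_lines - self.viewport_lines))
        if self.continuous:
            self._show()      # content tracks the thumb as it moves

    def release(self):
        # Without continuous feedback, the reader sees the result only here,
        # and must release and wait to find out if the location is right.
        if not self.continuous:
            self._show()
```

In the continuous case each drag step redraws the content, so the reader can stop at the right place; in the release-only case the reader is blind until the thumb is let go, which is the behaviour the subjects complained about.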
Of additional consideration for the support of navigation is
heavier reliance on a mode which fixes information with
respect to a page. However, this study suggests that this will
only be of benefit provided a whole page can be viewed at
once. Designers might also consider emphasising endemic
reference points such as page corners and edges for use in
information search.
The need to support more flexibility and control in spatial
layout. With regard to the issue of spatial layout, the study
points out many benefits of a larger screen size.
Improvements in display technology will mean cheaper and
larger display areas with increased resolution. This will
improve readers’ ability to gain quick access to other
documents, which was shown to be crucial to this kind of
reading task. However, even a large screen is not sufficient to
offer the reader the same level of flexibility as paper in terms
of document positioning or the number of documents
displayed.
One approach is to create virtual space whereby the visible
screen space forms only a part of the workspace available,
such as fvwm in UNIX and HyperCard. In these
applications, documents can be moved around this larger,
virtual workspace through a separate, miniaturised overview.
This offers an advantage over paper in that it can potentially
help to manage extremely large workspaces not practically
possible in the paper domain. However, it is fundamentally
different in that, in the paper case, what is at the focus of
attention and what is at the periphery exists in a spatial
continuum. In these electronic cases, peripheral documents
and documents at the focus of attention exist in separate
virtual spaces. Thus, drawing a new document into focus, or
setting one aside, requires a shift of attention to a different
representational space, and the dynamics of this will
necessarily take more time and effort.
The spontaneous dynamics of spatial layout may also be
facilitated by improvements in interaction techniques. For
example, designers could consider doing away with the idea
of constraining document movement to “active” areas such as
title bars. There are various window managers in current
systems that allow the whole document to be made “active”
(e.g. Motif Window Manager in UNIX). However, this is
done by requiring users to enter a navigation mode.
Consistent with a previous suggestion, the need for these
temporal modes might be handled by the use of multiple,
specialist input devices such as hand versus stylus.
Another implication which arises from the experiment is the
need for increased flexibility in the way that readers can
arrange on-line documents in the space available. This ranges
from simple suggestions like removing constraints on
readers’ ability to lay any two pages side-by-side, to the
possibility of placing documents and pages at angles to each
other. Allowing readers to make virtual “piles” is also an
interesting idea, and one that interface designers have tried
elsewhere [14].
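The kind of freeform layout suggested here — pages placed anywhere, at angles, gathered into loose piles, and pulled in and out of the focus of attention — can be sketched as a small data model. The names and behaviour are hypothetical, not the API of any existing window manager:

```python
from dataclasses import dataclass

@dataclass
class Placement:
    name: str
    x: float
    y: float
    angle: float = 0.0    # degrees; paper-like free rotation

class Workspace:
    """Sketch of a freeform workspace: any page anywhere, at any angle."""

    def __init__(self):
        self.stack = []                      # back-to-front draw order

    def place(self, name, x, y, angle=0.0):
        self.stack.append(Placement(name, x, y, angle))

    def _index(self, name):
        return next(i for i, p in enumerate(self.stack) if p.name == name)

    def bring_to_front(self, name):
        # Pull a peripheral document into the centre of attention.
        self.stack.append(self.stack.pop(self._index(name)))

    def pile(self, names, x, y):
        # Gather related pages into a loose, slightly fanned pile.
        for offset, name in enumerate(names):
            p = self.stack[self._index(name)]
            p.x, p.y, p.angle = x + 4 * offset, y + 4 * offset, 3.0 * offset
```

Because focus and periphery share one spatial continuum (a single `stack` with positions), moving a page to or from the periphery is a single operation, unlike switching to a separate miniaturised overview.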
An alternative approach to increasing the flexibility of spatial
layout is to treat portable, wireless displays as a way of
providing a physical embodiment for documents. As with
paper, documents can be placed by moving the display itself,
rather than moving documents within a display. However,
unless one has many such displays, this approach does not
address the requirement of providing concurrent access to
multiple documents or pages. Nevertheless, the experiment has
pointed to the effectiveness of independent reading and
writing spaces in the paper domain. Thus, two such displays
— one for reading and one for writing — might deliver
valuable benefits.
CONCLUSIONS
Using a broadly descriptive approach, we have carried out a
laboratory study which draws attention to a number of
important implications for the design of better interfaces for
reading digital documents. Not only was this study motivated
by our own field studies of reading in real work settings, but
the design of the experiment was grounded in and guided by
these observations. In turn, the findings help us to understand
much of what we observed in the reading practices of these
organisations. For example, they give us a better understanding
of why economists at the IMF always mark up and review
their colleagues’ documents on paper, why they choose to
read important documents from paper, and why paper
documents surround their workstations as they do their
authoring work [21].
What we hope to have demonstrated is the value of
contrasting paper-based and on-line reading specifically with
an eye to design. By doing so, we can begin to unravel the
complexity of the design challenge, and begin to make some
better informed predictions about when and if we can ever
expect a paperless future.
ACKNOWLEDGEMENTS
Thanks to many members of RXRC Cambridge for
comments on previous drafts of this paper and help in
carrying out the experiment. Most especially thanks to Paul
Dourish, Richard Harper, and Jim Holmes.
REFERENCES
1. Anderson, T.H. and Armbruster, B.B. (1982). Reader
and text-studying strategies. In Otto, W. and White, S.
(Eds) Reading Expository Material. London: Academic
Press.
2. Bolter, J.D. (1991). Writing space: The computer,
hypertext, and the history of writing. Hillsdale, N.J.:
Erlbaum.
3. Creed, A., Dennis, I., and Newstead, S. (1987). Proof-
reading on VDUs. Behaviour and Information
Technology, 6, 3-13.
4. Dillon, A. (1992). Reading from paper versus screens: A
critical review of the empirical literature. Ergonomics,
Vol. 35(10), 1297-1326.
5. Dillon, A., Richardson, J., and McKnight, C. (1990)
Navigation in hypertext: a critical review of the concept.
In Diaper, D., Gilmore, D., Cockton, G., and Shackel, B.
(Eds) INTERACT ’90. North Holland, Amsterdam.
6. Egan, D. E., Remde, J. R., Gomez, L. M., Landauer,
T.K., Eberhardt, J., & Lochbaum, C.C. (1989).
Formative design-evaluation of SuperBook. ACM
Transactions on Information Systems, 7(1), 30-57.
7. Gaver, W. (1992). Auditory icons: Using sound in
computer interfaces. Human-Computer Interaction, 2,
167-177.
8. Gould, J.D. and Grischowsky, N. (1984) Doing the same
work with hard copy and cathode ray tube computer
terminals. Human Factors, 26, 323-337.
9. Gould, J.D., Alfaro, L., Barnes, V., Finn, R.,
Grischowsky, N., and Minuto, A. & Salaun, J. (1987)
Reading is slower from CRT displays than from paper:
Attempts to isolate a single variable explanation. Human
Factors, 29, 269-299.
10. Hardock, G. (1992). Unpublished Masters’ thesis. Dept.
of Computer Science, University of Toronto.
11. Johnson, W., Jellinek, H., Klotz, L., Rao, R., & Card, S.
(1993). Bridging the paper and electronic worlds: The
paper user interface. In Proceedings of INTERCHI ’93,
(24-29 April, Amsterdam), 507-512.
12. Lorch Jr., R.F., Lorch, E.P. and Klusewitz, M.A. (1993)
College students’ conditional knowledge about reading.
Journal of Educational Psychology, 85, 239-252.
13. Lovelace, E.A. and Southall, S.D. (1983) Memory for
words in prose and their locations on the page. Memory
and Cognition, 11, 429-434.
14. Mander, R., Salomon, G., & Wong, Y. (1992). A “pile”
metaphor for supporting casual organization of
information. In Proceedings of CHI ’92, (Monterey,
CA.), 627-634.
15. Muter, P., Latremouille, S.A., Treurniet, W.C., and Beam,
P. (1982) Extended reading of continuous text on
television screens. Human Factors, 24, 501-508.
16. Muter, P. & Maurutto, P. (1991) Reading and skimming
from computer screens: the paperless office revisited.
Behaviour & Information Technology, 10(4), 257-266.
17. Newman, W., & Wellner, P. (1992). A desk supporting
computer-based interaction with paper documents.
Proceedings of CHI ’92, 587-592.
18. Oakhill, J. and Garnham, A. (1988) Becoming a Skilled
Reader. Oxford: Blackwell.
19. Richardson, J., Dillon, A., McKnight, C. and Saadat-
Samardi, M. (1988) The manipulation of screen
presented text: experimental investigation of an interface
incorporating a movement grammar, HUSAT Memo
431, Loughborough University of Technology.
20. Rothkopf, E.Z. (1971) Incidental memory for location of
information in text. Journal of Verbal Learning and
Verbal Behavior, 10, 608-613.
21. Sellen, A.J., and Harper, R.H.R. (1996). Paper as an
analytical resource for the design of new technologies.
Submitted to CHI ‘97.
22. Winograd, P.N. (1984) Strategic difficulties in
summarising texts. Reading Research Quarterly, 19,
404-425.
23. Wright, P. and Lickorish, A. (1983) Proof-reading texts
on screen and paper. Behaviour and Information
Technology, 2, 227-235.