This document discusses verbal plurality in natural language. It proposes that verb roots, like noun roots, have predicative, cumulative meanings from the start. This predicts that sentences describing multiple events with a single verb, such as "Twenty children ate ten pizzas," can receive a cumulative interpretation without additional operators: verbal plurality comes from the lexical cumulativity of roots rather than from syntactic pluralization operators, while phrasal plurality is constrained and arises through agreement with a nearby plural DP in the structure. The document also argues against a [singular] feature in English, proposing that singular forms reflect the absence of a plural feature rather than the presence of a singular one.
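The notion of cumulativity that this proposal relies on can be made explicit. The following is the standard definition from the plural-semantics literature (as in the work of Krifka cited later in the slides), not a formula taken from the document itself:

```latex
% A one-place predicate P is cumulative iff it is closed under sum formation:
\forall x \forall y \,[\, (P(x) \wedge P(y)) \rightarrow P(x \sqcup y) \,]

% For a two-place relation R between individuals and events, the
% cumulation operator ** adds all pointwise sums: **R is the smallest
% relation containing R such that
\forall x \forall y \forall e \forall e' \,[\,
  ({**}R(x)(e) \wedge {**}R(y)(e'))
    \rightarrow {**}R(x \sqcup y)(e \sqcup e') \,]
```

On the lexical-cumulativity view, verb roots come with this closure built in, so no separate pluralization operator is needed in the syntax.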
Ena121 & 131 grammar lecture 1 word classes & clause elements
Kratzer05.key
1. On the Plurality of Verbs
Angelika Kratzer
Ventsislav Zhechev
Event Semantics, SS 2006
Sigrid Beck, Arnim von Stechow
2. Agenda
• Pluralization in Natural
Language
• Nominal number.
Eliminating [singular] as
a number feature
• On the plurality of verbs.
Lexical Cumulativity
• How many readings?
Phrasal Cumulativity
• Evidence for
Lexical Cumulativity
• The source of
Phrasal Cumulativity
• Conclusion
2
4. Are there plural verbs and how
did they become that way?
• Fall
• denotes a relation between individuals and events
• Verb meanings start out singular
• singular individuals are being linked to singular events
4
5. • Krifka (1992) and Landman (1996) suggest that
verbs are born as plurals
• Then
• fall could also link plural individuals to plural events
On the other hand…
5
6. What about the VPs?
• VPs and bigger verbal projections
• can be plural too
• but their plurality cannot always be inherited
from their verbs
• so there must be another source of
pluralization
6
7. What about the VPs?
• Sternefeld (1998), Sauerland (1998), Beck
(2000), and Beck and Sauerland (2000) have
proposed that there is an optional and freely
available operator in the syntax that
pluralizes predicates
7
8. The claims of this paper
• There is a distinctive theoretical place for
lexical pluralization
• The pluralization of phrasal verbal projections
is constrained:
• it can only occur in the immediate neighborhood
of a DP with plural agreement morphology
8
9. How do you pluralize a predicate?
• The domain of entities De should contain both
singular and plural individuals
• Following Link (1983), we construe plural
individuals as sums and
• assume that De is cumulative:
• whenever x and y are in De, so is x+y
9
10. How do you pluralize a predicate?
• In addition we need a domain of events Ds
• The sum operation is also defined for events and
• Ds can also be assumed to be cumulative
10
11. How do you pluralize a predicate?
• Following Krifka (1986) we extend the sum
operation to ordered pairs built from members
of De and Ds:
• <Mary, fall1>+<John, fall2> = <Mary+John, fall1+fall2>
• Now we can define pluralization as
• an operation ∗ that maps sets that come with a sum
operation to their smallest cumulative superset
11
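For finite domains, Link's sums and the ∗-operator can be modelled directly, with plural individuals as sets of atoms. A minimal sketch (the names and the set-based encoding are illustrative assumptions, not part of the original formalism):

```python
from itertools import combinations

def atom(name):
    # a singular (atomic) individual or event, encoded as a one-element frozenset
    return frozenset([name])

def sum_(x, y):
    # Link's sum operation on (possibly plural) entities: set union
    return x | y

def pair_sum(p, q):
    # Krifka's pointwise extension of sum to ordered pairs:
    # <Mary, fall1> + <John, fall2> = <Mary+John, fall1+fall2>
    return (p[0] | q[0], p[1] | q[1])

def star(s, add):
    # the *-operator: the smallest cumulative superset of s,
    # obtained by closing s under the sum operation `add`
    closed = set(s)
    changed = True
    while changed:
        changed = False
        for x, y in combinations(list(closed), 2):
            if add(x, y) not in closed:
                closed.add(add(x, y))
                changed = True
    return closed

mary, john = atom("Mary"), atom("John")
fall1, fall2 = atom("fall1"), atom("fall2")

# *{Mary, John} = {Mary, John, Mary+John}
print(sorted(len(x) for x in star({mary, john}, sum_)))      # [1, 1, 2]

# starring the atomic <faller, fall> pairs adds the plural pair
fall = {(mary, fall1), (john, fall2)}
print((mary | john, fall1 | fall2) in star(fall, pair_sum))  # True
```

Closure under sum is exactly what cumulativity requires, so `star` is idempotent: starring an already-starred set changes nothing.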
13. Some questions
• Where do pluralization operators show up?
• Why do they show up where they do?
• How are they related to plural morphology on
nouns and verbs?
• What is their semantic effect?
• If there is pluralization, shouldn’t there be
singularization too?
13
15. If there is pluralization, shouldn’t
there be singularization too?
• There is no such thing as singular number
• Therefore, we do not expect operators that
singularize
15
16. Cumulativity
• Manfred Krifka’s universal:
• Simple predicates in natural language typically are cumulative
• Apparent counterexamples:
• singular count nouns like child, chair, chin
16
17. • Following Link, the extensions of singular
count nouns are taken to be sets of singularities
• Hence, they could not be cumulative:
• If Josephine is a child, and Beatrice is too, the sum
of Beatrice and Josephine is not a child
Cumulativity
17
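Cumulativity, on this construal, is just closure under sum, so the failure for singular count nouns can be checked mechanically. A toy illustration (names invented):

```python
def cumulative(s):
    # a predicate extension is cumulative iff it is closed under sum (set union)
    return all(x | y in s for x in s for y in s)

josephine = frozenset(["Josephine"])
beatrice = frozenset(["Beatrice"])

# the Link-style extension of the 'singular' noun child: singularities only
child = {josephine, beatrice}
print(cumulative(child))        # False: Josephine+Beatrice is not a child

# its smallest cumulative superset *child does contain the sum
star_child = child | {josephine | beatrice}
print(cumulative(star_child))   # True
```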
18. • However, child, chair, or chin are not necessarily
simple predicates:
• they may already be complex by the time we get
to see or hear them
• They each might consist of
• a root (√child, √chair, √chin) and
• a piece of nominal inflection
18
19. • Müller (2000),
Rullmann and You (2003)
• common noun roots have
predicative, number-
neutral (transnumeral)
denotations
• the root √child would
denote the set consisting
of all singular children
and their sums
• Krifka (1995) and
Yang (2001)
building on Carlson (1977)
• noun roots are referential
and refer to kinds
• it would now be part of
the job of nominal
inflection to turn those
roots into predicates
Two views on the nature of roots
19
20. Roots as kinds
• This proposal does not assume any particular
mode of individuation or portioning for the
denotations of noun roots
• Cross-linguistically, the function of
individuating and portioning is often carried by
classifiers
20
21. Classifiers
• Following Krifka (1995):
• English has a multiply ambiguous
non-overt classifier and
• the noun forms that are usually categorized
as ‘singular’ are in reality roots with
an incorporated classifier
21
23. • The word zebra is a ‘singular’ predicate by the
time we see or hear it
• It was turned into a predicate by an
incorporated ambiguous classifier, and is
therefore ambiguous, too
23
a. ⟦√zebra⟧ = zebra
b. ⟦CLind⟧ = λx.λy.[kind(x) ∧ individual(y) ∧ y ≤ x]
c. ⟦CLkind⟧ = λx.λy.[kind(x) ∧ kind(y) ∧ y ≤ x]
24. (3)
a. This zebra has not been fed.
b. This zebra is almost extinct.
(4)
a. Those zebras have not been fed.
b. Those two zebras are almost extinct.
24
25. (5)
a. This wine is for table 8.
b. You dropped two red wines.
(6)
a. This Pinot Noir is rare.
b. We tasted five different Pinot Noirs.
25
27. Mass nouns
• In English they are predicative by the time we
see them
• hence, should come with an obligatory classifier
• Following Chierchia (1998), that classifier
should map a kind into the set of all of its
singular or plural realizations
• thus, predicative mass nouns are already
pluralized by their classifier and
• are not subject to further pluralization
27
28. • Unless mass nouns combine with an individual
or a kind classifier, they cannot project [plural]
• Both mass nouns and non-plural count nouns
project only a classifier
• therefore they both lack [plural] and
• therefore they both trigger singular agreement
28
29. • Kratzer suggests that there is no feature
[singular] in English
• What we call singular might be
the absence of plural
• in the morphology and
• in the semantics
• Non-overt classifiers are responsible for the
presence of singular predicates
29
To sum up
30. To sum up
• Agreement phenomena show that there is a tight
connection between nominal and verbal number
• if there is no nominal [singular]
• then there is no verbal [singular] either
• The connection between nominal and verbal number
should then be established via [plural] alone:
• when we see singular agreement, what we see is
inflection that is there because of the absence of [plural]
30
32. Krifka’s cumulativity universal
• Simple predicates in natural language typically are cumulative
• We posit referential denotations for noun roots
• Thus they are never predicative
• hence, they satisfy Krifka’s universal trivially
• English nouns may become predicative
in the course of a syntactic derivation
• but when they do, they are no longer simple
32
33. What about verb roots and
verb stems?
• Verbs have the characteristic property
of taking arguments
• Some of those arguments seem to be syntactically
acquired in the course of a derivation
• Marantz (1984) and Kratzer (1994) have argued that
external arguments are always added in the syntax
• Pylkkänen (2001, 2002) makes the same point
for applicative arguments
33
34. What about verb roots and
verb stems?
• Some direct internal arguments seem to be
introduced syntactically, too, via secondary
predicates or serialization, for example
• But there are also transitive and unaccusative
verbs with inherently relational meanings:
• relate, connect, resemble, surpass, outdo,
depend, hinder, cause
34
35. What about verb roots and
verb stems?
• With many transitive and unaccusative verbs,
the kind of event described varies with the
kind of direct internal argument in sometimes
erratic ways
• pick, pop, rise
• It would be hard to account for this
dependency under the assumption that
a verb’s argument structure is always
syntactically constructed
35
36. • Then there seems to be a large group of
inherently relational verb roots
• and this suggests that as a class, verb roots might
be predicative from the start
• then we would expect them to fall under Krifka’s
generalization, and have cumulative denotations
• And if external and applicative arguments are
not true arguments of their verbs,
• we need thematic role predicates like agent or goal
to introduce them
• those predicates’ denotations should then be
cumulative, too
36
37. • If the denotations of verbs and thematic role
predicates are cumulative from the start, we
expect effortless availability of a cumulative
interpretation for sentences like
• Twenty children ate ten pizzas.
• Formal representation:
• ∃e∃x∃y [∗child(x) ∧ |x| = 20 ∧ ∗agent(x)(e) ∧
∗pizza(y) ∧ |y| = 10 ∧ ∗eat(y)(e)]
• As in Landman (1996, 2000), the basic predicates
of the metalanguage are singular predicates that
are pluralized with the ∗-operator
37
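The formal representation can be verified on a toy model. The sketch below scales the sentence down to two children and two pizzas so the closures stay small; all atoms and the pairing of eaters with events are invented for illustration:

```python
from itertools import combinations

def star(s, add):
    # smallest cumulative superset: close s under the sum operation `add`
    closed = set(s)
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(closed), 2):
            if add(a, b) not in closed:
                closed.add(add(a, b))
                changed = True
    return closed

ssum = lambda x, y: x | y                       # sum on individuals/events
psum = lambda p, q: (p[0] | q[0], p[1] | q[1])  # sum on <individual, event> pairs

c1, c2 = frozenset(["c1"]), frozenset(["c2"])   # the children
p1, p2 = frozenset(["p1"]), frozenset(["p2"])   # the pizzas
e1, e2 = frozenset(["e1"]), frozenset(["e2"])   # one eating event per child

child = star({c1, c2}, ssum)
pizza = star({p1, p2}, ssum)
agent = star({(c1, e1), (c2, e2)}, psum)        # each child is agent of her own eating
eat   = star({(p1, e1), (p2, e2)}, psum)        # each pizza is eaten in one event

# witness for the cumulative reading: the sums of the children,
# the pizzas, and the eating events
x, y, e = c1 | c2, p1 | p2, e1 | e2
print(x in child and len(x) == 2)   # True: x is a plurality of two children
print((x, e) in agent)              # True: x is the (plural) agent of e
print(y in pizza and len(y) == 2)   # True
print((y, e) in eat)                # True: e is an eating of y
```

No phrasal operator is inserted anywhere: the witness pairs fall out of the lexically starred relations alone.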
38. (8) Two children lifted two boxes.
• Suppose the two children are Casey and Stacey, and
the two boxes are Red and Green. Casey lifted Red
on her own once, and Stacey did so twice. In
addition, Casey and Stacey jointly lifted Green.
• We have four events, e1, e2, e3, and e4:
38
Cumulativity and event semantics
Event | Box lifted | Box lifter
e1 | Red | Casey
e2 | Red | Stacey
e3 | Red | Stacey
e4 | Green | Casey + Stacey
39. (9) Extension of lift
• {<e1, Red>, <e2, Red>, <e3, Red>, <e4, Green>, …}
(10) Extension of agent
• {<e1, Casey>, <e2, Stacey>, <e3, Stacey>, <e4, Casey + Stacey>, …}
39
Event | Box lifted | Box lifter
e1 | Red | Casey
e2 | Red | Stacey
e3 | Red | Stacey
e4 | Green | Casey + Stacey
44. (13)
a. Casey and Stacey lifted Red.
b. Casey and Stacey lifted Green.
c. Casey lifted Red (at least) once.
d. Stacey lifted Red (at least) twice.
44
(14)
a. Red was lifted fourteen times.
b. Casey and Stacey together did eleven liftings.
45. • A basic principle of counting says that if
I count you as an entity, I can’t count
your head separately
• What really seems to count in counting
is atomicity
• The extension of ∗lift contains exactly three
atomic pairs that connect Red to a lifting event
• The extension of ∗agent contains exactly one
atomic pair that connects Casey and Stacey
to a lifting event.
45
(14)
a. Red was lifted fourteen times.
b. Casey and Stacey together did eleven liftings.
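The atom counts can be checked mechanically on this scenario. In the sketch below, pairs are encoded as <event, individual>, and `atoms` is a helper invented here that picks out the elements of a starred extension that are not sums of other elements:

```python
from itertools import combinations

def star(s, add):
    # smallest cumulative superset: close s under the sum operation `add`
    closed = set(s)
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(closed), 2):
            if add(a, b) not in closed:
                closed.add(add(a, b))
                changed = True
    return closed

psum = lambda p, q: (p[0] | q[0], p[1] | q[1])  # sum on <event, individual> pairs

E = {n: frozenset([n]) for n in ("e1", "e2", "e3", "e4")}
red, green = frozenset(["Red"]), frozenset(["Green"])
casey, stacey = frozenset(["Casey"]), frozenset(["Stacey"])

lift  = star({(E["e1"], red), (E["e2"], red),
              (E["e3"], red), (E["e4"], green)}, psum)
agent = star({(E["e1"], casey), (E["e2"], stacey),
              (E["e3"], stacey), (E["e4"], casey | stacey)}, psum)

def atoms(s):
    # an element is atomic in s iff it is not the sum of two other elements of s
    return {p for p in s
            if not any(psum(q, r) == p
                       for q, r in combinations(s, 2) if q != p and r != p)}

# exactly three atomic pairs connect Red to a lifting event ...
print(len([p for p in atoms(lift) if p[1] == red]))              # 3
# ... and exactly one atomic pair connects Casey+Stacey to one
print(len([p for p in atoms(agent) if p[1] == casey | stacey]))  # 1
```

Non-atomic pairs like <e1+e2, Red> are in the starred extension too, but they do not add to the count: counting tracks atoms.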
46. • Cumulation preserves all information we want
to extract from a verb’s extension
• we have all the information we might need to get
the semantics of adverbs like twice or three times,
or individually, or together right
46
To sum up
47. On the Plurality of Verbs
Angelika Kratzer
Ventsislav Zhechev
Event Semantics, SS 2006
Sigrid Beck, Arnim von Stechow
Part II
48. Agenda
• Pluralization in Natural
Language
• Nominal number.
Eliminating [singular] as
a number feature
• On the plurality of verbs.
Lexical Cumulativity
• How many readings?
Phrasal Cumulativity
• Evidence for
Lexical Cumulativity
• The source of
Phrasal Cumulativity
• Conclusion
48
50. How many readings?
(15) Two children lifted two boxes.
• there is a reading of (15) that lumps together what
are traditionally called collective and cumulative
interpretations, and doesn’t distinguish between
one-time and repetitive liftings
• Is it right to lump together all the
interpretations that others have taken
pains to distinguish?
50
51. Evidence from VP-ellipsis
• The ambiguity in the overt and in the silent
VP must be resolved in the same way
(16) I went to the bank, and you did, too.
(17) The two boys lifted the two boxes,
and the two girls did, too.
• Is (17) true in a situation in which the two boys
jointly lifted each of the two boxes,
but the two girls each lifted a different one
of the two boxes on her own?
51
52. Distributivity
• In addition to the cumulative interpretation,
(15) has two distributive interpretations
• Landman (1989) argued that when a plural DP
produces distributive interpretations of
this kind, they should be derived by
pluralizing its sister predicate
52
(15) Two children lifted two boxes.
53. Distributivity
(18)
a. (2 children) ∗ [lifted 2 boxes]
b. (2 boxes) ∗ λ1 [2 children lifted t1]
(19)
a. ∗λxλe∃y [∗box(y) ∧ |y| = 2 ∧ ∗lift(y)(e) ∧ ∗agent(x)(e)]
b. ∗λxλe∃y [∗child(y) ∧ |y| = 2 ∧ ∗lift(x)(e) ∧ ∗agent(y)(e)]
• Since starring a predicate always extends the
original extension, both 18(a) and (b) still cover
all the scenarios we discussed before
53
(15) Two children lifted two boxes.
54. Lumping the readings together
• If pluralization of verbal predicates is the
correct way of accounting for distributive
interpretations, we are committed to lumping
together interpretations in a particular way
54
1. Cumulative/Collective/Repetitive
2. Cumulative/Collective/Repetitive
+ Subject Distributive
3. Cumulative/Collective/Repetitive
+ Object Distributive
55. How many readings?
• If starring of a plural DP’s sister node is obligatory,
as will be suggested below, we only expect
two truly distinct readings for sentence (15),
one of which is highly dispreferred
• The difference boils down to whether or not
we move the object over the subject
55
(15) Two children lifted two boxes.
56. How many readings?
56. • As far as grammar goes, no distinction is made
between subject distributive, cumulative, collective,
and iterative interpretations
• All those different ways of understanding (15)
correspond to a single reading that can be
computed in a straightforward way from a
single syntactic representation
56
(15) Two children lifted two boxes.
57. Individuating the readings
• In ellipsis constructions, for example, it should
be possible to mix distributive and cumulative
or collective interpretations
(20) The two chefs cooked a stew, and the two students
did, too. The chefs were very experienced, so they each
prepared a Moroccan tagine. The two students worked
together on a Bœuf Bourguignon.
57
58. Individuating the readings
• Roger Schwarzschild has observed that separating
distributive and collective/cumulative interpretations
can have undesirable consequences in the scope
of negation
• Beasly, better make sure those guys don’t win a car this week!
58
59. To sum up
• We have seen what looked like initial support
for lexical cumulativity,
• but we have also seen that lexical cumulativity
alone is not enough
• Phrasal cumulativity is needed to account for
certain cases of distributive interpretations
• thus, we need ∗-operators that can pluralize phrases
59
60. To sum up
• Do we still need Lexical Cumulativity,
if we already have the ∗-operators?
• Can’t those ∗-operators alone do the jobs we
thought Lexical Cumulativity was responsible for?
60
62. Attacking Lexical Cumulativity
• Several authors have argued that the
denotations of verbs can be rendered
cumulative through the freely available
optional presence of syntactically represented
∗-operators that can pluralize any kind of
verbal predicate:
• lexical, phrasal, basic or syntactically derived
62
63. The goal of this section
• To show that
• the proposal of Sternefeld, Sauerland, and Beck
over-generates
• there is still a theoretically distinguished place
for Lexical Cumulativity and
Krifka’s Cumulativity Universal
63
64. (22) What does this intern do?
a. She guards a parking lot.
b. He cooks for an elderly lady.
c. She waters a garden.
d. He watches a baby.
e. She cleans an office building.
(23)
a. I dialed a wrong phone number for 5 minutes.
b. She bounced a ball for 20 minutes.
c. He kicked a wall for a couple of hours.
d. She opened and closed a drawer for half an hour.
e. I petted a rabbit for two hours.
64
65. • If ∗-operators could be inserted freely, they
could immediately produce 24(b) from 24(a),
for example, hence derive unattested
interpretations for the sentences in (22) and (23):
(24)
a. λe∃x [ball(x) ∧ ∗bounce(x)(e)]
b. ∗λe∃x [ball(x) ∧ ∗bounce(x)(e)]
65
Restrict ∗-operators?
(22) She guards a parking lot.
(23) She bounced a ball for 20 minutes.
66. • λxλe ∗bounce(x)(e)
• λR<e<st>>λe∃x [ball(x) ∧ R(x)(e)]
• λe∃x [ball(x) ∧ ∗bounce(x)(e)]
• being a possibly plural event e such that
there is a ball x and e is an event of bouncing x
66
[bounce a ball]VP
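The over-generation worry can be replayed on a two-event model: with Lexical Cumulativity alone, the sum of two bouncings of different balls stays out of the denotation of bounce a ball, but a freely inserted phrasal ∗-operator as in 24(b) lets it in, deriving the unattested reading. A sketch (the model is invented):

```python
from itertools import combinations

def star(s, add):
    # smallest cumulative superset: close s under the sum operation `add`
    closed = set(s)
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(closed), 2):
            if add(a, b) not in closed:
                closed.add(add(a, b))
                changed = True
    return closed

ssum = lambda x, y: x | y
psum = lambda p, q: (p[0] | q[0], p[1] | q[1])

ball1, ball2 = frozenset(["ball1"]), frozenset(["ball2"])
e1, e2 = frozenset(["e1"]), frozenset(["e2"])
ball = {ball1, ball2}                          # singular balls only

# lexically cumulative bounce: e1 is a bouncing of ball1, e2 of ball2
bounce = star({(e1, ball1), (e2, ball2)}, psum)
events = star({e1, e2}, ssum)                  # domain of (plural) events

# VP denotation with lexical * only: λe∃x[ball(x) ∧ *bounce(x)(e)]
vp = {e for e in events if any((e, x) in bounce for x in ball)}

# the summed event is NOT in the VP denotation: its theme would have
# to be the plural ball1+ball2, which is not a (singular) ball
print(e1 | e2 in vp)                           # False

# a freely inserted phrasal *-operator, as in 24(b), lets it in
print(e1 | e2 in star(vp, ssum))               # True
```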
67. • In (22), habitual aspect seems to be responsible
for the necessarily iterative interpretation
• In (23), durativity plays a similar role
• Given lexical cumulativity, we still predict the
facts in (22) and (23), even if the habitual
operator and durational adverbs take scope
over the indefinite direct object
67
Explaining (22) and (23)
(22) She guards a parking lot.
(23) She bounced a ball for 20 minutes.
68. (26)
a. Ich hab’ fünf Minuten lang
eine falsche Telefonnummer gewählt.
b. Ich hab’ eine falsche Telefonnummer
fünf Minuten lang gewählt.
‘I dialed a wrong phone number for five minutes.’ (both word orders)
68
69. • Suppose the denotation of durational adverbials
like for 5 minutes is as in (27):
(27) λP<st>λe [P(e) ∧ e = σe′ [P(e′) ∧ e′ < e] ∧ fminute(e) = 5]
• The definition in (27) uses Link’s σ-operator.
In our case, the operator maps the events in the set
{e′ | P(e′) ∧ e′ < e} to their supremum – if it exists
69
Durational adverbials
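The condition e = σe′[P(e′) ∧ e′ < e] in (27) can be sketched for finite event domains: only a properly plural event can be the supremum of its proper parts, so singular events can never satisfy a durational adverbial. An illustration (the dialing atoms and helper names are invented):

```python
from itertools import chain, combinations

def star(s):
    # close a set of events under sum (union)
    closed = set(s)
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(closed), 2):
            if a | b not in closed:
                closed.add(a | b)
                changed = True
    return closed

def parts(e):
    # all nonempty parts of a (possibly plural) event e
    xs = sorted(e)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(xs, n) for n in range(1, len(xs) + 1))]

def sigma(P, e):
    # Link's σ, restricted as in (27): the sum of the P-events
    # properly contained in e -- None if there are none
    sub = [x for x in parts(e) if x != e and P(x)]
    if not sub:
        return None
    out = frozenset()
    for x in sub:
        out |= x
    return out

d1, d2, d3 = (frozenset([n]) for n in ("d1", "d2", "d3"))
dial = star({d1, d2, d3})        # lexically cumulative dialing events
P = lambda e: e in dial

# a plural dialing event is the supremum of its proper P-parts ...
print(sigma(P, d1 | d2 | d3) == d1 | d2 | d3)   # True
# ... but an atomic event has no proper parts, so the condition
# in (27) can never be met by a singular event
print(sigma(P, d1))                              # None
```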
70. • Following Morzycki’s Program of Modified
Modification (Morzycki 2004) and the
independently developed analysis of durational
adverbs in van Geenhoven (2004), we would
eventually want to split up the denotation of
durational adverbials like for 5 minutes into at
least two parts:
(28)
a. λP<st>λe [P(e) ∧ e = σe′ [P(e′) ∧ e′ < e]]
b. λe fminute(e) = 5
70
71. • The denotation of dial a number for 5 minutes,
for example, can now be computed by applying
the denotation of for 5 minutes to
the denotation of dial a number
• the VP dial a number is thus clearly in
the scope of for 5 minutes
• The result is the denotation in (29):
(29)
λes∃x [number(x) ∧ ∗dial(x)(e) ∧
e = σe′ [∃x [number(x) ∧ ∗dial(x)(e′)] ∧ e′ < e] ∧
fminute(e) = 5]
71
72. • Assuming Lexical Cumulativity,
• iterative interpretations for verbs are possible
from the very start
• iterativity without concurrent object distributivity
is the automatic result of introducing an ordinary
singular indefinite in the early stages of
a syntactic derivation
72
To sum up
73. To sum up
• Habitual operators and durational adverbs
no longer have to pluralize the predicates
they operate over, or introduce quantification
over sub-events
• they merely have to make sure that those predicates
do not describe any singular events, but are properly
plural in a lexically defined sense
• Thus, we no longer have to stipulate obligatory
narrow scope for such operators
• the desired interpretations can be derived,
even if the relevant aspectual operators are
sitting above direct objects
73
74. To sum up
• ∗-operators cannot be inserted freely
• If they could, we wouldn’t expect the ‘failure of
distribution’ effect illustrated in (22) and (23)
• What is the force that produces
phrasal cumulativity,
hence, many cases of distributivity?
74
(22) She guards a parking lot.
(23) She bounced a ball for 20 minutes.
76. What makes phrasal
cumulativity possible?
• It cannot be an accident that none of the
sentences in (22) or (23) contained any plural DPs
• Maybe phrasal ∗-operators are necessarily tied to
the presence of plural DPs in some way or other
• Schwarzschild (1993-94), for example, proposed
that all plural VPs are obligatorily translated
with the ∗-operator, hence always have
cumulative denotations
76
(22) She guards a parking lot. (23) She bounced a ball for 20 minutes.
77. Reformulating Schwarzschild
• At the level where semantic interpretation
takes place, sister constituents of plural DPs
are pluralized, regardless of whether they are
still in their base position or have moved away
77
78. Plural DPs
• Following Sauerland (2005),
when a plural DP is built
from a determiner and
a plural noun, for example,
both the noun and the
determiner come with their
own number projection
78
[DP tree: [plural [Det [plural [classifier N]]]]]
79. What is the role of the higher
[plural]?
• It does not seem to be interpretable within its DP
• Suppose that nominal [plural] is always
interpretable, and it always carries
the cross-categorial plural operator
• then the higher [plural] is forced to move out
before semantic interpretation takes place
79
80. What is the role of the higher
[plural]?
• Moving as little as possible, it could become
a verbal inflectional head right below its DP
• In this way, a DP could literally create its
own agreement projection, possibly on top
of another verbal projection
80
81. What is the role of the higher
[plural]?
• When [plural] migrates
out of its DP, we get
a ∗-operator that
pluralizes the DP’s
sister node, possibly
showing up as overt
verbal agreement
81
[Diagram: the DP’s [plural] feature, = ∗, pluralizes the DP’s sister predicate]
82. • An immediate prediction of this proposal is
that pluralization of phrasal verbal projections
should require the presence of DPs with
[plural] agreement features in English
• But distributive/cumulative interpretations that
can be produced by Lexical Cumulativity alone,
should also be available for singular DPs
82
83. (30)
a. She sent her offspring to 5 different boarding schools.
b. She sent her offspring to a boarding school.
(31)
a. She sent her children to 5 different boarding schools.
b.She sent her children to a boarding school.
83
Testing the prediction
84. Testing the prediction
• Why is (32) bad?
(32) *Her offspring each went to a boarding school.
• A possible explanation is that floated each
needs to agree with [plural]
84
85. Mass nouns
(33)
a. All that furniture was loaded onto 5 trucks.
b. All that furniture was loaded onto a truck.
(34)
a. Her offspring inherited all her jewelry.
b. Her offspring inherited a villa in Tuscany.
85
86. Mass nouns
(35)
a. The moss on the rocks and the moss on the trees is blighted.
b. Jane’s china and Alice’s china was stored in separate closets.
(36)
a. The sugar for the coffee and the sugar for the cake was stored in a plastic jar.
b. Jane’s silverware and Patsy’s silverware was sent to a cousin.
86
87. • It should not be possible to simultaneously
cumulate two non-event arguments
• we do not have lexical predicates with more than
two non-event arguments to begin with, and
• plausible assumptions about movement do not
seem to allow us to derive any such predicates
in the syntax
• Thus, a DP’s sister constituent can only be of
type <et> or <e<st>>
• hence, the pluralization operation could not affect
any other non-event argument position apart
from the one that is about to be saturated by
the DP triggering the pluralization
87
88. • Beck and Sauerland (2000):
(37) These 5 teachers gave a bad mark to those 20 protesting students.
• Beck and Sauerland argue that the intended
interpretation of (37) can only be derived by
pluralizing the following 2-place relation:
• λxλy∃z [bad-mark(z) ∧ gave-to(y)(z)(x)]
88
89. • The key for (37) is neo-Davidsonian association
of the agent argument, coupled with
movement of the indirect object to
a position right above the direct object
• The moved DP’s sister predicate would now
be pluralized and would wind up with
the denotation in (38):
(38) ∗λyλe∃z [bad-mark(z) ∧ ∗gave(z)(e) ∧ ∗goal(y)(e)]
• This pluralized predicate is of type <e<st>>,
hence only has one non-event argument
89
(37) These 5 teachers gave a bad mark to those 20 protesting students.
90. (39) ∗λyλe∃z [bad-mark(z) ∧ ∗gave(z)(e) ∧ ∗goal(y)(e)]
those 20 protesting students
(40) λe [∗agent(those 5 teachers)(e) ∧
((∗λyλe∃z [bad-mark(z) ∧ ∗gave(z)(e) ∧ ∗goal(y)(e)])
(those 20 protesting students))(e)]
• The interpretation captured in (40) says that those
five teachers were the agents of an event
in which those 20 protesting students received one
or more bad marks
90
(37) These 5 teachers gave a bad mark to those 20 protesting students.
91. (41) The children are
holding a wheel.
• In situations of this
kind, (41) is false or
at least “highly strange”
91
Winter (2000)
[Figure 5: Winter’s wheel-holding scenario]
92. Winter (2000)
• On Winter’s own account, (41) would be true
just in case each child is holding a wheel
• Winter predicts (41) to be false in his scenario, then
92
(41) The children are holding a wheel.
93. Winter (2000)
• Boy1 and Boy2 are holding a wheel,
and so are Boy2 and Boy3
• The denotation of the unstarred VP in (41)
is therefore true of the two pluralities
Boy1+Boy2 and Boy2+Boy3
• If the plural subject the children induces starring
of the VP, the denotation of that VP is true
of Boy1+Boy2+Boy3, and hence of the children
• (41) is thus predicted to be true
on Winter’s scenario
93
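This starring step can be checked directly in the individual-based setting (an illustrative sketch; the names are invented):

```python
from itertools import combinations

def star(s):
    # close a set of (possibly plural) individuals under sum (union)
    closed = set(s)
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(closed), 2):
            if a | b not in closed:
                closed.add(a | b)
                changed = True
    return closed

b1, b2, b3 = (frozenset([n]) for n in ("Boy1", "Boy2", "Boy3"))

# unstarred VP extension: the pluralities that hold a wheel together
vp = {b1 | b2, b2 | b3}
children = b1 | b2 | b3

print(children in vp)        # False
print(children in star(vp))  # True: starring verifies (41), contrary to intuition
```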
(41) The children are holding a wheel.
94. Winter (2000)
in an event-based approach
• On Kratzer’s account, the sister constituent
of the plural subject in (41) expresses a relation
between individuals and events, and it is
that relation that is cumulated:
(42)
a. λxλe [∗agent(x)(e) ∧ ∃y [wheel(y) ∧ ∗hold(y)(e)]]
b.∗ λxλe [∗agent(x)(e) ∧ ∃y [wheel(y) ∧ ∗hold(y)(e)]]
94
(41) The children are holding a wheel.
95. Winter (2000)
in an event-based approach
• Does the pair consisting
of the three boys and
the event e represented
in Winter’s scenario
satisfy the starred
relation in 42(b)?
• It could only do so if
there are pairs <x1, e1>
and <x2, e2> that satisfy
the relation in 42(a),
where x1+x2 = the children
and e1+e2 = e
95
• However, the event represented in Figure 5 is most
naturally conceptualized as a single event
• there are no natural, but only ‘strange’ or artificial,
ways of conceptualizing it as the sum of two subevents
• the subevents singled out in Figure 6, for example, do not
count among the atoms in our domain of events
[Figure 6: an artificial division of the scenario into two subevents]
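The event-based diagnosis can be sketched under these assumptions: the scenario is one atomic event whose held theme is the plural sum of the two wheels, and the event domain supplies no natural subevents (all modelling choices here are illustrative, not the paper's own):

```python
# assumptions (not in the slides): one atomic event e in which the plural
# sum wheel1+wheel2 is held, with the three boys together as its agent
b1, b2, b3 = (frozenset([n]) for n in ("Boy1", "Boy2", "Boy3"))
w1, w2 = frozenset(["wheel1"]), frozenset(["wheel2"])
e = frozenset(["e"])                 # atomic: no proper subevents exist

wheel = {w1, w2}                     # the singular wheels
hold = {(w1 | w2, e)}                # what is held in e: the plural sum
agent = {(b1 | b2 | b3, e)}

# the relation in 42(a): λxλe[*agent(x)(e) ∧ ∃y[wheel(y) ∧ *hold(y)(e)]]
def rel(x, ev):
    return (x, ev) in agent and any((y, ev) in hold for y in wheel)

# <the children, e> fails 42(a): no singular wheel is held in e; and since
# e has no subevents, no pairs <x1,e1>, <x2,e2> can sum to it either, so
# the starred relation 42(b) fails as well -- (41) is predicted odd/false
print(rel(b1 | b2 | b3, e))          # False
```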
96. Winter (2000)
in an event-based approach
[Figure 7: a scenario in which the holding events are clearly individuated]
• Rather than presenting a challenge to our account,
Winter’s example provides a surprising piece of support
96
• We expect (41) to be judged true in scenarios where the
relevant subsituations are individuated more clearly
• and in the scenario above sentence (41) is clearly true
(41) The children are holding a wheel.
97. To sum up
• Plural DPs are themselves sources of
phrasal cumulativity
• or more concretely, their higher [plural]
features are (in the sense of Sauerland (2005))
• pluralizing their DP’s sister node seems to be the
only way for those features to be put to semantic use
• Within an event semantics, a DP’s
sister node often denotes a relation
between individuals and events
• consequently, judgments about the truth
of sentences like (15) are bound to be sensitive
to the individuation of events
97
(15) Two children lifted two boxes.
99. • It seems that there are indeed
at least two pluralization mechanisms
at work in languages like English
• one is Lexical Cumulativity,
which seems to be universal
• the other one is carried by
the inflectional feature [plural]
99
100. • is always interpretable, and
• always denotes the cross-categorial ∗-operator
• always originates within a DP and pluralizes
nominal or verbal projections, depending
on whether it occupies a high or a low
position within its DP
• the low position provides access to a noun
• the high position provides access to a verbal projection
100
[plural]
101. (35)
• To get a distributive interpretation for
Chinese sentences like (35), the overt
distributivity operator dou has to be used
• in Chinese, then, dou is a carrier of
the ∗-operator (Lin 1998, Yang 2001)
Chinese
Tamen mai-le yi-bu chezi.
They buy-Asp one-CL car
‘They bought a car.’
Lin (1998), 201
101
102. (36) Eine Kanne Milch hat jeweils
ein Pfund Käse produziert.
‘In each case, a jug of milk produced a pound of cheese.’
• jeweils may be given the following
interpretation:
(37) λP<st>λe [e = σe′ [P(e′) ∧ e′ < e]]
German
102
103. • Phrasal plurality is not always
linked to nominal [plural]
• the feature [plural] does not have to be
the one and only possible source of
phrasal plurality, even in a language
that also has [plural]
103
104. • There is some indication that
subject-distributivity is hard to get
when the subject is left in a low position
(38) Am Nebentisch rauchten vier Männer eine Zigarre.
‘At the next table, four men were smoking a cigar.’
• Subjects sitting in low positions are also
known to have different agreement properties
in some languages
(39) Il est arrivé des enfants.
‘There is arrived children’
Subject-distributivity
104