Richard Caneba proposes a feature structure unification approach to syntactic parsing that parses language from left to right. Each word posits a sequence of head-dependency relationships that form a "phrasal chain". Grammar rules unify these chains through feature structure unification. This approach captures intuitions about language understanding occurring left to right without requiring additional structure like hierarchical phrase structure trees. The goal is to develop this approach to handle ungrammaticality, garden path sentences, and integrate semantics and discourse.
2. Intuitions
• An interpretive grammar views syntax as finding the most
appropriate sequence of head-dependency relationships
between phrases and words.
• Language understanding occurs (roughly) left to right.
• Syntactic trees have a flat structure that gives no syntactic
preference to sequences of adjunctive modifiers of the same
category (adjectives, adverbs, modifying prepositional
phrases).
• We can infer a number of things immediately from the
perception of a word, although by no means all things.
3. Intuitions cont’d
• There are many patterns in natural language that can be
deterministic in some cases, and must be
defeasible/probabilistic in others.
• Reliably deterministic:
• [Det N] => NP[Det N]
• [Adj N] => NP[Adj N]
• Defeasible:
• [V NP NP…] (<1.0)> VP[V NP NP…]
• [V NP NP…] (<1.0)> VP[V NP [NP…]…]
• Attempt search ONLY if there is a genuine ambiguity as to
what the next step in an L-R parse should be:
• Second object vs. relative-clause modifier in a ditransitive context
• Prepositional phrase attachment
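The split between deterministic and defeasible patterns can be sketched as a dispatch table. A minimal illustration (the rule table, probabilities, and the `step` helper are my own, not from the talk):

```python
# Deterministic rewrites fire immediately during a left-to-right pass;
# defeasible ones carry weight < 1.0 and trigger search only when the
# next parse step is genuinely ambiguous. (Illustrative values only.)

DETERMINISTIC = {
    ("Det", "N"): "NP[Det N]",
    ("Adj", "N"): "NP[Adj N]",
}

DEFEASIBLE = {
    # [V NP NP...]: ditransitive VP vs. a nested-NP reading.
    ("V", "NP", "NP"): [("VP[V NP NP]", 0.6), ("VP[V NP[NP]]", 0.4)],
}

def step(tags):
    """Return (result, needs_search) for one left-to-right parse step."""
    key = tuple(tags)
    if key in DETERMINISTIC:
        return DETERMINISTIC[key], False   # no search needed
    if key in DEFEASIBLE:
        return DEFEASIBLE[key], True       # genuine ambiguity: search
    return None, False

print(step(["Det", "N"]))        # deterministic, no search
print(step(["V", "NP", "NP"]))   # ambiguous, ranked candidates
```

The point of the `needs_search` flag is exactly the bullet above: search is attempted only when more than one continuation is live.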
4. Feature Structure Unification
• A traditional challenge with the HPSG theory of grammar is
that, in order to preserve the recursiveness of its grammar
rules, it is required to have a “right-branching”
structure that posits additional feature structure nodes for
each dependency-head relationship the theory posits.
• This is to some extent cognitively unrealistic:
• It posits an unnecessary amount of structure for a syntactic parse.
• Intuitively there is no syntactic distinction to be made
between sequences of adjuncts (it’s hard to tell the difference
between “the angry green dog” and “the green angry dog”).
5. Lexical Representation of Syntax
• Each word posits a sequence of head-dependency
relationships that form a “phrasal chain.”
• These chains are based on the notion that we can infer
immediately some head-dependency relationships based on
the syntactic category of the word.
• Roughly, each node in a chain is of three types (not explicitly
defined in the lexicon, but nonetheless present):
• Word Level (WordUtteranceEvent)
• Dependency Level (PhraseUtteranceEvent)
• HeadLevel (PhraseUtteranceEvent)
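A minimal data-structure sketch of such a chain (the class and field names here are my own invention, not the talk's implementation):

```python
from dataclasses import dataclass, field

@dataclass
class ChainNode:
    level: str       # "word", "dependency", or "head"
    event: str       # "WordUtteranceEvent" or "PhraseUtteranceEvent"
    category: str    # e.g. "Determiner", "Noun"
    features: dict = field(default_factory=dict)

def posit_chain(word, pos):
    """Posit the phrasal chain a word projects from its category alone."""
    if pos == "Det":
        return [ChainNode("word", "WordUtteranceEvent", "Determiner",
                          {"Phon": word}),
                ChainNode("dependency", "PhraseUtteranceEvent", "Noun"),
                ChainNode("head", "PhraseUtteranceEvent", "XP")]
    if pos == "N":
        return [ChainNode("word", "WordUtteranceEvent", "CommonNoun",
                          {"Phon": word}),
                ChainNode("dependency", "PhraseUtteranceEvent", "Noun"),
                ChainNode("head", "PhraseUtteranceEvent", "XP")]
    raise ValueError(f"no entry for {pos}")

chain = posit_chain("dog", "N")
print([n.level for n in chain])   # ['word', 'dependency', 'head']
```

Note that the three levels are not stored in the lexicon explicitly; they fall out of the chain the category projects.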
6. Lexical Representation of Syntax
• Let’s do a quick example to show the lexical syntactic
representation:
• “the angry dog”
• With part-of-speech tags, that is:
• [Det the][Adj angry][N dog].
• The representation in di-graph form:
7. Lexical Representation of Syntax
[Diagram: syntactic entry for the common noun “dog” in di-graph form. The WordUtteranceEvent (Phon: “dog”, IsA CommonNoun) is linked by PartOf to PhraseUtteranceEvent nodes (IsA Noun), with a Specifier link to a Determiner and CandType links to Verb, Preposition, and Noun.]
8. Lexical Representation of Syntax
[Diagram: syntactic entry for the adjective “angry”. The WordUtteranceEvent (Phon: “angry”, IsA Adjective) is linked by PartOf to a PhraseUtteranceEvent (IsA Noun) with a CandType link to Noun.]
NOTE: will need to posit a dependency layer, to account for adverbs that
modify the adjective, i.e. “really big”.
9. Lexical Representation of Syntax
[Diagram: syntactic entry for the determiner “the”. The WordUtteranceEvent (Phon: “the”, IsA Determiner) is linked by PartOf to PhraseUtteranceEvent nodes (IsA Noun) with CandType links to Verb, Preposition, and Noun.]
10. Grammar Rules
• In our example, we will need to have at least two rules:
• One that unifies the structures posited by the determiner to the
structures posited by the common noun
• One that unifies the structures posited by the adjective, either to the
determiner or the noun
• Let’s consider this from L-R:
• First, unify the Det-NP-XP structure chain to the Adj-NP structure
chain
• Next, unify that resulting structure chain to the N-NP-XP structure
chain
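The left-to-right order of those two unifications can be made concrete with a toy unifier over flat feature dicts (a stand-in for full feature-structure unification; the feature names are illustrative, not the talk's):

```python
def unify(a, b):
    """Merge two feature dicts; return None on a feature clash."""
    out = dict(a)
    for feat, val in b.items():
        if feat in out and out[feat] != val:
            return None        # incompatible values: unification fails
        out[feat] = val
    return out

det_chain  = {"cat": "NP", "Spr": "Determiner"}    # "the"
adj_chain  = {"cat": "NP"}                         # "angry"
noun_chain = {"cat": "NP", "head": "CommonNoun"}   # "dog"

# First, unify the Det chain with the Adj chain...
det_adj = unify(det_chain, adj_chain)
# ...then unify that result with the N chain.
np = unify(det_adj, noun_chain)
print(np)  # {'cat': 'NP', 'Spr': 'Determiner', 'head': 'CommonNoun'}
```

Information accumulates monotonically across the two steps: the adjective adds nothing that clashes, and the noun supplies the head the determiner's chain left open.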
11. Grammar Rules
• Determiner-Adjective Rule
[Diagram: the phrasal chains for “the” (Determiner) and “angry” (Adjective) side by side, each with its Noun-typed PhraseUtteranceEvent and CandType links to Verb, Preposition, and Noun.]
12. Grammar Rules
• Determiner-Adjective Rule
[Diagram: the same two chains, now with a Same link identifying the Noun node of the determiner’s chain with the Noun node of the adjective’s chain.]
13. Grammar Rules
• Determiner-Adjective Rule
[Diagram: the unified structure after the Same link collapses the two Noun nodes into one node shared by “the” and “angry”.]
14. Grammar Rules
• We would like to allow anywhere from zero to arbitrarily many
adjectives to stand between the determiner and the noun
that selects the determiner as its specifier.
• We can achieve this by explicitly stating that whenever a Det
chain and an Adj chain are unified, the result is exposed as a
determiner on the right wall of the growing parse, as opposed
to an adjective.
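That re-exposure step amounts to a tiny state machine on the right wall; a sketch (the `combine` function and its labels are mine, for illustration):

```python
def combine(exposed, pos):
    """One combination step on the right wall of the growing parse."""
    if exposed == "Det" and pos == "Adj":
        return "Det"          # Det+Adj is re-exposed as a determiner
    if exposed == "Det" and pos == "N":
        return "NP"           # the noun finally closes the chain
    raise ValueError(f"no rule for {exposed} + {pos}")

exposed = "Det"                           # "the"
for pos in ["Adj", "Adj", "Adj", "N"]:    # "big angry green dog"
    exposed = combine(exposed, pos)
print(exposed)  # NP
```

Because each Det+Adj result is again a determiner, the same two rules cover zero or arbitrarily many adjectives with no extra machinery.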
15. Grammar Rules
• Determiner-Adjective Resulting Structure
[Diagram: the unified Det-Adj structure, with a single shared Noun node above “the” and “angry” and CandType links to Verb and Preposition.]
16. Grammar Rules
• Determiner-Adjective Resulting Structure + NP
[Diagram: the Det-Adj structure unified with the chain for the common noun “dog”, yielding a single NP whose WordUtteranceEvents are “the”, “angry”, and “dog”.]
17. Grammar Rules
• Expose the resulting structure from the Det-Adj unification as
just the Det structure:
[Diagram: two tree fragments — XP over NP over [Det Adj], and XP over NP over [Spr N].]
18. Grammar Rules
• Expose the resulting structure from the Det-Adj unification as
just the Det structure:
[Diagram: the same two fragments, with the Det-Adj structure labelled Border and the Spr-N structure labelled Frontier.]
19. Grammar Rules
• Expose the resulting structure from the Det-Adj unification as
just the Det structure:
[Diagram: the same two fragments, with Same links identifying the Border NP with the Frontier NP and the Det with the Spr.]
20. Grammar Rules
<!--Pre-head Adjective Modifier w/ Det: Shift Border-->
<constraint shouldFalsify="false">
Border(?ba, ?t0, ?w)^
Border(?bb, ?t0, ?w)^
Frontier(?fa, ?t1, ?w)^
Frontier(?fb, ?t1, ?w)^
Meets(?t0, ?t1, E, ?w)^
PartOf(?ba, ?bb, E, ?w)^
PartOf(?fa, ?fb, E, ?w)^
IsA(?ba, Determiner, E, ?w)^
IsA(?bb, Noun, E, ?w)^
IsA(?fa, Adjective, E, ?w)^
IsA(?fb, Noun, E, ?w)
==>
Same(?bb, ?fb, E, ?w)^
Border(?ba, ?t1, ?w)
</constraint>

<!--Subcategorization Rules: NP Specifier-->
<constraint shouldFalsify="false">
Border(?ba, ?t0, ?w)^
Border(?bb, ?t0, ?w)^
Frontier(?fa, ?t1, ?w)^
Frontier(?fb, ?t1, ?w)^
Meets(?t0, ?t1, E, ?w)^
PartOf(?ba, ?bb, E, ?w)^
PartOf(?fa, ?fb, E, ?w)^
IsA(?ba, Determiner, E, ?w)^
IsA(?bb, Noun, E, ?w)^
Specifier(?fa, ?spr, E, ?w)^
IsA(?spr, Determiner, E, ?w)^
IsA(?fb, Noun, E, ?w)^
Heard(?wue, E, ?w)^
IsA(?wue, WordUtteranceEvent, ?t1, ?w)
==>
Same(?ba, ?spr, E, ?w)^
Same(?bb, ?fb, E, ?w)^
Border(?wue, ?t1, ?w)^
_NPSPR(?ba, ?bb, ?fa, ?fb, E, ?w)
</constraint>
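The “Shift Border” constraint can be hand-translated into executable form. In this sketch (my own Python encoding, with the variable binding fixed rather than searched) the time arguments are reduced to `t0`/`t1` and the world argument is omitted:

```python
# Facts for "the angry": det1 is the Determiner border, adj1 the
# Adjective frontier, np1/np2 their Noun-typed parent nodes.
facts = {
    ("Border", "det1", "t0"), ("Border", "np1", "t0"),
    ("Frontier", "adj1", "t1"), ("Frontier", "np2", "t1"),
    ("Meets", "t0", "t1"),
    ("PartOf", "det1", "np1"), ("PartOf", "adj1", "np2"),
    ("IsA", "det1", "Determiner"), ("IsA", "np1", "Noun"),
    ("IsA", "adj1", "Adjective"), ("IsA", "np2", "Noun"),
}

def holds(*fact):
    return fact in facts

ba, bb, fa, fb, t0, t1 = "det1", "np1", "adj1", "np2", "t0", "t1"
# Antecedent of the Shift Border rule, conjunct by conjunct:
if (holds("Border", ba, t0) and holds("Border", bb, t0)
        and holds("Frontier", fa, t1) and holds("Frontier", fb, t1)
        and holds("Meets", t0, t1)
        and holds("PartOf", ba, bb) and holds("PartOf", fa, fb)
        and holds("IsA", ba, "Determiner") and holds("IsA", bb, "Noun")
        and holds("IsA", fa, "Adjective") and holds("IsA", fb, "Noun")):
    facts.add(("Same", bb, fb))    # identify the two Noun nodes
    facts.add(("Border", ba, t1))  # shift the determiner's border

print(("Same", "np1", "np2") in facts)  # True
```

A real engine would of course search over all bindings of the `?`-variables rather than test one fixed tuple.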
31. Grammar Rules
• Benefits of this feature-structure unification parse:
• Captures the intuition that when we hear a word, and posit its feature structure, we can infer the
existence of not only the word’s direct feature structure (usually generated by lexical rules) but also
the existence of additional structures and their head/dependency relationships, and some definition
of the values in the structure.
• Ambiguities (i.e. the head of an NP) are resolved from L-R through lazy definitions and unification of
under-defined structures to well-defined structures in terms of particular features.
• Posits no more additional structure in the parse tree than is necessary to reflect a parse,
whereas theories like HPSG posit a large number of structures in a branching tree in order to
preserve the recursivity of their grammar rules.
• However, we have shown that with feature structure unification, at least in theory, we can preserve
recursivity of many of the rules without requiring a left or right branching structure.
• All of the necessary structure to build a parse is known from the beginning.
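The “lazy definition” idea — unifying an under-defined structure with a well-defined one — can be illustrated with a small recursive unifier over nested feature structures (my own sketch, not the talk's implementation):

```python
def unify(f1, f2):
    """Recursively unify nested feature structures (dicts whose values
    are atoms or further dicts); return None on an atomic clash."""
    if f1 == f2:
        return f1
    if isinstance(f1, dict) and isinstance(f2, dict):
        out = dict(f1)
        for feat, val in f2.items():
            if feat in out:
                sub = unify(out[feat], val)
                if sub is None:
                    return None      # clash propagates upward
                out[feat] = sub
            else:
                out[feat] = val
        return out
    return None  # two different atoms cannot unify

# An under-defined NP (head unknown) unified with a well-defined one:
under = {"cat": "NP", "head": {}}
well  = {"cat": "NP", "head": {"IsA": "CommonNoun", "Phon": "dog"}}
print(unify(under, well))
```

The empty `head` structure simply absorbs the well-defined one, which is exactly how an L-R parse can commit to an NP before hearing its head noun.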
32. Grammar Rules!
• The future:
• Ungrammaticality: when objects aren’t where they are supposed to be, search for a likely head-
dependency relationship
• Missing arguments: “Car is big.”
• Extra words (rare to have full content words be considered extra, but occurs in natural language: “I saw the, um,
car.”)
• Dependents out of order: “Give the car me.”
• Dangling dependent: “
• Will require a good branch-and-bound system, one that only performs search when a reasonable
expectation/prediction is violated.
• Give a feature-structure unification account of garden path sentence
• Should be fairly natural given the L-R predictive nature of the parser
• Attach a semantic representation that generates word-sense based on head-dependency
relationships.
• Syntax should be closely tied to semantics, in that both serve to help compute each other to varying degrees.
• Examine discourse from a syntactic perspective, and syntax from a discourse perspective, and use
both to disambiguate simultaneously.
33. Notes on Theory (boring)
• By having a lexical representation that is closely tied to the syntax, a number of advantages
fall out:
• Parsimony: by allowing a lot of information to be loosely defined/undefined at the lexical
level, we do not need to posit additional lexical entries to cover all possible configurations of
a phrase’s arguments in the entry, nor do we need an excessive number of lexical rules to
generate these representations.
• Generativity: a word’s sense is at least in part generated by its relationship to its dependents
and head, and the semantic/syntactic types of these dependents/heads can in theory
compute a word’s sense on the fly (inspired by GL theory from Pustejovsky).
• Context embedding: by tying your theory of the lexicon closely to syntactic theory, you move
towards embedding your lexical representation in a cognitive system that is closely tied to the
way words are ACTUALLY used.
34. Lexical Mosaics
• Thus, we can see that the sense of words comes from a number of
different locations:
• Memory
• Syntactic context
• Pragmatic/Discourse factors
• It is the hope for future research to tie these together in an
organized way to give a theory on lexical representation that is tied
closely to these factors, in a computable and tractable manner.
• Early goals:
• Compute word senses from syntactic context + memory (very
difficult)
• Use syntactic context to disambiguate lexical ambiguity
• Use generative word sense to disambiguate syntactic ambiguity
• Simultaneously attempt to give a computational account of lexical
memory, syntactic parsing, and pragmatics/discourse.